## 1. Introduction
Arthur has the measles and stays in a closed environment with his
sister Mafalda. If Mafalda ends up contracting the measles herself
because of her staying in close contact with Arthur, there must be
something--the measles virus--which is transmitted from
Arthur to Mafalda in virtue of a relation--the relation of
staying in close contact with one another--instantiated by the
two siblings. Epistemologists have been devoting their attention to
the fact that *epistemic* properties--like being justified
or known--are often transmitted in a similar way. An example (not
discussed in this entry) is the transmission of knowledge via
testimony. A different but equally important phenomenon (which
occupies center stage in this entry) is the transmission of epistemic
justification across an inference or argument from one proposition to
another.
Consider proposition \(q\) constituting the conclusion of an argument
having proposition \(p\) as premise (where \(p\) can be a set or
conjunction of propositions). If \(p\) is justified for a subject
\(s\) by evidence \(e\), this justification may transmit to \(q\) if
\(s\) is aware of the entailment between \(p\) and \(q\). When this
happens, \(q\) is justified for \(s\) *in virtue of* her
justification for \(p\) based on \(e\) and her knowledge of the
inferential relation from \(p\) to \(q\). Consider this example from
Wright (2002):
*Toadstool*
\(\rE\_1\).
Three hours ago, Jones inadvertently consumed a large risotto of
*Boletus satanas*.
\(\rP\_1\).
Jones has absorbed a lethal quantity of the toxins that toadstool
contains.
Therefore:
\(\rQ\_1\).
Jones will shortly die.
Here \(s\) can deductively infer
\(\rQ\_1\)
from
\(\rP\_1\)
given background information.[1]
Suppose \(s\) acquires justification for \(\rP\_1\) by learning
\(\rE\_1\).
\(s\) will also acquire justification for \(\rQ\_1\) in virtue of her
knowledge of the inferential relation from \(\rP\_1\) to \(\rQ\_1\) and
her justification for
\(\rP\_1\).[2]
Thus, \(s\)'s justification for \(\rP\_1\) will transmit to
\(\rQ\_1\).
It is widely recognized that epistemic justification sometimes
*fails* to transmit across an inference or argument. In
interesting cases of transmission failure, \(s\) has justification for
believing \(p\) and knows the inferential link between \(p\) and
\(q\), but \(s\) has no justification for believing \(q\) *in
virtue of* her justification for \(p\) and her knowledge of the
inferential link. Here is an example from Wright (2003). Suppose
\(s\)'s background information entails that Jessica and Jocelyn
are indistinguishable twins. Consider this possible reasoning:
*Twins*
\(\rE\_2\).
This girl looks just like Jessica.
\(\rP\_2\).
This girl is actually Jessica.
Therefore:
\(\rQ\_2\).
This girl is not Jocelyn.
In *Twins*
\(\rE\_2\)
can give \(s\) justification for believing
\(\rP\_2\)
only if \(s\) has *independent* justification for believing
\(\rQ\_2\)
in the first instance. Suppose that \(s\) does have independent
justification for believing \(\rQ\_2\), and imagine that \(s\) learns
\(\rE\_2\). In this case \(s\) will acquire justification for believing
\(\rP\_2\) from \(\rE\_2\). But it is intuitive that \(s\) will acquire
no justification for \(\rQ\_2\) *in virtue of* her justification
for believing \(\rP\_2\) based on \(\rE\_2\) and her knowledge of the
inferential link between \(\rP\_2\) and \(\rQ\_2\). So, *Twins*
instantiates transmission failure when \(\rQ\_2\) is independently
justified.
An argument incapable of transmitting to its conclusion a
*specific* justification for its premise(s)--e.g., a
justification based on evidence \(e\)--may be able to transmit to
the same conclusion a different justification for its
premise(s)--e.g., one based on different evidence \(e^\*\).
Replace, for instance,
\(\rE\_2\)
in *Twins* with:
\(\rE\_2^\*\).
This girl's passport certifies she is Jessica.
Unlike \(\rE\_2\),
\(\rE\_2^\*\)
can provide \(s\) with justification for believing
\(\rP\_2\)
even if \(s\) has no independent justification for
\(\rQ\_2\).
Suppose then that \(s\) has no independent justification for
\(\rQ\_2\), and that she acquires \(\rE\_2^\*\). It is intuitive that
\(s\) will acquire justification from \(\rE\_2^\*\) for \(\rP\_2\) that
does transmit to \(\rQ\_2\). Now the inference from \(\rP\_2\) to
\(\rQ\_2\) instantiates epistemic transmission.
Although many of the epistemologists participating in the debate on
epistemic transmission and transmission failure speak of transmission
of *warrant*, rather than *justification*, almost all of
them use the term 'warrant' to refer to some kind of
epistemic justification (in
Sect. 3.3
we will, however, consider an account of warrant transmission failure
in which 'warrant' is interpreted differently). Most
epistemologists investigating epistemic transmission and transmission
failure--e.g., Wright (2011, 2007, 2004, 2003, 2002 and 1985),
Davies (2003, 2000 and 1998), Dretske (2005), Pryor (2004), Moretti
(2012) and Moretti & Piazza (2013)--broadly identify the
epistemic property capable of being transmitted with
*propositional*
justification.[3]
Only a few authors explicitly focus on transmission of
*doxastic* justification--e.g., Silins (2005), Davies
(2009) and Tucker (2010a and 2010b
[OIR]).
In this entry we follow the majority in discussing transmission and
transmission failure of justification as phenomena primarily
pertaining to propositional justification. (See however the supplement
on
Transmission of Propositional Justification *versus* Transmission of Doxastic Justification.)
Epistemologists typically concentrate on transmission of
(propositional or doxastic) justification across *deductively
valid* (given background information) arguments. The fact that
justification can transmit across deduction is crucial for our
cognitive processes because it makes the advancement of
knowledge--or justified belief--through deductive reasoning
possible. Suppose evidence \(e\) gives you justification for believing
hypothesis \(p\), and you know that \(p\) entails another proposition
\(q\) that you haven't directly checked. If the justification
you have for believing \(p\) transmits to its unchecked prediction
\(q\) through the entailment, you acquire justification for believing
\(q\) too.
Epistemologists may analyze epistemic transmission across
*inductive* (or *ampliative*) inferences too. Yet this
topic has received much less attention in the literature on epistemic
transmission. (See however interesting remarks in Tucker 2010a.)
In the remaining part of this entry we will focus on transmission and
transmission failure of propositional justification across deductive
inference. Unless differently specified, by 'epistemic
justification' or 'justification' we always mean
'propositional justification'.
## 2. Epistemic Transmission
As noted above, \(s\)'s justification for \(p\) based on evidence \(e\)
transmits across entailment from \(p\) to \(p\)'s consequence
\(q\) whenever \(q\) is justified for \(s\) in virtue of \(s\)'s
justification for \(p\) based on \(e\) and her knowledge of
\(q\)'s deducibility from \(p\). This initial characterization
can be distilled into three conditions individually necessary and
jointly sufficient for epistemic transmission:
>
>
> \(s\)'s justification for \(p\) based on \(e\) transmits to
> \(p\)'s logical consequence \(q\) if and only if:
>
>
>
> (C1)
> \(s\) has justification for believing \(p\) based on \(e\),
> (C2)
> \(s\) knows that \(q\) is deducible from \(p\),
> (C3)
> \(s\) has justification for believing \(q\) *in virtue of*
> the satisfaction of
> (C1)
> and
> (C2).
>
>
Condition
(C3)
is crucial for distinguishing *transmission* of justification
across (known) entailment from *closure* of justification
across (known) entailment. Saying that \(s\)'s justification for
believing \(p\) is closed under \(p\)'s (known) entailment to
\(q\) is saying that:
>
>
> If
>
>
>
> (C1)
> \(s\) has justification for believing \(p\) and
> (C2)
> \(s\) knows that \(p\) entails \(q\), then
> (C3\(^{\textrm{c}}\))
> \(s\) has justification for believing \(q\).
>
>
One can coherently accept the above principle--known as the
*principle of epistemic closure*--but
deny a corresponding *principle of epistemic transmission*,
cashed out in terms of the conditions outlined above:
>
>
> If
>
>
>
> (C1)
> \(s\) has justification for believing \(p\) and
> (C2)
> \(s\) knows that \(p\) entails \(q\), then
> (C3)
> \(s\) has justification for believing \(q\) *in virtue of*
> the satisfaction of (C1) and (C2).
>
>
The *principle of epistemic closure* just requires that when
\(s\) has justification for believing \(p\) and knows that \(q\) is
deducible from \(p, s\) have justification for believing \(q\). In
addition, the *principle of epistemic transmission* requires
this justification to be had in virtue of her having justification for
\(p\) and knowing that \(p\) entails \(q\) (for further discussion see
Tucker 2010b
[OIR]).
Another important distinction is the one between the *principle of
epistemic transmission* and a different principle of transmission
discussed by Pritchard (2012a) under the label of *evidential
transmission principle*. According to it,
>
>
> If \(s\) perceptually knows that \(p\) in virtue of evidence set
> \(e\), and \(s\) competently deduces \(q\) from \(p\) (thereby coming
> to believe that \(q\) while retaining her knowledge that \(p)\), then
> \(s\) knows that \(q\), where that knowledge is sufficiently supported
> by \(e\). (Pritchard 2012a: 75)
>
>
>
Pritchard's principle, to begin with, concerns (perceptual)
knowledge and not propositional justification. Moreover, it deserves
emphasis that there are inferences that apparently satisfy
Pritchard's principle but fail to satisfy the *principle of
epistemic transmission*. Consider for instance the following
triad:
*Zebra*
\(\rE\_3\).
The animals in the pen look like zebras.
\(\rP\_3\).
The animals in the pen are zebras.
Therefore:
\(\rQ\_3\).
The animals in the pen are not mules cleverly disguised to look
like zebras.
According to Pritchard, the evidence set of a normal zoo-goer standing
before the zebra enclosure typically includes, above and beyond
\(\rE\_3\),
the background proposition that
\((\rE\_B)\)
to disguise mules like zebras would be very costly and
time-consuming, would bring no comparable benefit and would be
relatively easy to unmask.
Suppose \(s\) knows
\(\rP\_3\)
on the basis of her evidence set, and competently deduces
\(\rQ\_3\)
from \(\rP\_3\) thereby coming to believe \(\rQ\_3\). Pritchard's
*evidential transmission principle* apparently accommodates the
fact that \(s\) thereby comes to know \(\rQ\_3\). For her evidence
set--because of its inclusion of
\(\rE\_B\)--sufficiently
supports \(\rQ\_3\). But the *principle of epistemic
transmission* is not satisfied. Although
(C1\(\_{\textit{Zebra}}\))
\(s\) has justification for \(\rP\_3\) and
(C2\(\_{\textit{Zebra}}\))
she knows that \(\rP\_3\) entails \(\rQ\_3\),
she has justification for believing \(\rQ\_3\) in virtue of, not
(C1\(\_{\textit{Zebra}}\)) and (C2\(\_{\textit{Zebra}}\)),
but the independent
support lent to it by \(\rE\_B\).
Condition
(C3)
suffices to distinguish the notion of epistemic transmission from
different notions in the neighborhood. However, as it stands, it is
still unsuitable for the purpose of completely characterizing epistemic
transmission. The problem is that there are cases in which it is
intuitive that the justification for \(p\) based on \(e\) transmits to
\(q\) even if (C3), strictly speaking, is *not* satisfied.
These cases can be described as situations in which *only part*
of the justification that \(s\) has for \(q\) is based on her
justification for \(p\) and her knowledge of the entailment from \(p\)
to \(q\). Consider this example. Suppose you are traveling on a train
heading to Edinburgh. At 16:00, as you enter Newcastle upon Tyne, you
spot the train station sign. Then, at 16:05, the ticket controller
tells you that you are not yet in Scotland. Now consider the following
reasoning:
*Journey*
\(\rE\_4\).
At 16:05 the ticket controller tells you that you are not yet in
Scotland.
\(\rP\_4\).
You are not yet in Scotland.
Therefore:
\(\rQ\_4\).
You are not yet in Edinburgh.
As you learn
\(\rE\_4\),
given suitable background information, you get justification for
\(\rP\_4\);
moreover, to the extent to which you know that not being in Scotland
is sufficient for not being in Edinburgh, you also acquire via
transmission justification for
\(\rQ\_4\).
This additional justification is transmitted irrespective of the fact
that you *already* have justification for \(\rQ\_4\), acquired
by spotting the train station sign 'Newcastle upon Tyne'.
If
(C3)
were read as requiring that the *whole* of the justification
available for a proposition \(q\) were had in virtue of the
satisfaction of
(C1)
and
(C2),
cases like these would become invisible.
A way to deal with this complication is to amend the tripartite
analysis of epistemic transmission by turning
(C3)
into (C3\(^{+}\)), saying that *at least part* of the
justification that \(s\) has for \(q\) has been achieved by her in
virtue of the satisfaction of
(C1)
and
(C2).
Let us say that a justification for \(q\) is an *additional*
justification for \(q\) whenever it is not a *first-time*
justification for it. Condition (C3\(^{+}\)) can be reformulated as
this disjunction:
(C3\(^{+}\))
\(s\) has *first-time* justification for \(q\) in virtue
of the satisfaction of
(C1)
and
(C2),
or
\(s\) has an *additional* justification for \(q\) in virtue
of the satisfaction of
(C1)
and
(C2).
Much of the extant literature on epistemic transmission concentrates
on examples of transmission of first-time justification. These
examples include
*Toadstool*.
We have seen, however, that what intuitively transmits in other cases
is simply additional justification. Epistemologists have identified at
least two--possibly overlapping--kinds of additional
justification (cf. Moretti & Piazza 2013).
One is what can be called *independent* justification because
it appears--intuitively--independent of the original
justification for \(q\). This notion of justification can probably be
sharpened by appealing to counterfactual analysis. Suppose
\(s\)'s justification for \(p\) based on \(e\) transmits to
\(p\)'s logical consequence \(q\). This justification
transmitted to \(q\) is an additional *independent*
justification just in case these three conditions are met:
(IN1)
\(s\) was already justified in believing \(q\) before acquiring
\(e\),
(IN2)
as \(s\) acquires \(e, s\) is still justified in believing \(q\),
and
(IN3)
if \(s\) had not been antecedently justified in believing \(q\),
upon learning \(e, s\) would have acquired via transmission a
first-time justification for believing \(q\).
*Journey*
instantiates transmission of justification by meeting
(IN1),
(IN2), and
(IN3).
Thus, *Journey* exemplifies a case of transmission of
additional independent justification.
Consider now that justification for a proposition or belief normally
comes in degrees of strength. The second kind of additional
justification can be characterized as *quantitatively
strengthening* justification. Suppose again that \(s\)'s
justification for \(p\) based on \(e\) transmits to \(p\)'s
consequence \(q\). This justification transmitted to \(q\) is an
additional *quantitatively strengthening* justification just in
case these two conditions are satisfied:
(STR1)
\(s\) was already justified in believing \(q\) before acquiring
\(e\), and
(STR2)
as \(s\) acquires \(e\), the strength of \(s\)'s
justification for believing \(q\) increases.
Here is an example from Moretti (2012). Your background information
says that only one ticket out of 5,000 of a fair lottery has been
bought by a person born in 1970, and that all other tickets have been
bought by older or younger people. Consider now this reasoning:
*Lottery*
\(\rE\_5\).
The lottery winner's passport certifies she was born in
1980.
\(\rP\_5\).
The lottery's winner was born in 1980.
Therefore:
\(\rQ\_5\).
The lottery's winner was not born in 1970.
Given its high chance,
\(\rQ\_5\)
is already justified by your background information alone. It is
intuitive that as you learn
\(\rE\_5\),
you acquire an additional quantitatively strengthening justification
for \(\rQ\_5\) via transmission. For your justification for
\(\rP\_5\)
transmitted to \(\rQ\_5\) is intuitively quantitatively stronger than
your initial justification for \(\rQ\_5\).
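The quantitative comparison at work in *Lottery* can be sketched with a toy probabilistic calculation. This is only an illustrative model: it assumes a probabilistic reading of justification strength and a hypothetical reliability figure for passports, neither of which is part of Moretti's example.

```python
# Toy probabilistic sketch of the *Lottery* case.
# The passport-reliability figure is a hypothetical assumption.

TICKETS = 5000

# Background: exactly one of the 5,000 tickets was bought by someone
# born in 1970, so before any passport evidence
# P(Q5) = P(winner not born in 1970):
prior_q5 = (TICKETS - 1) / TICKETS      # 0.9998

# Hypothetical reliability of a passport's date-of-birth entry:
# P(P5 | E5) = P(winner really born in 1980 | passport says 1980).
p_p5_given_e5 = 0.99999

# P5 ("born in 1980") entails Q5 ("not born in 1970"), so
# P(Q5 | E5) >= P(P5 | E5).
posterior_q5_lower_bound = p_p5_given_e5

print(f"prior P(Q5)            = {prior_q5:.5f}")
print(f"P(Q5 | E5) is at least = {posterior_q5_lower_bound:.5f}")
assert posterior_q5_lower_bound > prior_q5  # strengthened via transmission
```

On these (hypothetical) numbers, the justification transmitted from \(\rP\_5\) exceeds the antecedent justification \(\rQ\_5\) enjoyed on background information alone, which is what *quantitative strengthening* requires.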
In many cases when \(q\) receives via transmission from \(p\) an
additional independent justification, \(q\) also receives a
quantitatively strengthening justification. This is not always true
though. For there seem to be cases in which an additional independent
justification transmitted from \(p\) to \(q\) intuitively
*lessens* an antecedent justification for \(q\) (cf. Wright
2011).
An interesting question is whether it is true that as \(q\) gets via
transmission from \(p\) an additional quantitatively strengthening
justification, \(q\) also gets an independent justification. This
seems true in some cases--for instance in
*Lottery*.
It is controversial, however, if it is the case that
*whenever* \(q\) gets via transmission a quantitatively
strengthening justification, \(q\) also gets an independent
justification. Wright (2011) and Moretti & Piazza (2013) describe
two examples in which a subject allegedly receives via transmission a
quantitatively strengthening but not independent additional
justification.
To summarize, *additional* justification comes in two species
at least: *independent* justification and *quantitatively
strengthening* justification. This enables us to lay down three
specifications of the general condition
(C3\(^{+}\))
necessary for justification transmission, each of which represents a
condition necessary for the transmission of one particular type of
justification. Let's call these specifications
(C3\(^{\textrm{ft}}\)),
(C3\(^{\textrm{ai}}\)), and
(C3\(^{\textrm{aqs}}\)).
(C3\(^{\textrm{ft}}\))
\(s\) has *first-time* justification for \(q\) in virtue
of the satisfaction of
(C1)
and
(C2).
(C3\(^{\textrm{ai}}\))
\(s\) has *additional independent* justification for \(q\)
in virtue of the satisfaction of
(C1)
and
(C2).
(C3\(^{\textrm{aqs}}\))
\(s\) has *additional quantitatively strengthening*
justification for \(q\) in virtue of the satisfaction of
(C1)
and
(C2).
Transmission of first-time justification makes the advancement of
justified belief through deductive reasoning possible. Yet the
acquisition of first-time justification for \(q\) isn't the sole
possible improvement of one's epistemic position relative to
\(q\) that one could expect from transmission of justification.
Moretti & Piazza (2013), for instance, describe a variety of
different ways in which \(s\)'s epistemic standing toward a
proposition can be improved upon acquiring an additional justification
for it via transmission.
We conclude this section with a note about *transmissivity as
resolving doubts*. Let us say that \(s\) *doubts* \(q\)
just in case \(s\) either *disbelieves* or *withholds*
belief about \(q\), namely, refrains from both believing and
disbelieving \(q\) *after deciding about* \(q\). \(s\)'s
doubting \(q\) should be distinguished from \(s\)'s being
*open minded* about \(q\) and from \(s\)'s having
*no* doxastic attitude whatsoever towards \(q\) (cf. Tucker
2010a). Let us say that a deductively valid argument from \(p\) to
\(q\) is able to resolve doubt about its conclusion just in case it is
possible for \(s\) to be rationally moved from doubting \(q\) to
believing \(q\) solely in virtue of grasping the argument from \(p\)
to \(q\) and the evidence offered for \(p\).
Some authors (e.g., Davies 1998, 2003, 2004, 2009; McLaughlin 2000;
Wright 2002, 2003, 2007) have proposed that we should conceive of an
argument's epistemic transmissivity in a way that is very
closely related or even identical to the argument's ability to
resolve doubt about its conclusion. Whereas some of these authors have
eventually conceded that epistemic transmissivity cannot be defined as
ability to resolve doubt (e.g., Wright 2011), others have attempted to
articulate this view in full (see mainly Davies 2003, 2004, 2009).
However, there is nowadays wide agreement in the literature that the
property of being a transmissive argument *doesn't*
coincide with the one of being an argument able to resolve doubt about
its conclusion (see for example, Beebee 2001; Coliva 2010; Markie
2005; Pryor 2004; Bergmann 2004, 2006; White 2006; Silins 2005; Tucker
2010a). A good reason to think so is that whereas the property of
being transmissive appears to be a genuinely *epistemic*
property of an argument, the one of resolving doubt seems to be only a
*dialectical* feature of it, which varies with the audience
whose doubt the argument is used to address.
## 3. Failure of Epistemic Transmission
### 3.1 Trivial Transmission Failure *versus* Non-Transmissivity
It is acknowledged that justification sometimes fails to transmit
across known entailment (the acknowledgment dates back at least to
Wright 1985). Moreover, it is no overstatement to say that the
literature has investigated transmission failure more extensively than
transmission of justification. As we have seen, justification based on
\(e\) transmits from \(p\) to \(q\) across the entailment if and only
if
(C1)
\(s\) has justification for \(p\) based on \(e\),
(C2)
\(s\) knows that \(q\) is deducible from \(p\), and
(C3\(^{+}\))
at least part of \(s\)'s justification for \(q\) is based
on the satisfaction of (C1) and (C2).
It follows from this characterization that *no* justification
based on *e transmits* from \(p\) to \(q\) across the
entailment if
(C1),
(C2), or
(C3\(^{+}\))
are not satisfied. These are cases of transmission failure.
Some philosophers have argued that knowledge and justification are not
always *closed* under competent (single-premise or
multiple-premise) deduction. In the recent literature, the explanation
of closure failure has often been essayed in terms of
*agglomeration of epistemic risk*. This type of explanation is
less controversial when applied to multi-premise deduction. An example
of it concerning justification is the deduction from a high number
\(n\) of premises, each of which specifies that a different ticket in
a fair lottery won't win, which are individually justified even
though each premise is *somewhat* risky, to the conclusion that
none of these \(n\) tickets will win, which is *too risky* to
be justified. (For more controversial cases in which knowledge or
justification closure would fail across single-premise deduction
because of risk accumulation, see for example Lasonen-Aarnio 2008 and
Schechter 2013; for responses, see for instance Smith 2013 and 2016.)
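The agglomeration of risk in the multi-premise lottery deduction can be illustrated numerically. The model below assumes, purely for the sake of the sketch, that risk is read probabilistically and that justification requires surpassing a fixed probability threshold; the numbers are hypothetical.

```python
# Numerical sketch of risk agglomeration in a fair N-ticket lottery.
# "Risk" is read probabilistically here -- a simplifying assumption --
# and the justification threshold is illustrative.

N = 1000          # tickets in the lottery (exactly one wins)
n = 950           # premises: "ticket 1 loses", ..., "ticket n loses"
threshold = 0.99  # hypothetical threshold for justification

# Each premise is individually very probable:
p_premise = (N - 1) / N      # 0.999 -- above the threshold

# But "none of tickets 1..n wins" holds only if the winner is among
# the remaining N - n tickets:
p_conclusion = (N - n) / N   # 0.05 -- far too risky

print(f"P(each premise) = {p_premise:.3f}  justified: {p_premise >= threshold}")
print(f"P(conclusion)   = {p_conclusion:.3f}  justified: {p_conclusion >= threshold}")
```

Each premise clears the threshold while their joint consequence falls far below it, which is how small individual risks agglomerate across a multi-premise deduction.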
These cases of failure of epistemic closure can be taken to also
involve failure of justification *transmission*. For instance,
it can be argued that even if
(C1\(^{\*}\))
\(s\) has justification for each of the \(n\) premises stating
that a ticket won't win, and
(C2\(^{\*}\))
\(s\) knows that it follows from these premises that none of the
\(n\) tickets will win,
\(s\) has no justification--and, therefore, no justification in
virtue of the satisfaction of
(C1\(^{\*}\))
and
(C2\(^{\*}\))--for
believing that none of the \(n\) tickets will win. In these cases,
transmission failure appears to be a consequence of closure failure,
and it is therefore natural to treat the former simply as a
symptomatic manifestation of the latter. For this reason, we will
follow the current literature on transmission failure in not
discussing these cases. (For an analysis of closure failure, see the
entry on
epistemic closure,
especially section 6.)
The current literature treats transmission failure as a self-standing
phenomenon, in the sense that it focuses on cases in which
transmission failure is not imputed--and thus not considered
reducible--to an underlying failure of closure. For the rest of
this entry, we shall follow suit and investigate transmission failure
in this pure or genuine form.
The most trivial cases of transmission failure are such that
(C3\(^{+}\))
is unsatisfied because
(C1)
or
(C2)
are unsatisfied, but it is also true that (C3\(^{+}\))
would have been satisfied if both
(C1)
and
(C2)
had been satisfied (cf. Tucker 2010b
[OIR]).
It deserves emphasis that these cases involve arguments that
are not intrinsically unable to transmit justification
depending on evidence \(e\): had the epistemic circumstances been
congenial to the satisfaction of
(C1)
and
(C2),
these arguments would have transmitted the justification based on
\(e\) from \(p\) to \(q\). As we could put the point, these arguments
are *transmissive* of the justification depending on
\(e\).[4]
Cases of transmission failure of a more interesting variety are those
in which, regardless of the validity of closure of justification,
(C3\(^{+}\))
isn't satisfied because it could not have been satisfied, no
matter whether or not
(C1)
and
(C2)
are satisfied. These cases concern arguments
*non-transmissive* of justification depending on a given
evidence, i.e., arguments incapable of transmitting justification
depending on that evidence under any epistemic circumstance. An
example in point is
*Twins*.
Suppose first that \(s\) has independent justification for
\(\rQ\_2\).
In those circumstances
(C1\(\_{\textit{Twins}}\))
\(s\) has justification from
\(\rE\_2\)
for
\(\rP\_2\).
Furthermore
(C2\(\_{\textit{Twins}}\))
\(s\) does know that
\(\rP\_2\)
entails
\(\rQ\_2\).
However, \(s\) has no justification for \(\rQ\_2\) in virtue of
the satisfaction of
(C1\(\_{\textit{Twins}}\))
and
(C2\(\_{\textit{Twins}}\)).
So
(C3\(^{+}\))
is not met. Suppose now that \(s\) has no independent justification
for \(\rQ\_2\). Then, it isn't the case that
(C1\(\_{\textit{Twins}}\))
\(s\) has justification from
\(\rE\_2\)
for
\(\rP\_2\).
So,
(C3\(^{+}\))
is not met either. Since there is no other possibility--either
\(s\) has independent justification for
\(\rQ\_2\)
or she doesn't--it cannot be the case that \(s\)'s
belief that \(\rQ\_2\) is justified *in virtue of* her
justification for
\(\rP\_2\)
from
\(\rE\_2\)
and her knowledge that \(\rP\_2\) entails \(\rQ\_2\).
Note that none of the cases of transmission failure of justification
considered above entails failure of epistemic closure. For in none of
these cases does \(s\) have justification for believing \(p\), know that
\(p\) entails \(q\), and yet fail to have justification for
believing \(q\).
Unsurprisingly, the epistemologists contributing to the literature on
transmission failure have principally devoted their attention to cases
involving non-transmissive arguments. Epistemologists have endeavored
to identify conditions whose satisfaction suffices to make an argument
*non-transmissive* of justification based on a given evidence.
The next section reviews the most influential of these proposals.
### 3.2 Varieties of Non-Transmissive Arguments
Some non-transmissive arguments *explicitly* feature their
conclusion among their premises. Suppose \(p\) is justified for \(s\)
by \(e\), and consider the premise-circular argument that deduces
\(p\) from itself. This argument cannot satisfy
(C3\(^{+}\))
even if
(C1)
and
(C2)
are satisfied. The reason is that no part of \(s\)'s
justification for \(p\) can be acquired *in virtue of*, among
other things, the satisfaction of
(C2).
For if
(C1)
is satisfied, \(p\) is justified for \(s\) by \(e\)
*independently* of \(s\)'s knowledge of \(p\)'s
self-entailment, thus not in virtue of
(C2).
Non-transmissive arguments are not necessarily premise-circular. A
different source of non-transmissivity instantiating a subtler form of
circularity is the dependence of evidential relations on background or
collateral information. This type of dependence is a rather familiar
phenomenon: the boiling of a kettle gives one justification for
believing that the temperature of the liquid inside is approximately
100°C *only if* one knows that the liquid is water and that
atmospheric pressure is that at sea level. It doesn't if
one knows that the kettle is on top of a high mountain, or that the
liquid is, say, sulfuric acid.
Wright argues, for instance, that the following epistemic set-up,
which he calls the *information-dependence template*, suffices
for an argument's inability to transmit justification.
>
>
> A body of evidence, \(e\), is an information-dependent justification
> for a particular proposition \(p\) if whether \(e\) justifies \(p\)
> depends on what one has by way of collateral information, \(i\).
> [...] Such a relationship is always liable to generate examples
> of transmission failure: it will do so just when the particular \(e,
> p\), and \(i\) have the feature that needed elements of the relevant
> \(i\) are themselves entailed by \(p\) (together perhaps with other
> warranted premises). In that case, any warrant supplied by \(e\) for
> \(p\) will not be transmissible to those elements of \(i\). (Wright
> 2003: 59, edited.)
>
>
>
The claim that \(s\)'s justification from \(e\) for \(p\)
requires \(s\) to have background information \(i\) is customarily
understood as equivalent (in this context) to the claim that
\(s\)'s justification from \(e\) for \(p\) depends on some type
of independent justification for believing or accepting
\(i\).[5]
Instantiating the *information-dependence template* appears
sufficient for an argument's inability to transmit
*first-time* justification. Consider again this triad:
*Twins*
\(\rE\_2\).
This girl looks just like Jessica.
\(\rP\_2\).
This girl is actually Jessica.
Therefore:
\(\rQ\_2\).
This girl is not Jocelyn.
Suppose \(s\)'s background information entails that Jessica and
Jocelyn are indistinguishable twins. Imagine that \(s\) acquires
\(\rE\_2\).
It is intuitive that \(\rE\_2\) could justify
\(\rP\_2\)
for \(s\) only if \(s\) had *independent* justification for
believing
\(\rQ\_2\)
in the first instance. Thus, *Twins* instantiates the
*information-dependence template*. Note that \(s\) acquires
first-time justification for \(\rQ\_2\) in *Twins* only if
(C1\(\_{\textit{Twins}}\))
\(\rE\_2\)
gives her justification for
\(\rP\_2\),
(C2\(\_{\textit{Twins}}\))
\(s\) knows that
\(\rP\_2\)
entails
\(\rQ\_2\)
and
(C3\(^{\textrm{ft}}\_{\textit{Twins}}\))
\(s\) acquires first-time justification for believing
\(\rQ\_2\)
*in virtue of*
(C1\(\_{\textit{Twins}}\))
and
(C2\(\_{\textit{Twins}}\)).
The satisfaction of
(C3\(^{\textrm{ft}}\_{\textit{Twins}}\))
requires \(s\)'s justification for believing
\(\rQ\_2\)
*not* to be independent of \(s\)'s justification from
\(\rE\_2\)
for
\(\rP\_2\).
However, if
(C1\(\_{\textit{Twins}}\))
is true, the *information-dependence template* requires
\(s\) to have justification for believing \(\rQ\_2\)
*independently* of \(s\)'s justification from \(\rE\_2\)
for \(\rP\_2\). Thus, when the *information-dependence template*
is instantiated,
(C1\(\_{\textit{Twins}}\))
and (C3\(^{\textrm{ft}}\_{\textit{Twins}}\)) cannot be satisfied at
once. In general, no argument satisfying this template with respect to a
given body of evidence will be transmissive of first-time justification
based on that evidence.
One may wonder whether a deductive argument from \(p\) to \(q\)
instantiating the *information-dependence template* is unable
to transmit *additional* justification for \(q\). The answer
seems affirmative when the additional justification is
*independent* justification. Suppose the
*information-dependence template* is instantiated such that
\(s\)'s justification for \(p\) from \(e\) depends on
\(s\)'s independent justification for \(q\). Note that \(s\)
acquires additional independent justification for \(q\) only if
(C1)
\(e\) gives her justification for \(p\),
(C2)
\(s\) knows that \(p\) entails \(q\), and
(C3\(^{\textrm{ai}}\))
\(s\) acquires an additional independent justification in virtue
of (C1) and (C2).
This additional justification is independent of \(s\)'s
antecedent justification for \(q\) only if, in particular, condition
(IN3)
of the characterization of additional independent justification is
satisfied. (IN3) says that if \(s\) had not been antecedently
justified in believing \(q\), upon learning \(e, s\) would have
acquired via transmission a first-time justification for believing
\(q\). (IN3) entails that if \(q\) were not antecedently justified for
\(s, e\) would still justify \(p\) for \(s\). Hence, the satisfaction
of (IN3) is incompatible with the instantiation of the
*information-dependence template*, which entails that if \(s\)
had no antecedent justification for \(q, e\) would *not*
justify \(p\) for \(s\). The instantiation of the
*information-dependence template* then precludes transmission
of independent justification.
As suggested by Wright (2007), the instantiation of the
*information-dependence template* might also appear sufficient
for an argument's inability to transmit *additional
quantitatively strengthening* justification. This claim might
appear intuitively plausible: perhaps it is reasonable that if the
justification from \(e\) for \(p\) depends on independent
justification for another proposition \(q\), the strength of the
justification available for \(q\) sets an upper bound to the strength
of the justification possibly supplied by \(e\) for \(p\). However,
the examples by Wright (2011) and Moretti & Piazza (2013)
mentioned in
Sect. 2
appear to undermine this intuition. For they involve arguments that
instantiate the *information-dependence template* yet seem to
transmit quantitatively strengthening justification to their
conclusions. (Alspector-Kelly 2015 suggests that these arguments are a
symptom that Wright's explanation of non-transmissivity is
overall inadequate.)
Some authors have attempted Bayesian formalizations of the
*information-dependence template* (see the supplement on
Bayesian Formalizations of the *Information-Dependence Template*).
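The intuition behind such formalizations can be illustrated with a toy numerical model, keyed to the *Zebra* case. The model and all its numbers are our own construction for illustration, not any specific proposal from the supplement: it shows how the degree to which \(e\) ("the animals look like zebras") confirms \(p\) ("they are zebras") depends on the prior probability of \(q\) ("they are not cleverly disguised mules").

```python
# Toy Bayesian illustration of information-dependence (our construction).
# Three exclusive hypotheses: zebras (p and q true), disguised mules
# (p and q false), other animals (p false, q true). Evidence e is
# "the animals look like zebras": certain given zebras or disguised
# mules, unlikely otherwise. All priors are stipulated for illustration.

def posterior_p(prior_zebra, prior_mule, prior_other, lik_other=0.05):
    """P(p | e), with P(e|zebra) = P(e|mule) = 1 and P(e|other) = lik_other."""
    p_e = prior_zebra + prior_mule + prior_other * lik_other
    return prior_zebra / p_e

# Scenario A: disguised mules are antecedently very improbable
# (high prior for q), so e strongly supports p.
high_q = posterior_p(prior_zebra=0.2, prior_mule=0.01, prior_other=0.79)

# Scenario B: disguised mules are an antecedently live possibility
# (low prior for q); the very same evidence e supports p only weakly.
low_q = posterior_p(prior_zebra=0.2, prior_mule=0.5, prior_other=0.3)

print(round(high_q, 2), round(low_q, 2))  # prints: 0.8 0.28
```

On this toy model, how much support \(e\) lends to \(p\) is bounded by the prior credibility of \(q\), which is one way of cashing out the idea that the justification from \(e\) for \(p\) depends on independent support for \(q\).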
Furthermore, Coliva (2012) has proposed a variant of the same
template. According to the *information-dependence
template*, \(s\)'s justification from \(e\) for \(p\) fails
to transmit to \(p\)'s consequence \(q\) whenever \(s\)'s
possessing that justification for \(p\) requires \(s\)'s
independent justification for \(q\). According to Coliva's
variant, \(s\)'s justification from \(e\) for \(p\) fails to
transmit to \(q\) whenever \(s\)'s possessing the latter
justification for \(p\) requires \(s\)'s independent
*assumption* of \(q\), *whether this assumption is justified
or not*. Pryor (2012: §VII) can be read as pressing
objections against Coliva's template, which are addressed in
Coliva (2012).
We have seen that non-transmissivity may depend on premise-circularity
or reliance on collateral information. There is at least a third
possibility: an argument can be non-transmissive of the justification
for its premise(s) based on given evidence because the evidence
justifies *directly* the conclusion--i.e.,
*independently* of the argument itself (cf. Davies 2009). In
this case the argument instantiates *indirectness*, for
\(s\)'s going through the argument would result in nothing but
an indirect (and unneeded) detour for justifying its conclusion. If
\(e\) directly justifies \(q\), no part of the justification for \(q\)
is based on, among other things, \(s\)'s knowledge of the
inferential relation between \(p\) and \(q\). So
(C3\(^{+}\))
is unfulfilled whether or not
(C1)
and
(C2)
are fulfilled. Here is an example from Wright (2002):
*Soccer*
\(\rE\_6\).
Jones has just kicked the ball between the white posts.
\(\rP\_6\).
Jones has just scored a goal.
Therefore:
\(\rQ\_6\).
A game of soccer is taking place.
Suppose \(s\) learns evidence
\(\rE\_6\).
On ordinary background information,
(C1\(\_{\textit{Socc}}\))
\(\rE\_6\)
justifies
\(\rP\_6\)
for \(s\),
(C2\(\_{\textit{Socc}}\))
\(s\) knows that
\(\rP\_6\)
entails
\(\rQ\_6\),
and
\(\rE\_6\)
also justifies
\(\rQ\_6\)
for \(s\).
It seems false, however, that
\(\rQ\_6\)
is justified for \(s\) *in virtue of* the satisfaction of
(C1\(\_{\textit{Socc}}\))
and
(C2\(\_{\textit{Socc}}\)).
Quite the reverse, \(\rQ\_6\) seems justified for \(s\) by
\(\rE\_6\)
independently of the satisfaction of (C1\(\_{\textit{Socc}}\)) and
(C2\(\_{\textit{Socc}}\)). This is so because in the (imagined) actual
scenario it seems true that \(s\) would still possess a justification
for \(\rQ\_6\) based on \(\rE\_6\) even if \(\rE\_6\) did *not*
justify
\(\rP\_6\)
for \(s\). In fact suppose \(s\) had noticed that the referee's
assistant raised her flag to signal Jones's off-side. Against
this altered background information, \(\rE\_6\) would no longer justify
\(\rP\_6\) for \(s\) but it would still justify \(\rQ\_6\) for \(s\).
Thus, *Soccer* is non-transmissive of the justification
depending on \(\rE\_6\). In general, no argument instantiating
*indirectness* in relation to some evidence is transmissive of
justification based on that evidence.
The *information-dependence template* and *indirectness*
are diagnostics for a deductive argument's inability to transmit
the justification for its premise(s) \(p\) based on evidence \(e\) to
its conclusion \(q\), where \(e\) is conceived of as a
*believed* proposition capable of supplying
*inferential* and (typically) *fallible* justification
for \(p\). (Note that even though \(e\) is conceived of as a belief,
the collateral information \(i\), which is part of the
*template*, doesn't need to be believed on some views.)
The justification for a proposition \(p\) might come in other forms.
For instance, it has been proposed that a proposition \(p\) about the
perceivable environment around the subject \(s\) can be justified by
the fact that \(s\) *sees* that \(p\) (where
'*sees*' is taken to be factive). In this case,
\(s\) is claimed to attain a kind of *non-inferential* and
*infallible* justification for believing \(p\). This view has
been explored by *epistemological disjunctivists* (see for
instance McDowell 1982, 1994 and 2008, and Pritchard 2007, 2008, 2009,
2011 and 2012a).
One might find it intuitively plausible that when \(s\) sees that \(p,
s\) attains non-inferential and infallible justification for believing
\(p\) that doesn't rely on \(s\)'s background information.
Since this justification for believing \(p\) would be *a
fortiori* unconstrained by \(s\)'s independent justification
for believing any consequence \(q\) of \(p\), in these cases the
information-dependence template could not possibly be instantiated.
Therefore, one might be tempted to conclude that when \(s\) sees that
\(p, s\) acquires a justification that typically transmits to the
propositions that \(s\) knows to be deducible from \(p\) (cf. Wright
2002). Pritchard (2009, 2012a) comes very close to endorsing this view
explicitly.
The latter contention has not gone unchallenged. Suppose one accepts
a notion of epistemic justification with *internalist*
resonances saying that a factor \(J\) is relevant to \(s\)'s
justification only if \(s\) is able to determine, by reflection alone,
whether \(J\) is or is not realized. On this notion, \(s\)'s
seeing that \(p\) cannot provide \(s\) with justification for
believing \(p\) unless \(s\) can rationally claim that she is
*seeing* that \(p\) upon reflection alone. Seeing that \(p\),
however, is *subjectively* indistinguishable from hallucinating
that \(p\), or from being in some other delusional state in which it
merely *seems* to \(s\) that she is seeing that \(p\). Hence,
one may find it compelling that \(s\) can claim by reflection alone
that she's seeing that \(p\) only if \(s\) has an independent
reason for ruling out that it merely seems to her that she does (cf.
Wright 2011). If this is true, for many deductive arguments from \(p\)
to \(q, s\) won't be able to acquire non-inferential and
infallible justification for believing \(p\) of the type described by
the disjunctivist and transmit it to \(q\). This will happen whenever
\(q\) is the logical negation of a proposition ascribing to \(s\) some
delusionary mental state in which it merely seems to her that she
directly perceives that \(p\).
Wright's *disjunctive template* is meant to be a
diagnostic of transmission failure of non-inferential justification
when epistemic justification is conceived of in the internalist
fashion suggested
above.[6]
According to Wright (2000), for any propositions \(p, q\) and \(r\)
and subject \(s\), the *disjunctive template* is instantiated
whenever:
(D1)
\(p\) entails \(q\);
(D2)
\(s\)'s justification for \(p\) consists in \(s\)'s
being in a state subjectively indistinguishable from a state in which
\(r\) would be true;
(D3)
\(r\) is incompatible with \(p\);
(D4)
\(r\) would be true if \(q\) were false.
To see how this template works, consider again the following
triad:
*Zebra*
\(\rE\_3\).
The animals in the pen look like zebras.
\(\rP\_3\).
The animals in the pen are zebras.
Therefore:
\(\rQ\_3\).
The animals in the pen are not mules cleverly disguised to look
like zebras.
The justification from
\(\rE\_3\)
for
\(\rP\_3\)
arguably fails to transmit across the inference from \(\rP\_3\) to
\(\rQ\_3\)
because of the satisfaction of the *information-dependence
template*. For it seems true that \(s\) could acquire a
justification for believing \(\rP\_3\) on the basis of \(\rE\_3\) only
if \(s\) had an independent justification for believing \(\rQ\_3\). Now
suppose that \(s\)'s justification for \(\rP\_3\) is based on,
not \(\rE\_3\) but \(s\)'s *seeing* that \(\rP\_3\).
Let's call *Zebra\** the corresponding variant of
*Zebra*. Given the *non-inferential* nature of the
justification considered for \(\rP\_3\), *Zebra\** could not
instantiate the *information-dependence template*. However, it
is easy to check that *Zebra\** instantiates the *disjunctive
template*. To begin with,
(D1\(\_{\textit{Zebra}}\))
\(\rP\_3\)
entails
\(\rQ\_3\);
(D2\(\_{\textit{Zebra}}\))
\(s\)'s justification for believing
\(\rP\_3\)
is constituted by \(s\)'s seeing that \(\rP\_3\), which is
subjectively indistinguishable from the state that \(s\) would be in
if it were true that
\(\rR\_{\textit{Zebra}}\):
the animals in the pen are mules cleverly disguised to look like
zebras;
(D3\(\_{\textit{Zebra}}\))
\(\rR\_{\textit{Zebra}}\)
is incompatible with
\(\rP\_3\);
and, trivially,
(D4\(\_{\textit{Zebra}}\))
if
\(\rQ\_3\)
were false,
\(\rR\_{\textit{Zebra}}\)
would be true.
Since *Zebra\** instantiates the *disjunctive template*,
it is non-transmissive of at least *first-time* justification.
In fact note that \(s\) acquires first-time justification for
\(\rQ\_3\)
in this case if and only if
(C1\(\_{\textit{Zebra}}\))
\(s\) has justification for
\(\rP\_3\)
based on seeing that \(\rP\_3\),
(C2\(\_{\textit{Zebra}}\))
\(s\) knows that
\(\rP\_3\)
entails
\(\rQ\_3\),
and
(C3\(^{\textrm{ft}}\_{\textit{Zebra}}\))
\(s\) acquires first-time justification for believing
\(\rQ\_3\)
in virtue of
(C1\(\_{\textit{Zebra}}\))
and
(C2\(\_{\textit{Zebra}}\)).
Also note that
(C3\(^{\textrm{ft}}\_{\textit{Zebra}}\))
requires \(s\)'s justification for believing
\(\rQ\_3\)
*not* to be independent of \(s\)'s justification for
\(\rP\_3\)
based on seeing that \(\rP\_3\). However, if
(C1\(\_{\textit{Zebra}}\))
is true, the *disjunctive template* requires \(s\) to have
justification for believing \(\rQ\_3\) *independent* of
\(s\)'s justification for \(\rP\_3\) based on her seeing that
\(\rP\_3\). (For if \(s\) could not independently exclude that
\(\rQ\_3\) is false, given (D4), (D3) and (D2), \(s\) could not exclude
that the incompatible alternative \(\rR\_{\textit{Zebra}}\) to
\(\rP\_3\), which \(s\) cannot subjectively distinguish from \(\rP\_3\)
on the ground of her seeing that \(\rP\_3\), is true.) Thus, when the
*disjunctive template* is instantiated,
(C1\(\_{\textit{Zebra}}\))
and (C3\(^{\textrm{ft}}\_{\textit{Zebra}}\)) cannot be satisfied at
once.
The disjunctive template has been criticized by McLaughlin (2003) on
the ground that the template is instantiated whenever the
justification for \(p\) is fallible, i.e. compatible with
\(p\)'s falsity. Here is an example from Brown (2004). Take this
deductive argument:
*Fox*
\(\rP\_7\).
The animal in the garbage is a fox.
Therefore:
\(\rQ\_7\).
The animal in the garbage is not a cat.
Suppose \(s\) has a fallible justification for believing
\(\rP\_7\)
based on \(s\)'s *experience* as if the animal in the
garbage is a fox. Take now \(\rR\_{\textit{fox}}\) to be
\(\rP\_7\)'s logical negation. Since the justification that \(s\)
has for \(\rP\_7\) is fallible, condition
(D2)
above is met *by default*. As one can easily check, conditions
(D1),
(D3), and
(D4)
are also met. So,
*Fox*
instantiates the *disjunctive template*. Yet it is intuitive
that *Fox* does transmit justification to its conclusion.
One could respond to McLaughlin that his objection is misplaced
because the *disjunctive template* is meant to apply to
*infallible*, and not fallible, justification. A more
interesting response to McLaughlin is to refine some condition listed
in the *disjunctive template* to block McLaughlin's
argument while letting this template account for transmission failure
of both fallible and infallible justification. Wright (2011) for
instance suggests replacing
(D3)
with the following
condition:[7]
(D3\(^{\*}\))
\(r\) is incompatible with some presupposition of the cognitive
project of obtaining justification for \(p\) in the relevant
fashion.
According to Wright's (2011) characterization, a presupposition
of a cognitive project is any condition such that doubting it before
carrying out the project would rationally commit one to doubting the
significance or competence of the project irrespective of its
outcome.[8]
For a wide class of cognitive projects, examples of these
presuppositions include: the normal and proper functioning of the
relevant cognitive faculties, the reliability of utilized instruments,
the obtaining of the circumstances congenial to the proposed method of
investigation, the soundness of relevant principles of inference
utilized in developing and collating one's results, and so
on.
With
(D3\(^{\*}\))
in the place of
(D3),
*Fox* no longer instantiates the *disjunctive
template*. For the truth of \(\rR\_{\textit{fox}}\), stating that
the animal in the garbage is not a fox, appears to jeopardize no
presupposition of the cognitive project of obtaining perceptual
justification for
\(\rP\_7\).
Thus (D3\(^{\*}\)) is not fulfilled. On the other hand, arguments that
intuitively don't transmit do satisfy (D3\(^{\*}\)). Take
*Zebra\**. In this case the relevant proposition
\(\rR\_{\textit{Zebra}}\) states that the animals in the pen are mules
cleverly disguised to look like zebras. Since
\(\rR\_{\textit{Zebra}}\) entails that conditions are unsuitable for
attaining perceptual justification for believing
\(\rP\_3\),
\(\rR\_{\textit{Zebra}}\) looks incompatible with a presupposition of
the cognitive project of obtaining perceptual justification for
\(\rP\_3\). Thus, \(\rR\_{\textit{Zebra}}\) does satisfy (D3\(^{\*}\)) in
this case.
### 3.3 Non-Standard Accounts
In this section we outline two interesting branches in the literature
on transmission failure of justification and warrant across valid
inference. We start with Smith's (2009) non-standard account of
non-transmissivity of justification. Then, we present
Alspector-Kelly's (2015) account of non-transmissivity of
Plantinga's warrant.
According to Smith (2009), epistemic justification requires
*reliability*. \(s\)'s belief that \(p\), held on the
basis of \(s\)'s belief that \(e\), is reliable in Smith's
sense just in case in all possible worlds including \(e\) as true and
that are as *normal* (from the perspective of the actual world)
as the truth of \(e\) permits, \(p\) is also true. Consider
*Zebra*
again.
*Zebra*
\(\rE\_3\).
The animals in the pen look like zebras.
\(\rP\_3\).
The animals in the pen are zebras.
Therefore:
\(\rQ\_3\).
The animals in the pen are not mules cleverly disguised to look
like zebras.
\(s\)'s belief that
\(\rP\_3\)
is reliable in Smith's sense when it is based on \(s\)'s
belief that
\(\rE\_3\).
(Disguising mules to make them look like zebras is certainly an
*abnormal* practice.) Thus, all possible \(\rE\_3\)-worlds that
are as normal as the truth of \(\rE\_3\) permits aren't worlds in
which the animals in the pen are cleverly disguised mules. Rather,
they are \(\rP\_3\)-worlds--i.e., worlds in which the animals in
the pen *are* zebras.
Smith describes two ways in which a belief can possess this property
of being reliable. One is that a belief that \(p\) has it in virtue of
the modal relationship with its basis \(e\). In this case, \(e\) is a
contributing reliable basis. Another possibility is when it is the
content of the belief that \(p\), rather than the belief's modal
relationship with its basis \(e\), that guarantees by itself the
belief's reliability. In this case, \(e\) is a non-contributing
reliable basis. An example of the first kind is \(s\)'s belief
that
\(\rP\_3\),
which is reliable because of its modal relationship with
\(\rE\_3\).
There are obviously many sufficiently normal worlds in which
\(\rP\_3\) is false, but no sufficiently normal world in which
\(\rP\_3\) is false and \(\rE\_3\) true. An example of the second kind
is \(s\)'s belief that
\(\rQ\_3\)
as based on \(\rE\_3\). It is this belief's content, and not its
modal relationship with \(\rE\_3\), that guarantees its reliability. As
Smith puts it, there are no sufficiently normal worlds in which
\(\rE\_3\) is true and \(\rQ\_3\) is false, but this is simply because
there are no sufficiently normal worlds in which \(\rQ\_3\) is
false.
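Smith's reliability condition and the contributing/non-contributing distinction can be summarized schematically. The notation below is ours, offered as an interpretive sketch of Smith's informal statement rather than his own formulation:

```latex
% Reliability (our gloss on Smith 2009): in every e-world that is
% maximally normal, from the perspective of the actual world, among
% the worlds where e is true, p is also true.
\mathrm{Rel}(p \mid e) \;\iff\;
  \forall w \,\bigl[\, w \vDash e \;\wedge\;
  w \text{ is maximally normal among } e\text{-worlds}
  \;\rightarrow\; w \vDash p \,\bigr]

% e is a CONTRIBUTING reliable basis for p:
%   Rel(p | e) holds, yet some sufficiently normal world falsifies p,
%   so reliability turns on the modal link between e and p (as with P3).
% e is a NON-CONTRIBUTING reliable basis for p:
%   no sufficiently normal world falsifies p at all, so the content of
%   p secures reliability by itself, independently of e (as with Q3).
```

On this gloss, transmission failure occurs exactly when the same basis \(e\) is contributing for the premise but non-contributing for the conclusion.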
According to Smith, a deductive inference from \(p\) to \(q\) fails to
transmit to \(q\) justification relative to \(p\)'s basis \(e\),
if \(e\) is a contributing reliable basis for believing \(p\) but is a
non-contributing reliable basis for believing \(q\). In this case the
inference fails to *explain* \(q\)'s reliability: if
\(s\) deduced one proposition from another, she would reliably believe
\(q\), but not--not even in part--in virtue of having
inferred \(q\) from \(p\) (as held on the basis of \(e)\).
*Zebra* fails to transmit to
\(\rQ\_3\)
justification relative to
\(\rP\_3\)'s
basis
\(\rE\_3\)
in this
sense.[9]
Let's turn to Alspector-Kelly (2015). As we have seen,
Wright's analysis of failure of warrant transmission interprets
the epistemic good whose transmission is in question as, roughly, the
same as epistemic
justification.[10]
By so doing, Wright departs from Plantinga's (1993a, 1993b)
influential understanding of warrant as the epistemic quality that
(whatever it is) suffices to turn true belief into knowledge. One
might wonder, however, if there are deductive arguments incapable of
transmitting warrant in Plantinga's sense. (Hereafter we use
'warrant' only to refer to Plantinga's warrant.)
Alspector-Kelly answers this question affirmatively contending that
certain arguments cannot transmit warrant because the cognitive
project of establishing their conclusion via inferring it from their
premise is *procedurally self-defeating.*
Alspector-Kelly follows Wright (2012) in characterizing a cognitive
project as a pair of a question and a procedure that one executes to
answer the question. An *enabling condition* of a cognitive
project is, for Alspector-Kelly, any proposition such that, unless it
is true, one cannot learn the answer to its defining question by
executing the procedure associated with it. That a given object \(o\)
is illuminated, for example, is an enabling condition of the cognitive
project of determining its color by looking at it. For one cannot
learn by sight that \(o\) is of a given color unless \(o\) is
illuminated.
Enabling conditions can be *opaque*. An enabling condition of a
cognitive project is opaque, relative to some actual or possible
result, if it is the case that whenever the execution of the
associated procedure yields this result, it would have produced the
same result had the condition been unfulfilled. The enabling condition
that \(o\) be illuminated of the cognitive project just considered is
not opaque, since looking at \(o\) never produces the same response
about \(o\)'s color when \(o\) is illuminated and when it
isn't. (In the second case, it produces *no*
response.)
Now consider the cognitive project of establishing by sight whether
(\(\rP\_3)\)
the animals enclosed in the pen are zebras. An enabling condition of
this project states that
(\(\rQ\_3)\)
the animals in the pen are not mules disguised to look like zebras.
For one couldn't learn whether \(\rP\_3\) is true by looking if
\(\rQ\_3\) were false. \(\rQ\_3\) is opaque with respect to the possible
outcome that \(\rP\_3\). In fact, suppose \(\rQ\_3\) is satisfied
because the pen contains (undisguised) zebras. In this case, looking
at them will attest they are zebras, and the execution of the
procedure associated with this project will yield the outcome that
\(\rP\_3\). But this is exactly the outcome that would be generated by
the execution of the same procedure if \(\rQ\_3\) were *not*
satisfied. In this case too, looking at the animals would produce the
response that \(\rP\_3\).
We can now elucidate the notion of a *procedurally
self-defeating* cognitive project. The distinguishing feature of
any project of this type is that it seeks to answer the question
whether an *opaque* enabling condition--call it
'\(q\)'--of the project itself is fulfilled. A
project of this type has the defect that it necessarily produces the
response that \(q\) is fulfilled when \(q\) is unfulfilled. Given
this, it is intuitive that it cannot yield warrant to believe
\(q\).
As an example of a procedurally self-defeating project, consider
trying to answer the question whether your informant is sincere by
asking *her* this very question. Your informant's being
sincere is an enabling condition of the very project you are carrying
out. If your informant were not sincere, you couldn't learn
anything from her. This condition is also opaque with respect to the
possible response that she is sincere. For your informant would
respond that she is sincere both in case she is so and in case she
isn't. Since the execution of this project is guaranteed to
yield the result that your informant is sincere when she isn't,
it is intuitive that it cannot yield warrant to believe that the
informant is sincere.
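The informant example can be sketched as a tiny executable model (the function and variable names are ours, purely for illustration): the procedure of asking the informant about her own sincerity returns the same answer whether or not the enabling condition, her sincerity, is fulfilled, which is just what makes that condition opaque and the project procedurally self-defeating.

```python
# Toy model of a procedurally self-defeating project (our illustration):
# asking an informant "are you sincere?". Her sincerity is an enabling
# condition of the project, yet the procedure yields the same response
# whether or not that condition holds, so the condition is opaque.

def ask_if_sincere(informant_is_sincere: bool) -> str:
    """Execute the procedure: a sincere informant truthfully reports her
    sincerity; an insincere one lies about it. Either way she says 'yes'."""
    if informant_is_sincere:
        return "yes"  # truthful self-report
    return "yes"      # the liar misrepresents herself as sincere

def is_opaque(procedure, result: str) -> bool:
    """The enabling condition is opaque relative to `result` if the
    procedure yields `result` both when the condition holds and when
    it fails."""
    return procedure(True) == result and procedure(False) == result

print(is_opaque(ask_if_sincere, "yes"))  # prints: True
```

Since the procedure is guaranteed to output "yes" even when sincerity fails, executing it cannot generate warrant for believing the informant sincere.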
Alspector-Kelly contends that arguments like *Zebra*
don't transmit warrant because the cognitive project of
determining inferentially whether their conclusion is true is
procedurally self-defeating in the sense advertised. Let's apply
the explanation to *Zebra*.
Imagine you carry out *Zebra*. This initially commits you to
executing the project of establishing whether
\(\rP\_3\)
by looking. Suppose you get
\(\rE\_3\)
and hence \(\rP\_3\) as an outcome. As we have seen,
\(\rQ\_3\)
is an opaque enabling condition relative to the outcome \(\rP\_3\) of
the cognitive project you have executed. Imagine that now, having
received \(\rP\_3\) as a response to your first question, you embark on
the second project which carrying out *Zebra* commits you to:
establishing whether \(\rQ\_3\) is true by inference from \(\rP\_3\).
This project appears doomed.
\(\rQ\_3\)
is an enabling condition of the initial project of determining
whether
\(\rP\_3\),
which is opaque with respect to the very *premise* \(\rP\_3\).
It follows from this that \(\rQ\_3\) is also an enabling condition of
the second project. For if \(\rQ\_3\) were unfulfilled, you
couldn't learn \(\rP\_3\) and, *a fortiori,* you
couldn't learn anything else--\(\rQ\_3\) included--by
inference from \(\rP\_3\). Another consequence is that \(\rQ\_3\) is an
enabling condition of the second project that is *opaque*
relative to the very outcome that \(\rQ\_3\). Suppose first that
\(\rQ\_3\) is true and that, having visually inspected the pen, you
have verified \(\rP\_3\). You then execute the inferential procedure
associated with the second project and produce the outcome that
\(\rQ\_3\). Now consider the case in which \(\rQ\_3\) is false. Looking
into the pen still generates the outcome that \(\rP\_3\) is true. Thus,
when you execute the procedure associated with the second project, you
still infer that \(\rQ\_3\) is true. Since you would get the result
that \(\rQ\_3\) is true even if \(\rQ\_3\) were false, the second
project is procedurally self-defeating. In conclusion, carrying out
*Zebra* commits you to executing a project that cannot generate
warrant for believing its conclusion \(\rQ\_3\). This explanation of
non-transmissivity generalizes to all structurally similar
arguments.
## 4. Applications
The notions of transmissive and non-transmissive argument, above and
beyond being investigated for their own sake, have been put to work in
relation to specific philosophical problems and issues. An important
problem is whether Moore's infamous proof of an external world
is successful, and whether so are structurally similar Moorean
arguments directed against the perceptual skeptic and--more
recently--the non-believer. Another issue pertains to the
solution of the McKinsey paradox. A third issue concerns
Boghossian's (2001, 2003) explanation of our logical knowledge
via implicit definitions, criticized as resting on a non-transmissive
argument schema (see for instance Ebert 2005 and Jenkins 2008). As the
debate focusing on the last topic is only at an early stage of its
development, it is preferable to concentrate on the first two, which
will be reviewed in respectively
Sect. 4.1
and
Sect. 4.2
below.
### 4.1 Moorean Proofs, Perceptual Justification and Religious Justification
Much of the contemporary debate on Moore's proof of an external
world (see Moore 1939) is interwoven with the topic of epistemic
transmission and its failure. Moore's proof can be reconstructed
as follows:
*Moore*
\(\rE\_8\).
My experience is in all respects as of a hand held up in front of
my face.
\(\rP\_8\).
Here is a hand.
Therefore:
\(\rQ\_8\).
There is a material world (since any hand is a material object
existing in space).
Evidence
\(\rE\_8\)
in *Moore* is constituted by a proposition *believed*
by \(s\). One might suggest, however, that this is a misinterpretation
of Moore's proof (and variants of it that we shall consider
shortly). One might argue that what is meant to give \(s\)
justification for believing
\(\rP\_8\)
is \(s\)'s *experience* of a hand. Nevertheless, most of
the epistemologists participating in this debate implicitly or
explicitly assume that one's experience as if \(p\) and
one's belief that one has an experience as if \(p\) have the
same justifying power (cf. White 2006 and Silins 2007).
Many philosophers find Moore's proof unsuccessful. Philosophers
have proposed explanations of this impression according to which
*Moore* is non-transmissive in some of the senses described in
Sect. 3
(see mainly Wright 1985, 2002, 2007 and
2011).[11] A different explanation is that Moore's proof does
transmit justification but is dialectically ineffective (see mainly
Pryor 2004).
According to Wright, there exist *cornerstone propositions* (or
simply *cornerstones*), where \(c\) is a cornerstone for an
area of discourse \(d\) just in case for any proposition \(p\)
belonging to \(d, p\) could not be justified for any subject \(s\) if
\(s\) had no independent justification for accepting \(c\) (see mainly
Wright
2004).[12]
Cornerstones for the area of discourse about perceivable things are
for instance the *logical negations of skeptical conjectures*,
such as the proposition that one's experiences are nothing but
one's hallucinations caused by a Cartesian demon or the Matrix.
Wright contends that the conclusion
\(\rQ\_8\)
of *Moore* is also a cornerstone for the area of discourse
about perceivable things. Adapting terminology introduced by Pryor
(2004), Wright's conception of the architecture of perceptual
justification thus treats \(\rQ\_8\) *conservatively* with
respect to any perceptual hypothesis \(p\): if \(s\) had no
independent justification for \(\rQ\_8\), no proposition \(e\)
describing an apparent perception could supply \(s\) with *prima
facie* justification for any perceptual hypothesis \(p\). It
follows from this that *Moore* instantiates the
*information-dependence template* considered in
Sect. 3.2.
For \(\rQ\_8\) is part of the collateral information which \(s\) needs
independent justification for if \(s\) is to receive some
justification for
\(\rP\_8\)
from
\(\rE\_8\).
Hence *Moore* is non-transmissive (see mainly Wright 2002).
Note that the thesis that Moore's proof is epistemologically
useless because non-transmissive in this sense is compatible with the
claim that by learning \(\rE\_8, s\) does acquire a justification for
believing \(\rP\_8\). For instance, Wright (2004) contends that we all
have a special kind of *non-evidential* justification, which he
calls *entitlement*, for accepting \(\rQ\_8\) as well as other
cornerstones in
general.[13]
So, by learning \(\rE\_8\) we do acquire justification for \(\rP\_8\).
Wright's analysis of Moore's proof and Wright's
conservatism have mostly been criticized in conjunction with his
theory of entitlement. A presentation of these objections is beyond
the scope of this entry. (See however Davies 2004; Pritchard 2005;
Jenkins 2007. For a defense of Wright's views see for instance
Neta 2007 and Wright 2014.)
As anticipated, other philosophers contend that Moore's proof
does transmit justification and that its ineffectiveness has a
different explanation. An important conception of the architecture of
perceptual justification, called *dogmatism* in Pryor (2000,
2004), embraces a generalized form of *liberalism* about
perceptual justification. This form of liberalism is opposed to
Wright's *conservatism*, and claims that to have
*prima facie* perceptual justification for believing \(p\) from
an apparent perception that \(p, s\) doesn't need independent
justification for believing the negation of skeptical conjectures or
non-perceiving hypotheses like \(\notQ\_8\). This is so, for the
dogmatist, because our experiences give us *immediate* and
*prima facie* justification for believing their contents.
Saying that perceptual justification is immediate is saying that it
doesn't presuppose--not even in part--justification
for anything else. Saying that justification is prima facie is saying
that it can be defeated by additional evidence. Our perceptual
justification would be defeated, for example, by evidence that a
relevant non-perceiving hypothesis is true or just as probable as its
negation. For instance, \(s\)'s perceptual justification for
\(\rP\_8\) would be defeated by evidence that \(\notQ\_8\) is true, or
that
\(\rQ\_8\)
and \(\notQ\_8\) are equally probable. On this point the dogmatist and
Wright do agree. They disagree on whether \(s\)'s perceptual
justification for
\(\rP\_8\)
requires independent justification for believing or accepting
\(\rQ\_8\). The dogmatist denies that \(s\) needs that independent
justification. Thus, according to the dogmatist, Moore's proof
*transmits* the perceptual justification available for its
premise to the
conclusion.[14]
The dogmatist argues (or may argue), however, that Moore's proof
is *dialectically* flawed (cf. Pryor 2004). The contention is
that Moore's proof is unsuccessful because it is useless for the
purpose of convincing the idealist or the external world skeptic that
there is an external (material) world. In short, neither the idealist
nor the global skeptic believes that there is an external world. Since
they don't believe
\(\rQ\_8\),
they are rationally required to distrust any *perceptual*
evidence offered in favor of
\(\rP\_8\)
in the first instance. For this reason they both will reject
Moore's proof as one based on an unjustified
premise.[15]
Moretti (2014) suggests that the dogmatist could alternatively
contend that Moore's proof is non-transmissive because
\(s\)'s experience of a hand gives \(s\) *immediate*
justification for believing both the premise \(\rP\_8\) and the
*conclusion* \(\rQ\_8\) of the proof at once. According to this
diagnosis, Moore's proof is *epistemically* flawed
because it instantiates a variant of *indirectness* (in which
the evidence is an *experience* of a hand). Pryor's
analysis of Moore's proof has principally been criticized in
conjunction with his liberalism in epistemology of perception. (See
Cohen 2002, 2005; Schiffer 2004; Wright 2007; White 2006; Siegel &
Silins 2015.)
Some authors (e.g., Wright 2003; Pryor 2004; White 2006; Silins 2007;
Neta 2010) have investigated whether certain *variants* of
Moore are transmissive of justification. These arguments start from a
premise like
\(\rP\_8\),
describing a (supposed) perceivable state of affairs of the external
world, and deduce from it the logical negation of a relevant
*skeptical conjecture*. Consider for example the variant of
*Moore* that starts from \(\rP\_8\) and replaces
\(\rQ\_8\)
with:
\(\rQ\_8^\*\).
It is not the case that I am a handless brain in a vat fed with
the hallucination of a hand held up in front of my face.
Let's call *Moore\** this variant of
*Moore*.
While dogmatists *à la* Pryor argue that
*Moore\** is transmissive but dialectically flawed (cf. Pryor
2004), conservatives *à la* Wright contend that it is
non-transmissive (cf. Wright 2007). Although it remains very
controversial whether or not *Moore* is transmissive,
epistemologists have found some *prima facie* reason to think
that arguments like *Moore\** are non-transmissive.
An important difference between *Moore* and *Moore\** is
this: whereas the logical negation of
\(\rQ\_8\)
does *not* explain the evidential statement
\(\rE\_8\)
("My experience is in all respects as of a hand held up in
front of my face") adduced in support of
\(\rP\_8\),
the logical negation of
\(\rQ\_8^\*\)--\(\notQ\_8^\*\)--somewhat
*explains* \(\rE\_8\). Since \(\notQ\_8^\*\) provides a potential
explanation of \(\rE\_8\), it is intuitive that \(\rE\_8\) is
*evidence* (perhaps very weak) for believing \(\notQ\_8^\*\). It
is easy to conclude from this that \(s\) cannot acquire justification
for believing \(\rQ\_8^\*\) via transmission through *Moore\**
upon learning \(\rE\_8\). For it is intuitive that if this were the
case, \(\rE\_8\) should count as evidence for \(\rQ\_8^\*\). But this is
impossible: one and the same proposition cannot simultaneously be
evidence for a hypothesis and its logical negation. By formalizing
intuitions of this type, White (2006) has put forward a simple
Bayesian argument to the effect that *Moore\** and similar
variants of *Moore* are not transmissive of
justification.[16]
(See Silins 2007 for discussion. For responses to White, see
Weatherson 2007; Kung 2010; Moretti 2015.)
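The core probabilistic point behind arguments of White's type can be sketched as follows (a minimal illustration of the confirmation-theoretic intuition, not a reconstruction of White's full argument):

```latex
% On the standard Bayesian account, e is evidence for (confirms) h
% just in case e raises h's probability:
%     e confirms h  iff  \Pr(h \mid e) > \Pr(h).
% Suppose e confirms h. Since \Pr(\neg h \mid e) = 1 - \Pr(h \mid e):
\[
\Pr(\neg h \mid e) \;=\; 1 - \Pr(h \mid e) \;<\; 1 - \Pr(h) \;=\; \Pr(\neg h).
\]
% So e lowers the probability of \neg h: one and the same proposition
% cannot raise the probability of both a hypothesis and its negation.
```

Applied to *Moore\**: if \(\rE\_8\) confirms \(\notQ\_8^\*\), it cannot also confirm \(\rQ\_8^\*\); so whatever justification \(s\) has for \(\rQ\_8^\*\) cannot be acquired from \(\rE\_8\) via transmission through the proof.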
From the above analysis it is easy to conclude that the
*information-dependence template* is satisfied by
*Moore\** and akin proofs. In fact note that if
\(\rE\_8\)
is evidence for both
\(\rP\_8\)
and \(\notQ\_8^\*\), it seems correct to say that \(s\) can acquire a
justification for believing \(\rP\_8\) only if \(s\) has independent
justification for *disbelieving* \(\notQ\_8^\*\) and thus
*believing* \(\rQ\_8^\*\). Since \(\notQ\_8^\*\) counts as a
*non-perceiving* hypothesis for Pryor, this gives us a reason
to doubt dogmatism (cf. White
2006).[17]
Coliva (2012, 2015) defends a view--which she baptizes
*moderatism*--that aims to be a middle way between
Wright's conservatism and Pryor's dogmatism. The moderate
contends--against the conservative--that cornerstones cannot
be justified and that to possess perceptual justification \(s\) needs
no justification for accepting any cornerstones. On the other hand,
the moderate claims--against the dogmatist--that to possess
perceptual justification \(s\) needs to *assume* (without
justification) relevant
cornerstones.[18]
By relying on her variant of the *information-dependence
template* described in
Sect. 3.2,
Coliva concludes that neither *Moore* nor any proof like
*Moore\** is transmissive. (For a critical discussion of
moderatism see Avnur 2017; Baghramian 2017; Millar 2017; Volpe 2017;
Coliva 2017.)
Epistemological disjunctivists like McDowell and Pritchard have argued
that in paradigmatic cases of perceptual knowledge, what is meant to
give \(s\) justification for believing
\(\rP\_8\)
is, not
\(\rE\_8\),
but \(s\)'s factive state of seeing that \(\rP\_8\). This seems
to have consequences for the question whether *Moore\**
transmits propositional justification. Pritchard explicitly defends
the claim that when \(s\) sees that \(\rP\_8\), thereby learning that
\(\rP\_8, s\) can come to know by inference from \(\rP\_8\) the negation
of any skeptical hypothesis inconsistent with \(\rP\_8\), like
\(\rQ\_8^\*\)
(cf. Pritchard 2012a: 129-30). This may encourage the belief
that, for Pritchard, *Moore\** can transmit the justification
for \(\rP\_8\), based on \(s\)'s seeing that \(\rP\_8\), to
\(\rQ\_8^\*\) (see Lockhart 2018 for an explicit argument to this
effect). This claim must however be handled with some care.
As we have seen in
Sect. 2,
Pritchard contends that when one knows \(p\) on the basis of evidence
\(e\), one can know \(p\)'s consequence \(q\) by inference from
\(p\) only if \(e\) sufficiently supports \(q\). For Pritchard this
condition is met when \(s\)'s support for believing
\(\rP\_8\)
is constituted by \(s\)'s seeing that \(\rP\_8, s\)'s
epistemic situation is objectively good (i.e., \(s\)'s cognitive
faculties are working properly in a cooperative environment) and the
skeptical hypothesis ruled out by
\(\rQ\_8^\*\)
has not been epistemically motivated. For, in this case, \(s\) has a
reflectively accessible *factive* support for believing
\(\rP\_8\) that *entails*--and so sufficiently
supports--\(\rQ\_8^\*\). Thus, in this case, nothing stands in the
way of \(s\) competently deducing \(\rQ\_8^\*\) from \(\rP\_8\), thereby
coming to know \(\rQ\_8^\*\) on the basis of \(\rP\_8\).
If upon *deducing* one proposition from another, \(s\) comes to
justifiably believe
\(\rQ\_8^\*\)
for the first time, the inference from
\(\rP\_8\)
to \(\rQ\_8^\*\) presumably transmits *doxastic* justification.
(See the supplement on
Transmission of Propositional Justification *versus* Transmission of Doxastic Justification.)
It is more dubious, however, that when \(s\)'s support for
\(\rP\_8\) is constituted by \(s\)'s seeing that \(\rP\_8\),
*Moore\** is also transmissive of *propositional*
justification. For instance, one might contend that *Moore\** is
non-transmissive because it instantiates the *disjunctive
template* described in
Sect. 3.2
(cf. Wright 2002). To start with, \(\rP\_8\) entails \(\rQ\_8^\*\), so
(D1)
is satisfied. Let \(\rR^{\*}\_{\textit{Moore}}\) be the proposition
that this is no hand but \(s\) is the victim of a hallucination of a hand
held up before her face. Since \(\rR^{\*}\_{\textit{Moore}}\) would be
true if \(\rQ\_8^\*\) were false,
(D4)
is also satisfied. Furthermore, take the grounds of \(s\)'s
justification for \(\rP\_8\) to be \(s\)'s seeing that \(\rP\_8\).
Since this experience is a state for \(s\) indistinguishable from one
in which \(\rR^{\*}\_{\textit{Moore}}\) is true,
(D2)
is also satisfied. Finally, it might be argued that the proposition
that one is not hallucinating is a presupposition of the cognitive
project of learning about one's environment through perception.
It follows that \(\rR^{\*}\_{\textit{Moore}}\) is incompatible with a
presupposition of \(s\)'s cognitive project of learning about
one's environment through perception. Thus
(D3\(^{\*}\))
appears fulfilled too. So, *Moore\** won't transmit the
justification that \(s\) has for \(\rP\_8\) to \(\rQ\_8^\*\).
To resist this conclusion, a disjunctivist might insist that
*Moore\**, relative to \(s\)'s support for
\(\rP\_8\)
supplied by \(s\)'s seeing that \(\rP\_8\), doesn't always
instantiate the *disjunctive template* because
(D2)
isn't necessarily fulfilled (cf. Lockhart 2018). By invoking a
distinction drawn by Pritchard (2012a), one might contend that (D2)
isn't necessarily fulfilled because \(s\), though unable to
introspectively discriminate seeing that \(\rP\_8\) from hallucinating
it, may have evidence that favors the hypothesis that she is in the
first state, which makes the state reflectively accessible. For
Pritchard, this happens--as we have seen--when \(s\) sees
that \(\rP\_8\) in good epistemic conditions and the skeptical
conjecture that \(s\) is having a hallucination of a hand hasn't
been epistemically motivated. In this case, \(s\) can come to know
\(\rQ\_8^\*\)
by inference from \(\rP\_8\) even if she is unable to introspectively
discriminate one situation from the other.
Pritchard's thesis that, in good epistemic conditions,
\(s\)'s factive support for believing
\(\rP\_8\)
coinciding with \(s\)'s seeing that \(\rP\_8\) is reflectively
accessible is controversial (cf. Piazza 2016 and Lockhart 2018). Since
this thesis is essential to the contention that *Moore\** may
not instantiate the *disjunctive template* and may thus be
transmissive of propositional justification, the latter contention is
also controversial.
So far we have focused on perceptual justification. Now let's
switch to *religious* justification. Shaw (2019) aims to
motivate a view in religious epistemology--he contends that the
'theist in the street', though unaware of the traditional
arguments for theism, can find independent rational support for
believing that God exists through proofs for His existence. These
proofs are based on religious experiences and parallel in structure
Moore's proof of an external world interpreted as having the
premise supported by an experience.
A Moorean proof for the existence of God is one that deduces a belief
that God exists from what Alston (1991) calls a 'manifestation
belief' (M-belief, for short). An M-belief is a belief based
non-inferentially on a corresponding mystical experience. Typical
M-beliefs are about God's acts toward a given subject at a given
time: for example, beliefs about God's bringing comfort to me,
reproving me for some wrongdoing, or demonstrating His love for me,
and so on. Here is a Moorean proof for the existence of God:
*God*
\(\rP\_9\).
God is comforting me just now.
Therefore:
\(\rQ\_9\).
God exists.
Note that the 'good' case in which my experience that
\(\rP\_9\)
is veridical has a corresponding, subjectively indistinguishable,
'bad' case, in which it only seems to me that \(\rP\_9\)
because I'm suffering from a delusional religious experience. A
concern is, therefore, that *God* may instantiate the
*disjunctive template*. In this case, my experience as if
\(\rP\_9\) could actually justify \(\rP\_9\) only if I had independent
justification for
\(\rQ\_9\)
(cf. Pritchard 2012b). If so, *God* isn't
transmissive.
Shaw (2019) concedes that
*God*
is dialectically ineffective. To defend its transmissivity, he
appeals to religious epistemological disjunctivism, which says that an
M-belief that \(p\) can enjoy infallible rational support in virtue of
one's *pneuming* that \(p\), where this mental state is
both factive and accessible on reflection. Shaw intends
'pneuming that \(p\)' to stand as a kind of
religious-perceptual analogue to 'seeing that \(p\)' (cf.
Shaw 2016, 2019). When it comes to *God*, my belief that
\(\rP\_9\) is supposed to be justified by my pneuming that God is
comforting me just now. As we have seen in the case of
*Moore\**, a worry is that conceiving of the justification for
\(\rP\_9\) along epistemological disjunctivist lines may not suffice to
exempt *God* from the charge of instantiating the
*disjunctive template* and being thus non-transmissive.
### 4.2 McKinsey's Paradox
McKinsey (1991, 2002, 2003, 2007) has offered a reductio argument for
the incompatibility of first-person privileged access to mental
content and externalism about mental content. The privileged access
thesis roughly says that it is necessarily true that if \(s\) is
thinking that \(x\), then \(s\) can in principle know *a
priori* (or in a non-empirical way) that she is thinking that
\(x\). Externalism about mental content roughly says that predicates
of the form 'is thinking that \(x\)'--e.g., 'is
thinking that water is wet'--express properties that are
*wide*, in the sense that possession of these properties by
\(s\) logically or conceptually implies the existence of relevant
contingent objects external to \(s\)'s mind--e.g., water.
McKinsey argues that \(s\) may reason along these lines:
*Water*
\(\rP\_{10}\).
I am thinking that water is wet.
\(\rP\_{11}\).
If I am thinking that water is wet, then I have (or my linguistic
community has) been embedded in an environment that contains
water.
Therefore:
\(\rQ\_{10}\).
I have (or my linguistic community has) been embedded in an
environment that contains water.
*Water* produces an absurdity. If the privileged access thesis
is true, \(s\) knows
\(\rP\_{10}\)
*non-empirically*. If semantic externalism is true, \(s\)
knows
\(\rP\_{11}\)
*a priori* by conceptual analysis. Since \(\rP\_{10}\) and
\(\rP\_{11}\) do entail
\(\rQ\_{10}\)
and knowledge is presumably closed under known entailment, \(s\)
knows \(\rQ\_{10}\)--which is an *empirical*
proposition--by simply competently deducing it from \(\rP\_{10}\)
and \(\rP\_{11}\) and without conducting any *empirical
investigation*. Since this is absurd, McKinsey concludes that the
privileged access thesis or semantic externalism must be false.
One way to resist McKinsey's *incompatibilist* conclusion
that the privileged access thesis and externalism about mental content
cannot be true together is to argue that
*Water*
is non-transmissive. Since knowledge is presumably closed under known
entailment, it remains true that \(s\) cannot know
\(\rP\_{10}\)
and
\(\rP\_{11}\)
while failing to know
\(\rQ\_{10}\).
However, McKinsey's paradox originates from the stronger
conclusion--motivated by the claim that *Water* is a
deductively valid argument featuring premises knowable
non-empirically--that \(s\), by running *Water*, could
know *non-empirically* the *empirical* proposition
\(\rQ\_{10}\) that she or members of her community have had contact
with water. This is precisely what could not happen if *Water*
is non-transmissive: in this case \(s\) couldn't learn
\(\rQ\_{10}\) on the basis of her non-empirical justification for
\(\rP\_{10}\) and \(\rP\_{11}\), and her knowledge of the entailment
between \(\rP\_{10}\), \(\rP\_{11}\), and \(\rQ\_{10}\) (see mainly
Wright 2000, 2003,
2011).[19]
A first way to defend the thesis that *Water* is
non-transmissive is to argue that it instantiates the *disjunctive
template* (considered in
Sect. 3.2)
(cf. Wright 2000, 2003, 2011). If *Water* is non-transmissive,
\(s\) could acquire a justification for, or knowledge of,
\(\rP\_{10}\)
and
\(\rP\_{11}\)
only if \(s\) had an *independent* justification for, or
knowledge of,
\(\rQ\_{10}\).
(And to avoid the absurd result that McKinsey recoils from, this
independent justification should be empirical.) If this diagnosis is
correct, one need not deny \(\rP\_{10}\) or \(\rP\_{11}\) to reject the
intuitively false claim that \(s\) could know the empirical
proposition \(\rQ\_{10}\) in virtue of only non-empirical
knowledge.
To substantiate the thesis that *Water* instantiates the
*disjunctive template* one should first emphasize that the kind
of externalism about mental content underlying \(\rP\_{10}\) is
compatible with the possibility that \(s\) suffers from illusion of
content. Were this to happen with \(\rP\_{10}, s\) would seem to
introspect that she believes that water is wet whereas there is
nothing like that content to be believed by \(s\) in the first
instance. Consider:
\(\rR\_{\textit{water}}\).
'water' refers to *no* natural kind so that
there is *no* content expressed by the sentence 'water is
wet'.
\(s\)'s state of having introspective justification for
believing
\(\rP\_{10}\)
is arguably subjectively indistinguishable from a situation in which
\(\rR\_{\textit{water}}\) is true. Thus condition
(D2)
of the *disjunctive template* is met. Moreover,
\(\rR\_{\textit{water}}\) appears incompatible with an obvious
presupposition of \(s\)'s cognitive project of attaining
introspective justification for believing \(\rP\_{10}\), at least if
the content that water is wet embedded in \(\rP\_{10}\) is constrained
by \(s\) or her linguistic community having been in contact with
water. Thus condition
(D3\(^{\*}\))
is also met. Furthermore, \(\rP\_{10}\) entails \(\rQ\_{10}\) (when
\(\rP\_{11}\) is in background information). Hence, condition
(D1)
is fulfilled. If one could also show that
(D4)
is satisfied in *Water*, in the sense that if \(\rQ\_{10}\)
were false \(\rR\_{\textit{water}}\) would be true, one would have
shown that the *disjunctive template* is satisfied by
*Water*. Wright (2000) takes (D4) to be fulfilled and concludes
that the *disjunctive template* is satisfied by
*Water*.
Unfortunately, the claim that
(D4)
is satisfied in *Water* cannot easily be vindicated (cf.
Wright 2003). Condition (D4) is satisfied in *Water* only if it
is true that if \(s\) (or \(s\)'s linguistic community) had not
been embedded in an environment that contains *water*, the term
'water' would have referred to no natural kind. This is
true only if the closest possible world \(w\) in which this
counterfactual's antecedent is true is like Boghossian's
(1997) *Dry Earth*--namely, a world where no one has ever
had any contact with any kind of watery stuff, and all apparent
contacts with it are always due to multi-sensory hallucination. If
\(w\) is not *Dry Earth*, but Putnam's *Twin
Earth*, however, the counterfactual turns out false, as in this
possible world people usually have contact with some other watery
stuff that they call 'water'. So, in this world
'water' refers to a natural kind, though not to
H\(\_2\)O. Since determining which of *Dry Earth* or
*Twin Earth* is modally closer to the actual world (supposing
\(s\) is in the actual world)--and so determining whether (D4) is
satisfied in *Water*--is a potentially elusive task, the
claim that *Water* instantiates the *disjunctive
template* appears less than fully
warranted.[20]
An alternative dissolution of McKinsey's paradox--also based on a
diagnosis of non-transmissivity--seems to be available if one
considers the proposition that
\(\rQ\_{11}\).
\(s\) (or \(s\)'s linguistic community) has been embedded
in an environment containing some *watery substance* (cf.
Wright 2003 and 2011).
This alternative strategy assumes that
\(\rQ\_{11}\),
rather than
\(\rQ\_{10}\),
is a presupposition of \(s\)'s cognitive project of attaining
introspective justification for
\(\rP\_{10}\).
Even if *Water* doesn't instantiate the *disjunctive
template*, a new diagnosis of what's wrong with
McKinsey's paradox could rest on the claim that the different
argument yielded by expanding *Water* with
\(\rQ\_{11}\)
as conclusion--call it *Water\**--instantiates the
*disjunctive template*. If
\(\rR\_{\textit{water}}\)
is the same proposition as above, it is easy to see that
*Water\** satisfies conditions
(D4),
(D2), and
(D3\(^{\*}\))
of the *disjunctive template*. Furthermore
(D1)
is satisfied at least in the sense that it seems *a priori*
that
\(\rP\_{10}\)
via
\(\rQ\_{10}\)
entails
\(\rQ\_{11}\)
(if
\(\rP\_{11}\)
is in background information) (cf. Wright 2011). On this novel
diagnosis of non-transmissivity, what would be paradoxical is that
\(s\) could earn justification for \(\rQ\_{11}\) in virtue of her
non-empirical justification for \(\rP\_{10}\) and \(\rP\_{11}\) and her
knowledge of the *a priori* link from \(\rP\_{10}\),
\(\rP\_{11}\) via \(\rQ\_{10}\) to \(\rQ\_{11}\). If one follows
Wright's (2003)
suggestion[21]
that \(s\) is *entitled* to accept \(\rQ\_{11}\)--namely,
the presupposition that there is a watery substance that provides
'water' with its extension--*Water* becomes
*innocuously* transmissive, and the apparent paradox
surrounding *Water* vanishes. This is so at least if one grants
that it is *a priori* that water is the watery stuff of our
actual acquaintance, once it is presupposed that there is any watery
stuff of our actual acquaintance. For useful criticism of responses of
this type to McKinsey's paradox see Sainsbury (2000), Pritchard
(2002), Brown (2003, 2004), McLaughlin (2003), McKinsey (2003), and
Kallestrup (2011).
## 1. What is transworld identity?
### 1.1 Why transworld identity?
Suppose that, in accordance with the possible-worlds framework for
characterizing modal statements (statements about what is possible or
necessary, about what might or could have been the case, what could
not have been otherwise, and so on), we treat the general statement
that there might have been purple cows as equivalent to the statement
that there is some possible world in which there are purple cows, and
the general statement that there could not have been round squares
(i.e., that it is necessary that there are none) as equivalent to the
statement that there is no possible world in which there are round
squares.
How are we to extend this framework to statements about what is
possible and necessary for *particular* individuals--what
are known as *de re* modal statements ('*de
re*' meaning 'about a thing')--for example,
that Clover, a particular (actually existing) four-legged cow, could
not have been a giraffe, or that she could have had just three legs? A
natural extension of the framework is to treat the first statement as
equivalent to the claim that there is no possible world in which
Clover is a giraffe, and the second as equivalent to the claim that
there is some possible world in which Clover is three-legged. But this
last claim appears to imply that there is some possible world in which
Clover exists and has three legs--from which it seems inescapably
to follow that one and the same individual--Clover--exists
in some merely possible world as well as in the actual world: that
there is an identity between Clover and some individual in another
possible world. Similarly, it appears that the *de re* modal
statements 'George Eliot could have been a scientist rather than
a novelist' and 'Bertrand Russell might have been a
playwright instead of a philosopher' will come out as
'There is some possible world in which George Eliot (exists and)
is a scientist rather than a novelist' and 'There is some
possible world in which Bertrand Russell (exists and) is a playwright
and not a philosopher'. Again, each of these appears to involve
a commitment to an identity between an individual who exists in the
actual world (Eliot, Russell) and an individual who exists in a
non-actual possible world.
To recapitulate: the natural extension of the possible-worlds
interpretation to *de re* modal statements involves a
commitment to the view that some individuals exist in more than one
possible world, and thus to what is known as 'identity across
possible worlds', or (for short) 'transworld
identity'. (It is questionable whether the shorthand is really
apt. One would expect a 'transworld' identity to mean an
identity that holds across (and hence within) *one* world, not
an identity that holds between objects in *distinct* worlds.
(As David Lewis (1986) has pointed out, our own Trans World Airlines
is an intercontinental, not an interplanetary, carrier.) Nevertheless,
the term 'transworld identity' is far too well established
for it to be sensible to try to introduce an alternative, although
'interworld identity' or even 'transmodal
identity' would in some ways be more appropriate.) But is this
commitment acceptable?
### 1.2 Transworld identity and conceptions of possible worlds
To say that there is a transworld identity between *A* and
*B* is to say that there is some possible world
*w*1, and some distinct possible world
*w*2, such that *A* exists in
*w*1, and *B* exists in
*w*2, and *A* is identical with *B*.
(Remember that we are treating the actual world as one of the possible
worlds.) In other words, to say that there is a transworld identity is
to say that the *same* object exists in distinct possible
worlds, or (more simply) that some object exists in more than one
possible world.
But what does it mean to say that an individual exists in a merely
possible world? And--even if we accept that paraphrases of modal
statements in terms of possible worlds are in general
acceptable--does it even make *sense* to say that actual
individuals (like you and your neighbour's cat and the Eiffel
Tower) exist in possible worlds other than the actual world? To know
what a claim of transworld identity amounts to, and whether such
claims are acceptable, we need to know what a possible world is, and
what it is for an individual to exist in one.
Among those who take possible worlds seriously (that is, those who
think that there are possible worlds, on some appropriate
interpretation of the notion), there is a variety of conceptions of
their nature. On one account, that of David Lewis, a non-actual
possible world is something like another universe, isolated in space
and time from our own, but containing objects that are just as real as
the entities of our world; including its own real concrete objects
such as people, tables, cows, trees, and rivers (but also, perhaps,
real concrete unicorns, hobbits, and centaurs). According to Lewis,
there is no objective difference in status between what we call
'the actual world' and what we call 'merely possible
worlds'. We call our world 'actual' simply because
we are in it; the inhabitants of another world may, with equal right,
call *their* world 'actual'. In other words,
according to Lewis, 'actual' in 'the actual
world' is an indexical term (like 'here' or
'now'), not an indicator of a special ontological status
(Lewis 1973, 84-91; Lewis 1986, Ch. 1).
On Lewis's 'extreme realist' account of possible
worlds, it looks as if, for Clover to exist in another possible world
as well as the actual world would be for her to be a part of such a
world: Clover would somehow have to exist as a (concrete) part of two
worlds, 'in the same way that a shared hand might be a common
part of two Siamese twins' (Lewis 1986, 198). But this is
problematic. Clover actually has four legs, but could have had three
legs. Should we infer that Clover is a part of some world at which she
has only three legs? If so, then how many legs does Clover have: four
(since she actually has four legs), or seven (since she has four in
our world and three in the alternative world)? Worse still, we appear
to be ascribing contradictory *properties* to Clover: she has
four legs, and yet has no more than three.
Those who believe in the 'extreme realist' notion of
possible worlds may respond by thinking of Clover as having a
four-legged part in our world, and a three-legged part at some other
world. This is Yagisawa's (2010) view (cf. Lewis 1986,
210-220). Yagisawa thinks of concrete entities--everyday
things such as cats, trees and macbooks--as extended across
possible worlds (as well as across times and places), in virtue of
having *stages* (or parts) which exist at those worlds (and
times and places). Ordinary entities thus comprise spatial, temporal
and modal stages, all of which are equally real. Metaphysically, modal
stages (and the worlds at which they exist) are on a par with temporal
and spatial stages (and the times and places at which they exist).
(This view is the modal analogue of the 'perdurance'
account of identity over time, according to which an object persists
through time by having 'temporal parts' that are located
at different times.) Thus, when we say that Clover has four legs in
our world but only three in some other world, we are saying that she
has a four-legged modal stage and a distinct three-legged modal stage.
Clover herself is neither four-legged nor three-legged. (However,
there is a sense in which Clover herself--the entity comprising
many modal stages--has awfully many legs, even though she
actually has only four.)
Another option for the 'extreme realist' about possible
worlds is to hold that Clover is four-legged relative to our world,
but three-legged relative to some other world. In general, qualities
we would normally think of as monadic properties are in fact relations
to worlds. McDaniel (2004) defends a view along these lines. A
feature of this account is that one and the same entity may exist
according to many worlds, for that entity may bear the
*exists-at* relation to more than one world. Accordingly, the
view is sometimes called *genuine modal realism with overlap*
(McDaniel 2004). This view, transposed to the temporal case, is
precisely what the *endurantist* says: objects do not have
temporal parts; each object is wholly present at each time. (See the
separate entry on Temporal Parts.)
Lewis rejects both of these options. He rejects the overlap view
because of what he calls 'the problem of accidental
intrinsics'. On the overlap view, *having four legs* is a
relation to a world, and hence not one of Clover's intrinsic
properties. In fact, any aspect of a particular that changes across
worlds turns out to be non-intrinsic to that particular. As a
consequence, every particular has all its intrinsic properties
essentially, which Lewis thinks is unacceptable (1986,
199-209).
Lewis himself combines his brand of realism about possible worlds with
a denial of transworld identities. According to Lewis, instead of
saying that George Eliot (in whole or in part) inhabits more than one
world, we should say that she inhabits one world only (ours), but has
counterparts in other worlds. And it is the existence of counterparts
of George Eliot who go in for a career in science rather than novel
writing that makes it true that she could have been a scientist rather
than a novelist (Lewis 1973, 39-43; 1968; 1986, Ch. 4).
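The counterpart-theoretic paraphrase of such *de re* claims can be sketched schematically as follows (a simplified rendering of the translation scheme of Lewis 1968; the predicate letters are illustrative abbreviations):

```latex
% 'George Eliot could have been a scientist' becomes, roughly:
\[
\exists w\, \exists x\, \big(\, Ww \;\wedge\; Ixw \;\wedge\; Cxe \;\wedge\; Sx \,\big)
\]
% where  Ww:  w is a possible world;
%        Ixw: x is in (is a part of) world w;
%        Cxe: x is a counterpart of Eliot (e);
%        Sx:  x is a scientist.
% No individual is required to exist in more than one world: the
% de re modal claim is made true by a similar but numerically
% distinct individual inhabiting another world.
```

On this scheme the commitment to transworld identity is traded for a commitment to the counterpart relation, a relation of similarity rather than identity.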
However, Lewis's version of realism is by no means the only
conception of possible worlds. According to an influential set of
rival accounts, possible worlds, although real entities, are not
concrete 'other universes', as in Lewis's theory,
but abstract objects such as (maximal) possible states of affairs or
'ways the world might have been'. (See Plantinga 1974;
Stalnaker 1976; van Inwagen 1985; Divers 2002; Melia 2003; Stalnaker
1995; also the separate entry on Possible Worlds. A state of affairs
*S* is 'maximal' just in case, for any state of
affairs *S\**, either it is impossible for both *S* and
*S\** to obtain, or it is impossible for *S* to obtain
without *S\**: the point of the restriction to the maximal is
just that a possible world should be a possible state of affairs that
is, in a relevant sense, complete.)
On the face of it, to treat possible worlds as abstract entities may
seem only to make the problem of transworld identity worse. If it is
hard to believe that you (or a table or a cat) could be part of
another Lewisian possible world, it seems yet harder to believe that a
concrete entity like you (or the table or cat) could be part of an
*abstract* entity. However, those who think that possible
worlds are abstract entities typically do not take the existence in a
merely possible world of a concrete actual individual to involve that
entity's being literally a part of such an abstract thing.
Rather, such a theorist will propose a different interpretation of
'existence in' such a world. For example, according to
Plantinga's (1973, 1974) version of this account, to say that
George Eliot exists in some possible world in which she is a scientist
is just to say that there is a (maximal) possible state of affairs
such that, had it obtained (i.e., had it been actual), George Eliot
would (still) have existed, but would have been a scientist. On this
(deflationary) account of existence in a possible world, it appears
that the difficulties that accompany the idea that George Eliot leads
a double life as an element of another concrete universe as well as
our own (or the idea that she is partially present in many such
universes) are entirely avoided. On Plantinga's account, to
claim that an actual object exists in another possible world with
somewhat different properties amounts to nothing more risqué
than the claim that the object could have had somewhat different
properties: something that few will deny. (Note that according to this
account, if the actual world is to be one of the possible worlds, then
the actual world must be an abstract entity. So, for example, if a
merely possible world is 'a way the world might have
been', the actual world will be 'the way the world
is'; if a merely possible world is a maximal possible state of
affairs that is not instantiated, then the actual world will be a
maximal possible state of affairs that *is* instantiated. It
follows that we must distinguish the actual world *qua*
abstract entity from 'the actual world' in the sense of
the collection of spatiotemporally linked entities including you and
your surroundings that constitutes 'the universe' or
'the cosmos'. The sense in which you exist in this
concrete universe (by being part of it) must be different from the
sense in which you exist in the abstract state of affairs that is in
fact instantiated (cf. Stalnaker 1976; van Inwagen 1985, note 3;
Kripke 1980, 19-20).)
The discussion so far may suggest that whether the notion of
transworld identity (that an object exists in more than one world) is
problematic depends solely on whether one adopts an account of
possible worlds as concrete entities such as Lewis's (in which
case it is) or an account of possible worlds as abstract entities such
as Plantinga's (in which case it is not). However, matters are
not so simple, for a variety of reasons (to be discussed in Sections
3-5 below).
## 2. Transworld identity and Leibniz's Law
There may seem to be an obvious objection to the employment of
transworld identity to interpret or paraphrase statements such as
'Bertrand Russell could have been a playwright' or
'George Eliot might have been a scientist'. A fundamental
principle about (numerical) identity is Leibniz's Law: the
principle that if *A* is identical with *B*, then any
property of *A* is a property of *B*, and vice versa. In
other words, according to Leibniz's Law, identity requires the
sharing of all properties; thus any difference between the properties
of *A* and *B* is sufficient to show that *A* and
*B* are numerically distinct. (The principle here referred to
as 'Leibniz's Law' is also known as the
Indiscernibility of Identicals. It must be distinguished from another
(more controversial) Leibnizian principle, the Identity of
Indiscernibles, which says that if *A* and *B* share all
their properties then *A* is identical with *B*.)
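Stated schematically in second-order notation (a standard formulation, offered here only as a gloss on the prose), the two principles are:

```latex
% Indiscernibility of Identicals ('Leibniz's Law'):
A = B \;\rightarrow\; \forall F\,(FA \leftrightarrow FB)

% Identity of Indiscernibles (the converse, and more controversial):
\forall F\,(FA \leftrightarrow FB) \;\rightarrow\; A = B
```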
However, the whole point of asserting a transworld identity is to
represent the fact that an individual could have had somewhat
*different* properties from its actual properties. Yet does not
(for example) the claim that a philosopher in the actual world is
identical with a non-philosopher in some other possible world conflict
with Leibniz's Law?
It is generally agreed that this objection can be answered, and the
appearance of conflict with Leibniz's Law eliminated. We can
note that the objection, if sound, would apparently prove too much,
since a parallel objection would imply that there can be no such thing
as genuine (numerical) identity through change of properties over
time. But it is generally accepted that no correct interpretation of
Leibniz's Law should rule this out. For example, Bertrand
Russell was thrice married when he received the Nobel Prize for
Literature; the one-year-old Bertrand Russell was, of course,
unmarried; does Leibniz's Law force us to deny the identity of
the prize-winning adult with the infant, on the grounds that they
differ in their properties? No, for it seems that the appearance of
conflict with Leibniz's Law can be dispelled, most obviously by
saying that the infant and the adult share the properties of *being
married in 1950* and *being unmarried in 1873*, but
alternatively by the proposal that the correct interpretation of
Leibniz's Law is that the identity of *A* and *B*
requires that there be no time such that *A* and *B*
have different properties *at that time* (cf. Loux 1979,
42-43; also Chisholm 1967). However, it seems that exactly
similar moves are available in the modal case to accommodate
'change' of properties across possible worlds. Either we
may claim that the actual Bertrand Russell and the playwright in some
possible world (say, \(w\_2\)) are alike in possessing
the properties of *being a philosopher in the actual world* and
*being a non-philosopher in \(w\_2\)*, or we may argue
that Leibniz's Law, properly interpreted, asserts that the
identity of *A* and *B* requires that there be no time,
and no possible world, such that *A* and *B* have
different properties *at that time and world*. The moral
appears to be that transworld identity claims (combined with the view
that some of an individual's properties could have been
different) need no more be threatened by Leibniz's Law than is
the view that there can be identity over time combined with change of
properties (Loux 1979, 42-43).
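The second of the two reconciling moves can be sketched by indexing property-possession to times and worlds (writing \(F_{w,t}A\) for '\(A\) has \(F\) at world \(w\) and time \(t\)'; a schematic rendering, not the entry's own formulation):

```latex
% Leibniz's Law, relativized to worlds and times:
A = B \;\rightarrow\;
  \forall F\,\forall w\,\forall t\,\bigl(F_{w,t}A \leftrightarrow F_{w,t}B\bigr)
```

On this reading, an individual's having different properties at different worlds (or times) generates no pair of properties had at the *same* index, and so no conflict with the Law.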
It should be mentioned, however, that David Lewis has argued that the
reconciliation of identity through change over time with
Leibniz's Law suggested above is oversimplified, and gives rise
to a 'problem of temporary intrinsics' that can be solved
only by treating a persisting thing that changes over time as composed
of temporal parts that do not change their intrinsic properties. (See
Lewis 1986, 202-204, and for discussion and further references,
Hawley 2001; Sider 2001; Lowe 2002; Haslanger 2003.) In addition, it
is partly because Lewis regards the analogous account of transworld
identity in terms of modal parts as an unacceptable solution to an
analogous 'problem of accidental intrinsics' that Lewis
rejects transworld identity in favour of counterpart theory (Lewis
1986, 199-220; cf. Section 1.2 above).
## 3. Is 'the problem of transworld identity' a pseudo-problem?
In the discussion of transworld identity in the 1960s and 1970s (when
the issue came to prominence as a result of developments in modal
logic), it was debated whether the notion of transworld identity is
genuinely problematic, or whether, on the contrary, the alleged
'problem of transworld identity' is merely a
pseudo-problem. (See Loux 1979, Introduction, Section III; Plantinga
1973 and 1974, Ch. 6; Kripke 1980 (cf. Kripke 1972); Kaplan 1967/1979;
Kaplan 1975; Chisholm 1967; for further discussion see, for example,
Divers 2002, Ch. 16; Hughes 2004, Ch. 3; van Inwagen 1985; Lewis 1986,
Ch. 4.)
It is difficult to pin down the alleged problem that is supposed to be
at the heart of this dispute. In particular, although the main
proponents of the view that the alleged problem is a pseudo-problem
clearly intended to attack (*inter alia*) Lewis's version
of modal realism, they did not attempt to rebut the thesis (discussed
in Section 1.2 above) that *if* one is a Lewisian realist about
possible worlds, then one should find transworld identity problematic.
Matters are complicated by the fact that proponents of the view that
the alleged problem of transworld identity is a pseudo-problem were to
some extent responding to hypothetical arguments, rather than
arguments presented in print by opponents (see Plantinga 1974, 93).
However, one central issue was whether the claim that an individual
exists in more than one possible world (and hence that there are cases
of transworld identity) needs to be backed by the provision of
*criteria of transworld identity*, and, if so, why.
The term 'criterion of identity' is ambiguous. In an
epistemological sense, a criterion of identity is a way of telling
whether an identity statement is true, or a way of recognizing whether
an individual *A* is identical with an individual *B*.
However, the notion of a criterion of identity also has a metaphysical
interpretation, according to which it is a set of (non-trivial)
necessary and sufficient conditions for the truth of an identity
statement. Although a criterion of identity in the second
(metaphysical) sense might supply us with a criterion of identity in
the first (epistemological) sense, it seems that something could be a
criterion of identity in the second sense even if it is unsuited to
play the role of a criterion of identity in the first sense.
The most influential arguments *against* the view that there is
a genuine problem of transworld identity (or 'problem of
transworld identification', to use Kripke's preferred
terminology) are probably those presented by Plantinga (1973, 1974)
and Kripke (1980). Plantinga and Kripke appear to have, as their
target, an alleged problem of transworld identity that rests on one of
three assumptions. The first assumption is that we must possess
criteria of transworld identity in order to ascertain, on the basis of
their properties in other possible worlds, the identities of (perhaps
radically disguised) individuals in those worlds. The second
assumption is that we must possess criteria of transworld identity if
our references to individuals in other possible worlds are not to miss
their mark. The third assumption is that we must possess criteria of
transworld identity in order to understand transworld identity claims.
Anyone who makes one of these assumptions is likely to think that
there is a problem of transworld identity--a problem concerning
our entitlement to make claims that imply that an individual exists in
more than one possible world. For it does not seem that we possess
criteria of transworld identity that could fulfil any of these three
roles. However, Plantinga and Kripke provide reasons for thinking that
none of these three assumptions survives scrutiny. If so, and if these
assumptions exhaust the grounds for supposing that there is a problem
of transworld identity, the alleged problem may be dismissed as a
pseudo-problem.
The three assumptions may be illustrated, using our examples of George
Eliot and Bertrand Russell, as follows. (The examples are alternated
simply for the sake of a little variety.)
(1)
The 'epistemological' assumption: We must possess a
criterion of transworld identity for George Eliot in order to be able
to tell, on the basis of knowledge of the properties that an
individual has in some other possible world, whether that individual
is identical with Eliot.
(2)
The 'security of reference' assumption: We must
possess a criterion of transworld identity for Bertrand Russell in
order to know that, when we say such things as 'There is a
possible world in which Russell is a playwright', we are talking
about Russell rather than someone else.
(3)
The 'intelligibility' assumption: We must possess a
criterion of transworld identity for George Eliot in order to
understand the claim that there is a possible world in which she is a
scientist.
### 3.1 Against the epistemological assumption
The epistemological assumption appears to imply that the point of our
having a criterion of transworld identity for George Eliot would be
that we could then *employ* the criterion in order to ascertain
which individual in a possible world is Eliot; if, on the other hand,
we do not possess such a criterion, we shall be unable to pick her out
or identify her in other possible worlds (Plantinga 1973; 1974, Ch. 6;
Kripke 1980, 42-53; cf. Loux 1979, Introduction; Kaplan
1967/1979). However, this suggestion, as stated, is vulnerable to the
charge that it is the product of confusion. For how *could* we
use a criterion of identity in the way envisaged? We must dismiss as
fanciful the idea that if we had a criterion of transworld identity
for George Eliot, we could use it to tell, *by empirical
inspection* of the properties of individuals in other possible
worlds (perhaps using a powerful telescope (Kripke 1980, 44) or
'Jules Verne-o-scope' (Kaplan 1967/1979, 93; Plantinga
1974, 94)), which, if any, of those individuals is Eliot. For no one
(including an extreme realist like Lewis) thinks that our
epistemological access to other possible worlds is of this kind.
(According to Lewis, other possible worlds are causally isolated from
our own, and hence beyond the reach of our telescopes or any other
perceptual devices.) But once we face up to the fact that a criterion
of transworld identity (if we had one) could have no such empirical
use, the argument based on the epistemological assumption appears to
collapse. It is tempting to suggest that the argument is the product
of the (perhaps surreptitious) influence of a misleading picture of
our epistemological access to other possible worlds. As Kripke writes
(using President Nixon as his example):
> One thinks, in this [mistaken] picture, of a possible world as if it
> were like a foreign country. One looks upon it as an observer. Maybe
> Nixon has moved to the other country and maybe he hasn't, but
> one is given only qualities. One can observe all his qualities, but,
> of course, one doesn't observe that someone is Nixon. One
> observes that something has red hair (or green or yellow) but not
> whether something is Nixon. So we had better have a way of telling in
> terms of properties when we run into the same thing as we saw before;
> we had better have a way of telling, when we come across one of these
> other possible worlds, who was Nixon. (1980, 43)
(It is possible, though, that in this passage Kripke's principal
target is not a mistaken conception of our epistemological access to
other possible worlds, but what he takes to be a mistaken
('foreign country') conception of their nature: a
conception that (when divorced from the fanciful epistemology) would
be entirely appropriate for a Lewisian realist about worlds.)
### 3.2 Against the 'security of reference' assumption
It might be suggested that the point of a criterion of transworld
identity is that its possession would enable me to tell which
individual I am referring to when I say (for example) 'There is
a possible world in which Russell is a playwright'. Suppose that
I am asked: 'How do you know that the individual you are talking
about--this playwright in another possible world--is
Bertrand Russell rather than, say, G. E. Moore, or Marlene Dietrich,
or perhaps someone who is also a playwright in the actual world, such
as Tennessee Williams or Aphra Behn? Don't you need to be able
to supply a criterion of transworld identity in order to secure your
reference to Russell?' (Cf. Plantinga 1974, 94-97; Kripke
1980, 44-47.) It seems clear that the right answer to this
question is 'no'. As Kripke has insisted, it seems
spurious to suggest that the question how we know which individual we
are referring to when we make such a claim can be answered only by
invoking a criterion of transworld identity. For it seems that we can
simply *stipulate* that the individual in question is Bertrand
Russell (Kripke 1980, 44).
Similarly, perhaps, if I say that there is some past time at which
Angela Merkel is a baby, and am asked 'How do you know that
it's the infant *Angela Merkel* that you are talking
about, rather than some other infant?', an apparently adequate
reply is that I am *stipulating* that the past state of affairs
I am talking about is one that concerns *Merkel* (and not some
other individual). It seems that I can adequately answer the parallel
question in the modal case by saying that I am *stipulating*
that, when I say that there is some possible world in which Russell is
a playwright, the relevant individual in the possible world (if there
is one) is *Russell* (and not some other potential or actual
playwright).
### 3.3 Against the 'intelligibility' assumption
A third job for a criterion of transworld identity might be this: in
order to *understand* the claim that there is some possible
world in which George Eliot is a scientist, perhaps we must be able to
give an informative answer to the question 'What would it take
for a scientist in another possible world to be identical with
Eliot?' Again, however, it can be argued that this demand is
illegitimate, at least if what is demanded is that one be able to
specify a set of properties whose possession, in another possible
world, by an individual in that world, is non-trivially necessary and
sufficient for being George Eliot (cf. Plantinga 1973; Plantinga 1974,
94-97; van Inwagen 1985).
For one thing, we may point to the fact that it is doubtful that, in
order to *understand* the claim that there is some past time at
which Angela Merkel is a baby, we have to be able to answer the
question 'What does it take for an infant at some past time to
be identical with Merkel?' in any informative way. Secondly, it
may be proposed that in order to understand the claim that there is
some possible world in which George Eliot is a scientist, we can rely
on our prior understanding of the claim that she might have been a
scientist (cf. Kripke 1980, 48, note 15; van Inwagen 1985).
### 3.4 Remaining issues
However, even if all three of these assumptions can be dismissed as
bad, or at least inadequate, reasons for supposing that transworld
identity requires criteria of transworld identity (and hence for
supposing that there is a problem of transworld identity), it does not
follow that there are no *good* reasons for this supposition.
In particular, even if the three assumptions are discredited, a fourth
claim may survive:
(4)
There must *be* a criterion of transworld identity for
Russell (in the sense of a set of properties whose possession by an
object in any possible world is non-trivially necessary and sufficient
for being Russell) if the claim that there is a possible world in
which Russell is a playwright is to be true. (Such a set of properties
would be what is called a non-trivial *individual essence* of
Russell, where an individual essence of an individual *A* is a
property, or set of properties, whose possession by an individual in
any possible world is both necessary and sufficient for identity with
*A*.)
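The notion of an individual essence invoked in (4) can be rendered schematically as follows (a standard formulation; the requirement that the essence be *non-trivial*, i.e., not a property such as *being identical with Russell*, is left informal):

```latex
% E is an individual essence of A just in case, necessarily,
% a thing possesses E if and only if it is (identical with) A:
\mathrm{IndEss}(E, A) \;\equiv\; \Box\,\forall x\,(Ex \leftrightarrow x = A)
```

The left-to-right direction of the embedded biconditional makes possession of \(E\) sufficient for identity with \(A\); the right-to-left direction makes it necessary.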
That this possibility is left open by the arguments so far considered
is suggested by at least two points. The first concerns the analogy
drawn above between transworld identity and identity through time.
Even if we can understand the claim that there is some past time at
which Angela Merkel is a baby without being able to specify
informative (non-trivial) necessary and sufficient conditions for the
identity of the adult Angela Merkel with some previously existing
infant, it does not follow that there *are* no such necessary
and sufficient conditions. And many philosophers have supposed that
there are such conditions for personal identity over time. Secondly,
the fact that one may be able to ensure, by stipulation, that one is
talking about a possible world in which *Bertrand Russell* (and
not someone else) is a playwright (if there is such a world) does not
imply that, when making this stipulation, one is not
*implicitly* stipulating that this individual satisfies, in
that world, conditions non-trivially necessary and sufficient for
being Russell, even if one is not in a position to say what these
conditions are.
This second point is an extension of the observation that, if (as most
philosophers believe) Bertrand Russell has some essential properties
(properties that he has in all possible worlds in which he exists), to
stipulate that one is talking about a possible world in which Russell
is a playwright is, at least implicitly, to stipulate that the
possible world is one in which someone with Russell's essential
properties is a playwright. For example, according to Kripke's
'necessity of origin' thesis, human beings have their
parents essentially (Kripke 1980). If this is correct, then, when we
say 'There is a possible world in which Russell is a
playwright', it seems that, if our stipulation is to be
coherent, we must be at least implicitly stipulating that the possible
world is one in which *someone with Russell's actual
parents* is a playwright, even if the identity of Russell's
parents is unknown to us, and even though we are (obviously) in no
position to conduct an empirical investigation into the ancestry, in
the possible world, of the individuals who exist there. Thus, it
seems, even if Kripke is right in insisting that we need not be able
to *specify* non-trivial necessary and sufficient conditions
for being Russell in another possible world if we are legitimately to
claim that there are possible worlds in which he is a playwright, it
might nevertheless be the case that there *are* such necessary
and sufficient conditions (cf. Kripke 1980, 46-47 and 18, note
17; Lewis 1986, 222).
But what positive reasons are there for holding that transworld
identities require non-trivial necessary and sufficient conditions
(non-trivial individual essences), if arguments that are based on the
epistemological, security of reference, and intelligibility
assumptions are abandoned? (Similar issues arise for the transworld
identities of properties, discussed in the supplement on
transworld identity of properties.)
## 4. Individual essences and bare identities
The principal argument for this view--that transworld identities
require non-trivial individual essences--is that such essences
are needed in order to avoid what have been called 'bare
identities' across possible worlds. And some regard bare
identities as too high a price to pay for the characterization of
*de re* modal statements in terms of transworld identity. If
they are right, and if (as many philosophers believe) there are no
plausible candidates for non-trivial individual essences (at least for
such things as people, cats, trees, and tables) there is, indeed, a
serious problem about transworld identity. (The expression 'bare
identities' is taken from Forbes 1985. The notion, as used here,
is approximately the same as the notion of 'primitive
thisness' employed by Adams (1979), although Adams's
notion is that of an identity that does not supervene on qualitative
facts, rather than an identity that does not supervene on any other
facts at all.)
Suppose that we combine transworld identity with the claim (without
which the introduction of transworld identity seems pointless) that a
transworld identity can hold between *A* in
\(w\_1\) and *B* in \(w\_2\) even
though the properties that *B* has in \(w\_2\)
are somewhat different from the properties that *A* has in
\(w\_1\) (or, to put it more simply, suppose that we
combine the claim that there are transworld identities with the claim
that not all of a thing's properties are essential to it). Then,
it can be argued, unless there are non-trivial individual essences, we
are in danger of having to admit the existence of possible worlds that
differ from one another only in the identities of some of the
individuals that they contain.
### 4.1 Chisholm's Paradox and bare identities
One such argument, adapted from Chisholm 1967, goes as follows. Taking
Adam and Noah in the actual world as our examples (and pretending, for
the sake of the example, that the biblical characters are real
people), then, on the plausible assumption that not all of their
properties are essential to them, it seems that there is a possible
world in which Adam is a little more like the actual Noah than he
actually was, and Noah a little more like the actual Adam than he
actually was. But if there is such a world, then it seems that there
should be a further world in which Adam is yet more like the actual
Noah, and Noah yet more like the actual Adam. Proceeding in this way,
it looks as if we may arrive ultimately at a possible world that is
exactly like the actual world, except that Adam and Noah have
'switched roles' (plus any further differences that follow
logically from this, such as the fact that in the
'role-switching' world Eve is the consort of a man who
plays the Adam role, but is in fact Noah). But if this can happen with
Adam and Noah, then it seems that it could happen with any two actual
individuals. For example, it looks as if there will be a possible
world that is a duplicate of the actual world except for the fact that
in this world *you* play the role that Queen Victoria plays in
the actual world, and *she* plays the role that you play in the
actual world (cf. Chisholm 1967, p. 83 in 1979). But this may seem
intolerable. Is it really the case that Queen Victoria could have had
all *your* actual properties (except for identity with you)
while you had all of hers (except for identity with her)?
However, if one thinks that such conclusions are intolerable, how are
they to be avoided? The obvious answer is that what is needed, in the
Adam-Noah case, is that the roles played by Adam and Noah in the
actual world include some properties that are essential to their
bearers' being Adam and Noah respectively: that Adam and Noah
differ non-trivially in their *essential* properties as well as
in their accidental properties; more precisely, that Adam has some
essential property that Noah essentially lacks, or vice versa. For if
'the Adam role' includes some property that Noah
essentially lacks, then, of course, there is no possible world in
which Noah has that property, in which case the Adam role (in all its
detail) is not a possible role for Noah, and the danger of a
role-switching world such as the one described above is avoided.
The supposition that Adam and Noah differ in their essential
properties in this way, although sufficient to block the generation of
this example of a role-switching world, does not by itself imply that
each of Adam and Noah has an *individual essence:* a set of
essential properties whose possession is (not only necessary but also)
sufficient for being Adam or Noah. Suppose that Adam has, as one of
his essential properties, living in the Garden of Eden, whereas Noah
essentially lacks this property. This will block the possibility of
Noah's playing the Adam role, although it does not, by itself,
imply that *nothing other than Adam* could play that role.
However, when we reflect on the potential generality of the argument,
it appears that, if we are to block all cases of role-switching
concerning actual individuals, we must suppose that every actual
individual has some essential property (or set of essential
properties) that every other actual individual essentially lacks. For
example, to block all cases of role-switching concerning Adam and
other actual individuals, there must be some component of 'the
Adam role' that is not only essential to being Adam, but also
cannot be played, in any possible world, by any actual individual
other than Adam.
Even if we suppose that all actual individuals are distinguished from
one another by such 'distinctive' essential properties,
this still does not, strictly speaking, imply that they have
individual essences. For example, it does not rule out the existence
of a possible world that is exactly like the actual world except that,
in this possible world, the Adam role is played, not by Adam, but by
some *merely possible* individual (distinct from all actual
individuals). However, if we find intolerable the idea that there are
such possible worlds--worlds that, like the role-switching world,
differ from the actual world only in the identities of some of the
individuals that they contain--then, it seems, we must suppose
that individuals like Adam (and Noah and you) have (non-trivial)
individual essences, where an individual essence of Adam is (by
definition) some property (or set of properties) that is both
essential to being Adam and also such that it is not possessed, in any
possible world, by any individual other than Adam--i.e., an
essential property (or set of properties) that guarantees that its
possessor is Adam and no one else.
Chisholm (1967) arrives at his role-switching world by a series of
steps. Thus his argument appears to rely on the combination of the
transitivity of identity (across possible worlds) with the assumption
that a succession of small changes can add up to a big change. And
'Chisholm's Paradox' (as it is called) is sometimes
regarded as relying crucially on these assumptions, suggesting that it
has the form of a *sorites* paradox (the type of paradox that
generates, from apparently impeccable assumptions, such absurd
conclusions as that a man with a million hairs on his head is bald).
(See, for example, Forbes 1985, Ch. 7.)
However, there are versions of the role-switching argument that do not
rely on the cumulative effect of a series of small changes. Suppose we
assume that Adam and Noah do not differ from one another in their
essential properties; in other words, that all the differences between
them are accidental (i.e., contingent) differences. It seems
immediately to follow that any way that Adam could have been is a way
that Noah could have been, and vice versa. But one way that Adam could
have been is the way Adam actually is, and one way that Noah could
have been is the way Noah actually is. So (if Adam and Noah do not
differ in their essential properties) it seems that there is a
possible world in which Adam plays the Noah role, and a possible world
in which Noah plays the Adam role. But there is no obvious reason why
a world in which Adam plays the Noah role and a world in which Noah
plays the Adam role shouldn't be the very same world. And in
that case there is a possible world in which Adam and Noah have
swapped their roles. This argument for the generation of a
role-switching world does not rely on a series of small changes: all
that it requires is the assumption that there is no essential
difference between Noah and Adam: or, to put it another way, that any
essential property of Noah is also an essential property of Adam, and
vice versa. (See Mackie 2006, Ch. 2; also Adams 1979; cf. Dorr,
Hawthorne, and Yli-Vakkuri 2021.)
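The shape of this one-step argument may be sketched as follows (with \(a\) for Adam, \(n\) for Noah, and \(R\) ranging over total qualitative roles; a rough schematic of the reasoning, not a formalization the cited authors give):

```latex
% Assumption: Adam and Noah share all their essential properties:
\forall P\,\bigl(\mathrm{Ess}(a,P) \leftrightarrow \mathrm{Ess}(n,P)\bigr)
% The argument's key step: any total role possible for the one
% is then possible for the other:
\forall R\,\bigl(\Diamond R(a) \leftrightarrow \Diamond R(n)\bigr)
% Applied to their actual roles, this yields a possible world in
% which a plays n's actual role and n plays a's actual role.
```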
### 4.2 Forbes on individual essences and bare identities
Another type of argument for the conclusion that, unless things have
non-trivial individual essences, there will be 'bare'
transworld identities (identities that do not supervene on, and are
not grounded in, other facts) is presented by Graeme Forbes. (Strictly
speaking, Forbes is concerned to avoid identities that are not
grounded in what he calls 'intrinsic' properties.) A
sketch of a type of argument used by Forbes is this. (What follows is
based on Forbes 1985, Ch. 6; see also Mackie 2006, Ch. 3.) Suppose (as
is surely plausible) that an actually existing oak tree could have
been different in some respects from the way that it is; suppose also
that, even if it has some essential properties (perhaps it is
essentially an oak tree, for example), it has no non-trivial
individual essence consisting in some set of its intrinsic properties.
Then there is the danger that there may be three possible worlds (call
them '*w*2',
'*w*3', and
'*w*4'), where in *w*2
there is an oak tree that is identical with the original tree
(*w*2 representing one way in which the tree could
have been different), and in *w*3 there is an oak
tree that is identical with the original tree (*w*3
representing another way in which the tree could have been different),
and in *w*4 there are *two* oak trees, one of
which is an intrinsic duplicate of the tree as it is in
*w*2, and the other an intrinsic duplicate of the
tree as it is in *w*3. *If* all of
*w*2, *w*3, and
*w*4 are possible, then, given that at least one of
the trees in *w*4 is not identical with the original
tree (since two things cannot be identical with one thing) there are
instances of transworld identity (and transworld distinctness)
concerning a tree in one possible world and a tree in another that are
not grounded in (do not supervene on) the intrinsic features that
those trees possess in those possible worlds. For example, suppose
that, of the two trees in *w*4, the intrinsic
duplicate of the *w*2 tree is *not* identical
with the original tree. Then, obviously, the distinctness
(non-identity) between this *w*4 tree and the tree
in *w*2 is not grounded in the intrinsic features
that the trees have in *w*2 and
*w*4--and nor is the identity between the tree
in *w*2 and the original tree grounded in the
intrinsic features that the tree has in *w*2 and in
the actual world.
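The structure of the reduplication argument can be set out schematically (a sketch, using the world-labels from the text):

```latex
% Schematic of Forbes's reduplication argument (labels as in the text)
\begin{align*}
w_2 &: \text{a tree } T_2 = \text{the original tree } T
       && (\text{one way } T \text{ could have been})\\
w_3 &: \text{a tree } T_3 = T
       && (\text{another way } T \text{ could have been})\\
w_4 &: \text{trees } T_2' \text{ and } T_3',\ \text{intrinsic duplicates of }
       T_2 \text{ and } T_3 \text{ respectively}
\end{align*}
% Since T_2' \neq T_3', at most one of them is identical with T.
% Whichever way the identities fall in w_4, they are not fixed by the
% trees' intrinsic features, which are (by construction) duplicated:
% the identities are "bare".
```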
Forbes argues that, in order to avoid this consequence (and similar ones),
we should suppose that (contrary to the second assumption used in
setting up the 'reduplication argument' sketched above)
the oak tree *does* have a non-trivial individual essence
consisting in some of its intrinsic properties, and his favoured
candidate for its essence is one that includes the tree's coming
from the particular acorn from which it actually originated. If the
tree does have such an 'intrinsic' individual essence,
then, *if* *w*2 and *w*3
are both possible, each of them must contain a tree that has (in that
world) intrinsic properties that are guaranteed to be
*sufficient* for identity with the original tree, in which case
(as a matter of logic) there can be no world such as
*w*4 that contains intrinsic duplicates of both of
them. (See Forbes 1985, Ch. 6, and, for discussion, Mackie 1987;
Mackie 2006; Robertson 1998; Yablo 1988; Chihara 1998; Della Rocca
1996; further discussions by Forbes include his 1986, 1994, and
2002.)
Finally, it is obvious that the structure of Forbes's argument
has nothing to do with the fact that the chosen example is a tree.
Forbes's 'reduplication argument' therefore appears
to pose a general problem for the characterization of *de re*
modal statements about individuals in terms of transworld identity:
either we must admit that their transworld identities can be
'bare', or we must find non-trivial individual essences,
based on their intrinsic properties, that can ground their identities
across possible worlds.
### 4.3 Transworld identity and conditions for identity over time
So far it has been assumed that (non-trivial) necessary and sufficient
conditions for transworld identity with a given object would involve
the possession, by that object, of an individual essence: a set of
properties that it carries with it in every possible world in which it
exists. But one might wonder why we should make this assumption. Those
who believe that there are (non-trivial) necessary and sufficient
conditions for identity over time need not, and almost universally do
not, believe that these conditions consist in the possession, by an
object, of some 'omnitemporal core' (to use a phrase
suggested by Harold Noonan) that it has at every time in its
existence. So why should things be different in the modal case?
The obvious answer seems to be this. In the case of identity over
time, we can appeal to relations (other than mere similarity) between
the states of an individual at different times in its existence. For
example, it looks as if we can say that the adult Russell is identical
with the infant Russell in virtue of the existence of certain
spatiotemporal and causal continuities between his infant state in
1873 and his adult state in (say) 1950 that are characteristic of the
continued existence of a human being. But no such relations of
continuity are available to ground identities across possible worlds
(cf. Quine 1976).
However, on reflection, it may seem that this is too swift. If we
suppose that any possible history for Russell is a *possible*
spatiotemporal and causal extension of the state that he was actually
in at some time in his existence, then perhaps we may appeal to the
same continuities that ground his identity over time in the actual
world in order to ground his identity across possible worlds (cf.
Brody 1980, 114-115; 121). For example, perhaps to say that
Russell could have been a playwright is to say that there was some
time in his actual existence at which he could have *become* a
playwright. If so, then perhaps we can hold that for a playwright in a
possible world to be identical with Russell is for that playwright to
have, in that world, a life that is, at some early stage, exactly the
same as Russell's actual life at some early stage, but which
develops from that point, in the spatiotemporally and causally
continuous fashion that is characteristic of the continued existence
of a human being, into the career of a playwright rather than that of
a philosopher. However, although such a conception may seem initially
attractive, it runs into difficulties if it is intended to provide
conditions that are genuinely both necessary and sufficient for the
identity of individuals across possible worlds. These difficulties
include the fact that it seems too much to demand that Russell have
*exactly* the same early history (or origin) as his actual
early history (or origin) in every possible world in which he exists.
Yet if Russell's early history could have been different in
certain respects, we face the question: 'In virtue of what is an
individual in another possible world with a slightly
*different* early history from Russell's actual early
history identical with Russell?'--a question of precisely
the type that the provision of necessary and sufficient conditions for
transworld identity was intended to answer. (For discussion of this
'branching' conception of possibilities, and its
implications for questions of transworld identity and essential
properties, see Brody 1980, Ch. 5; Mackie 1998; Mackie 2006, Chs
6-7; Coburn 1986, Section VI; McGinn 1976; Mackie 1974; Prior
1960.)
### 4.4 Responses to the problems
The fact that, in the absence of non-trivial individual essences, a
transworld identity characterization of *de re* modal
statements appears to generate bare identities (via arguments such as
Chisholm's Paradox or Forbes's reduplication argument) may
produce a variety of reactions.
The moral that Chisholm (1967) drew from his argument was scepticism
about transworld identity, based partly on scepticism about whether
the non-trivial individual essences that would block the generation of
role-switching worlds are available. Others would go further, and
conclude that such puzzles provide not only a reason for rejecting
transworld identity, but also a reason for adopting counterpart
theory. (Note, though, that Lewis's reasons for adopting
counterpart theory appear to be largely independent of such puzzles
(cf. Lewis 1986, Ch. 4).)
A third reaction is to *accept* bare identities--or, at
least, to accept that individuals (including actual individuals) may
have qualitative duplicates in other possible worlds, and that
transworld identities may involve what have been called
'haecceitistic' differences. (See Adams 1979; Mackie 2006,
Chs 2-3; Lewis 1986, Ch. 4, Section 4; also the separate entry
on Haecceitism.)
A fourth reaction, that of Forbes, is to propose a mixed solution: he
holds that for some individuals (including human beings and trees)
suitable candidates for non-trivial individual essences can be found
(by appeal to distinctive features of their origins), although for
others (including most artefacts) it may be that no suitable
candidates are available, in which case counterpart theory should be
adopted for these cases (see Forbes 1985, Chs 6-7).
Koslicki (2020) provides another mixed solution. She accepts
(primarily for Quinean reasons) that there is a problem of transworld
identity for certain individuals including people: one that can be
solved only by attributing to them non-trivial individual essences;
she argues that these consist in their individual forms. Such an
individual form would provide a substantial answer to questions such
as 'what makes an individual Noah in the "role-switching" world rather
than Adam?' Thus, it would appear, the problem of transworld identity
for individuals such as Noah and Adam is replaced by a problem of
transworld identity for their individual forms. But why is this
supposed to represent an improvement? As Fine puts it: 'Why should we
not simply take the crossworld identity of [non-form] entities as
given and not standing in any need of a criterion?' (2020, 430). One
reason (considered by Koslicki) is conceptual. It is thought that we
need to make sense of *de re* modal claims in terms of *de
dicto* modal claims. However, if individual forms are to come to
the rescue, the question arises how the individual forms are to be
distinguished from one another. The problem is acute in cases where
the individuals in question are otherwise indiscernible (Fine 2020,
432). Moreover, it is not clear that Koslicki's invocation of the fact
that individual forms--unlike, say, unanalysable haecceities--have
a 'qualitative' element (2020, Sections 3.3.4-3.5) is supposed to help
with this problem.
It is perhaps significant, though, that no theorist appears to have
argued that a 'non-trivial individual essence' solution
can be applied to *all* the relevant cases. In other words, the
consensus appears to be that the price of interpreting *all*
*de re* modal claims in terms of transworld identity (as
opposed to counterpart theory) is the acceptance of (some) bare
identities across possible worlds.
Salmon (1996) claims 'something like a proof' of this
implication, from transworld identities to bare identities. He argues
for what he calls *Extreme Haecceitism,* the view that
transworld identities cannot be grounded in general facts about the
individuals concerned. The purported proof goes as follows. We
consider *x* in world *w* and *y* in world
*w*2 and suppose, for *reductio*, that *x = y*
and that this fact is reducible to (or grounded in, or entailed by)
general facts about *x* in *w* and *y* in
*w*2. But, says Salmon, the fact that *x = x* is not
reducible (grounded, etc.) in this way, since it is a fact of logic.
So, says Salmon, '*x* differs from *y* in at least
one respect. For *x* lacks *y*'s feature that its
identity with *x* is grounded in general (cross-world) facts
about *x* and it' (1996, 216), and hence (by
Leibniz's Law), *x* ≠ *y*, contrary to assumption. So if
there are any transworld identities, those identities cannot be
grounded in general facts about those individuals. They must be
'bare'.
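Salmon's purported proof can be laid out as a *reductio* (a schematic reconstruction of the argument as reported above):

```latex
\begin{enumerate}
\item Suppose $x$ (in $w$) $= y$ (in $w_2$), and that this identity is
      grounded in general facts about $x$ and $y$.
      \hfill (assumption for \emph{reductio})
\item The fact that $x = x$ is \emph{not} so grounded: it is a fact of logic.
\item So $y$ has a feature that $x$ lacks: that its identity with $x$ is
      grounded in general (cross-world) facts.
\item By Leibniz's Law, $x \neq y$, contradicting (1).
\end{enumerate}
% Conclusion: if x = y across worlds, that identity is not grounded in
% general facts about x and y -- it is 'bare'.
```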
Salmon's argument is a variant on Evans's (1978) argument
against the possibility of vague objects. It is controversial in
general whether arguments of this form succeed, for applications of
Leibniz's Law in intensional contexts are questionable.
Catterson (2008) argues on independent grounds that Salmon's
argument is not sound. We should also note that, even if the argument
is successful, it does not directly establish Salmon's Extreme
Haecceitism, but only the conditional thesis that *if* there
are any true transworld identity statements *x = y*, then they
are not reducible to general facts about *x* and
*y*.
### 4.5 Leibniz and hyper-essentialism
Finally, it can be noted that the problems concerning transworld
identity discussed here arise only because it is assumed that not all
of an individual's properties are essential to it (and hence
that, if it exists in more than one possible world, it has different
properties in different worlds). If, instead, one were to hold that
*all* of an individual's properties are essential
properties--and hence, for example, that George Eliot could not
have existed with properties in any way different from her actual
ones--then no such problems would arise. Moreover, this
suggestion, implausible though it may be, is of some historical
interest. For, according to a standard interpretation of the views of
Gottfried Leibniz, the philosopher who is the father of theories of
possible worlds, Leibniz's theory of 'complete individual
notions' commits him to the thesis that an individual such as
George Eliot *does* have all her properties essentially (cf.
Leibniz, *Discourse on Metaphysics* (1687), Sections 8 and 13;
printed in Leibniz 1973 and elsewhere). According to the
'hyper-essentialist' view to which Leibniz appears to be
committed, any individual, in any possible world, whose properties in
that world differ from the actual properties of George Eliot is not,
strictly speaking, identical with Eliot. However, it also seems clear
that this does not represent a way of saving a transworld identity
interpretation of *de re* modality. On the contrary: if there
is no possible world in which George Eliot exists with properties
different from her actual properties, then it is plausible to conclude
that there is no possible world, other than the actual world, in which
she exists at all. For unless possible worlds can be exact duplicates
(something that Leibniz himself would deny), any merely possible world
must differ from the actual world in some respect. If so, then the
properties of any individual in another possible world must differ in
*some* respect from the actual properties of Eliot (even if the
difference is only a difference in relational properties), in which
case, if all Eliot's properties are essential to her, that
individual is not Eliot. (Leibniz's views may, however, be seen
as a partial anticipation of counterpart theory, which attempts to
save the truth of the claim that George Eliot *could* have been
different in some respects (thus denying
'hyper-essentialism') while preserving the metaphysical
thesis that no individual who is, strictly speaking, identical with
Eliot exists in any other possible world (cf. Kripke 1980, 45, note
13).)
### 4.6 Haecceities and haecceitism
The view that an individual's transworld identity is
'bare' is sometimes described as the view that its
identity consists in its possession of a 'haecceity' or
'thisness': an unanalysable non-qualitative property that
is necessary and sufficient for its being the individual that it is.
(The term 'individual essence' is sometimes used to denote
such a haecceity. It should be noted that according to the terminology
used in this article, although a haecceity would be an individual
essence, it would not be a *non-trivial* individual essence.)
However, it is not obvious that the belief in bare identities requires
the acceptance of haecceities. One can apparently hold that transworld
identities may be 'bare' without holding that they are
constituted by any properties at all, even unanalysable haecceities
(cf. Lewis 1986, 225; Adams 1979, 6-7). Thus we should
distinguish what is standardly known as 'haecceitism'
(roughly, the view that there may be bare identities across possible
worlds in the sense of identities that do not supervene on qualitative
properties) from the belief in haecceities (the belief that
individuals have unanalysable non-qualitative properties that
constitute their being the individuals that they are). (For more on
the use of the term 'haecceitism' see Lewis 1986, Ch. 4,
Section 4; Adams 1979; Kaplan 1975, Section IV; also the separate
entry on Haecceitism. For the history of the term
'haecceity', see the entry on Medieval Theories of
Haecceity.)
In addition, it should be noted that to believe in 'bare'
transworld identities, in the sense under discussion here, is not to
believe in 'bare particulars', if to be a bare particular
is to be an entity devoid of (non-trivial) essential properties. As
the arguments discussed in Sections 4.1-4.2 above demonstrate, a
commitment to a 'bare' (or 'ungrounded')
difference in the identities of two individuals *A* and
*B* in different possible worlds (two human beings, or two
trees, for example) does not imply that those individuals have no
non-trivial essential properties. All it implies is that *A*
and *B* do not *differ* in their non-trivial essential
properties--and hence that, although there may well be
non-trivial necessary conditions for being *A* in any possible
world, and non-trivial necessary conditions for being *B* in
any possible world, there are no non-trivial necessary conditions for
being *A* that are not also necessary conditions for being
*B*, and vice versa. (Cf. Adams's 'Moderate
Haecceitism' (1979, 24-26).)
## 5. Transworld identity and the transitivity of identity
It was argued above that the proponent of transworld identity without
non-trivial individual essences faces the prospect of bare
('ungrounded') identities across possible worlds. One such
argument is Chisholm's Paradox, which relies on the transitivity
of identity to produce the result that a series of small changes in
the properties of Adam and Noah leads to a world in which Adam and
Noah have swapped their roles. However, the transitivity of identity
generates additional problems concerning transworld identity, some of
which have nothing particularly to do with role-switching
possibilities or bare identities.
### 5.1 Chandler's transitivity argument
One such argument is given by Chandler (1976). It can be illustrated
simply as follows (adapting Chandler's own example). Suppose
that there is a bicycle originally composed of three parts: A1, B1,
and C1. (We might suppose that A1 is the frame, and B1 and C1 the two
wheels.) Suppose we think that any bicycle could have been originally
composed of any two thirds of its original parts, with a substitute
third component. We may call this (following Forbes 1985) 'the
tolerance principle'; it is a development of the intuitively
appealing thought that it is too much to demand, of an object such as
a bicycle, that it could not have existed unless *all* of its
original parts had been the same. Suppose, further, that we think that
no bicycle could have been originally composed of just one third of
its original parts, even with substitutes for the other two thirds.
Call this 'the restriction principle'. The combination of
these assumptions appears to generate a difficulty for the paraphrase
of *de re* modal claims about bicycles in terms of transworld
identity. For if there is (as the tolerance principle allows) a
possible world *w*2 in which our bicycle comes into
existence composed of parts A1 + B1 + C2, where C1 ≠ C2, then, if
we apply the tolerance principle to *this* bicycle we must say
that *it* could have come into existence (in some further
possible world *w*3) with any two thirds of
*those* parts, with a substitute third component: for example,
that it could have come into existence (in *w*3) composed of A1
+ B2 + C2, where B1 ≠ B2 and C1 ≠ C2. The bicycle in
*w*3 is, *ex hypothesi*, identical with the
bicycle in *w*2, and the bicycle in
*w*2 is, *ex hypothesi*, identical with the
original bicycle; so, by the transitivity of identity, the bicycle in
*w*3 is identical with the original bicycle. Hence
our assumptions have generated a contradiction. We have a bicycle in
*w*3, originally composed of A1 + B2 + C2, that both
is identical with the original bicycle (by the repeated application of
the tolerance principle, together with the transitivity of identity)
and is not identical with the original bicycle (by the restriction
principle).
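The chain of worlds in Chandler's argument can be displayed compactly (a sketch using the text's labels):

```latex
\begin{align*}
w_1 &: \text{bicycle } a,\ \text{originally composed of } A_1 + B_1 + C_1\\
w_2 &: \text{bicycle } = a,\ \text{composed of } A_1 + B_1 + C_2
       && (\text{tolerance, applied in } w_1)\\
w_3 &: \text{bicycle } = \text{the } w_2 \text{ bicycle},\
       \text{composed of } A_1 + B_2 + C_2
       && (\text{tolerance, applied in } w_2)
\end{align*}
% By transitivity, the w_3 bicycle = a; but it shares only one third of
% a's original parts, so by the restriction principle it is NOT
% identical with a. Contradiction.
```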
One might complain that the version of the tolerance principle cited
above is too lenient. Perhaps it is not true that the bicycle could
have come into existence with just two thirds of its original
components: perhaps a threshold of, say, 90% or more is required.
However, the simple argument given above can be adapted to generate a
contradiction between the restriction principle and any tolerance
principle that permits *some* difference in the bicycle's
original composition, simply by introducing a longer chain of possible
worlds. Thus the transitivity argument appears to force the proponent
of transworld identity to choose between two implausible claims: that
an object such as a bicycle has all of its original parts essentially
(thus denying any version of the tolerance principle) and that an
object such as a bicycle could have come into existence with few (if
any) of its original parts (thus denying any (non-trivial) version of
the restriction principle). Moreover, it is clear that the problem can
be generalized to any object to which versions of the tolerance
principle and the restriction principle concerning its original
material composition have application, which appears to include all
artefacts, if not biological organisms.
It seems legitimate to call this puzzle a 'problem of transworld
identity', for it turns partly on the transitivity of identity,
and can be avoided by interpreting claims about how bicycles could
have been different (*de re* modal claims about bicycles) in
terms of a counterpart relation that is not transitive (Chandler
1976). Thus a counterpart theorist may admit that the bicycle could
have been originally composed of A1 + B1 + C2 rather than A1 + B1 +
C1, on the grounds that (according to the tolerance principle) it has
a counterpart (in *w*2) that is originally so
composed. And the counterpart theorist may admit that a bicycle (such
as the one in *w*2) that is originally composed of
A1 + B1 + C2 could have been originally composed of A1 + B2 + C2,
since (by the tolerance principle) it has a counterpart (in
*w*3) that is originally so composed. However, since
the counterpart relation (unlike identity) is not transitive, the
counterpart theorist need *not* say that the bicycle in
*w*3 that is originally composed of A1 + B2 + C2 is
a counterpart of the bicycle in the actual world
(*w*1) originally composed of A1 + B1 + C1, for its
similarity to the bicycle in *w*1 may be
insufficient to allow it to be that bicycle's counterpart. Thus
the non-transitivity of the counterpart relation (a relation based on
resemblance) appears neatly to allow the counterpart theorist to
respect both the tolerance principle and the restriction principle,
without falling into contradiction.
### 5.2 Responses to the transitivity problem
One reaction to the transitivity puzzle is to abandon transworld
identity in favour of counterpart theory. But how--given the
structure of the puzzle--can the theorist who wishes to resist
that move, and retain transworld identity, respond?
One response would be to give up any non-trivial version of the
restriction principle, and hold that an artefact such as a bicycle
could have come into existence with an entirely different material
composition from its actual original composition. Although this
counterintuitive view has been defended (for example, Mackie (2006)
argues for it on grounds independent of the transitivity problem), it
has few adherents.
A second response would be to give up the tolerance principle, and
adopt what Roca-Royes (2016) calls an 'inflexible' version
of the principle that the material origin of an artefact is essential
to it, holding that an artefact such as a bicycle could not have come
into existence with a material composition in any way different from
its actual original composition. Although this view is admittedly
counterintuitive, Roca-Royes argues that it provides the best solution
to the 'Four Worlds Paradox' to be discussed in the next
section.
A third solution to the transitivity problem has been proposed (by
Chandler, followed by Salmon) which apparently allows us to reconcile
all three of the transitivity of identity, the tolerance principle,
and the restriction principle. This is to say that although there
*are* possible worlds (such as *w*3) in which
the bicycle is originally composed of only a small proportion of its
actual original parts, such worlds are not possible relative to (not
'accessible to') the initial world *w*1.
From the standpoint of *w*1, such an original
composition for the bicycle is only possibly possible: something that
would have been possible, had things been different in some possible
way, but is not, as things are, possible (Chandler 1976; Salmon 1979;
Salmon 1982, 238-240). Whether this solution is satisfactory is
disputed. (See, for example, Dorr, Hawthorne, and Yli-Vakkuri 2021,
Chs 7-8.) Admittedly, there are some contexts in which we talk of
possibility in a way that may suggest that the 'accessibility
relation' between possible worlds is non-transitive: that not
everything that would have been possible, had things been different in
some possible way, is possible *simpliciter*. (If Ann had
started writing her paper earlier, it would have been possible for her
to finish it today. And she *could* have started writing her
paper earlier. But, as things are, it is not possible for her to
finish it today.) Nevertheless, the idea that, as regards the type of
metaphysical possibility that is involved in puzzles such as that of
the bicycle, there might be states of affairs that are possibly
possible and yet not possible (and hence that *de re*
metaphysical possibility and necessity do not obey the system of modal
logic known as S4) is regarded with suspicion by many
philosophers.
It should be noted that the 'non-transitivity of
accessibility' response is distinct from an even more radical
response, which rejects the principle of the transitivity of
identity--a principle definitive of the classical notion of
identity. For example, Priest (2010) denies the transitivity of
identity in the context of his dialetheism about truth and a
paraconsistent logic in which the material conditional does not obey
the principle of *modus ponens*. Discussion of this extreme
position is, however, beyond the scope of this article. (On
dialetheism and paraconsistent logic, see the separate entry on
Dialetheism. On the logic of identity, see the entry on Identity.)
Finally, Dorr, Hawthorne, and Yli-Vakkuri 2021 propose that, in some
cases, the combination of (classical) transworld identity, tolerance,
and the restriction principle can all be retained, yet without denying
(as do Chandler and Salmon) the transitivity of accessibility between
possible worlds. They appeal to two principles: a metaphysical
principle of 'plenitude' and a metasemantic principle of 'semantic
plasticity' (for the relevant terms). This solution is defended by
Dorr *et al* partly by considering difficulties for rival
solutions (including that of Chandler and Salmon). The details are
complex, however: interested readers should consult Dorr *et
al* 2021. As with other proposed solutions to the puzzle, the
principles to which Dorr *et al* appeal (plenitude and semantic
plasticity for the relevant terms) are controversial.
It is fair to say that there is no consensus about how the proponent
of transworld identity should respond to the transitivity problem
posed by Chandler's example.
### 5.3 The 'Four Worlds Paradox'
Chandler's transitivity argument can be adapted to produce a
puzzle that is like those discussed in Sections 4.1-4.2 above in
that it involves the danger of 'bare identities', a puzzle
that Salmon (1982) has called 'The Four Worlds Paradox'.
To illustrate the puzzle: suppose that the actual world
(*w*1) contains a bicycle, *a*, that is
(actually) originally composed of A1 + B1 + C1, and suppose that there
is a possible world, *w*5, containing a bicycle,
*b* (not identical with *a*), that is originally
composed (in *w*5) of A2 + B2 + C1 (where A1 ≠ A2
and B1 ≠ B2). Then, it seems, the application of the tolerance
principle to each of *a* and *b* may generate two
further possible worlds, in one of which (*w*6)
there is a bicycle with the original composition A1 + B2 + C1 that is
identical with *a*, and in the other of which
(*w*7) there is a bicycle with the original
composition A1 + B2 + C1 that is identical with *b*. Since
there need apparently be no further difference between the intrinsic
features of *w*6 and *w*7 on which
this difference in identities could depend, we appear to have a case
of bare identities. This 'Four Worlds Paradox' is like
Chandler's original transitivity puzzle in that it does not seem
that an appeal to individual essences could solve it without
conflicting with the tolerance principle. If that is so, the proponent
of transworld identity (as opposed to counterpart theory) appears to
be left with two options consistent with the transitivity of identity:
the denial of the tolerance principle, and the acceptance of bare
identities. (But cf. Dorr, Hawthorne, and Yli-Vakkuri 2021.) It may be
argued, however, that the acceptance of bare identities can be made
more palatable in this case by the adoption of a non-transitive
accessibility relation between possible worlds. (See Salmon 1982,
230-252; and, for discussion, Roca-Royes 2016. For a defence of
the employment of counterpart theory to solve the Four Worlds Paradox,
see Forbes 1985, Ch. 7. For discussion of a radical response that
retains the tolerance principle and yet avoids bare identities, but
only at the cost of claiming that two bicycles could completely
coincide in one possible world, simultaneously sharing all their
parts, see Roca-Royes 2016, discussing Williamson 1990. On the
relevance of the Four Worlds Paradox to Kripke's principle of
the necessity (essentiality) of origin for artefacts, see also
Robertson 1998 and Hawthorne and Gendler 2000.)
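The four worlds can be tabulated as follows (a sketch; labels as in the text):

```latex
\begin{tabular}{lll}
\hline
World & Original composition & Identity \\
\hline
$w_1$ & $A_1 + B_1 + C_1$ & $a$ \\
$w_5$ & $A_2 + B_2 + C_1$ & $b$ \quad ($b \neq a$) \\
$w_6$ & $A_1 + B_2 + C_1$ & $= a$ \quad (tolerance, from $w_1$) \\
$w_7$ & $A_1 + B_2 + C_1$ & $= b$ \quad (tolerance, from $w_5$) \\
\hline
\end{tabular}
% w_6 and w_7 may be intrinsically just alike, yet differ in which
% bicycle they contain: a bare difference in transworld identity.
```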
## 6. Concluding remarks
### 6.1 Transworld identity and counterpart theory
One of our initial questions (Section 1 above) was whether a
commitment to transworld identity--the view that an individual
exists in more than one possible world--is an acceptable
commitment for one who believes in possible worlds. The considerations
of Sections 4-5 above suggest that this commitment does involve
genuine (although perhaps not insuperable) problems, even for one who
rejects David Lewis's extreme realism about the nature of
possible worlds. The problems do not arise *directly* from the
notion of an individual's existing in more than one possible
world with different properties. Rather, they derive principally from
the fact that it is hard to accommodate all that we want to say about
the modal properties of ordinary individuals (including all the things
we want to say about their essential and accidental properties) if
*de re* modal statements about such individuals are
characterized in terms of their existence or non-existence in other
possible worlds.
There is currently no consensus about the appropriate resolution of
these problems. In particular, there is no consensus about whether the
adoption of counterpart theory is superior to the solutions available
to a transworld identity theorist. A full examination of the issue
would require a discussion of the objections that have been raised
against counterpart theory as an interpretation of *de re*
modality. And a detailed discussion of counterpart theory is beyond
the scope of this article. (For David Lewis's presentation of
counterpart theory, the reader might start with Lewis 1973,
39-43, followed by (the more technical) Lewis 1968. Early
criticisms of Lewis's counterpart theory include those in Kripke
1980; Plantinga 1973; and Plantinga 1974, Ch. 6. Lewis develops the
1968 version of his counterpart theory in Lewis 1971 and 1986, Ch. 4;
he responds to criticisms in his "Postscripts to
'Counterpart Theory and Quantified Modal Logic'"
(1983, 39-46) and in Lewis 1986, Ch. 4. Other discussions of
counterpart theory include Hazen 1979, the relevant sections of Divers
2002, Melia 2003, and the more technical treatment in Forbes 1985.
More recent critiques of the theory (postdating Lewis's 1986
response to his critics) include Fara and Williamson 2005, Fara 2009,
and Dorr, Hawthorne, and Yli-Vakkuri 2021, Ch. 10.)
One way to argue in favour of transworld identity (distinct from the
defensive strategies discussed in Sections 4 and 5 above) is what we
might call 'the argument from logical simplicity' (Linsky
and Zalta 1994, 1996; Williamson 1998, 2000). The argument begins by
noting that Quantified Modal Logic--which combines individual
quantifiers and modal operators--is greatly simplified when one
accepts the validity of the Barcan scheme,
∀*x*□*A* → □∀*x*A (Marcus 1946). The resulting logic
is sound and complete with respect to *constant domain
semantics*, in which each possible world has precisely the same
set of individuals in its domain. The simplest philosophical
interpretation of this semantics is that one and the same individual
exists at every possible world.
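To see why constant domains validate the scheme, here is a standard semantic sketch (not specific to any one author's formulation):

```latex
% Why constant domains validate the Barcan scheme
Suppose every world has the same domain $D$, and $\forall x \Box A$ holds
at $w$. Then for each $d \in D$ and each world $w'$ (accessible from $w$),
$A$ holds of $d$ at $w'$. Fixing $w'$ and letting $d$ range over $D$
(which is also the domain of $w'$), $\forall x A$ holds at $w'$; since
$w'$ was arbitrary, $\Box \forall x A$ holds at $w$. With varying domains
the step fails: some $d' \in D_{w'}$ may lie outside $D_w$, and so is not
covered by $\forall x \Box A$ as evaluated at $w$.
```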
Several remarks on this argument are in order. First, its conclusion
is very strong: it says that any entity that in fact exists or that
could have existed exists *necessarily*. There is no contingent
existence. This goes far beyond the claim that there are genuine
identities across worlds. (Williamson (2002) defends this conclusion
on independent grounds.) Second, the argument does not offer an
explanation of how transworld identities are possible; it insists only
that there are genuine transworld identities. (Nevertheless, the
metaphysical picture that can most naturally be 'read off'
the constant-domain semantics treats properties-at-a-world as
relations between particulars and worlds, as on McDaniel's
*modal realism with overlap* (McDaniel 2004), discussed in
Section 1.2 above.)
Third, the argument is not best understood as the claim that, if one
does not accept transworld identities, then one is *forced*
into denying the Barcan scheme (and hence forced into uncomfortable
logical territory). That claim would be true only if the Barcan scheme
were validated *only* by constant-domain semantics, which is
not the case. Counterpart-theoretic semantics can be restricted so as
to validate the Barcan scheme, by insisting that the counterpart
relation is an equivalence relation which, for each particular
*x* and world *w*, relates *x* to a unique
particular in *w*. (One could not then interpret the
counterpart relation in terms of similarity, as Lewis does.) Rather,
the argument should be understood as the claim that the best way to
gain the advantages of a logic containing the Barcan scheme is by
adopting constant-domain semantics (and genuine transworld identities
along with it). But just which metaphysical view counts as
'best' here will involve a trade-off between many factors.
These include the simplicity of the constant-domain semantics, on the
one hand, but also arguments of the kind raised by Lewis against modal
realism with overlap, on the other.
### 6.2 Lewis on transworld identity and 'existence according to a world'
Finally, we can note that Lewis (1986) has presented a challenge to
the self-styled champions of 'transworld identity' to
explain *why* the view that they insist on deserves to be
called a commitment to transworld identity at all.
Throughout this article, it has been assumed that a commitment to
transworld identity may be differentiated from a commitment to
counterpart theory on the grounds that the transworld identity
theorist accepts, while the counterpart theorist denies, that an
object *exists in* more than one possible world (cf. Section
1.2 above). However, as Lewis points out, there is a notion of
'existence *according to* a (possible) world' that
is completely neutral as between a counterpart-theoretic and a
'transworld identity' interpretation. In terms of this
neutral conception, as long as the counterpart theorist and transworld
identity theorist agree that Bertrand Russell could have been a
playwright instead of a philosopher, they must agree that Russell
exists *according to* more than one world. In particular, they
must agree that, *according to our world*, he exists and is a
philosopher, and *according to some other worlds*, he exists
and is a non-philosopher playwright (cf. Lewis 1986, 194). The
difference between the theorists, then, allegedly consists in their
different interpretations of what it is for Russell to exist
'according to' a world. In the view of the counterpart
theorist, for Russell to exist according to a possible world in which
he is a playwright is for him to have a *counterpart* in that
world who is (in that world) a playwright. According to the transworld
identity theorist, it is supposed to be for Russell (himself) to
*exist in* that world as a playwright.
If the transworld identity theorist were a Lewisian realist about
possible worlds, this notion of existence *in* a world could be
clearly distinguished from the neutral notion of existence
*according to* a world, on the grounds that the existence of
Russell in a world would require his complete or partial presence as a
part of such a world (cf. Section 1.2 above). But, as Lewis notes, the
self-styled champions of 'transworld identity' who oppose
his counterpart theory are philosophers who repudiate a Lewisian
realist conception of what it takes for Russell to exist in more than
one possible world. Hence, he argues, there is a question about their
entitlement to claim that, according to their theory, Russell exists
*in* other possible worlds in any sense that goes beyond the
neutral thesis (compatible with counterpart theory) that Russell
exists *according to* other worlds. Thus Lewis writes (using
the 1968 US presidential candidate Hubert Humphrey as his
example):
The philosophers' chorus on behalf of 'trans-world
identity' is merely insisting that, for instance, it is Humphrey
himself who might have existed under other conditions, ... who
might have won the presidency, who exists according to many worlds and
wins according to some of them. All that is uncontroversial. The
controversial question is *how* he manages to have these modal
properties. (1986, 198)
A natural reaction to Lewis's challenge is to point out that a
proponent of transworld identity who is not a Lewisian realist will
typically reject Lewis's counterpart theory on the grounds that
his counterpart relation does not have the logic of identity. If so,
then (*pace* Lewis) it is not the case, strictly speaking, that
the 'philosophers' chorus on behalf of "trans-world
identity"' is merely insisting on the neutral claim that
objects exist according to more than one world. However, even if this
is correct, it does not answer a further potential challenge. Suppose,
as seems plausible, that there could be a counterpart relation that
(unlike the one proposed by Lewis himself) is an equivalence relation
(transitive, symmetric, and reflexive), and 'one-one between
worlds'. What would distinguish, *in the case of a theorist
who is not a Lewisian realist about possible worlds*, between, on
the one hand, a commitment to the interpretation of *de re*
modal statements in terms of such an 'identity-resembling'
counterpart relation, and, on the other hand, a commitment to genuine
transworld identity (and hence to the view that an individual
genuinely *exists in* a number of distinct possible worlds)?
The aficionado of transworld identity owes Lewis a reply to this
challenge.

## 1. Kantian Roots
Kant sets forth several formulations of the categorical imperative,
that is, the principle he holds to be the supreme principle of
morality. One formulation, often called the "Formula of
Humanity" states:
>
>
> So act that you treat humanity, whether in your own person or in the
> person of any other, always at the same time as an end, never merely
> as a means. (Kant 1785: 429, italics
> removed)[2]
>
>
>
The Formula of Humanity contains the command that we ought never to
treat persons merely as means.[3] A few points regarding this command
are helpful to keep in view. First, Kant holds that if a person treats
someone merely as a means, then she acts wrongly. The Formula of
Humanity encompasses an absolute constraint against treating persons
merely as means. Second, Kant does not hold that if in acting a person
refrains from treating anyone merely as a means, then she acts rightly
(Kant 1797: 395). A person can, for example, act wrongly in
Kant's view by expressing contempt for another, even if she is
not using him at all (Kant 1797: 462-464). She would be acting
wrongly by failing to treat the other as an end in himself, rather
than by treating him merely as a means. A third related point is that,
according to Kant, it is both a necessary and a sufficient condition
for one's treating persons in a morally permissible way that one
treat them as ends in themselves (Hill 1992: 41-42). Some
Kantians, especially those engaged primarily in interpreting rather
than reconstructing his views, thus understandably hold that
understanding his account of treating persons as ends in themselves is
far more important than understanding his position on treating persons
merely as means (Wood 1999: 143). Fourth, Kant holds that a person
can treat herself merely as a means. If a person acts contrary to
certain "perfect duties to oneself" (Kant 1797: 421),
including her duty not to kill herself (422-423), not to defile
herself by lust (424-425), and not to lie (429-430), then
she treats herself merely as a means, thereby contravening the Formula
of Humanity. It is difficult to discern how, according to Kant, one
treats oneself merely as a means in violating these duties (Kerstein
2008; Timmons 2017: Ch. 7). How, for example, does a person's
lying to another amount to her treating herself merely as a means?
In any case, this article's focus is on treating others merely
as means. Doing that is widely discussed as a possible violation of a
moral constraint. More specifically, the article explores when someone
uses another and either treats or refrains from treating the other
merely as a means. It concentrates on concepts that seem to have roots
in Kant's work, but that are familiar from ordinary moral
discourse.[3]
Kant himself devotes little discussion to clarifying the notion of
treating others merely as
means.[4]
Yet as is apparent below, some of his remarks have been a springboard
for detailed accounts of the notion.
One salient issue outside the article's purview is that of the
scope of "others" in the prescription not to treat others
merely as means. Does Kant embrace among these others all genetically
human beings, including human embryos and individuals in a persistent
vegetative state, or does he limit the others to beings who have
certain capacities, for example, that to set and rationally pursue
ends (Kain 2009; Sussman 2001)? How should Kant or other theorists set
the scope of those whom we ought not to treat merely as means? Should
we include some non-human animals (e.g., chimpanzees or dolphins)
among them? How should we determine to whom to grant this moral
status?
## 2. Using Another
In order to treat another *merely* as a means or *just*
use him, an agent must use the other or treat him as a means. But when
does someone count as doing that? As noted, using others or treating
them as means is often morally permissible. In everyday discourse,
expressions such as "She used me" can mean she
*just* used me, or treated me *merely* as a means and so
can imply a negative evaluation of action or attitude. But for our
purposes talk of a person using another or treating him as a means
implies no such moral judgment.
Accounts of treating others merely as means sometimes leave implicit
the notions of using another they rely on (Kleingeld 2020). Some
points regarding what using another does or does not amount to seem
uncontroversial. It does not seem sufficient for us to count as using
another as a means that we benefit from what the other has done
(Nozick 1974: 31-32). If, on her usual route, a runner derives
enjoyment from the singing of a stranger who happens to be walking by,
she does not appear to be treating the stranger as a means. Moreover,
not all cases of an agent's intentionally doing something in
response to another are cases of her using the other as a means. If
someone frowns at another approaching, for example, he might not
thereby be using the other at all; he might simply be expressing that
the other is unwelcome.
However, inquiry reveals challenges in specifying what using another
amounts to. We might say that an agent uses another or, equivalently,
treats her as a means just in case the agent intentionally does
something to the other in order to secure or as a part of securing one
of his ends (Kerstein 2009: 166). For example, a passenger uses a bus
driver if he boards her bus in order to get across town; a wife treats
her husband as a means if she lies to him so that his birthday party
will be a surprise; and a victim treats a mugger as a means if she
punches him in order to escape from his grasp. In contrast, a pilot
who drops bombs solely in order to kill enemy combatants might foresee
that innocent bystanders will be harmed. Yet if he does not
intentionally do anything to the bystanders, then he does not treat
them as means, according to this
account.[5]
But does the account count too much as treating another as a means?
Suppose that an usher at a concert is trying to prevent a small child
from falling through a railing on a balcony. She pushes a spectator out
of the way to get to the child. The specification we are considering
implies that the usher has used the spectator as a means; for she has
intentionally done something to her (i.e., pushed her aside) in order
to attain an end (i.e., to get to the child). Some might say that the
usher has treated the spectator in some way, namely, as an obstacle to
be displaced. Yet she has not used the spectator.
In order for an agent to count as using another, it is not enough that
she do something to the other in order to realize some end of hers,
some have suggested. She must also intend the presence or
participation of some aspect of the other to contribute to the
end's realization (Scanlon 2008: 106-107; Guerrero 2016:
779). The usher does not intend the spectator's presence or
participation to play any role in her preventing the child from
falling. She thinks of her simply as "in the way". On one
account, an agent uses another (or, equivalently, uses or treats
another as a means) if and only if she intentionally does something to
or with the other in order to realize her end, and she intends the
presence or participation of the other to contribute to the
end's realization (Kerstein 2013: 58). On this account, an agent
can count as using another when she is striving to benefit him. For
example, a physician giving a patient a treatment to save his life is
using the patient. Some find this implication of the account
implausible (Parfit 2011:
222).[6]
Others do not, pointing to cases such as that of a physician using a
patient in a study of a new drug in order to ameliorate the
patient's condition.
In any case, consistent with this account, an agent might use another
through using the other's rational, emotional, or physical
capacities. A tourist might ask someone for directions, using the
other's knowledge to get to his destination; a politician might
use his constituents' fear of crime to gain their support for
more spending on law enforcement; a doctor might use a vein from a
patient's leg to repair her heart. One important question left
unanswered by this and other accounts of treating another as a means
is one of scope. For example, does an agent use another if he uses
biospecimens (e.g., cells) or information (e.g., concerning social
media activity) derived from the other? If so, then the scope of a
constraint on treating others merely as means would extend to the
practices of biobanks and technology companies.
## 3. Sufficient Conditions for Using Others Merely as Means
Much debate concerning what it means to treat others merely as means
stems from a single passage in the *Groundwork of the Metaphysics
of Morals*. Kant is attempting to demonstrate that the Formula of
Humanity generates a duty not to make false promises:
>
>
> He who has it in mind to make a false promise to others sees at once
> that he wants to make use of another human being *merely as a
> means*, without the other at the same time containing in himself
> the end. For, he whom I want to use for my purposes by such a promise
> cannot possibly agree to my way of behaving toward him, and so himself
> contain the end of this action. (1785: 429-430)
>
>
>
In these brief remarks, Kant hints at various ways in which we might
understand conditions for treating another merely as a means. We might
understand them in terms of the other's inability to share the
agent's end in using him or to consent to her using him, for
example. In this section, we discuss elaborations of these ways (and
others) of formulating sufficient conditions for someone who is using
another to be treating this other merely as a means.
### 3.1 End Sharing
On the basis of Kant's remarks, we might claim that if another
cannot "contain the end" of an agent's action, that
is, share the end the agent is pursuing in using her, then the agent
treats the other merely as a means. Two agents presumably share an end
just in case they are both trying, or have chosen to try, to realize
this end. But what, precisely, does it mean to say that two agents
*cannot* share an end? Returning to the example at hand, what
does it mean to say that the promisee cannot share the
promisor's end? From the outset, it is important to specify
precisely which of the promisor's ends the promisee cannot
share. It is presumably the promisor's end of getting money from
the promisee without ever paying it back. The promisor's
*ultimate* end might be one that the two can share (e.g., that
of curing cancer). What sense of 'cannot' would be
plausible to invoke in maintaining that a promisee cannot share a
false promisor's end?
#### 3.1.1 Logical impossibility of end sharing
According to one interpretation of Kant, the promisee cannot share the
promisor's end in that it is logically impossible for him to do
so (Hill 2002: 69-70). Suppose the promisor, a borrower, has the
end of getting money from the promisee, a lender, without ever paying
it back. At the time he makes a loan on the basis of this promise, the
lender cannot himself share the end of the borrower's getting
the money from him without ever paying it back, goes this reading. If
the lender shared the borrower's end, then he would not really
be making a loan. For according to our practice, it belongs to the
very concept of making a loan, as opposed, say, to giving money away,
that one believe that what one disburses will be repaid.
This interpretation of the false promising case leads naturally to the
view that a sufficient condition for an agent's treating another
merely as a means is that it is logically impossible for the other to
share the end the agent is pursuing in using her in some way. However,
this proposed sufficient condition might fail to register as treating
others merely as means paradigmatic cases of doing so (Kerstein 2009:
167-168). Take, for example, a loiterer who threatens an
innocent passerby with a gun in order to get $100. It would presumably
be good if a sufficient condition for treating another merely as a
means yielded the conclusion that the loiterer is treating the
passerby merely as a means; for he is mugging her, which, intuitively
speaking, seems to be a clear case of treating another merely as a
means. One might question whether the proposed sufficient condition
does this. Even highly unlikely events are logically possible. It is
improbable, but still logically possible, that the passerby shares the
loiterer's end of his getting $100, one might argue. For
example, the passerby might aim to give $100 to the loiterer, but not
recognize him when he threatens her and so hands over her money to him
as a result of his threat. If this possibility is realized, then the
account would not count the loiterer as treating the passerby merely
as a means. One might also argue that in the case of the false
promise, it is improbable, but still logically possible, that the
lender loans money to a borrower (and thereby believes that it will be
repaid), all the while sharing the borrower's end that she get
money from him (the lender) without paying it back. For example, the
lender might believe that the borrower will pay him back, but share
her end of her getting money from him without repaying it because he
believes that if she does, she will bring about something he covets,
namely, the demise of her reputation. Some philosophers insist,
however, that this sort of scenario is not logically possible; for in
order to be making a loan to another, a person must not only believe
that his money will be repaid, but want and hope that it will be
(Papadaki 2016: 78). If these philosophers are right, the proposed
sufficient condition would after all classify as treating others
merely as means a range of actions that many regard as such.
#### 3.1.2 Preventing the other from choosing to pursue one's end
According to a different interpretation of Kant, another cannot share
the end an agent pursues in using him in some way if how the agent
behaves "prevents [the other] from *choosing* whether to
contribute to the realization of that end or not" (Korsgaard
1996: 139). The lender in our example cannot share the
borrower's end of getting money without ever repaying it; for
the borrower's false promise obscures his end and thus prevents
the lender from choosing whether to contribute to it.
This reading of possible end sharing might have implausible
implications when incorporated into a principle according to which a
person who uses another treats the other merely as a means if the
other cannot share the end she is pursuing in using him (Kerstein
2013: 63). Consider young men hiking in the Rocky Mountains for the
first time who find themselves on a mountain in late afternoon without
water and unsure of the way down. To their relief, they spot another
hiker, someone whom they saw park his car in the same area below. They
follow him, using his knowledge of the terrain to get down the
mountain safely. The young men realize that they could, but choose not
to, tell the hiker that they are following him. Out of embarrassment
for their dependence on him, they ensure that they remain undetected.
The way they act prevents the man from choosing whether to contribute
to the realization of their end. According to the notion of possible
end sharing we are here considering, we might have to embrace a view
that some find implausible: Since the hiker cannot share the young
men's end, they are treating him merely as a means and thereby
acting wrongly. To avoid this implication, one might affirm the
following: a person cannot share the end an agent pursues in using him
if the agent's behavior prevents the person from choosing
whether to contribute to the realization of that end and the person
has a right not to be prevented from making this choice (Papadaki
2016: 80). If the hiker fails to have a right not to be prevented from
choosing whether to contribute to the young men getting safely down
the mountain, then they do not treat him merely as a means according
to the amended account. Of course, this amendment invites questions as
to when a person has such a right, as well as making the account of
treating others merely as means depend on an account of moral rights
in a way that Korsgaard (or Kant) might not have intended.
#### 3.1.3 Practical irrationality
The allusion in the false promising passage to possible end sharing is
subject to a third interpretation: the promisee cannot share the
promisor's end in that it would be practically irrational for
her to do so. In typical cases, it would be irrational for the
promisee to try to realize the end of making a loan that is never to
be repaid. This end's being brought about would prevent him from
realizing other ends he is pursuing, ends such as paying rent, buying
groceries, or simply getting his money back.
The notion of practical irrationality at work here seems implicit in
the *Groundwork*. Kant there (1785: 413-418) introduces a
principle that Thomas Hill, Jr. calls (1992: 17-37) "the
hypothetical imperative": If you will an end, then you ought to
will the means to it that are necessary and in your power, or give up
willing the end. Willing an end presumably involves setting it and
attempting to realize it. According to Kant, the hypothetical
imperative is a principle of reason: all of us are rationally
compelled to conform to this principle. An agent would be violating
the hypothetical imperative and thus acting irrationally by willing an
end yet, at the same time, willing another end, the attainment of
which would, he is aware, make it impossible for him to take the
otherwise available and necessary means to his original end. An agent
would violate the hypothetical imperative, for example, by willing now
to buy a house and yet, at the same time, willing to use the money he
knows he needs for the down payment to make a gift to his niece. If he
willed to make the gift, he would be failing to will the necessary
means in his power to buy the house. The Kantian hypothetical
imperative implies that it is irrational to will to be thwarted in
attaining ends that one is pursuing. In typical cases, if a promisee
willed the end of a false promisor, she would be doing just that.
There are two things that an agent who has willed something can do
which would bring his action into compliance with the hypothetical
imperative. He can either will the means that are necessary and in his
power to the end (which, of course, would rule out his willing to be
thwarted in attaining the end) or he can give up willing the end. For
example, the hypothetical imperative would not imply that it was
irrational for the person described above to cease willing now to buy
a house and instead use the money that he knows would be required as a
down payment on it to make a gift to his niece.
A person cannot share an agent's end, according to this third
account, if:
>
>
> The person has an end such that his pursuing it at the same time that
> he pursues the agent's end would violate the hypothetical
> imperative, and the person would be unwilling to give up pursuing this
> end, even if he was aware of the likely effects of the agent's
> successful pursuit of her end.
>
>
>
By way of illustration, suppose that a doctor plans to use a healthy
patient to obtain a heart and lungs for transplant, that is, to
extract them from him in an operation that would kill him. We can
imagine that the patient has many ends, for example, that of attending
his daughter's wedding. According to the hypothetical
imperative, it would be irrational for him to pursue this end at the
same time he was pursuing the doctor's end of getting from him a
heart and lungs for transplant. The account implies that the patient
cannot share the doctor's end if he would be unwilling to give
up his end of attending his daughter's wedding against the
background of an awareness of the likely effects of the doctor's
successful pursuit of his organs (e.g., his life being lost and other
lives being saved).
This notion of conditions under which a person cannot share an
agent's end might be included in the following account: An agent
treats another merely as a means if the other cannot share the
proximate end or ends the agent is pursuing in treating him as a
means. An agent's proximate end is something she aims to bring
about directly from her use of the person. Her proximate end might
also be her ultimate end, say, if she uses another to avoid pain. But
her proximate end might be far removed from her ultimate end. Someone
might, for example, use another to develop her skill as a violinist to
earn a good living in an orchestra so she can put her little sisters
through college, and so forth. The account invokes proximate ends
because they are far more intimately connected to the use that brings
them about than ultimate ends need be.
Yet, like the other accounts we have considered, this account is
subject to criticism. One possible shortcoming stems from cases of
competition (Kerstein 2009: 170-171). Sometimes people have the
end of being the sole winner in a competition. A competitor pursuing
such an end might, according to the account, be treating her
competitor merely as a means and thus acting wrongly, even though she
abides fully by the competition's rules. To begin, competitors
sometimes count as treating one another as means. To invoke one
account of doing so (discussed in
§2
above), Player *A* intentionally does something to her
opponent, Player *B*, for example, tries to defeat him, which
requires *B*'s presence or participation. Moreover,
*A*'s proximate end in trying to defeat *B* might
be to win top player for the year; and *B*'s proximate
end in trying to defeat *A* might also be to win top player for
the year. To focus on *A*, she is using *B* to realize
an end, namely her (*A*'s) winning top player for the
year, but *B* cannot share this end. In willing that *A*
be top player of the year, *B* would, in effect, be willing to
be thwarted in his attempt to win top player for the year, assuming
that there can be no tie for top player. Finally, awareness on
*B*'s part of the likely effects of *A*'s
successful pursuit of being the top player would presumably not result
in *B*'s being willing to give up his
(*B*'s) end of being top player. In trying to defeat
*B* to be number one, *A* would be treating *B*
merely as a means and thereby acting wrongly, the account seems to
imply, even if *A* competed fairly, that is, violated none of
the competition's rules. Some might find this implication
implausible. Sometimes becoming the best in some endeavor involves
defeating (and using) competitors to do so. But defeating (and using)
competitors to be the best, especially when they have freely entered a
competition, need not amount to acting wrongly, some might insist.
### 3.2 Possible Consent
In the passage on false promising, Kant references possible consent.
He suggests that the victim of the false promise cannot agree to the
use the false promisor is making of him. We might conclude that the
victim cannot agree on the grounds that he cannot share the
promisor's end; for it would, in the sense invoked above
(§3.1.3),
be practically irrational for him to pursue this end. There is,
however, another way of interpreting the victim's inability to
consent in the context of considering candidates for a plausible
sufficient condition for an agent's treating another merely as a
means. Another account, prompted by the *Groundwork* passage,
is this: An agent uses another merely as a means if the other cannot
consent to her use of him (O'Neill 1989:
113).[7]
An agent cannot consent to being treated as a means if he does not
have the ability to avert his being treated as such by dissenting,
that is, by withholding his agreement to
it.[8]
If an agent deceives or coerces another, then the other's
dissent is "in principle ruled out" (1989: 111) and thus
so is his consent. Suppose, for example, that an appliance
serviceperson tricks a customer into authorizing an expensive repair.
The customer does not really have the opportunity to dissent to the
person's action by refusing to give his consent to it. For he
does not know what her action is, namely one of lying to him about
what is wrong with his refrigerator. (If he did know what her action
was, then he would not be deceived.) Or suppose that a mugger
approaches you on a dark street, points a gun at you, and tells you
that unless you give him all of your money, he will hurt you. He
leaves you no opportunity to avert his use of you by withholding your
consent. Regardless of what you say, he is presumably going to use
you, whether it is through your handing over your wallet or his
violently taking it from you. Since you cannot consent to his action,
the mugger is treating you merely as a
means.[9]
The account is subject to objections. It does not suffice for an agent
to treat another merely as a means that the other simply *be*
unable to consent to the way he is being used, some argue. If it did
suffice, then a passerby giving cardiopulmonary resuscitation (CPR) to
a collapsed jogger would be treating the jogger merely as a means and
thus acting wrongly. But the passerby does not seem to be doing
anything that is morally impermissible.
In light of this objection, someone might propose a different account:
Suppose an agent uses another. She uses him merely as a means if
something she has done or is doing to the other *renders* him
unable to consent to her using him. Of course, although the collapsed
jogger has no opportunity to consent to the passerby's giving
him CPR, the passerby has not put him in that position. So this
account avoids the unwelcome implication that the passerby treats the
collapsed jogger merely as a means.
However, this account is also open to objection. First, it fails to
designate as such some cases that we, intuitively speaking, would
surely classify as treating others merely as means (Kerstein 2013:
74). Think for example of a case where one person knocks someone out
with a "date rape" drug. Another person, who had no
knowledge of or involvement in drugging the victim, sexually assaults
him. Since this other person has not rendered the victim unable to
consent to his use of him, the account does not yield the conclusion
that he treats him merely as a means.
The account arguably not only fails to capture some cases of an
agent's treating another merely as a means, but also designates
as such some cases of deception that, intuitively speaking, are not.
For example, in order to make your spouse's birthday party a
surprise for her, you need to lie to your sister-in-law about your
whereabouts during a certain afternoon. You use her to quell your
spouse's suspicions regarding your plans. As you realize, if you
told your sister-in-law about the party, she would be unable to keep
the secret from your spouse. According to the account, you treat your
sister-in-law merely as a means, since your deception leaves her with
no opportunity to avert your use of her. This conclusion seems
questionable to some, albeit not to others. Here is another case of
what some think of as morally permissible deception (Parfit 2011:
178). Suppose that, in order to save the life of an innocent witness
to a crime, you use her to pass on a lie you have told her to the
perpetrator, Brown. If Brown did not believe the lie, he would kill
the witness. You realize that if you let the witness in on what was
necessary to save her life and told her to lie to Brown herself, she
would not be able to do so effectively. Your treatment of the person
renders impossible her consent to your use of her. But it is
implausible to conclude that you are treating her merely as a means,
some insist.
In these two cases, it makes sense to think that the person you are
using can share your ends, in the sense specified in
§3.1.3.
Your sister-in-law can share the end of your spouse not getting
suspicious regarding a surprise party, and, of course, the witness can
share the end of Brown's coming to believe some lie. Perhaps
that is why in these cases the person's inability to consent to
your use of her seems to fall short of plausibly implying that you are
using her merely as a means.
### 3.3 Actual Consent
A proposal for a sufficient condition for treating another merely as a
means might invoke a notion of actual consent. Suppose an agent is
using another, the proposal might go; he is using her merely as a
means if she has not consented to his use of her (Nozick 1974:
30-31; Scanlon 2008: 107). A more detailed actual consent
proposal is the following: "An agent uses another person merely
as a means if and only if (1) the agent uses another person as a means
in the service of realizing her ends (2) without, as a matter of moral
principle, making this use conditional on the other's consent;
where (3) by 'consent' is meant the other's genuine
actual consent to being used, in a particular manner, as a means to
the agent's end" (Kleingeld 2020: 398). Both proposals
face challenges. For example, suppose a patient arrives in the
emergency room unconscious, with severe injuries. The physician on
duty judges that only an experimental treatment could preserve the
patient's life. She therefore uses the patient in a study of a
new technique to prevent blood loss. Both actual consent proposals
imply, seemingly implausibly, that the physician used the patient
merely as a means.
One might claim that an agent who uses another without the
other's actual consent treats him merely as a means unless
particular conditions obtain: the other lacks competency to consent,
the other is rationally required to consent (e.g., perhaps to preserve
his life), or the other is rationally forbidden to consent (e.g.,
perhaps to being someone's slave) (Formosa 2017: 99-100). This
proposal would avoid the implication that the physician in the
preceding example is using the patient merely as a means; for the
patient obviously lacks competency to consent. Yet the proposal seems
to suffer from a shortcoming that all the actual consent accounts
mentioned have. They all seem to imply, implausibly, that an agent
would be treating another merely as a means in cases such as this
(Cohen 1995: 243): Someone uses another without the other's
consent to prevent the light from shining in her eyes, by positioning
herself in an appropriate way.
### 3.4 Invoking Moral Notions (or Not)
A distinction worth noting in accounts of using others merely as means
is that between ones that do and ones that do not appeal to moral
concepts. A clear example of the former type of account is the
following: X uses Y merely as a means if and only if (1) X uses Y as a
means and (2) X's use of Y violates a duty that X owes to Y
(Fahmy 2023: 58). To determine whether an agent uses another merely as
a means on this account, it is obviously necessary to rely on views
regarding which duties the agent owes to the other. Not surprisingly,
philosophers who offer such accounts sometimes invoke Kant's
notion that persons have dignity that demands respect as a basis for
determining our moral duties (Fahmy 2023: 50; Formosa 2017: 108). A worry
concerning these accounts is that the light they shed on what treating
others merely as means amounts to is limited by the clarity, or lack
thereof, in the moral notions they invoke.
Other philosophers suggest sufficient conditions for a person treating
another merely as a means that seem not to depend on any moral
concepts (Kerstein 2013: 76; Audi 2016: 4, 56-57). It is often easier
to discern the implications, both plausible and implausible, of these
accounts than the implications of accounts that rely on disputed or
vague moral concepts such as our dignity or rights as persons. Of
course, not only accounts of when agents treat others merely as means,
but also accounts of when they do not do so (Section 5) differ in
whether they are grounded in moral concepts.
## 4. Treating Another Merely as a Means and Acting Wrongly
Kant holds that if someone treats another merely as a means, the
person acts wrongly, that is, does something morally impermissible.
Some accounts of treating others merely as means seem not to yield the
conclusion that if a person treats another in this way, then he acts
wrongly. On one "rough definition", we use another merely
as a means if we both use the other and regard him
>
>
> as a mere instrument or tool: someone whose well-being and moral
> claims we ignore, and whom we would treat in whatever ways would best
> achieve our aims. (Parfit 2011: 213 and 227)
>
>
>
For example, a kidnapper treats her victim merely as a means if she
uses him for profit and thinks of him simply as a tool that she would
treat in any way necessary for profit. This account takes quite
literally "merely" in "treating others merely as
means". According to it, treating another merely as a means
amounts roughly to treating the other solely or exclusively as a
tool.
If this is how we understand treating others merely as means, then
doing so does not always amount to acting wrongly, it
appears.[10]
Suppose that a gangster considers a barista a mere tool to get coffee
and that he would treat her in whichever way would best serve his
interests. In buying coffee from her, the gangster treats her merely
as a means on this account, but does not, it seems, act wrongly
(Parfit 2011: 216).
On a more detailed, but similar, understanding of treating another
merely as a means, it also seems that the gangster might be doing so
and yet not be acting wrongly. According to this account:
"Treating a person merely as a means is doing something toward
that person . . . on the basis of instrumental motivation, with no
motivational concern regarding any non-instrumental aspects of the
action(s), and with a disposition not to acquire such a concern"
(Audi 2016: 56-57).
These notions of treating others merely as means seem not to coincide
with Kant's notion of doing so. Recall Kant's example of
making a false promise to another for financial gain. Suppose that a
particular false promisor would not do just anything to the other for
gain for himself, for example, he would not murder the other's
family. And suppose that this false promisor does have
non-instrumental motivational concern for the other in that he would
not make this false promise if it would likely result in the other
needing to declare bankruptcy. According to Kant's notion, but
not to these accounts, it appears, the false promisor would be
treating the other merely as a
means.[11]
We might question whether treating another merely as a means amounts
to acting wrongly even if we focus on the candidate sufficient
conditions examined above. For the sake of simplicity, let us focus on
the possible consent account
(§3.2),
according to which an agent treats another merely as a means if the
other cannot consent to her use of him. (We could just as effectively
employ other proposed sufficient conditions we have discussed for just
using another.) Suppose that two muggers attack a victim. The victim
violently pushes one of the muggers into the other, so that he (the
victim) can make his escape. The victim uses the mugger he pushes, and
the mugger presumably is unable to avert this use simply by dissenting
from it. Yet many would object to the idea that the victim is acting
wrongly. One response to this issue would be to build into accounts of
treating another merely as a means the specification that one is not
doing that if he is using someone in order to prevent himself or
someone else from being treated in this way. Building in this
specification would, of course, tend to make accounts somewhat
unwieldy. Other examples might make it more difficult to accept the
idea that treating another merely as a means is always morally
impermissible. Suppose, for example, that we use one person to save a
million people from nuclear conflagration, without giving the one any
opportunity to avert the use by dissenting from it. We thereby treat
the one merely as a means, according to a possible consent account.
But do we act wrongly? Some hold that we do
not.[12]
They might defend the view that while it is always wrong *pro
tanto* to treat another merely as a means, doing so is sometimes
morally permissible, all things considered. In other words, we always
have strong moral reasons not to treat others merely as means, but
these reasons can be outweighed by other moral considerations,
presumably including the good of many lives being preserved. If,
contrary to Kant's view, the moral constraint against treating
others merely as means is not absolute, then a question arises as to
when it gets overridden.
## 5. Using Another, but Not Merely as a Means
We have explored sufficient conditions for treating another merely as
a means. But just as challenging as pinpointing them is specifying
when someone uses another, but *not* merely as a means.
According to one proposal, if an agent uses another, she does
*not* use him merely as a means if he gives his voluntary,
informed consent to her use of him. To fix ideas let us say that the
consent of the person being used is voluntary only if he is not being
coerced into giving it and informed only if he understands how he is
being used and to what purpose(s). This proposal seems intuitively
attractive. If a person agrees to someone using him and understands
her ends in doing so, then how can she be treating him merely as a
means?
Appealing to reflective common sense, philosophers have tried to
illustrate how. We might refer to one range of cases they invoke as
*exploitation cases* because they seem to involve one person
taking unfair advantage of another, which is a hallmark of
exploitation (Wertheimer 1996). To cite one such case, suppose that a
mother of modest means cannot afford to give her children a good
education. A rich person proposes to finance her children's
enrollment at excellent schools in exchange for her serving as his
personal slave (Davis 1984: 392). The mother might understand the use
he intends to put her to and for what purposes. Moreover, if we think
of coercion as involving an agent threatening to make someone worse
off than she would be if she did not interact with the agent, then the
rich person would not count as coercing the mother into agreeing to
his use of her. The account of using another but not merely as a means
that we are considering might, therefore, imply that the rich person
is not just using the mother in making her his personal slave. That
implication strikes some as implausible.
Another type of case that might cause problems for this account
invokes unnecessary or otiose threats designed to force another to
serve one's purposes. Here is an example of such a case. An
elderly salesperson thinks that his company is trying to force him
into retirement by keeping its latest sales leads from him. Desperate
to make a sale, he intends to use his office manager to obtain the
latest leads. The manager has the password to a database housing the
leads. He tells her that he really needs to close some deals, and
unless she gets the latest leads for him, he is going to reveal to
everyone in the office that she is a lesbian. He believes reasonably,
given his incomplete understanding of her and of the attitudes of his
other co-workers, that this revelation would be damaging to her
reputation. But the office manager takes the salesperson, whom she
thinks of as a friendly colleague, to be making an ill-advised joke.
Virtually everyone in the office is already aware of her sexual
preference. And, she believes, the salesperson is cognizant that it is
company policy that all salespeople are to be granted access to the
latest leads upon request. She gives him a puzzled look and agrees to
get him the leads right away.
The salesperson receives from the office manager her voluntary,
informed consent to his use of her to get the leads. She understands
that he intends to use her to this end. Granted, he threatens to make
her worse off if she does not give him the leads. But it is not the
threat, which she does not even register as such, that generates her
agreement to his use of her. She agrees voluntarily. Yet, despite
obtaining her consent for his use of her, some believe that the
salesperson treats the office manager merely as a means and acts (at
least *pro tanto*) wrongly. Others might argue that although
the salesperson has revealed a moral deficiency, he has not
*done* anything wrong. Rather, he has simply revealed a morally
faulty attitude toward the office manager (Scanlon 2008: 46; Walen
2014: 428-429). If we judge that the salesperson does act
wrongly, then we presumably take this case to illustrate that treating
another merely as a means does not necessarily amount to harming her.
In other words, in treating another merely as a means, an individual
can wrong another without harming her.
Regardless of whether we judge in this case that the salesperson acts
wrongly, the case helps to illustrate a distinction between
agent-focused and patient-focused accounts of treating or not treating
another merely as a means. According to the account we are
considering, let us recall, if an agent uses another, she does
*not* use him merely as a means if he gives his voluntary,
informed consent to her use of him. This account focuses on the other,
that is, on the individual treated as a means to determine whether the
agent is treating him merely as a means. If *he* (i.e., the
other) gives his informed, voluntary consent to being used in some
way, then the agent does not treat him merely as a means, according to
the account. To make this determination, an agent-focused account
would, of course, focus more on the agent. Such an account might hold,
for example, that if an agent uses another, she does not use him
merely as a means *if it is reasonable for her (the agent) to
believe* that the other gives his voluntary, informed consent to
her use of him. According to the notion of reasonable belief invoked
here, it is reasonable for someone to believe something roughly if the
belief is justifiable given the person's context (e.g., his
upbringing, cognitive limitations, and so forth). Contrary to the
patient-focused account, the agent-focused account is free from the
implication that the salesperson is not treating the office manager
merely as a means. It is *not* reasonable for the salesperson
to believe that the office manager has given her voluntary consent to
his use of her. It is, rather, reasonable for him to believe that he
has coerced her into giving him the sales leads. A parallel
patient-focused vs. agent-focused distinction applies, of course, to
accounts of sufficient conditions for treating another merely as a
means. One might, for example, hold that an agent just uses another if
the other cannot share the end the agent is pursuing in using him (a
patient-focused account). Or one might hold that an agent just uses
another if it is reasonable for her to believe that the other cannot
share the end the agent is pursuing in using him (an agent-focused
account).
We have been considering actual consent accounts of agents using
others, but not treating them merely as means. We can also develop
accounts that invoke other concepts familiar from discussion of
sufficient conditions for treating others merely as means, including
the concepts of possible end-sharing and possible consent. For
example, we might suggest that an agent who uses another does not use
that other merely as a means if the other can consent to the
agent's use of him, that is, if the other can avert the use
simply by dissenting from it. This suggestion, as well as others that
invoke additional concepts we have considered, such as that of end
sharing, might generate questionable verdicts regarding exploitation
cases. For example, the mother of modest means discussed above can
consent to the rich person's use of her. Her dissent from it
alone would stop it from occurring. But some would insist that the
rich person is nevertheless treating her merely as a means in making
her his personal slave in exchange for educating her children.
Another approach to conditions under which an agent uses another, but
does not treat him merely as a means takes shape against a literal
construal of treating others merely as means. On this construal,
discussed above, we use another merely as a means roughly if we both
use the other and regard him as a mere tool. According to the
approach, we do *not* treat another merely as a means if our
treatment of the other person "is governed or guided in
sufficiently important ways by some relevant moral belief or
concern" (Parfit 2011: 214). But when is an agent's use of
another governed in sufficiently important ways by some relevant moral
belief or concern? The agent's use of another is so governed,
according to one response, when the agent tries to and succeeds in
using the other only in ways to which the other can rationally
consent.
But when can the other rationally consent? To simplify matters, let us
make some background assumptions. Let us suppose that the people who
might be used understand what will be done to them and to what
purpose, as well as the effects the use will have. Let us also assume
that those who might be used have the power to give (or to withhold)
consent in the "act-affecting sense" (Parfit 2011:
183-184). When we ask whether they can rationally consent to
being used, we are asking whether it would be rational for them to
consent (or dissent) supposing that their choice would determine
whether or not they were used.
Against the background of these assumptions, we can say that, on this
account, a person can rationally consent to being treated as a means
just in case he has *sufficient reasons* to consent to it. The
account rests on an "objective" view of reasons, according
to which
>
>
> there are certain facts that give us reasons both to have certain
> desires and aims, and to do whatever might achieve these aims. These
> reasons are given by facts about the *objects* of these desires
> or aims, or what we might want or try to achieve. (Parfit 2011:
> 45)
>
>
>
For example, the fact that a child is in pain as a result of a
splinter stuck in his finger gives me reason to want to and to try to
get it
out.[13]
On this account, we have impartial as well as partial reasons for
agreeing to be treated in various ways. Our impartial reasons are
"person-neutral" (2011: 138). We do not need to invoke
ourselves when we describe the facts that yield these reasons. The
fact that some event would cause tremendous pain to a particular
person, for example, gives us reason (albeit perhaps not sufficient
reason) to prevent the event or relieve the pain "whoever this
person may be, and whatever this person's relation to us"
(2011: 138). Our partial reasons are "person-relative":
they "are provided by facts whose description must refer to
us" (2011: 138). The fact that the little boy being hurt by the
splinter is *my* son gives me a partial reason to pull it out.
We each have partial reasons to be particularly attentive to our own
well-being and to the well-being of those in our circle, for example,
our family and friends. According to the account,
>
>
> When one of our two possible acts would make things go in some way
> that would be impartially better, but the other act would make things
> go better either for ourselves or for those to whom we have close
> ties, we often have sufficient reasons to act in either of these ways.
> (2011: 137)
>
>
>
For example, regarding a case in which a person could either save
himself from some injury or do something that would save some
stranger's life in a distant land, the person presumably has
sufficient reasons to do either one. In a similar vein, a person can
have sufficient reason to consent to being treated as a means by
virtue of some impartial reason, such as the fact that his being so
treated will save many lives, even if he also has sufficient reason to
dissent from being treated as a means by virtue of some partial
reason, such as the fact that his being so treated will result in
suffering for him. In sum, this account holds that if an agent uses
another, he does not use the other merely as a means if the other has
sufficient reasons, as just characterized, for agreeing to be
used.
The account seems to imply that cases thought to be paradigmatic of
treating others merely as means do not involve treating them in this
way. Take a case in which a pedestrian is on a bridge above a track
where a train is barreling toward five people (Parfit 2011: 218). The
only way to save the five would be to open, by remote control, a
trap door on which the pedestrian is standing, so that he would fall
in front of the train. The train would kill him, but the impact would
trigger its automatic brake. If a bystander opens the trap door, then
she uses the pedestrian as a means to save the five. The pedestrian
has sufficient reasons to refrain from consenting to being used to
stop the train. After all, it will result in his premature death. But,
according to the account, he also might have sufficient reasons to
consent to being used to stop the train; for his being used in this
way would save the lives of five people, contributing to an outcome
which is presumably impartially best (2011: 220). Suppose the
bystander opens the trap door and uses the person on the bridge to
save the five. In so doing, she might be limiting her use of another
to ways to which the other can rationally consent. If she is, then she
is not treating the person merely as a means, according to the
account.
Consider another well-known example. Five patients at a hospital are
in immediate need of different organs. One patient needs a kidney,
another needs a liver, and so forth. If a surgeon used a healthy
person undergoing routine tests as a resource for organs, killing him
in the process, all of the five would be saved (Harman 1977:
3-4). The healthy person presumably has strong partial reasons
to dissent from being used for his organs. But he also arguably has
enough impartial reason to consent, namely, that five people will
thereby be saved, such that overall he has sufficient reason to
consent. So, assuming that the surgeon is trying to treat people only
in ways to which they can rationally consent, she might not be
treating the healthy person merely as a means, even if before she
succeeds in putting him under, he is begging for his
life.[14]
If the account we are considering of using others, but not merely as
means does entail that the bystander and the doctor in these two cases
are not treating others merely as means, the account suffers from a
significant flaw, according to some.
In the words of one philosopher, the idea that it is wrong to treat
others merely as means is "both very important and very hard to
pin down" (Glover 2006: 65). Our investigation has illustrated
challenges in specifying what it means to treat others merely as
means. It has not revealed one, univocal concept, grounded in common
sense, of what just using another amounts to. In the end, there may be
no such concept, but rather a set of overlapping notions, which point
to a range of morally problematic actions or attitudes concerning the
use of others.
## 1. One-self Theories
One-self theories assert that the Trinity, despite initial
appearances, contains exactly one self.
### 1.1 Selves, gods, and modes
A self is a being who is in principle capable of knowledge,
intentional action, and interpersonal relationships. A deity is
commonly understood to be a sort of extraordinary self. In the Bible,
the deity Yahweh (a.k.a. "the LORD") commands, forgives,
controls history, predicts the future, occasionally appears in
humanoid form, enters into covenants with human beings, and sends
prophets, whom he even allows to argue with him. More than a common
deity in a pantheon of deities, he is portrayed as being the one
creator of the cosmos, and as having uniquely great power, knowledge,
and goodness.
Trinitarians hold this revelation of the one God as a great self to
have been either supplemented or superseded by later revelation which
shows the one God in some sense to be three "Persons."
(Greek: *hypostaseis* or *prosopa*, Latin:
*personae*) But if these divine Persons are selves, then the
claim is that there are three divine selves, which would seem to be
three gods. Some Trinity theories understand the Persons to be selves,
and then try to show that the falsity of monotheism does not follow.
(See section
2
below.) But a rival approach is to explain that these three divine
Persons are really ways the one divine self is, that is to say, modes
of the one god. In current terms, one reduces all but one of the three
or four apparent divine selves (Father, Son, Spirit, the triune God)
to the remaining one. One of these four is the one god, and the others
are his modes. Because the New Testament seems to portray the Son and
Spirit as somehow subordinate to the one God, one-self Trinity
theories always either reduce Father, Son, and Spirit to modes of the
one, triune God, or reduce the Son and Spirit to modes of the Father,
who is supposed to be numerically identical to the one God. (See
section
1.8
for views on which only the Holy Spirit is reduced to a mode of God,
that is, the Father.)
Because God in the Bible is portrayed as a great self, at the popular
level of trinitarian Christianity one-self thinking has a firm hold.
Liturgical statements, song lyrics, and sermons frequently use
trinitarian names ("Father", "Son",
"Jesus", "God", etc.) as if they were
interchangeable, co-referring terms, referring directly or indirectly
(via a mode) to one and the same divine self.
### 1.2 What is a mode?
But, what is a "mode"? It is a "way a thing
is", but that might mean several things. A "mode of
*X*" might be
* an intrinsic property of *X* (e.g., a power of *X*,
an action of *X*)
* a relation that *X* bears to some thing or things (e.g.,
*X*'s loving itself, *X*'s being greater
than *Y*, *X* appearing wonderful to *Y* and to
*Z*)
* a state of affairs or event which includes *X* (e.g.,
*X* loving *Y*, it being the case that *X* is
great)
One-self trinitarians often seem to have in mind the last of these.
(E.g., The Son is the event of God's relating to us as friend
and savior. Or the Son is the event of God's taking on flesh and
living and dying to reveal the Father to humankind. Or the Son is the
eternal event or state of affairs of God's living in a son-like
way.) If an event is (in the simplest case) a substance (thing) having
a property (or a relation) at a time, then the Son (etc.) will be
identified with God's having a certain property, or being in a
certain relation, at a time (or timelessly). By a natural slide of
thought and language, the Son (or Spirit) may just be thought of and
spoken of as a certain divine property, rather than God's having
of it (e.g., God's wisdom).
Modes may be essential to the thing or not; a mode may be something a
thing could exist without, or something which it must always have so
long as it exists. (Or on another way to understand the
essential/non-essential distinction, a mode may belong to a
thing's definition or not.)
There are three ways these modes of an eternal being may be temporally
related to one another: maximally overlapping, non-overlapping, or
partially overlapping. First, they may be eternally
concurrent--such that this being always, or timelessly, has all
of them. Second, they may be strictly sequential (non-overlapping):
first the being has only one, then only another, then only another.
Finally, some of the modes may be had at the same times, partially
overlapping in time.
### 1.3 One-self Theories and "Modalism" in Theology
Influential 20th century theologians Karl Barth (1886-1968) and
Karl Rahner (1904-84) endorse one-self Trinity theories, and
suggest replacements for the term "Person". They argue
that in modern times "person" has come to mean a self. But
three divine selves would be three gods. Hence, even if
"Person" should be retained as traditional, its meaning in
the context of the Trinity should be expounded using phrases like
"modes of being" (Barth) or "manners of
subsisting" (Rahner) (Ovey 2008, 203-13; Rahner 1997,
42-5, 103-15).
Barth's own summary of his position is:
>
> As God is in Himself Father from all eternity, He begets Himself as
> the Son from all eternity. As He is the Son from all eternity, He is
> begotten of Himself as the Father from all eternity. In this eternal
> begetting of Himself and being begotten of Himself, He posits Himself
> a third time as the Holy Spirit, that is, as the love which unites Him
> in Himself. (Barth 1956, 1)
>
All of Barth's capitalized pronouns here refer to one and the
same self, the self-revealing God, eternally existing in three ways.
Similarly, Rahner says that God
>
> ...is - at once and necessarily - the unoriginate who
> mediates himself to himself (Father), the one who is in truth uttered
> for himself (Son), and the one who is received and accepted in love
> for himself (Spirit) - and... *as a result of this*,
> he [i.e. God] is the one who can freely communicate himself. (Rahner
> 1997, 101-2)
>
Similarly, theologian Alister McGrath writes that
>
> ...when we talk about God as one person, we mean one person
> *in the modern sense of the word* [i.e. a self], and when we
> talk about God as three persons, we mean three persons *in the
> ancient sense of the word* [i.e. a persona or role that is
> played]. (McGrath 1988, 131)
>
All three theologians are assuming that the three modes of God are all
essential and maximally overlapping.
Mainstream Christian theologians nearly always reject
"modalism", meaning a one-self theory like that of
Sabellius (fl. 220), an obscure figure who was thought to teach that
the Father, Son, and Holy Spirit are sequential, non-essential modes,
something like ways God interacts with his creation. Thus, in one
epoch, God exists in the mode of Father, during the first century he
exists as Son, and then after Christ's resurrection and
ascension, he exists as Holy Spirit (Leftow 2004, 327; McGrath 2007,
254-5; Pelikan 1971, 179). Sabellian modalism is usually
rejected on the grounds that such modes are strictly sequential, or
because they are not intrinsic features of God, or because they are
intrinsic but not essential features of God. The first aspect of
Sabellian modalism conflicts with episodes in the New Testament where
the three appear simultaneously, such as the Baptism of Jesus in
Matthew 3:16-7. The last two are widely held to be objectionable
because it is held that a doctrine of the Trinity should tell us about
how God really is, not merely about how God appears, or because a
trinitarian doctrine should express (some of) God's essence.
Sabellian and other ancient modalists are sometimes called
"monarchians" because they upheld the sole monarchy of the
Father, or "patripassians" for their (alleged) acceptance
of the view that the Father (and not only the Son) suffered in the
life of the man Jesus.
While Sabellian one-self theories were rejected for the reasons above,
these reasons don't rule out all one-self Trinity theories, such
as ones positing the Three as God's modes in the sense of his
eternally having certain intrinsic and essential features. Sometimes
the Trinity doctrine is expounded by theologians as meaning just this,
the creedal formulas being interpreted as asserting that God
(non-contingently) acts as Creator, Redeemer, and Comforter, or
describing "God as transcendent abyss, God as particular yet
unbounded intelligence, and God as the immanent creative energy of
being... three distinct ways of being God", with the named
modes being intrinsic and essential to God, and not mere ways that God
appears (Ward 2002, 236; cf. Ward 2000, 90; Ward 2015).
### 1.4 Trinity as Incoherent
The simplest sort of one-self theory affirms that God is, because
omniscient, omnipotent, and omnibenevolent, the one divine self, and
each Person of the Trinity just is that same self. The
"Athanasian" creed (on which see section
5.3)
seems to imply that each Person just is God, even while being
distinct from the other two Persons. Since the high middle ages
trinitarians have used a diagram of this sort to explain the teaching
that God is a Trinity.
![The traditional Trinity shield or scutum fidei (shield of faith)](Trinityshield.png)
If each occurrence of "is" here expresses numerical
identity, commonly expressed in modern logical notation as
"=", then the chart illustrates these claims:
1. Father = God
2. Son = God
3. Spirit = God
4. Father ≠ Son
5. Son ≠ Spirit
6. Spirit ≠ Father
But the conjunction of these claims, which has been called
"popular Latin trinitarianism", is demonstrably incoherent
(Tuggy 2003a, 171; Layman 2016, 138-9). Because the numerical
identity relation is defined as transitive and symmetrical, claims
1-3 imply the denials of 4-6. If 1-6 are steps in an
argument, that argument can continue thus:
7. God = Son (from 2, by the symmetry of =)
8. Father = Son (from 1, 7, by the transitivity of =)
9. God = Spirit (from 3, by the symmetry of =)
10. Son = Spirit (from 2, 9, by the transitivity of =)
11. God = Father (from 1, by the symmetry of =)
12. Spirit = Father (from 3, 11, by the transitivity of =)
This shows that 1-3 imply the denials of 4-6, namely, 8,
10, and 12. Any Trinity doctrine which implies all of 1-6 is
incoherent. To put the matter differently: it is self-evident that
things which are numerically identical to the same thing must also be
numerically identical to one another. Thus, if each Person just is
God, that collapses the Persons into one and the same thing. But then
a trinitarian must also say that the Persons are numerically distinct
from one another.
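The collapse argument can be checked mechanically. The following Lean 4 sketch is illustrative only (modeling the Persons and God as terms of an arbitrary type of entities is a formalization choice, not the entry's): from claims 1-3 alone, the symmetry and transitivity of "=" yield the denials of 4-6.

```lean
-- Illustrative formalization: if each Person is numerically identical
-- to God, the Persons are numerically identical to one another.
-- `Eq.trans` and `Eq.symm` encode the transitivity and symmetry of "=".
example {Entity : Type} (Father Son Spirit God : Entity)
    (h1 : Father = God)   -- claim 1
    (h2 : Son = God)      -- claim 2
    (h3 : Spirit = God)   -- claim 3
    : Father = Son ∧ Son = Spirit ∧ Spirit = Father :=
  ⟨h1.trans h2.symm, h2.trans h3.symm, h3.trans h1.symm⟩
```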
But none of this is news to the Trinity theorists whose work is
surveyed in this entry. Each theory here is built with a view towards
undermining the above argument. In other words, each theorist
discussed here, with the exception of some mysterians (see section
4.2),
denies that "the doctrine of the Trinity", rightly
understood, implies all of 1-6.
### 1.5 Divine Life Streams
Brian Leftow sets the agenda for his own one-self theory in an attack
on "social", that is, multiple-self theories. (See
sections
2
and
3.1
below.) In contrast to these, he asserts that
>
> ...there is just one divine being (or substance), God...[As
> Thomas Aquinas says,] God begotten receives numerically the same
> nature God begetting has. To make Aquinas' claim perfectly
> plain, I introduce a technical term, "trope". Abel and
> Cain were both human. So they had the same nature, humanity. Yet each
> also had his own nature, and Cain's humanity was not identical
> with Abel's... A trope is an individualized case of an
> attribute. Their bearers individuate tropes: Cain's humanity is
> distinct from Abel's just because it is Cain's, not
> Abel's. With this term in hand, I now restate Aquinas'
> claim: while Father and Son instance the divine nature (deity), they
> have but one trope of deity between them, which is
> God's...bearers individuate tropes. If the Father's
> deity is God's, this is because the Father *just is* God.
> (1999, 203-4)
>
Leftow characterizes his one-self Trinity theory as
"Latin", following the recent practice of contrasting
Western or Latin with Eastern or Greek or "social" Trinity
theories. Leftow considers his theory to be in the lineage of some
prominent Latin-language theorists. (See the supplementary document on
the history of trinitarian doctrines,
section 3.3.2,
on Augustine, and
section 4.1,
on Thomas Aquinas.) In a later discussion Leftow adds that this
Trinity theory needn't commit to trope theory about
properties. Rather, whether or not properties are tropes,
"...the Father's having deity = [is numerically
identical to] the Son's having deity. For both are at bottom
just *God's* having deity." (Leftow 2007, 358)
Leftow makes an extended analogy with time travel; just as a dancer
may repeatedly time travel back to the dance stage, resulting in a
whole chorus line of dancers, so God may eternally live his life in
three "streams" or "strands" (2004,
312-23). Each Person-constituting "strand" of
God's life is supposed to (in some sense) count as a
"complete" life (although for any one of the three,
there's more to God's life than it) (2004, 312). Just as
the many stages of the time-traveling dancer's life are united
into stages of her by their being causally connected in the right way,
so too, analogously, the lives of each of the three Persons count as
being the "strands of" the life of God, because of the
mysterious but somehow causal inter-trinitarian relations (the Father
generating the Son, and the Father and the Son spirating the Spirit)
(313-4, cf. 321-2, Leftow 2012a, 313).
Time-travel does not require that entities are four-dimensional
(2012b, 337). If a single dancer, then, time travels to the past to
dance with herself, this does not amount to one temporal part of her
dancing with a different temporal part of her. If that were so,
neither dancer would be identical to the (whole, temporally extended)
woman. But Leftow supposes that both would be identical to her, and so
would not be merely her temporal parts. He holds that if time travel
is possible, a self may have multiple instances or iterations at a
time. His theory is that the Trinity is like this, subtracting out the
time dimension. God, in timeless eternity, lives out three lives, or
we might say exists in three aspects. In one he's Father, in
another Son, and in another the Holy Spirit. But they are all one
self, one God, as it were three times repeated or multiplied.
Leftow argues that his theory isn't any undesirable form of
"modalism" (i.e. a *heretical* one-self theory)
because
>
> Nothing in my account of the Trinity precludes saying that the
> Persons' distinction is an eternal, necessary, non-successive
> and intrinsic feature of God's life, one which would be there
> even if there were no creatures. (2004, 327)
>
Leftow wants to show what is wrong with the following argument (2004,
305-6; cf. 2007, 359):
1. the Father = God
2. the Son = God
3. God = God
4. the Father = the Son (from 1-3)
5. the Father generates the Son
6. God generates God (from 1, 2, 5).
Creedal orthodoxy requires 1-3 and 5, yet 1-3 imply the
unorthodox 4, and 1, 2 and 5 imply the unorthodox (and necessarily
false) statement 6. So what to do? Lines 1-4 seem perfectly
clear, and the inference from 1-3 to 4 seems valid. So too does
the inference from 1, 2, and 5 to 6. Why should 6 be thought
impossible? The idea is that whatever its precise meaning,
"generation" is some sort of causing or originating,
something in principle nothing can do to itself. One would expect
Leftow, as a one-self trinitarian, to deny 1 and 2, on the grounds
that neither Father nor Son are identical to the one self which is
God, but rather, each is a mode of God. But Leftow instead argues that
premises 1 and 2 are unclear, and that depending on how they are
understood, the argument will either be sound but not heretical, or
unsound because it is invalid, 4 not following from 1-3, and 6
not following from 1, 2, and 5.
The argument seems straightforward so long as we read
"Father" and "Son" as merely singular
referring terms. But Leftow asserts that they are also definite
descriptions "which may be temporally rigid or non-rigid"
(Leftow 2012b, 334-5). A temporally rigid term refers to a being
at all parts of its temporal career. Thus, if "the president of
the United States" is temporally rigid, then in the year 2013 we
may truly say that "The president of the United States lived in
Indonesia", not of course, while he was president, but it is
true of the man who was president in 2013, that in his past, he lived
in Indonesia. If the description "president of the United
States" (used in 2013) is *not* temporally rigid, then it
refers to Barack Obama only in the presidential phase of his life, and
so the sentence above would be false.
"The Father", then, is a disguised description, something
like "the God who is in some life unbegotten" (2012b, 335)
(For "the Son" we would substitute "begotten"
for the last word.) Because the "=" sign can have a
temporally non-rigid description on one or both sides of it, then
there can be "temporary identities", that is, identity
statements which are true only at some times but not others. Leftow
gives as an example the sentence "that infant = Lincoln";
this is true when Lincoln is an infant but false when he has grown up.
Such identity statements can only be true or false relative to times,
or to something time-like (2004, 324). If the terms
"Father" and "Son" are temporally rigid, or at
least like such a term in that each applies to God at all portions of
his life (which isn't *temporally* ordered), then 4 does
follow from 1-3. But 4, Leftow argues, is theologically
innocuous, as it means something like "the God who is in some
life the Father is also the God who is in some life the Son"
(2012b, 335). This is "compatible with the lives, and so the
Persons, remaining distinct," seemingly, distinct instances of
God (each of which is identical to God), and Leftow accepts 1-4
as sound only if 4 means this (*ibid.*).
If the terms "Father" and "Son" are temporally
*non-*rigid, or at least *like* such a term in
that each applies to God relative to some one portion of his life but
not relative to the others, then the argument is unsound. Relative to
the Father-strand of God's life, 1 will be true but 2 will be
false. Relative to the Son-strand, 2 will be true, but 1 will be
false. 3 and 5 will be true relative to any strand, but in any case,
we will not be able to establish either 4 or 6.
Leftow's theory crucially depends on a concept of
modes: intrinsic, essential, eternal ways God is, that is, lives
or life-strands. But he does *not* identify the
"persons" of the Trinity with these modes. Rather, he
asserts that the modes somehow constitute, cause, or give rise to each
Person (2007, 373-5). Like theories that reduce these
Persons to mere modes of a self, Leftow's theory has it
that what may appear to be three selves actually turn out to be one
self, God. But they, all (apparently) three of them, *just are*
(are numerically, absolutely identical to) that one self, that is, God
thrice over or thrice repeated.
Some philosophers object that Leftow's time-travel analogy is
unhelpful because time-travel is impossible (Hasker 2009, 158).
Similarly, one may object that Leftow is trying to illuminate the
obscure (the Trinity) by the equally or more obscure (the alleged
possibility of time travel, and timeless analogues to it). If
Leftow's one-self theory is intended as a literal interpretation
of trinitarian language, a "rational reconstruction"
(Tuggy 2011a), this would be problematic; but if he means it merely as
an apologetic defense (i.e. we can't rule out that the Trinity
means this, and this can't be proven incoherent) then the fact
that some intellectuals believe in the possibility of time travel
supports his case.
One may wonder whether Leftow's life stream theory is really
trinitarian. Do not his Persons really, so to speak, collapse
into one, since each is numerically identical to God? Isn't this
modalism, rather than trinitarianism? (McCall 2003, 428) Again, one
may worry that Leftow's concept of God being
"repeated" or having multiple instances or iterations is
either incoherent or unintelligible. And how can such Persons be fully
God or fully divine when they exist because of something which is more
fundamental, God's life-strands? (Byerly 2017, 81)
William Hasker objects that assuming Leftow's theory,
>
> In the Gospels, we have the spectacle of God-as-Son
> *praying to himself*, namely to God-as-Father.
> Perhaps most poignant of all... are the words of abandonment on
> the cross, "My God, why have you forsaken me?" On the view
> we are considering, this comes out as *"Why have
> I-as-Father forsaken
> myself-as-Son?"* (Hasker 2009, 166)
>
In reply, Leftow argues that if we accept the coherence of time travel
stories, we should not be bothered by the prospect of "one
person at one point in his life begging the same person at another
point" (2012a, 321). About the cry of abandonment on the cross,
Leftow urges that the New Testament reveals a Christ who (although
divine and so omniscient) did not have full access to his knowledge,
specifically knowledge of his relation to the Father, and so Christ
could not have meant what Hasker said above. Instead, he "would
have been using the Son's 'myself' and
'I,' which... pick out only the Son" (2012a,
322).
Hasker also objects that Leftow's one-self theory
collapses the interpersonal relationships of the members of the
Trinity into God's relating to himself, and suggests that in
Leftow's view, God would enjoy self-love, but not
other-love, and so would not be perfect (2009, 161-2,
2012a, 331). (On this sort of argument see sections
2.3
and
2.5
below.) Leftow replies that the self-love in question would be
"relevantly like love of someone else" and so, presumably,
of equal value (2012b, 339).
Does the theory imply "patripassianism", the traditionally
rejected view that the Father suffers? (After all, the Son suffers,
and both he and the Father are identical to God.) Leftow argues that
nothing heretical follows; if his analysis is right "then
claiming that the Father is on the Cross is like claiming that The
Newborn [sic] is eligible to join the AARP [an organization for
retirees]", that is, true but misleading (2012b, 336).
### 1.6 Analogy to an Extended Simple
Recent metaphysicians have discussed the possibility of a simple
(partless) object which is nonetheless spatially extended, occupying
regions of space, but without having parts that occupy the
sub-regions. Martin Pickup (2016) makes an analogy between these and
the idea of a triune God, inspiring his own "Latin"
account. Motivated by skepticism about any three-self theory, and
taking the "Athanasian" creed as a starting point, he
understands the claims that each Person "is God" as
asserting the numerical identity of God with that Person. While God is
like an extended simple, the relationships between the Persons are
"analogous to the relationships between the spatial regions that
an extended simple occupies" (418).
To expound the theory Pickup uses the terminology of a "person
space", an imagined realm of all possible persons, which is just
"an abstraction of the facts about possible personhood"
(422). A point within person space is "a representation of a
group of properties that are jointly necessary and sufficient for
being a certain possible person" (423). This is supposed to help
us "see the conceptual space between being a certain person and
being a certain entity" (422, n. 18). Pickup gives some
non-theological examples to motivate the idea that one thing may
occupy multiple points of person space: the fictional example of Dr.
Jekyll and Mr. Hyde, and humans suffering from Dissociative Identity
Disorders (420-1).
The prima facie trinitarian claims which generate a contradiction are
as follows, where each "is" means numerical (absolute,
non-relative) identity.
1. The Father is God.
2. The Son is God.
3. The Holy Spirit is God.
4. The Father is not the Son.
5. The Father is not the Holy Spirit.
6. The Son is not the Holy Spirit.
If the three Persons are numerically distinct (4-6), then it
can't be that all of them are numerically identical to some
one God (1-3). Pickup proposes to understand these using person
space concepts. Using p1, p2, and p3 for different points in person
space, the points which correspond to being, respectively, the Father,
Son, and Spirit, claims 1-3 are read as:
1. The occupant of p1 is God.
2. The occupant of p2 is God.
3. The occupant of p3 is God.
These claims entail the 1-3 we started with, understood as
claims of numerical identity. But this method of interpretation
transforms 4-6, which become:
4. p1 is not p2.
5. p1 is not p3.
6. p2 is not p3.
In other words, "4. The Father is not the Son" is not
understood as asserting the numerical distinctness of the Father and
the Son, but rather, as asserting the distinction of the
Father's person space from the Son's person space. (And
similarly with 5, 6.) The point of all of this is that the six
interpreted claims seem to be coherent, such that possibly, all of
them are true. Notably, this account accepts what we can call
"Person-collapse", the implication of 1-3 that the Father
just is the Son, the Father just is the Spirit, and the Son just is
the Spirit. (In other words, those are numerically identical.) Thus,
unlike many Trinity theories, this one is arguably compatible with a
doctrine of divine simplicity.
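The consistency of Pickup's reading can be put in the same formal idiom. The following Lean 4 sketch is mine, not Pickup's, and the names (`occupant`, `p1`, `p2`) are illustrative: distinctness of the points is compatible with their sharing one and the same occupant, which is just the Person-collapse the account accepts.

```lean
-- Illustrative sketch: reinterpreted claim 4 ("p1 is not p2") is jointly
-- satisfiable with Person-collapse ("the occupant of p1 just is the
-- occupant of p2"), since distinct points may have a single occupant.
example {Point Entity : Type} (p1 p2 : Point) (occupant : Point → Entity)
    (God : Entity)
    (h1 : occupant p1 = God)  -- reinterpreted claim 1
    (h2 : occupant p2 = God)  -- reinterpreted claim 2
    (d  : p1 ≠ p2)            -- reinterpreted claim 4 (not contradicted)
    : occupant p1 = occupant p2 :=
  h1.trans h2.symm
```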
Pickup defends this account from several objections. Against the
objection that this is a heretical modalism, Pickup argues that it
can't be, since here God's being three Persons is a
fundamental metaphysical fact, not a derivative fact. Further, each
Person is a real person, as real as any human person (426-7).
Still, one may think the account denies the reality of the Persons.
Pickup clarifies that they are not so many distinct realities, but
rather each just is God. Nor are they parts of God. When this theory
countenances the "distinctness of the Persons" it is not
implying the numerical distinctness of the Persons, but only the
distinctness of the person spaces they occupy (p1, p2, and p3). But as
explained above, person spaces are not understood to be real entities
(435).
Second, it can be objected that catholic tradition demands the
numerical distinctness of the Persons. Pickup denies that it does,
since it aims to avoid tritheism and affirms that each just is God
(427).
Third, Person-collapse entails claims any trinitarian should deny,
including that the Father just is the Son, and that the Spirit is
three Persons. (Being numerically the same as God, since God is
three Persons, the Holy Spirit will be three Persons.) Pickup concedes
that at first glance such claims "sound bad", but replies
that any "Latin" account of the Trinity will as such have
to accept such claims (428).
Fourth, it is self-evident that if any x and any y are numerically
identical, it follows that x and y can't ever differ. But
arguably any account of the Trinity must allow that the Persons differ
at least in respect of origin, so that only the Son is begotten, only
the Spirit proceeds, and only the Father begets the Son.
In reply, Pickup argues that metaphysicians who accept the possibility
of extended simples also accept that such "can be heterogeneous:
that they can have different properties at the different locations at
which they exist" (429). Metaphysicians differ in how they try
to show that this is possible (432-4). But according to Pickup,
it is plausible that there are "fundamental distributional
properties". An example of a distributional property is an
object being polka-dotted, which requires that it is one color in some
places and another color in other places. What is controversial is
that such properties can be fundamental, that they can belong to the
basic metaphysical structure of reality, rather than being explained
as no more than various smaller items having non-distributional
properties (430). Thus, we should not think that an extended simple
which is heterogeneous both has and lacks a property, being F and not
being F. Rather, we should think that it has a single distributional
property of being F at some of its locations and not being F at
others. If, for instance, an extended simple could be colored, it
might be polka-dotted, where this is understood as a fundamental
property, rather than, for instance, black in some spots and not-black
in others. Thus, we avoid saying that one and the same object at a
single time is and is not black (429-31).
Applied to the Trinity, Pickup suggests that what at first look like
different properties (generating the Son, being generated, and being
spirated) really amount to a single distributional property of God,
which he calls "generation". This is the property of
generating the Son at p1, being generated at p2, and being spirated at
p3. This is not a matter of God being different at different person
spaces; rather, at each space God has the distributional property
called "generation" (431). In this way, the theory does
not deny the indiscernibility of identicals, the principle that if any
x and any y are numerically identical, then they can't ever
differ. The persons of the Trinity, on this account, never do differ,
although it may seem that way at first glance.
An objection to this is that the Father's generation of the Son
implies that the Father is logically or causally prior to the Son, and
a distributional property can't account for this. Pickup
suggests in reply that perhaps instead it is p1 which is prior to p2,
or perhaps God's occupying p1 is prior to his occupying p2
(432).
Finally, it can be objected that necessarily, any person just is a
certain entity, and that even if this is false, still, it seems that
necessarily persons and entities can't be separated, so that if
anything is one person it must be one entity as well (436). In reply,
Pickup denies that such claims are true, and suggests that
conceptually, it seems that one may count persons and beings
differently, even in merely human cases like the fictional Jekyll and
Hyde, which he takes to be an instance of one entity that is two
persons (436, 420). Pickup also argues that it is a virtue of this
account that it doesn't specify what it takes to be a person
(436).
One may object that the suggested paraphrase or interpretation of the
statements that no Person is either of the others seems to change the
subject from divine Persons to imaginary "person spaces".
Again, the cost of denying the numerical distinctness of the Persons
may be too high for many trinitarians to accept. And it would seem
that trinitarians are committed to many differences between the
Persons other than the properties or relations relating to origin. The
Son died, but the Father did not. The Holy Spirit descended upon Jesus
at his baptism, but the Father and Son did not. The Father at
Jesus's baptism said "This is my beloved Son", but
the Son and Spirit did not. It is not clear that all of these seeming
differences can be understood as really involving fundamental
distributional properties of God.
### 1.7 Difficulties for One-self Theories
Any one-self theory is hard to square with the New Testament's
theme of the interpersonal relationship between Father and Son.
(Layman 2016, 129-30; McCall 2010, 87-8, 2014c,
117-27; Plantinga 1989, 23-7) Any one-self theory is also
hard to square with the Son's role as mediator between God and
humankind (Tuggy and Date 2020, 122-3). These teachings arguably
assume the Son to be a self, not a mere mode of a self, and to be a
different self than his Father. Theories such as Ward's (section
1.3
above), which make the Son a mere mode, make him something less than
a self, whereas others (see section
1.6)
make him a self, but the same self as his Father. Either way, the Son
seems not to be qualified either to mediate between God and humankind,
or to be a friend of the one he calls "Father".
Again, some traditional incarnation theories seem to assume that the
eternal Son who becomes incarnate (who enters into a hypostatic union
with a complete human nature) is the same self as the historical man
Jesus of Nazareth. But no mere mode could be the same self as
anything, and the New Testament seems to teach that this man was sent
by *another* self, God.
Some one-self theories run into trouble about God's
relation to the cosmos. If God exists necessarily and is essentially
the creator and the redeemer of created beings in need of salvation,
this implies it is not possible for there to be no creation, or for
there to be no fallen creatures; God could not have avoided creating
beings in need of redemption. One-self trinitarians may get around
this by more carefully specifying the properties in question: not
*creator* but *creator of anything else there might be*,
and not *redeemer* but *redeemer of any creatures in need of
salvation there might be and which he should want to save*.
### 1.8 The Holy Spirit as a Mode of God
Some ancient Christians, most 17th-19th century unitarians,
present-day "biblical unitarians", and some modern
subordinationists such as the Jehovah's Witnesses hold the Holy
Spirit to be a mode of God--God's power, presence, or
action in the world. (See the supplementary document on
unitarianism.)
Not implying modalism about the Son, this position is harder to
refute on New Testament grounds, although mainstream theologians and
some subordinationist unitarians reject it as inconsistent with New
Testament language from which we should infer that the Holy Spirit is
a self (Clarke 1738, 147). Modalists about the Spirit counter with
other biblical language which suggests that the "Spirit of
God" or "Holy Spirit" refers to either God himself,
a mode of God (e.g., his power), or an effect of a mode of God (e.g.,
supernatural human abilities such as healing). (See Burnap 1845,
226-52; Lardner 1793, 79-174; Wilson 1846, 325-32.)
This exegetical dispute is difficult, as all natural languages allow
persons to be described in mode-terms ("Hillary is Bill's
strength.") and modes to be described in language which
literally applies only to persons. ("God's wisdom told him
not to create beer-sap trees.")
## 2. Three-self Theories
One-self Trinity theories are motivated in part by the concern that if
there are three divine selves, this implies that there are three gods.
Three-self theories, in various ways, deny this implication. They hold
the Persons of the Trinity to be selves (as defined above, section
1.1).
A major motivation here is that the New Testament writings seem to
assume that the Father and Son (and, some also argue, the Holy Spirit)
are different selves (e.g. Layman 2016, 131-2).
### 2.1 Relative Identity Theories
Why can't multiple divine selves be one and the same god? It
would seem that by being the same god, they must be numerically the
same entity; "they" are really one, and so
"they" can't differ in any way (that is, this one
entity can't differ from itself). But then, they (really: it)
can't be different divine selves.
Relative identity theorists think there is some mistake in this
reasoning, so that things may be different somethings yet the same
something else. They hold that the above reasoning falsely assumes
something about numerical sameness. They hold that numerical sameness,
or identity, either can be or always is relative to a kind or
concept.
Relative identity theorists are concerned to rebut this sort of
argument:
1. The Father is God.
2. The Son is God.
3. Therefore, the Father is the Son.
If each occurrence of "is" here is interpreted as identity
("absolute" or non-relative identity), then this argument
is indisputably valid. Things identical to the same thing must also be
identical to one another. The relative identity trinitarian argues
that one should read the "is" in 1 and 2 as meaning
"is the same being as" and the "is" in 3 as
meaning "is the same divine Person as". Doing this,
one may say that the argument is invalid, having true premises
but a false conclusion.
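The invalidity claim can be illustrated with a toy countermodel, sketched here in Lean 4 (the construction is mine, not any cited author's): take a three-element domain, let "is the same being as" be the universal relation, and model "is the same divine Person as" as plain identity. The premises then hold on their reading while the conclusion fails on its reading.

```lean
-- Toy countermodel for the relative-identity reading of the argument.
inductive P | father | son | god  -- three items in the domain
-- "is the same being as": the universal relation on P (all are one being).
def sameBeing (_ _ : P) : Prop := True
-- Premises 1 and 2, on the "same being" reading, both hold:
example : sameBeing P.father P.god ∧ sameBeing P.son P.god :=
  ⟨trivial, trivial⟩
-- Conclusion 3, on the "same divine Person" reading (here plain "="), fails:
example : P.father ≠ P.son := fun h => nomatch h
```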
These theorists reject another response to the above argument, which
would be to hold that it is invalid because 1 and 2 mean that each is a
mode of God (see section
1),
and while these claims are true, they don't imply 3,
since the Father and the Son are two different modes. Against
this, the theories of this section assume that the three Persons of
the Trinity are three selves (Rea 2009, 406, 419; van Inwagen 1995,
229-31).
Following Rea (2003) we divide relative identity trinitarian theories
into the pure and the impure. Pure theories accept (1) either that
there is no such relation as absolute identity or that such statements
are definable in terms of relative-identity relations, and (2) that
trinitarian statements of sameness and difference (e.g. the Father is
God, the Father is not the Son) are to be analyzed as involving
relative and not absolute identity relations, whereas the impure
theories accept only (2), allowing that statements about absolute
identity (e.g. a = b) may be both intelligible and true, against (1).
(434-8)
#### 2.1.1 Pure Relative Identity Theories
Peter Geach (1972, 1973, 1980) argues that it is meaningless to ask
whether or not some a and b are "the same"; rather,
sameness is relative to a sortal concept. Thus, while it is senseless
to ask whether or not Paul and Saul are identical, we can ask whether
or not Paul and Saul are the same human, same person, same apostle,
same animal, etc. The doctrine of the Trinity, then, is construed as
the claim that the Father, Son, and Holy Spirit are the same
*God*, but are not the same *Person*. They are
"God-identical but Person-distinct" (Rea 2003, 432).
As Joseph Jedwab explains, traditional Trinity language and
commitments arguably lead naturally to a relative identity
account.
>
> *Prima facie*, the doctrine of the Trinity implies the sortal
> relativity of identity thesis, which says that where
> "*R*" and "*S*" are sortals, it
> could be that for some x and y, x and y are the same *R* but
> different *S*s. The Father and the Son are the same God, else
> they are two Gods, which implies polytheism and so is false. But the
> Father and the Son are different divine Persons, else they are one
> divine Person, which implies the Sabellian heresy and so is false. So
> the Father and the Son are the same God but different divine Persons.
> (2015, 124)
>
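Jedwab's sortal-relativity thesis admits a rough formal gloss (a sketch, not from the source; R and S range over sortals, and f and s stand for the Father and the Son):

```latex
% Sortal relativity of identity: for some sortals R and S,
% some x and y are the same R but different Ss.
\exists x\, \exists y\, (x =_{R} y \;\wedge\; x \neq_{S} y)

% The trinitarian instance: same God, different divine Persons.
f =_{\mathrm{God}} s \;\wedge\; f \neq_{\mathrm{Person}} s
```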
Geach's approach to the Trinity is developed by Martinich (1978,
1979) and Cain (1989). Jedwab (2015) criticizes Cain's version
as facing philosophical, theological, and christological
difficulties. Cain (2016) defends his more Geachian approach.
Pure relative identity trinitarianism depends on the controversial
claim that there's no such relation as (non-sortal-relative,
absolute) identity. Most philosophers hold, to the contrary, that the
identity relation and its logic are well understood, as expounded in
recent logic textbooks, and philosophers frequently
argue in ways that assume there is such a relation as identity (Baber
2015, 165; Layman 2016, 141). One might turn to a weaker relative
identity doctrine; outside the context of the Trinity, philosopher
Nicholas Griffin (1977; cf. Rea 2003, 435-6) has argued that
while there *are* identity relations, they are not basic, but
must be understood in terms of relative identity relations. On either
view, relative identity relations are fundamental.
It has been objected to Geach's claim about the senselessness of
asking if a and b are (non-relatively) "the same"
that,
>
> Given that we have succeeded in picking out something by the use of
> "*a*" and in picking out something by the use of
> "*b*" it surely is a complete determinate
> proposition that *a* = *b*, that is, it is surely either
> true or false that the item we have picked out with
> "*a*" is the item we have picked out with
> "*b*". (Alston and Bennett 1984, 558)
>
Rea objects that relative identity theory presupposes some sort of
metaphysical anti-realism, the controversial doctrine that there is no
realm of real objects which exists independently of human thought
(2003, 435-6). Baber replies that such worries are misguided, as
the only aim of relative identity theory should be to show a way in
which the Trinity might be coherent (2015, 170).
Trenton Merricks objects that if a and b "are the same F",
this implies that a is an F, that b is an F, and that a and b are
(absolutely, non-relatively) identical. But this widely accepted
analysis is precisely what relative identity trinitarians deny. This
leads to the objection that relative-identity trinitarian claims are
unintelligible (that is, we have no grasp of what they mean). If
someone asserts that Fluffy and Spike are "the same dog"
and denies that they're both dogs which are one and the same, we
have no idea what this person is asserting. Similarly with the claim
that Father and Son are "the same God" but are not
identical (Merricks 2006, 301-5, 321; cf. Tuggy 2003a,
173-4, Layman 2016, 141-2).
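The widely accepted analysis that Merricks appeals to, and that relative-identity trinitarians must reject, can be stated as a schema (a rough gloss, not from the source):

```latex
% Standard analysis of sortal sameness: being "the same F"
% entails being Fs that are absolutely identical.
x =_{F} y \;\rightarrow\; (Fx \,\wedge\, Fy \,\wedge\, x = y)
```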
Baber (2015) replies that if the sortal *dog* is
"dominant", meaning that for any sortal F, if x and y are
the same dog, they will also be the same F, then the claim that Fluffy
and Spike are the same dog but not absolutely identical *is*
intelligible. After all, we can understand that the claim implies that
Fluffy and Spike are the same animal, the same pet, and so on (167).
The relative identity trinitarian, Baber says, must hold that
"*Being* does not dominate [i.e. imply sameness with
respect to] *Person* but rather that *Person* dominates
*Being*". However, there's no easy way to prove
this, and dominance claims are theory-relative (*ibid.*). But
such a claim will just be a part of the relative identity
theorist's Trinity theory (169).
One may also object to either sort of relative identity account being
the historical doctrine on the grounds that only those conversant in
the logic of the last 120 years or so have ever had a concept of
relative identity. But this may be disputed; Anscombe and Geach (1961,
118) argue that Aquinas should be interpreted along these lines,
Richard Cartwright (1987, 193) claims to find the idea of
relative identity in the works of Anselm and in the Eleventh Council
of Toledo (675 C.E.), and Jeffrey Brower (2006) finds a similar
account in the works of Peter Abelard. (On Aquinas, see the
supplementary document on the history of trinitarian doctrines
section 4.)
Christopher Hughes Conn (2019) argues that Anselm was the first to
consciously develop a Trinity theory involving relative identity.
#### 2.1.2 Impure Relative Identity Theories: A Relative Identity Logic
Peter van Inwagen (1995, 2003) tries to show that there is a set of
propositions representing a possibly orthodox interpretation of the
"Athanasian" creed (see section
5.3)
which is demonstrably self-consistent, refuting claims that the
Trinity doctrine is obviously self-contradictory. He formulates a
trinitarian doctrine using a concept of relative identity, without
employing the concept of absolute identity or presupposing that there
is or isn't such a thing (1995, 241). Specifically, he proves
that the following eight claims (understood as involving relative and
never absolute identity, the names being read as descriptions)
don't imply a contradiction in his system of relative identity
logic.
* There is (exactly) one God.
* There are (exactly) three divine Persons.
* There are three divine Persons in one divine Being.
* God is the same being as the Father.
* God is a person.
* God is the same person as the Father.
* God is the same person as the Son.
* The Son is not the same person as the Father. (249, 254)
Van Inwagen neither endorses this Trinity theory, nor presumes to
pronounce it orthodox, and he admits that it does little to reduce the
mysteriousness of the traditional language.
It may be objected, as to the preceding theory, that van
Inwagen's relative identity trinitarianism is unintelligible.
Merricks argues that this problem is more acute for van Inwagen than
for Geach, as the former declines to adopt Geach's claim that
all assertions of identity, in all domains of discourse, and in
everyday life, are sortal-relative (Merricks 2006, 302-4).
Michael Rea (2003) objects that by remaining neutral on the issue of
identity, van Inwagen's theory allows that the three Persons are
(absolutely) non-identical, in which case "it is hard to see
what it could possibly mean to say that they are the *same
being*" (Rea 2003, 441). It seems that any things which are
non-identical are *not* the same being. Thus, van Inwagen must
assume that there is absolute identity, and deny that this relation
holds between the Persons. Thus, van Inwagen has not demonstrated the
consistency of (this version of) trinitarianism. Further, the theory
doesn't rule out polytheism, as it doesn't deny that there
are non-identical divine beings. In sum, the impure relative identity
trinitarian owes us a plausible and orthodox metaphysical story about
how non-identical beings may nonetheless be "one God", and
van Inwagen hasn't done this, staying as he has in the realm of
logic (Rea 2003, 441-2).
In a later discussion, van Inwagen goes farther, claiming that
trinitarian doctrine is inconsistent "if the standard logic of
identity is correct", and denying there is any "relation
that is both universally reflexive [i.e., everything bears the
relation to itself] and forces indiscernibility [i.e. things standing
in the relation can't differ]" (2003, 92). Thus,
there's no such relation as classical or absolute identity, but
there are instead only various relative identity relations
(92-3). In so doing he moves to a "pure" relative
identity approach to the Trinity, as described in section
2.1.1.
Many philosophers would object that whatever reason there is to
believe in the Trinity, it is *more* obvious that there's
such a relation as identity, that the indiscernibility of identicals
is true, and that we do successfully use singular referring terms.
Vlastimil Vohánka (2013) argues that van Inwagen has done
nothing to show the logical possibility of any Trinity theory. Just
because a set of claims can't be proven inconsistent in van
Inwagen's relative identity logic, it doesn't follow that
such claims don't imply a contradiction, or that it is
metaphysically possible that all the claims are true. At one point van
Inwagen tells a short non-theological story whose claims, when
translated into his relative identity logic, have the same forms as
the Trinity propositions. The story, he argues, is clearly not
self-contradictory; thus, he concludes, neither are the Trinity
propositions, since they have the same logical forms. In response,
Vohánka concocts a short non-theological story whose claims
translate into claims of the same form in relative identity logic, and
yet *are* clearly logically impossible (207-11). He
concludes that "there's no ground for thinking that formal
consistency in [relative identity logic] guarantees logical
possibility", and that "sharing a form in [relative
identity logic] with a logically possible proposition does not
guarantee logical possibility" (211-2).
#### 2.1.3 Impure Relative Identity Theories: Constitution Trinitarianism
Another theory claims to possess the sort of metaphysical story van
Inwagen's theory lacks. Based on the concept of constitution,
Rea and Brower develop a three-self Trinity theory according to which
each of the divine Persons is non-identical to the others, as well as
to God, but is nonetheless "numerically the same" as all
of them (Brower and Rea 2005a; Rea 2009, 2011). They employ an analogy
between the Christian God and material objects. When we look at a
bronze statue of Athena, we should say that we're viewing one
material object. Yet, we can distinguish the lump of bronze from the
statue. These cannot be identical, as they differ (e.g., the lump
could survive being smashed flat, but the statue couldn't). We
should say that the lump and statue stand in a relation of
"accidental sameness". This means that they needn't
be, but in fact are "numerically the same" without being
identical. While they are numerically one physical object, they are
two hylomorphic compounds, that is, two compounds of form and matter,
sharing their matter. This, they hold, is a plausible solution to the
problem of material constitution (Rea 1995).
Similarly, the Persons of the Trinity are so many selves constituted
by the same stuff (or something analogous to a stuff). These selves,
like the lump and statue, are numerically the same without being
identical, but they don't stand in a relation of
*accidental* sameness, as they could not fail to be related in
this way. Father, Son, and Spirit are three quasi form-matter
compounds. The forms are properties like "being the Father,
being the Son, and being the Spirit; or perhaps being Unbegotten,
being Begotten and Proceeding" (Rea 2009, 419). The single
subject of those properties is "something that plays the role of
matter," which Rea calls "the divine essence" or
"the divine nature" (Brower and Rea 2005a, 68; Rea 2009,
420). Whereas in the earlier discussion "the divine essence [is]
not... an individual thing in its own right" (Brower and
Rea 2005a, 68; cf. Craig 2005, 79), in a later piece, Rea holds the
divine nature to be a substance (i.e. an entity, an individual being),
and moreover "numerically the same" substance as each of
the three. Thus, it isn't a fourth substance; nor is it a fourth
divine Person, as it isn't, like each of the three, a
form-(quasi-)matter compound, but only something analogous to a lump
of matter, something which constitutes each of the Three (Rea 2009,
420; Rea 2011, Section 6). Rea adds that this divine nature is a
fundamental power which is sharable and multiply locatable. He
doesn't say whether it is either universal or particular,
saying, "I am unsure whether I buy into the universal/particular
distinction" (Rea 2011, Section 6). All properties, in his view,
are powers, and vice versa. Thus, this divine nature is both a power
and a property, and it plays a role like that of matter in the
Trinity.
This three-self theory may be illustrated as follows (Tuggy 2013a,
134).
![A representation of Constitution Trinitarianism](CT.png)
There would seem to be seven realities here, none of which is
(absolutely) identical to any of the others. Four of them are
properties: the divine nature (d), being unbegotten (u), being
begotten (b), and proceeding (p). Three are hylomorphic (form-matter)
compounds: Father, Son, and Holy Spirit (f, s, h), each with the
property d playing the role of matter within it, and each having its
own additional property (respectively: u, b, and p) playing the role
of form within it. Each of these compounds is a divine self. The ovals
can be taken to represent the three hylomorphs (form-matter compounds)
or the three hylomorphic compounding relations which obtain among the
seven realities posited. Three of these seven (f, s, h) are to be
counted as one god, because they are hylomorphs with only one divine
nature (d) between them. Thus, of the seven items, three are
properties (u, b, p), three are substances which are hylomorphic
compounds (f, s, h), and one is both a property and a substance, but a
simple substance, not a compound one (d).
Brower and Rea argue that their theory stands a better chance of being
orthodox than its competitors, and point out that a part of their
motivation is that leading medieval trinitarians such as Augustine,
Anselm, and Aquinas say things which seem to require a concept of
numerical sameness without identity. (See Marenbon 2007, Brower 2005,
and the supplementary document on the history of Trinity theories,
sections
3.3.2,
on Augustine, and
4.1,
on Thomas Aquinas.)
In contrast to other relative-identity theories, this theory seems
well-motivated, for its authors can point to something outside
trinitarian theology which requires the controversial concept of
numerical sameness without identity. This concept, they can argue, was
not concocted solely to acquit the trinitarian of inconsistency. But
this strength is also its weakness, for on the level of metaphysics,
much hostility to the theory is due to the fact that philosophers are
heavily divided on the reality, nature, and metaphysical utility of
constitution. Thus, some philosophers deny that a metaphysics of
material objects should involve constitution, since strictly speaking
there are no statues or pillars, for these apparent objects should be
understood as mere modes of the particles that compose them. Arguably,
truths about statues and pillars supervene on truths about
arrangements of particles (Byerly 2019, 82-3).
This Constitution theory has been criticized as underdeveloped,
unclear in its aims, unintelligible, incompatible with self-evident
truths, unorthodox relative to Roman Catholicism, polytheistic and not
monotheistic, not truly trinitarian, involving too many divine
individuals (primary substances), out of step with the broad
historical catholic tradition, implying that the Persons of the
Trinity can't simultaneously differ in non-modal and
non-temporal properties, not a theological improvement over simpler
relative identity approaches, and as wrongly implying that terms like
"God" are systematically ambiguous (Craig 2005; Hasker
2010b; Hughes 2009; Layman 2016; Leftow 2018; Pruss 2009, Tuggy
2013a).
#### 2.1.4 Impure Relative Identity Theories: Constitution and Indexicals
Scott Williams has constructed a similar theory, which he calls a
"Latin Social" account. In common with Leftow's
"Latin" theory and Hasker's "Social"
theory (see sections
1.5,
2.4), Williams says that there is one "concrete
instance or trope of the divine nature" which is a constituent
of each Person. Each Person is also constituted by an incommunicable
attribute, begetting (Father), being begotten (Son), and being
spirated (Spirit) (Williams 2017, 324). He understands each Person to
be "an incommunicable existence of an intellectual nature"
(326). In his view any person is a person "ontologically and
explanatorily prior to any cognitive acts or volitions that
*that* person in question has or might have" (Williams
2017, 327; cf. 2013, 2019). And for him, the Persons are persons. Each
Person is essentially numerically the same essence as the one divine
essence, while being a numerically different Person from the other two
Persons. Thus, the account involves irreducible relations of
kind-relative numerical sameness. But the divine Persons are not
(absolutely) numerically identical to one another, and each is not
(absolutely) numerically identical to the divine essence. This divine
essence is like an Aristotelian first substance in that it exists on
its own (not in another) and in being a concrete particular, but
unlike first substances it is communicable, in other words, it can be
shared by non-identical things, the divine Persons (Williams 2017,
326). The term "God" can refer to any of the Persons, or
to the divine essence. The term "Trinity" is a
plural-referring term which refers to the plurality of the divine
Persons (Williams 2013, 85). (See section
5.1.)
Williams considers it an axiom of trinitarian theorizing that
"the divine persons are necessarily unified or necessarily agree
regarding all things" (2017, 321). Some rival theories try to
account for this "necessary agreement thesis" by showing
how, allegedly, the Persons would have to come up with some policy
which would prevent disagreement. Williams finds such claims
"philosophically unsatisfying", and instead argues that
the three Persons can never disagree because they have numerically one
will, one power of choosing (322). Unlike any other three persons, the
Persons of the Trinity, because they share one divine nature, share
one set of powers, and so any exercise of any divine power belongs to
each of the three. In this case, Williams analyzes thinking as
producing and using a token sentence in what we might call divine
mentalese. Building on work by philosopher John Perry on indexical
terms like "I", Williams points out that a single token of
a sentence in English may be used by different agents, and may thus
have multiple meanings. For example,
>
> Suppose that Peter produces...a sign that reads, "I am
> happy," and that Peter
> uses this sign by holding it up.
> Peter affirms that Peter is happy. Later,
> Peter puts the sign on the ground and Paul picks up the
> same sign and holds it up such that Paul affirms that
> Paul is happy. Paul uses numerically the same token as Peter did,
> yet when Paul uses it he affirms
> something different than Peter. (Williams 2013, 81)
>
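The Peter/Paul sign case can be put in code (an illustrative toy, not from the source; the function `interpret` and its crude string substitution are invented here solely to model how one token's content varies with its user):

```python
# Perry-style indexicals: one and the same token sentence expresses
# different propositions depending on which agent uses it, because
# "I" is resolved by the context of use.

def interpret(token: str, user: str) -> str:
    """Resolve the indexical 'I am' in a token sentence to its user."""
    return token.replace("I am", f"{user} is")

sign = "I am happy"                      # a single token
peter_claim = interpret(sign, "Peter")   # Peter holds up the sign
paul_claim = interpret(sign, "Paul")     # later, Paul holds up the very same sign

print(peter_claim)  # Peter is happy
print(paul_claim)   # Paul is happy
```

The point carried over to the Trinity is that the token is shared while the contents expressed differ.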
Similarly, if divine Persons think using a language-like divine
mentalese, then one token of this may be used by different Persons and
have a different significance for each. The idea is that a person
relates to a proposition (the content of his thought) by means of a
token sentence which he produces and uses to think. But these mental
acts, given that the Persons share one set of powers, must be shared
by all three of them. Yet, the thoughts thereby thought will differ.
For example,
>
> ...if the Father uses a mental token of "I am God the
> Father" and in so doing affirms a proposition, then the Father
> affirms that God the Father is identical to God the Father. If the Son
> uses the same mental token of "I am God the
> Father"...the Son affirms the proposition that the Son is
> essentially numerically the same divine nature as the Father without
> being identical to the Father. (Williams 2017, 331)
>
This account denies what some philosophers assume to be obvious, that
"distinct and incommunicable intellectual acts and volitional
acts are necessary conditions for being a person" (339).
Williams rejects this as an ungrounded modern assumption. While it
employs recent thinking about indexical terms and other matters,
Williams considers this account to fit well with historical
theologians such as Gregory of Nyssa, Henry of Ghent, and John Duns
Scotus (345). That the persons share all mental acts does not imply
that they share one mind or that there is one consciousness in the
Trinity. Rather, the access consciousness, experiential consciousness,
and introspective consciousness of each Person may differ (2020,
Section 3).
A New Testament reader might question the assumption that the Persons
of the Trinity can't disagree, given the temptation of the Son
(but not of the Father) and an occasion when the Son asked the Father
to be excused from a difficult trial (Matthew 4:1-11; James
1:13; Mark 14:36).
In a response, William Hasker objects that it seems that sometimes
human beings can think without using any language. Why, then, should
we suppose divine Persons to think only by means of mental token
sentences? Perhaps they can just relate directly to propositions (the
contents of their thoughts). Worse, Williams posits that this divine
mental language is ambiguous, but Hasker says, "we would
naturally expect a divine language of thought to be very precise
indeed, perhaps maximally so" (Hasker 2018b, 364). He also
objects that the theory wrongly counts mental acts. Hasker imagines
that the Holy Spirit intends to become incarnate on some other planet,
and that before this Incarnation or the Incarnation of the Son, the
three of them together produce the mental token "I shall become
incarnate". Hasker urges that this seems to be two uses of the
same mental sentence, one by the Son and the other by the Spirit.
"To be aware of a proposition is precisely to perform a mental
act," and here the Son is aware of one, but the Spirit is aware
of another (365). But this clashes with Williams's claim that
there is but one mental power shared by the three.
Williams replies that divine mental tokens are needed "to
explain why a divine person's mental act is directed at (among
all possible propositions) the proposition it is directed at"
(Williams 2020, 115). Williams denies that ambiguity is always an
imperfection of a language, and urges that there is nothing
objectionable about divine Persons using mental tokens that can be
used to express various propositions (110-1). About
Hasker's allegation that the theory mis-counts the mental acts
of the Persons, Williams says that to the contrary, we should see but
one mental act here, though we should keep in mind the Persons'
background knowledge about who will become incarnate, which provides
the contexts relative to which the one token mental sentence means
different things. Moreover, "Why posit several mental acts here,
when one mental act will do the same explanatory work?" Finally,
Williams clarifies that the necessary sharing of divine acts does not
apply to "internal divine productions", such as the
Father's eternal generation of the Son (111-4,
115-6).
#### 2.1.5 Impure Relative Identity Theories: Episodic Personhood
Another relative identity theory by Justin Mooney (2020) depends on an
entirely different metaphysical account to show how multiple persons
may each be the same being. Metaphysician Ned Markosian proposes a
thought experiment in which a man dies and is mummified, and then a
long time later the mummy's parts are re-arranged into a living
woman who has an utterly different psychology than the dead man. The
point is that the woman is the same object as the man but is not the
same person as the man, because the instances of personhood in the
object's career aren't parts of a single episode of personhood (3-4).
Mooney applies Markosian's ideas about "identity under a
sortal" to the Trinity. On this account, "God is a single,
divine substance that is simultaneously or atemporally participating
in three distinct episodes of personhood, those of the Father,
Son, and Spirit" (5). Thus, each Person just is God, but none is
the same Person as any other divine Person. The account may be
illustrated by modifying the traditional Trinity shield:
![An illustration of Mooney's relative identity theory of the Trinity.](Moonity.png)
On this theory, being different Persons doesn't imply being
numerically distinct.
One may worry that such Persons must be one and the same Person since
they have but one substance between them, but Mooney answers,
following Swinburne (1994), that they are individuated by their causal
relationships (5). In addition, following Effingham, he says that they
are not one and the same Person because they aren't linked by
immanent causal relations (5-6). These are "those causal
links an entity bears to itself from one time to another whereby the
way it is earlier on causes how it is later on" (Effingham
2015, 35). Following Moreland and Craig (2017), Mooney adds that
God possesses three mental faculties, each had by one of the Persons
(6). Finally, adapting ideas from Swinburne (1994), he says that
>
> ...the Father's episode of personhood occurs simply because
> God is a divine being, and a divine being is essentially a personal
> being. By nature, God instantiates whatever psychological properties
> are necessary for being a person. The Son's episode of
> personhood occurs because the Father wills that there is an
> instantiation of personhood by the divine substance which is not
> immanent-causally linked to the Father's instantiation of
> personhood. And the Spirit's episode occurs because one or both
> of these persons will(s) that there is yet another instantiation of
> personhood by the divine substance which is not connected by immanent
> causal relations to either the Father's or the Son's
> instantiation of personhood (6).
>
He remains neutral on whether this process is temporal, and on whether
it is necessary (*ibid.*).
Unlike other relative identity theories, this account, like some
one-self theories, affirms the absolute identity of each Person with
God; each is the same thing or being or primary substance, God (7).
This generates a concern that the account may count as a heretical
modalism. Mooney replies that "if Markosian's episodic
view of personal identity is right, the model is not modalist"
(*ibid.*). The reason is that on this account there are three
episodes of personhood, which implies that there are three Persons,
even though there is one being which is the component thing in each
episode, a single subject of the properties that are involved in being
a Person.
Even though the account has it that these three are different Persons,
still, it identifies each with God, which entails their identity with
one another; being the same thing as God, they must be the same thing
as each other. Given this, it would seem that they can't differ
in any way, e.g. the Son becomes incarnate but the Father does not
(7). Mooney replies that the Trinity is mysterious, and that a
sentence like "The Son became incarnate but the Father
didn't" might be understood as not requiring a
simultaneous or eternal difference between the being that is the Son
and the one which is the Father. In Markosian's thought
experiment, one would think that person-names would track with the
different stages in the career of the one object, so that, say,
"Alice" would refer to the thing only in its latest
stages, and "Bob" would apply to it only in its pre-mummy
career. Thus, names like "Father" and "Son"
should refer to God only in one or the other of God's
Person-episodes. Mooney suggests, then, that "The Son became
incarnate but the Father didn't" will be true if and only
if the Son but not the Father is the same person as someone who became
incarnate, that is, God becomes incarnate in the Person-episode
associated with the name "Son" but not in the one
associated with "Father" (8).
This, Mooney argues, shows why when counting objects, we should count
by (absolute) identity, while when counting persons we should count by
the relation same-person. In the Trinity, then, we count one thing but
three Persons; the Persons are the same thing but different persons
(9). The account also solves his "problem of Triunity"
(2018), which is that as normally analyzed, these three statements
can't all be true, and yet arguably a trinitarian is committed
to all three of them:
1. God is triune.
2. The Son is God.
3. The Son is not triune. (2020, 10)
The solution is that even though "strictly speaking, the Son is
triune" since the Son just is God and God is triune, the meaning
of 3 is that "the Son is not the *same person* as anyone
who is triune", which is both true and consistent with 1 and 2
(10). Mooney adds that the property being triune should not be
confused with the property of being the same Person as someone who is
triune; only the first, in his view, is an essential divine attribute
(10-1).
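Mooney's problem of Triunity and his proposed reading of 3 can be schematized (a sketch under the notation introduced here: T for "is triune", g for God, s for the Son, and =_P for the same-person relation):

```latex
% As standardly analyzed, 1-3 are jointly inconsistent: given 2,
% claim 3 contradicts 1 by the indiscernibility of identicals.
(1)\; Tg \qquad (2)\; s = g \qquad (3)\; \neg Ts

% Mooney's reading of 3: the Son is not the same person as anyone
% who is triune -- which he holds to be true and consistent with 1 and 2.
(3')\; \neg \exists x\, (Tx \wedge s =_{P} x)
```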
Mooney argues that this account also solves problems relating to
divine processions and aseity. The Son, being God, must have the
property of aseity. But Mooney suggests that the Father's
generation of the Son doesn't explain the Son's existence
(which would rule out the Son's aseity), but only the Son's
being a person distinct from the Father (12).
The viability of this theory rests on a particular metaphysics of
personhood. One might think, contra Markosian and Mooney, that the
woman in the story is one being, the mummy is a second being
(even though composed of many or all of the same parts), and the man
who follows is a third being. Similarly, one may wonder whether
numerically distinct Persons can each be numerically the same as one
god. The theory implies the falsity of the principle that for any x
and y, if they are different Fs, then x is an F, y is an F, and
x ≠ y. One might also question the theory's way of dealing with
apparent differences between the (numerically identical) Persons; any
differences of the form "the Father is F but the Son is not
F" get analyzed as meaning "the Father is the same Person
as someone who is F and the Son is not the same Person as someone who
is F". Does the original claim really mean what the analysis
says?
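The counting principle that the theory must reject, as described in the paragraph above, can be stated schematically (a sketch, not from the source; F ranges over sortals):

```latex
% If x and y are different Fs, then each is an F and they are
% absolutely (non-relatively) non-identical.
(x \neq_{F} y) \;\rightarrow\; (Fx \,\wedge\, Fy \,\wedge\, x \neq y)
```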
### 2.2 20th Century Theologians and "Social" Theories
Some influential 20th-century theologians interpreted the Trinity as
containing just one self. (See section
1.3
above.) In the second half of the century, many theologians reacted
against one-self theories, criticizing them as modalist or as somehow
near-modalist. This period also saw the wide and often uncritical
adoption of a paradigm for classifying Trinity theories which derives
from 19th-century French Catholic theologian Théodore de
Régnon (Barnes 1995). On this paradigm, Western or Latin or
Augustinian theories are contrasted with Eastern or Greek or
Cappadocian theories, and the difference between the camps is said to
be merely one of emphases or "starting points". The
Western theories, it is said, emphasize or "start with"
God's oneness, and try to show how God is also three, whereas
the Eastern theories emphasize or "start with" God's
threeness, and try to show how God is also one. The two are thought to
emphasize, respectively, psychological or social analogies for
understanding the Trinity, and so the latter is often called
"social" trinitarianism. But this paradigm has been
criticized as confused, unhelpful, and simply not accurate to the
history of Trinitarian theology (Cross 2002, 2009; Holmes 2012; McCall
2003).
Although the language of Latin vs. "social" Trinity
theories has been adopted by many analytic philosophers (e.g. Leftow
1999; Hasker 2010c; Tuggy 2003a), these have interpreted the different
theories as logically inconsistent (i.e. such that both
can't be true), and not merely as differing in style, emphasis,
or sequence.
Some 20th century theological sources, accepting the de Regnon
paradigm, proceed to blame the Western tradition for
"overemphasizing the oneness" of God, and recommend that
balance may be restored by looking to the Eastern tradition. A number
of concerns characterize theologians in this 20th and 21st century
movement of "social" trinitarianism:
* Preserving genuinely interpersonal relationships between the
Persons of the Trinity, particularly the Father and the Son.
* Doing justice to the New Testament idea of Christ as a personal
mediator between God and humankind.
* Suspicion that the "static" categories of Greek
philosophy have in previous trinitarian theologies obscured the
dynamic and personal nature of the triune God.
* Concern that traditional or Western trinitarian theology has made
the doctrine irrelevant to practical concerns such as politics, gender
relations, and family life.
* The idea that to be Love itself, or for God to be perfectly
loving, God must contain three subjects or persons (or at any rate,
more than one). (See sections
2.3,
2.5, and
2.6.)
(For surveys of this literature see Karkkainen 2007; Olson
and Hall 2002, 95-115; Peters 1993, 103-45.) These writers
are often unclear about what Trinity theory they're endorsing.
The views seem to range from tritheism, to the idea that the Trinity
is an event, to something that differs only slightly, or only in
emphasis, from pro-Nicene or one-self theories (see section
1
and section
3.3
of the supplementary document on the history of trinitarian
doctrines). Merricks observes that some views advertised as
"social trinitarianism" make it "sound equivalent to
the thesis that the Doctrine of the Trinity is true but modalism is
false" (Merricks 2006, 306). However, a number of Christian
philosophers, and some theologians employing the methods of analytic
philosophy, have started with this literature and then proceeded to
develop relatively clear three-self Trinity theories, which are
surveyed here. They differ in how they attempt to secure monotheism
(Leftow 1999). There are many such Trinity theories, and it is not
clear that all the options have yet been explored (Davidson 2016).
### 2.3 Ersatz Monotheism
A problem for any three-self Trinity theory is that numerically three
selves are, it would seem, numerically three things. And according to
a theory of essences or natures, a thing which has or which is an
instance of an essence or nature is thereby a thing of a certain kind.
All Trinity theories include the Nicene claim that the Persons of the
Trinity have between them but one essence or nature, the divine one.
But it would seem that by definition a thing with the divine essence
is a god, and so three such things would be three gods.
Some three-self theories in effect concede that they imply tritheism
(three things, each of which has properties sufficient for being a
god), but argue that surely a correct Trinity theory can't avoid
the right type of tritheism, and can avoid any undesirable tritheism,
such as ones involving unequal divinity of the Persons, Persons which
are in some sense independent, or Persons who are in principle
separable (McCall 2010, 2014c; Plantinga 1988, 1989; Yandell 2010,
2015).
Richard Swinburne has long developed and defended a type of three-self
Trinity theory which in the eyes of most critics seems to be "a
fairly straightforward form of tritheism" (Alston 1997, 55; see
also Clark 1996; Davidson 2016; Feser 1997; Howard-Snyder 2015a;
Moreland and Craig 2017; Rea 2006; van Inwagen and Howard-Snyder 1998;
van Inwagen 2003, 88; Vohanka 2014, 56). In a series of
articles and books Swinburne's views have changed in significant
ways (Swinburne 1988, 1994, 2008, 2018), but this entry focuses on his
latest work on the Trinity.
Swinburne aims to build his theory on widespread traditional
agreements among most catholic theologians since at least
the fourth and fifth centuries (Swinburne 2018, Section 1). The
Persons of the Trinity are three beings, each a self which satisfies
Boethius's definition of a "person" as "an
individual substance (*substantia*) of a rational nature"
(421). Each is divine in that each has all the divine attributes.
"A divine person is naturally understood as one who is
essentially eternally omnipotent and exists (in some sense)
'necessarily'" (427). He argues that omnipotence
entails perfect goodness and omniscience. While all three of the
Persons exist necessarily (inevitably), the Father does this
independently while the Son and Spirit exist of necessity dependently,
because necessarily, the Father exists, and his existence implies that
he causes them (437, n. 14). These actions of the Father are
inevitable and not voluntary, but they are via the Father's will
(425, 428). This causing is traditionally described as the eternal
generation of the Son by the Father, and the eternal proceeding of the
Spirit from the Father, or from the Father and the Son. For Swinburne,
the Son is "caused by the Father alone" while the Spirit
is "caused by the Father and/or through the Son" (429).
The theory then is committed to one of these two models of the divine
processions.
![Father causing the Son and the two of them causing the Spirit, or the Father causing the Son and the Father causing the Spirit through the Son](Swincauses.png)
Swinburne has constructed a couple of versions of an argument which
purports to show why, if there is at least one divine being or Person,
there must be exactly three, with the second and third being caused
ultimately by the first. In other words, given that it is possible
that there be a divine Person, it is metaphysically impossible that
there be only one, and it is metaphysically necessary that there be
exactly three. Most trinitarians have assumed that such an argument is
neither possible nor desirable, as the Trinity can be known only by
divine revelation. Against this, Swinburne says that "even if
you regard the New Testament as an infallible source of doctrine, you
cannot derive from it a doctrine of the Trinity", because when
it comes to passages about the Spirit,
>
> ...there are non-Trinitarian ways of interpreting...[these]
> which are just as plausible as interpreting them as expressing the
> doctrine that the Holy Spirit is a divine person...So unless
> Christians today recognize some good a priori argument for a doctrine
> of the Trinity (and most of them do not recognize such an argument),
> or unless they consider that the fact that the subsequent Church
> taught a doctrine of the Trinity is a significant reason for
> interpreting the relevant passages in a Trinitarian way, it seems to
> me that most Christians today (that is, those not acquainted with any
> a priori argument for its truth) would not be justified in believing
> the doctrine. (419-20)
>
One may wonder if there could be two omnipotent beings; there have
been arguments from theism (at least one god) to monotheism (exactly
one god) based on the idea that it'd be impossible for there to
be more than one who is omnipotent. (See the Monotheism entry, section
5.)
Suppose that one omnipotent being willed a certain object to move and
simultaneously another omnipotent being willed that it should remain
in place. It would seem that whether the object moves or stays in
place, one of the beings' wills is thwarted, so that, contrary
to our stipulation, one of them fails to be omnipotent. Swinburne
argues that such conflicts of will are impossible given the
omniscience, perfect goodness, and causal relations of the omnipotent
beings. In his view, in causing the Son and Spirit, the Father must
"lay down the rules determining who has the right to do which
actions; and the other members of the Trinity would recognize his
right, as the source of their being to lay them down" (428).
Inspired by similar arguments given by Richard of St. Victor,
Swinburne argues that a divine Person must be perfect in love. But
>
> ...perfect love must be fully mutual love, reciprocated in kind
> and quantity, involving total sharing, the kind of love involved in a
> perfect marriage; and only a being who could share with him the rule
> of the universe could fully reciprocate the love of another such.
> ...it would be a unique best action for the Father to cause the
> existence of the Son, and so inevitably he would do so. ...at
> each moment of everlasting time the Father must always cause the Son
> to exist, and so always keep the Son in being. (429-30)
>
Thus, if there is one divine Person, there must also be another.
Further, there must be a third, for
>
> A twosome can be selfish. ...Perfect love for a
> beloved...must involve the wish that the beloved should be loved
> by someone else also. Hence it will be a unique best action for the
> Father to cause the existence of a third divine being whom Father and
> Son could love and by whom each could be loved. Hence the Holy Spirit.
> And I suggest that it would be best if the Father included the Son as
> co-cause (as he is of all other actions of the Father) in causing the
> Spirit. And again they must have caused the Spirit to exist at each
> past moment of everlasting time. Hence the Trinity must always have
> existed. (430)
>
What stops this process of deity-proliferation from careening into
four, seventy-four, or four million divine Persons? Swinburne replies
that it is *not* better to cause four (or more) divine Persons
than it is to cause three, since
>
> ...when there is an infinite series of incompatible possible good
> actions, each better than the previous one, available to some agent,
> it is not logically possible that he do the best one - because
> there is no best action. An agent is perfectly good in that situation
> if he does any one of those good actions. So since to bring about only
> three divine persons would be incompatible with an alternative action
> of bringing about only four divine persons, and so generally, the
> perfect goodness of the Father would be satisfied by his bringing
> about only two further divine persons. He does not have to bring about
> a fourth divine person in order to fulfil his divine nature. To create
> a fourth divine person would therefore be an act of will, not an act
> of nature. But then any fourth divine person would not exist
> necessarily in the sense in which the second and third divine persons
> exist necessarily - his existence would not be a necessary
> consequence of the existence of a necessary being; and hence he would
> not be divine. So there cannot be a fourth divine person. There must
> be and can only be three divine persons. (430-1)
>
In sum, divinity implies "perfect love", which implies
exactly three divine Persons.
Lebens and Tuggy (2019) object that such arguments trade on the
ambiguity of "perfect love". Divinity, by implying moral
perfection, implies the character trait of being perfectly loving. But
someone may have this and yet not be in the sort of interpersonal
relationship that Swinburne describes as "perfect love".
(See also Tuggy 2015.) Using familial analogies, Brian Leftow
challenges Swinburne's claim that the three would lack an
overriding reason to produce a fourth, noting that "Cooperating
with two to love yet another is a greater 'balancing act'
than cooperating with one to love yet another" (1999, 241).
Tuggy (2004) objects that if a three-self theory like
Swinburne's were true, it would seem that one or more members of
the Trinity have wrongfully deceived us by leading us to falsely
believe that there is only one divine self. He also argues that the
New Testament writings assume that "God" and "the
Father of Jesus" (in all but a few cases) co-refer, reflecting
the assumption that God and the Father are numerically the same. (See
also Tuggy 2014, 2019.) Denying this last claim, he argues, amounts to
an uncharitable and unreasonable attribution of a serious confusion to
the New Testament writers and (if they're to be believed) to
Jesus as well. These arguments are rebutted by William Hasker (2009),
and the exchange continues in Hasker 2011, Tuggy 2011b, and Tuggy
2014.
But as mentioned at the outset, the most common objection to
Swinburne's Trinity theory is that it is tritheism and not
monotheism. Looking, for instance, at this account of divine
processions, a reader wonders why this doesn't amount to one god
eternally causing a second god, and with that second god eternally
causing a third god. To assuage such concerns, Swinburne argues that
on his model of the Trinity, it is natural to say that there is
"one God". Swinburne observes that the Greek
*theos* (and equally the Latin *deus*) may be used
either as a name, a singular referring term picking out a certain
individual thing, or as a predicate, a descriptive word equivalent in
meaning to "divine", which might in principle be applied
to more than one thing. He then notes,
>
> While no doubt the Fathers of the [381] Council did not have a clear
> view of what was the sense in which there is just one
> "God" and the sense in which each of the three beings is
> "God" the distinction between the two senses of the
> crucial words makes available one obvious way of resolving the
> apparent contradiction. This is by thinking of these words as having
> the former sense [i.e. referring to one thing like a name] when the
> Creed says that there is "one God", and as having the
> latter sense [i.e. being equivalent to the adjective
> "divine"] when it claims that each of the beings "is
> God." Thus understood, the Creed is saying that there is one
> unique thing which it names "God," which consists of three
> beings. (420)
>
The suggestion is that the tradition is somewhat confused, but that
charitably, we should think its talk of "one God" should
be understood as referring to the Trinity. (However, see the opening
line of the Nicene creed.) What sort of thing is this Trinity? It is
not a divine Person, and is not a thing (person or not) with the
divine essence. Rather, it is a thing of which the three divine beings
(selves, persons) are proper parts (425). Despite this complex entity
not having the divine essence (and so, not being a god), Swinburne
sometimes refers to it as "God himself" (424). He argues
that these three beings can't help but cooperate, and so agrees
with the traditional claim that apart from the aforementioned eternal
causings of one another, any act of one Person of the Trinity is an
act of all three Persons (425). In sum, "This common
omnipotence, omniscience, and perfect goodness in the community of
action makes it the case that in a natural sense there is one
God" (428). That is, given the three divine beings described
above, it is "natural", when it comes to the name-like use
of the word "God," to apply the term to that thing which
is the whole consisting of those three Persons. But it remains that
there are three things here each of which is divine, and that this
whole is not itself divine; it's hard to see why this is
monotheism and not tritheism.
Brian Leftow objects that in Swinburne's account God is not
itself divine. Nor does it make sense to worship it, as it is not the
sort of thing which can be aware of our addressing it. Further, the
issue of monotheism isn't the issue of how unified the divine
beings are, but rather of how many there are.
>
> ...it is hardly plausible that Greek paganism would have been a
> form of monotheism had Zeus & Co. been more alike, better behaved,
> and linked by the right causal connections. (Leftow 1999, 232; cf. Rea
> 2006)
>
Moreover, Swinburne's theory entails serious inequalities of
power among the Three, jeopardizes the personhood of each, and carries
the serious price of allowing (contrary to most theists) that a divine
being may be created, and the possibility of more than one divine
being (Leftow 1999, 236-40).
Daniel Howard-Snyder (2016) argues that Swinburne is committed to
descriptive polytheism, normative polytheism, and cultic polytheism,
and so is a "polytheist *par excellence*". He also
argues that Swinburne's account of the Trinity is
unorthodox.
Daniel Spencer (2019) argues that the several factors which Swinburne
and others appeal to in order to lend some sort of unity to the three
persons are obviously inadequate to show how they amount to one God
and not three gods. At most, we get three divine beings who in some
ways resemble a god. Spencer observes that sometimes Swinburne simply
accepts tritheism, as when he says that there are three divine
individuals or beings (Spencer 2019, 192, 198 n. 2; Swinburne 1994,
170, 179). In his first treatment of the subject Swinburne talks of
"three Gods" (Swinburne 1988, 234). In later writings he
doesn't use that phrase, but his conception of the Persons is
substantially the same. Spencer observes that in principle, making the
Persons proper parts of a whole which is the only God might do the
trick (195-6), and Swinburne does suggest that there is a
part-whole relationship between the Persons and God; however, for
Swinburne the whole is not a god.
Perhaps the most sympathetic voice in the literature is William Hasker
(2013, Chapter 18), but in the end he agrees that Swinburne has not
done enough to unify the Persons. (Hasker 2013b, Chapters 25-8,
2018, 5-7; Swinburne 2014.)
### 2.4 Trope-Constitution Monotheism
William Hasker (2013b) has constructed what is arguably the most
developed three-self theory of the Trinity. As with Swinburne, his
thoughts have developed over decades (Tuggy 2013b), but this entry
will focus on his recent publications. For Hasker, following
Plantinga, the Persons of the Trinity are "distinct centers of
knowledge, will, love, and action...*persons* in some full
sense of the term" (22; cf. Chapter 24). Hasker argues that such
a view is widespread in ancient sources, including Gregory of Nyssa
and Augustine (Chapters 4-5, 9). While we can't reasonably
retain the ancient doctrine of divine simplicity (Chapter 7, 2016,
2018a, 7-8, 18-9), we ought to uphold as many of the
traditional claims as possible, for we should assume divine guidance
of theological development, even though "the Church's
doctrine of the Trinity is not as such to be found in the New
Testament" (8). The "fathers" of the late fourth
century should be seen as "the giants on whose shoulders we need
to stand" (10). In the second part of his book (Chapters
11-20) Hasker interacts with a number of Trinity theories,
attempting to salvage whatever is correct in them for use in his own
three-self theory; he incorporates ideas particularly from Leftow,
Craig, Rea, and Swinburne.
For Hasker, the Persons of the Trinity are three divine selves
(Chapters 22-5). Against a modern Protestant trend, Hasker
insists that a doctrine of processions must be retained, arguing that
it enjoys "significant support" from scripture (217), and
he points out the awkwardness of accepting "the main results of the
[ancient] trinitarian controversy" while thinking that this
"developmental process...had at its heart a fundamentally
wrong assumption", that is, that the Son and Spirit exist
because of the Father (222-3).
Hasker spends several chapters (25-8) addressing the question:
"in virtue of what do the three persons constitute *one*
God?" (203). The three enjoy some sort of unity of will and
fellowship, and they are united in that the second and third exist and
have the divine nature because of the first, but such factors
don't, by themselves, imply that they somehow amount to a single
god. Hasker holds that a crucial factor is the idea of their shared
divine nature as a concrete property or trope. Following Craig,
sometimes Hasker characterizes this concrete divine nature as a divine
mind or soul. He argues that for all we know, it is possible for one
such trope of divinity "to support simultaneously three distinct
lives" which belong to the Persons (228). He argues that this
possibility is indirectly supported by split-brain and
multiple-personality phenomena in human psychology. He takes these to
show that "It is possible for a single concrete human
> nature - a single trope of humanness - to support
simultaneously two or more centers of consciousness" (236).
This supporting or sustaining relation, Hasker says, may optionally be
specified to involve the divine nature constituting each Person
(Chapter 28).
>
> We shall say, then, that the one concrete divine nature sustains
> eternally the three distinct life-streams of the Father, Son, and Holy
> Spirit, and that in virtue of this the nature *constitutes*
> each of the persons although it *is not identical* with the
> persons. (244)
>
Constitution is defined here as asymmetric, so none of the Persons
also constitutes the divine nature (245). In a later discussion, he
seems to make constitution central to the theory (Hasker 2018a).
Adapting work on the metaphysics of material constitution by Lynne
Rudder Baker, Hasker offers this definition:
Suppose x has F as its primary kind, and y has G as its primary kind.
Then x constitutes y just in case
i. x and y have all their parts in common;
ii. x is in "G-favorable circumstances";
iii. necessarily, if an object of primary kind F is in G-favorable
circumstances there is an object of primary kind G that has all its
parts in common with that object; and
iv. it is conceptually possible for x to exist but for there to be no
object of primary kind G that has all its parts in common with x.
(Hasker 2018a, 16-7, cf. 2013b, 241-3; Howard-Snyder
2015b, 108-9)
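The logical shape of the definition can be displayed compactly as follows. The symbolization is ours, not Hasker's: \(\mathrm{parts}(x)\) abbreviates the collection of \(x\)'s parts, \(\mathit{Fav}_G(x)\) abbreviates "x is in G-favorable circumstances", and the subscripted diamond marks conceptual (rather than metaphysical) possibility.

```latex
% Sketch of the four-clause definition in quantified form
% (our notation, not Hasker's own).
% F, G: primary kinds; Fav_G(x): "x is in G-favorable circumstances";
% parts(x): the parts of x; \Diamond_c: conceptual possibility.
\[
\begin{aligned}
x \text{ constitutes } y \;\leftrightarrow\;
  {}& \mathrm{parts}(x) = \mathrm{parts}(y)                      && \text{(i)}\\
\wedge\; & \mathit{Fav}_G(x)                                     && \text{(ii)}\\
\wedge\; & \Box\,\forall z\,\bigl[(Fz \wedge \mathit{Fav}_G(z))
    \rightarrow \exists w\,(Gw \wedge
    \mathrm{parts}(w)=\mathrm{parts}(z))\bigr]                   && \text{(iii)}\\
\wedge\; & \Diamond_{c}\,\bigl[E!x \wedge \lnot\exists w\,(Gw \wedge
    \mathrm{parts}(w)=\mathrm{parts}(x))\bigr]                   && \text{(iv)}
\end{aligned}
\]
```

Note that condition iv involves only conceptual possibility; as discussed below, Hasker denies the corresponding metaphysical possibility.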
Applying this doctrine of non-material constitution to the Trinity, to
say that the divine nature constitutes the Father is to say that the
two have all their parts in common and that the nature is in
divine-Person-favorable circumstances. For a thing of the type
"divine mind/soul or concrete nature" to be in
"divine trinitarian Person"-favorable circumstances means
that there is a divine trinitarian Person which has all his parts in
common with the first thing, and that it is conceivable that the first
thing exists even though there is nothing of the type "divine
trinitarian Person" that has all its parts in common with it.
Hasker clarifies that in his view all the entities mentioned here are
simple (lacking in proper parts), so they will be what metaphysicians
call improper parts of one another, satisfying condition i. He also
clarifies that the conceptual possibility in condition iv does not
imply metaphysical possibility; Hasker denies that it is
metaphysically possible for the divine nature to exist while no divine
Person exists (18). He adds that
>
> The divine nature constitutes the divine trinitarian Persons when it
> sustains simultaneously three divine life-streams, each life-stream
> including cognitive, affective, and volitional states. Since in fact
> the divine nature does sustain three such life-streams simultaneously,
> there are exactly three divine Persons. (2018a, 17)
>
Presumably, the divine-Person-favorable circumstances which the divine
nature is in consist in its supporting these life-streams.
Hasker argues that the "grammar" of the Trinity forbids a
Christian from saying things like "three gods" based on
there being three Persons each of which is divine (2013b, 247). Again,
although "God" in the New Testament nearly always refers
to the Father, one can't infer that the Father and God are
numerically one (248). With a nod to a mereological account of the
Persons and God, he says that "Each Person is wholly God, but
each Person is not the whole of God" (250; cf. 257). Hasker also
argues, plausibly, that the "Athanasian" creed can be read
as non-paradoxical if we realize that it is laying down rules about
what must be said and what must not be said (250-4).
In the end, as with Swinburne, the Trinity which is called
"God" is not literally a god, as it is not divine. But
Hasker suggests that
>
> ...in virtue of the closeness of their union, the Trinity is at
> times referred to as if it were a single person. The Trinity is
> divine, exhibiting all the essential divine attributes--not by
> possessing knowledge, power, and so on distinct from those of the
> divine persons, but rather in view of the fact that the Trinity
> consists precisely of those three persons and of nothing else. It is
> this Trinity which we are to worship, and obey, and love as our Lord
> and our God. (2013b, 258)
>
In an attack on theories of divine simplicity, in which he sets aside
considerations of God as Trinity, Hasker objects that if God is
simple, God is "dehumanized" in that God must lack certain
qualities which Christians should think God literally shares with
human beings, such as caring for and being responsive to his
creatures, and being able to either judge or forgive them (2016,
Section 5). But while none of those qualities implies being human,
each arguably implies selfhood. Yet Hasker denies that God is
literally a self.
Brian Leftow points out the oddness of ascribing a soul to God the
Trinity.
>
> This [soul] is not God. It is not a Person either. It is some other
> sort of concrete divine individual. We had not suspected that a spirit
> could have a soul; lo, God does! (Leftow 2018, 10)
>
Leftow also objects that the sentence "God is the ultimate
reality" seems to be true by definition. But on Hasker's
theory, this soul (a.k.a. the divine nature), which is not God, would
be the ultimate reality, being the source of the Persons and so of God
(the Trinity) (12). Again, Leftow objects that this theory is not
monotheistic; rather, the theory features three deities which we
can't describe as such because there is one object (the divine
nature/soul) which constitutes them (15).
Daniel Howard-Snyder objects that Hasker's talk of the nature
"supporting" or "sustaining" the lives or
"life-streams" of the Persons is unintelligible (2015b,
108-10). He also argues that it is unclear quite what
constitutes the Persons, as in various places Hasker says that this is
the divine mind/soul, the concrete divine nature (a trope of
divinity), and a single mental substance; and these would appear
to be different claims (110). Also, monotheism uncontroversially
implies that there is exactly one god. But Hasker forbids saying that
any of the Persons is a god. And by definition being a god implies
having the divine nature, and like others Hasker understands divinity
to imply perfection in knowledge, power to intentionally act, and
moral goodness; thus, divinity implies being a self. This,
Howard-Snyder says, is a necessary truth and one with which basically
all Christians agree. But Hasker's "God", whether
this is a community or a composite object, is not a self, and so is
not literally divine. But then, we've run out of candidates for
being the only god; if neither the Father, nor the Son, nor the Spirit
is a god, then it would seem that for Hasker there is no god!
Anticipating monotheism-related objections, Hasker lobs various
charges at Howard-Snyder (112-3), but in the end it seems that
Hasker's view is just that "God" can be spoken of
*as if* it were a self (114-5). Taking a term from recent
philosophy of mind, Howard-Snyder says that for Hasker God is a
"zombie", a merely apparent self which in fact lacks any
consciousness, any point of view, and any mentality (114). He
concludes that Hasker is not aiming for the sober metaphysical truth
about the Trinity but is instead settling for some sort of
"as-ifery". How, he asks, could it be more accurate to
describe God as "omnipotent and omniscient" than it is to
describe God as "powerless and ignorant" when on
Hasker's account God is straightforwardly the latter? (115)
Hasker replies that his claim that the divine nature supports the
lives of the Persons is no more unintelligible than is the claim that
"my desktop computer supports word processing"; to support
is to "maintain in being or in action; to keep up, keep
going" (Hasker 2018a, 11). Nor should it worry us that we
can't understand how this supporting works (12).
### 2.5 Trinity Monotheism
The Trinity monotheist says that even though there are three divine
Persons, there is one God because there is one Trinity (Moreland and
Craig 2017, 588; Craig 2006; Layman 1988, 2016). William Lane Craig
has defended the best known such theory. The aim is to go beyond mere
analogies, providing a literal model of how to understand traditional
trinitarian claims.
Craig and Moreland offer Cerberus, the three-headed dog from Greek
mythology, as "an image of the Trinity among creatures"
(592). The point of this fictional example is that Cerberus would be
one dog with three "centers of consciousness". Though only
parts of one dog, each head is literally canine. If we were to upgrade
the mental capacity of the three here, it would be one dog which is
three *persons*. And if we imagine that Cerberus survives
death, in that case we can't say that the three are one dog
because they have one body. In fact, we're now imagining one
(canine) soul which supports three persons. Change canine to divine,
and this is the model of the Trinity. (592-3)
>
> God is an immaterial substance or soul endowed with three sets of
> cognitive faculties each of which is sufficient for personhood, so
> that God has three centers of self-consciousness, intentionality, and
> will...the persons are [each] divine... since the model
> describes a God who is tri-personal. The persons are the minds of God.
> (Craig 2006, 101)
>
Only the Trinity, on this theory, is an instance of the divine nature,
as the divine nature includes the property of being triune. (See
section
5.1.)
Beyond the Trinity "there are no other instances of the divine
nature" (2017, 589). So if "being divine" means
"being identical with a divinity" (i.e., being a thing
which instantiates the nature divinity), then none of the Persons are
"divine". But the Father, Son, and Holy Spirit are each
"divine" in that they are parts of the one God, somewhat
as the bones of a cat are "feline", or as the heads of
Cerberus are "canine" (592).
But the theory makes the Persons "divine" in other
ways too. In a sense the theory divides the divine attributes between
the Persons and the Trinity.
>
> ...when we ascribe omnipotence and omniscience to God, we are not
> making the Trinity a fourth person or agent. Divine attributes like
> omniscience, omnipotence and goodness are grounded in the
> persons' possessing these properties, while divine attributes
> like necessity, aseity and eternity are not so grounded. With respect
> to the latter, the persons have these properties because God as a
> whole has them. For parts can have some properties in virtue of the
> wholes of which they are parts. ...The point is... [the
> persons'] deity seems in no way diminished because they are not
> instances of the divine nature. (590)
>
Like Swinburne (see section
2.3)
Craig argues that it is impossible for God to be a single Person
because "if God is perfectly loving by his very nature, he must
[eternally] be giving himself in love to another" (593). And
since God is free not to create, but must be loving another,
"the other to whom God's love is necessarily directed must
be internal to God himself" (*ibid.*). For Craig this is
a plausibility argument rather than a strict proof, in support of the
claim that the concept of a unipersonal God is incoherent. Unlike
Swinburne, he does not seem to think that this argument is important
to reasonable belief in the Trinity (Craig believes the Trinity can
somehow be derived from the Bible; on this see section 5.4), nor
does Craig mount a philosophical argument for why there must be
exactly three divine Persons.
One may object that this argument depends on an equivocation on the
phrase "perfectly loving". One who thinks that God is a
perfect being must hold that God has the character trait of being
perfectly loving, but this doesn't seem to imply the action of
perfectly loving (i.e. engaging in the best kind of loving
relationship with another) (Lebens and Tuggy 2019).
Daniel Howard-Snyder (2003) offers numerous objections to
Craig's theory. First, it can't avoid either polytheism or
different levels of divinity, either of which would make it
unorthodox. The Cerberus analogy is criticized on the grounds that it
would not be one dog with three minds, but rather, three dogs with
overlapping bodies. (This seems clear in the parallel case of human
conjoined twins; everyone considers them to be siblings, two humans
with overlapping bodies, not a human with two heads.) While
Craig's theory upholds (with the creeds) one divine substance,
by his own criteria each of the three Persons must be a substance as
well, and the account says that each Person is divine. Thus, the
theory implies polytheism (393-5). Here God is not a personal
being, in the sense of being numerically identical with a certain
self, even though it (God) has parts which are selves. Craig wants to
say, for example, that each of the three is all-knowing, and also that
God is all-knowing, in that God has parts which are all-knowing. But
Howard-Snyder objects that,
>
> ...there can be no "lending" of a property [i.e., a
> whole "getting" a property from one of its parts] unless
> the borrower is antecedently the sort of thing that can have
> it....[Therefore,] Unless God is antecedently the sort of thing
> that can act intentionally--that is, unless God is a
> person--God cannot borrow the property of creating the heavens
> and the earth from the Son....All other [statements involving]
> acts attributed to God [in the Bible] will likewise turn out to be,
> strictly and literally, false. (399-400)
>
According to Trinity monotheism, a thing can exemplify the divine
nature without itself being a self. Nor can divinity include
properties which require being a self, e.g., being all-knowing, being
perfectly free. This, Howard-Snyder argues, is "an abysmally
low" view of the divine nature, since "If God is not a
person or agent, then God does not know anything, cannot act, cannot
choose, cannot be morally good, cannot be worthy of worship"
(401).
Craig replies to Howard-Snyder's objection to the Cerberus
analogy that the claim that it represents three dogs is
"astonishing", as we all speak of two headed snakes,
turtles and such (Craig 2003, 102). While on Trinity monotheism God
isn't identical to any personal being, it doesn't follow
that God isn't "personal". He is personal in the
sense of having personal parts. Further, the view that God isn't
a self
>
> ...is part and parcel of Trinitarian
> orthodoxy...Howard-Snyder assumes that God cannot have such
> properties [i.e., knowledge, freedom, moral goodness,
> worship-worthiness] unless He is a person. But it seems to me that God
> can have them if God is a soul possessing the rational faculties
> sufficient for personhood. If God were a soul endowed with a single
> set of rational faculties, then He could do all these things. By being
> a more richly endowed soul, is God thereby somehow incapacitated?
> (105)
>
As to the charge of polytheism, Craig accuses Howard-Snyder of
confusing monotheism with unitarianism (106), i.e. assuming that the
existence of exactly one god entails that there is exactly one divine
self. Finally, Craig argues that the issue of whether or not the Three
count as parts of God is unimportant (107-13). Tuggy (2013b)
presses some of Howard-Snyder's objections, concluding that the
theory is either not monotheistic, or turns out to be a one-self
theory.
Stephen Layman (2016) has constructed a similar and arguably better
developed three-self Trinity theory. Motivated by the New Testament,
Layman says that the three Persons of the Trinity are three selves
(124-31). Each is "divine" in that he is a fitting
object of worship, and so is God the Trinity. God the Trinity is
literally a social entity, a concrete, primary substance which is
strongly analogous to a living thing, and which like a living thing is
a self-maintaining event (149-50). "Strictly speaking,
only the Trinity, the community of divine persons, is God, that is,
ruler of all" (148). Yet the Persons are "of one
substance" in that "each belongs to the kind *divine
being*", where this means a person which is a part of a god
(150-1, 165-6).
One may object that a social entity can't be a god, as such a
thing is merely an abstraction. Layman answers that social entities
are concrete, not abstract, and can intentionally act (159-60).
Intentionally acting requires having intentions, but social entities
may have these, even though they are not selves or even subjects of
consciousness. Social entities may have intentions because their parts
(i.e. various selves) have them. As a fallback, Layman suggests the
view that social entities may act even though they're incapable
of intentional action (159). Like Craig, Layman argues that the
Trinity can be omnipotent, perfectly good, and omniscient because its
persons are (160). Why then is the Trinity not a fourth divine person
(see section
3.1)?
>
> In order to count as a person, an entity must be able to refer to
> itself rightly with the first-person singular pronoun "I"
> (or its equivalent). And the Trinity can not do this. (161)
>
But doesn't the Bible portray God as a self who speaks in the
first-person? Layman concedes that the Old Testament does. But because
they believe in progressive divine revelation, Christians should read
the Old Testament as corrected by the New Testament. And in the New
Testament arguably there are "three divine persons (conscious
beings)" (164). Old Testament passages where "God"
speaks first-person should be read as the Father speaking on behalf of
the Trinity (*ibid.*).
The account is not polytheism because only the Trinity is God, and
because of the necessary unity of the three (160, 167). But
isn't "Every divine person is a god" true by
definition? No, because "divine" can mean relating to a
god (without being a god), and in this common meaning the Persons of
this theory are "divine". Similarly, a hand can be
"human" without itself being a human being (165).
How can the Son and Spirit be fully divine if each is caused by the
Father and so does not exist *a se*? Layman answers that
"the objector's intuition that *divinity requires
aseity* is not shared by those who drew up the [Nicene]
creed" (167). Further, "it seems to me that aseity is
clearly not essential to divinity, that is, it is not essential for
being worthy of worship" (168). The qualities of omnipotence,
omniscience, eternality, perfect goodness, and necessary existence are
sufficient to guarantee the worship-worthiness of the Persons
(*ibid.*).
Like Swinburne and Craig, Layman argues that a God who is a single
self is impossible. While aware that a theist may understand God to be
"perfectly loving" in the sense of having a perfect
disposition to love which doesn't have to be actualized, Layman
nonetheless asserts that
>
> There is...considerable plausibility in the claim that a truly
> solitary person who throughout all eternity never expressed any love
> for anyone would not be a perfectly loving person. (153)
>
Thus, given that God must be perfect independently of creation,
"a truly solitary person would not be divine, for it would not
be perfectly loving" (154). Additionally, Layman argues that it
is "inconceivable" that a divine Person should flourish
without loving another, and that surely only the love of finite selves
would not be enough (154-5). A solitary divine Person would be
"an appropriate object of pity" (155). Again, Layman
argues that the Bible suggests that a divine Person must have not only
splendor (exalted attributes) but also glory, "something at
least akin to fame, a kind of recognition, approval, or
appreciation" which is conferred by another (156). A solitary
divine Person would be lacking this glory; but presumably a divine
Person must have glory. Thus, there couldn't be just one divine
person (156-7).
Layman is skeptical about philosophical arguments purporting to show
why there must be exactly three divine Persons, but he thinks
he's shown why there can't be only one. The limit of
divine Persons to three, in his view, can only come from the Bible
(157-8).
Tuggy (2015) objects to arguments that there can't be a single
divine self based on divine happiness or flourishing, urging instead
that a divine self who exists and is perfect of himself would
automatically be well-off, happy, or flourishing despite lacking
countless important goods. It is too anthropomorphic, he argues, to
suppose that a god or a divine self, like a human, is a social animal
which can't flourish without interpersonal relationships.
### 2.6 Material Monotheism
Christopher Hughes (2009) suggests a theory much like the Constitution
theory (section
2.1.2
above) but without its controversial claim that there can be
numerical sameness without identity. On this picture,
>
> ...we have just one (bit of?) divine "matter," three
> divine forms, and three ("partially overlapping,"
> materially indiscernible but formally discernible) divine hylomorphs
> [compounds of form and matter]. ..."divine person" is
> true of the three hylomorphs, but... "God" is true of
> the (one and only) (bit of?) "divine matter." (2009, 309)
>
On this theory, "The Father is God," means that the Father
has God for his matter, or that the Father is "materiated
by" God, and "The Father is the same God as the Son"
means that these two are materiated by the same God
(309-10).
An objection is that the one God of Christianity is not supposed to be
a portion of matter. Hughes replies that *perhaps* it is
orthodox to say that God is a very unusual kind of matter (310).
Alternately, Hughes suggests a retreat from matter terminology, and
argues that Persons of the Trinity can't bear the same relation
they bear to one another that each bears to God. That is, it
can't be correct, for example, that Father and Son are
consubstantial, and that the Father and God are consubstantial. The
reason is that for two things to be consubstantial is for there to be
something which both are "substantiated" or
"ensubstanced" by. They are consubstantial
*because* they both bear this other relation to a third,
substantiating thing. Thus, e.g. "The Father is God" means
"The Father is (a person of the substance) God." Thus,
even though Father and Son are numerically two, still it can be true
that "There is just one (substance) God" (311).
On this alternate view, though, what does it mean to say that God is
the substance of a divine Person? Hughes suggests that the case is
analogous to material objects. A sweater and some wool thread are
"co-materiate" in that both are "materiated"
or "enmattered" by one portion of matter, though they are
numerically distinct (311; cf. 313). Hughes suggests it is an open
question whether this is a different theory, or just a restatement of
the first "in more traditional theological terminology."
It will be the latter "If we can stretch the notion of
'matter' far enough to cover God, and stretch the notion
of material substance (aka hylomorph) far enough to cover the divine
persons" (312). Hughes ends on a negative mysterian note (see
section
4.1 below),
claiming that it is an advantage of this last account that
ensubstancement is "a (very, though not entirely) mysterious
relation" (313).
Leftow (2018) objects that this theory features four things which are
divine, which is at least one too many. Further, on this account God
has, but the Persons lack, the divine attribute of aseity, which makes
the Persons "at best only second-class deities" (10).
### 2.7 Concept-relative Monotheism
Einar Bohn (2011) argues that trinitarian problems of
self-consistency vanish when one realizes that the Trinity "is
just an ordinary case of one-many identity" (363). He takes from
Frege the idea that number-properties are concept-relative. Thus,
>
>
> ...conceptualizing the portion of reality that is God as the
> Father, the Son, and the Holy Spirit, we have conceptualized it as
> being three in number, but it is nonetheless the same portion of
> reality as what we might conceptualize as God, and hence as being one
> in number. (366)
>
>
>
> There is no privileged way of conceptualizing [this portion of
> reality] in terms of which we can explain the other way. Both ways are
> equally legitimate. (369)
>
>
>
A difficulty for this approach is that most philosophers don't
think there can be one-many identity relations. Some think
identity is of necessity a one-one relation, although others
allow there can be many-many identity; for instance, it may be that
the three men who committed the robbery are identical to the three men
who were convicted of the robbery. Those who believe identity can be
one-many typically do so because they accept the controversial thesis
that composition (the relation of parts to a whole they compose)
should be understood as identity. Although Bohn does accept
that thesis (Bohn 2014), he argues that this Trinity theory
relies only on our having "a primitive notion of plural
identity" (371), that is, a concept we understand without
reference to any concept from mereological (parts and wholes) theory.
For example, we can recognize a certain human body to be identical to
a certain plurality of head, torso, two arms, and two legs. And we can
recognize that a pair of shoes is identical to a plurality of shoes
(365).
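The Fregean picture Bohn draws on can be set out schematically. The notation below is supplied only for illustration and is not Bohn's own:

```latex
% Number-properties are concept-relative: the count attaching to a
% portion of reality p depends on the concept (sortal) F under which
% p is conceptualized. Writing #(p / F) for that count:
\#(p \,/\, \textit{divine Person}) = 3,
\qquad
\#(p \,/\, \textit{God}) = 1.

% One-many identity: the one God is the three Persons taken together,
% as a plurality (not, on Bohn's preferred reading, a mereological sum):
\text{God} = \text{Father},\, \text{Son},\, \text{Spirit}
```

On this sketch, neither count is privileged: the same portion of reality is correctly counted as one under one concept and as three under another.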
Bohn argues that orthodoxy, by the standards of either the New
Testament or the "Athanasian" Creed (see section
5.3),
requires that the Persons of the Trinity be distinct (i.e. no one is
identical to any other) but not that any is identical to the one God.
Rather, orthodoxy requires that the one God is identical to the Three
considered as a plurality. Thus, e.g. "The Father is God"
must be read predicatively, that is, not as identifying the Father
with God, but rather as describing the Father as divine (364, 367 n.
13).
Does this theory make God's triunity dependent on human thought?
And might the divine portion of reality equally well be conceived as
seventeen? Bohn replies,
>
> That numerical properties are relational properties with concepts as
> their relational units is compatible with reality having a real and
> objective numerical structure. (372)
>
Thus, it doesn't follow that any conceptualization of this
portion of reality is equally correct. While in this context he demurs
from saying anything about concepts (372), it seems that Bohn
assumes in Fregean fashion that concepts are objective and not
mind-dependent (Bohn 2013, Section 1).
Joseph Long (2019) objects that the theory is unorthodox because it
requires a type of thing which is divine and yet which is neither the
Trinity nor any divine Person. Further, Bohn's talk of
"portions of reality" is unintelligible. Finally,
orthodoxy demands that the Persons of the Trinity "are one God
*regardless* of our conceptual scheme", whereas on this
account whether or not the Persons are one god is relative to how we
conceptualize them.
Sheiva Kleinschmidt argues that theories on which composition is
explained in terms of identity are of no use to the trinitarian, for
such theories add no significant options to the options the
trinitarian already has (Kleinschmidt 2012).
## 3. Four-self, No-self, and Indeterminate Self Theories
Some Trinity theories fit into neither the one-self nor the
three-self category, because they imply more than three divine selves,
or fewer than one, or are unclear about how many selves there might
be in the Trinity.
### 3.1 God as a Functional Person
Chad McIntosh (2015) formulates a Trinity theory which is similar to
three-self theories except that it adds God the Trinity as a fourth
divine self. This theory is inspired by recent work by philosophers on
group persons. It's a longstanding part of legal tradition to
treat various kinds of non-persons, such as corporations, *as
if* they were persons. This is particularly useful, e.g. in
holding corporations responsible for damages they cause. But some
philosophers have argued for group agency realism, the thesis that
some groups of persons are themselves literal persons, with interests,
knowledge, freedom, power to intentionally act, and moral
responsibility (168-71). McIntosh distinguishes
"intrinsicist" persons, persons which are so because of
their nature, from "functional" persons, persons which are
so because of how they function. On this account the Persons of
the Trinity are intrinsicist persons, while God the Trinity is a
functional person (171).
McIntosh argues that since moral responsibility implies personhood (of
some kind), and it is clear that the Trinity must be praiseworthy,
e.g. "for having achieved salvation for humankind" or
"just for having the character of a loving community",
then the Trinity must himself be a person. And it is widely agreed by
Christians that the Trinity should be worshiped, but a non-person
can't be a fitting recipient of worship (173).
One may object that the Christian God is supposed to be a Trinity of
Persons, not a Quaternity of Persons. As Leftow objects to another
theory,
>
> ...the very fact that the doctrine the Creed states is known as
> that of the Trinity militates against calling a
> four-[divine]-individuals view orthodox. Had the Creed-writers
> envisioned God plus the Persons as adding up to four divine
> individuals, surely the doctrine would have been called Quaternity
> from the beginning (Leftow 2018, 10).
>
McIntosh replies that the tradition demands that there are exactly
three Persons (Greek: *hypostases*) which share the divine
nature or essence, which is captured by his claim that there are
exactly three intrinsicist persons. This account does *not*
claim that God is a fourth hypostasis, a fourth intrinsicist person.
Rather, God is a functional person, a person not by his essence, but
rather who exists as a person because of the unified functioning of
the Father, Son, and Spirit (174).
McIntosh argues that the theory neatly sidesteps a number of common
objections to three-self theories: here, God is a self and not merely
a group or a composite object which is less than a self. In contrast
with three-self theories, as a literal self, personal pronouns may be
literally used of him. And this group which is a "he" can
have the divine attributes which imply being a self, such as
omniscience and moral perfection. (See sections
2.4,
2.5.) It also, McIntosh argues, either falsifies or casts
doubt on the key premise in Tuggy's divine deception argument
against three-self theories (175-7, 180; see section
2.3.)
Following some Old Testament scholars, McIntosh claims that ancient
Israelites recognized many groups, including their own nation, as
literal (group, functional) persons. He argues that this belief
explains a number of oddities in the Old Testament, such as the idea
of group guilt, apparent beings which seem in some sense to be
extensions of Yahweh's personhood, and texts which switch easily
between singular and plural subjects (177-80).
### 3.2 Temporal Parts Monotheism
H. E. Baber (2002) argues that a Trinity theory may posit the
Persons as "successive, non-overlapping temporal parts of
one God" (11). This one God is neither simple nor timeless, but
is a temporally extended self with shorter-lived temporally extended
selves as his parts. This does not violate the requirement of
monotheism, because we should count gods by "tensed
identity", which is "not identity but rather the relation
that obtains between individuals at a time, *t*, when they
share a stage [i.e. a temporal part] at *t*" (5). At any
given time, only one self bears this relation of temporal-stage
sharing with God.
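Baber's counting rule can be put schematically (the notation is illustrative only; she presents the point in prose):

```latex
% Tensed identity: x and y are tensed-identical at time t iff they
% share a temporal stage at t:
x =_t y \;\leftrightarrow\;
\exists s\, (s \text{ is a stage of } x \text{ at } t
\,\wedge\, s \text{ is a stage of } y \text{ at } t)

% Monotheism is then secured by counting gods at a time by =_t rather
% than by strict identity: at any t, exactly one divine self shares a
% stage with God at t, so at every time there is (by this count) one God.
```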
How can any of these selves be divine given that they are neither
timeless nor everlasting? Following Parfit, she argues that a self may
last through time without being identical to any later self at the
later times; that is, "identity is not what matters for
survival" (6). Each of these non-eternal selves, then,
counts as the continuation of the previous one, and is everlasting in
the sense that it is a temporal part of an everlasting whole, God. The
obscure traditional generation and procession relations are
re-interpreted as non-causal relations between God and two of his
temporal parts, the Son and Spirit (13-4). In a later paper, she
argues that *any* trinitarian may and should accept this
re-interpretation (Baber 2008).
Although Baber argues that this is a "minimally
decent" Trinity theory, she admits that it is heretical, and
names it a "Neo-Sabellian" theory, because on it, the
Persons of the Trinity are non-overlapping, temporary modes of the one
God (15; on Sabellianism see section
1.3).
But the Persons in this theory are not mere modes; they are
truly substances and selves, and there are (at least) three of them,
though each is counted as the continuation of the one(s) preceding
him. It is unclear whether the theory posits *only* three
selves (10-1). But she argues that the theory is preferable to
many of its rivals "since it does not commit us to relative
identity or require any *ad hoc* philosophical
commitments" (15), and even though its divine selves don't
overlap, sense can be made of, e.g. Jesus's interaction with his
Father (meaning not the prior divine Person, but God, the temporal
whole of whom Jesus is a temporal part) (11-4).
This theory is notable in being a case not of rational reconstruction,
but of doctrinal revision (Tuggy 2011a). Many of its features are
controversial, such as its unorthodoxy, its metaphysical commitments
to temporal parts and the lasting of selves without diachronic
identity, its denials of divine simplicity and divine timelessness,
and its redefinitions of "monotheism",
"generation", and "procession".
In a later discussion Baber argues that some form or other of
Sabellianism about the Trinity is theoretically straightforward and
fits well with popular Christian piety. Further, such theories *can* survive the common objections that they imply that God is only
contingently trinitarian, and that they characterize God only in
relation to the cosmos. While some Sabellian theories do have those
implications, Baber argues that a trinitarian may just accept them
(2019, 134-8).
Alternately, Baber (2019) develops a structuralist approach to the
Trinity which doesn't imply anything about how many selves it
involves.
### 3.3 Persons as Relational Qua-Objects
Rob Koons (2018) constructs an account inspired by Aquinas, Augustine,
and recent work on "qua-objects" by Fine and by Asher.
Following the latter, Koons understands "*qua*-modified
noun phrases as picking out *intentional objects* consisting of
tropes or accidents, metaphysical parts of the base object"
(346). Koons holds that even everyday objects imply the existence of
such intentional objects, i.e. things that can be thought. Thus, we
can distinguish Trump-qua-husband from Trump-qua-President; while
these are real objects of thought, they both amount to being
properties of Trump, which Koons thinks of as metaphysical parts of
Donald Trump.
Unlike most of the other theories in this entry, Koons builds his on
the foundation of divine simplicity, traditionally understood.
According to this, God is numerically identical with his nature, his
one action, and his existence. God has no accidental (non-essential)
properties and no proper parts, and he just is any essential property
of his. In sum, God has no parts or components in any sense (339).
While one might suppose that this would rule out God being a Trinity,
Koons argues that to the contrary, we can understand how and why God
is a Trinity by "locating God in an extreme and exotic region of
logical space" (*ibid.*).
The divine nature just is any divine attribute, e.g. omnipotence. In
addition, Koons argues that the divine nature just is "an
*intentional relation*: namely, perfect knowledge and perfect
love." Understanding or knowledge generally should be
understood "as an *internal relation* between the mind
and its external object" (340).
Following Aquinas, Koons says that God (a.k.a. the divine nature)
understands all things through himself; God is essentially omniscient
and essentially self-understanding. Thus, the divine nature implies
the existence of three relational qua-objects, which are the Persons
of the Trinity. These are not merely three ways we can think of God,
or three ways God may appear to us, but rather these objects result
from *God's* essential self-understanding (345). Each of
these four things, namely God (the divine nature), the Father, the Son,
and the Spirit, has the divine nature as its one metaphysical
component, and each has all the divine attributes (346). Each of those
four is numerically distinct from each of the three others (347).
The theory requires more than the relation of (absolute) numerical
sameness or identity. In addition, Koons defines a relation of
"real" sameness. Like identity, this relation is reflexive
and symmetrical, but unlike identity it is neither transitive nor
Euclidean (such that if any x is related to some y and to some z then
this implies that y and z are related in that same way). Thus, real
sameness is not an equivalence relation (348, 357 n. 5). According to
this theory each Person is really the same as the divine nature (God),
but not identical to him. But no Person here is really the same as any
other Person; all three are really distinct from one another. To
summarize: there are four divine realities on this model of the
Trinity. The three Persons are so many qua-objects, while God is not.
None of these four is really distinct from God; all are really the
same as him. Yet none of the four is identical to any of the others
(348).
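The logical profile Koons assigns to real sameness, and the way it keeps his model consistent, can be sketched in standard notation (supplied here for illustration; Koons states the point in prose):

```latex
% Identity is an equivalence relation: reflexive, symmetric, and
% transitive (hence also Euclidean). Real sameness R keeps only the
% first two properties:
\forall x\; R(x,x)
\qquad
\forall x \forall y\; \big(R(x,y) \rightarrow R(y,x)\big)

% Because R is not Euclidean, the following inference is blocked:
R(\text{Father}, \text{God}) \,\wedge\, R(\text{Son}, \text{God})
\;\not\Rightarrow\;
R(\text{Father}, \text{Son})
```

This is exactly the pattern the model requires: each Person is really the same as God, yet no Person is really the same as any other.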
One may ask why there should be only three qua-objects here, when
objects like a human person or an apple, having many properties, might
imply hundreds or thousands of qua-objects. The answer is that not
every qua-object of God is a divine Person. Many such, Koons says, are
contingent, e.g. God-as-creator, or God-as-friend-of-Abraham; such
would not have existed had God not created. And any qua-object of God
which involves only an essential property of his, e.g.
God-as-omnipotent, is numerically identical to God (345). To be a
"hypostatic qua-object" (i.e. a God-as-thing which is a
divine Person) something must exist necessarily, be numerically
distinct from God, and be such that "It is not wholly grounded
in a logical or conceptual way on any other divine *qua*-object
or objects. So, it must be fully determinate (non-general,
non-disjunctive, and non-negative) in its definition" (346).
This last condition is meant to prevent the proliferation of divine
qua-objects (354-6). Koons argues that this account explains why
there are exactly three divine Persons.
>
> My main claim is that, as a matter of metaphysical necessity, there
> are exactly three hypostatic *qua*-objects (namely, Father,
> Son, and Spirit, as defined above). This is because there are only two
> intrinsic, relational properties of God (knowing and being known), and
> these give rise (on purely logical grounds) to only three
> non-disjunctive combinations. (346)
>
Divine love doesn't imply further Persons because it's the
same relational property as divine self-knowing. God-as-knower
isn't numerically the same as God-as-known because of the
essential asymmetry of the knowing relation (*ibid.*). Divine
love, Koons says, is a kind of charity of friendship; thus, lover and
beloved can't be numerically identical. So if the Father loves
the Son, this implies that they are numerically distinct
(non-identical). It also implies that they are really distinct and not
really the same. In specifying what he means by real distinctness
Koons writes,
>
> Two *qua*-objects with the same ultimate base are really
> distinct if and only if they are numerically distinct and the
> distinction between them is *intrinsic* to their ultimate base.
> (348)
>
The distinction between these qua-objects Father and Son is intrinsic
to their ultimate base, God (the divine nature) because he is the
intrinsic yet relational property of love (348-9). This has the
consequence that "the divine nature cannot love or be loved by
any of the divine Persons" (351).
Koons argues that this theory has many advantages over some rivals.
Against the constitution based three-self theory of Brower and Rea
(see section
2.1.3),
it allows for divine simplicity, as the Trinity does not involve any
metaphysical components or parts other than the divine nature. And he
claims that their account amounts to tritheism, "since each
Person is divine in His own unique and incomparable way". In
contrast, on Koons's theory, "each of the three Persons is
divine in the same way, simply by being a divine
*qua*-object, and the divine nature is complete and fully
divine in itself". Again, Koons's theory can, and theirs
can't, explain why there are exactly three divine Persons. And
their theory requires three different odd and hard to explain personal
attributes (352). As contrasted with any "social" theory,
this one doesn't have divine Persons which are really distinct
from the divine nature (God) (353).
Koons recognizes that many will object that this theory is
tetratheism; it features four realities, each of which is divine;
prima facie, these would be four gods. Koons believes that the real
sameness of each of the Persons with God should rule out any
polytheism and rule in monotheism. He offers this definition of
monotheism:
>
> There is one and only one thing such that no divine being is really
> distinct from it (348).
>
This would be equivalent to:
>
> There is one and only one thing such that every divine being is really
> the same as it.
>
But a "divine being" is a god. Thus the meaning of this
definition can be restated as:
>
> There is one and only one thing such that every god is really the same
> as it. Or: There is one and only one thing such that no god is really
> distinct from it.
>
This is a controversial definition; one may think that Koons simply
redefines "monotheism" as compatible with any number of
gods greater than zero. Put differently, one may count things by
identity. Why can't one also count gods in this way? Again, if a
is a god, and b is a god, and they are non-identical, what is it about
"real sameness" that implies that they're really the
same god? Why isn't this an *ad hoc*, theory-saving
definition?
One may wonder here how the four realities can be equally divine. It
would seem that whereas God (the divine nature) would not exist
because of any other, and so would exist *a se*, each of the
qua-object persons would exist because of God, their base.
Wouldn't this make God greater than each of the Persons? Again,
on this account each of these four is intrinsically and essentially
divine, yet the Persons can love, while God can not. How then can all
four be omnipotent?
Some will judge this theory to inherit all the problems of the
traditional divine simplicity doctrine it assumes. Others will
consider its fit with simplicity to be a feature and not a bug. Koons
points out that it also assumes constituent ontology, a Thomistic
account of thought, and the claim that the divine nature is an
intentional relation (356).
### 3.4 Persons as Improper Parts of God
An account of the Trinity by Daniel Molto tries to preserve the idea
found especially in Western creeds that "each divine person is,
in some sense, all of God" (2018, 395). (Because of this he
claims the title "Latin" for the theory.) He employs a
non-standard
mereology
(theory of parts and wholes) to interpret what he considers to be the
three requirements for a Trinity theory: that there's only one
god, that no Person of the Trinity is identical to any other, and that
in some sense it is true that each Person individually "is
wholly" God (400). The view is that God, the Father, the Son,
and the Spirit are all improper parts of one another, while none is
numerically identical to any other. This is shown in the following
chart; the lines represent the symmetrical and transitive improper
parthood relation.
![A representation of the divine persons and the Trinity as improper parts of one another.](Moltotrin1.png)
On more "classical" mereological systems, if A is an
improper part of B, this implies that A and B are numerically
identical, but in the system suggested by Molto, this is denied. He
argues that this change is not merely theologically motivated, but may
be applicable to other issues in metaphysics (410-3).
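The departure from classical mereology can be stated schematically (notation supplied for illustration, with ≤ for parthood):

```latex
% Improper parthood as mutual parthood:
x \text{ is an improper part of } y
\;\leftrightarrow\; (x \leq y \,\wedge\, y \leq x)

% Classical systems make parthood antisymmetric, so mutual parthood
% collapses into identity:
(x \leq y \,\wedge\, y \leq x) \rightarrow x = y

% Molto's system drops this antisymmetry axiom, permitting, e.g.:
\text{Father} \leq \text{God}
\,\wedge\, \text{God} \leq \text{Father}
\,\wedge\, \text{Father} \neq \text{God}
```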
Molto discusses a problem for the model which arises from the
transitivity of parthood and the axiom that things which are improper
parts of one another must have all their proper parts in common. If
the body of the incarnate Son is a proper part of him, then given
Molto's model of the Trinity, this body would also have to be a
proper part of God, of the Father, and of the Holy Spirit--
claims which most Christian theologians would reject. In response, he
adds three further elements to the model, as shown here:
![A mereological representation of the Trinity as in Molto's mereological one-self Trinity theory.](Moltotrin2.png)
Here, D = the divine nature of the Son, N = the human nature of the
Son, and B = the body of the Son. As before, the lines with arrows on
each end represent the symmetrical improper parthood relation. In this
illustration the one-arrow lines represent the asymmetrical *proper* parthood relation. Thus, the divine nature of the Son and the
human nature of the Son are proper parts of the composite Son, and the
human body is a proper part of the human nature (and thus, also of the
composite Son).
Molto leaves it up to theologians whether this sort of theory is
orthodox (514-7). His suggestion is only that this may be a
simpler and less controversial solution to the logical problem of the
Trinity, that is, to showing how trinitarian claims do not imply a
contradiction (397-8, 416-7).
## 4. Mysterianism
Often "mystery" is used in a merely honorific sense,
meaning a great and important truth or thing relating to religion. In
this vein it's often said that the doctrine of the Trinity is a
mystery to be adored, rather than a problem to be solved. In the Bible
a "mystery" (Greek: *musterion*) is simply a truth
or thing which is or has been somehow hidden (i.e., rendered
unknowable) by God (Anonymous 1691; Toulmin 1791b). In this sense a
"revealed mystery" is a contradiction in terms (Whitby
1841, 101-9). While Paul seems to mainly use
"mystery" for what *used to be* hidden but is now
known (Tuggy 2003a, 175), it has been argued that Paul assumes that
what has been revealed will continue to be in some sense
"mysterious" (Boyer 2007, 98-101).
Mysterianism is a meta-theory of the Trinity, that is, a theory about
trinitarian theories, to the effect that an acceptable Trinity theory
must, given our present epistemic limitations, to some degree lack
understandable content. "Understandable content" here
means propositions expressed by language which the hearer
"grasps" or understands the meaning of, and which seem to
her to be consistent.
At its extreme, a mysterian may hold that no first-order theory of the
Trinity is possible, so we must be content with delineating a
consistent "grammar of discourse" about the Trinity, i.e.,
policies about what should and shouldn't be said about it. In
this extreme form, mysterianism may be a sort of sophisticated
position by itself--to the effect that one repeats the creedal
formulas and refuses on principle to explain how, if at all, one
interprets them. More common is a moderate form, where mysterianism
supplements a Trinity theory which has some understandable content,
but which is vague or otherwise problematic. Thus, mysterianism is
commonly held as a supplement to one of the theories of sections
1-3. Again, it may serve as a supplement not to a full-blown
theory (i.e., to a literal model of the Trinity) but rather to one or
more (admittedly not very helpful) analogies. (See
section 3.3.1
in the supplementary document on the history of trinitarian
doctrines.) Unitarian views on the Father, Son, and Spirit are
typically motivated in part by hostility to mysterianism. (See the
supplementary document on
unitarianism.)
But the same can be said of many of the theories of sections
1-3.
Mysterians view their stance as an exercise of theological
sophistication and epistemic humility. Some mysterians appeal to the
medieval tradition of apophatic or negative theology, the view that
one can understand and say what God is not, but not what God is, while
others simply appeal to the idea that the human mind is ill-equipped
to think about transcendent realities.
Tuggy (2003a) lists five different meanings of "mystery"
in the literature:
>
> [1]...a truth formerly unknown, and perhaps undiscoverable by
> unaided human reason, but which has now been revealed by God and is
> known to some... [2] something we don't completely
> understand... [3] some fact we can't explain, or
> can't fully or adequately explain... [4] an unintelligible
> doctrine, the meaning of which can't be grasped....[5] a
> truth which one should believe even though it seems, even after
> careful reflection, to be impossible and/or contradictory and thus
> false. (175-6)
>
Sophisticated mysterians about the Trinity appeal to
"mysteries" in the fourth and fifth senses. The common
core of meaning between them is that a "mystery" is a
doctrine which is (to some degree) not understood, in the sense
explained above. We here call those who call the Trinity a mystery in
the fourth sense "negative mysterians" and those who call
it a mystery in the fifth sense "positive mysterians". It
is most common for theologians to combine the two views, though
usually one or the other is emphasized.
Sophisticated modern-era mysterians include Leibniz and the theologian
Moses Stuart (1780-1852). (Antognazza 2007; Leibniz
*Theodicy*, 73-122; Stuart 1834, 26-50.)
### 4.1 Negative Mysterianism
The negative mysterian holds that the true doctrine of the Trinity is
not understandable because it is too poor in intelligible content for
it to positively seem either consistent or inconsistent to us. In the
late fourth-century pro-Nicene consensus this takes the form of
refusing to state in literal language what there are three of in God,
how they're related to God or to the divine essence, and how
they're related to each other. (See
section 3.3
in the supplementary document on the history of Trinity theories.)
The Persons of the Trinity, in this way of thinking, are somewhat like
three men, but also somewhat like a mind, its thought, and its will,
and also somewhat like a root, a tree, and a branch. Multiple
incongruous analogies are given, the idea being that a minimal content
of the doctrine is thereby expressed, though we remain unable to
convert the non-literal claims to literal ones, and may even be unable
to express in what respects the analogies do and don't fit.
Negative mysterianism goes hand in hand with the doctrines of divine
incomprehensibility (that God or God's essence can't be
understood completely, at all, or adequately) and divine ineffability
(that no human concept, or at least none of some subset of these,
applies literally to God). Some recent studies have emphasized the
centrality of negative mysterianism to the pro-Nicene tradition of
trinitarian thought, chastising recent theorists who seem to feel
unconstrained by it (Ayres 2004; Coakley 1999; Dixon 2003).
The practical upshot of this is being content to merely repeat the
approved trinitarian sentences. Thus, after considering and rejecting
as inadequate multiple analogies for the Trinity, Gregory of Nazianzus
concludes,
>
> So, in the end, I resolved that it was best to say
> "goodbye" to images and shadows, deceptive and utterly
> inadequate as they are to express that reality. I resolved to keep
> close to the more truly religious view and rest content with some few
> words, taking the Spirit as my guide and, in his company and in
> partnership with him, safeguarding to the end the genuine illumination
> I had received from him, as I strike out a path through this world. To
> the best of my powers I will persuade all men to worship Father, Son,
> and Holy Spirit as the single Godhead and power, because to him belong
> all glory, honor, and might forever and ever. Amen. (Nazianzus,
> *Oration 31*, 143.)
>
Opponents of this sort of mysterianism object to it as misdirection,
special pleading, neglect of common sense, or even deliberate
obfuscation. They emphasize that trinitarian theories are human
constructs, and a desideratum of any theory is clarity. We literally
can't believe what is expressed in trinitarian language, if we
don't grasp the meaning of it, and to the extent that we
don't understand a doctrine, it can't guide our other
theological beliefs, our actions, or our worship (Cartwright 1987;
Dixon 2003, 125-31; Nye 1691b, 47; Tuggy 2003a, 176-80).
Negative mysterians reply that their stance is well-grounded in
tradition, and that those who are not naively overconfident in human
reason will expect some unclarity in the content of this doctrine.
### 4.2 Positive Mysterianism
In contrast, the positive mysterian holds that the trinitarian
doctrine can't be understood because of an abundance of content.
That is, the doctrine seems to contain explicit or implicit
contradictions. So while we grasp the meaning of its individual
claims, taken together they seem inconsistent, and so the conjunction
of them is not understandable, in the sense explained above. The
positive mysterian holds that the human mind is adequate to understand
many truths about God, although it breaks down at a certain stage,
when the most profound divinely revealed truths are entertained.
Sometimes an analogy with recent physics is offered; if we find
mysteries (i.e., apparent contradictions) there, such as light
appearing to be both a particle and a wave, why should we be shocked
to find them in theology (van Inwagen 1995, 224-7)?
The best-developed positive mysterian theory is that of James Anderson
(2005, 2007), who develops Alvin Plantinga's epistemology so
that beliefs in mysteries (merely apparent contradictions) may be
rational, warranted, justified, and known. Orthodox belief about the
Trinity, Anderson holds, involves believing, for example, that
Jesus is identical to God, the Father is identical to God, and that
Jesus and the Father are not identical. Similarly, one must believe
that the Son is omniscient, but lacks knowledge about at least one
matter. These, he grants, are *apparent* contradictions, but
for the believer they are strongly warranted and justified by the
divine testimony of scripture. He argues that numerous attempts by
recent theologians and philosophers to interpret one of the apparently
contradictory pairs in a way that makes the pair consistent always
result in a lapse of orthodoxy (2007, 11-59). He argues that the
Christian should take these trinitarian mysteries to be
"MACRUEs", merely apparent contradictions resulting from
unarticulated equivocations, and he gives plausible non-theological
examples of these (220-5).
It is plausible that if a claim appears contradictory to someone, she
thereby has a strong epistemic "defeater" for that
belief, i.e., a further belief or other mental state which robs the
first belief of rational justification and/or warrant. A stock example
is a man viewing apparently red objects. The man then learns that a
red light is shining on them. In learning this, he acquires a defeater
for his belief that the items before him are red. Thus with the
Trinity, if the believer discovers an apparent contradiction in her
Trinity theory, doesn't that defeat her belief in that theory?
Anderson argues that it does not, at least, if she reflects properly
on the situation. The above thought, Anderson argues, should be
countered with the doctrine of divine incomprehensibility, which says
that we don't know all there is to know about God. Given this
truth, the believer should not be surprised to find herself in the
above epistemic situation, and so, the believer's trinitarian
belief is either insulated from defeat, or if it's already been
defeated, that defeat is undone by the preceding realization (2007,
209-54).
Dale Tuggy (2011a) argues that Anderson's doctrine of divine
incomprehensibility is true but trivial, and not obviously relevant to
the rationality of belief in apparent contradictions about God. The
probability of our being stuck with such beliefs is a function not
only of God's greatness in comparison to humans' cognitive
powers, but also of what and how much God chooses to reveal about
himself. Nor is it clear that God would be motivated to pay the costs
of inflicting apparently contradictory divine revelations on us.
Moreover, Anderson has not ruled out that the apparent contradictions
come not from the texts alone, but also from our theories or
pre-existing beliefs. Finally, he argues that due to the comparative
strength of "seemings", a believer committed to paradoxes
like those cited above will, sooner or later, acquire an epistemic
defeater for her beliefs.
In a reply, Anderson (2018) denies that divine incomprehensibility is
trivial, while agreeing that many things other than God are
incomprehensible (297). While Tuggy had attacked his suggestions about
why God would want to afflict us with apparent contradictions,
Anderson clarifies that
>
> ...my theory doesn't *require* me to identify
> positive reasons for God permitting or inducing MACRUEs. For even if I
> concede Tuggy's point that "the prior probability of God
> inducing MACRUEs in us is either low or inscrutable," the
> doctrine of [divine] incomprehensibility can still serve as...an
> undercutting defeater for the inference from *D appears to be
> logically inconsistent* to *D is false.* (298-9)
>
The defense doesn't require, Anderson argues, any more than that
MACRUEs are "*not very improbable* given theism"
(299). As to whether these apparent contradictions result from the
texts rightly understood, or whether they result from the texts
together with mistaken assumptions we bring to them, this is a
question only biblical exegesis can decide, not any *a priori*
considerations (300). As to Tuggy's charge that a believer in
theological paradoxes will inevitably acquire an undefeated defeater
for her beliefs, Anderson argues that this has not been shown, and
that Tuggy overlooks how a believer may reasonably add a relevant
belief to her seemingly inconsistent set of beliefs, such as that the
apparently conflicting claims P and Q are only approximately true, or
that "P and Q are the best way for her to conceptualize matters
given the information available to her, but they don't represent
the whole story" (304).
Anderson's central idea is that the alleged contradictions
of Christian doctrine will turn out to be merely apparent. In
contrast, some theologians have held that doctrines including the
Trinity imply not merely apparent but also real contradictions, but
are nonetheless true. Such hold that there are exceptions to the law
of non-contradiction. While some philosophers have argued on mostly
non-religious grounds for
dialetheism,
the claim that there can be true (genuine, not merely apparent)
contradictions, this position has for the most part not been taken
seriously by analytic theologians (Anderson 2007, 117-26). (For a
recent exception, see Beall 2019.)
## 5. Beyond Coherence
Analytic literature on the Trinity has been laser-focused on the
logical coherence of "the" doctrine, addressing an
imagined critic arguing that the doctrine is clearly incoherent. They
do this by suggesting models of the Trinity, intelligible and arguably
coherent interpretations of most or all of the traditional language.
But in recent work the tools of analytic philosophy have been applied
to several closely related issues.
### 5.1 "The Trinity" and Tripersonality
The term "Trinity" has been used either as a singular
referring term or as a plural referring term (Tuggy 2016, 2020). The
first usage goes hand in hand with the claim that the one God just is
the tripersonal God, the Trinity. But the earlier use of
"Trinity"-(Greek: *trias*, Latin:
*trinitas*) where that term refers to a "they" and
not a "he" or an "it"-still survives,
and some Trinty theories imply that the term "Trinity" can
refer only in this way.
Most statements of faith by trinitarian Christian groups seem to
assume or imply that the Trinity just is God (and vice-versa); the
only god is the tripersonal God, the Trinity, and
"Trinity" is a singular referring term denoting that
reality. This God, it is assumed, does not merely happen to be
tripersonal, but must be so; on such a view, it looks like
tripersonality will be an essential divine attribute. Thus, it can
seem axiomatic that "Christians hold that God is Trinitarian in
God's essential nature" (Davis and Yang 2017, 226). Some
Trinity theories embrace this (section
2.5).
However, if this is so, it is hard to see how each of the Persons
could be divine in the way the one God is divine, since generally
trinitarians don't want to say that each is himself tripersonal.
(See section
2.1.5
for an exception.) Thus, some Trinity theories eschew a thing which
is tripersonal, while affirming three divine Persons whose divinity
does *not* require tripersonality (sections
2.1.3,
2.1.4). For such theories, "Trinity" is a plural
referring term.
### 5.2 Logic Puzzles and Language
While many discussions start with claims that are seen as the heart of
the "Athanasian" creed, a recent piece by Justin Mooney
(2018, 1) starts with this seemingly inconsistent triad of claims:
1. God is triune.
2. The Son is not triune.
3. The Son is God.
Dale Tuggy (2014, 186) presents this inconsistent triad.
1. The Christian God is a self.
2. The Christian God is the Trinity.
3. The Trinity is not a self.
One-self trinitarians deny 3, and three-self trinitarians deny 1. But
Tuggy argues that for scriptural reasons a Christian should deny 2.
(See also section
5.4
and the supplementary document on the history of trinitarian
doctrines,
section 2.)
Ryan Byerly (2019) explains "the philosophical challenge of the
Trinity" as centering on the key Nicene term
"consubstantial" (Greek: *homoousios*). How can the
three Persons be "consubstantial" so that each equally in
some sense "is God", where this implies neither their
numerical identity, nor that there's more than one god?
Jedwab and Keller (2019, 173) see the fundamental challenge for the
orthodox trinitarian as showing how this seemingly inconsistent triad
of claims is, rightly understood, consistent:
1. There is exactly one God.
2. There are exactly three divine persons.
3. Each divine person is God.
They argue that this must involve paraphrases, clearer formulations of
1-3 which can be seen as possibly all true. They compare how the
theories of sections
2.1,
2.3, and
1.5
above must do this, and conclude that it is easier for the first of
these to provide paraphrases which plausibly express the same claims
as the originals, which is a point in favor of such relative identity
theories.
Beau Branson (2019) explores these claims as constituting "the
logical problem of the Trinity": each Person "is
God", they are distinct from one another, and yet there is only
one thing which "is God". These provide materials for
a formidable argument against any doctrine that entails those seven
claims. He argues that all possible non-heretical solutions to that
problem either equivocate on the predicate "is God"
(roughly: what are called "social" theories, discussed in
sections 2.2-7) or insist that divine Persons must be counted by
some relation other than "absolute" or
"classical" identity (i.e. relative identity theories as
discussed in section
2.1).
Another recent piece compares different approaches to the Trinity by
how they respond to an anti-trinitarian argument based on alleged
differences between the Father and the Son (Tuggy 2016b).
### 5.3 Foundations
A tradition going back at least to Cartwright (1987) is using the
language of the so-called "Athanasian" creed to generate
contradictions, the task of the philosophical theologian then being to
show how these can be avoided by more careful analysis. This
Latin document is by an unknown author, and is not the product of any
known council. Modern scholarship places it some time in the fifth
century, well after the life of Athanasius (d. 373), and sees it as
influenced by the writings of Augustine (Kelly 1964). Objecting to
making it a standard of trinitarian theology, several authors have
pointed out its dubious provenance and coherence, and have observed
that it has mainly been accepted in the Western realm and not in the
East, and that it seems to stack the deck against three-self theories
(Layman 2016, 136-7, 169-71; McCall 2003, 427; Tuggy 2003b,
450-55). Tuggy (2016b) objects that starting with this
problematic creed causes analytic theologians to neglect the question
of whether and how the teaching of this creed is the same as
various statements from the "ecumenical" councils,
pre-Nicene theologies, or the Bible. But William Hasker argues that
rightly understood, the claims of this creed may not be
paradoxical, as it is largely concerned with what may and may not be
said (2013b, 250-4).
Apart from the "Athanasian" creed, H.E. Baber describes
five different foundations for theorizing about the Trinity, endorsing
the fourth.
>
> This poses the question of what the 'foundations' for the
> philosophical investigation of Trinitarian theology should be if it is
> not either [1] the declarations of Church councils or [2] the
> theological works of the Fathers or [3] Scripture which...does
> not include any Trinitarian doctrine. ...what philosophical
> theology should be about...[is] [4] the discourse and practice of
> the Church, and by that I do not mean [5] the doctrinal claims of the
> Church in its pretension to being a teaching institution, but [4] the
> liturgy, hymnody, and art, customs and practices, religious objects
> and religious devotions which, together, constitute the Christian
> religion and its practice. The aim of philosophical theology is to
> make sense of the discourse and provide a rationale for the practices
> while avoiding logical incoherence. (Baber 2019, 186-7)
>
Some in the literature fall cleanly into one of Baber's
categories, but more commonly, work in analytic theology is done while
leaving unclear just what are the foundations of trinitarian
theorizing. Following the example of his earlier work on christology,
Timothy Pawl (2020) focuses on the teachings of the
"ecumenical" councils, which arguably give the central
trinitarian language of catholic traditions.
In favor of Baber's second approach, Beau Branson (2018)
critiques what he calls "the virtue
approach"--basically, treating theological issues like
metaphysical or logical puzzles calling for creative
theorizing--as point-missing, question begging, and unclear.
In contrast, he advocates for "the historical approach",
which assumes that the content of "the doctrine of the
Trinity" should be considered as fixed by the views of
"mainly various fourth-century theologians" (Section 4).
It is misguided, he argues, to focus merely on the theoretical virtues
of various rational reconstructions of what traditional Trinity
language is really supposed to be expressing, as most of these will
not plausibly be expressing *the* historical doctrine.
New-fangled accounts, Branson argues, have a burden of showing how
they, if coherent, imply that the historical doctrine of the Trinity
is coherent, and indeed why the former should even count as a version
of the latter (Section 5). (Similarly with other theoretical virtues.)
At any rate, nothing about the project of analytic theology requires
the neglect of the crucial historical definers of the Trinity doctrine
(Section 13).
Analytic theologians have expended much effort on metaphysical models
which *if accurate* would arguably show that "the
doctrine of the Trinity" is coherent (i.e. seemingly not
self-contradictory). But only a recent monograph by Vlastimil
Vohanka (2014) asks in depth how, if at all, a person might be
able to know that the doctrine of the Trinity is logically possible.
Vohanka argues for "Weak Modal Scepticism about the
Trinity Doctrine" (83), which is the claim that "it is
psychologically impossible to see evidently and apart from religious
experience that the Trinity doctrine is logically possible"
(86). The arguments for this conclusion defy easy summary (but see
Chapter 7 and Jaskolla 2015). What is "the doctrine" in
question? Vohanka uses a minimal definition, that "there
are three persons, each of whom is God but there is just one being
which is God" (47). Admittedly, this better fits one-self
theories than it does three- or four-self theories (52-7). Like
most, Vohanka assumes that a Trinity doctrine is essential to
Christianity (57-8), so the main thesis implies the
impossibility of knowing that Christianity is logically possible. But
the overall project is not an attack on the truth or knowability of
Christianity (see the author's clarifications about his aims on
244-7, 276-7). Rather, such claims as the Trinity and
Christianity seemingly can't be evident to us (i.e. roughly,
can't be obviously true to us, as is 1+1=2 and such
propositions--see 18), and moreover, we shouldn't expect
that we'll have any religious experience in this life which is
sufficient to make them evident (277). For all that's been said,
such claims "may well be epistemically justified, well-argued,
have clearly high non-logical probability, etc." (279). But the
Christian philosopher should give up on "fulfilling the
classical idea of evidentness in matters viewed by him as of utmost
importance: the truth of Christianity and of the Trinity
doctrine" (*ibid.*).
### 5.4 Competing Narratives
In some scholarly circles it is taken as obvious that New Testament
teaching is not trinitarian--that it neither asserts, nor
implies, nor assumes anything about a tripersonal God (see e.g. Baber
2019, 148-56; Kung 1995, 95-6; see also
supplement on the history of Trinity doctrines, section 2).
But most analytic literature on the Trinity assumes the truth of an
orthodox narrative about where Trinity theories come from. According
to this, from the beginning Christians were implicitly trinitarian;
that is, they held views which imply that God is a Trinity, but
typically did not realize this or have adequate language to express
it. By at least the late 300s, they had gained enough new language
and/or concepts to express what they had been committed to all
along.
But in recent analytic literature on the Trinity there are two
counternarratives, both of which see the idea of a triune God as
entering into Christian traditions in the last half of the 300s. Beau
Branson argues that "Monarchical Trinitarianism", which in
his view correctly understands the theology of the authoritative
fourth century Greek "fathers" (Basil of Caesarea, Gregory
of Nyssa, and Gregory of Nazianzus) is a trinitarian theology on which
"Strictly speaking, The One God just is the Father"
(Branson forthcoming, Section 6). He contrasts this with most other
Trinity theories, which he calls "egalitarian" or
"symmetrical", theories on which "all three persons
have an 'equal claim' to being called 'God,'
in any and every sense." (*ibid.*) Branson criticizes
Tuggy's (2016a) analyses of the concepts trinitarian and
unitarian, and offers rival definitions on which "Monarchical
Trinitarianism" is trinitarian and not unitarian. To be
trinitarian, a theology needs only to assert that "there are
exactly three divine 'persons' (or individuals, etc.).
Nevertheless...there is exactly one God" (Section 5). As
Branson understands the history of Christian theology, the idea that
"the Trinity" is a tripersonal God is a misunderstanding
of tradition which is due particularly to "Western"
thinkers, such as Augustine. Branson cites some recent Orthodox
theologians who hold, like John Behr, that "there is not One God
the Trinity, but One God Father Almighty" (Behr 2018, 330). In
reply, Tuggy has argued that recent Orthodox theologians
seem divided on this point, and that the idea of a triune God
(the one God as the Trinity) is found even in some of the Greek
writers Branson claims as exemplars of theological orthodoxy (Tuggy
2020).
Another recent counternarrative sees ancient mainstream Christian
theology as changing from unitarian to trinitarian. Tuggy (2019)
argues that in the New Testament the one God is not the Trinity but
rather the Father alone. The argument moves from facts about the texts
of the New Testament to what the authors probably thought about the
one God, using what philosophers of science call the likelihood
principle or the prime principle of confirmation. Tuggy sees such
identification of the one God with the Father dominating early
Christian theologies until around the time of the second ecumenical
council in 381 C.E. (Tuggy 2017, Chapter 5). Then, the Son and the
Spirit, which in many 2nd to early 4th c. speculations were two lesser
deities in addition to God, were taught to, together with the Father,
somehow comprise the one God, the Trinity. (2016b, Sections
2-3)
## 1. Historical Background
The father of the contemporary debate on tropes was D. C. Williams
(1953*a*; 1953*b*; 1963; 1986 [1959];
2018).[2]
Williams defends a one-category theory of tropes (for the first time
so labeled), a bundle theory of concrete particulars, and a
resemblance class theory of universals, all of which are now elements
of the so-called 'standard' view of tropes. Who to count
among Williams' trope-theoretical predecessors is unavoidably
contentious. It depends on one's views on the nature of the
trope itself, as well as on which theses, besides the thesis that
tropes exist, one is prepared to accept as part of a trope--or
*trope-like*--theory.
According to some philosophers, trope theory has roots going back at
least to Aristotle (perhaps to Plato (*cf.* Buckels 2018), or
even the pre-Socratics (*cf.* Mertz 1996: 83-118)). In
the *Categories*, Aristotle points out that Substance and
Quality both come in what we may call a universal and a particular
variety (*man* and *this man* in the case of substance,
and *pallor* and *this pale*--to ti leukon--in
the case of quality). Not everyone believes that this means that
Aristotle accepts the existence of tropes, however. On one
interpretation (Owen 1965) *this pale* names an absolutely
determinate, yet perfectly shareable, shade of pallor. But on a more
traditional interpretation (*cf.* Ackrill 1963 and, more
recently, Kampa & Wilkins 2018), it picks out a trope, i.e., a
particular 'bit' of pallor peculiar to the substance that
happens to exemplify it (for a discussion, *cf.* Cohen
2013).
In view of the strong Aristotelian influence on medieval thinkers, it
is perhaps not surprising that tropes or trope-like entities are found
also here. Often mentioned in this connection is Ockham (*cf.*
also Aquinas, Duns Scotus, and Suarez). Ockham held that there are in
total four sorts of individual things: substances, qualities,
substantial forms, and matter. Claude Panaccio (2015) argues that all
four sorts of individual things are best understood as tropes or, in
the case of substances, as bundles of tropes (more precisely as
*nuclear* bundles of tropes similar to those defended by e.g.,
Simons 1994 and Keinanen 2015).
This list of early proponents of trope-like entities could have been
made much longer. D. W. Mertz (1996) mentions, besides those already
listed, Boethius, Avicenna, and Averroes. And Kevin Mulligan
et al. (1984) point out that similar views can be found defended by
early modern philosophers, including Leibniz, Locke, Spinoza,
Descartes, Berkeley and Hume.
Still, it is in the writings of 19th century
phenomenological philosophers that the earliest and most systematic
pre-Williams 'trope'-theories are found (Mulligan et al.
1984: 293). The clearest example of an early trope theorist of this
variety is undoubtedly Edmund Husserl. In the third part of his
*Logical Investigations* (2001 [1900/1913]), Husserl sets out
his theory of *moments*, which is his name for the
world's abstract (and essentially dependent) individual parts
(Correia 2004; Beyer 2016). Husserl was most likely heavily influenced
by Bernard Bolzano, who held that everything real is either a
substance or an adherence, i.e., an attribute that cannot be shared
(Bolzano
1950 [1851]).[3]
Husserl thought of his moments as (one of) the fundamental
constituents of *phenomenal* reality. This was also how fellow
phenomenologists G. F. Stout (1921;
1952),[4]
Roman Ingarden (1964 [1947-1948]) and Ivar Segelberg
(1999 [1945;1947;1953])[5]
viewed their fundamental and trope-like posits. Williams' views
are not so easily classified. Although he maintained that all our
knowledge rests on perceptual experience, he agreed that it should not
be limited to the perceptually given and that it could be extended
beyond that by legitimate inference (Campbell et al. 2015). That more
or less all post-Williams proponents of tropes treat their posits as
the fundamental constituents of *mind-independent*--not
phenomenal--reality, is however clear (*cf.* e.g., Heil
2003).
After Williams, the second most influential trope theorist is arguably
Keith Campbell (1997 [1981]; 1990). Campbell adopted the basics of
Williams' (standard) theory and then further developed and
defended it. Later proponents of more or less standard versions of the
trope view include Peter Simons, John Bacon, Anna-Sofia Maurin,
Douglas Ehring, Jonathan Schaffer, Kris McDaniel, Markku
Keinänen, Jani Hakkarainen, Márta Ujvári, Daniel Giberman,
Robert K. Garcia, and Anthony Fisher (all of whose most central works
on the topic are listed in the Bibliography).
A very influential paper also arguing for a version of the theory
(inspired more by Husserl than by Williams) is
"Truth-Makers" (Mulligan et al. 1984; *cf.* also
Denkel 1996). This paper defends the view that tropes are essentially
dependent entities, the objects of perception, and the world's
basic truthmakers. Proponents of trope theories which posit tropes as
one of *several* fundamental categories include C. B. Martin
(1980), John Heil (2003), George Molnar (2003), and Jonathan Lowe
(2006). Molnar and Heil both defend ontologies that include (but are
not limited to) tropes understood as powers, and Lowe counts tropes as
one of *four* fundamental categories. Even more unorthodox are
the views of Mertz (1996, 2016), whose trope-like entities are
categorized as a kind of
relation.[6]
## 2. The Nature of Tropes
According to several trope theorists--perhaps most notably,
according to Williams--what exists when a trope does is an
*abstract* particular. The word 'abstract' is
ambiguous. On the one hand, Williams tells us, it means
"*transcending individual existence*, as a universal,
essence, or Platonic idea is supposed to transcend it"
(1953*a*: 15). On this meaning, to be abstract is to be
non-spatiotemporal. Tropes--which are standardly taken to exist
*in* spacetime--are clearly *not* (or, not
necessarily) abstract in this sense (which explains why some
philosophers--e.g., Küng 1967; Giberman 2014, 2022;
*cf.* also Simons 1994: 557--insist on referring to the
trope as 'concrete'). But then there is this
other--according to Williams "aboriginal"--sense
of the term. A sense in which, to be 'abstract' is to be
"*partial, incomplete, or fragmentary*, the trait of what
is less than its including whole" (Williams 1953*a*: 15).
It is in *this* sense that the trope is supposed to be
abstract.
Is saying of the trope that it is abstract in this sense enlightening?
Some have worried that it isn't (*cf.* e.g., Maurin 2002:
21; *cf.* also Daly 1997). That this worry is unwarranted has
recently been pretty convincingly argued by Fisher (2020). When
Williams argues that the trope is abstract in the sense of
"partial, incomplete, or fragmentary", Fisher notes, what
he means is that it "fails to exhaust the content of the region
it occupies or is merely part of the content of that region"
(Fisher 2020: 45; *cf.* also Williams 1986 [1959]: 3). One
consequence of this, is that the trope can only be attended to via a
process of *abstraction*. (In Campbell's words (1990: 2),
it can only be "brought before the mind ... by a process of
selection, of systematic setting aside, of these other qualities of
which we are aware".) But this, Fisher points out, does
*not* mean that the trope is abstract *because* it is a
product of abstraction (if it did, then since attending to basically
anything (short of everything) involves abstracting away from the
surrounding environment, saying of the trope that it is
'abstract' *would* be saying nothing very
informative at all). What it means, rather, is that the trope is
something that requires abstraction to be brought before the mind
*because* it is abstract (which, in turn, it is
*because* it is such that it does not exhaust the content of
the region it occupies or is merely part of the content of that
region). This takes care of this worry (for more reasons to view the
abstractness of the trope as integral to its nature, see the
discussion in section 3.1 below).
### 2.1 Property or Object?
In philosophy, new posits are regularly introduced by being compared
with, or likened to, an already familiar item. Tropes are no exception
to this rule. In fact, tropes have been introduced by being compared
with and likened to not one but *two* distinct but equally
familiar kinds of things: properties and objects (Loux 2015). Up until
very recently, that tropes can be introduced in both of these ways was
considered a feature of the theory, not a source of concern. Tropes,
was the idea, can be compared with and likened to both properties and
objects, because tropes *are* a bit of both. Recently, however,
both friends and foes of tropes have started to question whether
tropes *can be* a bit of both. At any rate, this will depend on
what *being a property* and *being an object* amounts
to, an issue on which there is no clear consensus.
Looking more closely at existing versions of the trope view, whether
one thinks of one's posit primarily as a kind of property or as
a kind of object certainly influences what one takes to be true (or
not) of it. To see this, compare the tropes defended by Williams with
those defended by Mulligan et al. Williams seems to belong to the camp
of those who view the trope (primarily) as a kind of object. Tropes
are that out of which everything else there is, is constructed. As a
consequence, names for tropes should not be understood as abbreviated
definite descriptions of the kind 'the φ-ness of
*x*'. Instead, naming a trope should be likened to
baptizing a child or to introducing a man "present in the
flesh", i.e., ostensively (Williams 1953*a*: 5). Mulligan
et al. (1984), on the other hand, seem to regard the trope more as a
kind of property, and as a consequence of this, argue to the contrary
that the correct (in fact the only) way to refer to tropes is
precisely by way of expressions such as 'the φ-ness of
*x*'. This, they claim (again *pace* Williams), is
because tropes are essentially *of* some object, because they
are *ways* the object is (*cf.* also Heil 2003: 126f).
In general, a theory which models its posit primarily on the property
hence thinks of it as a dependent sort of entity, as *of*
something else. And a theory which thinks of its tropes more as
objects--as 'the alphabet of being'--thinks of
them as independent, either in the sense that they need not make up
the things they actually make up, or--more radically--in the
sense that they need not make up anything at all (that they could be
so-called 'free-floaters').
How one views the nature of properties and objects, respectively, also
plays a role in some of the criticisms the trope view has had to
face. Jerrold Levinson, for instance, thinks that tropes cannot
be a kind of property because having a property--being
red--amounts to being in a certain condition, where conditions
are not particular (Levinson 1980; 2006). The alternative is that
tropes are what he calls 'qualities', by which he means
something resembling bits of abstract stuff. However, since a theory
accepting the existence of bits of abstract stuff would be both
"ontologically extravagant and conceptually outlandish"
(Levinson 2006: 564), abstract stuff most likely doesn't exist.
Which means that, according to Levinson, tropes are *neither* a
kind of property *nor* a kind of object, a circumstance that
leads him to conclude that tropes do not exist.
Arkadiusz Chrudzimski, next, has argued that, although tropes
*can* be viewed either as a kind of property or as a kind of
object, they cannot be viewed as a bit of both (Chrudzimski 2002).
Which means that the theory loses its coveted
'middle-position', and with it any advantage it might have
had over rival views. For, he argues, to conceptualize the trope as a
property--a *way things are*--means imputing to it a
*propositional structure* (Levinson 1980: 107 holds a similar
view). Not so if the trope is understood as a kind of object. But
then, although tropes understood as properties are suitable as
semantically efficient truthmakers, the same is not true of tropes
understood as a kind of object. Conversely, although tropes understood
as a kind of object are suitable candidates for being that from which
both concrete particulars and abstract universals are constructed,
tropes understood as properties are not. Whichever way we conceive of
tropes, therefore, the theory's overall appeal is severely
diminished.
Both Levinson's and Chrudzimski's pessimistic conclusions
can be resisted. One option is simply to refuse to accept that one
cannot seriously propose that there are "abstract stuffs".
Levinson offers us little more than an incredulous stare in defense of
this claim, and incredulous stares are well-known for lacking the
force to convince those not similarly incredulous. Another option is
to reject the claim that tropes understood as properties must be
propositionally structured. Or, more specifically, to reject the claim
that complex truths need complex--(again)
*propositionally* structured--truthmakers. Some truthmaker
theorists--not surprisingly, Mulligan *et al*. (1984) are
among them--reject this claim. In so doing, they avoid having to
draw the sorts of conclusions to which Levinson and (perhaps
especially) Chrudzimski gesture.
A more radical option, finally, is to simply reject the idea that
tropes can be informatively categorized either as a kind of property
or as a kind of object. Some of the features we want to attribute to
tropes seem to cut across those categories anyhow. So, for instance,
if you think 'being shareable' is essential to
'being a property', then, obviously, tropes are not
properties. Yet tropes, even if not shareable, can still be
*ways* objects are, and they can still essentially depend on
the objects that have them. Likewise, if 'monopolizing
one's position in space-time' is understood as a central
trait for objects, tropes are not objects. Yet tropes can still be the
independent building-blocks out of which everything else there is, is
constructed.
According to Garcia (2015*a*, 2016), this is why we ought to
frame our discussion of the nature(s) of tropes in terms of another
distinction. Rather than distinguishing between tropes understood as a
kind of property, and tropes understood as a kind of object--and
risk getting caught up in infected debates about the nature of objects
and properties generally--Garcia suggests we distinguish between
tropes understood as 'modifiers' and tropes understood as
'modules'. The main difference between tropes understood
in these two ways--a difference that is the source of a great
many further differences--is that tropes understood as modifiers
do not have the character they confer (on objects), whereas tropes
understood as modules do. With recourse to this way of distinguishing
between different versions of the trope-view, Garcia argues, we can
now evaluate each version separately, independently of how we view
objects and properties,
respectively.[7]
### 2.2 Complex or Simple?
According to most trope-theorists, tropes are ontologically simple.
Here this should be taken to mean that tropes have no constituents, in
the sense that they are not 'made up' or
'built' from entities *belonging to some other
category*. Simple tropes, thus understood, can still have
parts--even necessarily so--as long as those parts are also
tropes (*cf.* e.g., Giberman 2022; Robb 2005:
469).[8],[9]
That tropes are ontologically simple arguably provides the trope
theorist with a tie-breaker vis-à-vis states of affairs (if
states of affairs are understood as substrates instantiating
universals). *Prima facie* a theory positing states of affairs
has the same explanatory power as a theory positing tropes. But a
theory of states of affairs posits--apart from the state of
affairs itself--at least two (fundamental) sorts of things
(universals and substrates), whereas a theory of tropes (at least a
theory of tropes according to which tropes are simple and objects are
bundles of tropes) posits only one (tropes). From the point of view of
ontological parsimony, therefore, the trope view ought to be
preferred.[10]
*Can* the trope be simple? According to a number of the
theory's critics, it cannot. Here is Herbert Hochberg's
argument to that effect (2004: 39; *cf.* also his 2001:
178-179; for versions of the argument *cf.* also
Brownstein 1973: 47; Moreland 2001; Armstrong 2004; Ehring
2011):[11]
>
>
> Let a basic proposition be one that is either atomic or the negation
> of an atomic proposition. Then consider tropes *t* and
> *t*\* where '*t* is different from
> *t*\*' and '*t* is exactly similar to
> *t*\*' are both true. Assume you take either
> 'diversity' or 'identity' as primitive. Then
> both propositions are basic propositions. But they are logically
> independent. Hence, they cannot have the same truth makers. Yet,
> for...trope theory /.../ they do and must have the same
> truth makers. Thus the theory fails.
>
>
>
A number of different things can be said in response to this argument.
It assumes, first, that if tropes are simple, trope theory must
violate what appears to be a truly fundamental principle (call it HP,
short for 'Hochberg's principle'): that
*logically independent basic propositions must have distinct
truthmakers*. Having to reject HP is hence a cost of having simple
tropes. Perhaps this cost is acceptable. That it is, seems to be a
view held by Mulligan et al. (a similar view is, according to
Armstrong 2005: 310, also held by Robb). For, they claim (1984:
296):
>
>
> ...[w]e conceive it as in principle possible that one and the
> same truth-maker may make true sentences with different meanings: this
> happens anyway if we take non-atomic sentences into account, and no
> arguments occur to us which suggest that this cannot happen for atomic
> sentences as well.
>
>
>
One reason for rejecting HP has been put forward by Fraser MacBride
(2004). HP, he notes, is formulated in terms of
'*logically* independent basic propositions'.
However, HP is only plausible if it takes more than logical (what
MacBride calls 'formal') (in)dependence into account:
*material* independence also
matters![12]
Only if two propositions are logically *and* materially
independent does it follow that they must have distinct truthmakers.
But formal and material independence can--and in this case most
likely will--come apart. For (ibid: 190):
>
>
> ...[i]nsofar as truth-makers are conceived as inhabitants of the
> world, as creatures that exist independently of language, it is far
> from evident that logically independent statements in the formal sense
> are compelled to correspond to distinct truth-makers.
>
>
>
Another argument against the simplicity of tropes comes from Ehring
and was inspired by an argument first delivered by Moreland (2001:
64). J. P. Moreland's argument concludes that trope theory is
unintelligible. Ehring thinks this is much too strong. This is
therefore the formulation of the objection he prefers (2011: 180):
>
>
> The nature and particularity of a trope are intrinsically grounded in
> that trope. If tropes are simple, their nature and their particularity
> are hence identical. The natures of a red trope and an orange trope
> are inexactly similar. Hence their respective particularities should
> be inexactly similar as well. However, these particularities are
> *exactly* similar. Hence, their particularities are not
> identical to their natures, and tropes are not simple.
>
>
>
Here the trope theorist could probably reply that the objection rests
on a kind of category mistake: that 'particularities' are
quite simply not things amenable to standing in similarity relations.
Alternatively, she might just concede that tropes are complex, yet
argue that they are so in the innocent sense of having other tropes as
parts (one particularity-trope and a distinct nature-trope). Against
this, Ehring (2011: 183f) has argued that if the trope has its
particularity reside in one of the tropes that make it up, we can
always ask about *that* trope what grounds *its*
particularity and nature respectively. Which would seem to lead us
into an infinite--and most probably vicious--regress.
Ehring's own solution to the problem is to adopt a version of
the trope view (what he calls Natural Class Trope Nominalism) on which
tropes do not resemble each other in virtue of their nature, but
rather in virtue of belonging to this or that natural class. On this
view, what makes two tropes qualitatively the same (their belonging to
the same natural class of tropes) is different from what makes them
distinct (the tropes themselves, primitively being what they are).
This means that the fact that tropes can be distinct yet exactly
resemble each other is no longer a reason to think that tropes are complex.
A different kind of solution has recently been proposed by Giberman
(2022, *cf.* also his 2014). According to him, because
at least some of the tropes there are, are spatiotemporally located
(they are what he calls
'concrete')[13],
they must have size, shape, and duration. That tropes have these
different features means that they are capable of multiple
resemblances. Which, claims Giberman, means that they are
qualitatively complex. Yet that they are does not threaten the trope
view in any way. For, according to Giberman, tropes are (qualitatively
complex) *ostriches*. What this means is that tropes are such
that they "primitively account for their own multiple
resemblance capacities". Which means that "no (further)
ontological machinery is required to explain that an ostrich trope has
multiple properties" (even though we are admitting that it
does!) (Giberman 2022: 18). In this sense, the trope is like the
Quinean concrete particular (Giberman ibid.; *cf.* also Quine
1948, Devitt 1980):
>
>
> When the Quinean is asked what it is about a given electron that
> metaphysically explains its ability to resemble both massive things
> and charged things, he will likely answer 'nothing!--it
> just is that way'... Similarly, when asked what it is about the
> *n* charge trope that metaphysically explains its ability to
> resemble both charged things and things of a certain size, the ostrich
> trope theorist answers: 'the trope itself--no more
> structure to it than that'.
>
>
>
But then, on Giberman's view--and *pace*
Ehring--we are *not* allowed to ask of the properties of
the trope what grounds *their* particularity and nature
(that's primitive!). And if asking this is not allowed, is the
idea, no infinite regress can be generated (*cf.* also fn 20,
this entry).
What about the advantage the trope view was supposed to have over the
state-of-affairs view? Primitive or not, if the complexity is there,
doesn't this mean that those views are now on a par
parsimony-wise? Giberman doesn't think so, and gives two reasons
why not: (1) although the ostrich trope is qualitatively complex, it
is *not*--unlike the state of affairs--categorially
complex (no trope has a constituent from outside the category
*trope*) (2022: 16); (2) (more contentiously) even states of
affairs, in being spatiotemporal, need to have size and shape etc. But
then, apart from having to accept the existence of categorially
distinct substrates, the state of affairs theorist must also accept
that universals--like tropes--are primitively qualitatively
complex (ibid: 18).
### 2.3 Trope Individuation
What makes two tropes, existing in the same world at the same time,
distinct?[14]
A natural suggestion is that we take the way we normally identify and
refer to tropes very literally and individuate tropes with reference
to the objects that 'have' them:
**Object Individuation** (OI):
For any tropes *a* and *b* such that *a* exactly
resembles *b*, *a* ≠ *b* iff *a*
belongs to an object that is distinct from the object to which
*b* belongs.
No trope theorist endorses this natural suggestion, however. The
reason why not is that, at least if objects are bundles of tropes, the
individuation of objects will depend on the individuation of the
tropes that make them up, which means that, on OI, individuation
becomes circular (Lowe 1998: 206f.; Schaffer 2001: 249; Ehring 2011:
77). Indeed, matters improve only marginally if objects are understood
as substrates in which tropes are instantiated. For, although on this
view, OI does not force one to accept a circular account of
individuation, this is because it is now the substrate which carries
the individuating burden. This leaves the individuation of the
substrate still unaccounted for, and so we appear to have gotten
nowhere (Mertz 2001). Any trope theorist who accepts the possible
existence of 'free-floating' tropes--i.e., tropes
that exist unattached to any object--*must* in any case
reject this account of trope individuation (at least as long as she
accepts the possibility of there being *more than one*
free-floating trope at any given time).
The main contenders are instead *spatiotemporal individuation*
(SI) and *primitivist individuation* (PI). According to SI,
first, that two tropes belonging to the same world are distinct can be
metaphysically explained with reference to a difference in their
respective spatiotemporal position (Campbell 1990: 53f.; Lowe 1998:
207; Schaffer 2001: 249; Giberman 2022: 18):
**Spatiotemporal Individuation** (SI):
For any tropes *a* and *b* such that *a*
exactly resembles *b*, *a* ≠ *b* iff *a*
is at non-zero distance from *b*.
This is an account of trope individuation that seems to respect the
way tropes are normally picked out, yet which does
not--circularly--individuate tropes with reference to the
objects they make up and which does not rule out the existence of
'free-floaters'. In spite of this, the majority of the
trope theorists (Schaffer 2001 being one important exception) have
opted instead for primitivism (*cf.* e.g., Ehring 2011: 76;
Campbell 1990: 69; Keinänen & Hakkarainen 2014). Primitivism
is best understood as the denial of the idea that there is any true
and informative way of filling out the biconditional 'For any
exactly resembling tropes *a* and *b*, *a* ≠
*b* iff ...'. That *a* and *b* are
distinct--if they are--is hence primitive. It has no further
(ontological) analysis or (metaphysical) explanation.
According to what is probably the most influential argument in favor
of PI over SI (an argument that changed Campbell's mind:
*cf.* his 1990: 55f.; *cf.* also Moreland 1985: 65), SI
should be abandoned because it rules out the (non-empty) possibility
that (parts of) reality could be
*non*-spatiotemporal.[15]
Against this, proponents of SI have argued that the thesis that
reality must be spatiotemporal can be independently justified
(primarily because naturalism can be independently justified,
*cf.* Schaffer 2001: 251). And even if it cannot, SI could
easily be modified to accommodate *the analogue* of the
locational order of space (Campbell 1997 [1981]: 136; Schaffer
ibid.).[16]
A common argument *in favor* of SI is that it allows its
proponents to rule out what most agree *are* empty
possibilities: swapping and piling.
*Swapping*: According to the so-called 'swapping
argument' (first formulated in Armstrong 1989: 131-132;
*cf.* also Schaffer 2001: 250f; Ehring 2011:
78f.),[17], [18]
if properties are tropes, and individuation is primitive, two
distinct yet exactly similar tropes might swap places (this redness
*here* might have been *there*, and vice versa). The
result, post-swap, is a situation that is ontologically distinct from
that pre-swap. However, empirically/causally the pre- and post-swap
situations are the same (*cf.* LaBossiere 1993: 262 and Denkel
1996: 173f. for reasons to doubt that they *are* the same).
That is, given the natural laws as we know them, that this red-trope
*here* swaps places with that red-trope *there* makes no
difference to the future evolution of things. Which means that, not
only would the world look, feel and smell exactly the same to us pre-
and post-swap, it would be in principle impossible to construct a
device able to distinguish the two situations from one another. For
any device able to detect the (primitive) difference between the two
situations would also have to be a device able to communicate this
difference (by making a sound, by turning a handle, etc...) yet
this is precisely what the fact that swapping makes no difference to
the future evolution of things prevents (*cf.* Dasgupta 2009).
This makes admitting the possibility of swapping seem unnecessary. If
we also accept the (arguably reasonable) Eleatic principle according
to which only changes that matter empirically/causally should count as
genuine, we can draw the even stronger conclusion that swapping is not
genuinely possible, and, hence, that any account of individuation from
which it follows that it is, should be
abandoned.[19]
To accept SI does not immediately block swapping (Schaffer 2001: 250).
For, SI (just like OI and PI) is a principle about trope individuation
that holds *intra*-worldly. In this case: within any given
world, no two exactly similar tropes are at zero distance from each
other. Swapping, on the other hand, concerns what is possibly true (or
not) of exactly similar tropes considered *inter*-worldly. But
this means that, although SI does not declare swapping possible, it
doesn't rule it out either. According to the proponent of SI,
this is actually a good thing. For there is one possibility that it
would be unfortunate if one's principle of individuation
*did* block, namely the possibility--called
*sliding*--that this red-trope *here* could have
been *there* had the wind blown differently (Schaffer 2001:
251). To get the desired result (i.e., to block swapping while
allowing for sliding), Schaffer suggests we combine trope theory with
SI *and* a Lewisian counterpart theory of transworld identity
(Lewis 1986). The result is an account according to which exactly
resembling tropes are *intra*-worldly identical if they inhabit
the same position in space-time. And according to which they are
*inter*-worldly counterparts, if they are distinct, yet stand
in sufficiently similar distance- and other types of relations to
their respective (intra-worldly) neighbors. With this addition in
place, Schaffer claims, a trope theory which individuates its posits
with reference to their spatiotemporal position *will* make
room for the possibility of sliding, because (2001: 253):
>
>
> On the counterfactual supposition of a shift in wind, what results is
> a redness exactly like the actual one, which is in perfectly
> isomorphic resemblance relations to its worldmates as the actual one
> is to its worldmates, with just a slight difference in distance with
> respect to, e.g., the roundness of the moon.
>
>
>
...yet it *won't* allow for the possibility of
swapping, because:
>
>
> ...the nearest relative of the redness of the rose which is
> *here* at our world would be the redness still *here*
> 'post-swap'. The redness which would be here has exactly
> the same inter- and intraworld resemblance relations as the redness
> which actually is here, and the same distance relations, and hence is
> a better counterpart than the redness which would be
> *there*.
>
>
>
This is not necessarily a reason to prefer SI over PI, however. For,
PI, just like SI, is an inter-worldly principle of individuation,
which means that it, just like SI, could be combined with a Lewisian
counterpart theory, thereby preventing swapping yet making room for
sliding. It is, in other words, the counterpart theory, and not SI (or
PI), which does all the work. In any case, it is not clear that
intra-worldly swapping *is* an empty possibility. According to
Ehring, there are circumstances in which a series of slidings
constitute one case of swapping, something that he thinks would make
swapping more a reason *for* than against PI (Ehring 2011:
81-85).
*Piling*: Even if swapping does not give us a reason to prefer
SI over PI, perhaps its close cousin 'piling' does.
Consider a particular red rose. Given trope theory, this rose is red
because it is partly constituted by a redness-trope. But what is to
prevent *more than one*--even indefinitely
many--exactly similar red-tropes from partly constituting this
rose? Given PI: nothing. It is however far from clear how one could
empirically detect that the rose has more than one redness trope, just
like it is not clear how one could empirically detect how many redness
tropes it has, provided it has more than one. This is primarily
because it is far from clear how having more than one redness trope
could make a causal difference in the world. But if piling makes no
empirical/causal difference, then given a (plausible) Eleatic
principle, the possibility of piling is empty, which means that PI
ought to be rejected (Armstrong 1978: 86; *cf.* also Simons
1994: 558; Schaffer 2001: 254, *fn*. 11).
In defense of PI, its proponents now point to a special case of
piling, called 'pyramiding' (an example being a 5 kg
object consisting of five 1 kg tropes). Pyramiding *does* seem
genuinely possible. Yet, if piling is ruled out, so is pyramiding
(Ehring 2011: 87ff.; *cf.* also Armstrong 1997: 64f.; Daly
1997: 155). According to Schaffer, this is fine. For, although
admittedly not quite as objectionable as other types of piling (which
he calls 'stacking'), pyramiding faces a serious problem
with predication: if admitted, it will be true of the 5 kg object that
"[i]t has the property of weighing 1 kg" (Schaffer 2001:
254). Against this, Ehring has pointed out that to say of the 5 kg
object that "[i]t has the property of weighing 1 kg" is at
most pragmatically odd, and that, even if this oddness is regarded as
unacceptable, to avoid it would not require the considerable
complication of one's theory of predication imagined by Schaffer
(Ehring 2011: 88-91).
According to Schaffer, the best argument for the possibility of
piling--hence the best argument against SI--is rather
provided by the existence of so-called bosons (photons being one
example). Bosons are entities which do not obey Pauli's
Exclusion Principle, and hence such that two or more bosons
*can* occupy the same quantum state. A 'one-high'
boson-pile is hence empirically distinguishable from a
'two-high' one, which means that the possibility of piling
in general is not ruled out even if we accept an Eleatic principle.
Schaffer (2001: 255) suggests we solve this problem for SI by
considering the wave--not the particle/boson--as the way the
object 'really' is. But this solution comes with
complications of its own for the proponent of SI. For, "[t]he
wave function lives in configuration space rather than physical space,
and the ontology of the wave function, its relation to physical space,
and its relation to the relativistic conception of spacetime which SI
so naturally fits remain deeply mysterious" (Schaffer 2001:
256).[20]
## 3. Tropes as Building Blocks
As we have seen, tropes can be conceptualized, not just as
particularized *ways* things are, but also--and on some
versions of trope theory, primarily--as that out of which
everything else there is, is constructed. Minimally, this means that
tropes must fulfill at least two constructive tasks: that of making up
(the equivalent of) the realist's universal, and that of making
up (the equivalent of) the nominalist's concrete particular.
### 3.1 Tropes and Universals
How can distinct things have things--their properties--in
common? This is the problem of 'the One over Many'
(*cf.* e.g., Rodriguez-Pereyra 2000; Maurin 2022: sect. 2).
Universals provide a straightforward solution to this problem:
distinct things can have things in common, because there is a type of
entity--the universal--capable of existing in (and hence
characterizing) more than one object at once. The trope
theorist--at least if she does not accept the existence of
universals *in addition to*
tropes[21]--does
not have recourse to entities that can be likewise identical in
distinct instances. She must therefore come up with an alternative
solution. Here I'll consider three.
*The Resemblance Class Theory:* The most commonly accepted
trope-solution to the problem of the One over Many takes two objects
*a* and *b* to 'share' a
property--*F-ness*--if at least one of the tropes
that make up *a* belongs to the same (exact) resemblance class
as at least one of the (numerically distinct) tropes that make up
*b* (*cf.* Williams 1953*a*: 10; Campbell 1990:
31f.; *cf.* also Lewis 1986: 64f.). Exact resemblance is an
*equivalence relation* (although *cf.* Mormann 1995 for
an alternative view), which means that it is a *symmetrical*,
*reflexive*, and *transitive* relation. Because it is an
equivalence relation, exact resemblance partitions the set of tropes
into mutually exclusive and jointly exhaustive classes. Exact resemblance
classes of tropes, thus understood, function more or less as the
traditional universal does. Which is why proponents of this view think
the problem can be solved with reference to them.
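The partitioning behavior just described can be made concrete with a small sketch (all names and the toy "nature" labels are hypothetical illustrations, not part of any trope theorist's formalism). Tropes are modeled as (bearer, nature) pairs, and exact resemblance is stipulated to hold just when two tropes have the same primitive nature; since that relation is reflexive, symmetrical, and transitive, it partitions the tropes into disjoint classes, which then play the role of universals:

```python
from collections import defaultdict

# Toy tropes: (bearer, primitive nature) pairs -- purely illustrative.
tropes = [
    ("a", "redness"), ("a", "sphericity"),
    ("b", "redness"), ("b", "cubicity"),
]

def resemblance_classes(tropes):
    """Partition tropes by exact resemblance (here: sameness of nature)."""
    classes = defaultdict(set)
    for trope in tropes:
        classes[trope[1]].add(trope)   # one disjoint class per shared nature
    return list(classes.values())

def share_property(x, y, tropes):
    """x and y 'share' a property iff some trope of x belongs to the same
    (exact) resemblance class as some trope of y."""
    for cls in resemblance_classes(tropes):
        if any(t[0] == x for t in cls) and any(t[0] == y for t in cls):
            return True
    return False

print(share_property("a", "b", tropes))  # a and b share redness -> True
```

Note that nothing repeatable is posited here: the "property" *a* and *b* share is exhausted by the two numerically distinct redness-tropes and the class they jointly fall into.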
A point of contention among those who hold this view is whether accepting
it means having to accept the existence of resemblance relations. Those
who think it doesn't point out that resemblance is an
*internal* relation which supervenes on whatever it relates:
that the (degree of) resemblance between distinct tropes is entailed
simply given their
existence.[22]
Assuming that our ontology is 'sparse' (a thought many
believe can be independently justified, *cf.* e.g., Schaffer
2004 and Armstrong 1978), only what is minimally required to make true
all truths exists. Which means that, if resemblance is an internal
relation, what exists when distinct tropes resemble each
other--and so what plays the role of the realist's
universal--is nothing but the resembling tropes themselves
(Williams 1963: 608; Campbell 1990: 37f.; *cf.* also Armstrong
1989: 56).
Does it follow from the fact that exact resemblance must obtain given
the existence of its relata, that it is no ontological addition to
them? Some philosophers (Daly 1997: 152 is among them) do not think
so. But if it doesn't follow, some have argued, the trope
theorist must combat a trope theoretical version of what has become
known as 'Russell's regress'. Russell's
regress (perhaps better: Russell's regress *argument*) is
an argument to the effect that, if some particulars (Russell was
thinking of concrete particulars, not of tropes) (exactly) resemble
each other, then either the relation of resemblance exists *and is
a universal*, or we end up in (vicious) infinite regress (Russell
1997 [1912]: 48; *cf.* also Kung 1967). Chris Daly (1997:
149) provides us with a trope-theoretical version of this
argument:[23]
>
>
> Consider three concrete particulars which are the same shade of
> red...each of these concrete particulars has a red
> trope--call these tropes *F*, *G*, and
> *H*--and these concrete particulars exactly resemble each
> other in colour because *F*, *G*, and *H* exactly
> resemble each other in colour. But it seems that this account is
> incomplete. It seems that the account should further claim that
> resemblance tropes hold between *F*, *G*, and
> *H*. That is, it seems that there are resemblance tropes
> holding between the members of the pairs *F* and *G*,
> *G* and *H*, and *F* and *H*... Let
> us call the resemblance tropes in question *R*1,
> *R*2, and *R*3...each of
> these resemblance tropes in turn exactly resemble each other.
> Therefore, certain resemblance tropes hold between these
> tropes...we are launched on a regress.
>
>
>
This regress is a problem only if it is *vicious*. The most
convincing reason for thinking that it is not is provided if we
consider the 'pattern of dependence' it
instantiates.[24]
For, as we have seen, even those who do not think that the
internality of exact resemblance makes it a mere
'pseudo-addition' to its subvenient base, agree that
resemblance, whatever it is, is such that its existence is necessarily
incurred simply given the existence of its relata. But then, no matter
how many resemblances we regressively generate, ultimately they all
depend for their existence on the existence of the resembling tropes,
which resemble each other because of their individual nature, which is
primitive. This means that the existence of the regress in no way
contradicts--it does not function as a *reductio*
against--the resemblance of the original tropes. On the contrary;
it is *because* the tropes resemble each other, that the
regress exists. Therefore, the regress is benign (*cf.* e.g.,
Campbell 1990: 37; Maurin 2013).
This response only works if the nature of individual
tropes--*their being what they are*--is primitive and
not further analyzable (i.e., it only works if we assume a standard
view of the nature of tropes). To see this, compare the standard view
with a view with which it is often confused: resemblance nominalism.
On a version of the trope view according to which universals are
resemblance classes of tropes, tropes have the same nature if they
resemble each other. Yet, importantly, they resemble each other (or
not) *in virtue of the (primitive) nature they each
'have'* (or 'are'). Resemblance
nominalism, on the other hand, is the view that two objects have the
natures they do in virtue of the resemblance relations which obtain
between them. This means that, whether they resemble or not, is not
decided given the existence and nature of the objects themselves.
Rather, the pattern of dependence is the other way around. And
this--arguably--makes the regress vicious. Perhaps for that
reason, resemblance nominalism has no explicit proponent among the
trope theorists.
*The Natural Class Theory*: Those not convinced that
'Daly's regress' is benign might prefer a view
(first defended by Ehring 2011: 175f) according to which the trope is
not what it is either primitively or because of whatever resemblance
relations it stands in to other tropes, but because of the natural
class to which it belongs. Accepting this view provides us with a new
solution to the problem of the One over Many, one that does not depend
on the existence (or not) of exact resemblance relations. On this
view, more precisely, two objects *a* and *b*
'share' a property--*F-ness*--if at
least one of the tropes that make up *a* belongs to the same
natural class as at least one of the tropes that make up *b.*
The problem with this alternative is that it appears
to--implausibly--turn explanation on its head. If accepted,
tropes do not belong to this or that class because of their nature.
Rather, tropes have the natures they do because they belong to this or
that class.
*The Trope-Kind Theory*: Although in his (1953*a*) he
briefly 'dallied' (his word) with the view that, to be a
universal is to be a resemblance class of tropes, Williams soon
thereafter changed his mind (*cf*. esp. his 1986 [1959] and
1963).[25]
According to the view he adopted instead of the resemblance view,
universals are neither "made" nor "discovered"
but are something we acknowledge "by a relaxation of the
identity conditions of thought and language" (1986 [1959]: 8;
*cf.* also Campbell 1990: 44). If the F-ness of *a* and
the F-ness of *b* are counted in a way that is subject to the
rule that anything indiscernible is identical, their sameness is
explained with reference to the universal they share. And if the
F-ness of *a* and the F-ness of *b* are counted in a way
that *isn't* subject to that rule, their sameness is
explained with reference to their individual, distinct, yet exactly
resembling tropes. Importantly, Williams thinks of this as a kind of
realism--an *immanent* realism--about
universals (Fisher 2017: 346; *cf.* also Fisher 2018 and
Heil 2012: 102f.). It takes universals to be "present in, and in
fact components of, their instances" (Williams 1986 [1959]: 10).
Yet, as noted by Fisher (2017: 346) tropes are still the primitive
elements of being. Universals are real in virtue of being
*mind-independent*. Yet, universals aren't fundamental
(Fisher: ibid.):
>
>
> Their reality is determined by mind-independent facts about tropes.
> Tropes manifest universals in the sense that universals are nothing
> over and above property instances as tropes are by their nature of
> kinds.
>
>
>
The trope-kind theory offers an interesting and surprisingly little
discussed solution to the problem of the One over Many. So far, it is
a view with few if any explicit contemporary proponents (though
*cf*. Paul 2002, 2017 and van Inwagen 2017:
348f.[26]).
It is not unlikely that this state of affairs is now about to
change.
### 3.2 Tropes and Concrete Particulars
The second constructive task facing the trope theorist is that of
building something that behaves like a concrete particular, using only
tropes. Exactly how a concrete particular behaves is of course a
matter that can be debated. This is not a debate to which the trope
theorist has had very much--or at least not anything very
original--to contribute. Instead, the trope theoretical
discussion has been focused on an issue that needs solving
*before* questions concerning what a concrete particular can or
cannot do more precisely become relevant: the issue of *if* and
*how* tropes make up concrete particulars in the first
place.
Whether this issue is best approached by considering if and how tropes
can make up or ground the existence of what we might call
'ordinary' or 'everyday' objects, or if it is
better to concentrate instead on the world's simplest, most
fundamental, objects--like those you find discussed in e.g.,
fundamental physics--is another issue on which trope theorists
disagree. Campbell thinks we should concentrate on the latter sort of
object. In particular, he thinks we should concentrate on objects that
have no other objects as parts, as that way we avoid confusing
'substantial' complexity (and unity) with the--here
relevant--qualitative one. David Robb (2005) and Kris McDaniel
(2001) disagree. This may in part be due to the fact that they both
(*cf.* also Paul 2002, 2017) think that objects are
mereologically composed *both* on the level of their
substantial parts and on the level of their
qualitative--trope--parts.[27]
According to a majority of the trope theorists, objects are bundles of
tropes. The alternative is to understand objects as complexes
consisting of a substrate in which tropes are instantiated (a view
defended by e.g., Martin 1980; Heil 2003; and Lowe 2006). Indeed,
according to D. M. Armstrong (1989, 2004)--a staunch but
comparatively speaking rather friendly trope-critic--this
minority view *ought to* be accepted by the trope theorist.
Armstrong is most likely wrong, however: several reasons exist for why
one ought to prefer the bundle view (Maurin 2016). One such reason has
to do with parsimony. If you adopt a substrate-attribute view, you
accept the existence of substrates on top of the existence of tropes.
Accepting this additional category of entity makes at least some sense
if properties are universals. For if objects are bundles of universals
and universals are entities which are numerically identical across
instances, then if object *a* is qualitatively identical to
object *b*, *a* is also numerically identical to
*b*. Which means that, if objects are bundles of universals,
the Identity of Indiscernibles is not just true, but necessarily true,
a consequence few universal realists have been prepared to accept
(indeed, a consequence that may turn out to be unacceptable *as a
matter of empirical fact*--*cf.* e.g., French 1988).
If objects are substrates exemplifying universals, on the other hand,
although *a* and *b* are qualitatively identical, they
are nevertheless distinct. They are distinct, that is, *in virtue
of* being partly constituted by (primitively) distinct
substrates.
This sort of argument in favor of the substrate-attribute view cannot
be recreated for a view on which properties are tropes, however. For,
if objects are bundles of tropes, because tropes are particulars
*not* universals, even if *a* and *b* are
qualitatively identical, they are not numerically identical. And they
are not numerically identical *because* they are constituted by
numerically distinct tropes. No need for substrates!
According to the bundle view, objects consist of, are made up by, or
are grounded in, a sufficient number of mutually *compresent*
and/or in some other way mutually dependent tropes (*cf.* e.g.,
McDaniel 2001, 2006 and Giberman 2014 for slightly different takes on
bundling). What is compresence? When the same question was asked about
(exact) resemblance, the trope theorist had the option of treating the
relation as a 'pseudo-addition'. This was because
resemblance is an internal relation and so holds necessarily simply
given the existence of its relata. According to most trope-theorists,
however, compresence is an *external* relation, and hence a
real addition to the tropes it
relates.[28]
But then adding compresence gives rise to an infinite regress (often
called 'Bradley's regress' after Bradley 1930 [1893];
*cf.* also Armstrong 1978; Vallicella 2002 and 2005; Schnieder
2004; Cameron 2008, 2022; Maurin 2012).
Unlike what was true in the case of resemblance, this regress is most
likely a vicious regress. This is because the 'pattern of
dependence' it instantiates is the opposite of that instantiated
by the resemblance regress. In the resemblance case, for tropes
*t*1, *t*2, and
*t*3 to exactly resemble each other, it is enough
that they exist. Not so in the compresence case. Tropes
*t*1, *t*2, and
*t*3 could exist and not be compresent, which means
that in order to ensure that they *are* compresent, a
compresence-trope, *c*1, must be added to the
bundle. But *c*1 could exist without being
compresent with those very tropes. Therefore, in order for
*t*1, *t*2, *t*3
*and* *c*1 to be compresent, there must be
something--call it *c*2--that makes them
so. But since *c*2 could exist and not be compresent
with *t*1, *t*2,
*t*3 and *c*1, it too needs
something that ensures its compresence with those entities. Enter
*c*3. And so on. The existence of this regress
arguably contradicts--and hence functions as a *reductio*
against--the compresence of the original (first-order) tropes
and, thereby, the (possible) existence of the concrete particular.
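The generative structure of the regress can be sketched in a few lines (a toy model only; the names `t1`, `c1`, etc. are hypothetical). Because compresence is external, the mere existence of some tropes does not make them compresent: a further compresence trope must be added at each round, and no finite round ever closes the bundle:

```python
def bundle_with_compresence(tropes, depth):
    """Return the bundle after `depth` rounds of adding the compresence
    trope that each previous round newly requires."""
    bundle = list(tropes)
    for n in range(1, depth + 1):
        # c_n must relate everything added so far -- but, being just
        # another trope, it could itself exist un-compresent with them,
        # so round n+1 will demand c_{n+1} in turn.
        bundle.append(f"c{n}")
    return bundle

base = ["t1", "t2", "t3"]
print(bundle_with_compresence(base, 3))  # ['t1', 't2', 't3', 'c1', 'c2', 'c3']
```

The point of the toy is simply that the demand regenerates at every stage: unlike the resemblance case, nothing at any finite depth suffices to secure the compresence of the original tropes.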
Since concrete particulars (possibly) exist, something must be wrong
with this argument. One option is to claim that compresence is
internal after all, in which case the regress (if there even is one)
is benign (Molnar 2003; Heil 2003 and 2012; *cf.* also
Armstrong 2006). This may seem attractive especially to those who
think of their tropes as *non-transferable* and as *ways
things are*. Even given this way of thinking of the nature of the
trope, however, to take compresence as internal means having to give
up what are arguably some deeply held modal beliefs. For even if you
have reason to think that properties must be 'borne' by
some object, to be able to solve the regress-problem one would have to
accept the much stronger thesis that every trope must be borne by a
*specific* object. If the *only* reason we have for
thinking that compresence is internal in this sense is that this
solves the problem with Bradley's regress, therefore, we should
opt to go down this route as a last resort only (*cf.* Cameron
2006; Maurin 2010).
As a way of saving at least some of our modal intuitions while still
avoiding Bradley's regress, Simons (1994; *cf.* also
Keinanen 2011 and Keinanen and Hakkarainen 2014 for a
slightly different version of this
view[29])
suggests we view the concrete particular as constituted partly by a
'nucleus' (made up from mutually and specifically
dependent tropes) and partly--at least in the normal
case--by a 'halo' (made up from tropes that depend
specifically on the tropes in the nucleus). The result is a structured
bundle such that, although the tropes in the nucleus at most depend
for their existence on the existence of tropes *of the same
kind* as those now in its halo, they do not depend specifically on
those tropes. In this way, at least some room is made for contingency,
yet Bradley's regress is avoided. For, as the tropes in the halo
depend specifically for their existence on the tropes that make up the
nucleus, their existence is enough to guarantee the existence of the
whole to which they belong. This is better but perhaps not good
enough. For, although the same object could now have had a slightly
different halo, the possibility that the tropes that actually make up
the halo could exist and not be joined to *this* particular
nucleus is ruled out with no apparent justification (other than that
this helps its proponent solve the problem with the Bradley regress)
(*cf.* also Garcia 2014 for more kinds of criticism of this
view).
Finally, according to several trope theorists who otherwise differ
considerably among themselves, we should stop bothering with the (nature and
dependence of the) related tropes and investigate instead the
(special) nature of compresence itself. This seems intuitive enough.
After all, is it not the business of a relation to relate? According
to one suggestion along these lines (defended in Simons 2010; Maurin
2002, 2010 and 2011; and Wieland and Betti 2008; *cf.* also
Mertz 1996, Robb 2005 and Giberman 2014 for similar views),
non-relational tropes have an existence that is independent of the
existence of some specific--either non-relational or
relational--trope, but relational tropes (including compresence)
depend specifically for their existence on the very tropes they
relate. This means that if *c*1 exists, it
*must* relate the tropes it in fact relates, even though those
tropes might very well exist and not be compresent (at least not with
each other). There is, then, no regress, and except for
*c*1, the tropes involved in constituting the
concrete particular could exist without being compresent with each
other. And this, in turn, means that our modal intuitions are left
more or less
intact.[30]
According to Mertz, moreover, to be able to do the unifying work for
which it is introduced, compresence cannot be a universal. If it were,
then if one of the concrete particulars whose constituents it joins
ceases to exist, so will every other concrete particular unified by
the same (universal) relation of compresence. But, as Mertz points
out, "this is absurdly counterfactual!" (Mertz 1996: 190).
Nor can it be a state of affairs. For, states of affairs are in
themselves complexes, and so could not be used to solve the Bradley
problem.[31]
It seems, then, that compresence, if understood in a way that
blocks the regress, is a trope. Assuming that Bradley's regress
threatens *any* account according to which many things make up
one unified thing (i.e., assuming that it does not only threaten the
trope-bundle theorist), that there is this threat may therefore turn
out to be a reason *in favor of* positing tropes (Maurin
2011).
The suggestion is not without its critics. To these belongs MacBride,
who argues that, "...to call a trope relational is to pack
into its essence the relating function it is supposed to perform
without explaining what Bradley's regress calls into question,
viz. the capacity of relations to relate" (2011: 173). Rather
than solve the problem, in other words, MacBride thinks the suggestion
"transfers our original puzzlement to that thing [i.e., the
compresence-relation]". For, he asks "how can positing the
existence of a relational trope explain anything about its capacity to
relate when it has been stipulated to be the very essence of R that it
relates *a* and *b*. It is as though the capacity of
relational tropes to relate is explained by mentioning the fact that
they have a 'virtus relativa'" (ibid.).
Assuming we agree that there is *something* that needs
explaining (i.e., assuming we agree that how several tropes
can--contingently--make up one object needs explaining), we
can either reject a proposed solution because we prefer what we think
is a better solution, or we can reject it because it is *in
itself* bad or unacceptable (irrespective of whether there are any
alternative solutions on offer). MacBride appears to suggest we do the
latter. More precisely, what MacBride proposes is that the solution
fails because it leaves unexplained the special 'power' to
relate it attributes to the compresence trope. If this is why the
suggestion fails, however, then either this is because *no*
explanation that posits something ('primitively') apt to
perform whatever function we need explained, is acceptable, or it is
because *in this particular case*, an explanation of this kind
will not do. If the former, the objection risks leading to an
overgeneration of explanatory failures. Everyone will at some point
need to posit some things as fundamental. And in order for those
fundamental posits to be able to contribute somehow to the theory in
question, it seems we must be allowed to say something about them. We
must, to use the terminology introduced by Schaffer, outfit our
fundamental posits with axioms. But then, as Schaffer also points out
(2016: 587): "it is a bad question--albeit one that has
tempted excellent philosophers from Bradley through van Fraassen and
Lewis--to ask how a posit can do what its axioms say, for that
work is simply the business of the posit. End of story".
If, on the other hand, the problem is isolated to the case at hand, we
are owed an explanation of what makes this case so special. MacBride
complains that if the 'explanatory task' is that of
accounting for the capacity of compresence to relate, being told that
compresence has that capacity 'by nature', will not do.
Perhaps he is right about this. But, then, the explanatory task is
arguably not that one, but rather the task of accounting for the
possible existence of concrete objects (contingently) made up from
tropes. If *this* is the explanatory task, it is far from clear
why positing a special kind of (relational) trope that is 'by
nature' apt to perform its relating function, will not do as an
explanation.
## 4. Trope Applications
According to the trope proponent, if you accept the existence of
tropes, you have the means available to solve or to dissolve a number
of serious problems, not just in metaphysics but in philosophy
generally. In what follows, the most common trope-applications
proposed in the literature are very briefly introduced.
### 4.1 Tropes in Causation and Persistence
According to a majority of the trope theorists, an important reason
for thinking tropes exist is the role they play in causation. It is
after all not the whole stove that burns you, it is its temperature
that does the damage. And it is not any temperature, nor temperature
in general, which leaves a red mark. That mark is left by *the
particular temperature had by this particular stove now*. It makes
sense, therefore, to say that the mark is left by the stove's
temperature-*trope*, which means that tropes are very good
candidates for being the world's basic causal relata (Williams
1953*a*; Campbell 1990; Denkel 1996; Molnar 2003; Heil 2003;
Garcia-Encinas 2009; Ehring 2011).
That tropes *can* play a role in causation can hardly be
doubted. But can this role also provide the trope-proponent with a
reason to think that tropes exist? According to the theory's
critics, it cannot. The role tropes (can) play in causation does not
provide the trope proponent with any special reason to prefer an
ontology of tropes over alternative ontologies. More specifically, it
does not give her any special reason to prefer an ontology of tropes
over one of states of affairs or events. Just like tropes, states of
affairs and events are particular. Just like tropes, they are
localized. And, just like tropes, they are non-repeatable (although at
least the state of affairs contains a repeatable item--the
universal--as one of its constituents). Every reason for thinking
that tropes are the world's basic causal relata is therefore
also a reason to think that this role is played by states of affairs
and/or
events.[32]
Ehring disagrees. To see why, he asks us to consider the following
simple scenario: a property-instance at *t*1 is
causally responsible for an instance *of the same property* at
*t*2. This is a case of causation which is also a
case of *property persistence*. But what does property
persistence involve? According to Ehring, property persistence is not
just a matter of something not changing its properties. For, even in
cases where nothing discernibly changes, the property instantiated at
*t*1 could nevertheless have been replaced by
another property of the same type during the period between
*t*1 and *t*2. To be able to
ontologically explain the scenario, therefore, we first need an
account of property persistence able to distinguish 'true'
property persistence from cases of 'non-salient property
change' or what may also be called property *type*
persistence. But, Ehring claims, this is something a theory according
to which property instances are states of affairs cannot do (this he
demonstrates with the help of a number of thought experiments, which
space does not allow me to reproduce here, but *cf.* Ehring
1997: 91ff). Therefore, causation gives us reason to think that tropes
exist.
Ehring is not the only one who regards the relationship between
(theories of) persistence and tropes as an intimate one. According to
McDaniel (2001)--who defends a theory (TOPO) according to which
ordinary physical objects are mereological fusions of monadic and
polyadic tropes--adopting (his version of) the trope view can be
used to argue for one particular theory of persistence:
3-dimensionalism.[33]
And according to Benovsky (2013), because (non-presentist)
endurantism is *incompatible* with the view that properties are
(immanent) universals, the endurantist *must* embrace trope
theory.[34]
According to Garcia (2016), finally, what role tropes can play in
causation will depend on how we conceive of the nature of tropes. If
tropes are what he calls 'modifiers', they do not have the
character they confer, a fact that would seem to make them less
suitable as causal relata. Not so if tropes are of the module kind
(and so have the character they confer). But if tropes have the
character they confer, Garcia points out, we may always ask, e.g.: Is
it the couch or is it the couch's couch-shaped mass-trope that
causes the indentation in the carpet? Garcia thinks we have reason to
think they both do. The couch causes the indentation by courtesy, but
the mass trope would have sufficed to cause it even if it had existed
alone, unbundled with the couch's other tropes. But this
suggests that if tropes are of the module kind, we end up with a world
that is (objectionably) systematically causally overdetermined. The
role tropes play in causation may therefore be more problematic than
what it might initially seem (though *cf.* Giberman 2022 for an
objection to Garcia's argument).
### 4.2 Tropes and Issues in the Philosophy of Mind
Suppose Lisa burns herself on the hot stove. One of the causal
transactions that then follow can be described thus: Lisa removed her
hand from the stove *because* she felt pain. This is a
description which seems to pick out 'being in pain' as one
*causally relevant* property of the cause. That 'being in
pain' *is* a causally relevant property accords well with
our intuitions. However, to say it is leads to trouble. The reason for
this is that mental properties, like that of 'being in
pain', can be realized by physically very different systems.
Therefore, mental properties cannot be *identified* with
physical ones. On the other hand, we seem to live in a physically
closed and causally non-overdetermined universe. But this means that,
contrary to what we have supposed so far, Lisa did not remove her hand
because she felt pain. In general, it means that mental properties are
not causally relevant, however much they seem to be (*cf.* Kim
1989 for a famous expression of this problem).
If properties are tropes, some trope theorists have proposed, this
conclusion can be resisted (*cf.* Robb 1997; Martin and Heil
1999; Heil and Robb 2003; for a hybrid version *cf.* Nanay
2009; *cf.* also Gozzano and Orilia 2008). To see this, we need
first to disambiguate our notion of a property. This notion, it is
argued, is really *two* notions, namely:
* Property1 = that which imparts on an individual thing
its particular nature (property as *token*), and
* Property2 = that which makes distinct things the same
(property as *type*).
Once 'property' has been disambiguated, we can see how
mental properties can be causally relevant after all. For now, if
mental properties1 are tropes, they can be identified with
physical properties1. Mental properties2 can
still be distinguished from physical properties2, for
properties considered as *types* are--in line with the
standard view of tropes--identified with *similarity classes
of tropes*. When Lisa removes her hand from the stove because she
feels pain, therefore, she removes her hand *in virtue of*
something that is partly characterized by a trope which is such that
it belongs to a class of mentally similar tropes. This trope is
identical with a physical trope--it is *both* mental and
physical--because it *also* belongs to a (distinct)
similarity class of physically similar tropes. Therefore, mental
properties can be causally relevant in spite of the fact that the
mental is multiply realizable by the physical, and in spite of the
fact that we live in a physically closed and non-overdetermined
universe.
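The disambiguation can be sketched minimally as follows (the trope and class names are hypothetical illustrations): one and the same trope (a property-token) belongs to two different similarity classes (property-types), so the mental trope just is a physical trope even though the mental type is not the physical type:

```python
# One property-token: Lisa's trope, illustratively labeled.
lisa_pain_trope = "Lisa's pain/c-fiber-firing trope at t"

# Two property-types, modeled as similarity classes of tropes.
mental_type = {lisa_pain_trope, "Otto's silicon-based pain trope"}
physical_type = {lisa_pain_trope, "some other c-fiber-firing trope"}

# Token identity: the very trope in the mental class is in the physical class.
print(lisa_pain_trope in mental_type and lisa_pain_trope in physical_type)  # True

# Type distinctness: the two similarity classes are not the same class,
# which is what multiple realizability requires.
print(mental_type == physical_type)  # False
```

On this picture the causal work is done by the single token, so no overdetermination threatens, while the type-level distinction between the mental and the physical is preserved.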
This suggestion has been criticized. According to Paul Noordhof (1998:
223) it fails because it does not respect the "bulge in the
carpet constraint". For now the question that was ambiguously
asked about properties can be unambiguously asked about tropes: is it
in virtue of being mental or in virtue of being physical that the
trope is causally relevant for the effect (for a response,
*cf.* Robb 2001 and Ehring 2003)? And Sophie Gibb (2004) has
complained that the trope's simple and primitive nature makes it
unsuitable for membership in two such radically different classes as
that of the mentally and of the physically similar tropes,
respectively (for more reasons against the suggestion *cf.*
Macdonald and Macdonald 2006 and Zhang 2021).
### 4.3 Tropes and Perception
Another important reason for thinking that tropes exist, it has been
proposed, is the role tropes play in perception. That what we perceive
are the qualities of the things rather than the things themselves,
first, seems plausible (for various claims to this effect,
*cf.* Williams 1953*a*: 16; Campbell 1997 [1981]: 130;
Schaffer 2001: 247; *cf.* also Nanay 2012 and Almang
2013). And that the qualities we perceive are tropes rather than
universals or instantiations of universals (states of affairs) is,
according to Lowe, a matter that can be determined with reference to
our experience (*cf.* e.g., Skrzypulec 2021 for an argument to
the contrary). Lowe argues (1998: 205; *cf.* also, Lowe 2008;
Mulligan 1999):
>
>
> [W]hen I see the leaf *change* in colour--perhaps as it
> turned brown by a flame--I seem to see something *cease to
> exist* in the location of the leaf, namely, its greenness. But it
> could not be the *universal* greenness which ceases to exist,
> at least so long as other green things continue to exist. My opponent
> must say that really what I see is not something ceasing to exist, but
> merely the leaf's ceasing to instantiate greenness, or greenness
> ceasing to be 'wholly present' just here. I can only say
> that that suggestion strikes me as being quite false to the
> phenomenology of perception. The objects of perception seem, one and
> all, to be particulars--and, indeed, a causal theory of
> perception (which I myself favour) would appear to require this, since
> particulars alone seem capable of entering into causal relations.
>
>
>
A similar view is put forth by Mulligan et al. They argue (1984:
306):
>
>
> [W]hoever wishes to reject moments [i.e., tropes] must of course give
> an account of those cases where we seem to see and hear them, cases we
> report using definite descriptions such as 'the smile that just
> appeared on Rupert's face'. This means that he must claim
> that in such circumstances we see not just *independent things per
> se*, but also *things as falling under certain concepts* or
> *as exemplifying certain universals*. On some
> accounts...it is even claimed that we see the universal in the
> thing. But the friend of moments finds this counterintuitive. When we
> see Rupert's smile, we see something just as spatio-temporal as
> Rupert himself, and not something as absurd as a spatio-temporal
> entity that somehow contains a concept or a universal.
>
>
>
These are admittedly not very strong reasons for thinking that it is
tropes and not states of affairs that are the objects of perception.
For the view that our perception of a trope is not only distinct, but
also phenomenologically distinguishable, from our perception of a
state of affairs seems grounded in little more than its
proponent's introspective intuitions. States of affairs, just
like tropes, are particulars (*cf.* Armstrong 1997: 126 on the
"victory of particularity"). And to say, as Mulligan et
al. do, that the very idea of something spatiotemporal containing a
universal is absurd, clearly begs the question against the view they
are opposing.
### 4.4 Tropes and Semantics
That language furnishes the trope theorist with solid reasons for
thinking that there are tropes has been indicated by several trope
theorists and it has also been forcefully argued, especially by
Friederike Moltmann (2003, 2007, 2009, 2013*a* and
2013*b*; *cf.* also Mertz 1996: 3-6). Taking
(Mulligan et al. 1984) as her point of departure, Moltmann argues that
natural language contains several phenomena whose semantic treatment
is best spelled out in terms of an ontology that includes tropes.
*Nominalizations*, first, may seem to point in the opposite
direction. For, in the classical discussion, the nominalization of
predicates such as *is wise* into nouns fit to refer, has been
taken to count in favor of universal realism. A sub-class of
nominalizations--such as *John's wisdom*--can,
however, be taken to speak in favor of the existence of tropes. This
is a kind of nominalization which, as Moltmann puts it,
"introduce 'new' objects, but only partially
characterize them" (2007: 363). That these nominalizations refer
to tropes rather than to states of affairs, she argues, can be seen
once we consider the vast range of adjectival modifiers they allow
for, modifiers only tropes and not states of affairs can be the
recipients of (2009: 62-63; *cf.* also her 2003).
*Bare demonstratives*, next, especially as they occur in
so-called identificational sentences, provide another reason for
thinking that tropes exist (Moltmann 2013*a*). In combination
with the preposition *like*--as in *Turquoise looks
like that*--they straightforwardly refer to tropes. But even
in cases where they do not refer to tropes, tropes nevertheless
contribute to the semantics of sentences in which they figure. In
particular, tropes contribute to the meaning of sentences like
*This is Mary* or *That is a beautiful woman*. These are
no ordinary identity statements. What makes them stand out, Moltmann
points out, is the exceptional neutrality of the demonstratives in
subject position. These sentences are best understood in such a way
that the bare demonstratives that figure in them do not refer to
individuals (like Mary), but rather to perceptual features (which
Moltmann thinks of as tropes) in the situation at hand.
Identificational sentences, then, involve the identification of a
bearer of a trope *via* the denotation (if not reference) of a
(perceptual) trope.
*Comparatives*--like *John is happier than
Mary*--finally, are according to the received view such that
they refer to abstract objects that form a total ordering (so-called
degrees). According to Moltmann, a better way to understand these
sorts of sentences is with reference to tropes. *John is happier
than Mary* should hence be understood as *John's
happiness exceeds Mary's happiness*. Moltmann thinks this
way of understanding comparatives is preferable to the standard view,
because tropes are easier to live with than "abstract, rarely
explicit entities such as degrees or sets of degrees" (Moltmann
2009: 64).
Whether nominalizations, bare demonstratives and/or comparatives
succeed in providing the trope theorist with strong reasons to think
tropes exist will, among other things, depend on whether or not they
really do manage to distinguish between tropes and states of affairs.
Moltmann thinks they do but, again, this depends on how one
understands the nature of the items in question. It will also depend
on whether and how one thinks goings-on at the linguistic level can
tell us anything much about what there is at the ontological level. According
to quite a few trope theorists (*cf.* esp. Heil 2003), we
should avoid arguing from the way we conceptualize reality to
conclusions about the nature of reality itself. Depending on
one's take on the relationship between language and world,
therefore, semantics might turn out to have precious little to say
about the existence (or not) of tropes.
### 4.5 Tropes in Science
Discussions of what use can be made of tropes in science can be found
scattered in the literature. Examples include Rom Harré's
(2009) discussion of the role tropes play (and don't play) in
chemistry, and Bence Nanay's (2010) attempt to use tropes to
improve on Ernst Mayr's population thinking in biology. Most
discussions have however been focused on the relationship between
tropes and physics (Kuhlmann et al. 2002). Most influential in this
respect is Campbell's field-theory of tropes (defended in his
1990: Ch. 6; *cf.* also Von Wachter 2000) and Simons'
'nuclear' theory of tropes and the scientific use he
tentatively makes of it (Simons 1994; *cf.* also Morganti 2009
and Wayne 2008).
According to Campbell, the world is constituted by a rather limited
number of field tropes which, according to our (current) best science,
ought to be identified with the fields of gravitation,
electromagnetism, and the weak and strong nuclear forces (plus a
spacetime field). Standardly, these forces are understood as exerted
by bodies that are not themselves fields. Not so on Campbell's
view. Instead, matter is thought of as spread out and as present in
various strengths across a region without any sharp boundaries to its
location. What parts of the mass field we choose to focus on will be
to a certain degree arbitrary. A zone in which several fields all
sharply increase their intensity will likely be taken as one single
entity or particle. But given the overall framework, individuals of
this kind are to be viewed as "well-founded appearances"
(Campbell 1990: 151).
Campbell's views have been criticized by e.g., Christina
Schneider (2006). According to Schneider, the field ontology proposed
by Campbell (and by Von Wachter) fails, because the notion of a field
with which they seem to be working, is not mathematically
rigorous.[35]
And Matteo Morganti who, just like Campbell, wants to identify tropes
with entities described by quantum physics, finds several problems
with the identifications actually made by Campbell. He proposes
instead that we follow Simons and identify the basic constituents of
reality with the fundamental particles, understood as bundles of
tropes (Morganti 2009). The idea is that, if we take the basic
properties described by the Standard Model as fundamental tropes, then
the constitution of particles out of more elementary constituents can be
readily reconstructed (possibly by using the sheaf-theoretical
framework proposed by Mormann 1995, or the algebraic framework
suggested by Fuhrmann 1991).
### 4.6 Tropes and Issues in Moral Philosophy
Relatively little has so far been written on the topic of tropes in
relation to issues in moral philosophy and value theory. Two things
have however been argued. First, that tropes (and not, as is more
commonly supposed, objects or persons or states of affairs) are the
bearers of final value. Second, that moral non-naturalists (who hold
that moral facts are fundamentally autonomous from natural facts) must
regard properties as tropes to be able to account for the
supervenience of the moral on the natural.
That tropes are the bearers of (final) value is a view held by several
trope theorists. To say that what we value are the *particular
properties* of things and persons is *prima facie*
intuitive (Williams 1953*a*: 16). And since concrete
particulars--but not tropes--are sometimes the subjects of
simultaneous yet conflicting evaluations, tropes seem
*especially* suited for the job as (final) value-bearers
(Campbell 1997 [1981]: 130-131). That tropes are the
*only* bearers of final value has however been questioned.
According to Rabinowicz and Rønnow-Rasmussen (2003), this is
because different pro-attitudes are fitting with respect to different
kinds of valuable objects. However, according to Olson (2003), even if
this is so, it does not show that tropes are not the only bearers of
final value. For that conclusion only follows if we assume that what
we direct our evaluative attitude toward is indicative of where value is
localized. But final value should be understood strictly as the value
something has *for its own sake*, which means that if e.g., a
person is valuable because of her courage, then she is not valuable
for her own sake but is valuable, rather, for the sake of one of her
properties (i.e., her tropes). But this means that, although the
evaluative attitude may well be directed at a person or a thing, the
person or thing is nevertheless valued because of, or for the sake of,
the tropes which characterize it.
Non-naturalists, next, are often charged with not being able to
explain what appears to be a necessary dependence of moral facts on
natural facts. Normally this dependence is explained in terms of
supervenience, but in order for such an account to be compatible with
the basic tenets of moral non-naturalism, it has been argued, this
supervenience must, in turn, be explainable in purely non-naturalistic
terms (for an overview of this debate, *cf.* Ridge 2018).
According to Shafer-Landau (2003) (as interpreted by Ridge 2007) this
problem is solved if moral and physical properties in the sense of
kinds, are distinguished from moral and physical properties in the
sense of tokens, or tropes. For then we can say, in analogy with what
has been suggested in the debate on the causal relevance of mental
properties, that although (necessarily) every moral trope is
constituted by some concatenation of natural tropes, it does not
follow that every moral type is identical to a natural type. This
suggestion is criticized in Ridge (2007). |
trust | ## 1. The Nature of Trust and Trustworthiness
Trust is an attitude we have towards people whom we hope will be
trustworthy, where trustworthiness is a property not an attitude.
Trust and trustworthiness are therefore distinct although, ideally,
those whom we trust will be trustworthy, and those who are trustworthy
will be trusted. For trust to be plausible in a relationship, the
parties to the relationship must have attitudes toward one another
that permit trust. Moreover, for trust to be well-grounded, both
parties must be trustworthy. (Note that here and throughout, unless
specified otherwise, "trustworthiness" is understood in a
thin sense according to which *X* is trustworthy for me just in
case I can trust *X*.)
Trusting requires that we can (1) be vulnerable to
others--vulnerable to betrayal in particular; (2) rely on others
to be competent to do what we wish to trust them to do; and (3) rely
on them to be willing to do
it.[2]
Notice that the last two conditions refer to a connection between
trust and reliance. For most philosophers, trust is a kind of reliance
although it is not *mere* reliance (Goldberg 2020). Rather,
trust involves reliance "plus some extra factor" (Hawley
2014: 5). Controversy surrounds this extra factor, which generally
concerns why the trustor (i.e., the one trusting) would rely on the
trustee to be willing to do what they are trusted to do.
Trustworthiness is likewise a kind of reliability, although it's
not obvious what kind. Clear conditions for trustworthiness are that
the trustworthy person is competent and willing to do what they are
trusted to do. Yet this person may also have to be willing for certain
reasons or as a result of having a certain kind of motive for acting
(e.g., they care about the trustor).
This section explains these various conditions for trust and
trustworthiness and highlights the controversy that surrounds the
condition about motive and relatedly how trust differs from mere
reliance. Included at the end is some discussion about the nature of
distrust.
Let me begin with the idea that the trustor must accept some level of
vulnerability or risk (Becker 1996; Baier 1986). Minimally, what this
person risks, or is vulnerable to, is the failure by the trustee to do
what the trustor is depending on them to do. The trustor might try to
reduce this risk by monitoring or imposing certain constraints on the
behavior of the trustee; but after a certain threshold perhaps, the
more monitoring and constraining they do, the less they trust this
person. Trust is relevant "before one can monitor the actions of
... others" (Dasgupta 1988: 51) or when out of respect for
others one refuses to monitor them. One must be content with them
having some discretionary power or freedom, and as a result, with
being somewhat vulnerable to them (Baier 1986; Dasgupta 1988).
One might think that if one is relying while trusting--that is,
if trust is a species of reliance--then accepted vulnerability
would not be essential for trust. Do we not rely on things only when
we believe they will actually happen? And if we believe that, then we
don't perceive ourselves as being vulnerable. Many philosophers
writing on trust and reliance say otherwise. They endorse the view of
Richard Holton, who writes, "When I rely on something happening
... I [only] need to plan on it happening; I need to work around
the supposition that it will [happen]" (Holton 1994: 3). I need
not be certain of it happening and I could even have doubts that it
will happen (Goldberg 2020). I could therefore accept that I am
vulnerable. I could do that while trusting if trust is a form of
reliance.
What does trusting make us vulnerable to, in particular? Annette Baier
writes that "trusting can be betrayed, or at least let down, and
not just disappointed" (1986: 235). In her view, disappointment
is the appropriate response when one merely relied on someone to do
something but did not trust them to do it. To elaborate, although
people who monitor and constrain others' behavior may rely on
them, they do not trust them if their reliance can only be
disappointed rather than betrayed. One can rely on inanimate objects,
such as alarm clocks, but when they break, one is not betrayed though
one might be disappointed. This point reveals that reliance without
the possibility of betrayal (or at least "let down") is
not trust; people who rely on one another in a way that makes this
reaction impossible do not trust one another.
But does trust always involve the potential for betrayal?
"Therapeutic trust" may be an exception (Nickel 2007: 318;
and for further exceptions, see, e.g., Hinchman 2017). To illustrate
this type of trust, consider parents who
>
>
> trust their teenagers with the house or the family car, believing that
> their [children] may well abuse their trust, but hoping by such trust
> to elicit, in the fullness of time, more responsible and responsive
> trustworthy behaviour. (McGeer 2008: 241, her emphasis; see also
> Horsburgh 1960 and Pettit 1995)
>
>
>
Therapeutic trust, when let down, is likely to be merely disappointed
rather than betrayed. It is unusual in this respect (arguably) and in other
respects that will become evident later on in this entry. The rest of
this section deals with usual rather than unusual forms of trust and
trustworthiness.
Without relying on people to display some competence, we also
can't trust them. We usually trust people to do certain things,
such as look after our children, give us advice, or be honest with us,
which we wouldn't do if we thought they lacked the relevant
skills, including potentially moral skills of knowing what it means to
be honest or caring (Jones 1996: 7). Rarely do we trust people
completely (i.e., *A* simply trusts *B*). Instead,
"trust is generally a three-part relation: *A* trusts
*B* to do *X*" (Hardin 2002: 9)--or
"*A* trusts *B* with valued item *C*"
(Baier 1986) or *A* trusts *B* in domain *D*
(D'Cruz 2019; Jones
2019).[3]
To have trust in a relationship, we do not need to assume that the
other person will be competent in every way. Optimism about the
person's competence in at least one area is essential,
however.
When we trust people, we rely on them not only to be competent to do
what we trust them to do, but also to be willing or motivated to do
it. We could talk about this matter either in terms of what the
trustor expects of the trustee or in terms of what the trustee
possesses: that is, as a condition for trust or for trustworthiness
(and the same is true, of course, of the competence condition). For
simplicity's sake and to focus some of this section on
trustworthiness rather than trust, the following refers to the
motivation of the trustee mostly as a condition for
trustworthiness.
Although both the competence and motivational elements of
trustworthiness are crucial, the exact nature of the latter is
unclear. For some philosophers, it matters only that the trustee is
motivated, where the central problem of trustworthiness in their view
concerns the probability that this motivation will exist or endure
(see, e.g., Hardin 2002: 28; Gambetta 1988b). Jones calls these
"risk-assessment views" about trust (1999: 68). According
to them, we trust people whenever we perceive that the risk of relying
on them to act a certain way is low and so we rely on (i.e.,
"trust") them. They are trustworthy if they are willing,
for whatever reason, to do what they are trusted to do.
Risk-assessment theories make no attempt to distinguish between trust
and mere reliance and have been criticized for this reason (see, e.g.,
Jones 1999).
By contrast, other philosophers say that just being motivated to act
in the relevant way is not sufficient for trustworthiness; according
to them, the nature of the motivation matters, not just its existence
or duration. It matters in particular, they say, for explaining the
trust-reliance distinction, which is something they aim to do. The
central problem of trustworthiness for them is not simply whether but
also how the trustee is motivated to act. Will that person have the
kind of motivation that makes trust appropriate? Katherine Hawley
identifies theories that respond to this question as
"motives-based" theories (2014).
To complicate matters, there are "non-motives-based
theories", which are also not risk-assessment theories (Hawley
2014). They strive to distinguish between trust and mere reliance,
though not by associating a particular kind of motive with
trustworthiness. Since most philosophical debate about the nature of
trust and trustworthiness centers on theories that are either
motives-based or non-motives-based, let me expand on each of these
categories.
### 1.1 Motives-based theories
Philosophers who endorse this type of theory differ in terms of what
kind of motive they associate with trustworthiness. For some, it is
self-interest, while for others, it is goodwill or an explicitly moral
motive, such as moral integrity or
virtue.[4]
For example, Russell Hardin defines trustworthiness in terms of
self-interest in his "encapsulated interests" account
(2002). He says that trustworthy people are motivated by their own
interest to maintain the relationship they have with the trustor,
which in turn encourages them to encapsulate the interests of that
person in their own interests. In addition, trusting people is
appropriate when we can reasonably expect them to encapsulate our
interests in their own, an expectation which is missing with mere
reliance.
Hardin's theory may be valuable in explaining many different
types of trust relationships, including those between people who can
predict little about one another's motives beyond where their
self-interest lies. Still, his theory is problematic. To see why,
consider how it applies to a sexist employer who has an interest in
maintaining relationships with women employees, who treats them
reasonably well as a result, but whose interest stems from a desire to
keep them around so that he can daydream about having sex with them.
This interest conflicts with an interest the women have in not being
objectified by their employer. At the same time, if they were not
aware of his daydreaming--say they are not--then he can
ignore this particular interest of theirs. He can keep his
relationships with them going while ignoring this interest and
encapsulating enough of their other interests in his own. And this
would make him trustworthy on Hardin's account. But is he
trustworthy? The answer is "no" or at least the women
themselves would say "no" if they knew the main reason for
their employment. The point is that being motivated by a desire to
maintain a relationship (the central motivation of a trustworthy
person on the encapsulated interests view) may not require one to
adopt all of the interests of the trustor that would actually make one
trustworthy to that person. In the end, the encapsulated interests
view seems to describe only reliability, not trustworthiness. The
sexist employer may reliably treat the women well, because of his
interest in daydreaming about them, but he is not trustworthy because
of why he treats them well.
A different type of theory is what Jones calls a
"will-based" account, which finds trustworthiness only
where the trustee is motivated by goodwill (Jones 1999: 68). This view
originates in the work of Annette Baier and is influential, even
outside of moral philosophy (e.g., in bioethics and law, especially
fiduciary law; see, e.g., Pellegrino and Thomasma 1993, O'Neill
2002, and Fox-Decent 2005). According to it, a trustee who is
trustworthy will act out of goodwill toward the trustor, to what or to
whom the trustee is entrusted with, or both. While many readers might
find the goodwill view problematic--surely we can trust people
without presuming their goodwill!--it is immune to a criticism
that applies to Hardin's theory and also to risk-assessment
theories. The criticism is that they fail to require that the
trustworthy person care about (i.e., feel goodwill towards) the
trustor, or care about what the trustor cares about. As we have seen,
such caring appears to be central to a complete account of
trustworthiness.
The particular reason why care may be central is that it allows us to
grasp how trust and reliance differ. The above suggested that they
differ because only trust can be betrayed (or at least let down). But
why is that true? Why can trust be betrayed, while mere reliance can
only be disappointed? The answer Baier gives is that betrayal is the
appropriate response to someone on whom one relied to act out of
goodwill, as opposed to ill will, selfishness, or habit bred out of
indifference (1986: 234-5; see also Baier 1991). Those who say
that trusting could involve relying on people to act instead on
motives like ill will or selfishness will have trouble distinguishing
between trust and mere reliance.
While useful in some respects, Baier's will-based account is not
perfect. Criticisms have been made that suggest goodwill is neither
necessary nor sufficient for trustworthiness. It is not necessary
because we can trust other people without presuming that they have
goodwill (e.g., O'Neill 2002; Jones 2004), as we arguably do
when we put our trust in strangers.
As well as being unnecessary, goodwill may not be sufficient for
trustworthiness, and that is true for at least three reasons. First,
someone trying to manipulate you--a "confidence
trickster" (Baier 1986)--could "rely on your goodwill
without trusting you", say, to give them money (Holton 1994:
65). You are not trustworthy for them, despite your goodwill, because
they are not trusting you but rather are just trying to trick you.
Second, basing trustworthiness on goodwill alone cannot explain
unwelcome trust. We do not always welcome people's trust,
because trust can be burdensome or inappropriate. When that happens,
we object not to these people's optimism about our goodwill (who would
object to that?), but only to the fact that they are counting on us.
Third, we can expect people to be reliably benevolent toward us
without trusting them (Jones 1996: 10). We can think that their
benevolence is not shaped by the sorts of values that for us are
essential to
trustworthiness.[5]
Criticisms about goodwill not being sufficient for trustworthiness
have prompted revisions to Baier's theory and in some cases to
the development of new will-based theories. For example, in response
to the first criticism--about the confidence trickster--Zac
Cogley argues that trust involves the belief not simply that the
trustee will display goodwill toward us but that this person owes us
goodwill (2012). Since the confidence trickster doesn't believe
that their mark owes them goodwill, they don't trust this
person, and neither is this person trustworthy for them. In response
to the second criticism--the one about unwelcome
trust--Jones claims that optimism about the trustee's
goodwill must be coupled with the expectation that the trustee will be
"favorably moved by the thought that [we are] counting on
her" (1996: 9). Jones does that in her early work on trust where
she endorses a will-based theory. Finally, in response to the third
concern about goodwill not being informed by the sorts of values that
would make people trustworthy for us, some maintain that trust
involves an expectation about some shared values, norms, or interests
(Lahno 2001, 2020; McLeod 2002, 2020; Mullin 2005; Smith 2008). (To be
clear, this last expectation tends not to be combined with goodwill to
yield a new will-based theory.)
One final criticism of will-based accounts concerns how
"goodwill" should be interpreted. In much of the
discussion above, it is narrowly conceived so that it involves
friendly feeling or personal liking. Jones urges us in her early work
on trust to understand goodwill more broadly, so that it could amount
to benevolence, conscientiousness, or the like, *or* friendly
feeling (1996: 7). But then in her later work, she worries that by
defining goodwill so broadly we
>
>
> turn it into a meaningless catchall that merely reports the presence
> of some positive motive, and one that may or may not even be directed
> toward the truster. (2012a: 67)
>
>
>
Jones abandons her own will-based theory upon rejecting both a narrow
and a broad construal of goodwill. (The kind of theory she endorses
now is a trust responsive one; see below.) If her concerns about
defining goodwill are valid, then will-based theories are in serious
trouble.
To recapitulate about encapsulated-interest and will-based theories,
they say that a trustworthy person is motivated by self-interest or
goodwill, respectively. Encapsulated-interest theories struggle to
explain how trustworthiness differs from mere reliability, while
will-based theories are faced with the criticism that goodwill is
neither necessary nor sufficient for trustworthiness. Some
philosophers who say that goodwill is insufficient develop alternative
will-based theories. An example is Cogley's theory according to
which trust involves a normative expectation of goodwill (2012).
The field of motives-based theories is not exhausted by
encapsulated-interest and will-based theories, however. Other
motives-based theories include those that describe the motive of
trustworthy people in terms of a moral commitment, moral obligation,
or virtue. To expand, consider that one could make sense of the
trustworthiness of a stranger by presuming that the stranger is
motivated not by self-interest or goodwill, but by a commitment to
stand by their moral values. In that case, I could trust a stranger to
be decent by presuming just that she is committed to common decency.
Ultimately, what I am presuming about the stranger is moral integrity,
which some say is the relevant motive for trust relations (those that
are prototypical; see McLeod 2002). Others identify this motive
similarly as moral obligation, and say it is ascribed to the trustee
by the very act of trusting them (Nickel 2007; for a similar account,
see Cohen and Dienhart 2013). Although compelling in some respects,
the worry about these theories is that they moralize trust
inappropriately by demanding that the trustworthy person have a moral
motive (see below and also Mullin 2005; Jones 2017).
Yet one might insist that it is appropriate to moralize trust or at
least moralize trustworthiness, which we often think of as a virtuous
character trait. Nancy Nyquist Potter refers to the trait as
"full trustworthiness", and distinguishes it from
"specific trustworthiness", which is trustworthiness that
is specific to certain relationships (and equivalent to the thin sense
of trustworthiness I have used throughout; 2002: 25). To be fully
trustworthy, one must have a disposition to be trustworthy toward
everyone, according to Potter. Let us call this the
"virtue" account.
It may sound odd to insist that trustworthiness is a virtue or, in
other words, a moral disposition to be trustworthy (Potter 2002: 25;
Hardin 2002: 32). What disposition exactly is it meant to be? A
disposition normally to honor people's trust? That would be
strange, since trust can be unwanted if the trust is immoral (e.g.,
being trusted to hide a murder) or if it misinterprets the nature of
one's relationship with the trustee (e.g., being trusted to be
friends with a mere acquaintance). Perhaps trustworthiness is instead
a disposition to respond to trust in appropriate ways, given
"who one is in relation" to the trustor and given other
virtues that one possesses or ought to possess (e.g., justice,
compassion) (Potter 2002: 25). This is essentially Potter's
view. Modeling trustworthiness on an Aristotelian conception of
virtue, she defines a trustworthy person as "one who can be
counted on, as a matter of the sort of person he or she is, to take
care of those things that others entrust to one and (following the
Doctrine of the Mean) whose ways of caring are neither excessive nor
deficient" (her emphasis;
16).[6]
A similar account of trustworthiness as a virtue--an epistemic
one, specifically--can be found in the literature on testimony
(see Frost-Arnold 2014; Daukas 2006, 2011).
Criticism of the virtue account comes from Karen Jones (2012a). As she
explains, if being trustworthy were a virtue, then being untrustworthy
would be a vice, but that can't be right because we can never be
required to exhibit a vice, yet we can be required to be untrustworthy
(84). An example occurs when we are counted on by two different people
to do two incompatible things and being trustworthy to the one demands
that we be untrustworthy to the other (83). To defend her virtue
theory, Potter would have to insist that in such situations, we are
required either to disappoint someone's trust rather than be
untrustworthy, or to be untrustworthy in a specific not a full
sense.[7]
Rather than cling to a virtue theory, however, why not just accept the
thin conception of trustworthiness (i.e., "specific
trustworthiness"), according to which *X* is trustworthy
for me just in case I can trust *X*? Two things can be said.
First, the thick conception--of trustworthiness as a
virtue--is not meant to displace the thin one. We can and do
refer to some people as being trustworthy in the specific or thin
sense and to others as being trustworthy in the full or thick sense.
Second, one could argue that the thick conception explains better than
the thin one why fully trustworthy people are as dependable as they
are. It is ingrained in their character. They therefore must have an
ongoing commitment to being accountable to others, and better still, a
commitment that comes from a source that is compatible with
trustworthiness (i.e., virtue as opposed to mere self-interest).
An account of trustworthiness that includes the idea that
trustworthiness is a virtue will seem ideal only if we think that the
genesis of the trustworthy person's commitment matters. If we
believe, like risk-assessment theorists, that it matters only whether,
not how, the trustor will be motivated to act, then we could assume
that ill will can do the job as well as a moral disposition. Such
controversy explains how and why motives-based and risk-assessment
theories diverge from one another.
### 1.2 Non-motives-based theories
A final category consists of theories that base trustworthiness neither on the
kind of motivation a trustworthy person has nor on the mere
willingness of this person to do what they are relied on to do. These
are non-motives-based and also non-risk-assessment theories. The
conditions that give rise to trustworthiness according to them reside
ultimately in the stance the trustor takes toward the trustee or in
what the trustor believes they ought to be able to expect from this
person (i.e., in normative expectations of them). These theories share
with motives-based theories the goal of describing how trust differs
from mere reliance.
An example is Richard Holton's theory of trust (1994). Holton
argues that trust is unique because of the stance the trustor takes
toward the trustee: the "participant stance", which
involves treating the trustee as a person--someone who is
responsible for their actions--rather than simply as an object
(see also Strawson 1962 [1974]). In the case of trust specifically,
the stance entails a readiness to feel betrayal (Holton 1994: 4).
Holton's claim is that this stance and this readiness are absent
when we merely rely on someone or something.
Although Holton's theory has garnered positive attention (e.g.,
by Hieronymi 2008; McGeer 2008), some do find it dissatisfying. For
example, some argue that it does not obviously explain what would
justify a reaction of betrayal, rather than mere disappointment, when
someone fails to do what they are trusted to do (Jones 2004; Nickel
2007). They could fail to do it just by accident, in which case
feelings of betrayal would be inappropriate (Jones 2004). Others
assert, by contrast, that taking the participant stance toward
someone
> does not always mean trusting that person: some interactions [of this
> sort] lie outside the realm of trust and distrust. (Hawley 2014: 7)
To use an example from Hawley, my partner could come to rely on me to
make him dinner every night in a way that involves him taking the
participant stance toward me. But he needn't trust me to make
him dinner and so needn't feel betrayed if I do not. He might
know that I am loath for him to trust me in this regard: "to
make this [matter of making dinner] a matter of trust" between
us (Hawley 2014: 7).
Some philosophers have expanded on Holton's theory in a way that
might deflect some criticism of it. Margaret Urban Walker emphasizes
that in taking a participant stance, we hold people responsible (2006:
79). We expect them to act not simply as we assume they *will*,
but as they *should*. We have, in other words, normative rather
than merely predictive expectations of them. Call this a
"normative-expectation" theory, which again is an
elaboration on the participant-stance theory. Endorsed by Walker and
others (e.g., Jones 2004 and 2012a; Frost-Arnold 2014), this view
explains the trust-reliance distinction in terms of the distinction
between normative and predictive expectations. It also describes the
potential for betrayal in terms of the failure to live up to a normative
expectation.
Walker's theory is non-motives-based because it doesn't
specify that trustworthy people must have a certain kind of motive for
acting. She says that trustworthiness is compatible with having many
different kinds of motives, including, among others, goodwill,
"pride in one's role", "fear of penalties for
poor performance", and "an impersonal sense of
obligation" (2006: 77). What accounts for whether someone is
trustworthy in her view is whether they act as they should, not
whether they are motivated in a certain way. (By contrast,
Cogley's normative-expectation theory says that the trustworthy
person both will and ought to act with goodwill. His theory is
motives-based.)
Prominent in the literature is a kind of normative-expectation theory
called a "trust- (or dependence-) responsive" theory (see,
e.g., Faulkner and Simpson 2017: 8; Faulkner 2011, 2017; Jones 2012a,
2017, 2019; McGeer and Pettit 2017). According to this view, being
trustworthy involves being appropriately responsive to the reason you
have to do *X*--what you are being relied on (or
"counted on"; Jones 2012a) to do--when it's
clear that someone is in fact relying on you. The reason you have to
do *X* exists simply because someone is counting on you; other
things being equal, you should do it for this reason. Being
appropriately responsive to it, moreover, just means that you find it
compelling (Jones 2012a: 70-71). The person trusting you expects
you to have this reaction; in other words, they have a normative
expectation that the "manifest fact of [their] reliance will
weigh on you as a reason for choosing voluntarily to *X*"
(McGeer and Pettit 2017: 16). This expectation is missing in cases of
mere reliance. When I merely rely on you, I do not expect my reliance
to weigh on you as I do when I trust you.
Although trust-responsive theories might seem motives-based, they are
not. One might think they require that, to be trustworthy, you be
motivated by the fact that you are being counted on. Instead, they
demand only that you be appropriately responsive to the reason you
have to do what you are being depended on to do. As Jones explains,
you could be responsive in this way and act ultimately out of
goodwill, conscientiousness, love, duty, or the like (2012a: 66). The
reaction I expect of you, as the trustor, is compatible with you
acting on different kinds of motives, although to be clear, not just
any motive will do (unlike in Walker's theory); some motives are
ruled out, including indifference and ill will (Jones 2012a: 68).
Being indifferent or hateful towards me means that you are unlikely to
view me counting on you as a reason to act. Hence, if I knew you were
indifferent or hateful, I would not expect you to be trust
responsive.
Trust-responsive theories are less restrictive than motives-based
theories when it comes to defining what motives people need to be
trustworthy. At the same time, they are more restrictive in requiring
that, in order to be trustworthy or trusted, one be aware that one is
being counted on. One couldn't be trust
responsive otherwise. In trusting you, I therefore must "make
clear to you my assumption that you will prove reliable in doing
*X*" (McGeer and Pettit 2017: 16). I do not have to do
that by contrast if, in trusting you, I am relying on you instead to
act with a motive like goodwill. Baier herself allows that trust can
exist where the trustee is unaware of it (1986: 235; see also Hawley
2014; Lahno 2020). For her, trust is ubiquitous (Jones 2017: 102) in
part for this reason; we trust people in a myriad of different ways
every single day, often without them knowing it. If she's right
about this fact, then trust-responsive theories are incomplete.
These theories are also vulnerable to objections raised against
normative-expectation theories, because they are again a type of
normative-expectation theory. One such concern comes from Hawley. In
writing about both trust and distrust, she states that
> we need a story about when trust, distrust or neither is objectively
> appropriate--what is the worldly situation to which (dis)trust
> is an appropriate response? When is it appropriate to have
> (dis)trust-related normative expectations of someone? (2014: 11)
Normative-expectation theories tend not to provide an answer. And
trust-responsive theories suggest only that trust-related normative
expectations are appropriate when certain motives are absent (e.g.,
ill will), which may not be enough.
Hawley responds to the above concern within her "commitment
account" of trust (2014, 2019). This theory states that in
trusting others, we believe that they have a commitment to doing what
we are trusting them to do (2014: 10), a fact which explains why we
expect them to act this way, and also why we fail to do so in cases
like that of my partner relying on me to make dinner; he knows I have
no commitment to making his dinner (or anyone else's)
repeatedly. For Hawley, the relevant commitments
> can be implicit or explicit, weighty or trivial, conferred by roles
> and external circumstances, default or acquired, welcome or unwelcome.
> (2014: 11)
They also needn't actually motivate the trustworthy person. Her
theory is non-motives-based because it states that to
> be trustworthy, in some specific respect, it is enough to behave in
> accordance with one's commitment, regardless of motive. (2014: 16)
Similarly, to trust me to do something, it is enough to believe that
I
> have a commitment to do it, and that I will do it, without believing
> that I will do it *because* of my commitment. (2014: 16; her
> emphasis)
Notice that unlike trust-responsive theories, the commitment account
does not require that the trustee be aware of the trust in order to be
trustworthy. This person simply needs to have a commitment and to act
accordingly. They don't even need to be committed to the
trustor, but rather could be committed to anyone and one could trust
them to follow through on that commitment (Hawley 2014: 11). So,
relying on a promise your daughter's friend makes *to
her* to take her home from the party would count as an instance of
trust (Hawley 2014: 11). In this way, the commitment account is less
restrictive than trust-responsive theories are. In being
non-motives-based, Hawley's theory is also less restrictive than
any motives-based theory. Trust could truly be ubiquitous if
she's correct about the nature of it.
Like the other theories considered here, however, the commitment
account is open to criticisms. One might ask whether Hawley gives a
satisfactory answer to the question that motivates her theory: when
can we reasonably have the normative expectations of someone that go
along with trusting them? Hawley's answer is, when this person
has the appropriate commitment, where "commitment" is
understood very broadly. Yet where the relevant commitment is implicit
or unwelcome, it's unclear that we can predict much about the
trustee's behavior. In cases like these, the commitment theory
may have little to say about whether it is reasonable to trust.
A further criticism comes from Andrew Kirton (2020) who claims that we
sometimes trust people to act contrary to what they are committed to
doing. His central example involves a navy veteran, an enlisted man,
whose ship sank at sea and who trusted those who rescued them (navy
men) to ignore a commitment they had to save the officers first,
because the officers were relatively safe on lifeboats compared to the
enlisted men who were struggling in the water. Instead the rescuers
adhered to their military duty, and the enlisted man felt betrayed by
them for nearly letting him drown. Assuming it is compelling, this
example shows that trust and commitment can come apart and that
Hawley's theory is
incomplete.[8]
The struggle to find a complete theory of trust has led some
philosophers to be pluralists about trust--that is, to say,
"we must recognise plural forms of trust" (Simpson 2012:
551) or accept that trust is not just one form of reliance, but many
forms of it (see also Jacoby 2011; Scheman 2020; McLeod 2020). Readers
may be led to this conclusion from the rundown I've given of the
many different theories of trust in philosophy and the objections that
have been raised to them. Rather than go in the direction of
pluralism, however, most philosophers continue to debate what unifies
all trust such that it is different from mere reliance. They tend to
believe that a unified and suitably developed motives-based theory or
non-motives-based theory can explain this difference, although there
is little consensus about what this theory should be like.
In spite of there being little settled agreement in philosophy about
trust, there are thankfully things we can say for certain about it
that are relevant to deciding when it is warranted. The trustor must
be able to accept that by trusting, they are usually vulnerable to
betrayal. Also, the trustee must be competent and willing to do what
the trustor expects of them and may have to be willing because of
certain attitudes they have. Last, in paradigmatic cases of trust, the
trustor must be able to rely on the trustee to exhibit this competence
and willingness.
### 1.3 Distrust
As suggested above, distrust has been somewhat of an afterthought for
philosophers (Hawley
2014),[9]
although their attention to it has grown recently. As with trust and
trustworthiness, philosophers would agree that distrust has certain
features, although the few who have developed theories of distrust
disagree ultimately about the nature of it.
The following are features of distrust that are relatively
uncontroversial (see D'Cruz 2020):
1. Distrust is not just the absence of trust since it is possible to
neither distrust nor trust someone (Hawley 2014: 3; Jones 1996: 16;
Krishnamurthy 2015). There is a gap between the two--"the
possibility of being suspended between" them (Ullmann-Margalit
2004 [2017: 184]). (For disagreement, see Faulkner 2017.)
2. Although trust and distrust are not exhaustive, they are
exclusive; one cannot at the same time trust and distrust someone
about the same matter (Ullmann-Margalit 2004 [2017: 201]).
3. Distrust is "not mere nonreliance" (Hawley 2014: 3). I
could choose not to rely on a colleague's assistance because I
know she is terribly busy, not because I distrust her.
4. Relatedly, distrust has a normative dimension. If I distrusted a
colleague for no good reason and they found out about it, then they
would probably be hurt or angry. But the same reaction would not
accompany them knowing that I decided not to rely on them (Hawley
2014). Being distrusted is a bad thing (Domenicucci and Holton 2017:
150; D'Cruz 2019: 935), while not being relied on needn't
be bad at all.
5. Distrust is normally a kind of nonreliance, just as trust is a
kind (or many kinds) of reliance. Distrust involves
"action-tendencies" of avoidance or withdrawal
(D'Cruz 2019: 935-937), which make it incompatible with
reliance--or at least complete reliance. We can be forced to rely
on people we distrust, yet even then, we try to keep them at as safe a
distance as possible.
Given the relationship between trust and distrust and the similarities
between them (e.g., one is "richer than [mere] reliance"
and the other is "richer than mere nonreliance"; Hawley
2014: 3), one would think that any theory of trust should be able to
explain distrust and vice versa. Hawley makes this point and
criticizes theories of trust for not being able to make sense of
distrust (2014: 6-9). For example, will-based accounts imply
that distrust must be nonreliance plus an expectation of ill will, yet
the latter is not required for distrust. I could distrust someone
because he is careless, not because he harbors ill will toward me
(Hawley 2014: 6).
Hawley defends her commitment account of trust, in part, because she
believes it is immune to the above criticism. It says that distrust is
nonreliance plus the belief that the person distrusted is committed to
doing what we will not rely on them to do. In spite of them being
committed in this way (or so we believe), we do not rely on them
(2014: 10). This account does not require that we impute any
particular motive or feeling to the one distrusted, like ill will. At
the same time, it tells us why distrust is not mere nonreliance and
also why it is normative; the suspicion of the one distrusted is that
they will fail to meet a commitment they have, which is bad.
Some have argued that Hawley's theory of distrust is subject to
counterexamples, however (D'Cruz 2020; Tallant 2017). For
example, Jason D'Cruz describes a financier who "buys
insurance on credit defaults, positioning himself to profit when
borrowers default" (2020: 45). The financier believes that the
borrowers have a commitment not to default, and he does not rely on
them to meet this commitment. The conclusion that Hawley's
theory would have us reach is that he distrusts the borrowers, which
doesn't seem right.
A different kind of theory of distrust can be found in the work of
Meena Krishnamurthy (2015), who is interested specifically in the
value that distrust has for political democracies, and for political
minorities in particular (2015). She offers what she calls a
"narrow normative" account of distrust that she derives
from the political writings of Martin Luther King Jr. The account is
narrow because it serves a specific purpose: explaining how
distrust can motivate people to resist tyranny. It is normative
because it concerns what they ought to do (again, resist; 392). The
theory states that distrust is the confident belief that others will
not act justly. It needn't involve an expectation of ill will;
King's own distrust of white moderates was not grounded in such
an expectation (Krishnamurthy 2015: 394). To be distrusting, one
simply has to believe that others will not act justly, whether out of
fear, ignorance, or what have you.
D'Cruz complains that Krishnamurthy's theory is too narrow
because it requires a belief that the one distrusted will fail to
*do* something (i.e., act justly) (2020); but one can be
distrustful of someone--say a salesperson who comes to your door
(Jones 1996)--without predicting that they will do anything wrong
or threatening. D'Cruz does not explain, however, why
Krishnamurthy needs to account for cases like these in her theory,
which again is meant to serve a specific purpose. Is it important that
distrust can take a form other than "*X* distrusts
*Y* to [do] φ" for it to motivate political
resistance (D'Cruz 2020: 45)? D'Cruz's objection is
sound only if the answer is "yes".
Nevertheless, D'Cruz's work is helpful in showing what a
descriptive account of distrust should look like--that is, an
account that unlike Krishnamurthy's, tracks how we use the
concept in many different circumstances. He himself endorses a
normative-expectation theory, according to which distrust involves
> a tendency to withdraw from reliance or vulnerability in contexts of
> normative expectation, based on a construal of a person or persons as
> malevolent, incompetent, or lacking integrity. (2019: 936)
D'Cruz has yet to develop this theory fully, but once he does
so, it will almost certainly be a welcome addition to the scant
literature in philosophy on distrust.
In summary, among the relatively few philosophers who have written on
distrust, there is settled agreement about some of its features but
not about the nature of distrust in general. The agreed-upon features
tell us something about when distrust is warranted (i.e., plausible).
For distrust in someone to be plausible, one cannot also trust that
person, and normally one will not be reliant on them either. Something
else must be true as well, however. For example, one must believe that
this person is committed to acting in a certain way but will not
follow through on this commitment. The "something else" is
crucial because distrust is not the negation of trust and neither is
it mere nonreliance.
Philosophers have said comparatively little about what distrust is,
but a lot about how distrust tends to be influenced by negative social
stereotypes that portray whole groups of people as untrustworthy
(e.g., Potter 2020; Scheman 2020; D'Cruz 2019; M. Fricker 2007).
Trusting attitudes are similar--whom we trust can depend
significantly on social stereotypes, in this case positive ones--yet there is
less discussion about this fact in the literature on trust. This issue
concerns the rationality (more precisely, the *ir*rationality)
of trust and distrust, which makes it relevant to the next section,
which is on the epistemology of trust.
## 2. The Epistemology of Trust
Writings on this topic obviously bear on the issue of when trust is
warranted (i.e., justified). The central epistemological question
about trust is, "Ought I to trust or not?" That is, given
the way things seem to me, is it reasonable for me to trust? People
tend to ask this sort of question only in situations where they
can't take trustworthiness for granted--that is, where they
are conscious of the fact that trusting could get them into trouble.
Examples are situations similar to those in which they have been
betrayed in the past or unlike any they have ever been in before. The
question, "Ought I to trust?" is therefore particularly
pertinent to a somewhat odd mix of people that includes victims of
abuse or the like, as well as immigrants and travelers.
The question "Ought I to distrust?" has received
comparatively little attention in philosophy despite it arguably being
as important as the question of when to trust. People can get into
serious trouble by distrusting when they ought not to, rather than
just by trusting when they ought not to. The harms of misplaced
distrust are both moral and epistemic and include dishonoring people,
being out of harmony with them, and being deprived of knowledge via
testimony (D'Cruz 2019; M. Fricker 2007). Presumably because
they believe that the harms of misplaced trust are greater
(D'Cruz 2019), philosophers--and consequently I, in this
entry--focus more on the rationality of trusting, as opposed to
distrusting.
Philosophical work that is relevant to the issue of how to trust well
appears either under the general heading of the epistemology or
rationality of trust (e.g., Baker 1987; Webb 1992; Wanderer and
Townsend 2013) or under the specific heading of testimony--that
is, of putting one's trust in the testimony of others. This
section focuses on the epistemology of trust generally rather than on
trust in testimony specifically. There is a large literature on
testimony (see the entry in this encyclopedia) and on the related
topic of epistemic injustice, both of which I discuss only insofar as
they overlap with the epistemology of trust.
Philosophers sometimes ask whether it could ever be rational to trust
other people. This question arises for two reasons. First, it appears
that trust and rational reflection (e.g., on whether one should be
trusting) are in tension with one another. Since trust inherently
involves risk, any attempt to eliminate that risk through rational
reflection could eliminate one's trust by turning one's
stance into mere reliance. Second, trust tends to give us blinkered
vision: it makes us resistant to evidence that may contradict our
optimism about the trustee (Baker 1987; Jones 1996 and 2019). For
example, if I trust my brother not to harm anyone, I will resist the
truth of any evidence to the contrary. Here, trust and rationality
seem to come apart.
Even if some of our trust could be rational, one might insist that not
all of it could be rational for various reasons. First, if Baier is
right that trust is ubiquitous (1986: 234), then we could not possibly
subject all of it to rational reflection. We certainly could not
reflect on every bit of knowledge we've acquired through the
testimony of others, such as that the earth is round or Antarctica
exists (Webb 1993; E. Fricker 1995; Coady 1992). Second, bioethicists
point out that some trust is unavoidable and occurs in the absence of
rational reflection (e.g., trust in emergency room nurses and
physicians; see Zaner 1991). Lastly, some trust--namely the
therapeutic variety--purposefully leaps beyond any evidence of
trustworthiness in an effort to engender trustworthiness in the
trustee. Is this sort of trust rational? Perhaps not, given that there
isn't sufficient evidence for it.
Many philosophers respond to the skepticism about the rationality of
trust by saying that rationality, when applied to trust, needs to be
understood differently than it is in each of the skeptical points
above. There, "rationality" means something like this: it
is rational to believe in something only if one has verified that it
will happen or done as much as possible to verify it. For example, it
is rational for me to believe that my brother has not harmed anyone
only if the evidence points in that direction and I have discovered
that to be the case. As we've seen, problems exist with applying
this view of rationality to trust, yet it is not the only option; this
view is both "truth-directed" and
"internalist", while the rationality of trust could
instead be "end-directed" or "externalist". Or
it could be internalist without requiring that we have done the
evidence gathering just discussed. Let me expand on these
possibilities, starting with those that concern truth- or end-directed
rationality.
### 2.1 Truth- vs. end-directed rationality
In discussing the rationality of trust, some authors distinguish
between these two types of rationality (also referred to as epistemic
vs. strategic rationality; see, e.g., Baker 1987). One could say that
we are rational in trusting emergency room physicians, for example,
not necessarily because we have good reason to believe that they are
trustworthy (our rationality is not truth-directed), but because by
trusting them, we can remain calm in a situation over which we have
little control (our rationality is therefore end-directed). Similarly,
it may be rational for me to trust my brother not because I have good
evidence of his trustworthiness but rather because trusting him is
essential to our having a loving
relationship.[10]
Trust can be rational, then, depending on whether one conceives of
rationality as truth-directed or end-directed. Notice that it matters
also how one conceives of trust, and more specifically, whether one
conceives of it as a belief in someone's trustworthiness (see
section 4).
If trust is a belief, then whether the rationality of trust can be
end-directed will depend on whether the rationality of a belief can be
end-directed. To put the point more generally, how trust is rationally
justified will depend on how beliefs are rationally justified (Jones
1996).
Some of the literature on trust and rationality concerns whether the
rationality of trust can indeed be end-directed and also what could
make therapeutic trust and the like rational. Pamela Hieronymi argues
that the ends for which we trust cannot provide reasons for us to
trust in the first place (2008). Considerations about how useful or
valuable trust is do not bear on the truth of a trusting belief (i.e.,
a belief in someone's trustworthiness). But Hieronymi claims
that trust, in a pure sense at least, always involves a trusting
belief. How then does she account for trust that is motivated by how
therapeutic (i.e., useful) the trust will be? She believes that trust
of this sort is not pure or full-fledged trust. As she explains,
people can legitimately complain about not being trusted fully when
they are trusted in this way, which occurs when other people lack
confidence in them but trust them nonetheless (2008: 230; see also
Lahno 2001: 184-185).
By contrast, Victoria McGeer believes that trust is more substantial
or pure when the available evidence does not support it (2008). She
describes how trust of this sort--what she calls
"substantial trust"--could be rational and does so
without appealing to how important it might be or to the ends it might
serve, but instead to whether the trustee will be
trustworthy.[11]
According to McGeer, what makes "substantial trust"
rational is that it involves hope that the trustees will do what they
are trusted to do, which "can have a galvanizing effect on how
[they] see themselves, as trustors avowedly do, in the fullness of
their potential" (2008: 252; see also McGeer and Pettit 2017).
Rather than complain (as Hieronymi would assume that trustees might)
about trustors being merely hopeful about their trustworthiness, they
could respond well to the trustors' attitude toward them.
Moreover, if it is likely that they will respond well--in other
words, that they will be trust-responsive--then the trust in them
must be epistemically rational. That is particularly true if being
trustworthy involves being trust-responsive, as it does for McGeer
(McGeer and Pettit 2017).
McGeer's work suggests that all trust--even therapeutic
trust--can be rational in a truth-directed way. As we've
seen, there is some dispute about whether trust can be rational in
just an end-directed way. What matters here is whether trust is the
sort of attitude whose rationality could be end-directed.
### 2.2 Internalism vs. externalism
Philosophers who agree that trust can be rational (in a truth- or
end-directed way or both) tend to disagree about the extent to which
reasons that make it rational must be accessible to the trustor. Some
say that these reasons must be available to this person in order for
their trust to be rational; in that case, the person is or could be
internally justified in trusting as they do. Others say that the
reasons need not be internal but can instead be external to the
trustor and lie in what caused the trust, or, more specifically, in
the epistemic reliability of what caused it. The trustor also
needn't have access to or be aware of the reliability of these
reasons. The latter's epistemology of trust is externalist,
while the former's is internalist.
Some epistemologists write as though trust is only rational if the
trustor themselves has rationally estimated the likelihood that the
trustee is trustworthy. For example, Russell Hardin implies that if my
trust in you is rational, then
> I make a rough estimate of the truth of [the] claim ... that you
> will be trustworthy under certain conditions ... and then I
> correct my estimate, or "update," as I obtain new evidence
> on you. (2002: 112)
On this view, I must have reasons for my estimate or for my updates
(Hardin 2002: 130), which could come from inductive generalizations I
make about my past experience, from my knowledge that social
constraints exist that will encourage your trustworthiness or what
have you. Such an internalist epistemology of trust is valuable
because it coheres with the commonsense idea that one ought to have
good reasons for trusting people (i.e., reasons grounded in evidence
that they will be trustworthy) particularly when something important
is at stake (E. Fricker 1995). One ought, in other words, to be
epistemically responsible in one's trusting (see Frost-Arnold
2020).
Such an epistemology is also open to criticisms, however. For example,
it suggests that rational trust will always be partial rather than
complete, given that the rational trustor is open to evidence that
contradicts their trust on this theory, while someone who trusts
completely in someone else lacks such openness. The theory also
implies that the reasons for trusting well (i.e., in a justified way)
are accessible to the trustor, at some point or another, which may
simply be false. Some reasons for trust may be too
"cunning" for this to be the case. Relevant here is the
reason for trusting discussed by Philip Pettit (1995): that trust
signals to people that they are being held in esteem, which is
something they will want to maintain; they will honor the trust
because they are naturally "esteem-seeking". (Note that
consciously having this as a reason for trusting--of using
people's need for esteem to get what you want from them--is
incompatible with actually trusting (Wanderer and Townsend 2013: 9),
if trust is motives-based and the required motive is something other
than self-interest.)
Others say that reasons for trust are usually too numerous and varied
to be open to the conscious consideration of the trustor (e.g., Baier
1986). There can be very subtle reasons to trust or distrust
someone--for example, reasons that have to do with body language,
with systematic yet veiled forms of oppression, or with a complicated
history of trusting others about which one can't easily
generalize. Factors like these can influence trustors without them
knowing it, sometimes making their trust irrational (e.g., because it
is informed by oppressive biases), and other times making it
rational.
The concern about there being complex reasons for trusting explains
why some philosophers defend externalist epistemologies of trust. Some do
so explicitly (e.g., McLeod 2002). They argue for reliabilist theories
that make trust rationally justified if and only if it is formed and
sustained by reliable processes (i.e., "processes that tend to
produce accurate representations of the world", such as drawing
on expertise one has rather than simply guessing; Goldman 1992: 113;
Goldman and Beddor 2015 [2016]). Others gesture towards externalism
(Webb 1993; Baier 1986), as Baier does with what she calls "a
moral test for trust". The test is that
>
>
> knowledge of what the other party is relying on for the continuance of
> the trust relationship would ... itself destabilize the relation.
> (1986: 255)
>
>
>
The other party might be relying on a threat advantage or the
concealment of their untrustworthiness, in which case the trust would
probably fail the test. Because Baier's test focuses on the
causal basis for trust, or for what maintains the trust relation, it
is externalist. Also, because the trustor often cannot gather the
information needed for the test without ceasing to trust the other
person (Baier 1986: 260), the test cannot be internalist.
Although an externalist theory of trust deals well with some of the
worries one might have with an internalist theory, it has problems of
its own. One of the most serious issues is the absence of any
requirement that trustors themselves have good (motivating) reasons
for trusting, especially when their trust makes them seriously
vulnerable. Again, it appears that common sense dictates the opposite:
that sometimes as trustors, we ought to be able to back up our
decisions about when to trust. The same is true about our distrust
presumably: that sometimes we ought to be able to defend it. Assuming
externalists mean for their epistemology to apply to distrust and not
just to trust, their theory violates this bit of common sense as well.
Externalism about distrust also seems incompatible with a strategy
that some philosophers recommend for dealing with biased distrust. The
strategy is to develop what they call "corrective trust"
(e.g., Scheman 2020) or "humble trust" (D'Cruz
2019), which demands a humble skepticism toward distrust that aligns
with oppressive stereotypes, as well as efforts at correcting the influence of
these stereotypes (see also M. Fricker 2007). The concern about an
externalist epistemology is that it does not encourage this sort of
mental work, since it does not require that we reflect on our reasons
for distrusting or trusting.
There are alternatives to the kinds of internalist and externalist
theories just discussed, especially within the literature on
testimony.[12]
For example, Paul Faulkner develops an "assurance theory"
of testimony that interprets speaker trustworthiness in terms of
trust-responsiveness. Recall that on a trust-responsiveness theory of
trust, being trusted gives people the reason to be trustworthy that
someone is counting on them. They are trustworthy if they are
appropriately responsive to this reason, which, in the case of
offering testimony, involves giving one's assurance that one is
telling the truth (Adler 2006 [2017]). Faulkner uses the
trust-responsiveness account of trust, along with a view of trust as
an affective attitude (see
section 4),
to show "how trust can ground reasonable testimonial
uptake" (Faulkner and Simpson 2017: 6; Faulkner 2011 and
2020).
>
>
> He proposes that *A* affectively trust *S* if and only
> if *A* depends on *S* Φ-ing, and expects his
> dependence on *S* to motivate *S* to Φ--for
> *A*'s dependence on *S* to be the reason for which
> *S* Φs .... As a result, affective trust is a
> bootstrapping attitude: I can choose to trust someone affectively and
> my doing so creates the reasons which justify the attitude. (Faulkner
> and Simpson 2017: 6)
>
>
>
Most likely, *A* (the trustor) is aware of the reasons that
justify his trust or could be aware of them, making this theory an
internalist one. The reasons are also normative and non-evidentiary
(Faulkner 2020); they concern what *S* ought to do because of
*A*'s dependence, not what *S* will do based on
evidence that *A* might gather about *S*. This view
doesn't require that *A* have evidentiary reasons, and so
it is importantly different than the internalist epistemology
discussed above. But it is then also subject to the criticisms made of
externalist theories that they don't require the kind of
scrutiny of our trusting attitudes that we tend to expect and probably
ought to expect in societies where some people are stereotyped as more
trusting than others.
Presumably to avoid having to defend any particular epistemology of
trust, some philosophers provide just a list of common justifiers for
it (i.e., "facts or states of affairs that determine the
justification status of [trust]"; Goldman 1999: 274), which
someone could take into account in deciding when to trust (Govier
1998; Jones 1996). Included on these lists are such factors as the
social role of the trustee, the domain in which the trust occurs, an
"agent-specific" factor that concerns how good a trustor
the agent tends to be (Jones 1996: 21), and the social or political
climate in which the trust occurs. Philosophers have tended to
emphasize this last factor as a justification condition for trust, and
so let me elaborate on it briefly.
### 2.3 Social and political climate
Although trust is paradigmatically a relation that holds between two
individuals, forces larger than those individuals inevitably shape
their trust and distrust in one another. Social or political climate
contributes to how (un)trustworthy people tend to be and therefore to
whether trust and distrust are justified. For example, a climate of
virtue is one in which trustworthiness tends to be pervasive, assuming
that virtues other than trustworthiness tend to enhance it (Baier
2004).[13]
A climate of oppression is one in which untrustworthiness is
prevalent, especially between people who are privileged and those who
are less privileged (Baier 1986: 259; Potter 2002: 24; D'Cruz
2019). "Social trust", as some call it, is low in these
circumstances (Govier 1997; Welch 2013).
Social or political climate has a significant influence on the default
stance that we ought to take toward people's trustworthiness
(see, e.g., Walker 2006). We need such a stance because we can't
always stop to reflect carefully on when to trust (i.e., assuming that
some rational reflection is required for trusting well). Some
philosophers say that the correct stance is trust and do so without
referring to the social or political climate; Tony Coady takes this
sort of position, for example, on our stance toward others'
testimony (Coady 1992). Others disagree that the correct stance could
be so universal and claim instead that it is relative to climate, as
well as to other factors such as domain (Jones 1999).
Our trust or distrust may be prima facie justified if we have the
correct default stance, although most philosophers assume that it
could only be fully justified (in a truth- or end-directed way) by
reasons that are internal to us (evidentiary or non-evidentiary
reasons) or by the causal processes that created the attitude in the
first place. Whichever epistemology of trust we choose, it ought to be
sensitive to the tension that exists between trusting somebody and
rationally reflecting on the grounds for that trust. It would be odd,
to say the least, if what made an attitude justified destroyed that
very attitude. At the same time, our epistemology of trust ought to
cohere as much as possible with common sense, which dictates that we
should inspect rather than have pure faith in whatever makes us
seriously vulnerable to other people, which trust can most definitely
do.
## 3. The Value of Trust
Someone who asks, "When is trust warranted?" might be
interested in knowing what the point of trust is. In other words, what
value does it have? Although the value it has for particular people
will depend on their circumstances, the value it could have for anyone
will depend on why trust is valuable, generally speaking. Trust can
have enormous instrumental value and may also have some intrinsic
value. In discussing its instrumental value, this section refers to
the "goods of trust", which can benefit the trustor, the
trustee, or society in general. They are therefore social and/or
individual goods. What is more, and as emphasized throughout, these
goods tend to accompany justified trust, rather than any old
trust.[14]
Like the other sections of this entry, this one focuses predominantly
though not exclusively on trust; it also mentions recent work on the
value of distrust.
Consider first the possibility that trust has intrinsic value. If
trust produced no goods independent of it, would there be any point in
trusting? One might say "yes", on the grounds that trust
is (or can be; O'Neil 2012: 311) a sign of respect for others.
(Similarly, distrust is a sign of disrespect; D'Cruz 2019.) If
true, this fact about trust would make it intrinsically worthwhile, at
least so long as the trust is justified. Presumably, if it were
unjustified, then the respect would be misplaced and the intrinsic
value would be lost. But these points are speculative, since
philosophers have said comparatively little about trust being
worthwhile in itself as opposed to worthwhile because of what it
produces, or because of what accompanies it. The discussion going
forward centers on the latter, more specifically on the goods of
trust.
Turning first to the instrumental value of trust to the
*trustor*, some argue that trusting vastly increases our
opportunities for cooperating with others and for benefiting from that
cooperation, although of course we would only benefit if people we
trusted cooperated as well (Gambetta 1988b; Hardin 2002; Dimock 2020).
Trust enhances cooperation, while perhaps not being necessary for it
(Cook et al. 2005; Skyrms 2008). Because trust removes the incentive
to check up on other people, it makes cooperation with trust less
complicated than cooperation without it (Luhmann 1973/1975
[1979]).
Trust can make cooperation possible, rather than simply easier, if
trust is essential to promising. Daniel Friedrich and Nicholas
Southwood defend what they call the "Trust View" of
promissory obligation (2011), according to which "making a
promise involves inviting another individual to trust one to do
something" (2011: 277). If this view is correct, then
cooperation through promising is impossible without trust. Cooperation
of this sort will also not be fruitful unless the trust is
justified.
Trusting provides us with goods beyond those that come with
cooperation, although again, for these goods to materialize, the trust
must be justified. Sometimes, trust involves little or no cooperation,
so that the trustor is completely dependent on the trustee while the
reverse is not true. Examples are the trust of young children in their
parents and the trust of severely ill or disabled people in their care
providers. Trust is particularly important for these people because
they tend to be powerless to exercise their rights or to enforce any
kind of contract. The trust they place in their care providers also
contributes to them being vulnerable, and so it is essential that they
can trust these people (i.e., that their trust is justified). The
goods at stake for them are all the goods involved in having a good or
decent life.
Among the specific goods that philosophers associate with trusting are
meaningful relationships or attachments (rather than simply
cooperative relationships that further individual self-interests;
Harding 2011, Kirton forthcoming) as well as knowledge and
autonomy.[15]
To expand, trust allows for the kinds of secure attachments that some
developmental psychologists ("attachment" theorists)
believe are crucial to our well-being and to our ability to be
trusting of others (Bowlby 1969-1980; Ainsworth 1969; see Kirton
2020; Wonderly 2016). Particularly important here are parent-child
relationships (McLeod et al. 2019).
Trust is also crucial for knowledge, given that scientific knowledge
(Hardwig 1991), moral knowledge (Jones 1999), and almost all knowledge
in fact (Webb 1993) depends for its acquisition on trust in the
testimony of others. The basic argument for the need to trust what
others say is that no one person has the time, intellect, and
experience necessary to independently learn facts about the world that
many of us do know. Examples include the scientific fact that the
earth is round, the moral fact that the oppression of people from
social groups different from our own can be severe (Jones 1999), and
the mundane fact that we were born on such-and-such a day (Webb 1993:
261). Of course, trusting the people who testify to these facts could
only generate knowledge if the trust was justified. If we were told
our date of birth by people who were determined oddly to deceive us
about when we were born, then we would not know when we were born.
Autonomy is another good that flows from trust insofar as people
acquire or exercise autonomy only in social environments where they
can trust people (or institutions, etc.) to support their autonomy.
Feminists in particular tend to conceive of autonomy this
way--that is, as a relational property (Mackenzie and Stoljar
2000). Many feminists emphasize that oppressive social environments
can inhibit autonomy, and some say explicitly that conditions
necessary for autonomy (e.g., adequate options, knowledge relevant to
one's decisions) exist only with the help of people or
institutions that are trustworthy (e.g., Oshana 2014; McLeod and Ryman
2020). Justified trust in others to ensure that these conditions exist
is essential for our autonomy, if autonomy is indeed
relational.[16]
Goods of trust that are instrumental to the well-being of the
*trustee* also do not materialize unless the trust is
justified. Trust can improve the self-respect and moral maturity of
this person. Particularly if it involves reliance on a person's
moral character, trust can engender self-respect in the trustee (i.e.,
through them internalizing the respect signaled by that trust). Being
trusted can allow us to be more respectful not only toward ourselves
but also toward others, thus enhancing our moral maturity. The
explicit goal of therapeutic trust is precisely to bring about this
end. The above
(section 2)
suggests that therapeutic trust can be justified in a truth-directed
way over time, provided that the trust has its intended effect of
making the trustee more trustworthy (McGeer 2008; Baker 1987: 12).
Clearly, for therapeutic trust to benefit the trustee, it would have
to be justified in this way, meaning that the therapy would normally
have to work.
Finally, there are social goods of trust that are linked with the
individual goods of cooperation and moral maturity. The former goods
include the practice of morality, the very existence of society
perhaps, as well as strong social networks. Morality itself is a
cooperative activity, which can only get off the ground if people can
trust one another to try, at least, to be moral. For this reason,
among others, Baier claims that trust is "the very basis of
morality" (2004: 180). It could also be the very basis of
society, insofar as trust in our fellow citizens to honor social
contracts makes those contracts possible.
A weaker claim is that trust makes society better or more livable.
Some argue that trust is a form of "social capital",
meaning roughly that it enables "people to work together for
common purposes in groups and organizations" (Fukuyama 1995: 10;
quoted in Hardin 2002: 83). As a result, "high-trust"
societies have stronger economies and stronger social networks in
general than "low-trust" societies (Fukuyama 1995;
Inglehart 1999). Of course, this fact about high-trust societies could
only be true if, on the whole, the trust within them was
justified--that is, if trustees tended not to
"defect" and destroy chances for cooperating in the
future.
The literature on distrust suggests that there are goods associated
with it too. For example, there is the social good discussed by
Krishnamurthy of "securing democracy by protecting political
minorities from tyranny" (2015: 392). Distrust as she
understands it (a confident belief that others will not act justly)
plays this positive role when it is justified, which is roughly when
the threat of tyranny or unjust action is real. Distrust in general is
valuable when it is justified--for the distrustors at least, who
protect themselves from harm. By contrast, the people distrusted tend
to experience negative effects on their reputation or self-respect
(D'Cruz 2019).
Both trust and distrust are therefore valuable particularly when they
are justified. The value of justified trust must be very high if
without it, we can't have morality or society and can't be
morally mature, autonomous, knowledgeable, or invested with
opportunities for collaborating with others. Justified distrust is
also essential, for members of minority groups especially. Conversely,
trust or distrust that is unjustified can be seriously problematic.
Unjustified trust, for example, can leave us open to abuse, terror,
and deception.
## 4. Trust and the Will
Trust may not be warranted (i.e., plausible) because the agent has
lost the ability to trust or simply cannot bring themselves to trust.
People can lose trust in almost everyone or everything as a result of
trauma (Herman 1991). The trauma of rape, for example, can profoundly
reduce one's sense that the world is a safe place with caring
people in it (Brison 2002). By contrast, people can lose trust just in
particular people or institutions. They can also have no experience
trusting in certain people or institutions, making them reluctant to
do so. They or others might want them to become more trusting. But the
question is, how can that happen? How can trust be restored or
generated?
The process of building trust is often slow and difficult (Uslaner
1999; Baier 1986; Lahno 2020), and that is true, in part, because of
the kind of mental attitude trust is. Many argue that it is not the
sort of attitude we can simply will ourselves to have. At the same time, it
is possible to cultivate
trust.[17]
This section focuses on these issues, including what kind of mental
attitude trust is (e.g., a belief or an emotion). Also discussed
briefly is what kind of mental attitude distrust is. Like trust,
distrust is an attitude that people may wish to cultivate,
particularly when they are too trusting.
Consider first why one would think that trust can't be willed.
Baier questions whether people are able "to trust simply because
of encouragement to trust" (1986: 244; my emphasis). She
writes,
>
>
> "Trust me!" is for most of us an invitation which we
> cannot accept at will--either we do already trust the one who
> says it, in which case it serves at best as reassurance, or it is
> properly responded to with, "Why should and how can I, until I
> have cause to?". (my emphasis; 1986: 244)
>
>
>
Baier is not a voluntarist about trust, just as most people are not
voluntarists about belief. In other words, she thinks that we
can't simply decide to trust for purely motivational rather than
epistemic reasons (i.e., merely because we want to, rather than
because we have reason to think that the other person is or could be
trustworthy; Mills 1998). That many people feel compelled to say,
"I wish I could trust you", suggests that Baier's
view is correct; wishing or wanting is not enough. But Holton
interprets Baier's view differently. According to him,
Baier's point is that we can never decide to trust, not that we
can never decide to trust for motivational purposes (1994). This
interpretation ignores, however, the attention that Baier gives to
situations in which all we have is encouragement (trusting
"simply because of encouragement"). The
"cause" she refers to ("Why should and how can I,
until I have cause to [trust]?"; 1986: 244) is an epistemic
cause. Once we have one of those, we can presumably decide whether to
trust on the basis of
it.[18]
But we cannot decide to trust simply because we want to, according to
Baier.
If trust resembles belief in being non-voluntary, then perhaps trust
itself is a belief. Is that right? Many philosophers claim that it is
(e.g., Hieronymi 2008; McMyler 2011; Keren 2014), while others
disagree (e.g., Jones 1996; Faulkner 2007; D'Cruz 2019). The
former contend that trust is a belief that the trustee is trustworthy,
at least in the thin sense that the trustee will do what he is trusted
to do (Keren 2020). Various reasons exist in favour of such doxastic
theories (see Keren 2020), including that they suggest it is
impossible to trust a person while holding the belief
that this person is not trustworthy, even in the thin sense. Most of
us accept this impossibility and would want any theory of trust to
explain it. A doxastic account does so by saying that we can't
believe a contradiction (not knowingly anyway; Keren 2020: 113).
Those who say that trust is not a belief claim that it is possible to
trust without believing the trustee is
trustworthy.[19]
Holton gives the nice example of trusting a friend to be sincere
without believing that the friend will be sincere (1994: 75).
Arguably, if one already believed that to be the case, then one would
have no need to trust the friend. It is also possible to believe that
someone is trustworthy without trusting that person, which suggests
that trust couldn't just be a belief in someone's
trustworthiness (McLeod 2002: 85). I might think that a particular
person is trustworthy without trusting them because I have no cause to
do so. I might even distrust them despite believing that they are
trustworthy (Jones 1996, 2013). As Jones explains, distrust can be
recalcitrant in parting "company with belief"
(D'Cruz 2019: 940; citing Jones 2013), a fact which makes
trouble for doxastic accounts not just of trust but of distrust too
(e.g., Krishnamurthy 2015). The latter must explain how distrust could
be a belief that someone is untrustworthy that could exist alongside
the belief that the person is trustworthy.
Among the alternatives to doxasticism are theories stating that trust
is an emotion, a kind of stance (i.e., the participant stance; Holton
1994), or a disposition (Kappel 2014; cited in Keren 2020). The most
commonly held alternative is the first: that trust is an emotion.
Reasons in favour of this view include the fact that trust resembles
an emotion in having characteristics that are unique to emotions, at
least according to an influential account of them (de Sousa 1987;
Calhoun 1984; Rorty 1980; Lahno 2001, 2020). For example, emotions
narrow our perception to "fields of evidence" that lend
support to the emotions themselves (Jones 1996: 11). When we are in
the grip of an emotion, we therefore tend to see facts that affirm its
existence and ignore those that negate it. To illustrate, if I am
really angry at my mother, then I tend to focus on things that justify
my anger while ignoring or refusing to see things that make it
unjustified. I can only see those other things once my anger subsides.
Similarly with trust: if I genuinely trust my mother, my attention
falls on those aspects of her that justify my trust and is averted
from evidence that suggests she is untrustworthy (Baker 1987). The
same sort of thing happens with distrust, according to Jones (Jones
2019). She refers to this phenomenon as "affective
looping", which, in her words, occurs when "a prior
emotional state provides grounds for its own continuance" (2019:
956). She also insists that only affective-attitude accounts of trust
and distrust can adequately explain it (2019).
There may be a kind of doxastic theory, however, that can account for
the affective looping of trust, if not of distrust. Arnon Keren, whose
work focuses specifically on trust, defends what he calls an
"impurely doxastic" theory. He describes trust as
believing in someone's trustworthiness *and* responding
to reasons ("preemptive" ones) against taking precautions
that this person will not be trustworthy (Keren 2020, 2014). Reasons
for trust are themselves reasons of this sort, according to Keren;
they oppose actions like those of carefully monitoring the behavior of
the trustee or weighing the available evidence that this person is
trustworthy. The trustor's response to these preemptive reasons
would explain why this person is resistant (or at least not attuned) to
counterevidence to their trust (Keren 2014, 2020).
Deciding in favour of an affective-attitude theory or a purely or
impurely doxastic one is important for understanding features of trust
like affective looping. Yet it may have little bearing on whether or
how trust can be cultivated. For, regardless of whether trust is a
belief or an emotion, presumably we can cultivate it by purposefully
placing ourselves in a position that allows us to focus on evidence of
people's trustworthiness. The goal here could be
self-improvement: that is, becoming more trusting, in a good way so
that we can reap the benefits of justified trust. Alternatively, we
might be striving for the improvement of others: making them more
trustworthy by trusting them therapeutically. Alternatively still, we
could be engaging in "corrective trust". (See the above
discussions of therapeutic and corrective trust.)
This section has centered on how to develop trust and how to account
for facts about it such as the blinkered vision of the trustor.
Similar facts about distrust were also mentioned: those that concern
what kind of mental attitude it is. Theorizing about whether trust and
distrust are beliefs, emotions or something else allows us to
appreciate why they have certain features and also how to build these
attitudes. The process for building them, which may be similar
regardless of whether they are beliefs or emotions, will be relevant
to people who don't trust enough or who trust too much.
## 5. Conclusion
This entry as a whole has examined an important practical question
about trust: "When is trust warranted?" Also woven into
the discussion has been some consideration of when distrust is
warranted. Center stage has been given to trust, however, because
philosophers have debated it much more than distrust.
Different answers to the question of when trust is warranted give rise
to different philosophical puzzles. For example, in response, one
could appeal to the nature of trust and trustworthiness and consider
whether the conditions are ripe for them (e.g., for the proposed
trustor to rely on the trustee's competence). But one would
first have to settle the difficult issue of what trust and
trustworthiness are, and more specifically, how they differ from mere
reliance and reliability, assuming there are these differences.
Alternatively, in deciding whether trust is warranted, one could
consider whether trust would be rationally justified or valuable. One
would consider these things simultaneously when rational justification
is understood in an end-directed way, making it dependent on
trust's instrumental value. With respect to rational
justification alone, puzzles arise when trying to sort out whether
reasons for trust must be internal to trustors or could be external to
them. In other words, is trust's epistemology internalist or
externalist? Because good arguments exist on both sides, it's
not clear how trust is rationally justified. Neither is it entirely
clear what sort of value trust can have, given the nature of it. For
example, trust may or may not have intrinsic moral value depending on
whether it signals respect for others.
Lastly, one might focus on the fact that trust cannot be warranted
when it is impossible, which is the case when the agent does not
already exhibit trust and cannot simply will themselves to have it.
While trust is arguably not the sort of attitude that one can just
will oneself to have, trust can be cultivated. The exact manner or
extent to which it can be cultivated, however, may depend again on
what sort of mental attitude it is.
Since one can respond to the question, "When is trust
warranted?" by referring to each of the above dimensions of
trust, a complete philosophical answer to this question is complex.
The same is true about the question of when to distrust, because the
same dimensions (the epistemology of distrust, its value, etc.) are
relevant to it. Complete answers to these broad questions about trust
and distrust would be philosophically exciting and also socially
important. They would be exciting both because of their complexity and
because they would draw on a number of different philosophical areas,
including epistemology, philosophy of mind, and value theory. The
answers would be important because trust and distrust that are
warranted contribute to the foundation of a good society, where people
thrive through healthy cooperation with others, become morally mature
human beings, and are not subject to social ills like tyranny or
oppression.
## 1. The neo-classical theories of truth
Much of the contemporary literature on truth takes as its starting
point some ideas which were prominent in the early part of the 20th
century. There were a number of views of truth under discussion at
that time, the most significant for the contemporary literature being
the correspondence, coherence, and pragmatist theories of truth.
These theories all attempt to directly answer the *nature
question*: what is the nature of truth? They take this question at
face value: there are truths, and the question to be answered concerns
their nature. In answering this question, each theory makes the notion
of truth part of a more thoroughgoing metaphysics or epistemology.
Explaining the nature of truth becomes an application of some
metaphysical system, and truth inherits significant metaphysical
presuppositions along the way.
The goal of this section is to characterize the ideas of the
correspondence, coherence and pragmatist theories which animate the
contemporary debate. In some cases, the received forms of these
theories depart from the views that were actually defended in the
early 20th century. We thus dub them the 'neo-classical
theories'. Where appropriate, we pause to indicate how the
neo-classical theories emerge from their 'classical' roots
in the early 20th century.
### 1.1 The correspondence theory
Perhaps the most important of the neo-classical theories for the
contemporary literature is the correspondence theory. Ideas that sound
strikingly like a correspondence theory are no doubt very old. They
might well be found in Aristotle or Aquinas. When we turn to the late
19th and early 20th centuries where we pick up the story of the
neo-classical theories of truth, it is clear that ideas about
correspondence were central to the discussions of the time. In spite
of their importance, however, it is strikingly difficult to find an
accurate citation in the early 20th century for the received
neo-classical view. Furthermore, the way the correspondence theory
actually emerged will provide some valuable reference points for the
contemporary debate. For these reasons, we dwell on the origins of the
correspondence theory in the late 19th and early 20th centuries at
greater length than those of the other neo-classical views, before
turning to its contemporary neo-classical form. For an overview of the
correspondence theory, see David (2018).
### 1.1.1 The origins of the correspondence theory
The basic idea of the correspondence theory is that what we believe or
say is true if it corresponds to the way things actually are -
to the facts. This idea can be seen in various forms throughout the
history of philosophy. Its modern history starts with the beginnings
of analytic philosophy at the turn of the 20th century, particularly
in the work of G. E. Moore and Bertrand Russell.
Let us pick up the thread of this story in the years between 1898 and
about 1910. These years are marked by Moore and Russell's
rejection of idealism. Yet at this point, they do not hold a
correspondence theory of truth. Indeed Moore (1899) sees the
correspondence theory as a source of idealism, and rejects it. Russell
follows Moore in this regard. (For discussion of Moore's early
critique of idealism, where he rejects the correspondence theory of
truth, see Baldwin (1991). Hylton (1990) provides an extensive
discussion of Russell in the context of British idealism. An overview
of these issues is given by Baldwin (2018).)
In this period, Moore and Russell hold a version of the *identity
theory of truth*. They say comparatively little about it, but it
is stated briefly in Moore (1899; 1902) and Russell (1904). According
to the identity theory, a true proposition is *identical* to a
fact. Specifically, in Moore and Russell's hands, the theory
begins with propositions, understood as the objects of beliefs and
other propositional attitudes. Propositions are what are believed, and
give the contents of beliefs. They are also, according to this theory,
the primary bearers of truth. When a proposition is true, it is
identical to a fact, and a belief in that proposition is correct.
(Related ideas about the identity theory and idealism are discussed by
McDowell (1994) and further developed by Hornsby (2001).)
The identity theory Moore and Russell espoused takes truth to be a
property of propositions. Furthermore, taking up an idea familiar to
readers of Moore, the property of truth is a simple unanalyzable
property. Facts are understood as simply those propositions which are
true. There are true propositions and false ones, and facts just are
true propositions. There is thus no "difference between truth
and the reality to which it is supposed to correspond" (Moore,
1902, p. 21). (For further discussion of the identity theory of truth,
see Baldwin (1991), Candlish (1999), Candlish and Damnjanovic (2018),
Cartwright (1987), Dodd (2000), and the entry on the
identity theory of truth.)
Moore and Russell came to reject the identity theory of truth in favor
of a correspondence theory, sometime around 1910 (as we see in Moore,
1953, which reports lectures he gave in 1910-1911, and Russell,
1910b). They do so because they came to reject the existence of
propositions. Why? Among reasons, they came to doubt that there could
be any such things as false propositions, and then concluded that
there are no such things as propositions at all.
Why did Moore and Russell find false propositions problematic? A full
answer to this question is a point of scholarship that would take us
too far afield. (Moore himself lamented that he could not "put
the objection in a clear and convincing way" (1953, p. 263), but
see Cartwright (1987) and David (2001) for careful and clear
exploration of the arguments.) But very roughly, the identification of
facts with true propositions left them unable to see what a false
proposition could be other than something which is just like a fact,
though false. If such things existed, we would have fact-like things
in the world, which Moore and Russell now see as enough to make false
propositions count as true. Hence, they cannot exist, and so there are
no false propositions. As Russell (1956, p. 223) later says,
propositions seem to be at best "curious shadowy things"
in addition to facts.
As Cartwright (1987) reminds us, it is useful to think of this
argument in the context of Russell's slightly earlier views
about propositions. As we see clearly in Russell (1903), for instance,
he takes propositions to have constituents. But they are not mere
collections of constituents, but a 'unity' which brings
the constituents together. (We thus confront the 'problem of the
unity of the proposition'.) But what, we might ask, would be the
'unity' of a proposition that Samuel Ramey sings -
with constituents Ramey and singing - except Ramey bearing the
property of singing? If that is what the unity consists in, then we
seem to have nothing other than the fact that Ramey sings. But then we
could not have genuine false propositions without having false
facts.
As Cartwright also reminds us, there is some reason to doubt the
cogency of this sort of argument. But let us put the assessment of the
arguments aside, and continue the story. From the rejection of
propositions a correspondence theory emerges. The primary bearers of
truth are no longer propositions, but beliefs themselves. In a
slogan:
>
> A belief is true if and only if it *corresponds to a fact*.
>
Views like this are held by Moore (1953) and Russell (1910b; 1912). Of
course, to understand such a theory, we need to understand the crucial
relation of correspondence, as well as the notion of a fact to which a
belief corresponds. We now turn to these questions. In doing so, we
will leave the history, and present a somewhat more modern
reconstruction of a correspondence theory. (For more on facts and
propositions in this period, see Sullivan and Johnston (2018).)
### 1.1.2 The neo-classical correspondence theory
The correspondence theory of truth is at its core an ontological
thesis: a belief is true if there *exists* an appropriate
entity - a fact - to which it corresponds. If there is no
such entity, the belief is false.
Facts, for the neo-classical correspondence theory, are entities in
their own right. Facts are generally taken to be composed of
particulars and properties and relations or universals, at least. The
neo-classical correspondence theory thus only makes sense within the
setting of a metaphysics that includes such facts. Hence, it is no
accident that as Moore and Russell turn away from the identity theory
of truth, the metaphysics of facts takes on a much more significant
role in their views. This perhaps becomes most vivid in the later
Russell (1956, p. 182), where the existence of facts is the
"first truism." (The influence of Wittgenstein's
ideas to appear in the *Tractatus* (1922) on Russell in this
period was strong, and indeed, the *Tractatus* remains one of
the important sources for the neo-classical correspondence theory. For
more recent extensive discussions of facts, see Armstrong (1997) and
Neale (2001).)
Consider, for example, the belief that Ramey sings. Let us grant that
this belief is true. In what does its truth consist, according to the
correspondence theory? It consists in there being a fact in the world,
built from the individual Ramey, and the property of singing. Let us
denote this \(\langle\)*Ramey*, *Singing*\(\rangle\). This fact
exists. In contrast, the world (we presume) contains no fact
\(\langle\)*Ramey*, *Dancing*\(\rangle\). The belief that Ramey sings
stands in the relation of correspondence to the fact
\(\langle\)*Ramey*, *Singing*\(\rangle\), and so the belief is
true.
What is the relation of correspondence? One of the standing objections
to the classical correspondence theory is that a fully adequate
explanation of correspondence proves elusive. But for a simple belief,
like that Ramey sings, we can observe that the structure of the fact
\(\langle\)*Ramey*, *Singing*\(\rangle\) matches the subject-predicate
form of the *that*-clause which reports the belief, and may
well match the structure of the belief itself.
So far, we have very much the kind of view that Moore and Russell
would have found congenial. But the modern form of the correspondence
theory seeks to round out the explanation of correspondence by appeal
to *propositions*. Indeed, it is common to base a
correspondence theory of truth upon the notion of a *structured
proposition*. Propositions are again cast as the contents of
beliefs and assertions, and propositions have structure which at least
roughly corresponds to the structure of sentences. At least, for
simple beliefs like that Ramey sings, the proposition has the same
subject-predicate structure as the sentence. (Proponents of structured
propositions, such as Kaplan (1989), often look to Russell (1903) for
inspiration, and find unconvincing Russell's reasons for
rejecting them.)
With facts and structured propositions in hand, an attempt may be made
to explain the relation of correspondence. Correspondence holds
between a proposition and a fact when the proposition and fact have
the same structure, and the same constituents at each structural
position. When they correspond, the proposition and fact thus mirror
each other. In our simple example, we might have:
\[\begin{matrix}
\text{proposition that} & \text{Ramey} & \text{sings} \\
& \downarrow & \downarrow \\
\text{fact} & \langle Ramey, & Singing \rangle
\end{matrix}\]
Propositions, though structured like facts, can be true or false. In a
false case, like the proposition that Ramey dances, we would find no
fact at the bottom of the corresponding diagram. Beliefs are true or
false depending on whether the propositions which are believed
are.
We have sketched this view for simple propositions like the
proposition that Ramey sings. How to extend it to more complex cases,
like general propositions or negative propositions, is an issue we
will not delve into here. It requires deciding whether there are
complex facts, such as general facts or negative facts, or whether
there is a more complex relation of correspondence between complex
propositions and simple facts. (The issue of whether there are such
complex facts marks a break between Russell (1956) and Wittgenstein
(1922) and the earlier views which Moore (1953) and Russell (1912)
sketch.)
According to the correspondence theory as sketched here, what is key
to truth is a relation between propositions and the world, which
obtains when the world contains a fact that is structurally similar to
the proposition. Though this is not the theory Moore and Russell held,
it weaves together ideas of theirs with a more modern take on
(structured) propositions. We will thus dub it the neo-classical
correspondence theory. This theory offers us a paradigm example of a
correspondence theory of truth.
The leading idea of the correspondence theory is familiar. It is a
form of the older idea that true beliefs show the right kind of
*resemblance* to what is believed. In contrast to earlier
empiricist theories, the thesis is not that one's ideas *per
se* resemble what they are about. Rather, the propositions which
give the contents of one's true beliefs mirror reality, in
virtue of entering into correspondence relations to the right pieces
of it.
In this theory, it is the way the world provides us with appropriately
structured entities that explains truth. Our metaphysics thus explains
the nature of truth, by providing the entities needed to enter into
correspondence relations.
For more on the correspondence theory, see David (1994, 2018) and the
entry on the
correspondence theory of truth.
### 1.2 The coherence theory
Though initially the correspondence theory was seen by its developers
as a competitor to the identity theory of truth, it was also
understood as opposed to the coherence theory of truth.
We will be much briefer with the historical origins of the coherence
theory than we were with the correspondence theory. Like the
correspondence theory, versions of the coherence theory can be seen
throughout the history of philosophy. (See, for instance, Walker
(1989) for a discussion of its early modern lineage.) Like the
correspondence theory, it was important in the early 20th century
British origins of analytic philosophy. Particularly, the coherence
theory of truth is associated with the British idealists to whom Moore
and Russell were reacting.
Many idealists at that time did indeed hold coherence theories. Let us
take as an example Joachim (1906). (This is the theory that Russell
(1910a) attacks.) Joachim says that:
>
> Truth in its essential nature is that systematic coherence which is
> the character of a significant whole (p. 76).
>
We will not attempt a full exposition of Joachim's view, which
would take us well beyond the discussion of truth into the details of
British idealism. But a few remarks about his theory will help to give
substance to the quoted passage.
Perhaps most importantly, Joachim talks of 'truth' in the
singular. This is not merely a turn of phrase, but a reflection of his
monistic idealism. Joachim insists that what is true is the
"whole complete truth" (p. 90). Individual judgments or
beliefs are certainly not the whole complete truth. Such judgments
are, according to Joachim, only true to a degree. One aspect of this
doctrine is a kind of holism about content, which holds that any
individual belief or judgment gets its content only in virtue of being
part of a system of judgments. But even these systems are only true to
a degree, measuring the extent to which they express the content of
the single 'whole complete truth'. Any real judgment we
might make will only be partially true.
To flesh out Joachim's theory, we would have to explain what a
significant whole is. We will not attempt that, as it leads us to some
of the more formidable aspects of his view, e.g., that it is a
"process of self-fulfillment" (p. 77). But it is clear
that Joachim takes 'systematic coherence' to be stronger
than consistency. In keeping with his holism about content, he rejects
the idea that coherence is a relation between independently identified
contents, and so finds it necessary to appeal to 'significant
wholes'.
As with the correspondence theory, it will be useful to recast the
coherence theory in a more modern form, which will abstract away from
some of the difficult features of British idealism. As with the
correspondence theory, it can be put in a slogan:
>
> A belief is true if and only if it is part of a coherent system of
> beliefs.
>
To further the contrast with the neo-classical correspondence theory,
we may add that a proposition is true if it is the content of a belief
in the system, or entailed by a belief in the system. We may assume,
with Joachim, that the condition of coherence will be stronger than
consistency. With the idealists generally, we might suppose that
features of the believing subject will come into play.
This theory is offered as an analysis of the nature of truth, and not
simply a test or criterion for truth. Put as such, it is clearly not
Joachim's theory (it lacks his monism, and he rejects
propositions), but it is a standard take on coherence in the
contemporary literature. (It is the way the coherence theory is given
in Walker (1989), for instance. See also Young (2001) for a recent
defense of a coherence theory.) Let us take this as our neo-classical
version of the coherence theory. The contrast with the correspondence
theory of truth is clear. Far from being a matter of whether the world
provides a suitable object to mirror a proposition, truth is a matter
of how beliefs are related to each other.
The coherence theory of truth enjoys two sorts of motivations. One is
primarily epistemological. Most coherence theorists also hold a
coherence theory of knowledge; more specifically, a coherence theory
of justification. According to this theory, to be justified is to be
part of a coherent system of beliefs. An argument for this is often
based on the claim that only another belief could stand in a
justification relation to a belief, allowing nothing but properties of
systems of belief, including coherence, to be conditions for
justification. Combining this with the thesis that a fully justified
belief is true forms an argument for the coherence theory of truth.
(An argument along these lines is found in Blanshard (1939), who holds
a form of the coherence theory closely related to
Joachim's.)
The steps in this argument may be questioned by a number of
contemporary epistemological views. But the coherence theory also goes
hand-in-hand with its own metaphysics as well. The coherence theory is
typically associated with idealism. As we have already discussed,
forms of it were held by British idealists such as Joachim, and later
by Blanshard (in America). An idealist should see the last step in the
justification argument as quite natural. More generally, an idealist
will see little (if any) room between a system of beliefs and the
world it is about, leaving the coherence theory of truth as an
extremely natural option.
It is possible to be an idealist without adopting a coherence theory.
(For instance, many scholars read Bradley as holding a version of the
identity theory of truth. See Baldwin (1991) for some discussion.)
However, it is hard to see much of a way to hold the coherence theory
of truth without maintaining some form of idealism. If there is
nothing to truth beyond what is to be found in an appropriate system
of beliefs, then it would seem one's beliefs constitute the
world in a way that amounts to idealism. (Walker (1989) argues that
every coherence theorist must be an idealist, but not vice-versa.)
The neo-classical correspondence theory seeks to capture the intuition
that truth is a content-to-world relation. It captures this in the
most straightforward way, by asking for an object in the world to pair
up with a true proposition. The neo-classical coherence theory, in
contrast, insists that truth is not a content-to-world relation at
all; rather, it is a content-to-content, or belief-to-belief,
relation. The coherence theory requires some metaphysics which can
make the world somehow reflect this, and idealism appears to be it. (A
distant descendant of the neo-classical coherence theory that does not
require idealism will be discussed in section 6.5 below.)
For more on the coherence theory, see Walker (2018) and the entry on
the
coherence theory of truth.
### 1.3 Pragmatist theories
A different perspective on truth was offered by the American
pragmatists. As with the neo-classical correspondence and coherence
theories, the pragmatist theories go with some typical slogans. For
example, Peirce is usually understood as holding the view that:
>
> Truth is the end of inquiry.
>
(See, for instance, Hartshorne et al., 1931-58, §3.432.)
Both Peirce and James are associated with the slogan that:
>
> Truth is satisfactory to believe.
>
James (e.g., 1907) understands this principle as telling us what
practical value truth has. True beliefs are guaranteed not to conflict
with subsequent experience. Likewise, Peirce's slogan tells us
that true beliefs will remain settled at the end of prolonged inquiry.
Peirce's slogan is perhaps most typically associated with
pragmatist views of truth, so we might take it to be our canonical
neo-classical theory. However, the contemporary literature does not
seem to have firmly settled upon a received
'neo-classical' pragmatist theory.
In her reconstruction (upon which we have relied heavily), Haack
(1976) notes that the pragmatists' views on truth also make room
for the idea that truth involves a kind of correspondence, insofar as
the scientific method of inquiry is answerable to some independent
world. Peirce, for instance, does not reject a correspondence theory
outright; rather, he complains that it provides merely a
'nominal' or 'transcendental' definition of
truth (e.g., Hartshorne et al., 1931-58, §5.553,
§5.572), which is cut off from practical matters of experience,
belief, and doubt (§5.416). (See Misak (2004) for an extended
discussion.)
This marks an important difference between the pragmatist theories and
the coherence theory we just considered. Even so, pragmatist theories
also have an affinity with coherence theories, insofar as we expect
the end of inquiry to be a coherent system of beliefs. As Haack also
notes, James maintains an important verificationist idea: truth is
what is verifiable. We will see this idea re-appear in section 4.
For more on pragmatist theories of truth, see Misak (2018).
James' views are discussed further in the entry on
William James.
Peirce's views are discussed further in the entry on
Charles Sanders Peirce.
## 2. Tarski's theory of truth
Modern forms of the classical theories survive. Many of these modern
theories, notably correspondence theories, draw on ideas developed by
Tarski.
In this regard, it is important to bear in mind that his seminal work
on truth (1935) is very much of a piece with other works in
mathematical logic, such as his (1931), and as much as anything this
work lays the ground-work for the modern subject of model theory
- a branch of mathematical logic, not the metaphysics of truth.
In this respect, Tarski's work provides a set of highly useful
tools that may be employed in a wide range of philosophical projects.
(See Patterson (2012) for more on Tarski's work in its
historical context.)
Tarski's work has a number of components, which we will consider
in turn.
### 2.1 Sentences as truth-bearers
In the classical debate on truth at the beginning of the 20th century
we considered in section 1, the issue of truth-bearers was of great
significance. For instance, Moore and Russell's turn to the
correspondence theory was driven by their views on whether there are
propositions to be the bearers of truth. Many theories we reviewed
took *beliefs* to be the bearers of truth.
In contrast, Tarski and much of the subsequent work on truth takes
*sentences* to be the primary bearers of truth. This is not an
entirely novel development: Russell (1956) also takes truth to apply
to sentences (which he calls 'propositions' in that text).
But whereas much of the classical debate takes the issue of the
primary bearers of truth to be a substantial and important
metaphysical one, Tarski is quite casual about it. His primary reason
for taking sentences as truth-bearers is convenience, and he
explicitly distances himself from any commitment about the
philosophically contentious issues surrounding other candidate
truth-bearers (e.g., Tarski, 1944). (Russell (1956) makes a similar
suggestion that sentences are the appropriate truth-bearers "for
the purposes of logic" (p. 184), though he still takes the
classical metaphysical issues to be important.)
We will return to the issue of the primary bearers of truth in section
6.1. For the moment, it will be useful to simply follow Tarski's
lead. But it should be stressed that for this discussion, sentences
are *fully interpreted* sentences, having meanings. We will
also assume that the sentences in question do not change their content
across occasions of use, i.e., that they display no
context-dependence. We are taking sentences to be what Quine (1960)
calls 'eternal sentences'.
In some places (e.g., Tarski, 1944), Tarski refers to his view as the
'semantic conception of truth'. It is not entirely clear
just what Tarski had in mind by this, but it is clear enough that
Tarski's theory defines truth for sentences in terms of concepts
like reference and satisfaction, which are intimately related to the
basic semantic functions of names and predicates (according to many
approaches to semantics). For more discussion, see Wolenski
(2001).
### 2.2 Convention T
Let us suppose we have a fixed language \(\mathbf{L}\) whose
sentences are fully interpreted. The basic question Tarski poses is
what an adequate *theory of truth for* \(\mathbf{L}\) would
be. Tarski's answer is embodied in what he calls *Convention
T*:
>
> An adequate theory of truth for \(\mathbf{L}\) must imply, for
> each sentence \(\phi\) of \(\mathbf{L}\)
>
>
> \(\ulcorner \phi \urcorner\) is true if and only if \(\phi\).
>
(We have simplified Tarski's presentation somewhat.) This is an
adequacy condition for theories, not a theory itself. Given the
assumption that \(\mathbf{L}\) is fully interpreted, we may assume
that each sentence \(\phi\) in fact has a truth value. In light of this,
Convention T guarantees that the truth predicate given by the theory
will be *extensionally correct*, i.e., have as its extension
all and only the true sentences of \(\mathbf{L}\).
Convention T draws our attention to the biconditionals of the form
>
> \(\ulcorner \ulcorner \phi \urcorner\)
> is true if and only if \(\phi \urcorner\),
>
which are usually called the *Tarski biconditionals* for a
language \(\mathbf{L}\).
### 2.3 Recursive definition of truth
Tarski does not merely propose a condition of adequacy for theories of
truth, he also shows how to meet it. One of his insights is that if
the language \(\mathbf{L}\) displays the right structure, then
truth for \(\mathbf{L}\) can be defined recursively. For instance,
let us suppose that \(\mathbf{L}\) is a simple formal language,
containing two atomic sentences 'snow is white' and
'grass is green', and the sentential connectives \(\vee\) and
\(\neg\).
In spite of its simplicity, \(\mathbf{L}\) contains infinitely
many distinct sentences. But truth can be defined for all of them by
recursion.
1. Base clauses:
1. 'Snow is white' is true if and only if snow is
white.
2. 'Grass is green' is true if and only if grass is
green.
2. Recursion clauses. For any sentences \(\phi\) and \(\psi\) of
\(\mathbf{L}\):
1. \(\ulcorner \phi \vee \psi \urcorner\) is true if
and only if \(\ulcorner \phi \urcorner\) is true or
\(\ulcorner \psi \urcorner\) is true.
2. \(\ulcorner \neg \phi \urcorner\) is true if and
only if it is not the case that \(\ulcorner \phi \urcorner\) is true.
This theory satisfies Convention T.
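The finitary character of this definition can be made vivid with a short sketch. The tuple encoding of sentences and the truth values assigned in the base clauses are our own illustrative assumptions, not part of Tarski's formal apparatus:

```python
# A sketch of the recursive truth definition for the toy language L.
# Sentences are represented as nested tuples: an atomic sentence is a
# string, a disjunction is ('or', phi, psi), a negation is ('not', phi).

# Base clauses: we presume, with the text, that snow is white and grass
# is green, so both atomic sentences come out true.
BASE = {
    'snow is white': True,
    'grass is green': True,
}

def is_true(sentence):
    """Recursively evaluate truth for sentences of L."""
    if isinstance(sentence, str):      # base clauses
        return BASE[sentence]
    op = sentence[0]
    if op == 'or':                     # recursion clause for disjunction
        return is_true(sentence[1]) or is_true(sentence[2])
    if op == 'not':                    # recursion clause for negation
        return not is_true(sentence[1])
    raise ValueError('not a sentence of L')

# Finitely many clauses cover infinitely many sentences:
print(is_true(('or', 'snow is white', ('not', 'grass is green'))))  # True
```

Note how the four clauses of the definition correspond one-to-one to the branches of the function, which is what lets the recursion terminate on any finite sentence.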
### 2.4 Reference and satisfaction
This may look trivial, but in defining an extensionally correct truth
predicate for an infinite language with four clauses, we have made a
modest application of a very powerful technique.
Tarski's techniques go further, however. They do not stop with
atomic sentences. Tarski notes that truth for each atomic sentence can
be defined in terms of two closely related notions: *reference*
and *satisfaction*. Let us consider a language
\(\mathbf{L}'\), just like \(\mathbf{L}\) except that
instead of simply having two atomic sentences,
\(\mathbf{L}'\) breaks atomic sentences into terms and
predicates. \(\mathbf{L}'\) contains terms
'snow' and 'grass' (let us engage in the
idealization that these are simply singular terms), and predicates
'is white' and 'is green'. So
\(\mathbf{L}'\) is like \(\mathbf{L}\), but also
contains the sentences 'Snow is green' and 'Grass is
white'.
We can define truth for atomic sentences of \(\mathbf{L}'\)
in the following way.
1. Base clauses:
1. 'Snow' refers to snow.
2. 'Grass' refers to grass.
3. \(a\) satisfies 'is white' if and only if
\(a\) is white.
4. \(a\) satisfies 'is green' if and only if
\(a\) is green.
2. For any atomic sentence \(\ulcorner t\) is
\(P \urcorner\): \(\ulcorner t\) is
\(P \urcorner\) is true if and only if the referent of
\(\ulcorner t \urcorner\) satisfies
\(\ulcorner P\urcorner\).
One of Tarski's key insights is that the apparatus of
satisfaction allows for a recursive definition of truth for sentences
with *quantifiers*, though we will not examine that here. We
could repeat the recursion clauses for \(\mathbf{L}\) to produce a
full theory of truth for \(\mathbf{L}'\).
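The atomic clauses for \(\mathbf{L}'\) can be sketched in the same style. The dictionaries standing in for the world, for reference, and for satisfaction are illustrative assumptions only:

```python
# A sketch of truth for atomic sentences of L' via reference and
# satisfaction. Terms and predicates are strings; an atomic sentence
# is a (term, predicate) pair.

# Toy stand-in for the world: objects paired with their properties.
WORLD = {
    'the stuff snow': {'white'},
    'the stuff grass': {'green'},
}

# Base clauses for reference: 'snow' refers to snow, etc.
REFERENCE = {
    'snow': 'the stuff snow',
    'grass': 'the stuff grass',
}

# Base clauses for satisfaction: a satisfies 'is white' iff a is white.
SATISFACTION = {
    'is white': lambda a: 'white' in WORLD[a],
    'is green': lambda a: 'green' in WORLD[a],
}

def atomic_true(term, predicate):
    """'t is P' is true iff the referent of t satisfies P."""
    return SATISFACTION[predicate](REFERENCE[term])

print(atomic_true('snow', 'is white'))   # True
print(atomic_true('grass', 'is white'))  # False
```

As the text notes, plugging these atomic clauses into the earlier recursion clauses yields a full Tarskian theory of truth for \(\mathbf{L}'\), including its false atomic sentences such as 'Snow is green'.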
Let us say that a Tarskian theory of truth is a recursive theory,
built up in ways similar to the theory of truth for
\(\mathbf{L}'\). Tarski goes on to demonstrate some key
applications of such a theory of truth. A Tarskian theory of truth for
a language \(\mathbf{L}\) can be used to show that theories in
\(\mathbf{L}\) are consistent. This was especially important to
Tarski, who was concerned that the Liar paradox would make theories in
languages containing a truth predicate inconsistent.
For more, see Ray (2018) and the entries on
axiomatic theories of truth,
the
Liar paradox,
and
Tarski's truth definitions.
## 3. Correspondence revisited
The correspondence theory of truth expresses the very natural idea
that truth is a content-to-world or word-to-world relation: what we
say or think is true or false in virtue of the way the world turns out
to be. We suggested that, against a background like the metaphysics of
facts, it does so in a straightforward way. But the idea of
correspondence is certainly not specific to this framework. Indeed, it
is controversial whether a correspondence theory should rely on any
particular metaphysics at all. The basic idea of correspondence, as
Tarski (1944) and others have suggested, is captured in the slogan
from Aristotle's *Metaphysics* Γ 7.27, "to
say of what is that it is, or of what is not that it is not, is
true" (Ross, 1928). 'What is', it is natural enough
to say, is a fact, but this natural turn of phrase may well not
require a full-blown metaphysics of facts. (For a discussion of
Aristotle's views in a historical context, see Szaif
(2018).)
Yet without the metaphysics of facts, the notion of correspondence as
discussed in section 1.1 loses substance. This has led to two distinct
strands in contemporary thinking about the correspondence theory. One
strand seeks to recast the correspondence theory in a way that does
not rely on any particular ontology. Another seeks to find an
appropriate ontology for correspondence, either in terms of facts or
other entities. We will consider each in turn.
### 3.1 Correspondence without facts
Tarski himself sometimes suggested that his theory was a kind of
correspondence theory of truth. Whether his own theory is a
correspondence theory, and even whether it provides any substantial
philosophical account of truth at all, is a matter of controversy.
(One rather drastic negative assessment from Putnam (1985-86, p.
333) is that "As a philosophical account of truth,
Tarski's theory fails as badly as it is possible for an account
to fail.") But a number of philosophers (e.g., Davidson, 1969;
Field, 1972) have seen Tarski's theory as providing at least the
core of a correspondence theory of truth which dispenses with the
metaphysics of facts.
Tarski's theory shows how truth for a sentence is
*determined* by certain properties of its constituents; in
particular, by properties of reference and satisfaction (as well as by
the logical constants). As it is normally understood, reference is the
preeminent word-to-world relation. Satisfaction is naturally
understood as a word-to-world relation as well, which relates a
predicate to the things in the world that bear it. The Tarskian
recursive definition shows how truth is determined by reference and
satisfaction, and so is in effect determined by the things in the
world we refer to and the properties they bear. This, one might
propose, is all the correspondence we need. It is not correspondence
of sentences or propositions to facts; rather, it is correspondence of
our expressions to objects and the properties they bear, and then ways
of working out the truth of claims in terms of this.
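For concreteness, the compositional idea can be sketched in code. The following toy evaluator is purely illustrative: the vocabulary, the `REFERENCE` and `SATISFACTION` tables, and the sentence encoding are all invented here, not part of Tarski's formal apparatus. It shows only the structural point that, given base word-to-world relations, truth values for complex sentences are determined recursively by the logical constants.

```python
# Illustrative sketch only: a toy "Tarskian" evaluation for a tiny
# predicate language. The base relations below play the role of
# reference (terms to objects) and satisfaction (predicates to the
# objects satisfying them); all specific names are invented examples.

REFERENCE = {"snow": "snow", "grass": "grass"}          # term -> object
SATISFACTION = {"is white": {"snow"},                   # predicate -> satisfiers
                "is green": {"grass"}}

def true_in_model(sentence):
    """Recursively determine truth from reference and satisfaction."""
    kind = sentence[0]
    if kind == "atom":                                  # ("atom", term, predicate)
        _, term, pred = sentence
        return REFERENCE[term] in SATISFACTION[pred]
    if kind == "not":                                   # ("not", s)
        return not true_in_model(sentence[1])
    if kind == "and":                                   # ("and", s1, s2)
        return true_in_model(sentence[1]) and true_in_model(sentence[2])
    raise ValueError("unknown construction")

# 'Snow is white' comes out true, and truth for the conjunction is
# worked out from the truth of its parts.
print(true_in_model(("atom", "snow", "is white")))      # True
print(true_in_model(("and", ("atom", "snow", "is white"),
                            ("not", ("atom", "grass", "is white")))))  # True
```

The design point the sketch makes vivid is that no facts appear anywhere: only objects, the relations of reference and satisfaction, and a recursion on logical form.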
This is certainly not the neo-classical idea of correspondence. In not
positing facts, it does not posit any single object to which a true
proposition or sentence might correspond. Rather, it shows how truth
might be worked out from basic word-to-world relations. However, a
number of authors have noted that Tarski's theory cannot by
itself provide us with such an account of truth. As we will discuss
more fully in section 4.2, Tarski's apparatus is in fact
compatible with theories of truth that are certainly not
correspondence theories.
Field (1972), in an influential discussion and diagnosis of what is
lacking in Tarski's account, in effect points out that whether
we really have something worthy of the name
'correspondence' depends on our having notions of
reference and satisfaction which genuinely establish word-to-world
relations. (Field does not use the term 'correspondence',
but does talk about e.g., the "connection between words and
things" (p. 373).) By itself, Field notes, Tarski's theory
does not offer an account of reference and satisfaction at all.
Rather, it offers a number of *disquotation clauses*, such
as:
1. 'Snow' refers to snow.
2. \(a\) satisfies 'is white' if and only if
\(a\) is white.
These clauses have an air of triviality (though whether they are to be
understood as trivial principles or statements of non-trivial semantic
facts has been a matter of some debate). With Field, we might propose
to supplement clauses like these with an account of reference and
satisfaction. Such a theory should tell us what makes it the case that
the word 'snow' refers to snow. (In 1972, Field was
envisaging a physicalist account, along the lines of the causal theory
of reference.) This should *inter alia* guarantee that truth is
really determined by word-to-world relations, so in conjunction with
the Tarskian recursive definition, it could provide a correspondence
theory of truth.
Such a theory clearly does not rely on a metaphysics of facts. Indeed,
it is in many ways metaphysically neutral, as it does not take a stand
on the nature of particulars, or of the properties or universals that
underwrite facts about satisfaction. However, it may not be entirely
devoid of metaphysical implications, as we will discuss further in
section 4.1.
### 3.2 Representation and correspondence
Much of the subsequent discussion of Field-style approaches to
correspondence has focused on the role of representation in these
views. Field's own (1972) discussion relies on a causal relation
between terms and their referents, and a similar relation for
satisfaction. These are instances of representation relations.
According to representational views, meaningful items, like perhaps
thoughts or sentences or their constituents, have their contents in
virtue of standing in the right relation to the *things* they
represent. On many views, including Field's, a name stands in
such a relation to its bearer, and the relation is a causal one.
The project of developing a naturalist account of the representation
relation has been an important one in the philosophy of mind and
language. (See the entry on
mental representation.)
But, it has implications for the theory of truth. Representational
views of content lead naturally to correspondence theories of truth.
To make this vivid, suppose you hold that sentences or beliefs stand
in a representation relation to some objects. It is natural to suppose
that for true beliefs or sentences, those objects would be facts. We
then have a correspondence theory, with the correspondence relation
explicated as a representation relation: a truth bearer is true if it
represents a fact.
As we have discussed, many contemporary views reject facts, but one
can hold a representational view of content without them. One
interpretation of Field's theory is just that. The relations of
reference and satisfaction are representation relations, and truth for
sentences is determined compositionally in terms of those
representation relations, and the nature of the objects they
represent. If we have such relations, we have the building blocks for
a correspondence theory without facts. Field (1972) anticipated a
naturalist reduction of the representation relation via a causal theory, but
any view that accepts representation relations for truth bearers or
their constituents can provide a similar theory of truth. (See Jackson
(2006) and Lynch (2009) for further discussion.)
Representational views of content provide a natural way to approach
the correspondence theory of truth, and likewise,
anti-representational views provide a natural way to avoid the
correspondence theory of truth. This is most clear in the work of
Davidson, as we will discuss more in section 6.5.
### 3.3 Facts again
There have been a number of correspondence theories that do make use
of facts. Some are notably different from the neo-classical theory
sketched in section 1.1. For instance, Austin (1950) proposes a view
in which each statement (understood roughly as an utterance event)
corresponds to both a fact or situation, and a type of situation. It
is true if the former is of the latter type. This theory, which has
been developed by *situation theory* (e.g., Barwise and Perry,
1986), rejects the idea that correspondence is a kind of mirroring
between a fact and a proposition. Rather, correspondence relations to
Austin are entirely conventional. (See Vision (2004) for an extended
defense of an Austinian correspondence theory.) As an ordinary
language philosopher, Austin grounds his notion of fact more in
linguistic usage than in an articulated metaphysics, but he defends
his use of fact-talk in Austin (1961b).
In a somewhat more Tarskian spirit, formal theories of facts or states
of affairs have also been developed. For instance, Taylor (1976)
provides a recursive definition of a collection of 'states of
affairs' for a given language. Taylor's states of affairs
seem to reflect the notion of fact at work in the neo-classical
theory, though as an exercise in logic, they are officially
\(n\)-tuples of objects and *intensions*.
There are more metaphysically robust notions of fact in the current
literature. For instance, Armstrong (1997) defends a metaphysics in
which facts (under the name 'states of affairs') are
metaphysically fundamental. The view has much in common with the
neo-classical one. Like the neo-classical view, Armstrong endorses a
version of the correspondence theory. States of affairs are
*truthmakers* for propositions, though Armstrong argues that
there may be many such truthmakers for a given proposition, and vice
versa. (Armstrong also envisages a naturalistic account of
propositions as classes of equivalent belief-tokens.)
Armstrong's primary argument is what he calls the
'truthmaker argument'. It begins by advancing a
*truthmaker principle*, which holds that for any given truth,
there must be a truthmaker - a "something in the world
which makes it the case, that serves as an ontological ground, for
this truth" (p. 115). It is then argued that facts are the
appropriate truthmakers.
In contrast to the approach to correspondence discussed in section
3.1, which offered correspondence with minimal ontological
implications, this view returns to the ontological basis of
correspondence that was characteristic of the neo-classical
theory.
For more on facts, see the entry on
facts.
### 3.4 Truthmakers
The truthmaker principle is often put as the schema:
>
> If \(\phi\), then there is an \(x\) such that necessarily, if
> \(x\) exists, then \(\phi\).
>
(Fox (1987) proposed putting the principle this way, rather than
explicitly in terms of truth.)
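For concreteness, an instance of the schema (our own illustrative example, not one from the truthmaker literature) would read:

>
> If snow is white, then there is an \(x\) such that necessarily, if
> \(x\) exists, then snow is white.
>

Here the candidate for \(x\) would be whatever entity makes it the case that snow is white: on the fact-based view, the fact that snow is white; on rival proposals, a trope or a concrete particular.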
The truthmaker principle expresses the *ontological* aspect of
the neo-classical correspondence theory. Not merely must truth obtain
in virtue of word-to-world relations, but there must be a thing that
makes each truth true. (For one view on this, see Merricks
(2007).)
The neo-classical correspondence theory, and Armstrong, cast facts as
the appropriate truthmakers. However, it is a non-trivial step from
the truthmaker principle to the existence of facts. There are a number
of proposals in the literature for how other sorts of objects could be
truthmakers; for instance, tropes (called 'moments' in
Mulligan et al., 1984). Parsons (1999) argues that the truthmaker
principle (presented in a somewhat different form) is compatible with
there being only concrete particulars.
As we saw in discussing the neo-classical correspondence theory,
truthmaker theories, and fact theories in particular, raise a number
of issues. One which has been discussed at length, for instance, is
whether there are *negative facts*. Negative facts would be the
truthmakers for negated sentences. Russell (1956) notoriously
expresses ambivalence about whether there are negative facts.
Armstrong (1997) rejects them, while Beall (2000) defends them. (For
more discussion of truthmakers, see Cameron (2018) and the papers in
Beebee and Dodd (2005).)
## 4. Realism and anti-realism
The neo-classical theories we surveyed in section 1 made the theory of
truth an application of their background metaphysics (and in some
cases epistemology). In section 2 and especially in section 3, we
returned to the issue of what sorts of ontological commitments might
go with the theory of truth. There we saw a range of options, from
relatively ontologically non-committal theories, to theories requiring
highly specific ontologies.
There is another way in which truth relates to metaphysics. Many ideas
about realism and anti-realism are closely related to ideas about
truth. Indeed, many approaches to questions about realism and
anti-realism simply make them questions about truth.
### 4.1 Realism and truth
In discussing the approach to correspondence of section 3.1, we noted
that it has few ontological requirements. It relies on there being
objects of reference, and something about the world which makes for
determinate satisfaction relations; but beyond that, it is
ontologically neutral. But as we mentioned there, this is not to say
that it has no metaphysical implications. A correspondence theory of
truth, of any kind, is often taken to embody a form of
*realism*.
The key features of realism, as we will take it, are that:
1. The world exists objectively, independently of the ways we think
about it or describe it.
2. Our thoughts and claims are about that world.
(Wright (1992) offers a nice statement of this way of thinking about
realism.) These theses imply that our claims are objectively true or
false, depending on how the world they are about is. The world that we
represent in our thoughts or language is an objective world. (Realism
may be restricted to some subject-matter, or range of discourse, but
for simplicity, we will talk about only its global form.)
It is often argued that these theses require some form of the
correspondence theory of truth. (Putnam (1978, p. 18) notes,
"Whatever else realists say, they typically say that they
believe in a 'correspondence theory of truth'.") At
least, they are supported by the kind of correspondence theory without
facts discussed in section 3.1, such as Field's proposal. Such a
theory will provide an account of objective relations of reference and
satisfaction, and show how these determine the truth or falsehood of
what we say about the world. Field's own approach (1972) to this
problem seeks a physicalist explanation of reference. But realism is a
more general idea than physicalism. Any theory that provides objective
relations of reference and satisfaction, and builds up a theory of
truth from them, would give a form of realism. (Making the objectivity
of reference the key to realism is characteristic of work of Putnam,
e.g., 1978.)
Another important mark of realism expressed in terms of truth is the
property of *bivalence*. As Dummett has stressed (e.g., 1959;
1976; 1983; 1991), a realist should see there being a fact of the
matter one way or the other about whether any given claim is correct.
Hence, one important mark of realism is that it goes together with the
principle of *bivalence*: every truth-bearer (sentence or
proposition) is true or false. In much of his work, Dummett has made
this the characteristic mark of realism, and often identifies realism
about some subject-matter with accepting bivalence for discourse about
that subject-matter. At the very least, it captures a great deal of
what is more loosely put in the statement of realism above.
Both the approaches to realism, through reference and through
bivalence, make truth the primary vehicle for an account of realism. A
theory of truth which substantiates bivalence, or builds truth from a
determinate reference relation, does most of the work of giving a
realistic metaphysics. It might even simply be a realistic
metaphysics.
We have thus turned on its head the relation of truth to metaphysics
we saw in our discussion of the neo-classical correspondence theory in
section 1.1. There, a correspondence theory of truth was built upon a
substantial metaphysics. Here, we have seen how articulating a theory
that captures the idea of correspondence can be crucial to providing a
realist metaphysics. (For another perspective on realism and truth,
see Alston (1996). Devitt (1984) offers an opposing view to the kind
we have sketched here, which rejects any characterization of realism
in terms of truth or other semantic concepts.)
In light of our discussion in section 1.1.1, we should pause to note
that the connection between realism and the correspondence theory of
truth is not absolute. When Moore and Russell held the identity theory
of truth, they were most certainly realists. The right kind of
metaphysics of propositions can support a realist view, as can a
metaphysics of facts. The modern form of realism we have been
discussing here seeks to avoid basing itself on such particular
ontological commitments, and so prefers to rely on the kind of
correspondence-without-facts approach discussed in section 3.1. This
is not to say that realism will be devoid of ontological commitments,
but the commitments will flow from whichever specific claims about
some subject-matter are taken to be true.
For more on realism and truth, see Fumerton (2002) and the entry on
realism.
### 4.2 Anti-realism and truth
It should come as no surprise that the relation between truth and
metaphysics seen by modern realists can also be exploited by
anti-realists. Many modern anti-realists see the theory of truth as
the key to formulating and defending their views. With Dummett (e.g.,
1959; 1976; 1991), we might expect the characteristic mark of
anti-realism to be the rejection of bivalence.
Indeed, many contemporary forms of anti-realism may be formulated as
theories of truth, and they do typically deny bivalence. Anti-realism
comes in many forms, but let us take as an example a (somewhat crude)
form of verificationism. Such a theory holds that a claim is correct
just insofar as it is in principle *verifiable*, i.e., there is
a verification procedure we could in principle carry out which would
yield the answer that the claim in question was verified.
So understood, verificationism is a theory of truth. The claim is not
that verification is the most important epistemic notion, but that
truth *just is* verifiability. As with the kind of realism we
considered in section 4.1, this view expresses its metaphysical
commitments in its explanation of the nature of truth. Truth is not,
to this view, a fully objective matter, independent of us or our
thoughts. Instead, truth is constrained by our abilities to verify,
and is thus constrained by our epistemic situation. Truth is to a
significant degree an epistemic matter, which is typical of many
anti-realist positions.
As Dummett says, the verificationist notion of truth does not appear
to support bivalence. Any statement that reaches beyond what we can in
principle verify or refute (verify its negation) will be a
counter-example to bivalence. Take, for instance, the claim that there
is some substance, say uranium, present in some region of the universe
too distant to be inspected by us within the expected lifespan of the
universe. Insofar as this really would be in principle unverifiable,
we have no reason to maintain it is true or false according to the
verificationist theory of truth.
Verificationism of this sort is one of a family of anti-realist views.
Another example is the view that identifies truth with warranted
assertibility. Assertibility, as well as verifiability, has been
important in Dummett's work. (See also works of McDowell, e.g.,
1976 and Wright, e.g., 1976; 1982; 1992.)
Anti-realism of the Dummettian sort is not a descendant of the
coherence theory of truth *per se*. But in some ways, as
Dummett himself has noted, it might be construed as a descendant
- perhaps very distant - of idealism. If idealism is the
most drastic form of rejection of the independence of mind and world,
Dummettian anti-realism is a more modest form, which sees epistemology
imprinted in the world, rather than the wholesale embedding of world
into mind. At the same time, the idea of truth as warranted
assertibility or verifiability reiterates a theme from the pragmatist
views of truth we surveyed in section 1.3.
Anti-realist theories of truth, like the realist ones we discussed in
section 4.1, can generally make use of the Tarskian apparatus.
Convention T, in particular, does not discriminate between realist and
anti-realist notions of truth. Likewise, the base clauses of a
Tarskian recursive theory are given as disquotation principles, which
are neutral between realist and anti-realist understandings of notions
like reference. As we saw with the correspondence theory, giving a
full account of the nature of truth will generally require more than
the Tarskian apparatus itself. How an anti-realist is to explain the
basic concepts that go into a Tarskian theory is a delicate matter. As
Dummett and Wright have investigated in great detail, it appears that
the background logic in which the theory is developed will have to be
non-classical.
For more on anti-realism and truth, see Shieh (2018) and the papers in
Greenough and Lynch (2006) and the entry on
realism.
### 4.3 Anti-realism and pragmatism
Many commentators see a close connection between Dummett's
anti-realism and the pragmatists' views of truth, in that both
put great weight on ideas of verifiability or assertibility. Dummett
himself stressed parallels between anti-realism and intuitionism in
the philosophy of mathematics.
Another view on truth which returns to pragmatist themes is the
'internal realism' of Putnam (1981). There Putnam glosses
truth as what would be justified under ideal epistemic conditions.
With the pragmatists, Putnam sees the ideal conditions as something
which can be approximated, echoing the idea of truth as the end of
inquiry.
Putnam is cautious about calling his view anti-realism, preferring the
label 'internal realism'. But he is clear that he sees his
view as opposed to realism ('metaphysical realism', as he
calls it).
Davidson's views on truth have also been associated with
pragmatism, notably by Rorty (1986). Davidson has distanced himself
from this interpretation (e.g., 1990), but he does highlight
connections between truth and belief and meaning. Insofar as these are
human attitudes or relate to human actions, Davidson grants there is
some affinity between his views and those of some pragmatists
(especially, he says, Dewey).
### 4.4 Truth pluralism
Another view that has grown out of the literature on realism and
anti-realism, and has become increasingly important in the current
literature, is that of pluralism about truth. This view, developed in
work of Lynch (e.g. 2001b; 2009) and Wright (e.g. 1992; 1999),
proposes that there are multiple ways for truth bearers to be true.
Wright, in particular, suggests that in certain domains of discourse
what we say is true in virtue of a correspondence-like relation, while
in others it is true in virtue of a kind of assertibility relation
that is closer in spirit to the anti-realist views we have just
discussed.
Such a proposal might suggest there are multiple concepts of truth, or
that the term 'true' is itself ambiguous. However, whether
or not a pluralist view is committed to such claims has been disputed.
In particular, Lynch (2001b; 2009) develops a version of pluralism
which takes truth to be a functional role concept. The functional role
of truth is characterized by a range of principles that articulate
such features of truth as its objectivity, its role in inquiry, and
related ideas we have encountered in considering various theories of
truth. (A related point about platitudes governing the concept of
truth is made by Wright (1992).) But according to Lynch, these display
the functional role of truth. Furthermore, Lynch claims that on
analogy with analytic functionalism, these principles can be seen as
deriving from our pre-theoretic or 'folk' ideas about
truth.
Like all functional role concepts, truth must be realized, and
according to Lynch it may be realized in different ways in different
settings. Such multiple realizability has been one of the hallmarks of
functional role concepts discussed in the philosophy of mind. For
instance, Lynch suggests that for ordinary claims about material
objects, truth might be realized by a correspondence property (which
he links to representational views), while for moral claims truth
might be manifested by an assertibility property along more anti-realist
lines.
For more on pluralism about truth, see Pedersen and Lynch (2018) and
the entry on
pluralist theories of truth.
## 5. Deflationism
We began in section 1 with the neo-classical theories, which explained
the nature of truth within wider metaphysical systems. We then
considered some alternatives in sections 2 and 3, some of which had
more modest ontological implications. But we still saw in section 4
that substantial theories of truth tend to imply metaphysical theses,
or even *embody* metaphysical positions.
One long-standing trend in the discussion of truth is to insist that
truth really does not carry metaphysical significance at all. It does
not, as it has no significance on its own. A number of different ideas
have been advanced along these lines, under the general heading of
*deflationism*.
### 5.1 The redundancy theory
Deflationist ideas appear quite early on, including a well-known
argument against correspondence in Frege (1918-19). However,
many deflationists take their cue from an idea of Ramsey (1927), often
called the *equivalence thesis*:
>
> \(\ulcorner \ulcorner \phi \urcorner\)
> is true \(\urcorner\) has the same meaning as \(\phi\).
>
(Ramsey himself takes truth-bearers to be propositions rather than
sentences. Glanzberg (2003b) questions whether Ramsey's account
of propositions really makes him a deflationist.)
This can be taken as the core of a theory of truth, often called the
*redundancy theory*. The redundancy theory holds that there is
no property of truth at all, and appearances of the expression
'true' in our sentences are redundant, having no effect on
what we express.
The equivalence thesis can also be understood in terms of speech acts
rather than meaning:
>
> To assert that \(\ulcorner \phi \urcorner\) is true is
> just to assert that \(\phi\).
>
This view was advanced by Strawson (1949; 1950), though Strawson also
argues that there are other important aspects of speech acts involving
'true' beyond what is asserted. For instance, they may be
acts of confirming or granting what someone else said. (Strawson would
also object to our making sentences the bearers of truth.)
In either its speech act or meaning form, the redundancy theory argues
there is no property of truth. It is commonly noted that the
equivalence thesis itself is not enough to sustain the redundancy
theory. It merely holds that when truth occurs in the outermost
position in a sentence, and the full sentence to which truth is
predicated is quoted, then truth is eliminable. What happens in other
environments is left to be seen. Modern developments of the redundancy
theory include Grover et al. (1975).
### 5.2 Minimalist theories
The equivalence principle looks familiar: it has something like the
form of the *Tarski biconditionals* discussed in section 2.2.
However, it is a stronger principle, which identifies the two sides of
the biconditional - either their meanings or the speech acts
performed with them. The Tarski biconditionals themselves are simply
material biconditionals.
A number of deflationary theories look to the Tarski biconditionals
rather than the full equivalence principle. Their key idea is that
even if we do not insist on redundancy, we may still hold the
following theses:
1. For a given language \(\mathbf{L}\) and every \(\phi\) in
\(\mathbf{L}\), the biconditionals \(\ulcorner \ulcorner \phi
\urcorner\) is true if and only if \(\phi \urcorner\) hold by
definition (or analytically, or trivially, or by stipulation
...).
2. This is all there is to say about the concept of truth.
We will refer to views which adopt these as *minimalist*.
Officially, this is the name of the view of Horwich (1990), but we
will apply it somewhat more widely. (Horwich's view differs in
some specific respects from what is presented here, such as
predicating truth of propositions, but we believe it is close enough
to what is sketched here to justify the name.)
The second thesis, that the Tarski biconditionals are all there is to
say about truth, captures something similar to the redundancy
theory's view. It comes near to saying that truth is not a
property at all; to the extent that truth is a property, there is no
more to it than the disquotational pattern of the Tarski
biconditionals. As Horwich puts it, there is no substantial underlying
metaphysics to truth. And as Soames (1984) stresses, certainly nothing
that could ground as far-reaching a view as realism or
anti-realism.
### 5.3 Other aspects of deflationism
If there is no property of truth, or no substantial property of truth,
what role does our term 'true' play? Deflationists
typically note that the truth predicate provides us with a convenient
device of *disquotation*. Such a device allows us to make some
useful claims which we could not formulate otherwise, such as the
*blind ascription* 'The next thing that Bill says will be
true'. (For more on blind ascriptions and their relation to
deflationism, see Azzouni, 2001.) A predicate obeying the Tarski
biconditionals can also be used to express what would otherwise be
(potentially) infinite conjunctions or disjunctions, such as the
notorious statement of Papal infallibility, put as 'Everything the
Pope says is true'. (Suggestions like this are found in Leeds,
1978 and Quine, 1970.)
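The expressive point can be made concrete with a toy sketch. Everything here (the sentence store, `is_true`, the sample claims) is invented for illustration and embodies no particular theory of truth; it shows only how a predicate obeying disquotation lets one generalize over sentences instead of writing an open-ended conjunction.

```python
# Illustrative sketch only: a disquotational truth predicate as an
# expressive device. The store pairs interpreted sentences with the
# truth values fixed by Tarski-style biconditionals ("'snow is white'
# is true iff snow is white", etc.); the examples are invented.

SENTENCES = {
    "snow is white": True,
    "grass is green": True,
}

def is_true(quoted_sentence):
    # Disquotation: 'p' is true iff p.
    return SENTENCES[quoted_sentence]

# Without 'true', asserting everything the Pope says would require the
# (potentially infinite) conjunction p1 and p2 and ... . With the truth
# predicate, one quantifies over the sentences themselves:
pope_says = ["snow is white", "grass is green"]
everything_pope_says_is_true = all(is_true(s) for s in pope_says)
print(everything_pope_says_is_true)  # True
```

The same device handles blind ascriptions: one can affirm "the next thing Bill says will be true" without knowing which sentence that will be.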
Recognizing these uses for a truth predicate, we might simply think of
it as introduced into a language by *stipulation*. The Tarski
biconditionals themselves might be stipulated, as the minimalists
envisage. One could also construe the clauses of a recursive Tarskian
theory as stipulated. (There are some significant logical differences
between these two options. See Halbach (1999) and Ketland (1999) for
discussion.) Other deflationists, such as Beall (2005) or Field
(1994), might prefer to focus here on rules of inference or rules of
use, rather than the Tarski biconditionals themselves.
There are also important connections between deflationist ideas about
truth and certain ideas about meaning. These are fundamental to the
deflationism of Field (1986; 1994), which will be discussed in section
6.3. For an insightful critique of deflationism, see Gupta (1993).
For more on deflationism, see Azzouni (2018) and the entry on the
deflationary theory of truth.
## 6. Truth and language
One of the important themes in the literature on truth is its
connection to meaning, or more generally, to language. This has proved
an important application of ideas about truth, and an important issue
in the study of truth itself. This section will consider a number of
issues relating truth and language.
### 6.1 Truth-bearers
There have been many debates in the literature over what the primary
bearers of truth are. Candidates typically include beliefs,
propositions, sentences, and utterances. We have already seen in
section 1 that the classical debates on truth took this issue very
seriously, and what sort of theory of truth was viable was often seen
to depend on what the bearers of truth are.
In spite of the number of options under discussion, and the
significance that has sometimes been placed on the choice, there is an
important similarity between candidate truth-bearers. Consider the
role of truth-bearers in the correspondence theory, for instance. We
have seen versions of it which take beliefs, propositions, or
interpreted sentences to be the primary bearers of truth. But all of
them rely upon the idea that their truth-bearers are
*meaningful*, and are thereby able to say something about what
the world is like. (We might say that they are able to represent the
world, but that is to use 'represent' in a wider sense
than we saw in section 3.2. No assumptions about just what stands in
relations to what objects are required to see truth-bearers as
meaningful.) It is in virtue of being meaningful that truth-bearers
are able to enter into correspondence relations. Truth-bearers are
things which meaningfully make claims about what the world is like,
and are true or false depending on whether the facts in the world are
as described.
Exactly the same point can be made for the anti-realist theories of
truth we saw in section 4.2, though with different accounts of how
truth-bearers are meaningful, and what the world contributes. Though
it is somewhat more delicate, something similar can be said for
coherence theories, which usually take beliefs, or whole systems of
beliefs, as the primary truth-bearers. Though a coherence theory will
hardly talk of beliefs representing the facts, it is crucial to the
coherence theory that beliefs are contentful beliefs of agents, and
that they can enter into coherence relations. Noting the complications
in interpreting the genuine classical coherence theories, it appears
fair to note that this requires truth-bearers to be meaningful,
however the background metaphysics (presumably idealism) understands
meaning.
Though Tarski works with sentences, the same can be said of his
theory. The sentences to which Tarski's theory applies are fully
interpreted, and so also are meaningful. They characterize the world
as being some way or another, and this in turn determines whether they
are true or false. Indeed, Tarski needs there to be a fact of the
matter about whether each sentence is true or false (abstracting away
from context dependence), to ensure that the Tarski biconditionals do
their job of fixing the extension of 'is true'. (But note
that just what this fact of the matter consists in is left open by the
Tarskian apparatus.)
We thus find the usual candidate truth-bearers linked in a tight
circle: interpreted sentences, the propositions they express, the
beliefs speakers might hold towards them, and the acts of assertion
they might perform with them are all connected by providing something
meaningful. This makes them reasonable bearers of truth. For this
reason, it seems, contemporary debates on truth have been much less
concerned with the issue of truth-bearers than were the classical
ones. Some issues remain, of course. Different metaphysical
assumptions may place primary weight on some particular node in the
circle, and some metaphysical views still challenge the existence of
some of the nodes. Perhaps more importantly, different views on the
nature of meaning itself might cast doubt on the coherence of some of
the nodes. Notoriously for instance, Quineans (e.g., Quine, 1960) deny
the existence of intensional entities, including propositions. Even
so, it increasingly appears doubtful that attention to truth *per
se* will bias us towards one particular primary bearer of
truth.
For more on these issues, see King (2018).
### 6.2 Truth and truth conditions
There is a related, but somewhat different point, which is important
to understanding the theories we have canvassed.
The neo-classical theories of truth start with truth-bearers which are
already understood to be meaningful, and explain how they get their
truth values. But along the way, they often do something more. Take
the neo-classical correspondence theory, for instance. This theory, in
effect, starts with a view of how propositions are meaningful. They
are so in virtue of having constituents in the world, which are
brought together in the right way. There are many complications about
the nature of meaning, but at a minimum, this tells us what the truth
conditions associated with a proposition are. The theory then explains
how such truth conditions can lead to the truth value *true*,
by the right fact *existing*.
Many theories of truth are like the neo-classical correspondence
theory in being as much theories of how truth-bearers are meaningful
as of how their truth values are fixed. Again, abstracting from some
complications about meaning, this makes them theories both of truth
*conditions* and truth *values*. The Tarskian theory of
truth can be construed this way too. This can be seen both in the way
the Tarski biconditionals are understood, and how a recursive theory
of truth is understood. As we explained Convention T in section 2.2,
the primary role of a Tarski biconditional of the form
\(\ulcorner \ulcorner \phi \urcorner \text{ is true if and only if } \phi \urcorner\)
is to fix whether
\(\phi\) is in the extension of 'is true' or not. But it can
also be seen as stating the *truth conditions* of \(\phi\). Both
rely on the fact that the unquoted occurrence of \(\phi\) is an
occurrence of an interpreted sentence, which has a truth value, but
also provides its truth conditions upon occasions of use.
Likewise, the base clauses of the recursive definition of truth, those
for reference and satisfaction, are taken to state the relevant
semantic properties of constituents of an interpreted sentence. In
discussing Tarski's theory of truth in section 2, we focused on
how these determine the truth value of a sentence. But they also show
us the truth conditions of a sentence are determined by these semantic
properties. For instance, for a simple sentence like 'Snow is
white', the theory tells us that the sentence is true if the
referent of 'Snow' satisfies 'white'. This can
be understood as telling us that the truth *conditions* of
'Snow is white' are those conditions in which the referent
of 'Snow' satisfies the predicate 'is
white'.
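The recursive determination of truth values and truth conditions just described can be illustrated with a minimal sketch. The toy model below (the referents and extensions chosen) is an illustrative assumption, not part of the Tarskian apparatus itself; it merely shows how base clauses for reference and satisfaction, plus recursive clauses for the connectives, fix the truth value of a sentence.

```python
# Sketch of a Tarski-style recursive truth definition for a toy language
# with names, one-place predicates, negation, and conjunction.
# The particular model (referents and extensions) is an illustrative assumption.

referent = {"Snow": "snow", "Grass": "grass"}              # base clause: reference
extension = {"is white": {"snow"}, "is green": {"grass"}}  # base clause: satisfaction

def true_in_model(sentence):
    """Recursively determine a truth value from the semantic
    properties of the sentence's constituents."""
    kind = sentence[0]
    if kind == "atom":                       # e.g. ("atom", "Snow", "is white")
        _, name, pred = sentence
        return referent[name] in extension[pred]
    if kind == "not":
        return not true_in_model(sentence[1])
    if kind == "and":
        return true_in_model(sentence[1]) and true_in_model(sentence[2])
    raise ValueError(f"unknown sentence form: {kind}")

print(true_in_model(("atom", "Snow", "is white")))            # True
print(true_in_model(("not", ("atom", "Grass", "is white"))))  # True
```

Read realistically, membership in an extension stands for the stuff snow bearing the property of whiteness; an anti-realist would instead interpret the base clauses in terms of verifiability or warranted assertibility.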
As we saw in sections 3 and 4, the Tarskian apparatus is often seen as
needing some kind of supplementation to provide a full theory of
truth. A full theory of truth conditions will likewise rest on how the
Tarskian apparatus is put to use. In particular, just what kinds of
conditions those in which the referent of 'snow' satisfies
the predicate 'is white' are will depend on whether we opt
for realist or anti-realist theories. The realist option will simply
look for the conditions under which the stuff snow bears the property
of whiteness; the anti-realist option will look to the conditions
under which it can be verified, or asserted with warrant, that snow is
white.
There is a broad family of theories of truth which are theories of
truth conditions as well as truth values. This family includes the
correspondence theory in all its forms--classical and modern.
Yet this family is much wider than the correspondence theory, and
wider than realist theories of truth more generally. Indeed, virtually
all the theories of truth that make contributions to the
realism/anti-realism debate are theories of truth conditions. In a
slogan, for many approaches to truth, a theory of truth is a theory of
truth conditions.
### 6.3 Truth conditions and deflationism
Any theory that provides a substantial account of truth conditions can
offer a simple account of truth values: a truth-bearer provides truth
conditions, and it is true if and only if the actual way things are is
among them. Because of this, any such theory will imply a strong, but
very particular, biconditional, close in form to the Tarski
biconditionals. It can be made most vivid if we think of propositions
as sets of truth conditions. Let \(p\) be a proposition, i.e., a
set of truth conditions, and let \(a\) be the 'actual
world', the condition that actually obtains. Then we can almost
trivially see:
>
> \(p\) is true if and only if \(a \in p\).
>
This is presumably necessary. But it is important to observe that it
is in one respect crucially different from the genuine Tarski
biconditionals. It makes no use of a non-quoted sentence, or in fact
any sentence at all. It does not have the disquotational character of
the Tarski biconditionals.
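The near-trivial biconditional above can be made concrete by modeling propositions as sets of "worlds". The three-world model below is an illustrative assumption; the point is only that once a proposition is identified with its set of truth conditions, its truth value falls out of a membership check.

```python
# Sketch: propositions as sets of truth conditions ("worlds").
# The worlds and the sample proposition are illustrative assumptions.

worlds = {"w1", "w2", "w3"}
a = "w1"                        # the condition that actually obtains

snow_is_white = {"w1", "w2"}    # the conditions in which snow is white

def is_true(p, actual=a):
    """p is true if and only if a is in p: the actual way things
    are is among the proposition's truth conditions."""
    return actual in p

print(is_true(snow_is_white))   # True: w1 is among the truth conditions
```

Note that, as the entry stresses, nothing here is disquotational: no sentence is used as well as mentioned, which is why this biconditional differs from the genuine Tarski biconditionals.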
Though this may look like a principle that deflationists should
applaud, it is not. Rather, it shows that deflationists cannot really
hold a truth-conditional view of content at all. If they do, then they
*inter alia* have a non-deflationary theory of truth, simply by
linking truth value to truth conditions through the above
biconditional. It is typical of thoroughgoing deflationist theories to
present a non-truth-conditional theory of the contents of sentences: a
non-truth-conditional account of what makes truth-bearers meaningful.
We take it this is what is offered, for instance, by the *use*
theory of propositions in Horwich (1990). It is certainly one of the
leading ideas of Field (1986; 1994), which explore how a conceptual
role account of content would ground a deflationist view of truth.
Once one has a non-truth-conditional account of content, it is then
possible to add a deflationist truth predicate, and use this to give
purely deflationist statements of truth conditions. But the starting
point must be a non-truth-conditional view of what makes truth-bearers
meaningful.
Both deflationists and anti-realists start with something other than
correspondence truth conditions. But whereas an anti-realist will
propose a different theory of truth conditions, a deflationist will
start with an account of content which is not a theory of truth
conditions at all. The deflationist will then propose that the truth
predicate, given by the Tarski biconditionals, is an additional
device, not for understanding content, but for disquotation. It is a
useful device, as we discussed in section 5.3, but it has nothing to
do with content. To a deflationist, the meaningfulness of
truth-bearers has nothing to do with truth.
### 6.4 Truth and the theory of meaning
It has been an influential idea, since the seminal work of Davidson
(e.g., 1967), to see a Tarskian theory of truth as a theory of
meaning. At least, as we have seen, a Tarskian theory can be seen as
showing how the truth conditions of a sentence are determined by the
semantic properties of its parts. More generally, as we see in much of
the work of Davidson and of Dummett (e.g., 1959; 1976; 1983; 1991),
giving a theory of truth conditions can be understood as a crucial
part of giving a theory of meaning. Thus, any theory of truth that
falls into the broad category of those which are theories of truth
conditions can be seen as part of a theory of meaning. (For more
discussion of these issues, see Higginbotham (1986; 1989) and the
exchange between Higginbotham (1992) and Soames (1992).)
A number of commentators on Tarski (e.g., Etchemendy, 1988; Soames,
1984) have observed that the Tarskian apparatus needs to be understood
in a particular way to make it suitable for giving a theory of
meaning. Tarski's work is often taken to show how to
*define* a truth predicate. If it is so used, then whether or
not a sentence is true becomes, in essence, a truth of mathematics.
Presumably what truth conditions sentences of a natural language have
is a contingent matter, so a truth predicate defined in this way
cannot be used to give a theory of meaning for them. But the Tarskian
apparatus need not be used just to explicitly define truth. The
recursive characterization of truth can be used to state the semantic
properties of sentences and their constituents, as a theory of meaning
should. In such an application, truth is not taken to be explicitly
defined, but rather the truth conditions of sentences are taken to be
described. (See Heck, 1997 for more discussion.)
### 6.5 The coherence theory and meaning
Inspired by Quine (e.g., 1960), Davidson himself is well known for
taking a different approach to using a theory of truth as a theory of
meaning than is implicit in Field (1972). Whereas a Field-inspired
representational approach is based on a causal account of reference,
Davidson (e.g., 1973) proposes a process of *radical
interpretation* in which an interpreter builds a Tarskian theory
to interpret a speaker as holding beliefs which are consistent,
coherent, and largely true.
This led Davidson (e.g. 1986) to argue that most of our beliefs are
true--a conclusion that squares well with the coherence theory
of truth. This is a weaker claim than the neo-classical coherence
theory would make. It does not insist that all the members of any
coherent set of beliefs are true, or that truth simply consists in
being a member of such a coherent set. But all the same, the
conclusion that most of our beliefs are true, because their contents
are to be understood through a process of radical interpretation which
will make them a coherent and rational system, has a clear affinity
with the neo-classical coherence theory.
In Davidson (1986), he thought his view of truth had enough affinity
with the neo-classical coherence theory to warrant being called a
coherence theory of truth, while at the same time he saw the role of
Tarskian apparatus as warranting the claim that his view was also
compatible with a kind of correspondence theory of truth.
In later work, however, Davidson reconsidered this position. In fact,
already in Davidson (1977) he had expressed doubt about any
understanding of the role of Tarski's theory in radical
interpretation that involves the kind of representational apparatus
relied on by Field (1972), as we discussed in sections 3.1 and 3.2. In
the "Afterthoughts" to Davidson (1986), he also concluded
that his view departs too far from the neo-classical coherence theory
to be called one. What is important is rather the role of radical
interpretation in the theory of content, and its leading to the idea
that belief is veridical. These are indeed points connected to
coherence, but not to the coherence theory of truth per se. They also
comprise a strong form of anti-representationalism. Thus, though he
does not advance a coherence theory of truth, he does advance a theory
that stands in opposition to the representational variants of the
correspondence theory we discussed in section 3.2.
For more on Davidson, see Glanzberg (2013) and the entry on
Donald Davidson.
### 6.6 Truth and assertion
The relation between truth and meaning is not the only place where
truth and language relate closely. Another is the idea, also
much-stressed in the writings of Dummett (e.g., 1959), of the relation
between truth and assertion. Again, it fits into a platitude:
>
> Truth is the aim of assertion.
>
A person making an assertion, the platitude holds, aims to say
something true.
It is easy to cast this platitude in a way that appears false. Surely,
many speakers do not aim to say something true. Any speaker who lies
does not. Any speaker whose aim is to flatter, or to deceive, aims at
something other than truth.
The motivation for the truth-assertion platitude is rather different.
It looks at assertion as a practice, in which certain rules are
*constitutive*. As is often noted, the natural parallel here is
with games, like chess or baseball, which are defined by certain
rules. The platitude holds that it is constitutive of the practice of
making assertions that assertions aim at truth. An assertion by its
nature presents what it is saying as true, and any assertion which
fails to be true is *ipso facto* liable to criticism, whether
or not the person making the assertion themself wished to have said
something true or to have lied.
Dummett's original discussion of this idea was partially a
criticism of deflationism (in particular, of views of Strawson, 1950).
The idea that we fully explain the concept of truth by way of the
Tarski biconditionals is challenged by the claim that the
truth-assertion platitude is fundamental to truth. As Dummett there
put it, what is left out by the Tarski biconditionals, and captured by
the truth-assertion platitude, is the *point* of the concept of
truth, or what the concept is used for. (For further discussion, see
Glanzberg, 2003a and Wright, 1992.)
Whether or not assertion has such constitutive rules is, of course,
controversial. But among those who accept that it does, the place of
truth in the constitutive rules is itself controversial. The leading
alternative, defended by Williamson (1996), is that knowledge, not
truth, is fundamental to the constitutive rules of assertion.
Williamson defends an account of assertion based on the rule that one
must assert only what one knows.
For more on truth and assertion, see the papers in Brown and Cappelen
(2011) and the entry on
assertion.
## 1. Motivations
There have been many attempts to define truth in terms of
correspondence,
coherence or other notions.
However, it is far from clear that truth is a definable notion. In
formal settings satisfying certain natural conditions, Tarski's
theorem on the undefinability of the truth predicate shows that a
definition of a truth predicate requires resources that go beyond
those of the formal language for which truth is going to be defined.
In these cases definitional approaches to truth have to fail. By contrast,
the axiomatic approach does not presuppose that truth can be
defined. Instead, a formal language is expanded by a new primitive
predicate for truth or satisfaction, and axioms for that predicate are then laid
down. This approach by itself does not preclude the possibility that the truth predicate
is definable, although in many cases it can be shown that the truth
predicate is not definable.
In semantic theories of truth (e.g., Tarski 1935, Kripke 1975), in
contrast, a truth predicate is defined for a language, the so-called
object language. This definition is carried out in a metalanguage or
metatheory, which is typically taken to include set theory or at least
another strong theory or expressively rich interpreted
language. Tarski's theorem on the undefinability of the truth
predicate shows that, given certain general assumptions, the resources
of the metalanguage or metatheory must go beyond the resources of the
object-language. So semantic approaches usually necessitate the use of
a metalanguage that is more powerful than the object-language for
which it provides a semantics.
As with other formal deductive systems, axiomatic theories of truth can be
presented within very weak logical frameworks. These frameworks require very
few resources, and in particular, avoid the need for a strong metalanguage and
metatheory.
Formal work on axiomatic theories of truth has helped to shed some light on
semantic theories of truth. For instance, it has yielded information on what is
required of a metalanguage that is sufficient for defining a truth predicate.
Semantic theories of truth, in turn, provide one with the theoretical tools
needed for investigating models of axiomatic theories of truth and with
motivations for certain axiomatic theories. Thus axiomatic and semantic
approaches to truth are intertwined.
This entry outlines the most popular axiomatic theories of truth and
mentions some of the formal results that have been obtained concerning them. We
give only hints as to their philosophical applications.
### 1.1 Truth, properties and sets
Theories of truth and predication are closely related to theories of
properties and
property attribution. To say that an open formula \(\phi(x)\)
is true of an individual \(a\) seems equivalent (in some sense)
to the claim that \(a\) has the property of
*being such that* \(\phi\) (this property is signified by the open formula).
For example, one might say that '\(x\) is a poor philosopher' is true
of Tom instead of saying that Tom has the property of being a poor philosopher.
Quantification over definable properties can then be mimicked in a language
with a truth predicate by quantifying over formulas. Instead of saying, for
instance, that \(a\) and \(b\) have exactly the same properties, one
says that exactly the same formulas are true of \(a\) and \(b\). The
reduction of properties to truth works also to some extent for sets of
individuals.
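The mimicry just described can be sketched in miniature. Below, open formulas are modeled as Python predicates and "\(\phi(x)\) is true of \(a\)" as applying the predicate; the particular formulas chosen are illustrative assumptions. Saying that \(a\) and \(b\) have exactly the same (definable) properties then becomes quantifying over formulas.

```python
# Sketch: quantification over definable properties mimicked by
# quantification over formulas. The three sample formulas are
# illustrative assumptions.

formulas = {
    "x is even":     lambda x: x % 2 == 0,
    "x is positive": lambda x: x > 0,
    "x is square":   lambda x: x >= 0 and int(x ** 0.5) ** 2 == x,
}

def true_of(formula, a):
    """'phi(x) is true of a' plays the role of 'a has the property phi'."""
    return formulas[formula](a)

def same_definable_properties(a, b):
    """a and b 'have exactly the same properties': exactly the same
    formulas are true of a and of b."""
    return all(true_of(f, a) == true_of(f, b) for f in formulas)

print(same_definable_properties(4, 16))  # True: even, positive, square alike
print(same_definable_properties(4, 9))   # False: they differ in parity
```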
There are also reductions in the other direction: Tarski (1935) has
shown that certain second-order existence assumptions (e.g.,
comprehension axioms) may be utilized to define truth (see the entry
on Tarski's definition of
truth). The mathematical analysis of axiomatic theories of truth
and second-order systems has exhibited many equivalences between these
second-order existence assumptions and truth-theoretic
assumptions.
These results show exactly what is required for defining a truth predicate
that satisfies certain axioms, thereby sharpening Tarski's insights into
definability of truth. In particular, proof-theoretic equivalences described in
Section 3.3 below make explicit to what extent a
metalanguage (or rather metatheory) has to be richer than the object language
in order to be able to define a truth predicate.
The equivalence between second-order theories and truth theories also has
bearing on traditional metaphysical topics. The reductions of second-order
theories (i.e., theories of properties or sets) to axiomatic theories of truth
may be conceived as forms of reductive nominalism, for they replace existence
assumptions for sets or properties (e.g., comprehension axioms) by
ontologically innocuous assumptions, in the present case by assumptions on the
behaviour of the truth predicate.
### 1.2 Truth and reflection
According to Gödel's incompleteness theorems,
the statement that Peano Arithmetic (PA)
is consistent, in its guise as a number-theoretic statement (given the
technique of Gödel numbering), cannot be derived in PA
itself. But PA can be strengthened by adding this consistency
statement or by stronger axioms. In particular, axioms partially
expressing the soundness of PA can be added. These are known as
reflection principles. An example of a reflection principle for PA
would be the set of
sentences \(Bew\_{PA}(\ulcorner \phi \urcorner)
\rightarrow \phi\) where \(\phi\) is a formula of the language of
arithmetic, \(\ulcorner \phi \urcorner\) a name for \(\phi\)
and \(Bew\_{PA}(x)\) is the standard provability
predicate for PA ('\(Bew\)' was introduced by
Gödel and is short for the German word
'*beweisbar*', that is,
'provable').
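Since the reflection principle is a schema, it stands for infinitely many sentences, one per arithmetical formula. The string-level generator below is a sketch of that instance-by-instance reading; the sample formulas, and the rendering of \(\ulcorner \phi \urcorner\) with corner quotes, are illustrative assumptions.

```python
# Sketch: instances of the local reflection schema Bew_PA(⌜φ⌝) → φ,
# generated at the level of strings. The sample formulas are illustrative.

def reflection_instance(phi):
    """Return the reflection-schema instance for the formula phi,
    writing ⌜phi⌝ for the name (numeral of the Gödel number) of phi."""
    return f"Bew_PA(⌜{phi}⌝) → {phi}"

for phi in ["0=0", "¬0=S(0)", "∀x ¬S(x)=0"]:
    print(reflection_instance(phi))
# Bew_PA(⌜0=0⌝) → 0=0
# Bew_PA(⌜¬0=S(0)⌝) → ¬0=S(0)
# Bew_PA(⌜∀x ¬S(x)=0⌝) → ∀x ¬S(x)=0
```

Each instance says of one formula that its provability in PA implies it; no single arithmetical sentence sums them all up, which is what the truth predicate in the Global Reflection Principle supplies.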
The process of adding reflection principles can be iterated: one
can add, for example, a reflection principle R for PA to PA; this
results in a new theory PA+R. Then one adds the reflection principle
for the system PA+R to the theory PA+R. This process can be continued
into the transfinite (see Feferman 1962 and Franzén 2004).
The reflection principles express--at least
partially--the soundness of the system. The most natural and full
expression of the soundness of a system involves the truth predicate
and is known as the Global Reflection Principle (see Kreisel and
Lévy 1968). The Global Reflection Principle for a formal system
S states that all sentences provable in S are true:
\[
\forall x(Bew\_{S} (x) \rightarrow Tx)
\]
\(Bew\_{S} (x)\) expresses here provability of
sentences in the system S (we omit discussion here of the problems of
defining \(Bew\_{S} (x))\). The truth predicate
has to satisfy certain principles; otherwise the global reflection
principle would be vacuous. Thus not only must the global reflection
principle be added, but also axioms for truth. If a natural
theory of truth like T(PA) below is added, however, it is no longer
necessary to postulate the global reflection principle explicitly, as
theories like T(PA) already prove the global reflection principle for
PA. One may therefore view truth theories themselves as reflection
principles, since they prove soundness statements while adding the
resources needed to express those statements.
Thus instead of iterating reflection principles that are formulated
entirely in the language of arithmetic, one can add by iteration new
truth predicates and correspondingly new axioms for the new truth
predicates. Thereby one might hope to make explicit all the
assumptions that are implicit in the acceptance of a theory like
PA. The resulting theory is called the reflective closure of the
initial theory. Feferman (1991) has proposed the use of a single truth
predicate and a single theory (KF), rather than a hierarchy of
predicates and theories, in order to explicate the reflective closure
of PA and other theories. (KF is discussed further
in Section 4.4 below.)
The relation of truth theories and (iterated) reflection principles
also became prominent in the discussion of truth-theoretic
deflationism (see Tennant 2002 and the follow-up discussion).
### 1.3 Truth-theoretic deflationism
Many proponents of
deflationist theories of truth
have chosen to treat truth as a primitive notion
and to axiomatize it, often using some version of
the \(T\)-sentences as axioms. \(T\)-sentences are
equivalences of the form
\(T\ulcorner \phi \urcorner \leftrightarrow \phi\), where \(T\) is the truth predicate, \(\phi\) is a sentence
and \(\ulcorner \phi \urcorner\) is a name for the
sentence \(\phi\). (More refined axioms have also been discussed by
deflationists.) At first glance at least, the axiomatic approach seems
much less 'deflationary' than those more traditional
theories which rely on a definition of truth in terms of
correspondence or the like. If truth can be explicitly defined, it can
be eliminated, whereas an axiomatized notion of truth may and often
does come with commitments that go beyond that of the base
theory.
If truth does not have any explanatory force, as some deflationists
claim, the axioms for truth should not allow us to prove any new
theorems that do not involve the truth predicate. Accordingly, Horsten (1995), Shapiro (1998) and Ketland (1999) have suggested that
a deflationary axiomatization of truth should be at
least *conservative*. The new axioms for truth are conservative
if they do not imply any additional sentences (free of occurrences of
the truth-predicate) that aren't already provable without the
truth axioms. Thus a non-conservative theory of truth adds new
non-semantic content to a theory and has genuine explanatory power,
contrary to many deflationist views. Certain natural theories of
truth, however, fail to be conservative (see Section
3.3 below, Field 1999 and Shapiro 2002 for further
discussion).
According to many deflationists, truth serves merely the purpose of
expressing infinite conjunctions. It is plain that not *all*
infinite conjunctions can be expressed because there are uncountably
many (non-equivalent) infinite conjunctions over a countable
language. Since the language with an added truth predicate has only
countably many formulas, not every infinite conjunction can be
expressed by a different finite formula. The formal work on axiomatic
theories of truth has helped to specify exactly which infinite
conjunctions can be expressed with a truth predicate. Feferman (1991)
provides a proof-theoretic analysis of a fairly strong system. (Again,
this will be explained in the discussion about KF
in Section 4.4 below.)
## 2. The base theory
### 2.1 The choice of the base theory
In most axiomatic theories, truth is conceived as a predicate of
objects. There is an extensive philosophical discussion on the
category of objects to which truth applies: propositions conceived as
objects that are independent of any language, types and tokens of
sentences and utterances, thoughts, and many other objects have been
proposed. Since the structure of sentences considered as types is
relatively clear, sentence types have often been used as the objects
that can be true. In many cases there is no need to make very specific
metaphysical commitments, because only certain modest assumptions on
the structure of these objects are required, independently from
whether they are finally taken to be syntactic objects, propositions
or still something else. The theory that describes the properties of
the objects to which truth can be attributed is called the *base
theory*. The formulation of the base theory does not involve the
truth predicate or any specific truth-theoretic assumptions. The base
theory could describe the structure of sentences, propositions and the
like, so that notions like the negation of such an object can then be
used in the formulation of the truth-theoretic axioms.
In many axiomatic truth theories, truth is taken as a predicate
applying to the Gödel numbers of sentences. Peano arithmetic has
proved to be a versatile theory of objects to which truth is applied,
mainly because adding truth-theoretic axioms to Peano arithmetic
yields interesting systems and because Peano arithmetic is equivalent
to many straightforward theories of syntax and even theories of
propositions. However, other base theories have been considered as
well, including formal syntax theories and set theories.
Of course, we can also investigate theories which result by adding
the truth-theoretic axioms to much stronger theories like set
theory. Usually there is no chance of proving the consistency of set
theory plus further truth-theoretic axioms because the consistency of
set theory itself cannot be established without assumptions
transcending set theory. In many cases not even relative consistency
proofs are feasible. However, if adding certain truth-theoretic axioms
to PA yields a consistent theory, it seems at least plausible that
adding analogous axioms to set theory will not lead to an
inconsistency. Therefore, the hope is that research on theories of
truth over PA will give some indication of what will happen when we
extend stronger theories with axioms for the truth predicate. However, Fujimoto (2012)
has shown that some axiomatic truth theories over set theory differ from their counterparts over Peano arithmetic in some aspects.
### 2.2 Notational conventions
For the sake of definiteness we assume that the language of
arithmetic has exactly \(\neg , \wedge\) and \(\vee\) as connectives and
\(\forall\) and \(\exists\) as quantifiers. It has as individual constants
only the symbol 0 for zero; its only function symbol is the unary
successor symbol \(S\); addition and multiplication are expressed
by predicate symbols. Therefore the only closed terms of the language
of arithmetic are the numerals
\(0, S(0), S(S(0)), S(S(S(0))), \ldots\).
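The numerals can be generated mechanically: the numeral for \(n\) is the symbol 0 with the successor symbol applied \(n\) times. The sketch below builds numerals as strings; the string representation is an illustrative convention.

```python
def numeral(n):
    """Return the numeral for n: the constant 0 under n applications
    of the unary successor symbol S."""
    term = "0"
    for _ in range(n):
        term = f"S({term})"
    return term

print(numeral(0))  # 0
print(numeral(3))  # S(S(S(0)))
```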
The language of arithmetic does not contain the unary predicate
symbol \(T\), so
let \(\mathcal{L}\_T\) be the
language of arithmetic augmented by the new unary predicate
symbol \(T\) for truth. If \(\phi\) is a sentence
of \(\mathcal{L}\_T\),
\(\ulcorner \phi \urcorner\) is a name for \(\phi\) in the
language \(\mathcal{L}\_T\);
formally speaking, it is the numeral of
the Gödel number of
\(\phi\). In general, Greek letters like \(\phi\) and \(\psi\) are variables of
the metalanguage, that is, the language used for talking about
theories of truth and the language in which this entry is written
(i.e., English enriched by some symbols). \(\phi\) and \(\psi\) range over
formulas of the formal
language \(\mathcal{L}\_T\).
In what follows, we use small, upper case italic letters
like \({\scriptsize A}, {\scriptsize B},\ldots\) as
variables in \(\mathcal{L}\_T\)
ranging over sentences (or their Gödel numbers, to be
precise). Thus
\(\forall{\scriptsize A}(\ldots{\scriptsize A}\ldots)\)
stands for
\(\forall x(Sent\_T (x)
\rightarrow \ldots x\ldots)\),
where \(Sent\_T (x)\) expresses in the
language of arithmetic that \(x\) is a sentence of the language
of arithmetic extended by the predicate symbol \(T\). The
syntactical operations of forming a conjunction of two sentences and
similar operations can be expressed in the language of
arithmetic. Since the language of arithmetic does not contain any
function symbol apart from the symbol for successor, these operations
must be expressed by suitable predicate expressions. Thus one can say
in the language \(\mathcal{L}\_T\)
that a negation of a sentence
of \(\mathcal{L}\_T\) is true if and
only if the sentence itself is not true. We would write this as
\[
\forall{\scriptsize A}(T[\neg{\scriptsize A}] \leftrightarrow \neg T{\scriptsize A}).
\]
The square brackets indicate that the operation of forming the negation of
\({\scriptsize A}\) is expressed in the language of arithmetic. Since the
language of arithmetic does not contain a function symbol representing the
function that sends sentences to their negations, appropriate paraphrases
involving predicates must be given.
Thus, for instance, the expression
\[
\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \wedge{\scriptsize B}] \leftrightarrow(T{\scriptsize A} \wedge T{\scriptsize B}))
\]
is a single sentence of the language \(\mathcal{L}\_T\) saying that a conjunction of
sentences of \(\mathcal{L}\_T\) is true if
and only if both sentences are true. In contrast,
\[
T\ulcorner \phi \wedge \psi \urcorner \leftrightarrow
(T\ulcorner \phi \urcorner \wedge T\ulcorner \psi \urcorner)
\]
is only a schema. That is, it stands for the set of all sentences
that are obtained from the above expression by substituting sentences
of \(\mathcal{L}\_T\) for the Greek
letters \(\phi\) and \(\psi\). The single sentence
\(\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \wedge{\scriptsize B}] \leftrightarrow
(T{\scriptsize A} \wedge T{\scriptsize B}))\) implies all sentences
which are instances of the schema, but the instances of the schema do
not imply the single universally quantified sentence. In general, the
quantified versions are stronger than the corresponding schemata.
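The schematic reading can be made vivid by generating instances mechanically: one sentence per choice of \(\phi\) and \(\psi\). In the sketch below, sentences are plain strings and the two sample sentences are illustrative assumptions; no finite list of instances is equivalent to the single universally quantified sentence.

```python
# Sketch: the conjunction schema T⌜φ ∧ ψ⌝ ↔ (T⌜φ⌝ ∧ T⌜ψ⌝) stands for
# the set of its instances, one per pair of sentences φ, ψ.
# The sample sentences are illustrative.

def conjunction_instance(phi, psi):
    """One instance of the schema, with corner quotes for sentence names."""
    return f"T⌜{phi} ∧ {psi}⌝ ↔ (T⌜{phi}⌝ ∧ T⌜{psi}⌝)"

sentences = ["0=0", "¬0=S(0)"]
for phi in sentences:
    for psi in sentences:
        print(conjunction_instance(phi, psi))
```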
## 3. Typed theories of truth
In typed theories of truth, only the truth of sentences not
containing the same truth predicate is provable, thus avoiding the
paradoxes by observing Tarski's distinction between object and
metalanguage.
### 3.1 Definable truth predicates
Certain truth predicates can be defined within the language of
arithmetic. Predicates suitable as truth predicates for sublanguages
of the language of arithmetic can be defined within the language of
arithmetic, as long as the quantificational complexity of the formulas
in the sublanguage is restricted. In particular, there is a
formula \(Tr\_0 (x)\) that expresses
that \(x\) is a true atomic sentence of the language of
arithmetic, that is, a sentence of the form \(n=k\),
where \(k\) and \(n\) are identical numerals. For further
information on partial truth predicates see, for instance,
Hájek and Pudlák (1993), Kaye (1991) and Takeuti (1987).
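The idea behind a predicate like \(Tr\_0\) can be sketched directly: a decision procedure for the atomic sentences \(n=k\), where \(n\) and \(k\) are numerals. The string representation of numerals and the parser below are illustrative assumptions; in PA itself \(Tr\_0\) would of course be an arithmetical formula operating on Gödel numbers.

```python
# Sketch of a truth predicate Tr_0 for atomic sentences of the form n=k,
# with numerals written as strings S(S(...(0)...)).
# The string encoding is an illustrative assumption.

def numeral_value(term):
    """Value of a numeral: the number of successor symbols applied to 0."""
    term = term.strip()
    n = 0
    while term.startswith("S(") and term.endswith(")"):
        term = term[2:-1]
        n += 1
    if term != "0":
        raise ValueError(f"not a numeral: {term}")
    return n

def tr0(sentence):
    """True iff the sentence is a true identity between numerals."""
    left, right = sentence.split("=")
    return numeral_value(left) == numeral_value(right)

print(tr0("S(S(0)) = S(S(0))"))  # True
print(tr0("S(0) = 0"))           # False
```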
The definable truth predicates are truly redundant: since they are
expressible in PA, there is no need to introduce them axiomatically.
The truth predicates considered in what follows, by contrast, are not
definable in the language of arithmetic, and are therefore not
redundant, at least not in this sense.
### 3.2 The \(T\)-sentences
The typed \(T\)-sentences are all equivalences of the
form \(T\ulcorner \phi \urcorner \leftrightarrow \phi\), where \(\phi\) is a sentence not containing the truth
predicate. Tarski (1935) called any theory proving these equivalences
'materially adequate'. Tarski (1935) criticised an
axiomatization of truth relying only on the \(T\)-sentences, not
because he aimed at a definition rather than an axiomatization of
truth, but because such a theory seemed too weak. Thus although the
theory is materially adequate, Tarski thought that
the \(T\)-sentences are deductively too weak. He observed, in
particular, that the \(T\)-sentences do not prove the principle
of completeness, that is, the sentence
\(\forall{\scriptsize A}(T{\scriptsize A}\vee T[\neg{\scriptsize A}])\)
where the quantifier \(\forall{\scriptsize A}\) is restricted
to sentences not containing T.
Theories of truth based on the \(T\)-sentences, and their
formal properties, have also recently been a focus of interest in the
context of so-called deflationary theories of
truth. The \(T\)-sentences \(T\ulcorner \phi \urcorner \leftrightarrow \phi\) (where \(\phi\) does not contain \(T)\) are not
conservative over first-order logic with identity, that is, they prove
a sentence not containing \(T\) that is not logically valid. For
the \(T\)-sentences prove that the sentences \(0=0\) and \(\neg 0=0\) are
different and that therefore at least two objects exist. In other
words, the \(T\)-sentences are not conservative over the empty
base theory. If the \(T\)-sentences are added to PA, the
resulting theory is conservative over PA. This means that the theory
does not prove \(T\)-free sentences that are not already provable
in PA. This result even holds if in addition to
the \(T\)-sentences also all induction axioms containing the
truth predicate are added. This may be shown by appealing to the
Compactness Theorem.
In the form outlined above, T-sentences express the equivalence between \(T\ulcorner \phi \urcorner\) and \(\phi\) only when \(\phi\) is a sentence.
In order to capture the equivalence for properties \((x\) has property P iff 'P' is true of \(x)\), one must generalise the \(T\)-sentences. The results are usually referred to as the *uniform* \(T\)-sentences and are formalised by the equivalences
\(\forall x(T\ulcorner \phi(\underline{x})\urcorner \leftrightarrow \phi(x))\) for each open formula \(\phi(v)\) with at most \(v\) free in \(\phi\).
Underlining the variable indicates it is bound from the outside.
More precisely, \(\ulcorner \phi(\underline{x})\urcorner\) stands for the result of replacing the variable \(v\)
in \(\ulcorner \phi(v)\urcorner\) by the numeral
of \(x\).
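For example, if \(\phi(v)\) is the formula \(v = v\), the corresponding uniform \(T\)-sentence is

\[
\forall x(T\ulcorner \underline{x} = \underline{x}\urcorner \leftrightarrow x = x),
\]

and for \(x = 2\), say, \(\ulcorner \underline{x} = \underline{x}\urcorner\) denotes the code of the sentence \(S(S(0)) = S(S(0))\), written with the successor symbol.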
### 3.3 Compositional truth
As was observed already by Tarski (1935), certain desirable
generalizations don't follow from the T-sentences. For instance,
together with reasonable base theories they don't imply that a
conjunction is true if both conjuncts are true.
In order to obtain systems that also prove universally quantified
truth-theoretic principles, one can turn the inductive clauses of
Tarski's definition of truth into axioms. In the following
axioms, \(AtomSent\_{PA}(\ulcorner{\scriptsize A}\urcorner)\)
expresses that \({\scriptsize A}\) is an atomic sentence of the
language of
arithmetic, \(Sent\_{PA}(\ulcorner{\scriptsize A}\urcorner)\)
expresses that \({\scriptsize A}\) is a sentence of the language
of arithmetic.
1. \(\forall{\scriptsize A}(AtomSent\_{PA}({\scriptsize A})
\rightarrow(T{\scriptsize A} \leftrightarrow Tr\_0 ({\scriptsize A})))\)
2. \(\forall{\scriptsize A}(Sent\_{PA}({\scriptsize A})
\rightarrow(T[\neg{\scriptsize A}] \leftrightarrow \neg T{\scriptsize A}))\)
3. \(\forall{\scriptsize A}\forall{\scriptsize B}(Sent\_{PA}({\scriptsize A}) \wedge Sent\_{PA}({\scriptsize B}) \rightarrow
(T[{\scriptsize A} \wedge{\scriptsize B}]
\leftrightarrow(T{\scriptsize A}
\wedge T{\scriptsize B})))\)
4. \(\forall{\scriptsize A}\forall{\scriptsize B}(Sent\_{PA}({\scriptsize A}) \wedge Sent\_{PA}({\scriptsize B}) \rightarrow
(T[{\scriptsize A} \vee{\scriptsize B}] \leftrightarrow
(T{\scriptsize A} \vee T{\scriptsize B})))\)
5. \(\forall{\scriptsize A}(v)(Sent\_{PA}(\forall v{\scriptsize A})
\rightarrow(T[\forall v{\scriptsize A}(v)] \leftrightarrow \forall xT[{\scriptsize A}(\underline{x})]))\)
6. \(\forall{\scriptsize A}(v)(Sent\_{PA}(\forall v{\scriptsize A})
\rightarrow(T[\exists v{\scriptsize A}(v)] \leftrightarrow \exists xT[{\scriptsize A}(\underline{x})]))\)
Axiom 1 says that an atomic sentence of the language of Peano
arithmetic is true if and only if it is true according to the
arithmetical truth predicate for this language
\((Tr\_0\) was defined in Section
3.1). Axioms 2-6 claim that truth commutes with all
connectives and quantifiers. Axiom 5 says that a universally
quantified sentence of the language of arithmetic is true if and only
if all its numerical instances are true.
\(Sent\_{PA}(\forall v{\scriptsize A})\)
says that \({\scriptsize A}(v)\) is a formula with at
most \(v\) free (because
\(\forall v{\scriptsize A}(v)\) is a
sentence).
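The six clauses can be mirrored by a recursive evaluation procedure. The following Python sketch (a toy illustration, not part of the axiomatic apparatus) evaluates sentences represented as assumed nested tuples; since axioms 5 and 6 quantify over all numerals, the sketch cuts the quantifiers off at an assumed finite bound and so only approximates them:

```python
def true_pa(phi, bound=100):
    """Recursively evaluate a sentence, mirroring the compositional axioms:
    truth commutes with connectives and quantifiers. Quantifiers are
    (unsoundly) restricted to numerals below `bound` as a toy approximation."""
    op = phi[0]
    if op == "eq":                      # atomic clause (axiom 1)
        return phi[1] == phi[2]
    if op == "not":                     # axiom 2
        return not true_pa(phi[1], bound)
    if op == "and":                     # axiom 3
        return true_pa(phi[1], bound) and true_pa(phi[2], bound)
    if op == "or":                      # axiom 4
        return true_pa(phi[1], bound) or true_pa(phi[2], bound)
    if op == "all":                     # axiom 5: all numerical instances
        return all(true_pa(phi[1](n), bound) for n in range(bound))
    if op == "ex":                      # axiom 6: some numerical instance
        return any(true_pa(phi[1](n), bound) for n in range(bound))
    raise ValueError(f"not a sentence: {phi!r}")

# "for all x: x = x and not x = x+1" comes out true (below the bound)
print(true_pa(("all", lambda n: ("and", ("eq", n, n),
                                 ("not", ("eq", n, n + 1))))))
```

The recursion terminates because each clause reduces truth to the truth of strictly smaller sentences, which is exactly what makes Tarski's definition an inductive one.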
If these axioms are to be formulated for a language like set theory
that lacks names for all objects, then axioms 5 and 6 require the use
of a satisfaction relation rather than a unary truth predicate.
Axioms in the style of 1-6 above played a central role
in Donald Davidson's theory of meaning and
in several deflationist approaches to truth.
The theory given by all axioms of PA and Axioms 1-6 but with
induction only for \(T\)-free formulae is conservative over PA,
that is, it doesn't prove any new \(T\)-free theorems that are
not already provable in PA. However, not all models of PA can be
expanded to models of PA + axioms 1-6. This follows from a
result due to Lachlan (1981). Kotlarski, Krajewski, and Lachlan (1981)
proved the conservativeness of a theory very similar to PA + axioms 1-6 by
model-theoretic means. Although several authors claimed that this result is also finitarily provable, no such
proof was available until Enayat & Visser (2015) and Leigh
(2015). Moreover, the theory given by PA + axioms 1-6 is relatively interpretable in PA. However, this result is sensitive to the choice of the base theory: it fails for finitely axiomatized theories (Heck 2015, Nicolai 2016). These proof-theoretic results have been used extensively in the discussion of truth-theoretic deflationism (see Cieslinski 2017).
Of course PA + axioms 1-6 is restrictive insofar as it does not
contain the induction axioms in the language *with* the truth
predicate.
There are various labels for the system that is obtained by
adding all induction axioms involving the truth predicate to the
system PA + axioms 1-6: T(PA), CT, PA(S) or PA + 'there is a
full inductive satisfaction class'. This theory is no longer
conservative over its base theory PA. For instance one can formalise
the soundness theorem or global reflection principle for PA, that is,
the claim that all sentences provable in PA are true. The global
reflection principle for PA in turn implies the consistency of PA,
which is not provable in pure PA by
Gödel's
Second Incompleteness Theorem.
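This derivation can be spelled out in a few lines. Instantiating the global reflection principle with the code of \(0=1\) gives

\[
Bew\_{PA}(\ulcorner 0=1\urcorner) \rightarrow T\ulcorner 0=1\urcorner ,
\]

where \(Bew\_{PA}\) is a standard provability predicate for PA. Axiom 1 together with the definition of \(Tr\_0\) proves \(T\ulcorner 0=1\urcorner \leftrightarrow 0=1\), and PA proves \(\neg\, 0=1\); hence \(\neg Bew\_{PA}(\ulcorner 0=1\urcorner)\), which is the consistency statement \(Con\_{PA}\).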
Thus T(PA) is not conservative over PA. T(PA) is much
stronger than the mere consistency statement for PA: T(PA) is
equivalent to the second-order system ACA of arithmetical
comprehension (see Takeuti 1987 and Feferman 1991). More precisely,
T(PA) and ACA are intertranslatable in a way that preserves all
arithmetical sentences. ACA is given by the axioms of PA with full
induction in the second-order language and the following comprehension
principle:
\[
\exists X\forall y(y\in X \leftrightarrow \phi(y))
\]
where \(\phi(y)\) is any formula (in which \(y\) may or may
not be free) that does not contain any second-order quantifiers, but
possibly free second-order variables. In T(PA), quantification
over sets can be defined as quantification over formulas with
one free variable and membership as the truth of the formula as
applied to a number.
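In outline, the translation underlying this interpretation works as follows (a sketch, suppressing the treatment of parameters): a set variable \(X\) is traded for a variable ranging over (codes of) formulas \(\phi(v)\) with exactly one free variable, and membership becomes truth of the instantiated formula,

\[
y \in X \quad \rightsquigarrow \quad T\ulcorner \phi(\underline{y})\urcorner .
\]

Under this translation each instance of arithmetical comprehension becomes provable, since T(PA) proves the uniform \(T\)-sentences for every arithmetical formula.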
As the global reflection principle entails formal consistency, the conservativeness result for PA + axioms 1-6 implies that the global reflection principle for Peano arithmetic is not derivable in the typed compositional theory without expanding the induction axioms.
In fact, this theory proves neither the statement that all logical validities are true (global reflection for pure first-order logic) nor that all the Peano axioms of arithmetic are true.
Perhaps surprisingly, of these two unprovable statements it is the former that is the stronger.
The latter can be added as an axiom and the theory remains conservative over PA (Enayat and Visser 2015, Leigh 2015).
In contrast, over PA + axioms 1-6, the global reflection principle for first-order logic is equivalent to global reflection for Peano arithmetic (Cieslinski 2010), and these two theories have the same arithmetic consequences as adding the axiom of induction for bounded \((\Delta\_0)\) formulas containing the truth predicate (Wcislo and Lelyk 2017).
The transition from PA to T(PA) can be imagined as an act of reflection on the truth of \(\mathcal{L}\)-sentences in PA. Similarly, the step from the typed \(T\)-sentences to the compositional axioms is also tied to a reflection principle, specifically the *uniform reflection principle* over the typed uniform \(T\)-sentences.
This is the collection of sentences
\(\forall x(Bew\_{S} (\ulcorner \phi(\underline{x})\urcorner) \rightarrow \phi(x)) \) where \(\phi\) ranges over formulas in \(\mathcal{L}\_T\) with one free variable and S is the theory of the uniform typed T-sentences.
Uniform reflection exactly captures the difference between the two theories: the reflection principle is both derivable in T(PA) and suffices to derive the six compositional axioms (Halbach 2001).
Moreover, the equivalence extends to iterations of uniform reflection, in that for any ordinal \(\alpha , 1 + \alpha\) iterations of uniform reflection over the typed \(T\)-sentences coincides with T(PA) extended by transfinite induction up to the ordinal \(\varepsilon\_{\alpha}\), namely the \(\alpha\)-th ordinal \(\kappa\) with the property that \(\omega^{\kappa} = \kappa \) (Leigh 2016).
Much stronger fragments of second-order arithmetic can be
interpreted by type-free truth systems, that is, by theories of truth
that prove not only the truth of arithmetical sentences but also the
truth of sentences of the language \(\mathcal{L}\_T\) with the truth
predicate; see Section 4 below.
### 3.4 Hierarchical theories
The above mentioned theories of truth can be iterated by
introducing indexed truth predicates. One adds to the language of PA
truth predicates indexed by ordinals (or ordinal notations) or one
adds a binary truth predicate that applies to ordinal notations and
sentences. In this respect the hierarchical approach does not fit the
framework outlined in Section 2, because the language
does not feature a single unary truth predicate applying to sentences
but rather many unary truth predicates or a single binary truth
predicate (or even a single unary truth predicate applying to pairs of
ordinal notations and sentences).
In such a language an axiomatization of Tarski's hierarchy of truth
predicates can be formulated. On the proof-theoretic side iterating
truth theories in the style of T(PA) corresponds to iterating
elementary comprehension, that is, to iterating ACA. The system of
iterated truth theories corresponds to the system of ramified analysis
(see Feferman 1991).
Visser (1989) has studied non-wellfounded
hierarchies of languages and axiomatizations thereof. If one adds
to PA the \(T\)-sentences \(T\_n\ulcorner \phi \urcorner \leftrightarrow \phi\), where \(\phi\) contains only
truth predicates \(T\_k\)
with \(k\gt n\), a theory is
obtained that does not have a standard (\(\omega\)-)model.
## 4. Type-free truth
The truth predicates in natural languages do not come with any
overt type restriction. Therefore typed theories of truth (axiomatic
as well as semantic theories) have been thought to be inadequate for
analysing the truth predicate of natural language, although recently
hierarchical theories have been advocated by Glanzberg (2015)
and others. This is one motive for investigating type-free theories of
truth, that is, systems of truth that allow one to prove the truth of
sentences involving the truth predicate. Some type-free theories of
truth have much higher expressive power than the typed theories that
have been surveyed in the previous section (at least as long as
indexed truth predicates are avoided). Therefore type-free theories of
truth are much more powerful tools in the reduction of other theories
(for instance, second-order ones).
### 4.1 Type-free \(T\)-sentences
The set of
all \(T\)-sentences \(T\ulcorner \phi \urcorner \leftrightarrow \phi\), where \(\phi\) is any sentence of the
language \(\mathcal{L}\_T\), that
is, where \(\phi\) may contain \(T\), is inconsistent with PA (or
any theory that proves the diagonal lemma) because of
the Liar paradox. Therefore one might
try to drop from the set of all \(T\)-sentences only those that
lead to an inconsistency. In other words, one may consider maximal
consistent sets of \(T\)-sentences. McGee (1992) showed that
there are uncountably many maximal sets of \(T\)-sentences that
are consistent with PA. So the strategy does not lead to a single
theory. Even worse, given an arithmetical sentence (i.e., a sentence
not containing \(T)\) that can neither be proved nor disproved in
PA, one can find a consistent \(T\)-sentence that decides this
sentence (McGee 1992). This implies that many consistent sets
of \(T\)-sentences prove false arithmetical statements. Thus the
strategy to drop just the \(T\)-sentences that yield an
inconsistency is doomed.
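The latter result can be made vivid by a diagonal construction. Given any arithmetical sentence \(\psi\), the diagonal lemma provides a sentence \(\phi\) such that PA proves

\[
\phi \leftrightarrow (T\ulcorner \phi \urcorner \leftrightarrow \psi).
\]

Together with the \(T\)-sentence \(T\ulcorner \phi \urcorner \leftrightarrow \phi\), this yields \(\phi \leftrightarrow(\phi \leftrightarrow \psi)\) and hence \(\psi\) by classical propositional logic. So if \(\psi\) is undecidable in PA, the \(T\)-sentence for this \(\phi\) is consistent with PA but decides \(\psi\); if \(\psi\) is moreover false, a false arithmetical statement becomes provable.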
A set of \(T\)-sentences that does not imply any false
arithmetical statement may be obtained by allowing only those \(\phi\)
in \(T\)-sentences \(T\ulcorner \phi \urcorner \leftrightarrow \phi\) that contain \(T\) only positively, that is, in the
scope of an even number of negation symbols. Like the typed theory
in Section 3.2 this theory does not prove
certain generalizations but proves the same T-free sentences as the
strong type-free compositional Kripke-Feferman theory below (Halbach
2009). Schindler (2015) obtained a deductively very strong truth theory based on stratified disquotational principles.
### 4.2 Compositionality
Besides the disquotational feature of truth, one would also like to
capture the compositional features of truth and generalize the axioms
of typed compositional truth to the type-free case. To this end,
axioms or rules concerning the truth of atomic sentences with the
truth predicate will have to be added and the restriction
to \(T\)-free sentences in the compositional axioms will have to
be lifted. In order to treat truth like other predicates, one will add
the axiom
\(\forall{\scriptsize A}(T[T{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\) (where
\(\forall{\scriptsize A}\) ranges over all sentences). If the
type restriction of the typed compositional axiom for
negation is removed, the axiom
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\) is obtained.
However, the axioms
\(\forall{\scriptsize A}(T[T{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\) and
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\) are inconsistent over
weak theories of syntax, so one of them has to be given up. If
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\) is retained, one will
have to find weaker axioms or rules for truth iteration, but truth
remains a classical concept in the sense that
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\) implies the law of
excluded middle (for any sentence either the sentence itself or its
negation is true) and the law of noncontradiction (for no sentence the
sentence itself and its negation are true). If, in contrast,
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\) is rejected and
\(\forall{\scriptsize A}(T[T{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\) retained, then it will
become provable that either some sentences are true together with
their negations or that for some sentences neither they nor their
negations are true, and thus systems of non-classical truth are
obtained, although the systems themselves are still formulated in
classical logic. In the next two sections we overview the most
prominent system of each kind.
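The inconsistency of the two axioms can be seen by a liar-style diagonal argument. Strong diagonalization yields a sentence \(L\) that is identical to the sentence \(\neg T\ulcorner L\urcorner\). Then:

\[\begin{align\*}
T\ulcorner L\urcorner &\leftrightarrow T\ulcorner \neg T\ulcorner L\urcorner \urcorner && \text{(choice of } L) \\
&\leftrightarrow \neg T\ulcorner T\ulcorner L\urcorner \urcorner && \text{(negation axiom)} \\
&\leftrightarrow \neg T\ulcorner L\urcorner && \text{(truth iteration axiom),}
\end{align\*}\]

which is a contradiction.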
### 4.3 The Friedman-Sheard theory and revision semantics
The system FS, named after Friedman and Sheard (1987), retains the
negation axiom
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\). The further
compositional axioms are obtained by lifting the type restriction to
their untyped counterparts:
1. \(\forall{\scriptsize A}(AtomSent\_{PA}({\scriptsize A})
\rightarrow(T{\scriptsize A} \leftrightarrow Tr\_0 ({\scriptsize A})))\)
2. \(\forall{\scriptsize A}(T[\neg{\scriptsize A}] \leftrightarrow \neg T{\scriptsize A})\)
3. \(\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \wedge{\scriptsize B}] \leftrightarrow(T{\scriptsize A} \wedge T{\scriptsize B}))\)
4. \(\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \vee{\scriptsize B}] \leftrightarrow(T{\scriptsize A} \vee T{\scriptsize B}))\)
5. \(\forall{\scriptsize A}(v)(Sent(\forall v{\scriptsize A})
\rightarrow(T[\forall v{\scriptsize A}(v)] \leftrightarrow \forall xT[{\scriptsize A}(\underline{x})]))\)
6. \(\forall{\scriptsize A}(v)(Sent(\forall v{\scriptsize A})
\rightarrow(T[\exists v{\scriptsize A}(v)] \leftrightarrow \exists xT[{\scriptsize A}(\underline{x})]))\)
These axioms are added to PA formulated in the
language \(\mathcal{L}\_T\). As the
truth iteration axiom
\(\forall{\scriptsize A}(T[T{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\) is inconsistent, only the
following two rules are added:
If \(\phi\) is a theorem, one may infer
\(T\ulcorner \phi \urcorner\), and conversely, if
\(T\ulcorner \phi \urcorner\) is a theorem, one may
infer \(\phi\).
It follows from results due to McGee (1985) that FS is
\(\omega\)-inconsistent, that is, FS proves
\(\exists x\neg \phi(x)\), but also proves \(\phi(0)\),
\(\phi(1)\), \(\phi(2)\), ... for some formula \(\phi(x)\)
of \(\mathcal{L}\_T\). The
arithmetical theorems of FS, however, are all correct.
In FS one can define all finite levels of the classical Tarskian
hierarchy, but FS isn't strong enough to allow one to recover
any of its transfinite levels. Indeed, Halbach (1994) determined its
proof-theoretic strength to be precisely that of the theory of
ramified truth for all finite levels (i.e., finitely iterated T(PA);
see Section 3.4) or, equivalently, the theory of
ramified analysis for all finite levels. If either direction of the
rule is dropped but the other kept, FS retains its proof-theoretic
strength (Sheard 2001).
It is a virtue of FS that it is thoroughly classical: It is
formulated in classical logic; if a sentence is provably true in FS,
then the sentence itself is provable in FS; and conversely if a
sentence is provable, then it is also provably true. Its drawback is
its \(\omega\)-inconsistency. FS may be seen as an axiomatization of
rule-of-revision semantics for all finite levels (see the entry on
the revision theory of truth).
### 4.4 The Kripke-Feferman theory
The Kripke-Feferman theory retains the truth iteration axiom
\(\forall{\scriptsize A}(T[T{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\), but the notion of truth
axiomatized is no longer classical because the negation axiom
\(\forall{\scriptsize A}(T[\neg{\scriptsize A}]
\leftrightarrow \neg T{\scriptsize A})\) is dropped.
The semantical construction captured by this theory is a
generalization of the Tarskian typed inductive definition of truth
captured by T(PA). In the generalized definition one starts with the
true atomic sentence of the arithmetical language and then one
declares true the complex sentences depending on whether its
components are true or not. For instance, as in the typed case, if
\(\phi\) and \(\psi\) are true, their conjunction \(\phi \wedge \psi\) will be
true as well. In the case of the quantified sentences their truth
value is determined by the truth values of their instances (one could
render the quantifier clauses purely compositional by using a
satisfaction predicate); for instance, a universally quantified
sentence will be declared true if and only if all its instances are
true. One can now extend this inductive definition of truth to the
language \(\mathcal{L}\_T\) by
declaring a sentence of the
form \(T\ulcorner \phi \urcorner\) true
if \(\phi\) is already true. Moreover one will declare
\(\neg T\ulcorner \phi \urcorner\) true if
\(\neg \phi\) is true. By making this idea precise, one obtains a variant
of Kripke's (1975) theory of truth with the so called Strong Kleene
valuation scheme (see the entry
on many-valued logic). If
axiomatized it leads to the following system, which is known as KF
('Kripke-Feferman'), of which several variants
appear in the literature:
1. \(\forall{\scriptsize A}(AtomSent\_{PA}({\scriptsize A})
\rightarrow(T{\scriptsize A} \leftrightarrow Tr\_0 ({\scriptsize A})))\)
2. \(\forall{\scriptsize A}(AtomSent\_{PA}({\scriptsize A})
\rightarrow(T[\neg{\scriptsize A}] \leftrightarrow \neg Tr\_0 ({\scriptsize A})))\)
3. \(\forall{\scriptsize A}(T[T{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\)
4. \(\forall{\scriptsize A}(T[\neg T{\scriptsize A}]
\leftrightarrow T[\neg{\scriptsize A}])\)
5. \(\forall{\scriptsize A}(T[\neg \neg{\scriptsize A}]
\leftrightarrow T{\scriptsize A})\)
6. \(\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \wedge{\scriptsize B}] \leftrightarrow(T{\scriptsize A} \wedge T{\scriptsize B}))\)
7. \(\forall{\scriptsize A}\forall{\scriptsize B}(T[\neg({\scriptsize A} \wedge{\scriptsize B})] \leftrightarrow
(T[\neg{\scriptsize A}] \vee T[\neg{\scriptsize B}]))\)
8. \(\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \vee{\scriptsize B}] \leftrightarrow(T{\scriptsize A} \vee T{\scriptsize B}))\)
9. \(\forall{\scriptsize A}\forall{\scriptsize B}(T[\neg({\scriptsize A} \vee{\scriptsize B})] \leftrightarrow
(T[\neg{\scriptsize A}] \wedge T[\neg{\scriptsize B}]))\)
10. \(\forall{\scriptsize A}(v)(Sent(\forall v{\scriptsize A})
\rightarrow(T[\forall v{\scriptsize A}(v)] \leftrightarrow \forall xT[{\scriptsize A}(\underline{x})]))\)
11. \(\forall{\scriptsize A}(v)(Sent(\forall v{\scriptsize A})
\rightarrow(T[\neg \forall v{\scriptsize A}(v)] \leftrightarrow \exists xT[\neg{\scriptsize A}(\underline{x})]))\)
12. \(\forall{\scriptsize A}(v)(Sent(\forall v{\scriptsize A})
\rightarrow(T[\exists v{\scriptsize A}(v)] \leftrightarrow \exists xT[{\scriptsize A}(\underline{x})]))\)
13. \(\forall{\scriptsize A}(v)(Sent(\forall v{\scriptsize A})
\rightarrow(T[\neg \exists v{\scriptsize A}(v)] \leftrightarrow \forall xT[\neg{\scriptsize A}(\underline{x})]))\)
Apart from the truth-theoretic axioms, KF comprises all axioms of PA
and all induction axioms involving the truth predicate. The system is
credited to Feferman on the basis of two lectures for the Association
of Symbolic Logic, one in 1979 and the second in 1983, as well as in
subsequent manuscripts. Feferman published his version of the system
under the label Ref(PA) ('weak reflective closure of PA')
only in 1991, after several other versions of KF had already appeared
in print (e.g., Reinhardt 1986, Cantini 1989, who both refer to this
unpublished work by Feferman).
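The generalized inductive definition described above can be simulated for a tiny finite stock of sentences. In the following Python sketch (a toy model: named nodes stand in for Gödel codes, and only negation and the truth predicate are treated), the monotone operator collecting the sentences already determined as true or false is iterated from the empty interpretation until Kripke's least fixed point is reached:

```python
def val(phi, ext, anti):
    """Strong Kleene evaluation: True, False, or None (undetermined)."""
    op = phi[0]
    if op == "atom":                 # arithmetical atom with a fixed value
        return phi[1]
    if op == "T":                    # T applied to the name (code) of a sentence
        name = phi[1]
        if name in ext:
            return True
        if name in anti:
            return False
        return None
    if op == "not":
        v = val(phi[1], ext, anti)
        return None if v is None else not v

def least_fixed_point(sentences):
    """Iterate the monotone jump from the empty extension/anti-extension."""
    ext, anti = set(), set()
    while True:
        new_ext = {n for n, f in sentences.items() if val(f, ext, anti) is True}
        new_anti = {n for n, f in sentences.items() if val(f, ext, anti) is False}
        if (new_ext, new_anti) == (ext, anti):
            return ext, anti
        ext, anti = new_ext, new_anti

S = {
    "t": ("atom", True),             # a true arithmetical sentence, e.g. 0=0
    "f": ("atom", False),            # a false one, e.g. 0=1
    "Tt": ("T", "t"),                # "'0=0' is true"
    "Tf": ("T", "f"),                # "'0=1' is true"
    "L": ("not", ("T", "L")),        # the Liar: "L is not true"
}
ext, anti = least_fixed_point(S)
print(sorted(ext), sorted(anti))     # the Liar lands in neither set
```

In the resulting fixed point the arithmetical truths and their truth-ascriptions are true, the falsehoods and their truth-ascriptions are false, and the Liar sentence receives no truth value, just as in Kripke's construction.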
KF itself is formulated in classical logic, but it describes a
non-classical notion of truth. For instance, one can
prove \(T\ulcorner L\urcorner \leftrightarrow T\ulcorner\neg L\urcorner\)
if \(L\) is the Liar sentence. Thus KF proves that either both
the Liar sentence and its negation are true or that neither is
true. So the notion of truth is either paraconsistent (some sentence is
true together with its negation) or paracomplete (for some sentence,
neither it nor its negation is true). Some authors have augmented KF with an axiom ruling out
truth-value gluts, which makes KF sound for Kripke's model
construction, because Kripke had ruled out truth-value gluts.
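The proof of the equivalence \(T\ulcorner L\urcorner \leftrightarrow T\ulcorner\neg L\urcorner\) is short. With \(L\) the strongly diagonalized Liar sentence, that is, with \(L\) identical to \(\neg T\ulcorner L\urcorner\), instantiating axiom 4 with \(L\) gives

\[
T\ulcorner \neg T\ulcorner L\urcorner \urcorner \leftrightarrow T\ulcorner \neg L\urcorner ,
\]

and since \(L\) just is \(\neg T\ulcorner L\urcorner\), the left-hand side is \(T\ulcorner L\urcorner\).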
Feferman (1991) showed that KF is proof-theoretically equivalent to
the theory of ramified analysis through all levels below
\(\varepsilon\_0\), the limit of the sequence \(\omega ,
\omega^{\omega},
\omega^{\omega^{ \omega} },\ldots\), or a theory of
ramified truth through the same ordinals. This result shows that in KF
exactly \(\varepsilon\_0\) many levels of the classical Tarskian
hierarchy in its axiomatized form can be recovered. Thus KF is far
stronger than FS, let alone T(PA). Feferman (1991) also devised a
strengthening of KF that is as strong as full predicative analysis,
that is ramified analysis or truth up to the ordinal
\(\Gamma\_0\).
Just as with the typed truth predicate, the theory KF (more precisely, a common variant of it) can be obtained via an act of reflection on a system of untyped \(T\)-sentences. The system of \(T\)-sentences in question is the extension of the uniform positive untyped \(T\)-sentences by a primitive falsity predicate, that is, the theory features two unary predicates \(T\) and \(F\) and axioms
\[\begin{align\*}
&\forall x(T\ulcorner \phi(\underline{x})\urcorner \leftrightarrow \phi(x)) \\
& \forall x(F\ulcorner \phi(\underline{x})\urcorner \leftrightarrow \phi '(x))
\end{align\*}\]
for every formula \(\phi(v)\) positive in both \(T\) and \(F\), where \(\phi '\) represents the De Morgan dual of \(\phi\) (exchanging \(T\) for \(F\) and vice versa).
From an application of uniform reflection over this disquotational theory, the truth axioms for the corresponding two predicate version of KF are derivable (Horsten and Leigh, 2016). The converse also holds, as does the generalisation to finite and transfinite iterations of reflection (Leigh, 2017).
### 4.5 Capturing the minimal fixed point
As remarked above, if KF
proves \(T\ulcorner \phi \urcorner\) for some
sentence \(\phi\) then \(\phi\) holds in all Kripke fixed point models. In
particular, there are \(2^{\aleph\_0}\) fixed
points that form a model of the internal theory of KF. Thus from the
perspective of KF, the least fixed point (from which Kripke's theory
is defined) is not singled out. Burgess (2014) provides an
expansion of KF, named \(\mu\)KF, that attempts to capture the minimal
Kripkean fixed point. KF is expanded by additional axioms that express
that the internal theory of KF is the smallest class closed under the
defining axioms for Kripkean truth. This can be formulated as a single
axiom schema that states, for each open formula \(\phi\),
If \(\phi\) satisfies the same axioms of KF as the predicate \(T\)
then \(\phi\) holds of every true sentence.
From a proof-theoretic perspective \(\mu\)KF is significantly stronger
than KF. The single axiom schema expressing the minimality of the
truth predicate allows one to embed into \(\mu\)KF the system
ID\(\_1\) of one arithmetical inductive definition, an
impredicative theory. While intuitively plausible, \(\mu\)KF suffers the
same expressive incompleteness as KF: Since the minimal Kripkean fixed
point forms a complete \(\Pi^{1}\_1\) set and the
internal theory of \(\mu\)KF remains recursively enumerable, there are
standard models of the theory in which the interpretation of the truth
predicate is not actually the minimal fixed point. A thorough
analysis of the models of \(\mu\)KF is currently lacking.
### 4.6 Axiomatisations of Kripke's theory with supervaluations
KF is intended to be an axiomatization of Kripke's (1975)
semantical theory. This theory is based on partial logic with the
Strong Kleene evaluation scheme. In Strong Kleene logic not every
sentence \(\phi \vee \neg \phi\) is a theorem; in particular, this
disjunction is not true if \(\phi\) lacks a truth
value. Consequently \(T\ulcorner L\vee \neg L\urcorner\)
(where \(L\) is the Liar sentence) is not a theorem of KF and its
negation is even provable. Cantini (1990) has proposed a system VF
that is inspired by the supervaluation scheme. In VF all classical
tautologies are provably true
and \(T\ulcorner L \vee \neg L\urcorner\), for instance, is a theorem of
VF. VF can be formulated
in \(\mathcal{L}\_T\) and uses
classical logic. It is no longer a *compositional* theory of
truth, for the following is not a theorem of VF:
\[
\forall{\scriptsize A}\forall{\scriptsize B}(T[{\scriptsize A} \vee{\scriptsize B}] \leftrightarrow(T{\scriptsize A} \vee T{\scriptsize B})).
\]
Not only is this principle inconsistent with the other axioms of
VF, it does not fit the supervaluationist model for it
implies \(T\ulcorner L\urcorner \vee T\ulcorner \neg L\urcorner\),
which of course is not correct because according to the intended
semantics neither the liar sentence nor its negation is true: both
lack a truth value.
Extending a result due to Friedman and Sheard (1987), Cantini
showed that VF is much stronger than KF: VF is proof-theoretically
equivalent to the theory ID\(\_1\) of non-iterated inductive
definitions, which is not predicative.
## 5. Non-classical approaches to self-reference
The theories of truth discussed thus far are all axiomatized in
classical logic. Some authors have also looked into axiomatic theories
of truth based on non-classical logic (see, for example, Field 2008,
Halbach and Horsten 2006, Leigh and Rathjen 2012). There are a number
of reasons why a logic weaker than classical logic may be
preferred. The most obvious is that by weakening the logic, some
collections of axioms of truth that were previously inconsistent
become consistent. Another common reason is that the axiomatic theory
in question intends to capture a particular non-classical semantics of
truth, for which a classical background theory may prove unsound.
There is also a large number of approaches that employ paraconsistent or substructural logics. In most cases these approaches do not employ an axiomatic base theory such as Peano arithmetic and therefore deviate from the setting considered here, although there is no technical obstacle in applying paraconsistent or substructural logics to truth theories over such base theories. Here we cover only accounts that are close to the setting considered above. For further information on the application of substructural and paraconsistent logics to the truth-theoretic paradoxes see the relevant section in the entry on the liar paradox.
### 5.1 The truth predicate in intuitionistic logic
The inconsistency of the \(T\)-sentences does not rely on
classical reasoning. It is also inconsistent over much weaker logics
such as minimal logic and partial logic. However, classical logic does
play a role in restricting the free use of principles of truth. For
instance, over a classical base theory, the compositional axiom for
implication \((\rightarrow)\) is equivalent to the principle of completeness,
\(\forall{\scriptsize A}(T[{\scriptsize A}] \vee T[\neg{\scriptsize A}])\). If the logic under
the truth predicate is classical, completeness is equivalent to the compositional axiom for disjunction. Without the law of
excluded middle, FS can be formulated as a fully compositional theory
while not proving the truth-completeness principle (Leigh & Rathjen
2012). In addition, classical logic has an effect on attempts to
combine compositional and self-applicable axioms of truth. If, for
example, one drops the axiom of truth-consistency from FS (the
left-to-right direction of axiom 2 in Section 4.3)
as well as the law of excluded middle for the truth predicate, it is
possible to add consistently the truth-iteration axiom
\(\forall{\scriptsize A}(T[{\scriptsize A}]
\rightarrow T[T{\scriptsize A}])\).
The resulting theory
still bears a strong resemblance to FS in that the constructive
version of the rule-of-revision semantics for all finite levels
provides a natural model of the theory, and the two theories share the same \(\Pi^0_2\) consequences (Leigh & Rathjen 2012; Leigh 2013).
This result should be contrasted with KF which, if formulated
without the law of excluded middle, remains maximally consistent with
respect to its choice of truth axioms but is a conservative extension of
Heyting arithmetic.
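The role classical logic plays in the equivalence between the compositional axiom for \(\rightarrow\) and truth-completeness can be illustrated in one direction. The following is a sketch, not the derivation given in the cited sources: it assumes that \(\neg A\) is treated as \(A \rightarrow \bot\) and uses only the right-to-left half of the compositional axiom.

```latex
% Hedged sketch: truth-completeness from the compositional axiom for
% implication, assuming \neg A abbreviates A \to \bot and reasoning
% classically in the base theory.
\begin{align*}
  &\text{Axiom (right-to-left half):} && (T[A] \to T[B]) \to T[A \to B] \\
  &\text{Suppose } \neg T[A]\text{:}  && T[A] \to T[\bot] \text{ holds vacuously} \\
  &\text{Instantiating } B := \bot\text{:} && T[A \to \bot], \text{ i.e. } T[\neg A] \\
  &\text{Discharging the supposition:} && \neg T[A] \to T[\neg A] \\
  &\text{Classically equivalent:}     && T[A] \vee T[\neg A]
\end{align*}
```

The final step uses the law of excluded middle for the truth predicate; without it the implication \(\neg T[A] \rightarrow T[\neg A]\) does not yield the disjunction, which is why FS can be formulated compositionally without proving truth-completeness.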
### 5.2 Axiomatising Kripke's theory
Kripke's (1975) theory in its different guises is based on partial
logic. In order to obtain models for a theory in classical logic, the
extension of the truth predicate in the partial model is used again as
the extension of truth in the classical model. In the classical model
false sentences and those without a truth value in the partial model
are declared not true. KF is sound with respect to these classical
models and thus incorporates two distinct logics. The first is the
'internal' logic of statements under the truth predicate
and is formulated with the Strong Kleene valuation schema. The second
is the 'external' logic which is full classical logic. An
effect of formulating KF in classical logic is that the theory cannot
be consistently closed under the truth-introduction rule
If \(\phi\) is a theorem of KF, so is
\(T\ulcorner \phi \urcorner\).
A second effect of classical logic is the statement of the excluded
middle for the liar sentence. Neither the liar sentence nor its
negation receives a truth value in Kripke's theory, so the disjunction
of the two is not valid. The upshot is that KF, if viewed as an
axiomatisation of Kripke's theory, is not sound with respect to its
intended semantics. For this reason Halbach and Horsten (2006) and
Horsten (2011) explore an axiomatization of Kripke's theory with
partial logic as inner *and* outer logic. Their suggestion, a
theory labelled PKF ('partial KF'), can be axiomatised as
a Gentzen-style two-sided sequent calculus based on Strong Kleene
logic (see the entry on many-valued
logic). PKF is formed by adding to this calculus the
Peano-Dedekind axioms of arithmetic including full induction and
the compositional and truth-iteration rules for the truth predicate as
prescribed by Kripke's theory. The result is a theory of truth that is
sound with respect to Kripke's theory.
Halbach and Horsten show that this axiomatization of Kripke's
theory is significantly weaker than its classical cousin KF. The
result demonstrates that restricting logic only for sentences with the
truth predicate can also hamper the derivation of truth-free theorems.
### 5.3 Adding a conditional
Field (2008) and others criticised theories based on partial logic
for the absence of a 'proper' conditional and
bi-conditional. Various authors have proposed conditionals and
bi-conditionals that are not definable in terms of \(\neg , \vee\) and
\(\wedge\). Field (2008) aims at an axiomatic theory of truth not
dissimilar to PKF but with a new conditional. Feferman (1984) also
introduced a bi-conditional to a theory in non-classical logic. Unlike
Field's and his own 1984 theory, Feferman's (2008) theory DT is
formulated in classical logic, but its internal logic is again a
partial logic with a strong conditional.
## 1. Versions of the Coherence Theory of Truth
The coherence theory of truth has several versions. These versions
differ on two major issues. Different versions of the theory give
different accounts of the coherence relation. Different varieties of
the theory also give various accounts of the set (or sets) of
propositions with which true propositions cohere. (Such
a set will be called a *specified set*.)
According to some early versions of the coherence theory, the
coherence relation is simply consistency. On this view, to say that a
proposition coheres with a specified set of propositions is to say that
the proposition is consistent with the set. This account of coherence
is unsatisfactory for the following reason. Consider two propositions
which do not belong to a specified set. These propositions could both
be consistent with a specified set and yet be inconsistent with each
other. If coherence is consistency, the coherence theorist would have
to claim that both propositions are true, but this is impossible.
A more plausible version of the coherence theory states that the
coherence relation is some form of entailment. Entailment can be
understood here as strict logical entailment, or entailment in some
looser sense. According to this version, a proposition coheres with a
set of propositions if and only if it is entailed by members of the
set. Another more plausible version of the theory, held for example in
Bradley (1914), is that coherence is mutual explanatory support
between propositions.
The second point on which coherence theorists (coherentists, for
short) differ is the constitution of the specified set of propositions.
Coherentists generally agree that the specified set consists of
propositions believed or held to be true. They differ on the questions
of who believes the propositions and when. At one extreme, coherence
theorists can hold that the specified set of propositions is the
largest consistent set of propositions currently believed by actual
people. For such a version of the theory, see Young (1995). According
to a moderate position, the specified set consists of those
propositions which will be believed when people like us (with finite
cognitive capacities) have reached some limit of inquiry. For such a
coherence theory, see Putnam (1981). At the other extreme, coherence
theorists can maintain that the specified set contains the propositions
which would be believed by an omniscient being. Some idealists seem to
accept this account of the specified set.
If the specified set is a set actually believed, or even a set which
would be believed by people like us at some limit of inquiry,
coherentism involves the rejection of realism about truth. Realism
about truth involves acceptance of the principle of bivalence
(according to which every proposition is either true or false) and the
principle of transcendence (which says that a proposition may be true
even though it cannot be known to be true). Coherentists who do not
believe that the specified set is the set of propositions believed by
an omniscient being are committed to rejection of the principle of
bivalence since it is not the case that for every proposition either it
or a contrary proposition coheres with the specified set. They reject
the principle of transcendence since, if a proposition coheres with a
set of beliefs, it can be known to cohere with the set.
## 2. Arguments for Coherence Theories of Truth
Two principal lines of argument have led philosophers to adopt a
coherence theory of truth. Early advocates of coherence theories were
persuaded by reflection on metaphysical questions. More recently,
epistemological and semantic considerations have been the basis for
coherence theories.
### 2.1 The Metaphysical Route to Coherentism
Early versions of the coherence theory were associated with
idealism. Walker (1989) attributes coherentism to Spinoza, Kant, Fichte
and Hegel. Certainly a coherence theory was adopted by a number of
British Idealists in the last years of the nineteenth century and the
first decades of the twentieth. See, for example, Bradley
(1914).
Idealists are led to a coherence theory of truth by their
metaphysical position. Advocates of the correspondence theory believe
that a belief is (at least most of the time) ontologically distinct
from the objective conditions which make the belief true. Idealists do
not believe that there is an ontological distinction between beliefs
and what makes beliefs true. From the idealists' perspective, reality
is something like a collection of beliefs. Consequently, a belief
cannot be true because it corresponds to something which is not a
belief. Instead, the truth of a belief can only consist in its
coherence with other beliefs. A coherence theory of truth which results
from idealism usually leads to the view that truth comes in degrees. A
belief is true to the degree that it coheres with other beliefs.
Since idealists do not recognize an ontological distinction between
beliefs and what makes them true, distinguishing between versions of
the coherence theory of truth adopted by idealists and an identity
theory of truth can be difficult. The article on Bradley in this
Encyclopedia (Candlish 2006) argues that Bradley had an identity
theory, not a coherence theory.
In recent years metaphysical arguments for coherentism have found
few advocates. This is due to the fact that idealism is not widely
held.
### 2.2 Epistemological Routes to Coherentism
Blanshard (1939, ch. XXVI) argues that a coherence theory of
justification leads to a coherence theory of truth. His argument runs
as follows. Someone might hold that coherence with a set of beliefs is
the test of truth but that truth consists in correspondence to
objective facts. If, however, truth consists in correspondence to
objective facts, coherence with a set of beliefs will not be a test of
truth. This is the case since there is no guarantee that a perfectly
coherent set of beliefs matches objective reality. Since coherence with
a set of beliefs is a test of truth, truth cannot consist in
correspondence.
Blanshard's argument has been criticised by, for example, Rescher
(1973). Blanshard's argument depends on the claim that coherence with a
set of beliefs is the test of truth. Understood in one sense, this
claim is plausible enough. Blanshard, however, has to understand this
claim in a very strong sense: coherence with a set of beliefs is an
infallible test of truth. If coherence with a set of beliefs is simply
a good but fallible test of truth, as Rescher suggests, the argument
fails. The "falling apart" of truth and justification to which
Blanshard refers is to be expected if coherence with a set of beliefs
is only a fallible test of truth.
Another epistemological argument for coherentism is based on the
view that we cannot "get outside" our set of beliefs and compare
propositions to objective facts. A version of this argument was
advanced by some logical positivists including Hempel (1935) and
Neurath (1983). This argument, like Blanshard's, depends on a coherence
theory of justification. The argument infers from such a theory that we
can only know that a proposition coheres with a set of beliefs. We can
never know that a proposition corresponds to reality.
This argument is subject to at least two criticisms. For a start, it
depends on a coherence theory of justification, and is vulnerable to
any objections to this theory. More importantly, a coherence theory of
truth does not follow from the premisses. We cannot infer from the fact
that a proposition cannot be known to correspond to reality that it
does not correspond to reality. Even if correspondence theorists admit
that we can only know which propositions cohere with our beliefs, they
can still hold that truth consists in correspondence. If correspondence
theorists adopt this position, they accept that there may be truths
which cannot be known. Alternatively, they can argue, as does Davidson
(1986), that the coherence of a proposition with a set of beliefs is a
good indication that the proposition corresponds to objective facts and
that we can know that propositions correspond.
Coherence theorists need to argue that propositions cannot
correspond to objective facts, not merely that they cannot be known to
correspond. In order to do this, the foregoing argument for coherentism
must be supplemented. One way to supplement the argument would be to
argue as follows. As noted above, the correspondence and coherence
theories have differing views about the nature of truth conditions. One
way to decide which account of truth conditions is correct is to pay
attention to the process by which propositions are assigned truth
conditions. Coherence theorists can argue that the truth conditions of
a proposition are the conditions under which speakers make a practice
of asserting it. Coherentists can then maintain that speakers can only
make a practice of asserting a proposition under conditions the
speakers are able to recognise as justifying the proposition. Now the
(supposed) inability of speakers to "get outside" of their beliefs is
significant. Coherentists can argue that the only conditions speakers
can recognise as justifying a proposition are the conditions under
which it coheres with their beliefs. When the speakers make a practice
of asserting the proposition under these conditions, they become the
proposition's truth conditions. For an argument of this sort see Young
(1995).
## 3. Criticisms of Coherence Theories of Truth
Any coherence theory of truth faces two principal challenges. The first
may be called the specification objection. The second is the
transcendence objection.
### 3.1 The Specification Objection
According to the specification objection, coherence theorists have
no way to identify the specified set of propositions without
contradicting their position. This objection originates in Russell
(1907). Opponents of the coherence theory can argue as follows. The
proposition (1) "Jane Austen was hanged for murder" coheres
with some set of propositions. (2) "Jane Austen died in her
bed" coheres with another set of propositions. No one supposes
that the first of these propositions is true, in spite of the fact that
it coheres with a set of propositions. The specification objection
charges that coherence theorists have no grounds for saying that (1) is
false and (2) true.
Some responses to the specification problem are unsuccessful. One
could say that we have grounds for saying that (1) is false and (2) is
true because the latter coheres with propositions which correspond to
the facts. Coherentists cannot, however, adopt this response without
contradicting their position. Sometimes coherence theorists maintain
that the specified system is the most comprehensive system, but this
is not the basis of a successful response to the specification
problem. Coherentists can only, unless they are to compromise their
position, define comprehensiveness in terms of the size of a
system. Coherentists cannot, for example, talk about the most
comprehensive system composed of propositions which correspond to
reality. There is no reason, however, why two or more systems cannot
be equally large. Other criteria of the specified system, to which
coherentists frequently appeal, are similarly unable to solve the
specification problem. These criteria include simplicity, empirical
adequacy and others. Again, there seems to be no reason why two or
more systems cannot equally meet these criteria.
Although some responses to Russell's version of the
specification objection are unsuccessful, the objection fails to refute the
coherence theory. Coherentists do not believe that the truth of a
proposition consists in coherence with any arbitrarily chosen set of
propositions. Rather, they hold that truth consists in coherence with a
set of beliefs, or with a set of propositions held to be true. No one
actually believes the set of propositions with which (1) coheres.
Coherence theorists conclude that they can hold that (1) is false
without contradicting themselves.
A more sophisticated version of the specification objection has been
advanced by Walker (1989); for a discussion, see Wright (1995). Walker
argues as follows. In responding to Russell's version of the
specification objection, coherentists claim that some set of
propositions, call it *S*, is believed. They are committed to
the truth of (3) "*S* is believed." The question of
what it is for (3) to be true then arises. Coherence theorists might
answer this question by saying that "'*S* is
believed' is believed" is true. If they give this answer,
they are apparently off on an infinite regress, and they will never
say what it is for a proposition to be true. Their plight is worsened
by the fact that arbitrarily chosen sets of propositions can include
propositions about what is believed. So, for example, there will be a
set which contains "Jane Austen was hanged for murder,"
"'Jane Austen was hanged for murder' is
believed," and so on. The only way to stop the regress seems to
be to say that the truth conditions of (3) consist in the objective
fact that *S* is believed. If, however, coherence theorists adopt
this position, they seem to contradict their own position by accepting
that the truth conditions of some proposition consist in facts, not in
propositions in a set of beliefs.
There is some doubt about whether Walker's version of the
specification objection succeeds. Coherence theorists can reply to
Walker by saying that nothing in their position is inconsistent with
the view that there is a set of propositions which is
believed. Even though this objective fact obtains, the truth conditions
of propositions, including propositions about which sets of
propositions are believed, are the conditions under which they cohere
with a set of propositions. For a defence of the coherence theory against
Walker's version of the specification objection, see Young (2001).
A coherence theory of truth gives rise to a regress, but it is not a
vicious regress and the correspondence theory faces a similar regress.
If we say that *p* is true if and only if it coheres with a
specified set of propositions, we may be asked about the truth
conditions of "*p* coheres with a specified set."
Plainly, this is the start of a regress, but not one to worry
about. It is just what one would expect, given that the coherence
theory states that it gives an account of the truth conditions of all
propositions. The correspondence theory faces a similar benign
regress. The correspondence theory states that a proposition is true
if and only if it corresponds to certain objective conditions. The
proposition "*p* corresponds to certain objective
conditions" is also true if and only if it corresponds to
certain objective conditions, and so on.
### 3.2 The Transcendence Objection
The transcendence objection charges that a coherence theory of truth
is unable to account for the fact that some propositions are true which
cohere with no set of beliefs. According to this objection, truth
transcends any set of beliefs. Someone might argue, for example, that
the proposition "Jane Austen wrote ten sentences on November
17th, 1807" is either true or false. If it is false, some other
proposition about how many sentences Austen wrote that day is true. No
proposition, however, about precisely how many sentences Austen wrote
coheres with any set of beliefs and we may safely assume that none will
ever cohere with a set of beliefs. Opponents of the coherence theory
will conclude that there is at least one true proposition which does
not cohere with any set of beliefs.
Some versions of the coherence theory are immune to the
transcendence objection. A version which holds that truth is coherence
with the beliefs of an omniscient being is proof against the objection.
Every truth coheres with the set of beliefs of an omniscient being. All
other versions of the theory, however, have to cope with the objection,
including the view that truth is coherence with a set of propositions
believed at the limit of inquiry. Even at the limit of inquiry, finite
creatures will not be able to decide every question, and truth may
transcend what coheres with their beliefs.
Coherence theorists can defend their position against the
transcendence objection by maintaining that the objection begs the
question. Those who present the objection assume, generally without
argument, that it is possible that some proposition be true even though
it does not cohere with any set of beliefs. This is precisely what
coherence theorists deny. Coherence theorists have arguments for
believing that truth cannot transcend what coheres with some set of
beliefs. Their opponents need to take issue with these arguments rather
than simply assert that truth can transcend what coheres with a
specified system.
### 3.3 The Logic Objection
Russell (1912) presented a third classic objection to the coherence
theory of truth. According to this objection, any talk about coherence
presupposes the truth of the laws of logic. For example, Russell
argues, to say that two propositions cohere with each other is to
presuppose the truth of the law of non-contradiction. In this case,
coherentism has no account of the truth of the law of
non-contradiction. If, however, the coherence theorist holds that the
truth of the law of non-contradiction depends on its coherence with a
system of beliefs, and that law were supposed to be false, then propositions
could neither cohere nor fail to cohere. In this case, the coherence theory of
truth completely breaks down since propositions cannot cohere with
each other.
Coherentists have a plausible response to this objection. They may
hold that the law of non-contradiction, like any other truth, is true
because it coheres with a system of beliefs. In particular, the law of
non-contradiction is supported by the belief that, for example,
communication and reasoning would be impossible unless every system of
beliefs contains something like the law of non-contradiction (and the
belief that communication and reasoning are possible). It is true
that, as Russell says, if the law is supposed not to cohere with a
system of beliefs, then propositions can neither cohere nor fail to
cohere. However, coherence theorists may hold, they do not suppose the
law of non-contradiction to be false. On the contrary, they are likely
to hold that any coherent set of beliefs must include the law of
non-contradiction or a similar law.
## 4. New Objections to Coherentism
Paul Thagard is the author of the first of two recent new arguments
against the coherence theory. Thagard states his argument as
follows:
> if there is a world independent of representations of it,
> as historical evidence suggests, then the aim of representation should
> be to describe the world, not just to relate to other
> representations. My argument does not refute the coherence theory, but
> shows that it implausibly gives minds too large a place in
> constituting truth. (Thagard 2007: 29-30)
Thagard's argument seems to be that if there is a mind-independent
world, then our representations are representations of the world. (He
says representations "should be" of the world, but with
the auxiliary verb retained the argument would be invalid.) The
world existed before humans and our representations, including our
propositional representations. (So history and, Thagard would likely
say, our best science tells us.) Therefore, representations,
including propositional representations, are representations of a
mind-independent world. The second sentence of the passage just quoted
suggests that the only way that coherentists can reject this argument
is to adopt some sort of idealism. That is, they can only reject the
minor premiss of the argument as reconstructed. Otherwise they are
committed to saying that propositions represent the world and, Thagard
seems to suggest, this is to say that propositions have the sort of
truth-conditions posited by a correspondence theory. So the coherence
theory is false.
In reply to this argument, coherentists can deny that
propositions are representations of a mind-independent world. To say
that a proposition is true is to say that it is supported by a
specified system of propositions. So, the coherentist can say,
propositions are representations of systems of beliefs, not
representations of a mind-independent world. To assert a proposition
is to assert that it is entailed by a system of beliefs. The
coherentist holds that even if there is a mind-independent world, it
does not follow that "the point" of representations is
to represent this world. If coherentists have been led to their
position by an epistemological route, they believe that we cannot
"get outside" our system of beliefs. If we cannot get
outside of our system of beliefs, then it is hard to see how we can be
said to represent a mind-independent reality.
Colin McGinn has proposed the other new objection to coherentism. He
argues (McGinn 2002: 195) that coherence theorists are committed to
idealism. Like Thagard, he takes idealism to be obviously false, so
the argument is a reductio. McGinn's argument runs as
follows. Coherentists are committed to the view that, for example,
'Snow falls from the sky' is true iff the belief that snow
falls from the sky coheres with other beliefs. Now it follows from
this and the redundancy biconditional (*p* is true
iff *p*) that snow falls from the sky iff the belief that snow
falls from the sky coheres with other beliefs. It appears then that
the coherence theorist is committed to the view that snow could not
fall from the sky unless the belief that snow falls from the sky
coheres with other beliefs. From this it follows that how things are
depends on what is believed about them. This seems strange to McGinn
since he thinks, reasonably, that snow could fall from the sky even if
there were no beliefs about snow, or anything else. The linking of how
things are and how they are believed to be leads McGinn to say that
coherentists are committed to idealism, this being the view that how
things are is mind-dependent.
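McGinn's reasoning can be laid out schematically. The following reconstruction is not McGinn's own notation: \(C(p)\) is shorthand, introduced here, for "the belief that \(p\) coheres with other beliefs".

```latex
% Schematic reconstruction of McGinn's reductio.
% C(p) is our shorthand for "the belief that p coheres with other beliefs".
\begin{align*}
  &(1)\; \mathrm{True}(p) \leftrightarrow C(p) && \text{coherence theory of truth} \\
  &(2)\; \mathrm{True}(p) \leftrightarrow p    && \text{redundancy biconditional} \\
  &(3)\; p \leftrightarrow C(p)                && \text{from (1) and (2)}
\end{align*}
```

With \(p\) as "snow falls from the sky", (3) ties the weather to what is believed, which is the mind-dependence McGinn finds objectionable. The step is damaging only if the biconditionals are read as explanatory ("true *because*") rather than merely material.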
Coherentists have a response to this objection. McGinn's argument
works because he takes it that the redundancy biconditional means
something like "*p* is true
because *p*". Only if redundancy biconditionals are
understood in this way does McGinn's argument go through. McGinn needs
to be talking about what makes "Snow falls from the sky"
true for his reductio to work. Otherwise, coherentists who reject his
argument cannot be charged with idealism. He assumes, in a way that a
coherence theorist can regard as question-begging, that the truth-maker
of the sentence in question is an objective way the world
is. Coherentists deny that any sentences are made true by objective
conditions. In particular, they hold that the falling of snow from the
sky does not make "Snow falls from the sky"
true. Coherentists hold that it, like any other sentence, is true
because it coheres with a system of beliefs. So coherentists appear to
have a plausible defence against McGinn's objection. |
## 1. History of the Correspondence Theory
The correspondence theory is often traced back to Aristotle's
well-known definition of truth (*Metaphysics* 1011b25):
"To say of what is that it is not, or of what is not that it is,
is false, while to say of what is that it is, and of what is not that
it is not, is true"--but virtually identical formulations
can be found in Plato (*Cratylus* 385b2, *Sophist*
263b). It is noteworthy that this definition does not highlight the
basic correspondence intuition. Although it does allude to a relation
(saying something *of* something) to reality (what
*is*), the relation is not made very explicit, and there is no
specification of what on the part of reality is responsible for the
truth of a saying. As such, the definition offers a muted, relatively
minimal version of a correspondence theory. (For this reason it has
also been claimed as a precursor of deflationary theories of truth.)
Aristotle sounds much more like a genuine correspondence theorist in
the *Categories* (12b11, 14b14), where he talks of underlying
things that make statements true and implies that these things
(*pragmata*) are logically structured situations or facts
(viz., *his sitting* and *his not sitting* are said to
underlie the statements "He is sitting" and "He is
not sitting", respectively). Most influential is
Aristotle's claim in *De Interpretatione* (16a3) that
thoughts are "likenesses" (*homoiomata*) of
things. Although he nowhere defines truth in terms of a
thought's likeness to a thing or fact, it is clear that such a
definition would fit well into his overall philosophy of
mind. (Cf. Crivelli 2004; Szaif 2006.)
### 1.1 Metaphysical and Semantic Versions
In medieval authors we find a division between
"metaphysical" and "semantic" versions of the
correspondence theory. The former are indebted to the
truth-as-likeness theme suggested by Aristotle's overall views,
the latter are modeled on Aristotle's more austere definition
from *Metaphysics* 1011b25.
The *metaphysical version* presented by Thomas Aquinas is the
best known: "*Veritas est adaequatio rei et
intellectus*" (Truth is the equation of thing and
intellect), which he restates as: "A judgment is said to be true
when it conforms to the external reality". He tends to use
"*conformitas*" and
"*adaequatio*", but also uses
"*correspondentia*", giving the latter a more
generic sense (*De Veritate*, Q.1, A.1-3; cf. *Summa
Theologiae*, Q.16). Aquinas credits the Neoplatonist Isaac Israeli
with this definition, but there is no such definition in
Isaac. Correspondence formulations can be traced back to the Academic
skeptic Carneades, 2nd century B.C., whom Sextus Empiricus
(*Adversos Mathematicos*, vii, 168) reports as having taught
that a presentation "is true when it is in accord
(*symphonos*) with the object presented, and false when it is
in discord with it". Similar accounts can be found in various
early commentators on Plato and Aristotle (cf. Künne 2003,
chap. 3.1), including some Neoplatonists: Proklos (*In Tim*.,
II 287, 1) speaks of truth as the agreement or adjustment
(*epharmoge*) between knower and the known. Philoponus (*In
Cat*., 81, 25-34) emphasizes that truth is neither in the things
or states of affairs (*pragmata*) themselves, nor in the
statement itself, but lies in the agreement between the two. He gives
the simile of the fitting shoe, the fit consisting in a relation
between shoe and foot, not to be found in either one by itself. Note
that his emphasis on the relation as opposed to its relata is laudable
but potentially misleading, because *x*'s truth (its being
true) is not to be identified with a relation, R, between *x*
and *y*, but with a general relational property of *x*,
taking the form ([?]*y*)(*x*R*y* &
F*y*). Further early correspondence formulations can be found in
Avicenna (*Metaphysica*, 1.8-9) and Averroes (*Tahafut*,
103, 302). They were introduced to the scholastics by William of
Auxerre, who may have been the intended recipient of Aquinas'
mistaken attribution (cf. Boehner 1958; Wolenski 1994).
Aquinas' balanced formula "equation of thing *and*
intellect" is intended to leave room for the idea that
"true" can be applied not only to thoughts and judgments
but also to things or persons (e.g. a true friend). Aquinas explains
that a thought is said to be true *because* it conforms to
reality, whereas a thing or person is said to be true *because*
it conforms to a thought (a friend is true insofar as, and because,
she conforms to our, or God's, conception of what a friend ought
to be). Medieval theologians regarded both, judgment-truth as well as
thing/person-truth, as somehow flowing from, or grounded in, the
deepest truth which, according to the Bible, is God: "I am the
way and the truth and the life" (John 14, 6). Their attempts to
integrate this Biblical passage with more ordinary thinking involving
truth gave rise to deep metaphysico-theological reflections. The
notion of thing/person-truth, which thus played a very important role
in medieval thinking, is disregarded by modern and contemporary
analytic philosophers but survives to some extent in existentialist
and continental philosophy.
Medieval authors who prefer a *semantic version* of the
correspondence theory often use a peculiarly truncated formula to
render Aristotle's definition: A (mental) sentence is true if
and only if, as it signifies, so it is (*sicut significat, ita
est*). This emphasizes the semantic relation
of *signification* while remaining maximally elusive about what
the "it" is that is signified by a true sentence and
de-emphasizing the correspondence relation (putting it into the little
words "as" and "so"). Foreshadowing a favorite
approach of the 20th century, medieval semanticists like Ockham
(*Summa Logicae*, II) and Buridan (*Sophismata*, II)
give exhaustive lists of different truth-conditional clauses for
sentences of different grammatical categories. They refrain from
associating true sentences in general with items from a single
ontological category. (Cf. Moody 1953; McCord Adams 1987; Perler
2006.)
Authors of the modern period generally convey the impression that the
correspondence theory of truth is far too obvious to merit much, or
any, discussion. Brief statements of some version or other can be
found in almost all major writers; see e.g.: Descartes 1639, ATII 597;
Spinoza, *Ethics*, axiom vi; Locke, *Essay*, 4.5.1;
Leibniz, *New Essays*, 4.5.2; Hume, *Treatise*, 3.1.1;
and Kant 1787, B82. Berkeley, who does not seem to offer any account
of truth, is a potentially significant exception. Due to the influence
of Thomism, metaphysical versions of the theory are much more popular
with the moderns than semantic versions. But since the moderns
generally subscribe to a representational theory of the mind (the
theory of ideas), they would seem to be ultimately committed to
spelling out relations like correspondence or conformity in terms of a
psycho-semantic representation relation holding between ideas, or
sentential sequences of ideas (Locke's "mental
propositions"), and appropriate portions of reality, thereby
effecting a merger between metaphysical and semantic versions of the
correspondence theory.
### 1.2 Object-Based and Fact-Based Versions
It is helpful to distinguish between "object-based" and
"fact-based" versions of correspondence theories,
depending on whether the corresponding portion of reality is said to
be an object or a fact (cf. Künne 2003, chap. 3).
Traditional versions of *object-based* theories assumed that
the truth-bearing items (usually taken to be judgments) have
subject-predicate structure. An object-based definition of truth might
look like this:
> A judgment is true if and only if its predicate
> corresponds to its object (i.e., to the object referred to by the
> subject term of the judgment).
Note that this actually involves *two relations* to an object:
(i) a reference relation, holding between the subject term of the
judgment and the object the judgment is about (*its* object);
and (ii) a correspondence relation, holding between the predicate term
of the judgment and a property of the object. Owing to its reliance on
the subject-predicate structure of truth-bearing items, the account
suffers from an inherent limitation: it does not cover truthbearers
that lack subject-predicate structure (e.g. conditionals,
disjunctions), and it is not clear how the account might be extended
to cover them. The problem is obvious and serious; it was nevertheless
simply ignored in most writings. Object-based correspondence was the
norm until relatively recently.
Object-based correspondence became the norm through Plato's
pivotal engagement with *the problem of falsehood*, which was
apparently notorious at its time. In a number of dialogues, Plato
comes up against an argument, advanced by various Sophists, to the
effect that false judgment is impossible--roughly: To judge
falsely is to judge *what is not*. But one cannot judge what is
not, for it is not there to be judged. To judge something that is not
is to judge nothing, hence, not to judge at all. Therefore, false
judgment is impossible. (Cf. *Euthydemus*
283e-288a; *Cratylus* 429c-e; *Republic*
478a-c; *Theaetetus* 188d-190e.) Plato has no good answer to
this patent absurdity until the *Sophist* (236d-264b), where he
finally confronts the issue at length. The key step in his solution is
the analysis of truthbearers as structured complexes. A simple
sentence, such as "Theaetetus sits.", though simple as a
sentence, is still a complex whole consisting of words of different
kinds--a name (*onoma*) and a verb
(*rhema*)--having different functions. By weaving together
verbs with names the speaker does not just name a number of things,
but accomplishes something: meaningful speech (*logos*)
expressive of the interweaving of ideas (*eidon
symploken*). The simple sentence is true when Theaetetus, the
person named by the name, is in the state of sitting, ascribed to him
through the verb, and false, when Theaetetus is not in that state but
in another one (cf. 261c-263d; see Denyer 1991; Szaif
1998). Only *things that are* show up in this account: in the
case of falsehood, the ascribed state still is, but it is a state
different from the one Theaetetus is in. The account is extended from
speech to thought and belief via Plato's well known thesis that
"thought is speech that occurs without voice, inside the soul in
conversation with itself" (263e)--the historical origin of
the language-of-thought hypothesis. The account does not take into
consideration sentences that contain a name of something that is not
("Pegasus flies"), thus bequeathing to posterity a
residual problem that would become more notorious than the problem of
falsehood.
Aristotle, in *De Interpretatione*, adopts Plato's
account without much ado--indeed, the beginning of *De
Interpretatione* reads like a direct continuation of the passages
from the *Sophist* mentioned above. He emphasizes that truth
and falsehood have to do with combination and separation (cf. *De
Int.* 16a10; in *De Anima* 430a25, he says: "where
the alternative of true and false applies, there we always find a sort
of combining of objects of thought in a quasi-unity"). Unlike
Plato, Aristotle feels the need to characterize simple affirmative and
negative statements (predications) separately--translating rather
more literally than is usual: "An affirmation is a predication
of something toward something, a negation is a predication of
something away from something" (*De Int.* 17a25). This
characterization reappears early in the *Prior Analytics*
(24a). It thus seems fair to say that the subject-predicate analysis
of simple declarative sentences--the most basic feature of
Aristotelian term logic which was to reign supreme for many
centuries--had its origin in Plato's response to a
sophistical argument against the possibility of falsehood. One may
note that Aristotle's famous definition of truth (see Section 1)
actually begins with the definition of falsehood.
*Fact-based* correspondence theories became prominent only in
the 20th century, though one can find remarks in Aristotle that fit
this approach (see Section 1)--somewhat surprisingly in light of
his repeated emphasis on subject-predicate structure wherever truth
and falsehood are concerned. Fact-based theories do not presuppose
that the truth-bearing items have subject-predicate structure; indeed,
they can be stated without any explicit reference to the structure of
truth-bearing items. The approach thus embodies an alternative
response to the problem of falsehood, a response that may claim to
extricate the theory of truth from the limitations imposed on it
through the presupposition of subject-predicate structure inherited
from the response to the problem of falsehood favored by Plato,
Aristotle, and the medieval and modern tradition.
The now classical formulation of a fact-based correspondence theory
was foreshadowed by Hume (*Treatise*, 3.1.1) and Mill
(*Logic*, 1.5.1). It appears in its canonical form early in the
20th century in Moore (1910-11, chap. 15) and Russell: "Thus a
belief is true when there is a corresponding fact, and is false when
there is no corresponding fact" (1912, p. 129; cf. also his
1905, 1906, 1910, and 1913). The self-conscious emphasis
on *facts* as the corresponding portions of reality--and a
more serious concern with problems raised by
falsehood--distinguishes this version from its
foreshadowings. Russell and Moore's forceful advocacy of truth
as correspondence to a fact was, at the time, an integral part of
their defense of metaphysical realism. Somewhat ironically, their
formulations are indebted to their idealist opponents, F. H. Bradley
(1883, chaps. 1 & 2) and H. H. Joachim (1906); the latter, an early
advocate of the competing coherence theory, had set up a
correspondence-to-fact account of truth as the main target of his
attack on realism. Later, Wittgenstein (1921) and Russell (1918)
developed "logical atomism", which introduces an important
modification of the fact-based correspondence approach (see below,
Section 7.1). Further modifications of the correspondence theory,
bringing a return to more overtly semantic and broadly object-based
versions, were influenced by Tarski's (1935) technical work on
truth (cf. Field 1972, Popper 1972).
## 2. Truthbearers, Truthmakers, Truth
### 2.1 Truthbearers
Correspondence theories of truth have been given for beliefs,
thoughts, ideas, judgments, statements, assertions, utterances,
sentences, and propositions. It has become customary to talk
of *truthbearers* whenever one wants to stay neutral between
these choices. Five points should be kept in mind:
1. The term "truthbearer" is somewhat misleading. It is
intended to refer to bearers of truth or falsehood
(truth-value-bearers), or alternatively, to things of which it makes
sense to ask whether they are true or false, thus allowing for the
possibility that some of them might be neither.
2. One distinguishes between
*secondary* and *primary* truthbearers. Secondary
truthbearers are those whose *truth-values* (truth or
falsehood) are derived from the truth-values of primary truthbearers,
whose truth-values are not derived from any other
truthbearers. Consequently, the term "true" is usually
regarded as ambiguous, taking its primary meaning when applied to
primary truthbearers and various secondary meanings when applied to
other truthbearers. This is, however, not a brute ambiguity, since the
secondary meanings are supposed to be derived, i.e. definable from,
the primary meaning together with additional relations. For example,
one might hold that propositions are true or false in the primary
sense, whereas sentences are true or false in a secondary sense,
insofar as they *express* propositions that are true or false
(in the primary sense). The meanings of "true", when
applied to truthbearers of different kinds, are thus connected in a
manner familiar from what Aristotelians called
"analogical" uses of a term--nowadays one would call
this "focal meaning"; e.g., "healthy" in
"healthy organism" and "healthy food", the
latter being defined as healthy in the secondary sense of contributing
to the healthiness (primary sense) of an organism.
3. It is often unproblematic to advocate one theory of truth for
bearers of one kind and another theory for bearers of a different kind
(e.g., a deflationary theory of truth, or an identity theory, applied
to propositions, could be a component of some form of correspondence
theory of truth for sentences). Different theories of truth applied to
bearers of different kinds do not automatically compete. The standard
segregation of truth theories into competing camps (found in
textbooks, handbooks, and dictionaries) proceeds under the
assumption--really a pretense--that they are intended for
primary truthbearers of the same kind.
4. Confusingly, there is little agreement as to which entities are
properly taken to be primary truthbearers. Nowadays, the main
contenders are public language sentences, sentences of the language of
thought (sentential mental representations), and propositions. Popular
earlier contenders--beliefs, judgments, statements, and
assertions--have fallen out of favor, mainly for two reasons:
1. The problem of *logically complex truthbearers*. A
subject, S, may hold a disjunctive belief (the baby will be a boy or
the baby will be a girl), while believing only one, or neither, of the
disjuncts. Also, S may hold a conditional belief (if whales are fish,
then some fish are mammals) without believing the antecedent or the
consequent. Also, S will usually hold a negative belief (not everyone
is lucky) without believing what is negated. In such cases, the
truth-values of S's complex beliefs depend on the truth-values
of their constituents, although the constituents may not be believed
by S or by anyone. This means that a view according to which beliefs
are primary truthbearers seems unable to account for how the
truth-values of complex beliefs are connected to the truth-values of
their simpler constituents--to do this one needs to be able to
apply truth and falsehood to belief-constituents *even when they
are not believed*. This point, which is equally fundamental for a
proper understanding of logic, was made by all early advocates of
propositions (cf. Bolzano 1837, I.§§22, 34; Frege 1879, §§2-5; Husserl
1900, I.§11; Meinong 1902, §6). The problem arises in much the same
form for views that would take judgments, statements, or assertions as
primary truthbearers. The problem is not easily evaded. Talk of
unbelieved beliefs (unjudged judgments, unstated statements,
unasserted assertions) is either absurd or simply amounts to talk of
unbelieved (unjudged, unstated, unasserted) propositions or
sentences. It is noteworthy, incidentally, that quite a few
philosophical proposals (concerning truth as well as other matters)
run afoul of the simple observation that there are unasserted and
unbelieved truthbearers (cf. Geach 1960 & 1965).
2. The duality of *state/content*
a.k.a. *act/object*. The noun "belief" can refer
to *the state of believing* or to its content, i.e.,
to *what is believed*. If the former, the state of believing,
can be said to be true or false at all, which is highly questionable,
then only insofar as the latter, what is believed, is true or
false. Similarly for nouns referring to mental acts or their objects
(contents), such as "judgment", "statement",
and "assertion".
5. Mental sentences were the preferred primary truthbearers
throughout the medieval period. They were neglected in the first half
of the 20th century, but made a comeback in the second half through
the revival of the representational theory of the mind (especially in
the form of the language-of-thought hypothesis, cf. Fodor
1975). Somewhat confusingly (to us now), for many centuries the term
"proposition" (*propositio*) was reserved
exclusively for sentences, written, spoken or mental. This use was
made official by Boethius in the 6th century, and is still found in
Locke's *Essay* in 1705 and in
Mill's *Logic* in 1843. Some time after that, e.g., in
Moore's 1901-02, "proposition" switched sides, the
term now being used for *what is said* by uttering a sentence,
for what is believed, judged, stated, assumed (etc.)--with
occasional reversions to medieval usage, e.g. in Russell (1918,
1919).
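The derivation pattern described in point 2 above (secondary truth for sentences via the expression of primarily true propositions) can be put schematically; the quantificational formulation below is one natural way to spell it out, not the entry's own:

```latex
\[
\mathrm{True}_{\mathrm{sec}}(s) \;\leftrightarrow\;
\exists p\,\big(\mathrm{Expresses}(s,p) \wedge \mathrm{True}_{\mathrm{prim}}(p)\big)
\]
% Read: a sentence s is true in the secondary sense iff it expresses
% some proposition p that is true in the primary sense.
```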
### 2.2 Truthmakers
Talk of *truthmakers* serves a function similar, but
correlative, to talk of truthbearers. A truthmaker is anything that
makes some truthbearer true. Different versions of the correspondence
theory will have different, and often competing, views about what sort
of items true truthbearers correspond to (facts, states of affairs,
events, things, tropes, properties). It is convenient to talk of
truthmakers whenever one wants to stay neutral between these
choices. Four points should be kept in mind:
1. The notion of a truthmaker is tightly connected with, and
dependent on, the relational notion of *truthmaking*: a
truthmaker is whatever stands in the truthmaking relation to some
truthbearer. Despite the causal overtones of "maker" and
"making", this relation is usually not supposed to be a
causal relation.
2. The terms "truthmaking" and "truthmaker"
are ambiguous. For illustration, consider a classical correspondence
theory on which *x* is true if and only if *x*
corresponds to some fact. One can say (*a*) that *x* is
made true by *a fact*, namely the fact (or a fact)
that *x* corresponds to. One can also say (*b*)
that *x* is made true by *x's correspondence to a
fact*. Both uses of "is made true by" are correct and
both occur in discussions of truth. But they are importantly different
and must be distinguished. The (*a*)-use is usually the
intended one; it expresses a relation peculiar to truth and leads to a
use of "truthmaker" that actually picks out the items that
would normally be intended by those using the term. The
(*b*)-use does not express a relation peculiar to truth; it is
just an instance (for "F" = "true") of the
generic formula "what makes an F-thing an F" that can be
employed to elicit the definiens of a proposed definition of
F. Compare: what makes an even number even is its divisibility by 2;
what makes a right action right is its having better consequences than
available alternative actions. Note that *anyone* proposing a
definition or account of truth can avail themselves of the notion of
truthmaking in the (*b*)-sense; e.g., a coherence theorist,
advocating that a belief is true if and only if it coheres with other
beliefs, can say: what makes a true belief true is its coherence with
other beliefs. So, on the (*b*)-use, "truthmaking"
and "truthmaker" do not signal any affinity with the basic
idea underlying the correspondence theory of truth, whereas on the
(*a*)-use these terms do signal such an affinity.
3. Talk of truthmaking and truthmakers goes well with the basic idea
underlying the correspondence theory; hence, it might seem natural to
describe a traditional fact-based correspondence theory as maintaining
that the truthmakers are facts and that the correspondence relation is
the truthmaking relation. However, the assumption that the
correspondence relation can be regarded as (a species of) the
truthmaking relation is dubious. Correspondence appears to be
a *symmetric* relation (if *x* corresponds
to *y*, then *y* corresponds to *x*), whereas it
is usually taken for granted that truthmaking is
an *asymmetric* relation, or at least not a symmetric one. It
is hard to see how a symmetric relation could be (a species of) an
asymmetric or non-symmetric relation (cf. David 2009).
4. Talk of truthmaking and truthmakers is frequently employed during
informal discussions involving truth but tends to be dropped when a
more formal or official formulation of a theory of truth is produced
(one reason being that it seems circular to define or explain truth in
terms of truthmakers or truthmaking). However, in recent years, the
informal talk has been turned into an official doctrine:
"truthmaker theory". This theory should be distinguished
from informal truthmaker talk: not everyone employing the latter would
subscribe to the former. Moreover, truthmaker theory should not simply
be assumed to be a version of the correspondence theory; indeed, some
advocates present it as a competitor to the correspondence theory (see
below, Section 8.5).
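The (*b*)-use of "truthmaking" discussed in point 2 above is an instance of a generic schema; stated abstractly (the formalization is ours):

```latex
% Given any proposed definition of F:
\[
\forall x\,(Fx \leftrightarrow Dx)
\]
% the (b)-use licenses: "what makes an F-thing F is its being D".
% Instances: F = even number,              D = divisible by 2;
%            F = true (coherence theory),  D = coheres with other beliefs.
```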
### 2.3 Truth
The abstract noun "truth" has various uses. (*a*)
It can be used to refer to the general relational *property*
otherwise referred to as *being true*; though the latter label
would be more perspicuous, it is rarely used, even in philosophical
discussions. (*b*) The noun "truth" can be used to
refer to the *concept* that "picks out" the
property and is expressed in English by the adjective
"true". Some authors do not distinguish between concept
and property; others do, or should: an account of the concept might
differ significantly from an account of the property. To mention just
one example, one might maintain, with some plausibility, that an
account of the *concept* ought to succumb to the liar paradox
(see the entry on the liar paradox),
otherwise it wouldn't be an adequate account of *our*
concept of truth; this idea is considerably less plausible in the case
of the property. Any proposed "definition of truth" might
be intended as a definition of the property or of the concept or both;
its author may or may not be alive to the difference. (*c*) The
noun "truth" can be used, finally, to refer to some set of
true truthbearers (possibly unknown), as in: "The truth is out
there", and: "The truth about this matter will never be
known".
## 3. Simple Versions of the Correspondence Theory
The traditional centerpiece of any correspondence theory is a
definition of truth. Nowadays, a correspondence definition is most
likely intended as a "real definition", i.e., as a
definition of the property, which does not commit its advocate to the
claim that the definition provides a synonym for the term
"true". Most correspondence theorists would consider it
implausible and unnecessarily bold to maintain that
"true" *means the same as* "corresponds with
a fact". Some simple forms of correspondence definitions of
truth should be distinguished ("iff" means "if and
only if"; the variable, "*x*", ranges over
whatever truthbearers are taken as primary; the notion of
correspondence might be replaced by various related notions):
> (1) *x* is true iff *x* corresponds to some fact;
> *x* is false iff *x* does not correspond to any fact.
>
> (2) *x* is true iff *x* corresponds to some state of
> affairs that obtains;
> *x* is false iff *x* corresponds to some state of affairs
> that does not obtain.
Both forms invoke portions of reality--facts/states of
affairs--that are typically denoted by that-clauses or by
sentential gerundives, viz. the fact/state of affairs *that snow is
white*, or the fact/state of affairs of *snow's being
white*. (2)'s definition of falsehood is committed to there
being (existing) entities of this sort that nevertheless fail to
obtain, such as *snow's being green*. (1)'s
definition of falsehood is not so committed: to say that a fact does
not obtain means, at best, that there is no such fact, that no such
fact exists. It should be noted that this terminology is not
standardized: some authors use "state of affairs" much
like "fact" is used here (e.g. Armstrong 1997). The
question whether non-obtaining beings of the relevant sort are to be
accepted is the substantive issue behind such terminological
variations. The difference between (2) and (1) is akin to the
difference between Platonism about properties (embraces uninstantiated
properties) and Aristotelianism about properties (rejects
uninstantiated properties).
Advocates of (2) hold that facts *are* states of affairs that
obtain, i.e., they hold that their account of truth is in effect an
analysis of (1)'s account of truth. So disagreement turns
largely on the treatment of falsehood, which (1) simply identifies
with the absence of truth.
The following points might be made for preferring (2) over (1):
(*a*) Form (2) does not imply that things outside the category
of truthbearers (tables, dogs) are false just because they don't
correspond to any facts. One might think this "flaw" of
(1) is easily repaired: just put an explicit specification of the
desired category of truthbearers into both sides of (1). However, some
worry that truthbearer categories, e.g. declarative sentences or
propositions, cannot be defined without invoking truth and falsehood,
which would make the resultant definition implicitly
circular. (*b*) Form (2) allows for items within the category
of truthbearers that are neither true nor false, i.e., it allows for
the failure of bivalence. Some, though not all, will regard this as a
significant advantage. (*c*) If the primary truthbearers are
sentences or mental states, then states of affairs could be their
meanings or contents, and the correspondence relation in (2) could be
understood accordingly, as the relation of representation,
signification, meaning, or having-as-content. Facts, on the other
hand, cannot be identified with the meanings or contents of sentences
or mental states, on pain of the absurd consequence that false
sentences and beliefs have no meaning or content. (*d*) Take a
truth of the form '*p* or *q*', where
'*p*' is true and '*q*'
false. What are the constituents of the corresponding fact? Since
'*q*' is false, they cannot both be facts
(cf. Russell 1906-07, p. 47f.). Form (2) allows that the fact
corresponding to '*p* or *q*' is an
obtaining disjunctive state of affairs composed of a state of affairs
that obtains and a state of affairs that does not obtain.
The main point in favor of (1) over (2) is that (1) is not committed
to counting non-obtaining states of affairs, like the state of affairs
that snow is green, as constituents of reality.
(One might observe that, strictly speaking, (1) and (2), being
biconditionals, are not ontologically committed to anything. Their
respective commitments to facts and states of affairs arise only when
they are combined with claims to the effect that there is something
that is true and something that is false. The discussion assumes some
such claims as given.)
Both forms, (1) and (2), should be distinguished from:
> (3) *x* is true iff *x* corresponds to some fact that
> exists;
> *x* is false iff *x* corresponds to some fact that does
> not exist,
which is a confused version of (1), or a confused version of (2), or,
if unconfused, signals commitment to Meinongianism, i.e., the thesis
that there are things/facts that do not exist. The lure of (3) stems
from the desire to offer more than a purely negative correspondence
account of falsehood while avoiding commitment to non-obtaining states
of affairs. Moore at times succumbs to (3)'s temptations
(1910-11, pp. 267 & 269, but see p. 277). It can also be found in
the 1961 translation of Wittgenstein (1921, 4.25), who uses
"state of affairs" (*Sachverhalt*) to refer to
(atomic) facts. The translation has Wittgenstein saying that an
elementary proposition is false, when the corresponding state of
affairs (atomic fact) does not exist--but the German original of
the same passage looks rather like a version of (2). Somewhat
ironically, a definition of form (3) reintroduces Plato's
problem of falsehood into a fact-based correspondence theory, i.e.,
into a theory of the sort that was supposed to provide an alternative
solution to that very problem (see Section 1.2).
A fourth simple form of correspondence definition was popular for a
time (cf. Russell 1918, secs. 1 & 3; Broad 1933, IV.2.23; Austin
1950, fn. 23), but seems to have fallen out of favor:
> (4) *x* is true iff *x* corresponds (agrees) with some
> fact;
> *x* is false iff *x* mis-corresponds (disagrees) with
> some fact.
This formulation attempts to avoid (2)'s commitment to
non-obtaining states of affairs and (3)'s commitment to
non-existent facts by invoking the relation of mis-correspondence, or
disagreement, to account for falsehood. It differs from (1) in that it
attempts to keep items outside the intended category
of *x*'s from being false: supposedly, tables and dogs
cannot mis-correspond with a fact. Main worries about (4) are:
(*a*) its invocation of an additional, potentially mysterious,
relation, which (*b*) seems difficult to tame: Which fact is
the one that mis-corresponds with a given falsehood? and: What keeps
a truth, which by definition corresponds with some fact, from also
mis-corresponding with some other fact, i.e., from being a falsehood
as well?
In the following, I will treat definitions (1) and (2) as
paradigmatic; moreover, since advocates of (2) agree that obtaining
states of affairs *are* facts, it is often convenient to
condense the correspondence theory into the simpler formula provided
by (1), "truth is correspondence to a fact", at least as
long as one is not particularly concerned with issues raised by
falsehood.
## 4. Arguments for the Correspondence Theory
The main positive argument given by advocates of the correspondence
theory of truth is its obviousness. Descartes: "I have never had
any doubts about truth, because it seems a notion so transcendentally
clear that nobody can be ignorant of it...the word
'truth', in the strict sense, denotes the conformity of
thought with its object" (1639, AT II 597). Even philosophers
whose overall views may well lead one to expect otherwise tend to
agree. Kant: "The nominal definition of truth, that it is the
agreement of [a cognition] with its object, is assumed as
granted" (1787, B82). William James: "Truth, as any
dictionary will tell you, is a property of certain of our ideas. It
means their 'agreement', as falsity means their
disagreement, with 'reality'" (1907,
p. 96). Indeed, *The Oxford English Dictionary* tells us:
"Truth, n. Conformity with fact; agreement with
reality".
In view of its claimed obviousness, it would seem interesting to learn
how popular the correspondence theory actually is. There are some
empirical data. The *PhilPapers Survey* (conducted in 2009;
cf. Bourget and Chalmers 2014), more specifically, the part of the
survey targeting all regular faculty members in 99 leading departments
of philosophy, reports the following responses to the question:
"Truth: correspondence, deflationary, or epistemic?"
Accept or lean toward: correspondence 50.8%; deflationary 24.8%; other
17.5%; epistemic 6.9%. The data suggest that correspondence-type
theories may enjoy a weak majority among professional philosophers and
that the opposition is divided. This fits with the observation that
typically, discussions of the nature of truth take some version of the
correspondence theory as the default view, the view to be criticized
or to be defended against criticism.
Historically, the correspondence theory, usually in an object-based
version, was taken for granted, so much so that it did not acquire
this name until comparatively recently, and explicit arguments for the
view are very hard to find. Since the (comparatively recent) arrival
of apparently competing approaches, correspondence theorists have
developed negative arguments, defending their view against objections
and attacking (sometimes ridiculing) competing views.
## 5. Objections to the Correspondence Theory
***Objection 1*:** Definitions like (1) or (2) are
too narrow. Although they apply to truths from some domains of
discourse, e.g., the domain of science, they fail for others, e.g.
the domain of morality: there are no moral facts.
The objection recognizes moral truths, but rejects the idea that
reality contains moral facts for moral truths to correspond to. Logic
provides another example of a domain that has been
"flagged" in this way. The logical positivists recognized
logical truths but rejected logical facts. Their intellectual
ancestor, Hume, had already given two definitions of
"true", one for logical truths, broadly conceived, the
other for non-logical truths: "Truth or falsehood consists in an
agreement or disagreement either to the *real* relations of
ideas, or to *real* existence and matter of fact"
(Hume, *Treatise*, 3.1.1, cf. 2.3.10; see also
Locke, *Essay*, 4.5.6, for a similarly two-pronged account but
in terms of object-based correspondence).
There are four possible responses to objections of this sort:
(*a*) Noncognitivism, which says that, despite appearances to
the contrary, claims from the flagged domain are not truth-evaluable
to begin with, e.g., moral claims are commands or expressions of
emotions disguised as truthbearers; (*b*) Error theory, which
says that all claims from the flagged domain are false; (*c*)
Reductionism, which says that truths from the flagged domain
correspond to facts of a different domain regarded as unproblematic,
e.g., moral truths correspond to social-behavioral facts, logical
truths correspond to facts about linguistic conventions; and
(*d*) Standing firm, i.e., embracing facts of the flagged
domain.
The objection in effect maintains that there are different brands
of *truth* (of the property *being true*, not just
different brands of truths) for different domains. On the face of it,
this conflicts with the observation that there are many obviously
valid arguments combining premises from flagged and unflagged
domains. The observation is widely regarded as refuting
noncognitivism, once the most popular (concessive) response to the
objection.
In connection with this objection, one should take note of the
recently developed "multiple realizability" view of truth,
according to which truth is *not* to be *identified*
with correspondence to fact but can be *realized by*
correspondence to fact for truthbearers of some domains of discourse
and by other properties for truthbearers of other domains of
discourse, including "flagged" domains. Though it retains
important elements of the correspondence theory, this view does not,
strictly speaking, offer a response to the objection on behalf of the
correspondence theory and should be regarded as one of its competitors
(see below, Section 8.2).
***Objection 2*:** Correspondence theories are too
obvious. They are trivial, vacuous, trading in mere platitudes.
Locutions from the "corresponds to the facts"-family are
used regularly in everyday language as idiomatic substitutes for
"true". Such common turns of phrase should not be taken to
indicate commitment to a correspondence *theory* in any serious
sense. Definitions like (1) or (2) merely condense some trivial idioms
into handy formulas; they don't deserve the grand label
"theory": there is no theoretical weight behind them
(cf. Woozley 1949, chap. 6; Davidson 1969; Blackburn 1984,
chap. 7.1).
In response, one could point out: (*a*) Definitions like (1) or
(2) are "mini-theories"--mini-theories are quite
common in philosophy--and it is not at all obvious that they are
vacuous merely because they are modeled on common usage. (*b*)
There are correspondence theories that go beyond these
definitions. (*c*) The complaint implies that definitions like
(1) and/or (2) are generally accepted and are, moreover, so shallow
that they are compatible with any deeper theory of truth. This makes
it rather difficult to explain why some thinkers emphatically reject
all correspondence formulations. (*d*) The objection implies
that the correspondence of *S*'s belief with a fact could
be said to consist in, e.g., the belief's coherence
with *S*'s overall belief system. This is wildly
implausible, even on the most shallow understanding of
"correspondence" and "fact".
***Objection 3*:** Correspondence theories are
too obscure.
Objections of this sort, which are the most common, protest that the
central notions of a correspondence theory carry unacceptable
commitments and/or cannot be accounted for in any respectable manner.
The objections can be divided into objections primarily aimed at the
*correspondence relation* and its relatives (3.C1, 3.C2), and
objections primarily aimed at the notions of *fact*
or *state of affairs* (3.F1, 3.F2):
***3.C1***: The correspondence relation must be
some sort of resemblance relation. But truthbearers do not resemble
anything in the world except other truthbearers--echoing
Berkeley's "an idea can be like nothing but an
idea".
***3.C2***: The correspondence relation is very
mysterious: it seems to reach into the most distant regions of space
(faster than light?) and time (past and future). How could such a
relation possibly be accounted for within a naturalistic framework?
What physical relation could it possibly be?
***3.F1***: Given the great variety of complex
truthbearers, a correspondence theory will be committed to all sorts
of complex "funny facts" that are ontologically
disreputable. Negative, disjunctive, conditional, universal,
probabilistic, subjunctive, and counterfactual facts have all given
cause for complaint on this score.
***3.F2***: All facts, even the most simple ones,
are disreputable. Fact-talk, being wedded to that-clauses, is
entirely parasitic on truth-talk. Facts are too much like
truthbearers. Facts are fictions, spurious sentence-like slices of
reality, "projected from true sentences for the sake of
correspondence" (Quine 1987, p. 213; cf. Strawson 1950).
## 6. Correspondence as Isomorphism
Some correspondence theories of truth are two-liner mini-theories,
consisting of little more than a specific version of (1) or
(2). Normally, one would expect a bit more, even from a philosophical
theory (though mini-theories are quite common in philosophy). One
would expect a correspondence theory to go beyond a mere definition
like (1) or (2) and discharge a triple task: it should tell us about
the workings of the correspondence relation, about the nature of
facts, and about the conditions that determine which truthbearers
correspond to which facts.
One can approach this by considering some general principles a
correspondence theorist might want to add to the central principle to
flesh out her theory. The first such principle says that the
correspondence relation must not collapse into
identity--"It takes two to make a truth" (Austin
1950, p. 118):
>
> *Nonidentity:*
> No truth is identical with a fact
> correspondence to which is sufficient for its being a truth.
>
It would be much simpler to say that no truth is identical with a
fact. However, some authors, e.g. Wittgenstein 1921, hold that a
proposition (*Satz*, his truthbearer) is itself a fact, though
not the same fact as the one that makes the proposition true (see also
King 2007). Nonidentity is usually taken for granted by correspondence
theorists as constitutive of the very idea of a correspondence
theory--authors who advance contrary arguments to the effect that
correspondence must collapse into identity regard their arguments as
objections to any form of correspondence theory (cf. Moore 1901/02,
Frege 1918-19, p. 60).
Concerning the correspondence relation, two aspects can be
distinguished: *correspondence as correlation*
and *correspondence as isomorphism* (cf. Pitcher 1964; Kirkham
1992, chap. 4). Pertaining to the first aspect, familiar from
mathematical contexts, a correspondence theorist is likely to adopt
claim (*a*), and some may in addition adopt claim (*b*),
of:
>
> *Correlation:*
>
> (*a*) Every truth corresponds to exactly one fact;
>
> (*b*) Different truths correspond to different facts.
>
Together, (*a*) and (*b*) say that correspondence is a
one-one relation. This seems needlessly strong, and it is not easy to
find real-life correspondence theorists who explicitly embrace part
(*b*): Why shouldn't different truths correspond to the
same fact, as long as they are not too different? Explicit commitment
to (*a*) is also quite rare. However, correspondence theorists
tend to move comfortably from talk about a given truth to talk
about *the* fact it corresponds to--a move that signals
commitment to (*a*).
Correlation does not imply anything about the inner nature of the
corresponding items. Contrast this with correspondence
as *isomorphism*, which requires the corresponding items to
have the same, or sufficiently similar, constituent structure. This
aspect of correspondence, which is more prominent (and more notorious)
than the previous one, is also much more difficult to make
precise. Let us say, roughly, that a correspondence theorist may want
to add a claim to her theory committing her to something like the
following:
>
> *Structure:*
> If an item of kind K corresponds to a certain
> fact, then they have the same or sufficiently similar structure: the
> overall correspondence between a true K and a fact is a matter of
> part-wise correspondences, i.e. of their having corresponding
> constituents in corresponding places in the same structure, or in
> sufficiently similar structures.
>
The basic idea is that truthbearers and facts are both complex
structured entities: truthbearers are composed of (other truthbearers
and ultimately of) words, or concepts; facts are composed of (other
facts or states of affairs and ultimately of) things, properties, and
relations. The aim is to show how the correspondence relation is
generated from underlying relations between the ultimate constituents
of truthbearers, on the one hand, and the ultimate constituents of
their corresponding facts, on the other. One part of the project will
be concerned with these correspondence-generating relations: it will
lead into a theory that addresses the question how simple words, or
concepts, can be *about* things, properties, and relations;
i.e., it will merge with semantics or psycho-semantics (depending on
what the truthbearers are taken to be). The other part of the project,
the specifically ontological part, will have to provide identity
criteria for facts and explain how their simple constituents combine
into complex wholes. Putting all this together should yield an
account of the conditions determining which truthbearers correspond to
which facts.
Correlation and Structure reflect distinct aspects of
correspondence. One might want to endorse the former without the
latter, though it is hard to see how one could endorse the latter
without embracing at least part (*a*) of the former.
The isomorphism approach offers an answer to objection 3.C1. Although
the truth that the cat is on the mat does not resemble the cat or the
mat (the truth doesn't meow or smell, etc.), it does resemble
the *fact* that the cat is on the mat. This is not a
qualitative resemblance; it is a more abstract, structural
resemblance.
The approach also puts objection 3.C2 in some perspective. The
correspondence relation is supposed to reduce to underlying relations
between words, or concepts, and reality. Consequently, a
correspondence theory is little more than a spin-off from semantics
and/or psycho-semantics, i.e. the theory of intentionality construed
as incorporating a representational theory of the mind (cf. Fodor
1989). This reminds us that, as a relation, correspondence is no
more--but also no less--mysterious than semantic relations
in general. Such relations have some curious features, and they raise
a host of puzzles and difficult questions--most notoriously: Can
they be explained in terms of natural (causal) relations, or do they
have to be regarded as irreducibly non-natural aspects of reality?
Some philosophers have claimed that semantic relations are too
mysterious to be taken seriously, usually on the grounds that they are
not explainable in naturalistic terms. But one should bear in mind
that this is a very general and extremely radical attack on semantics
as a whole, on the very idea that words and concepts can
be *about* things. The common practice to aim this attack
specifically at the correspondence theory seems misleading. As far as
the intelligibility of the correspondence relation is concerned, the
correspondence theory will stand, or fall, with the general theory of
reference and intentionality.
It should be noted, though, that these points concerning objections
3.C1 and 3.C2 are not independent of one's views about the
nature of the primary *truthbearers*. If truthbearers are taken
to be sentences of an ordinary language (or an idealized version
thereof), or if they are taken to be mental representations (sentences
of the language of thought), the above points hold without
qualification: correspondence will be a semantic or psycho-semantic
relation. If, on the other hand, the primary truthbearers are taken to
be *propositions*, there is a complication:
1. On a broadly *Fregean view of propositions*, propositions
are constituted by *concepts* of objects and properties (in the
logical, not the psychological, sense of "concept"). On
this view, the above points still hold, since the relation between
concepts, on the one hand, and the objects and properties they are
concepts *of*, on the other, appears to be a semantic relation,
a concept-semantic relation.
2. On the so-called *Russellian view of propositions* (which
the early Russell inherited mostly from early Moore), propositions are
constituted, not of concepts of objects and properties, but of the
objects and properties themselves (cf. Russell 1903). On this view,
the points above will most likely fail, since the correspondence
relation would appear to collapse into the identity relation when
applied to true Russellian propositions. It is hard to see how a true
Russellian proposition could be anything but a fact: What would a
fact *be*, if not this sort of thing? So the principle of
Nonidentity is rejected, and with it goes the correspondence theory of
truth: "Once it is definitely recognized that the proposition is
to denote, not a belief or form of words, but an object of belief, it
seems plain that a truth differs in no respect from the reality with
which it was supposed merely to correspond" (Moore 1901-02,
p. 717). A simple, fact-based correspondence theory, applied to
propositions understood in the Russellian way, thus reduces to
an *identity theory* of truth, on which a proposition is true
iff it *is* a fact, and false, iff it is not a fact. See below,
Section 8.3; and the entries on
propositions,
singular propositions, and
structured propositions
in this encyclopedia.
But Russellians don't usually renounce the correspondence theory
entirely. Though they have no room for (1) from Section 3, when
applied to propositions as truthbearers, correspondence will enter
into their account of truth for sentences, public or mental. The
account will take the form of Section 3's (2), applied to
categories of truthbearers other than propositions, where Russellian
propositions show up on the right-hand side in the guise of states of
affairs that obtain or fail to obtain. Commitment to states of affairs
in addition to propositions is sometimes regarded with scorn, as a
gratuitous ontological duplication. But Russellians are not committed
to states of affairs *in addition* to propositions, for
propositions, on their view, must already *be* states of
affairs. This conclusion is well nigh inevitable, once true
propositions have been identified with facts. If a true proposition is
a fact, then a false proposition that might have been true would have
been a fact, if it had been true. So, a (contingent) false proposition
must be the same kind of being as a fact, only not a fact--an
unfact; but that just is a non-obtaining state of affairs under a
different name. Russellian propositions are states of affairs: the
false ones are states of affairs that do not obtain, and the true ones
are states of affairs that do obtain.
The Russellian view of propositions is popular nowadays. Somewhat
curiously, contemporary Russellians hardly ever refer to propositions
as facts or states of affairs. This is because they are much concerned
with understanding belief, belief attributions, and the semantics of
sentences. In such contexts, it is more natural to talk
proposition-language than state-of-affairs-language. It feels odd
(wrong) to say that someone believes a state of affairs, or that
states of affairs are true or false. For that matter, it also feels
odd (wrong) to say that some propositions are facts, that facts are
true, and that propositions obtain or fail to obtain. Nevertheless,
all of this must be the literal truth, according to the
Russellians. They have to claim that "proposition" and
"state of affairs", much like "evening star"
and "morning star", are different names for the same
things--they come with different associations and are at home in
somewhat different linguistic environments, which accounts for the
felt oddness when one name is transported to the other's
environment.
Returning to the isomorphism approach in general, on a strict or
naive implementation of this approach, correspondence will be a
one-one relation between truths and corresponding facts, which leaves
the approach vulnerable to objections against funny facts (3.F1): each
true truthbearer, no matter how complex, will be assigned a matching
fact. Moreover, since a strict implementation of isomorphism assigns
corresponding entities to all (relevant) constituents of truthbearers,
complex facts will contain objects corresponding to the logical
constants ("not", "or", "if-then",
etc.), and these "logical objects" will have to be
regarded as constituents of the world. Many philosophers have found it
hard to believe in the existence of all these funny facts and funny
quasi-logical objects.
The isomorphism approach has never been advocated in a fully
naive form, assigning corresponding objects to each and every
wrinkle of our verbal or mental utterings. Instead, proponents try to
isolate the "relevant" constituents of truthbearers
through meaning analysis, aiming to uncover the *logical form*,
or deep structure, behind ordinary language and thought. This deep
structure might then be expressed in an *ideal-language*
(typically, the language of predicate logic), whose syntactic
structure is designed to mirror perfectly the ontological structure of
reality. The resulting view--correspondence as isomorphism
between properly analyzed truthbearers and facts--avoids
assigning strange objects to such phrases as "the average
husband", "the sake of", and "the present king
of France"; but the view remains committed to logically complex
facts and to logical objects corresponding to the logical
constants.
Austin (1950) rejects the isomorphism approach on the grounds that it
projects the structure of our language onto the world. On his version
of the correspondence theory (a more elaborated variant of (4) applied
to statements), a statement as a whole is correlated to a state of
affairs by arbitrary linguistic conventions without mirroring the
inner structure of its correlate (cf. also Vision 2004). This approach
appears vulnerable to the objection that it avoids funny facts at the
price of neglecting systematicity. Language does not provide separate
linguistic conventions for each statement: that would require too vast
a number of conventions. Rather, it seems that the truth-values of
statements are systematically determined, via a relatively small set
of conventions, by the semantic values (relations to reality) of their
simpler constituents. Recognition of this systematicity is built right
into the isomorphism approach.
Critics frequently echo Austin's
"projection"-complaint, 3.F2, that a traditional
correspondence theory commits "the error of reading back into
the world the features of language" (Austin 1950, p. 155;
cf. also, e.g., Rorty 1981). At bottom, this is a pessimistic stance:
if there is a prima facie structural resemblance between a mode of
speech or thought and some ontological category, it is inferred,
pessimistically, that the ontological category is an illusion, a
matter of us projecting the structure of our language or thought into
the world. Advocates of traditional correspondence theories can be
seen as taking the opposite stance: unless there are specific reasons
to the contrary, they are prepared to assume, optimistically, that the
structure of our language and/or thought reflects genuine ontological
categories, that the structure of our language and/or thought is, at
least to a significant extent, the way it is *because* of the
structure of the world.
## 7. Modified Versions of the Correspondence Theory
### 7.1 Logical Atomism
Wittgenstein (1921) and Russell (1918) propose modified fact-based
correspondence accounts of truth as part of their program of
*logical atomism*. Such accounts proceed in two stages. At the
first stage, the basic truth-definition, say (1) from Section 3,
is *restricted* to a special subclass of truthbearers, the
so-called *elementary* or *atomic* truthbearers, whose
truth is said to consist in their correspondence to (atomic) facts:
if *x* is elementary, then *x* is true iff *x*
corresponds to some (atomic) fact. This restricted definition serves
as the base-clause for truth-conditional recursion-clauses given at
the second stage, at which the truth-values of non-elementary, or
molecular, truthbearers are explained *recursively* in terms of
their logical structure and the truth-values of their simpler
constituents. For example: a sentence of the form
'not-*p*' is true iff '*p*' is
false; a sentence of the form '*p* and *q*'
is true iff '*p*' is true and
'*q*' is true; a sentence of the form
'*p* or *q*' is true iff
'*p*' is true or '*q*' is true,
etc. These recursive clauses (called "truth conditions")
can be reapplied until the truth of a non-elementary, molecular
sentence of arbitrary complexity is reduced to the truth or falsehood
of its elementary, atomic constituents.
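The recursive strategy just described can be pictured as a small evaluation procedure. The following is a minimal illustrative sketch, not anything from the atomists themselves: atomic sentences get their truth-values from a base "correspondence" assignment, and molecular sentences are evaluated recursively from their logical structure. The encoding of formulas as strings and tuples, and the names used, are my own assumptions.

```python
def evaluate(formula, atomic_facts):
    """Return the truth-value of `formula`.

    `formula` is either an atomic sentence (a string) or a tuple whose
    first element is a connective. `atomic_facts` is the set of atomic
    sentences that correspond to an atomic fact (the base clause).
    """
    if isinstance(formula, str):        # base clause: an atomic sentence is
        return formula in atomic_facts  # true iff it corresponds to a fact
    op = formula[0]
    if op == "not":   # 'not-p' is true iff 'p' is false
        return not evaluate(formula[1], atomic_facts)
    if op == "and":   # 'p and q' is true iff both conjuncts are true
        return evaluate(formula[1], atomic_facts) and evaluate(formula[2], atomic_facts)
    if op == "or":    # 'p or q' is true iff at least one disjunct is true
        return evaluate(formula[1], atomic_facts) or evaluate(formula[2], atomic_facts)
    raise ValueError(f"unknown connective: {op}")

facts = {"p"}  # suppose only 'p' corresponds to an atomic fact
print(evaluate(("or", "p", "q"), facts))            # true, made so by 'p' alone
print(evaluate(("and", "p", ("not", "q")), facts))  # true
```

Note how only the base clause mentions facts at all: the truth of `("or", "p", "q")` is explained by logical structure plus the correspondence of the single atomic sentence `"p"`, which is the point of the atomist strategy.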
Logical atomism exploits the familiar rules, enshrined in the
truth-tables, for evaluating complex formulas on the basis of their
simpler constituents. These rules can be understood in two different
ways: (*a*) as tracing the *ontological* relations
between complex facts and constituent simpler facts, or (*b*)
as tracing *logico-semantic* relations, exhibiting how the
truth-values of complex sentences can be explained in terms of their
logical relations to simpler constituent sentences together with the
correspondence and non-correspondence of simple, elementary sentences
to atomic facts. Logical atomism takes option (*b*).
Logical atomism is designed to go with the ontological view that the
world is the totality of atomic facts (cf. Wittgenstein 1921, 2.04);
thus accommodating objection 3.F2 by doing without funny facts: atomic
facts are all the facts there are--although real-life atomists
tend to allow conjunctive facts, regarding them as mere aggregates of
atomic facts. An elementary truth is true because it corresponds to an
atomic fact: correspondence is still isomorphism, but it holds
exclusively between elementary truths and atomic facts. There is no
match between truths and facts at the level of non-elementary,
molecular truths; e.g., '*p*', '*p*
or *q*', and '*p* or *r*' might
all be true merely because '*p*' corresponds to a
fact. The trick for avoiding logically complex facts lies
in *not* assigning any entities to the logical
constants. Logical complexity, so the idea goes, belongs to the
structure of language and/or thought; it is not a feature of the
world. This is expressed by Wittgenstein in an often quoted passage
(1921, 4.0312): "My fundamental idea is that the 'logical
constants' are not representatives; that there can be no
representatives of the *logic* of facts"; and also by
Russell (1918, p. 209f.): "You must not look about the real
world for an object which you can call 'or', and say
'Now look at this. This is 'or''".
Though accounts of this sort are naturally classified as versions of
the correspondence theory, it should be noted that they are strictly
speaking in conflict with the basic forms presented in Section
3. According to logical atomism, it is *not* the case that for
every truth there is a corresponding fact. It is, however, still the
case that the being true of every truth is *explained* in terms
of correspondence to a fact (or non-correspondence to any fact)
together with (in the case of molecular truths) logical notions
detailing the logical structure of complex truthbearers. Logical
atomism attempts to avoid commitment to logically complex, funny facts
via structural analysis of *truthbearers*. It should not be
confused with a superficially similar account maintaining that
molecular facts are ultimately constituted by atomic facts. The latter
account would admit complex facts, offering an ontological analysis of
their structure, and would thus be compatible with the basic forms
presented in Section 3, because it would be compatible with the claim
that for every truth there is a corresponding fact. (For more on
classical logical atomism, see Wisdom 1931-1933, Urmson 1953, and the
entries on
Russell's logical atomism
and
Wittgenstein's logical atomism
in this encyclopedia.)
While Wittgenstein and Russell seem to have held that the constituents
of atomic facts are to be determined on the basis of *a priori*
considerations, Armstrong (1997, 2004) advocates an *a
posteriori* form of logical atomism. On his view, atomic facts are
composed of particulars and simple universals (properties and
relations). The latter are objective features of the world that ground
the objective resemblances between particulars and explain their
causal powers. Accordingly, what particulars and universals there are
will have to be determined on the basis of total science.
**Problems:** Logical atomism is not easy to sustain and
has rarely been held in a pure form. Among its difficulties are the
following: (*a*) What, exactly, are the elementary
truthbearers? How are they determined? (*b*) There are
molecular truthbearers, such as subjunctives and counterfactuals, that
tend to provoke the funny-fact objection but cannot be handled by
simple truth-conditional clauses, because their truth-values do not
seem to be determined by the truth-values of their elementary
constituents. (*c*) Are there universal facts corresponding to
true universal generalizations? Wittgenstein (1921) disapproves of
universal facts; apparently, he wants to re-analyze universal
generalizations as infinite conjunctions of their instances. Russell
(1918) and Armstrong (1997, 2004) reject this analysis; they admit
universal facts. (*d*) Negative truths are the most notorious
problem case, because they clash with an appealing principle, the
"truthmaker principle" (cf. Section 8.5), which says that
for every truth there must be something in the world that makes it
true, i.e., every true truthbearer must have a truthmaker. Suppose
'*p*' is elementary. On the account given above,
'not-*p*' is true iff '*p*' is
false iff '*p*' does not correspond to any fact;
hence, 'not-*p*', if true, is not made true by any
fact: it does not seem to have a truthmaker. Russell finds himself
driven to admit negative facts, regarded by many as paradigmatically
disreputable portions of reality. Wittgenstein sometimes talks of
atomic facts that do not exist and calls their very nonexistence a
negative fact (cf. 1921, 2.06)--but this is hardly an atomic fact
itself. Armstrong (1997, chap. 8.7; 2004, chaps. 5-6) holds that
negative truths are made true by a second-order "totality
fact" which says of all the (positive) first-order facts that
they are all the first-order facts.
*Atomism* and *the Russellian view of propositions* (see
Section 6). By the time Russell advocated logical atomism (around
1918), he had given up on what is now referred to as the Russellian
conception of propositions (which he and G. E. Moore held around
1903). But Russellian propositions are popular nowadays. Note that
logical atomism is *not* for the friends of Russellian
propositions. The argument is straightforward. We have logically
complex beliefs some of which are true. According to the friends of
Russellian propositions, the contents of our beliefs are Russellian
propositions, and the contents of our true beliefs are true Russellian
propositions. Since true Russellian propositions are facts, there must
be at least as many complex facts as there are true beliefs with
complex contents (and at least as many complex states of affairs as
there are true or false beliefs with complex contents). Atomism may
work for sentences, public or mental, and for Fregean propositions;
but not for Russellian propositions.
Logical atomism is designed to address objections to funny facts
(3.F1). It is not designed to address objections to facts in general
(3.F2). Here logical atomists will respond by defending (atomic)
facts. According to one defense, facts are needed because mere
objects are not sufficiently *articulated* to serve as
truthmakers. If *a* were the sole truthmaker of
'*a* is *F*', then the latter should imply
'*a* is *G*', for any
'*G*'. So the truthmaker for '*a*
is *F*' needs at least to involve *a*
and *F*ness. But since *F*ness is a universal, it could
be instantiated in another object, *b*, hence the mere
existence of *a* and *F*ness is not sufficient for
making true the claim '*a*
is *F*': *a* and *F*ness need to be tied
together in the fact of *a's being F*. Armstrong (1997)
and Olson (1987) also maintain that facts are needed to make sense of
the tie that binds particular objects to universals.
In this context it is usually emphasized that facts do *not
supervene on*, hence, are not reducible to, their constituents.
Facts are entities *over and above* the particulars and
universals of which they are composed: *a's loving b*
and *b's loving a* are not the same fact even though they
have the very same constituents.
Another defense of facts, surprisingly rare, would point out that many
facts are observable: one can *see* that the cat is on the mat;
and this is different from seeing the cat, or the mat, or both. The
objection that many facts are not observable would invite the
rejoinder that many objects are not observable either. (See Austin
1961, Vendler 1967, chap. 5, and Vision 2004, chap. 3, for more
discussion of anti-fact arguments; see also the
entry facts in this encyclopedia.)
Some atomists propose an atomistic version of definition (1), but
without facts, because they regard facts as slices of reality too
suspiciously sentence-like to be taken with full ontological
seriousness. Instead, they propose events and/or objects-plus-tropes
(a.k.a. modes, particularized qualities, moments) as the corresponding
portions of reality. It is claimed that these items are more
"thingy" than facts but still sufficiently
articulated--and sufficiently abundant--to serve as adequate
truthmakers (cf. Mulligan, Simons, and Smith 1984).
### 7.2 Logical "Subatomism"
Logical atomism aims at getting by without logically complex
truthmakers by restricting definitions like (1) or (2) from Section 3
to elementary truthbearers and accounting for the truth-values of
molecular truthbearers recursively in terms of their logical structure
and atomic truthmakers (atomic facts, events,
objects-plus-tropes). More radical modifications of the correspondence
theory push the recursive strategy even further, entirely discarding
definitions like (1) or (2), and hence the need for atomic
truthmakers, by going, as it were,
"*subatomic*".
Such accounts analyze truthbearers, e.g., sentences, into their
subsentential constituents and dissolve the relation of correspondence
into appropriate semantic subrelations: names *refer to*, or
denote, objects; predicates (open sentences) apply to, or are
*satisfied by* objects. Satisfaction of complex predicates can
be handled recursively in terms of logical structure and satisfaction
of simpler constituent predicates: an object *o* satisfies
'*x* is not *F*' iff *o* does not
satisfy '*x* is *F*'; *o* satisfies
'*x* is *F* or *x* is *G*'
iff *o* satisfies '*x* is *F*'
or *o* satisfies '*x* is *G*'; and so
on. These recursions are anchored in a base-clause addressing the
satisfaction of *primitive* predicates: an object *o*
satisfies '*x* is *F*' iff *o*
instantiates the property expressed by '*F*'. Some
would prefer a more nominalistic base-clause for satisfaction, hoping
to get by without seriously invoking properties. Truth for singular
sentences, consisting of a name and an arbitrarily complex predicate,
is defined thus: A singular sentence is true iff the object denoted by
the name satisfies the predicate. Logical machinery provided by Tarski
(1935) can be used to turn this simplified sketch into a more general
definition of truth--a definition that handles sentences
containing relational predicates and quantifiers and covers molecular
sentences as well. Whether Tarski's own definition of truth can
be regarded as a correspondence definition, even in this modified
sense, is under debate (cf. Popper 1972; Field 1972, 1986; Kirkham
1992, chaps. 5-6; Soames 1999; Künne 2003, chap. 4; Patterson
2008.)
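The subatomic strategy, too, can be pictured as a small recursive procedure. The sketch below is purely illustrative and greatly simplified relative to Tarski's apparatus (no quantifiers, no relational predicates or sequences); the encoding of predicates and all names are my own assumptions. Truth for a singular sentence reduces to denotation plus recursive satisfaction of a possibly complex predicate.

```python
def satisfies(obj, predicate, extensions):
    """Does `obj` satisfy `predicate`?

    A primitive predicate is a string; `extensions` maps it to the set
    of objects instantiating the property it expresses (the base
    clause). Complex predicates are tuples built with "not" and "or".
    """
    if isinstance(predicate, str):       # base clause for primitive predicates
        return obj in extensions[predicate]
    op = predicate[0]
    if op == "not":   # o satisfies 'x is not F' iff o does not satisfy 'x is F'
        return not satisfies(obj, predicate[1], extensions)
    if op == "or":    # o satisfies 'x is F or x is G' iff it satisfies either
        return satisfies(obj, predicate[1], extensions) or satisfies(obj, predicate[2], extensions)
    raise ValueError(f"unknown operator: {op}")

def true_singular(name, predicate, denotation, extensions):
    """A singular sentence is true iff the object denoted by the name
    satisfies the predicate."""
    return satisfies(denotation[name], predicate, extensions)

denotation = {"a": "socrates"}                # reference: names to objects
extensions = {"F": {"socrates"}, "G": set()}  # satisfaction base for primitives
print(true_singular("a", ("or", "F", "G"), denotation, extensions))  # True
```

Note that no facts appear anywhere: the correspondence relation has dissolved into the two subrelations of denotation (the `denotation` mapping) and satisfaction (the `satisfies` recursion), which is exactly what recommends the approach to those moved by objection 3.F2.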
Subatomism constitutes a return to (broadly) object-based
correspondence. Since it promises to avoid facts and all similarly
articulated, sentence-like slices of reality, correspondence theorists
who take seriously objection 3.F2 favor this approach: not even
elementary truthbearers are assigned any matching truthmakers. The
correspondence relation itself has given way to two semantic relations
between constituents of truthbearers and objects: reference (or
denotation) and satisfaction--relations central to any semantic
theory. Some advocates envision causal accounts of reference and
satisfaction (cf. Field 1972; Devitt 1982, 1984; Schmitt 1995;
Kirkham 1992, chaps. 5-6). It turns out that relational predicates
require talk of satisfaction by *ordered sequences* of
objects. Davidson (1969, 1977) maintains that satisfaction by
sequences is all that remains of the traditional idea of
correspondence to facts; he regards reference and satisfaction as
"theoretical constructs" not in need of causal, or any,
explanation.
**Problems:** (*a*) The subatomistic approach
accounts for the truth-values of molecular truthbearers in the same
way as the atomistic approach; consequently, molecular truthbearers
that are not truth-functional still pose the same problems as in
atomism. (*b*) Belief attributions and modal claims pose
special problems; e.g., it seems that "believes" is a
relational predicate, so that "John believes that snow is
white" is true iff "believes" is satisfied by John
and the object denoted by "that snow is white"; but the
latter appears to be a proposition or state of affairs, which
threatens to let in through the back-door the very sentence-like
slices of reality the subatomic approach was supposed to avoid, thus
undermining the motivation for going subatomic. (*c*) The
phenomenon of referential indeterminacy threatens to undermine the
idea that the truth-values of elementary truthbearers are always
determined by the denotation and/or satisfaction of their
constituents; e.g., pre-relativistic uses of the term
"mass" are plausibly taken to lack determinate reference
(referring determinately neither to relativistic mass nor to rest
mass); yet a claim like "The mass of the earth is greater than
the mass of the moon" seems to be determinately true even when
made by Newton (cf. Field 1973).
**Problems for both versions of modified correspondence
theories:** (*a*) It is not known whether an entirely
general recursive definition of truth, one that covers all
truthbearers, can be made available. This depends on unresolved issues
concerning the extent to which truthbearers are amenable to the kind
of structural analyses that are presupposed by the recursive
clauses. The more an account of truth wants to exploit the internal
structure of truthbearers, the more it will be hostage to the
(limited) availability of appropriate structural analyses of the
relevant truthbearers. (*b*) Any account of truth employing a
recursive framework may be virtually committed to taking sentences
(maybe sentences of the language of thought) as primary
truthbearers. After all, the recursive clauses rely heavily on what
appears to be the logico-syntactic structure of truthbearers, and it
is unclear whether anything but sentences can plausibly be said to
possess that kind of structure. But the thesis that sentences of any
sort are to be regarded as the primary truthbearers is
contentious. Whether propositions can meaningfully be said to have an
analogous (albeit non-linguistic) structure is under debate
(cf. Russell 1913, King 2007). (*c*) If clauses like
"'*p* or *q*' is true iff
'*p*' is true or '*q*' is
true" are to be used in a recursive account of *our*
notion of *truth*, as opposed to some other notion, it has to
be presupposed that 'or' expresses *disjunction*:
one cannot define "or" and "true" at the same
time. To avoid circularity, a modified correspondence theory (be it
atomic or subatomic) must hold that the logical connectives can be
understood without reference to correspondence truth.
### 7.3 Relocating Correspondence
Definitions like (1) and (2) from Section 3 assume, naturally, that
truthbearers are true because they, the truthbearers themselves,
correspond to facts. There are however views that reject this natural
assumption. They propose to account for the truth of truthbearers of
certain kinds, propositions, not by way of *their*
correspondence to facts, but by way of the correspondence to facts of
other items, the ones that have propositions as their
contents. Consider the state of *believing that p* (or the
activity of judging that *p*). The *state* (the
activity) is not, strictly speaking, true or false; rather, what is
true or false is its content, the proposition
that *p*. Nevertheless, on the present view, it is the state of
believing that *p* that corresponds or fails to correspond to a
fact. So truth/falsehood for propositions can be defined in the
following manner: *x* is a true/false proposition iff there is
a belief state *B* such that *x* is the content
of *B* and *B* corresponds/fails to correspond to a
fact.
Such a modification of fact-based correspondence can be found in Moore
(1927, p. 83) and Armstrong (1973, 4.iv & 9). It can be adapted to
atomistic (Armstrong) and subatomistic views, and to views on which
sentences (of the language of thought) are the primary bearers of
truth and falsehood. However, by taking the content-carrying states as
the primary corresponders, it entails that there are no
truths/falsehoods that are not believed by someone. Most advocates of
propositions as primary bearers of truth and falsehood will regard
this as a serious weakness, holding that there are very many true and
false propositions that are not believed, or even entertained, by
anyone. Armstrong (1973) combines the view with an instrumentalist
attitude towards propositions, on which propositions are mere
abstractions from mental states and should not be taken seriously,
ontologically speaking.
## 8. The Correspondence Theory and Its Competitors
### 8.1 Traditional Competitors
Against the *traditional competitors*--coherentist,
pragmatist, and verificationist and other epistemic theories of
truth--correspondence theorists raise two main sorts of
objections. *First*, such accounts tend to lead into
relativism. Take, e.g., a coherentist account of truth. Since it is
possible that '*p*' coheres with the belief system
of *S* while 'not-*p*' coheres with the
belief system of *S*\*, the coherentist account seems to imply,
absurdly, that contradictories, '*p*' and
'not-*p*', could both be true. To avoid embracing
contradictions, coherentists often commit themselves (if only
covertly) to the objectionable relativistic view that
'*p*' is true-for-*S* and
'not-*p*' is true-for-*S*\*. *Second*,
the accounts tend to lead into some form of idealism or anti-realism,
e.g., it is possible for the belief that *p* to cohere with
someone's belief system, even though it is not a fact that
*p*; also, it is possible for it to be a fact that *p*,
even if no one believes that *p* at all or if the belief does
not cohere with anyone's belief system. Cases of this sort are
frequently cited as counterexamples to coherentist accounts of
truth. Dedicated coherentists tend to reject such counterexamples,
insisting that they are not possible after all. Since it is hard to
see why they would not be possible, unless its being a fact that
*p* were determined by the belief's coherence with other
beliefs, this reaction commits them to the anti-realist view that the
facts are (largely) determined by what we believe.
This offers a bare outline of the overall shape the debates tend to
take. For more on the correspondence theory vs. its traditional
competitors see, e.g., Vision 1988; Kirkham 1992, chaps. 3, 7-8;
Schmitt 1995; Künne 2003, chap. 7; and essays in Lynch
2001. Walker 1989 is a book-length discussion of coherence theories of
truth. See also the entries on
pragmatism,
relativism,
the coherence theory of truth,
in this encyclopedia.
### 8.2 Pluralism
The correspondence theory is sometimes accused of overreaching itself:
it does apply, so the objection goes, to truths from some domains of
discourse, e.g., scientific discourse and/or discourse about everyday
midsized physical things, but not to truths from various other domains
of discourse, e.g., ethical and/or aesthetic discourse (see the first
objection in Section 5 above). *Alethic pluralism* grows out of
this objection, maintaining that truth is *constituted* by
different properties for true propositions from different domains of
discourse: by correspondence to fact for true propositions from the
domain of scientific or everyday discourse about physical things; by
some epistemic property, such as coherence or superassertibility, for
true propositions from the domain of ethical and aesthetic discourse,
and maybe by still other properties for other domains of
discourse. This suggests a position on which the term
"true" is multiply ambiguous, expressing different
properties when applied to propositions from different
domains. However, contemporary pluralists reject this problematic
idea, maintaining instead that truth is "multiply
realizable". That is, the term "true" is univocal,
it expresses one concept or property, truth (being true), but one that
can be *realized by* or *manifested in* different
properties (correspondence to fact, coherence or superassertibility,
and maybe others) for true propositions from different domains of
discourse. Truth itself is not to be identified with any of its
realizing properties. Instead, it is characterized,
quasi-axiomatically, by a set of alleged "platitudes",
including, according to Crispin Wright's (1999) version,
"transparency" (to assert is to present as true),
"contrast" (a proposition may be true without being
justified, and v.v.), "timelessness" (if a proposition is
ever true, then it always is), "absoluteness" (there is no
such thing as a proposition being more or less true), and others.
Though it contains the correspondence theory as one ingredient,
alethic pluralism is nevertheless a genuine competitor, for it rejects
the thesis that truth *is* correspondence to reality. Moreover,
it equally contains competitors of the correspondence theory as
further ingredients.
Alethic pluralism in its contemporary form is a relatively young
position. It was inaugurated by Crispin Wright (1992; see also 1999)
and was later developed into a somewhat different form by Lynch
(2009). Critical discussion is still at a relatively nascent stage
(but see Vision 2004, chap. 4, for extended discussion of Wright). It
will likely focus on two main problem areas.
*First*, it seems difficult to sort propositions into distinct
kinds according to the subject matter they are about. Take, e.g., the
proposition that killing is morally wrong, or the proposition that
immoral acts happen in space-time. What are they about? Intuitively,
their subject matter is mixed, belonging to the physical domain, the
biological domain, and the domain of ethical discourse. It is hard to
see how pluralism can account for the truth of such mixed
propositions, belonging to more than one domain of discourse: What
will be the realizing property?
*Second*, pluralists are expected to explain how the platitudes
can be "converted" into an account of truth itself. Lynch
(2009) proposes to construe truth as a *functional property*,
defined in terms of a complex *functional role* which is given
by the conjunction of the platitudes (somewhat analogous to the way in
which functionalists in the philosophy of mind construe mental states
as functional states, specified in terms of their functional
roles--though in their case the relevant functional roles are
causal roles, which is not a feasible option when it comes to the
truth-role). Here the main issue will be to determine (*a*)
whether such an account really works, when the technical details are
laid out, and (*b*) whether it is plausible to claim that
properties as different as correspondence to a fact, on the one hand,
and coherence or superassertibility, on the other, can be said to play
one and the same role--a claim that seems required by the thesis
that these different properties all realize the same property, being
true.
For more on pluralism, see e.g. the essays in Monnoyer (2007) and in
Pedersen & Wright (2013); and the entry on
pluralist theories of truth
in this encyclopedia.
### 8.3 The Identity Theory of Truth
According to the *identity theory* of truth, true propositions
do not correspond to facts, they *are* facts: the true
proposition that snow is white = the fact that snow is white. This
non-traditional competitor of the correspondence theory threatens to
collapse the correspondence relation into identity. (See Moore
1901-02; and Dodd 2000 for a book-length defense of this theory and
discussion contrasting it with the correspondence theory; and see the
entry
the identity theory of truth:
in this encyclopedia.)
In response, a correspondence theorist will point out: (*a*)
The identity theory is defensible only for propositions as
truthbearers, and only for propositions construed in a certain way,
namely as having objects and properties as constituents rather than
ideas or concepts of objects and properties; that is, for Russellian
propositions. Hence, there will be ample room (and need) for
correspondence accounts of truth for other types of truthbearers,
including propositions, if they are construed as constituted, partly
or wholly, of *concepts* of objects and
properties. (*b*) The identity theory is committed to the
unacceptable consequence that facts are true. (*c*) The
identity theory rests on the assumption that that-clauses always
denote propositions, so that the that-clause in "the fact that
snow is white" denotes the proposition that snow is white. The
assumption can be questioned. That-clauses can be understood as
ambiguous names, sometimes denoting propositions and sometimes
denoting facts. The descriptive phrases "the
proposition..." and "the fact..." can be
regarded as serving to disambiguate the succeeding ambiguous
that-clauses--much like the descriptive phrases in "the
philosopher Socrates" and "the soccer-player
Socrates" serve to disambiguate the ambiguous name
"Socrates" (cf. David 2002).
### 8.4 Deflationism About Truth
At present the most noticeable competitors to correspondence theories
are *deflationary* accounts of truth (or
'true'). Deflationists maintain that correspondence
theories need to be deflated; that their central notions,
correspondence and fact (and their relatives), play no legitimate role
in an adequate account of truth and can be excised without loss. A
correspondence-type formulation like
> (5) "Snow is white" is true iff it corresponds
> to the fact that snow is white,
is to be deflated to
> (6) "Snow is white" is true iff snow is
> white,
which, according to deflationists, says all there is to be said about
the truth of "Snow is white", without superfluous
embellishments (cf. Quine 1987, p. 213).
Correspondence theorists protest that (6) cannot lead to anything
deserving to be regarded as an account of truth. It is concerned with
only one particular sentence ("Snow is white"), and it
resists generalization. (6) is a substitution instance of the
schema
> (7) "*p*" is true iff
> *p*,
which does not actually say anything itself (it is not
truth-evaluable) and cannot be turned into a genuine generalization
about truth, because of its essential reliance on the schematic letter
"*p*", a mere placeholder. The attempt to turn (7) into a
generalization produces nonsense along the lines of "For
every *x*, "*x*" is true
iff *x*", or requires invocation of truth: "Every
substitution instance of the schema ""*p*" is true iff
p" is *true*". Moreover, no genuine generalizations
about truth can be accounted for on the basis of (7). Correspondence
definitions, on the other hand, do yield genuine generalizations about
truth. Note that definitions like (1) and (2) in Section 3 employ
ordinary objectual variables (not mere schematic placeholders); the
definitions are easily turned into genuine generalizations by
prefixing the quantifier phrase "For every *x*",
which is customarily omitted in formulations intended as
definitions.
It should be noted that the deflationist's starting point, (5),
which lends itself to deflating excisions, actually misrepresents the
correspondence theory. According to (5), corresponding to the fact
that snow is white is sufficient *and necessary* for
"Snow is white" to be true. Yet, according to (1) and (2),
it is sufficient but not necessary: "Snow is white" will
be true as long as it corresponds to some fact or other. The genuine
article, (1) or (2), is not as easily deflated as the impostor
(5).
The debate turns crucially on the question whether anything deserving
to be called an "account" or "theory" of truth
ought to take the form of a genuine generalization (and ought to be
able to account for genuine generalizations involving
truth). Correspondence theorists tend to regard this as a (minimal)
requirement. Deflationists argue that truth is a shallow (sometimes
"logical") notion--a notion that has no serious
explanatory role to play: as such it does not require a full-fledged
account, a real theory, that would have to take the form of a genuine
generalization.
There is now a substantial body of literature on truth-deflationism in
general and its relation to the correspondence theory in particular;
the following is a small selection: Quine 1970, 1987; Devitt 1984;
Field 1986; Horwich 1990 & 1998 (2nd ed.); Kirkham 1992; Gupta
1993; David 1994, 2008; Schmitt 1995; Künne 2003, chap. 4; Rami
2009. Relevant essays are contained in Blackburn and Simmons 1999;
Schantz 2002; Armour-Garb and Beall 2005; and Wright and Pedersen
2010. See also the entry
the deflationary theory of truth
in this encyclopedia.
### 8.5 Truthmaker Theory
This approach centers on the *truthmaker*
or *truthmaking* *principle*: Every truth has a
truthmaker; or alternatively: For every truth there is something that
makes it true. The principle is usually understood as an expression of
a realist attitude, emphasizing the crucial contribution the world
makes to the truth of a proposition. Advocates tend to treat
truthmaker theory primarily as a guide to ontology, asking: To
entities of what ontological categories are we committed as
truthmakers of the propositions we accept as true? Most advocates
maintain that propositions of different logical types can be made true
by items from different ontological categories: e.g., propositions of
some types are made true by facts, others just by individual things,
others by events, others by tropes (cf., e.g. Armstrong 1997). This is
claimed as a significant improvement over traditional correspondence
theories which are understood--correctly in most but by no means
all cases--to be committed to all truthmakers belonging to a
single ontological category (albeit disagreeing about which category
that is). All advocates of truthmaker theory maintain that the
truthmaking relation is not one-one but many-many: some truths are
made true by more than one truthmaker; some truthmakers make true more
than one truth. This is also claimed as a significant improvement over
traditional correspondence theories which are often portrayed as
committed to correspondence being a one-one relation. This portrayal
is only partly justified. While it is fairly easy to find real-life
correspondence theorists committing themselves to the view that each
truth corresponds to exactly one fact (at least by implication,
talking about *the* corresponding fact), it is difficult to
find real-life correspondence theorists committing themselves to the
view that only one truth can correspond to a given fact (but see Moore
1910-11, p. 256).
A truthmaker theory may be presented as a competitor to the
correspondence theory or as a version of the correspondence
theory. This depends considerably on how narrowly or broadly one
construes "correspondence theory", i.e. on terminological
issues. Some advocates would agree with Dummett (1959, p. 14) who said
that, although "we have nowadays abandoned the correspondence
theory of truth", it nevertheless "expresses one important
feature of the concept of truth...: that a statement is true only
if there is something in the world in virtue of which it is
true". Other advocates would follow Armstrong who tends to
present his truthmaker theory as a liberal form of correspondence
theory; indeed, he seems committed to the view that the truth of a
(contingent) *elementary* proposition consists in its
correspondence with some (atomic) fact (cf. Armstrong 1997; 2004,
pp. 22-3, 48-50).
It is not easy to find a substantive difference between truthmaker
theory and various brands of the sort of modified correspondence
theory treated above under the heading "Logical Atomism"
(see Section 7.1). Logical atomists, such as Russell (1918) and
Wittgenstein (1921), will hold that the truth or falsehood of every
truth-value bearer can be explained in terms of (can be derived from)
logical relations between truth-value bearers, by way of the recursive
clauses, together with the base clauses, i.e., the correspondence and
non-correspondence of elementary truth-value bearers with facts. This
recursive strategy could be pursued with the aim to *reject the
truthmaker principle*: not all truths have truthmakers, only
elementary truths have truthmakers (here understood as corresponding
atomic facts). But it could also be pursued--and this seems to
have been Russell's intention at the time--with the aim
to *secure the truthmaker principle*, even though the simple
correspondence definition has been abandoned: not every truth
corresponds to a fact, only elementary truths do, but every
truth *has* a truthmaker; where the recursive clauses are
supposed to show how truthmaking without correspondence, but grounded
in correspondence, comes about.
There is one straightforward difference between truthmaker theory and
most correspondence theories. The latter are designed to answer the
question "What is truth?". Simple (unmodified)
correspondence theories center on a *biconditional*, such as
"*x* is true iff *x* corresponds to a fact",
intended to convey a *definition* of truth (at least a
"real definition" which does not commit them to the claim
that the term "true" is synonymous with "corresponds
to a fact"--especially nowadays most correspondence
theorists would consider such a claim to be implausibly and
unnecessarily bold). Modified correspondence theories also aim at
providing a definition of truth, though in their case the definition
will be considerably more complex, owing to the recursive character of
the account. Truthmaker theory, on the other hand, centers on
the *truthmaker principle*: For every truth there is something
that makes it true. Though this principle will deliver the
biconditional "*x* is true iff something makes *x*
true" (since "something makes *x* true"
trivially implies "*x* is true"), this does not
yield a promising candidate for a definition of truth: defining truth
in terms of truthmaking would appear to be circular. Unlike most
correspondence theories, truthmaker theory is not equipped, and
usually not designed, to answer the question "What is
truth?"--at least not if one expects the answer to take the
form of a feasible candidate for a definition of truth.
There is a growing body of literature on truthmaker theory; see for
example: Russell 1918; Mulligan, Simons, and Smith 1984; Fox 1987;
Armstrong 1997, 2004; Merricks 2007; and the essays in Beebe and Dodd
2005; Monnoyer 2007; and in Lowe and Rami 2009. See also the entry
on truthmakers in this encyclopedia.
## 9. More Objections to the Correspondence Theory
Two final objections to the correspondence theory deserve separate
mention.
### 9.1 The Big Fact
Inspired by an allegedly similar argument of Frege's, Davidson
(1969) argues that the correspondence theory is bankrupt because it
cannot avoid the consequence that all true sentences correspond to the
same fact: the Big Fact. The argument is based on two crucial
assumptions: (i) Logically equivalent sentences can be
substituted *salva veritate* in the context 'the fact
that...'; and (ii) If two singular terms denoting the same thing
can be substituted for each other in a given sentence *salva
veritate*, they can still be so substituted if that sentence is
embedded within the context 'the fact that...'. In the
version below, the relevant singular terms will be the following:
'(the *x* such that *x* = Diogenes
& *p*)' and '(the *x* such
that *x* = Diogenes & *q*)'. Now, assume that
a given sentence, *s*, corresponds to the fact that *p*;
and assume that '*p*' and '*q*'
are sentences with the same truth-value. We have:
* *s* corresponds to the fact that
*p*
which, by (i), implies
* *s* corresponds to the fact that [(the *x* such that
*x* = Diogenes & *p*) = (the *x* such that
*x* = Diogenes)],
which, by (ii), implies
* *s* corresponds to the fact that [(the *x* such that
*x* = Diogenes & *q*) = (the *x* such that
*x* = Diogenes)],
which, by (i), implies
* *s* corresponds to the fact that *q*.
Since the only restriction on '*q*' was that it
have the same truth-value as '*p*', it would follow
that any sentence *s* that corresponds to any fact corresponds
to every fact; so that all true sentences correspond to the same
facts, thereby proving the emptiness of the correspondence
theory--the conclusion of the argument is taken as tantamount to
the conclusion that every true sentence corresponds to the totality of
all the facts, i.e., the Big Fact, i.e., the world as a whole.
This argument belongs to a type now called "slingshot
arguments" (because a giant opponent is brought down by a single
small weapon, allegedly). The first versions of this type of argument
were given by Church (1943) and Gödel (1944); it was later
adapted by Quine (1953, 1960) in his crusade against quantified modal
logic. Davidson offers yet another adaptation, this time involving
the expression "corresponds to the fact that". The
argument has been criticized repeatedly. Critics point to the two
questionable assumptions on which it relies, (i) and (ii). It is far
from obvious why a correspondence theorist should be tempted by either
one of them. Opposition to assumption (i) rests on the view that
expressibility by logically equivalent sentences may be a necessary,
but is not a sufficient condition for fact identity. Opposition to
assumption (ii) rests on the observation that the (alleged) singular
terms used in the argument are *definite descriptions*: their
status as genuine singular terms is in doubt, and it is well-known
that they behave rather differently than proper names for which
assumption (ii) is probably valid (cf. Føllesdal 1966/2004; Olson
1987; Künne 2003; and especially the extended discussion and
criticism in Neale 2001.)
### 9.2 No Independent Access to Reality
The objection that may well have been the most effective in causing
discontent with the correspondence theory is based on an
epistemological concern. In a nutshell, the objection is that a
correspondence theory of truth must inevitably lead into skepticism
about the external world, because the required correspondence between
our thoughts and reality is not ascertainable. Ever since
Berkeley's attack on the representational theory of the mind,
objections of this sort have enjoyed considerable popularity. It is
typically pointed out that we cannot step outside our own minds to
compare our thoughts with mind-independent reality. Yet--so the
objection continues--on the correspondence theory of truth, this
is precisely what we would have to do to gain knowledge. We would have
to access reality as it is in itself, independently of our cognition,
and determine whether our thoughts correspond to it. Since this is
impossible, since all our access to the world is mediated by our
cognition, the correspondence theory makes knowledge impossible
(cf. Kant 1800, intro vii). Assuming that the resulting skepticism is
unacceptable, the correspondence theory has to be rejected, and some
other account of truth, an epistemic (anti-realist) account of some
sort, has to be put in its place (cf., e.g., Blanshard 1941.)
This type of objection brings up a host of issues in epistemology, the
philosophy of mind, and general metaphysics. All that can be done here
is to hint at a few pertinent points (cf. Searle 1995, chap. 7; David
2004, 6.7). The objection makes use of the following line of
reasoning: "If truth is correspondence, then, since knowledge
requires truth, we have to know that our beliefs correspond to
reality, if we are to know anything about reality". There are
two assumptions implicit in this line of reasoning, both of them
debatable.
(i) It is assumed that *S* knows *x*, only if *S*
knows that
*x* is true--a requirement not underwritten by standard
definitions of knowledge, which tell us that *S* knows
*x*, only if *x* is true and *S* is justified in
believing *x*. The assumption may rest on confusing requirements
for knowing *x* with requirements for knowing that one knows
*x*.
(ii) It is assumed that, if truth = *F*, then *S* knows
that *x* is true, only if *S* knows that *x* has
*F*. This is highly implausible. By the same standard it
would follow that no one who does not know that water is
H2O can know that the Nile contains water--which would
mean, of course, that until fairly recently nobody knew that the Nile
contained water (and that, until fairly recently, nobody
knew that there were stars in the sky, whales in the sea, or that the
sun gives light). Moreover, even if one does know that water is
H2O, one's strategy for finding out whether the liquid in
one's glass is water does not have to involve chemical analysis: one
could simply taste it, or ask a reliable informant. Similarly, as far
as knowing that *x* is true is concerned, the
correspondence theory does not entail that we have to know that a
belief corresponds to a fact in order to know that it is true, or that
our method of finding out whether a belief is true has to involve a
strategy of actually comparing a belief with a fact--although the
theory does of course entail that one obtains knowledge only if
one obtains a belief that corresponds to a fact.
More generally, one might question whether the objection still has
much bite once the metaphors of "accessing" and
"comparing" are spelled out with more attention to the
psychological details of belief formation and to epistemological
issues concerning the conditions under which beliefs are justified or
warranted. For example, it is quite unclear how the metaphor of
"comparing" applies to knowledge gained through perceptual
belief-formation. A perceptual belief that *p* may be true, and by
having acquired that belief, one may have come to know that *p*, without
having "compared" (the content of) one's belief with
anything.
One might also wonder whether its competitors actually enjoy any
significant advantage over the correspondence theory, once they are
held to the standards set up by this sort of objection. For example,
why should it be easier to find out whether one particular belief
coheres with *all* of one's other beliefs than it is to
find out whether a belief corresponds with a fact?
In one form or other, the "No independent access to
reality"-objection against correspondence theoretic approaches
has been one of the main sources, if not the main source, of motivation for
idealist and anti-realist stances in philosophy (cf. Stove
1991). However, the connection between correspondence theories of
truth and the metaphysical realism vs. anti-realism (or idealism)
debate is less immediate than is often assumed. On the one hand,
deflationists and identity theorists can be, and typically are,
metaphysical realists while rejecting the correspondence theory. On
the other hand, advocates of a correspondence theory can, in
principle, be metaphysical idealists (e.g. McTaggart 1921) or
anti-realists, for one might advocate a correspondence theory while
maintaining, at the same time, (*a*) that all facts are
constituted by mind or (*b*) that what facts there are depends
somehow on what we believe or are capable of believing, or
(*c*) that the correspondence relation between true
propositions and facts depends somehow on what we believe or are
capable of believing (claiming that the correspondence relation
between true beliefs or true sentences and facts depends on what we
believe can hardly count as a commitment to anti-realism). Keeping
this point in mind, one can nevertheless acknowledge that advocacy of
a correspondence theory of truth comes much more naturally when
combined with a metaphysically realist stance and usually signals
commitment to such a stance. |
truth-deflationary | ## 1. Central Themes in Deflationism
### 1.1 The Equivalence Schema
While deflationism can be developed in different ways, it is possible
to isolate some central themes emphasized by most philosophers who
think of themselves as deflationists. These shared themes pertain to
endorsing a kind of metaphysical parsimony and positing a
"deflated" role for what we can call the *alethic
locutions* (most centrally, the expressions 'true' and
'false') in the instances of what is often called
*truth-talk*. In this section, we will isolate three of these
themes. The first, and perhaps most overarching, one has already been
mentioned: According to deflationists, there is some strong
equivalence between a statement like 'snow is white' and a
statement like "'snow is white' is true," and
this is all that can significantly be said about that application of
the notion of truth.
We may capture this idea more generally with the help of a schema,
what is sometimes called *the equivalence schema*:
(ES)
\(\langle p\rangle\) is true if, and only if, \(p\).
In this schema, the angle brackets indicate an appropriate
name-forming or nominalizing device, e.g., quotation marks or
'the proposition that ...', and the occurrences of
'\(p\)' are replaced with matching declarative sentences
to yield instances of the schema.
The equivalence schema is often associated with the formal work of
Alfred Tarski
(1935 [1956], 1944), which introduced the schema,
(T)
\(X\) is true if, and only if, \(p\).
In the instances of schema (T) (sometimes called "Convention
(T)"), the '\(X\)' gets filled in with a name of the
sentence that goes in for the '\(p\)', making (T) a
version of (ES). Tarski considered (T) to provide a criterion of
adequacy for any theory of truth, thereby allowing that there could be
more to say about truth than what the instances of the schema cover.
Given that, together with the fact that he took the instances of (T)
to be contingent, his theory does not qualify as deflationary.
By contrast with the Tarskian perspective on (T)/(ES), we can
formulate the central theme of deflationism under consideration as the
view, roughly, that the instances of (some version of) this schema do
capture everything significant that can be said about applications of
the notion of truth; in a slogan, the instances of the schema
*exhaust* the notion of truth. Approaches which depart from
deflationism don't disagree that (ES) tells us something about
truth; what they (with Tarski) deny is that it is exhaustive, that it
tells us the whole truth about truth. Since such approaches add
substantive explanations of why the instances of the equivalence
schema hold, they are now often called *inflationary*
approaches to truth. Inflationism is the general approach shared by
such traditional views as the
correspondence theory of truth,
coherence theory of truth,
pragmatic theory of truth,
identity theory of truth, and primitivist theory of truth. These theories all share
a collection of connected assumptions about the alethic locutions, the
concept of truth, and the property of truth. Inflationary theories all
assume that the expression 'is true' is a descriptive
predicate, expressing an explanatory concept of truth, which
determines a substantive property of truth. From that shared set of
presuppositions, the various traditional inflationary theories then
diverge from one another by providing different accounts of the
assumed truth property. On inflationary views, the nature of the truth
property explains why the instances of (ES) hold. Deflationary views,
by contrast, reject some if not all of the standard assumptions that
lead to inflationary theories, resisting at least their move to
positing any substantive truth property. Instead, deflationists offer
a different understanding of both the concept of truth and the
functioning of the alethic locutions. A deflationist will take the
instances of (ES) to be "conceptually basic and explanatorily
fundamental" (Horwich 1998a, 21, n. 4; 50), or to be direct
consequences of how the expression 'true' operates (cf.
Quine 1970 [1986], Brandom 1988, and Field 1994a).
It is important to notice that even among deflationists the
equivalence schema may be interpreted in different ways, and this is
one way to distinguish different versions of deflationism from one
another. One question about (ES) concerns the issue of what instances
of the schema are assumed to be about (equivalently: to what the names
in instances of (ES) are assumed to refer). According to one view, the
instances of this schema are about sentences, where a name for a
sentence can be formulated simply by putting quotation marks around
it. In other words, for those who hold what might be called a
*sententialist* version of deflationism, the equivalence schema
has instances like (1):
(1)
'Brutus killed Caesar' is true if, and only if, Brutus
killed Caesar.
To make this explicit, we might say that, according to sententialist
deflationism, the equivalence schema is:
(ES-sent)
The sentence '\(p\)' is true if, and only if,
\(p\).
Notice that in this schema, the angle-brackets of (ES) have been
replaced by quotation marks.
According to those who hold what might be called a
*propositionalist* version of deflationism, by contrast,
instances of the equivalence schema are about propositions, where
names of propositions are, or can be taken to be, expressions of the
form 'the proposition that \(p\)', where
'\(p\)' is filled in with a declarative sentence. For the
propositionalist, in other words, instances of the equivalence schema
are properly interpreted not as being about sentences but instead as
being about propositions, i.e., as biconditionals like (2) rather than
(1):
(2)
The proposition that Brutus killed Caesar is true if, and only if,
Brutus killed Caesar.
To make this explicit, we might say that, according to
propositionalist deflationism, the equivalence schema is:
(ES-prop)
The proposition that \(p\) is true if, and only if, \(p\).
Interpreting the equivalence schema as (ES-sent) rather than as
(ES-prop), or vice versa, thus yields different versions of
deflationism, sententialist and propositionalist versions,
respectively.
Another aspect that different readings of (ES) can vary across
concerns the nature of the equivalence that its instances assert. On
one view, the right-hand side and the left-hand side of such instances
are synonymous or analytically equivalent. Thus, for sententialists
who endorse this level of equivalence, (1) asserts that,
"'Brutus killed Caesar' is true" means just
what 'Brutus killed Caesar' means; while for
propositionalists who endorse analytic equivalence, (2) asserts that
'the proposition that Brutus killed Caesar is true' means
the same as 'Brutus killed Caesar'. A second view is that
the right-hand and left-hand sides of claims such as (1) and (2) are
not synonymous but are nonetheless necessarily equivalent; this view
maintains that the two sides of each equivalence stand or fall
together in every possible world, despite having different meanings.
And a third possible view is that claims such as (1) and (2) assert
only a material equivalence; this view interprets the 'if and
only if' in both (1) and (2) as simply the biconditional of
classical logic.
This tripartite distinction between analytic, necessary, and material
equivalence, when combined with the distinction between sententialism
and propositionalism, yields six different possible (although not
exhaustive) readings of the instances of (ES):
| | *Sentential* | *Propositional* |
| --- | --- | --- |
| *Analytic* | \(\mathbf{A}\) | \(\mathbf{B}\) |
| *Necessary* | \(\mathbf{C}\) | \(\mathbf{D}\) |
| *Material* | \(\mathbf{E}\) | \(\mathbf{F}\) |
While different versions of deflationism can be correlated to some
extent with different positions in this chart, some chart positions
have also been occupied by more than one version of deflationism. The
labels 'redundancy theory', 'disappearance
theory' and 'no-truth theory' have been used to
apply to analytic versions of deflationism: positions \(\mathbf{A}\)
or \(\mathbf{B}\). But there is a sense in which position
\(\mathbf{A}\) is also occupied by versions of what is called
"disquotationalism" (although the most prominent
disquotationalists tend to be leery of the notions of analyticity or
synonymy), and what is called "prosententialism" also
posits an equivalence of what is said with the left- and right-hand
sides of the instances of (ES). The latter version of deflationism,
however, does this without making the left-hand sides about sentences
named via quotation or about propositions understood as abstract
entities. No deflationist has offered an account occupying position
\(\mathbf{C}\), \(\mathbf{E}\), or \(\mathbf{F}\) (although the
explicit inspiration some disquotationalists have found in
Tarski's work and his deployment of material equivalence might
misleadingly suggest position \(\mathbf{E}\)). Paul Horwich (1998a)
uses the label 'minimalism' for a version of
propositionalist deflationism that takes the instances of (ES-prop) to
involve a necessary equivalence, thereby occupying position
\(\mathbf{D}\). To a large extent, philosophers prefer one or another
(or none) of the positions in the chart on the basis of their views
from other parts of philosophy, typically their views about the
philosophy of language and metaphysics.
### 1.2 The Property of Truth
The second theme we will discuss focuses on the fact that when we say,
for example, that the proposition that Brutus killed Caesar is true,
we seem to be attributing a property to that proposition, namely, the
property of being true. Deflationists are typically wary of that
claim, insisting either that there is no property of being true at
all, or, if there is one, it is of a certain kind, often called
"thin" or "insubstantial".
The suggestion that there is no truth property at all is advanced by
some philosophers in the deflationary camp; we will look at some
examples below. What makes this position difficult to sustain is that
'is true' is grammatically speaking a predicate much like
'is metal'. If one assumes that grammatical predicates
such as 'is metal' express properties, then, *prima
facie*, the same would seem to go for 'is true.' This
point is not decisive, however. For one thing, it might be possible to
distinguish the grammatical form of claims containing 'is
true' from their logical form; at the level of logical form, it
might be, as prosententialists maintain, that 'is true' is
*not* a predicate. For another, nominalists about properties
have developed ways of thinking about grammatical predicates according
to which these expressions don't express properties at all. A
deflationist might appeal, perhaps selectively, to such proposals, in
order to say that 'is true', while a predicate, does not
express a property.
Whatever the ultimate fate of these attempts to say that there is no
property of truth may be, a suggestion among certain deflationists has
been to concede that there is a truth property but to deny it is a
property of a certain kind; in particular to deny that it is (as we
will say) a *substantive* property.
To illustrate the general idea, consider (3) and (4):
(3)
Caracas is the capital of Venezuela.
(4)
The earth revolves around the sun.
Do the propositions that these sentences express share a property of
being true? Well, in one intuitive sense they do: Since they both are
true, we might infer that they both have the property of being true.
From this point of view, there is a truth property: It is simply the
property that all true propositions have.
On the other hand, when we say that two things share a property of
*Fness*, we often mean more than simply that they are both
\(F\). We often mean that two things that are \(F\) have some
underlying nature in common, for example, that there is a common
explanation as to why they are both \(F\). It is this second,
stronger claim that deflationists have in mind when they say that
truth is not a substantive property. Thus, in the case of our
example, what, if
anything, explains the truth of (3) is that Caracas is the capital of
Venezuela, and what explains this is the political history of
Venezuela. On the other hand, what, if anything, explains the truth of
(4) is that the earth revolves around the sun, and what explains this
is the physical nature of the solar system. The physical nature of the
solar system, however, has nothing to do with the political history of
Venezuela (or if it does, the connections are completely accidental!)
and to that extent there is no shared explanation as to why (3) and
(4) are both true. Therefore, in this substantive sense, they have no
property in common.
It will help to bring out the contrast being invoked here if we
consider two properties distinct from a supposed property of being
true: the property of being a game and the property of being a mammal.
Consider the games of chess and catch. Do both of these have the
property of being a game? Well, in one sense, they do: they are both
games that people can play. On the other hand, however, there is no
common explanation as to why each counts as a game (cf. Wittgenstein
1953, §66). We might then say that being a game is not a
substantive property and mean just this. But now compare the property
of being a mammal. If two things are mammals, they have the property
of being a mammal, but in addition there is some common explanation as
to why they are both mammals - both are descended from the same
family of creatures, say. According to one development of
deflationism, the property of being true is more like the property of
being a game than it is like the property of being a mammal.
The comparisons between being true, being a game, and being a mammal
are suggestive, but they still do not nail down exactly what it means
to say that truth is not a substantive property. The contemporary
literature on deflationism contains several different approaches to
the idea. One such approach, which we will consider in detail in
Section 4.1, involves denying that truth plays an explanatory role.
Another approach, pursuing an analogy between being true and existing,
describes truth as a "logical property" (for example,
Field 1992, 322; Horwich 1998a, 37; Künne 2003, 91). A further
approach appeals to
David Lewis's
(1983, 1986) view that, while every set of entities underwrites a
property, there is a distinction between *sparse*, or
*natural*, properties and more motley or disjointed
*abundant* properties. On this approach, a deflationist might
say that there is an abundant property of being true rather than a
sparse one (cf. Edwards 2013, Asay 2014, Kukla and Winsberg 2015, and
Armour-Garb forthcoming). A different metaphysical idea may be to
appeal to the contemporary discussion of grounding and the distinction
between groundable and ungroundable properties. In this context, a
groundable property is one that is capable of being grounded in some
other property, whether or not it is in fact grounded; an ungroundable
property is a property that is not groundable (see Dasgupta 2015, 2016
and Rabin 2020). From this point of view, a deflationist might say
that being true is an ungroundable property. Hence it is unlike
ordinary, sparse/natural properties, such as being iron, which are
both capable of being grounded and are grounded, and it is also unlike
fundamental physical properties, such as being a lepton, which are
capable of being grounded (in some other possible world) but are not
(actually) grounded. We will not try to decide here which of these
different views of properties is correct but simply note that
deflationists who want to claim that there is a truth property, just
not a substantive one, have options for explaining what this
means.
### 1.3 The Utility of the Concept of Truth
In light of the two central ideas discussed so far - the idea
that the equivalence schema is exhaustive of the notion of truth and
the idea that there is no substantive truth property - you might
wonder why we have a concept of truth in the first place. Contrast
this question with the question of why we have the concept of
mammals: there, a natural suggestion is that the concept allows us to
think and talk about mammals and to develop theories of them. For deflationism,
however, as we have just seen, being true is completely different from
being a mammal; why then do we have a concept of truth? (An analogous
question might be asked about the word 'true', i.e., why
we have the word 'true' and related words in our language
at all. In the following discussion we will not discriminate between
questions about the concept of truth and questions about the word
'true' and will move back and forth between them.)
The question of why we have the concept of truth allows us to
introduce a third central theme in deflationism, which is an emphasis
not merely on the property of truth but on the concept of truth, or,
equivalently for present purposes, on the word 'true' (cf.
Leeds 1978). Far from supposing that there is no point having the
concept of truth, deflationists are usually at pains to point out that
anyone who has the concept of truth is in possession of a very useful
concept indeed; in particular, anyone who has this concept is in a
position to express generalizations that would otherwise require
non-standard logical devices, such as sentential variables and
quantifiers for them.
Suppose, for example, that Jones for whatever reason decides that
Smith is an infallible guide to the nature of reality. We might then
say that Jones believes everything that Smith says. To say this much,
however, is not to capture the content of Jones's belief. In
order to do that we need some way of generalizing on the embedded
sentence positions in a claim like:
(5)
If Smith says that birds are dinosaurs, then birds are
dinosaurs.
To generalize on the relationship indicated in (5), beyond just what
Smith says about birds to anything she might say, what we want to do
is generalize on the embedded occurrences of 'birds are
dinosaurs'. So, we need a (declarative) sentential variable,
'\(p\)', and a universal quantifier governing it. What we
want is a way of capturing something along the lines of
(6)
For all \(p\), if Smith says that \(p\), then \(p.\)
The problem is that we cannot formulate this in English with our most
familiar way of generalizing because the '\(p\)' in the
consequent is in a sentence-in-use position, rather than in a
mentioned or nominalized context (as it is in the antecedent), meaning that this
formal variable cannot be replaced with a familiar English
object-variable expression, e.g., 'it'.
This is where the concept of truth comes in. What we do in order to
generalize in the way under consideration is employ the truth
predicate with an object variable to produce the sentence,
(7)
For all \(x\), if what Smith said \(= x\), then \(x\) is
true.
Re-rendering the quasi-formal (7) into natural language yields,
(8)
Everything is such that, if it is what Smith said, then it is
true.
Or, to put the same thing more colloquially:
(9)
Everything Smith says is true.
The equivalence schema (ES-prop) allows us to use (7) (and therefore
(9)) to express what it would otherwise require the unstatable (6) to
express. For, on the basis of the schema, there is always an
equivalence between whatever goes in for a sentence-in-use occurrence
of the variable '\(p\)' and a context in which that
filling of the sentential variable is nominalized. This reveals how
the truth predicate can be used to provide a surrogate for sentential
variables, simulating this non-standard logical device while still
deploying the standard object variables already available in ordinary
language ('it') and the usual object quantifiers
('everything') that govern them.
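The recovery of particular instances from the generalization can be made explicit with a short derivation (our reconstruction of the reasoning just described, using the entry's numbered examples; not part of the original text):

```latex
% Sketch: how (9), via (ES-prop), recovers an instance like (5).
\begin{align*}
&\forall x\,(\text{Smith said } x \rightarrow x \text{ is true})
   && \text{the generalization (9)}\\
&\text{Smith said } \langle\text{birds are dinosaurs}\rangle
   && \text{supposition}\\
&\langle\text{birds are dinosaurs}\rangle \text{ is true}
   && \text{instantiation and modus ponens}\\
&\text{birds are dinosaurs}
   && \text{by (ES-prop), left to right}
\end{align*}
```

Since (ES-prop) licenses the final step for any proposition Smith might assert, (9) does the generalizing work that the unstatable (6) was wanted for.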
This is how the use of the truth predicate in (9) gives us the content
of Jones's belief. And the important point for deflationists is
that we could not have stated the content of this belief unless we had
the concept of truth (the expression 'true'). In fact, for
most deflationists, it is this feature of the concept of truth -
its role in the formation of these sorts of generalizations -
that explains why we have a concept of truth at all. This is, as it is
often put, the *raison d'être* of the concept of
truth (cf. Field 1994a and Horwich 1998a).
## 2. History of Deflationism
According to Michael Dummett (1959 [1978]), deflationism originates
with
Gottlob Frege,
as expressed in this famous quote by the latter:
>
> It is ... worthy of notice that the sentence 'I smell the
> scent of violets' has just the same content as the sentence
> 'It is true that I smell the scent of violets'. So it
> seems, then, that nothing is added to the thought by my ascribing to
> it the property of truth. (Frege 1918, 6)
>
This passage suggests that Frege embraces a deflationary view in
position \(\mathbf{B}\) (in the chart above), namely, an analytic
propositionalist version of deflationism. But this interpretation of
his view is not so clear. As Scott Soames (1999, 21ff) points out,
Frege (ibid.) distinguishes what we will call "opaque"
truth ascriptions, like 'My conjecture is true', from
transparent truth-ascriptions, like the one mentioned in the quote
from Frege. Unlike with transparent cases, in opaque instances, one
cannot simply strip 'is true' away and obtain an
equivalent sentence, since the result is not even a sentence at
all.
Frank Ramsey
is the first philosopher to have suggested a position like
\(\mathbf{B}\) (although, despite sometimes talking in terms of
propositions, he does not really accept propositions as abstract
entities; see Ramsey 1927, 34-5 and 1929, 7):
>
> Truth and falsity are ascribed primarily to propositions. The
> proposition to which they are ascribed may be either explicitly given
> or described. Suppose first that it is explicitly given; then it is
> evident that 'It is true that Caesar was murdered' means
> no more than that Caesar was murdered, and 'It is false that
> Caesar was murdered' means no more than Caesar was not murdered.
> .... In the second case in which the proposition is described and
> not given explicitly we have perhaps more of a problem, for we get
> statements from which we cannot in ordinary language eliminate the
> words 'true' or 'false'. Thus if I say
> 'He is always right', I mean that the propositions he
> asserts are always true, and there does not seem to be any way of
> expressing this without using the word 'true'. But suppose
> we put it thus 'For all \(p\), if he asserts \(p\), \(p\) is
> true', then we see that the propositional function \(p\) is true
> is simply the same as \(p\), as e.g. its value 'Caesar was
> murdered is true' is the same as 'Caesar was
> murdered'. (Ramsey 1927, 38-9)
>
On Ramsey's redundancy theory (as it is often called), the truth
operator, 'it is true that' adds no content when prefixed
to a sentence, meaning that in the instances of what we can think of
as the truth-operator version of (ES),
(ES-op)
It is true that \(p\) iff \(p\),
the left- and right-hand sides are meaning-equivalent. But Ramsey
extends his redundancy theory beyond just the transparent instances of
truth-talk, maintaining that the truth predicate is, in principle,
eliminable even in opaque ascriptions of the form '\(B\) is
true' (which he (1929, 15, n. 7) explains in terms of sentential
variables *via* a formula along the lines of '\(\exists
p\) (\(p \amp B\) is a belief that \(p\))') and in explicitly
quantificational instances, like 'Everything Einstein said is
true' (explained as above). As the above quote illustrates,
Ramsey recognizes that in truth ascriptions like these the truth
predicate fills a grammatical need, which keeps us from eliminating it
altogether, but he held that even in these cases it contributes no
content to anything said using it.
A.J. Ayer
endorses a view similar to Ramsey's. The following quote shows
that he embraces a meaning equivalence between the two sides of the
instances of both the sentential (position \(\mathbf{A}\)) and
something like (since, despite his use of the expression
'proposition' to mean *sentence*, he also considers
instances of truth-talk involving the prefix 'it is true
that', which could be read as employing
'that'-clauses) the propositional (position
\(\mathbf{B}\)) version of (ES).
>
> [I]t is evident that in a sentence of the form "\(p\) is
> true" or "it is true that \(p\)" the reference to
> truth never adds anything to the sense. If I say that it is true that
> Shakespeare wrote *Hamlet*, or that the proposition
> "Shakespeare wrote *Hamlet*" is true, I am saying
> no more than that Shakespeare wrote *Hamlet*. Similarly, if I
> say that it is false that Shakespeare wrote the *Iliad*, I am
> saying no more than that Shakespeare did not write the *Iliad*.
> And this shows that the words 'true' and
> 'false' are not used to stand for anything, but function
> in the sentence merely as assertion and negation signs. That is to
> say, truth and falsehood are not genuine concepts. Consequently, there
> can be no logical problem concerning the nature of truth. (Ayer 1935,
> 28. Cf. Ayer 1936 [1952, 89])
>
Ludwig Wittgenstein,
under Ramsey's influence, makes claims with strong affinities
to deflationism in his later work. We can see a suggestion of an
endorsement of deflationary positions \(\mathbf{A}\) or \(\mathbf{B}\)
in his (1953, §136) statement that "\(p\) is true \(=
p\)" and "\(p\) is false \(=\) not-\(p\)", indicating
that ascribing truth (or falsity) to a statement just amounts to
asserting that very proposition (or its negation). Wittgenstein also
expresses this kind of view in manuscripts from the 1930s, where he
claims, "What he says is true = Things are as he says" and
"[t]he word 'true' is used in contexts such as
'What he says is true', but that says the same thing as
'He says "\(p\)," and \(p\) is the
case'" (Wittgenstein 1934 [1974, 123] and 1937 [2005,
61], respectively).
Peter Strawson's
views on truth emerge most fully in his 1950 debate with
J.L. Austin.
In keeping with deflationary position \(\mathbf{B}\), Strawson (1950,
145-7) maintains that an utterance of 'It is true that
\(p\)' just makes the same statement as an utterance of
'\(p\)'. However, in Strawson 1949 and 1950, he further
endorses a *performative* view, according to which an utterance
of a sentence like 'That is true' mainly functions to do
something *beyond* mere re-assertion. This represents a shift
to an account of what the expression 'true' *does*,
from traditional accounts of what truth is, or even accounts of what
'true' means.
Another figure briefly mentioned above who looms large in the
development of deflationism is
Alfred Tarski,
with his (1935 [1956] and 1944) identification of a precise criterion
of adequacy for any formal definition of truth: its implying all of
the instances of what is sometimes called "Convention (T)"
or "the (T)-schema",
(T)
\(X\) is true if, and only if, \(p\).
To explain this schema a bit more precisely, in its instances the
'\(X\)' gets replaced by a name of a sentence from the
object-language *for* which the truth predicate is being
defined, and the '\(p\)' gets replaced by a sentence that
is a translation of that sentence into the meta-language in which the
truth predicate is being defined. For Tarski, the 'if and only
if' deployed in any instance of (T) expresses just a material
equivalence, putting his view at position \(\mathbf{E}\) in the chart
from Section 1.1. Although this means that Tarski is not a
deflationist himself (cf. Field 1994a, Ketland 1999, and Patterson
2012), there is no denying the influence that his work and its
promotion of the (T)-schema have had on deflationism. Indeed, some
early deflationists, such as
W.V.O. Quine
and Stephen Leeds, are quite explicit about taking inspiration from
Tarski's work in developing their "disquotational"
views, as is Horwich in his initial discussion of deflationism. Even
critics of deflationism have linked it with Tarski: Hilary Putnam
(1983b, 1985) identifies deflationists as theorists who "refer
to the work of Alfred Tarski and to the semantical conception of
truth" and who take Tarski's work "as a solution to
the philosophical problem of truth".
The first fully developed deflationary view is the one that Quine
(1970 [1986, 10-2]) presents. Given his skepticism about the
existence of propositions, Quine takes sentences to be the primary
entities to which 'is true' may be applied, making the
instances of (ES-sent) the equivalences that he accepts. He defines a
category of sentence that he dubs "eternal", viz.,
sentence types that have all their indexical/contextual factors
specified, the tokens of which always have the same truth-values. It
is for these sentences that Quine offers his disquotational view. As
he (ibid., 12) puts it,
>
>
> This cancellatory force of the truth predicate is explicit in
> Tarski's paradigm:
>
>
>
> >
> > 'Snow is white' is true if and only if snow is white.
> >
>
>
>
> Quotation marks make all the difference between talking about words
> and talking about snow. The quotation is a name of a sentence that
> contains the name, namely 'snow', of snow. By calling the
> sentence true, we call snow white. The truth predicate is a device of
> disquotation.
>
>
>
As this quote suggests, Quine sees Tarski's formal work on
defining truth predicates for formalized languages and his criterion
of adequacy for doing so as underwriting a disquotational analysis of
the truth predicate. This makes Quine's view a different kind of
position-\(\mathbf{A}\) account, since he takes the left-hand side of
each instance of (ES-sent) to be, as we will put it (since Quine
rejects the whole idea of meaning and meaning equivalence), something
like a mere syntactic variant of the right-hand side. This also means
that Quine's version of deflationism departs from inflationism
by rejecting the latter's presupposition that truth predicates
function to *describe* the entities they get applied to, the
way that other predicates, such as 'is metal', do.
Quine also emphasizes the importance of the truth predicate's
role as a means for expressing the kinds of otherwise inexpressible
generalizations discussed in Section 1.3. As he (1992, 80-1)
explains it,
>
> The truth predicate proves invaluable when we want to generalize along
> a dimension that cannot be swept out by a general term ... The
> harder sort of generalization is illustrated by generalization on the
> clause 'time flies' in 'If time flies then time
> flies'.... We could not generalize as in 'All men are
> mortal' because 'time flies' is not, like
> 'Socrates', a name of one of a range of objects (men) over
> which to generalize. We cleared this obstacle by *semantic
> ascent*: by ascending to a level where there were indeed objects
> over which to generalize, namely linguistic objects, sentences.
>
So, if we want to generalize on embedded sentence-positions within
some sentences, "we ascend to talk of truth and sentences"
(Quine 1970 [1986, 11]). This maneuver allows us to "affirm some
infinite lot of sentences that we can demarcate only by talking about
the sentences" (ibid., 12).
Leeds (1978) (following Quine) makes it clear how the truth predicate
is crucial for extending the expressive power of a language, despite
the triviality that disquotationalism suggests for the transparent
instances of truth-talk. He (ibid., 121) emphasizes the logical role
of the truth predicate in the expression of certain kinds of
generalizations that would otherwise be inexpressible in natural
language. Leeds, like Quine, notes that a central utility of the truth
predicate, in virtue of its yielding every instance of (ES-sent), is
the simulation of quantification into sentence-positions. But, unlike
Quine, Leeds glosses this logical role in terms of expressing
potentially infinite conjunctions (for universal generalization) or
potentially infinite disjunctions (for existential generalization).
The truth predicate allows us to use the ordinary devices of
first-order logic in ways that provide surrogates for the non-standard
logical devices this would otherwise require. Leeds is also clear
about accepting the consequences of deflationism, that is, of taking
the logically expressive role of the truth predicate to exhaust its
function. In particular, he points out that there is no need to think
that truth plays any sort of *explanatory* role. We will return
to this point in Section 4.1.
Dorothy Grover, Joseph Camp, and Nuel Belnap (1975) develop a
different variety of deflationism that they call a
"prosentential theory". This theory descends principally
from Ramsey's views. In fact, Ramsey (1929, 10) made what is
probably the earliest use of the term 'pro-sentence' in
his account of the purpose of truth-talk. Prosentences are explained
as the sentence-level analog of pronouns. As in the case of pronouns,
prosentences inherit their content anaphorically from other linguistic
items, in this case from some sentence typically called the
prosentence's "anaphoric antecedent" (although it
need not actually occur before the prosentence). As Grover, *et
al*. develop this idea, this content inheritance can happen in two
ways. The most basic one is called "lazy" anaphora. Here
the prosentence could simply be replaced with a repetition of its
antecedent, as in the sort of case that Strawson emphasized, where one
says "That is true" after someone else has made an
assertion. According to Grover, *et al*., this instance of
truth-talk is a prosentence that inherits its content anaphorically
from the other speaker's utterance, so that the two speakers
assert the same thing. As a result, Grover, *et al.* would take
the instances of (ES) to express meaning equivalences, but since they
(ibid., 113-5) do not take the instances of truth-talk on the
left-hand sides of these equivalences to say anything *about*
any named entities, they would not read (ES) as either (ES-sent) or
(ES-prop) on their standard interpretations. So, while their
prosententialism is similar to views in position \(\mathbf{A}\) or in
position \(\mathbf{B}\) in the chart above, it is also somewhat
different from both.
Grover, *et al*.'s project is to develop the theory
"that 'true' can be thought of always as part of a
prosentence" (ibid., 83). They explain that 'it is
true' and 'that is true' are generally available
prosentences that can go into any sentence-position. They consider
these expressions to be "atomic" in the sense of not being
susceptible to a subject-predicate analysis giving the
'that' or 'it' separate references (ibid.,
91). Both of these prosentences can function in the "lazy"
way, and Grover, *et al*. claim (ibid., 91-2, 114) that
'it is true' can also operate as a quantificational
prosentence (i.e., a sentential variable), for example, in a
re-rendering of a sentence like,
(9)
Everything Smith says is true.
in terms of a "long-form" equivalent claim, such as
(8')
Everything is such that, if Smith says that it is true, then it is
true.
One immediate concern that this version of prosententialism faces
pertains to what one might call the "paraphrastic
gymnastics" that it requires. For example, a sentence like
'It is true that humans are causing climate change' is
said to have for its underlying logical form the same form as
'Humans are causing climate change. That is true' (ibid.,
94). As a result, when one utters an instance of truth-talk of the
form 'It is true that \(p\)', one states the content of
the sentence that goes in for '\(p\)' *twice*. In
cases of quotation, like "'Birds are dinosaurs' is
true", Grover, *et al*. offer the following rendering,
'Consider: Birds are dinosaurs. That is true' (ibid.,
103). But taking this as the underlying form of quotational instances
of truth-talk requires rejecting the standard view that putting
quotation marks around linguistic items forms names of those items.
These issues raise concerns regarding the adequacy of this version of
prosententialism.
## 3. The Varieties of Contemporary Deflationism
In this section, we explain the details of three prominent,
contemporary accounts and indicate some concerns peculiar to each.
### 3.1 Minimalism
Minimalism is the version of deflationism that diverges the least from
inflationism because it accepts many of the standard inflationary
presuppositions, including that 'is true' is a predicate
used to describe entities as having (or lacking) a truth property.
What makes minimalism a version of deflationism is its denial of
inflationism's final assumption, namely, that the property
expressed by the truth predicate has a substantive nature. Drawing
inspiration from Leeds (1978), Horwich (1982, 182) actually coins the
term 'deflationism' while describing "the
deflationary redundancy theory which denies the existence of surplus
meaning and contends that Tarski's schema ["\(p\)"
is true iff \(p\)] is quite sufficient to capture the concept."
Minimalism, Horwich's mature deflationary position (1998a [First
Edition, 1990]), adds to this earlier view. In particular, Horwich
(ibid., 37, 125, 142) comes to embrace the idea that 'is
true' does express a property, but it is merely a "logical
property" (cf. Field 1992), rather than any substantive or
naturalistic property of truth with an analyzable underlying nature
(Horwich 1998a, 2, 38, 120-1).
On the basis of natural language considerations, Horwich (ibid.,
2-3, 39-40) holds that propositions are what the alethic
locutions describe directly. Any other entities that we can properly
call true are so only derivatively, on the basis of having some
relation to true propositions (ibid., 100-1 and Horwich 1998b,
82-5). This seems to position Horwich well with respect to
explaining the instances of truth-talk that cause problems for Quine
and Leeds, e.g., those about beliefs and theories. Regarding truth
applied directly to propositions, however, Horwich (1998a, 2-3)
still explicitly endorses the thesis that Leeds emphasizes about the
utility of the truth predicate (and, Horwich adds, the concept it
expresses), namely, that it "exists solely for the sake of a
certain logical need". While Horwich (ibid., 138-9) goes
so far as to claim that the concept of truth has a
"non-descriptive" function, he does not follow Quine and
Leeds all the way to their rejection of the assumption that the
alethic predicates function to describe truth-bearers. Rather, his
(ibid., 31-3, 37) point of agreement with them is that the
*main* function of the truth predicate is its role in providing
a means for generalizing on embedded sentence positions, rather than
some role in the indication of specifically truth-involving states of
affairs. Even so, Horwich (ibid., 38-40) still contends that the
instances of truth-talk do describe propositions, in the sense that
they make statements *about* them, and they do so by
attributing a property to those propositions.
The version of (ES) that Horwich (1998a, 6) makes the basis of his
theory is what he also calls "the equivalence schema",
(E)
It is true that \(p\) if and only if \(p\).
Since he takes truth-talk to involve describing propositions with a
predicate, Horwich considers 'it is true that \(p\)' to be
just a trivial variant of 'The proposition that \(p\) is
true', meaning that his (E) is a version of (ES-prop) rather
than of Ramsey's (ES-op). He also employs the notation
'\(\langle p\rangle\)' as shorthand specifically for
'the proposition that \(p\)', generating a further
rendering of his equivalence schema (ibid., 10) that we can clearly
recognize as a version of (ES-prop), namely
(E)
\(\langle p\rangle\) is true iff \(p\).
Horwich considers the instances of (E) to constitute the axioms of
both an account of the property of truth and an account of the concept
of truth, i.e., what is meant by the word 'true' (ibid.,
136). According to minimalism, the instances of (E) are explanatorily
fundamental, which Horwich suggests is a reason for taking them to be
necessary (ibid., 21, n. 4). This, combined with his view that the
equivalence schema applies to propositions, places his minimalism in
position \(\mathbf{D}\) in the chart given in Section 1.1. The
instances of (ES-prop) are thus explanatory of the functioning of the
truth predicate (of its role as a de-nominalizer of
'that'-clauses (ibid., 5)), rather than being explained by
that functioning (as the analogous equivalences are for both
disquotationalism and prosententialism). Moreover, Horwich (ibid., 50,
138) claims that they are also conceptually basic and *a
priori*. He (ibid., 27-30, 33, 112) denies that truth admits
of any sort of explicit definition or reductive analysis in terms of
other concepts, such as reference or predicate-satisfaction. In fact,
Horwich (ibid., 10-1, 111-2, 115-6) holds that these
other semantic notions should both be given their own, infinitely
axiomatized, minimalist accounts, which would then clarify the
non-reductive nature of the intuitive connections between them and the
notion of truth.
Horwich (ibid., 27-30) maintains that the infinite axiomatic
nature of minimalism is unavoidable. He (ibid., 25) rejects the
possibility of a finite formulation of minimalism *via* the use
of
substitutional quantification.
On the usual understanding of this non-standard type of
quantification, the quantifiers govern variables that serve to mark
places in linguistic strings, indicating that either all or some of
the elements of an associated substitution class of linguistic items
of a particular category can be substituted in for the variables.
Since it is possible for the variables so governed to take sentences
as their substitution items, this allows for a type of quantification
governing sentence positions in complex sentences. Using this sort of
sentential substitutional quantification, the thought is, one can
formulate a finite general principle that expresses Horwich's
account of truth as follows:
(GT)
\(\forall x\, (x\) is true iff \(\Sigma p (x = \langle p\rangle
\amp p)),\)
where '\(\Sigma\)' is the existential substitutional
quantifier. (GT) is formally equivalent to the formulation that Marian
David (1994, 100) presents as disquotationalism's definition of
'true sentence', here formulated for propositions instead.
Horwich's main reason for rejecting the proposed finite
formulation of minimalism, (GT), is that an account of substitutional
quantifiers seems (contra David 1994, 98-9) to require an appeal
to truth (since the quantifiers are explained as expressing that at
least one or that every item in the associated substitution class
yields a *true sentence* when substituted in for the governed
variables), generating circularity concerns (Horwich 1998a,
25-6).
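To see how (GT) and the substitutional quantifier are supposed to interact, consider a worked instance (our illustration; 'snow is white' is just a stock example sentence):

```latex
% Instantiate (GT) at x = <snow is white>:
\[
\langle \text{snow is white} \rangle \text{ is true} \iff
  \Sigma p \,(\langle \text{snow is white} \rangle
  = \langle p \rangle \wedge p)
\]
% The substituend 'snow is white' for 'p' witnesses the existential
% substitutional quantifier, since <snow is white> = <snow is white>;
% (GT) thus delivers exactly the corresponding (E) instance:
\[
\langle \text{snow is white} \rangle \text{ is true} \iff
  \text{snow is white}
\]
```

Note that explaining what it is for a substituend to "witness" the quantifier is standardly put in terms of its yielding a *true* sentence, which is just the circularity Horwich presses against (GT).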
Moreover, on Horwich's (ibid., 4, n. 1; cf. 25, 32-3)
understanding, the point of the truth predicate is to provide a
surrogate for substitutional quantification and sentence-variables in
natural language, so as "to achieve the effect of generalizing
substitutionally over sentences ... but by means of ordinary
[quantifiers and] variables (i.e., pronouns), which range over
*objects*" (italics original). Horwich maintains that the
infinite "list-like" nature of minimalism poses no problem
for the view's adequacy with respect to explaining all of our
uses of the truth predicate, and the bulk of Horwich 1998a attempts to
establish just that. However, Anil Gupta (1993a, 365) has pointed out
that minimalism's infinite axiomatization in terms of the
instances of (E) for every (non-paradox-inducing) proposition makes it
maximally ideologically complex, in virtue of involving every other
concept. (Moreover, the overtly "fragmented" nature of the
theory also makes it particularly vulnerable to the Generalization
Problem that Gupta has raised, which we discuss in Section 4.5,
below.)
Christopher Hill (2002) attempts to deal with some of the problems
that Horwich's view faces, by presenting a view that he takes to
be a newer version of minimalism, replacing Horwich's
equivalence schema with a universally quantified formula, employing a
kind of substitutional quantification to provide a finite definition
of 'true thought (proposition)'. Hill's (ibid., 22)
formulation of his account,
(S)
For any object \(x\), \(x\) is true if and only if \((\Sigma p)((x
=\) the thought that \(p)\) and \(p)\),
is formally similar to the formulation of minimalism in terms of (GT)
that Horwich rejects, but to avoid the circularity concerns driving
that rejection, Hill's (ibid., 18-22) idea is to offer
introduction and elimination rules in the style of Gerhard Gentzen
(1935 [1969]) as a means for defining the substitutional quantifiers.
Horwich (1998a, 26) rejects even this inference-rule sort of approach,
but he directs his critique against defining *linguistic*
substitutional quantification this way. Hill takes his substitutional
quantifiers to apply to thoughts (propositions) instead of sentences.
But serious concerns have been raised regarding the coherence of this
non-linguistic notion of substitutional quantification (cf. David
2006, Gupta 2006b, Simmons 2006). As a result, it is unclear that
Hill's account is an improvement on Horwich's version of
minimalism.
### 3.2 Disquotationalism
Like minimalism, disquotationalism agrees with inflationary accounts
of truth that the alethic locutions function as predicates, at least
logically speaking. However, as we explained in discussing
Quine's view in Section 2, disquotationalism diverges from
inflationary views (and minimalism) at their shared assumption that
these (alethic) predicates serve to *describe* the entities
picked out by the expressions with which they are combined,
specifically as having or lacking a certain property.
Although Quine's disquotationalism is inspired by Tarski's
recursive method for defining a truth predicate, that method is not
what Quine's view emphasizes. Field's contemporary
disquotationalism further departs from that aspect of Tarski's
work by looking directly to the instances of the (T)-schema that the
recursive method must generate in order to satisfy Tarski's
criterion of material adequacy. Tarski himself (1944, 344-5)
suggests at one point that each instance of (T) could be considered a
"partial definition" of truth and considers (but
ultimately rejects; see Section 4.5) the thesis that a logical
conjunction of all of these partial definitions amounts to a general
definition of truth (for the language that the sentences belonged to).
Generalizing slightly from Tarski, we can call this alternative
approach "(T)-schema disquotationalism", in contrast with
the Tarski-inspired approach that David (1994, 110-1) calls
"recursive disquotationalism". Field (1987, 1994a)
develops a version of (T)-schema disquotationalism that he calls
"pure disquotational truth", focusing specifically on the
instances of his preferred version of (ES), the "disquotational
schema" (Field 1994a, 258),
(T/ES-sent)
"\(p\)" is true if and only if \(p\).
Similar to the "single principle" formulation, (GT),
rejected by Horwich (though a variant of it is endorsed by Hill), Field (ibid., 267) allows
that one could take a "generalized" version of
(T/ES-sent), prefixed with a universal substitutional quantifier,
'\(\Pi\)', as having axiomatic status, or one could
incorporate schematic sentence variables directly into one's
theorizing language and reason directly with (T/ES-sent) as a schema
(cf. ibid., 259). Either way, in setting out his version of
deflationism, Field (ibid., 250), in contrast with Horwich, does not
take the instances of his version of (ES) as fundamental but instead
as following from the functioning of the truth predicate. On
Field's reading of (T/ES-sent), the use of the truth predicate
on the left-hand side of an instance does not add any cognitive
content beyond that which the mentioned utterance has (for the
speaker) on its own when used (as on the right-hand-side of
(T/ES-sent)). As a result, each instance of (T/ES-sent) "holds
of conceptual necessity, that is, by virtue of the cognitive
equivalence of the left and right hand sides" (ibid., 258). This
places Field's deflationism also in position \(\mathbf{A}\) in
the chart from Section 1.1.
Following Leeds and Quine, Field (1999, 533-4) sees the central
utility of a purely disquotational truth predicate to be providing for
the expression of certain "fertile generalizations" that
cannot be made without using the truth predicate but which do not
really involve the notion of truth. Field (1994a, 264) notes that the
truth predicate plays "an important logical role: it allows us
to formulate certain infinite conjunctions and disjunctions that
can't be formulated otherwise [n. 17: at least in a language
that does not contain *substitutional quantifiers*]".
Field's disquotationalism addresses some of the worries that
arose for earlier versions of this variety of deflationism, due to
their connections with Tarski's method of defining truth
predicates. It also explains how to apply a disquotational truth
predicate to ambiguous and indexical utterances, thereby going beyond
Quine's (1970 [1986]) insistence on taking eternal sentences as
the subjects of the instances of (ES-sent) (cf. Field 1994a,
278-81). So, Field's view addresses some of the concerns
that David (1994, 130-66) raises for disquotationalism. However,
an abiding concern about this variety of deflationism is that it is an
account of truth as applied specifically to sentences. This opens the
door to a version of the complaint that Strawson (1950) makes against
Austin's account of truth, that it is not one's act of
stating [here: the sentence one utters] but what thereby gets stated
that is the target of a truth ascription. William Alston (1996, 14)
makes a similar point. While disquotationalists do not worry much
about this, this scope restriction might strike others as problematic
because it raises questions about how we are to understand truth
applied to beliefs or judgments, something that Hill (2002) worries
about. Field (1978) treats beliefs as mental states relating thinkers
to sentences (of a language of thought). But David (1994, 172-7)
raises worries for applying disquotationalism to beliefs, even in the
context of an account like Field's. The view that we believe
sentences remains highly controversial, but it is one that, it seems,
a Field-style disquotationalist must endorse. Similarly, such
disquotationalists must take scientific theories to consist of sets of
sentences, in order for truth to be applicable to them. This too runs
up against Strawson's complaint because it suggests that one
could not state the same theory in a different language. These sorts
of concerns continue to press for disquotationalists.
### 3.3 Prosententialism
As emerges from the discussion of Grover, *et al*. (1975) in
Section 2, prosententialism is the form of deflationism that contrasts
the most with inflationism, rejecting even the latter's initial
assumption that the alethic locutions function as predicates. Partly
in response to the difficulties confronting Grover, *et
al*.'s prosentential account, Robert Brandom (1988 and 1994)
has developed a variation on their view with an important
modification. In place of taking the underlying logic of
'true' as having this expression occur only as a
non-separable component of the semantically atomic prosentential
expressions, 'that is true' and 'it is true',
Brandom treats 'is true' as a separable
*prosentence-forming* *operator*. "It applies to a
term that is a sentence nominalization or that refers to or picks out
a sentence tokening. It yields a prosentence that has that tokening as
its anaphoric antecedent" (Brandom 1994, 305). In this way,
Brandom's account avoids most of the paraphrase concerns that
Grover, *et al*.'s prosententialism faces, while still
maintaining prosententialism's rejection of the contention that
the alethic locutions function predicatively. As a consequence of his
operator approach, Brandom gives quantificational uses of prosentences
a slightly different analysis. He (re)expands instances of truth-talk
like the following,
(9)
Everything Smith says is true.
(10)
Something Jones said is true.
"back" into longer forms, such as
(8\*)
For anything one can say, if Smith says it, then it is true.
(11)
For something one can say, Jones said it, and it is true.
and explains only the second 'it' as involved in a
prosentence. The first 'it' in (8\*) and (11) still
functions as a pro*noun*, anaphorically linked to a set of noun
phrases (sentence nominalizations) supplying objects (sentence
tokenings) as a domain being quantified over with standard (as opposed
to sentential or "propositional") quantifiers (ibid.,
302).
Brandom presents a highly flexible view that takes 'is
true' as a general "denominalizing" device that
applies to singular terms formed from the nominalization of sentences
broadly, not just to pronouns that indicate them. A sentence like
'It is true that humans are causing climate change',
considered *via* a re-rendering as 'That humans are
causing climate change is true', is *already* a
prosentence on his view, as is a quote-name case like
"'Birds are dinosaurs' is true", and an opaque
instance of truth-talk like 'Goldbach's Conjecture is
true'. In this way, Brandom offers a univocal and broader
prosentential account, according to which, "[i]n each use, a
prosentence will have an anaphoric antecedent that determines a class
of admissible substituends for the prosentence (in the lazy case, a
singleton). This class of substituends determines the significance of
the prosentence associated with it" (ibid.). As a result,
Brandom can accept both (ES-sent) and (ES-prop) - the latter
understood as involving no commitment to propositions as entities
- on readings closer to their standard interpretations, taking
the instances of both to express meaning equivalences. Brandom's
account thus seems to be located in both position \(\mathbf{A}\) and
position \(\mathbf{B}\) in the chart from Section 1.1, although, as
with any prosententialist view, it still denies that the instances of
(ES) say anything *about* either sentences or propositions.
Despite its greater flexibility, however, Brandom's account
still faces the central worry confronting prosentential views, namely
that truth-talk really does seem predicative, and not just in its
surface grammatical form but in our inferential practices with it as
well. In arguing for the superiority of his view over that of Grover,
*et al*., Brandom states that "[t]he account of truth
talk should bear the weight of ... divergence of logical from
grammatical form only if no similarly adequate account can be
constructed that lacks this feature" (ibid., 304). One might
find it plausible to extend this principle beyond grammatical form, to
behavior in inferences as well. This is an abiding concern for
attempts to resist inflationism by rejecting its initial assumption,
namely, that the alethic locutions function as predicates.
## 4. Objections to Deflationism
In the remainder of this article, we consider a number of objections
to deflationism. These are by no means the only objections that have
been advanced against the approach, but they seem to be particularly
obvious and important ones.
### 4.1 The Explanatory Role of Truth
The first objection starts from the observation that (a) in certain
contexts an appeal to the notion of truth appears to have an
explanatory role and (b) deflationism seems to be inconsistent with
that appearance. Some of the contexts in which truth seems to have an
explanatory role involve philosophical projects, such as the
theory of meaning
(which we will consider below) or explaining the nature of
knowledge.
In these cases, the notion of explanation at issue is not so much
causal as it is *conceptual* (see Armour-Garb and Woodbridge
forthcoming, for more on this). But the notion of truth seems also
sometimes to play a *causal* explanatory role, especially with
regard to explaining various kinds of success - mainly the
success of scientific theories/method (cf. Putnam 1978 and Boyd 1983)
and of people's behavior (cf. Putnam 1978 and Field 1987), but
also the kind of success involved in learning from others (Field
1972). The causal-explanatory role that the notion of truth appears to
play in accounts of these various kinds of success has seemed to many
philosophers to constitute a major problem for deflationism. For
example, Putnam (1978, 20-1, 38) claims, "the notions of
'truth' and 'reference' have a
causal-explanatory role in ... an *explanation* of the
behavior of scientists and the success of science", and
"the notion of truth can be used in causal explanations -
the success of a man's behavior may, after all, depend on the
fact that certain of his beliefs are *true* - and the
formal logic of 'true' [the feature emphasized by
deflationism] is not all there is to the notion of
*truth*".
While a few early arguments against deflationism focus on the role of
truth in explanations of the success of science (see Williams 1986 and
Fine 1984a, 1984b for deflationary responses to Putnam and Boyd on
this), according to Field (1994a, 271), "the most serious worry
about deflationism is that it can't make sense of the
explanatory role of truth conditions: e.g., their role in explaining
behavior, or their role in explaining the extent to which behavior is
successful". While few theorists endorse the thesis that
explanations of behavior in general need to appeal to the notion of
truth (even a pre-deflationary Field (1987, 84-5) rejects this,
but see Devitt 1997, 325-330, for an opposing position),
explanations of the latter, i.e., of behavioral *success*,
still typically proceed in terms of an appeal to truth. This poses a
*prima facie* challenge to deflationary views. To illustrate
the problem, consider the role of the truth-value of an
individual's belief in whether that person succeeds in
satisfying her desires. Let us suppose that Mary wants to get to a
party, and she believes that it is being held at 1001 Northside
Avenue. If her belief is true, then, other things being equal, she is
likely to get to the party and get what she wants. But suppose that
her belief is false, and the party is in fact being held at 1001
Southside Avenue. Then it would be more likely, other things being
equal, that she won't get what she wants. In an example of this
sort, the truth of her belief seems to be playing a particular role in
explaining why she gets what she wants.
Assuming that Mary's belief is true, and she gets to the party,
it might seem natural to say that the latter success occurs
*because* her belief is true, which might seem to pose a
problem for deflationists. However, truth-involving explanations of
particular instances of success like this don't really pose a
genuine problem. This is because if we are told the specific content
of the relevant belief, it is possible to replace the apparently
explanatory claim that the belief is true with an equivalent claim
that does not appeal to truth. In Mary's particular case, we
could replace i) the claim that she believes that the party is being
held at 1001 Northside Avenue, and her belief is true, with ii) the
claim that she believes that the party is being held at 1001 Northside
Avenue, and the party *is* being held at 1001 Northside Avenue. A
deflationist can claim that the appeal to truth in the explanation of
Mary's success just provides an expressive convenience
(including, perhaps, the convenience of expressing, by saying just
that what Mary believed was true, what would otherwise require an
infinite disjunction of conjunctions like ii), if one did not know
exactly which belief Mary acted on) (cf. Horwich 1998a, 22-3,
44-6).
While deflationists seem to be able to account for appeals to truth in
explanations of particular instances of success, the explanatory-role
challenge to deflationism also cites the explanatory role that an
appeal to truth appears to play in explaining the phenomenon of
behavioral success more generally. An explanation of this sort might
take the following form:
>
>
> [1]
> People act (in general) in such a way that their goals will be
> obtained (as well as possible in the given situation), or in such a
> way that their expectations will not be frustrated, ...
> *if* their beliefs are true.
> [2]
> Many beliefs [people have about how to attain their goals]
> *are* true.
> [3]
> So, as a consequence of [1] and [2], people have a tendency to
> attain certain kinds of goals. (Putnam 1978, 101)
>
>
The generality of [1] in this explanation seems to cover more cases
than any definite list of actual beliefs that someone has could
include. Moreover, the fact that [1] supports counterfactuals by
applying to whatever one might possibly believe (about attaining
goals) suggests that it is a law-like generalization. If the truth
predicate played a fundamental role in the expression of an
explanatory *law*, then deflationism would seem to be
unsustainable.
A standard deflationary response to this line of reasoning involves
rejecting the thesis that [1] is a law, seeing it (and truth-involving
claims like it) instead as functioning similarly to how the claim
'What Mary believes is true' functions in an explanation
of her particular instance of behavioral success, just expressing an
even more indefinite, and thus potentially infinite claim. The latter
is what makes a claim like [1] seem like an explanatory law, but even
considering this indefiniteness, the standard deflationary account of
[1] claims that the function of the appeal to the notion of truth
there is still just to express a kind of generalization. One way to
bring out this response is to note that, similar to the deflationary
"infinite disjunction" account of the claim 'What
Mary believes is true', generalizations of the kind offered in
[1] entail infinite *conjunctions* of their instances, which
are claims that can be formulated without appeal to truth. For
example, in the case of explaining someone, \(A\), accomplishing their
goal of getting to a party, deflationists typically claim that the role
of citing possession of a true belief is really just to express an
infinite conjunction with something like the following form:
>
> If \(A\) believes that the party is at 1001 Northside Avenue, and the
> party is at 1001 Northside Avenue, then \(A\) will get what they want;
> and if \(A\) believes that the party is at 1001 Southside Avenue, and
> the party is at 1001 Southside Avenue, then \(A\) will get what they
> want; and if \(A\) believes that the party is at 17 Elm Street, and the
> party is at 17 Elm Street, then \(A\) will get what they want; ... and so
> on.
>
The equivalence schema (ES) allows one to capture this infinite
conjunction (of conditionals) in a finite way. For, on the basis of
the schema, one can reformulate the infinite conjunction as:
>
> If \(A\) believes that the party is at 1001 Northside Avenue, and that
> the party is at 1001 Northside Avenue is true, then \(A\) will get what
> they want; and if \(A\) believes that the party is at 1001 Southside
> Avenue, and that the party is at 1001 Southside Avenue is true, then
> \(A\) will get what they want; and if \(A\) believes that the party is
> at 17 Elm Street, and that the party is at 17 Elm Street is true, then
> \(A\) will get what they want; ... and so on.
>
In turn, this (ES)-reformulated infinite conjunction can be expressed
as a finite statement with a universal quantifier ranging over
propositions:
>
> For every proposition \(x\), if what \(A\) believes \(= x\), and \(x\)
> is true, then \(A\) will get what they want, other things being equal.
>
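The generalizing move can be displayed schematically. In the following formalization (ours, with hypothetical predicates \(\mathrm{Bel}\) for belief and \(\mathrm{Success}\) for getting what one wants), a single quantified claim does the work of the infinite conjunction:

```latex
% Each conjunct has the form: if A believes <p> and p, then A succeeds.
% Via (ES), 'p' is traded for 'True(<p>)', and the lot is captured by:
\forall x\, \big( \mathrm{Bel}(A, x) \wedge \mathrm{True}(x)
    \rightarrow \mathrm{Success}(A) \big)
% read: for every proposition x, if A believes x and x is true,
% then A will get what they want, other things being equal.
```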
The important point for a deflationist is that one could not express
the infinite conjunction regarding the agent's beliefs and
behavioral success unless one had the concept of truth. But
deflationists also claim that this is all that the notion of truth is
doing here and in similar explanations (cf. Leeds 1978, 1995; Williams
1986; Horwich 1998a).
How successful is this standard deflationary response? There are
several critiques in the literature. Some (e.g., Damnjanovic 2005)
argue that there is no distinction in the first place between
appearing in a causal-explanatory generalization and being a
causal-explanatory property. After all, suppose it is a true
generalization that metal objects conduct electricity. That would
normally be taken as sufficient to show that being metal is a
causal-explanatory property that one can cite in explaining why
something conducts electricity. But isn't this a counter, then,
to deflationism's thesis that, assuming there is a property of
truth at all, it is at most an insubstantial one? If a property is a
causal or explanatory property, after all, it is hard to view it as
insubstantial.
The reasoning at issue here may be presented conveniently by expanding
on the general argument considered above and proceeding from an
apparently true causal generalization to the falsity of deflationism
(ibid.):
>
>
> P1.
> If a person \(A\) has true beliefs, they will get what they want,
> other things being equal.
> C1.
> Therefore, if \(A\) has beliefs with the property of being true,
> \(A\) will get what they want other things being equal.
> C2.
> Therefore, the property of being true appears in a
> causal-explanatory generalization.
> C3.
> Therefore, the property of being true is a causal-explanatory
> property.
> C4.
> Therefore, deflationism is false.
>
>
Can a deflationist apply the standard deflationary response to this
argument? Doing so would seem to involve rejecting the inference from
C2 to C3. After all, the standard reply would say that the role that
the appeal to truth plays in P1, the apparent causal generalization,
is simply its generalizing role of expressing a potentially infinite,
disjointed conjunction of unrelated causal connections (cf. Leeds
1995). So, applying this deflationary response basically hinges on the
plausibility of rejecting the initial assumption that there is no
distinction between appearing in a causal-explanatory generalization
and being a causal-explanatory property.
It is worth noting two other responses beyond the standard one that a
deflationist might make to the reasoning just set out. The first
option is to deny the step from P1 to C1. This inference involves the
explicit introduction of the property of being true, and, as we have
seen, some deflationists deny that there is a truth property at all
(cf. Quine 1970 [1986], Grover *et al.* 1975, Leeds 1978,
Brandom 1994). But, as we noted above, the idea that there is no truth
property may be difficult to sustain given the apparent fact that
'is true' functions grammatically as a predicate.
The second option is to deny the final step from C3 to C4 and concede
that there is a sense in which truth is a causal-explanatory property
and yet say that it is still not a substantive property (cf.
Damnjanovic 2005). For example, some philosophers (e.g., Friedman
1974, van Fraassen 1980, Kitcher 1989, Jackson and Pettit 1990) have
offered different understandings of
scientific explanation
and causal explanation, according to which being a causal and
explanatory property might not conflict with being insubstantial
(perhaps by being an abundant or ungroundable property). This might be
enough to sustain a deflationary position.
The standard deflationary response to the explanatory-role challenge
has also met with criticisms focused on providing explanations of
certain "higher-level" phenomena. Philip Kitcher (2002,
355-60) concludes that Horwich's (1998a, 22-3)
application of the standard response, in his account of how the notion
of truth functions in explanations of behavioral success, misses the
more systematic role that truth plays in explaining *patterns*
of successful behavior, such as when means-ends beliefs flow from a
representational device, like a map. Chase Wrenn (2011) agrees with
Kitcher that deflationists need to explain systematic as opposed to
just singular success, but against Kitcher he argues that
deflationists are actually better off than inflationists on this
front. Will Gamester (2018, 1252-5) raises a different
"higher-level factor" challenge, one based on the putative
inability of the standard deflationary account of the role of truth in
explanations of behavioral success to distinguish between coincidental
and non-coincidental success. Gamester (ibid., 1256-7) claims
that an inflationist could mark and account for the difference between
the two kinds of success with an explanation that appeals to the
notion of truth. But it is not clear that a deflationist cannot also
avail herself of a version of this truth-involving explanation, taking
it just as the way of expressing in natural language what one might
formally express with sentential variables and quantifiers (cf. Ramsey
1927, 1929; Prior 1971, Wrenn 2021, and Armour-Garb and Woodbridge
forthcoming).
### 4.2 Propositions Versus Sentences
We noted earlier that deflationism can be presented in either a
sententialist version or a propositionalist version. Some philosophers
have suggested, however, that the choice between these two versions
constitutes a dilemma for deflationism (Jackson, Oppy, and Smith
1994). The objection is that if deflationism is construed in
accordance with propositionalism, then it is trivial, but if it is
construed in accordance with sententialism, it is false. To illustrate
the dilemma, consider the following claim:
(12)
*Snow is white* is true if and only if snow is white.
Now, does '*snow is white*' in (12) refer to a
sentence or a proposition? If, on the one hand, we take (12) to be
about a sentence, then, assuming (12) can be interpreted as making a
necessary claim, it is false. On the face of it, after all, it takes a
lot more than snow's being white for it to be the case that
'snow is white' is true. In order for 'snow is
white' to be true, it must be the case not only that snow is
white, it must, in addition, be the case that 'snow is
white' *means that* snow is white. But this is a fact
about language that (12) ignores. On the other hand, suppose we take
'*snow is white*' in (12) to denote the proposition
that snow is white. Then the approach looks to be trivial, since the
proposition that snow is white is defined as being the one that is
true just in case snow is white. Thus, deflationism faces the dilemma
of being false or trivial.
One response for the deflationist is to remain with the
propositionalist version of their doctrine and accept its triviality.
A trivial doctrine, after all, at least has the advantage of being
true.
A second response is to resist the suggestion that propositionalist
deflationism is trivial. For one thing, the triviality here does not
have its source in the concept of truth, but rather in the concept of
a proposition. Moreover, even if we agree that the proposition that
snow is white is defined as the one that is true if and only if snow
is white, this still leaves open whether truth is a substantive
property of that proposition; as such it leaves open whether
deflationism or inflationism is correct.
A third response to this dilemma is to accept that deflationism
applies *inter alia* to sentences, but to argue (following
Field 1994a) that the sentences to which it applies must be
*interpreted* sentences, i.e., sentences which already have
meaning attached to them. While it takes more than snow's being white
to make the sentence 'snow is white' true when we think of that
sentence as divorced from its meaning, this is not so clear when we
treat it as having the meaning it in fact has.
### 4.3 Correspondence
It is often said to be a platitude that true statements correspond to
the facts. The so-called "correspondence theory of truth"
is built around this intuition and tries to explain the notion of
truth by appealing to the notions of correspondence and fact. But even
if one does not *build* one's approach to truth around
this intuition, many philosophers regard it as a condition of adequacy
on any approach that it accommodate this correspondence intuition.
It is often claimed, however, that deflationism has trouble meeting
this adequacy condition. One way to bring out the problem here is by
focusing on a particular articulation of the correspondence intuition,
one favored by deflationists themselves (e.g., Horwich 1998a).
According to this way of spelling it out, the intuition that a certain
sentence or proposition "corresponds to the facts" is the
intuition that the sentence or proposition is true *because* of
how the world is; that is, the truth of the proposition is
*explained* by some fact, which is usually external to the
proposition itself. We might express this by saying that someone who
endorses the correspondence intuition so understood would endorse:
(6)
The proposition that snow is white is true *because* snow
is white.
The problem with (6) is that, when we combine it with deflationism
- or at least with a necessary version of that approach -
we can derive something that is plainly false. Anyone who assumes that
the instances of the equivalence schema are necessary would clearly be
committed to the necessary truth of:
(7)
The proposition that snow is white is true iff snow is white.
And, since (7) is a necessary truth, under that assumption, it is very
plausible to suppose that (6) and (7) together entail:
(8)
Snow is white because snow is white.
But (8) is clearly false. The reason is that the 'because'
in (6) and (8) is a causal or explanatory relation, and plausibly such
relations must obtain between distinct relata. But the relata in (8)
are (obviously) not distinct. Hence, (8) is false, and this means that
the conjunction of (6) and (7) must be false, and that deflationism is
inconsistent with the correspondence intuition. To borrow a phrase of
Mark Johnston's (1989) - who mounts a similar argument in
a different context - we might say that if deflationism is true,
then what seems to be a perfectly good explanation in (6) *goes
missing*; if deflationism is true, after all, then (6) is
equivalent to (8), and (8) is not an explanation of anything.
One way a deflationist might attempt to respond to this objection is
by providing a different articulation of the correspondence intuition.
For example, one might point out that the connection between the
proposition that snow is white being true and snow's being white
is not a contingent connection and suggest that this rules out (6) as
a successful articulation of the correspondence intuition. That
intuition (one might continue) is more plausibly given voice by
(6\*)
'Snow is white' is true because snow is white.
However, when (6\*) is conjoined with (7), one cannot derive the
problematic (8), and thus, one might think, the objection from
correspondence might be avoided. Now, certainly this is a possible
suggestion; the problem with it, however, is that a deflationist who
thinks that (6\*) is true is most plausibly construed as holding a
sententialist, rather than a propositionalist, version of
deflationism. A sententialist version of deflationism will supply a
version of (7), viz.:
(7\*)
'Snow is white' is true iff snow is white.
which, at least if it is interpreted as a necessary (or analytic)
truth, will conspire with (6\*) to yield (8). And we are back where we
started.
Another response would be to object that 'because' creates
an *opaque context* - that is, the kind of context within
which one cannot substitute co-referring expressions and preserve
truth. However, for this to work, 'because' must create an
opaque context of the right kind. In general, we can distinguish two
kinds of opaque context: *intensional* contexts, which allow
the substitution of necessarily co-referring expressions but not
contingently co-referring expressions; and
hyperintensional
contexts, which do not even allow the substitution of necessarily
co-referring expressions. If the inference from (6) and (7) to (8) is
to be successfully blocked, it is necessary that 'because'
creates a hyperintensional context. A proponent of the correspondence
objection might try to argue that while 'because' creates
an intensional context, it does not create a hyperintensional context.
But since a hyperintensional reading of 'because' has
become standard fare, this approach remains open to a deflationist and
is not an *ad hoc* fix.
A final, and most radical, response would be to reject the
correspondence intuition outright. This response is not as drastic as
it sounds. In particular, deflationists do not have to say that
someone who says 'the proposition that snow is white corresponds
to the facts' is speaking falsely. Deflationists might do better
by saying that such a person is simply using a picturesque or ornate
way of saying that the proposition is true, where truth is understood
in accordance with deflationism. Indeed, a deflationist can even agree
that, for certain rhetorical or conversational purposes, it might be
more effective to use talk of "correspondence to the
facts". Nevertheless, it is important to see that this response
does involve a burden, since it involves rejecting a condition of
adequacy that many regard as binding.
### 4.4 Truth-Value Gaps
According to some metaethicists
(moral non-cognitivists
or expressivists), moral claims - such as the injunction that
one ought to return people's phone calls - are neither
true nor false. The same situation holds, according to some
philosophers of language, for claims that presuppose the existence of
something which does not in fact exist, such as the claim that the
present King of France is bald; for sentences that are vague, such as
'These grains of sand constitute a heap'; and for
sentences that are paradoxical, such as those that arise in connection
with the
Liar Paradox.
Let us call this thesis *the gap*, since it finds a gap in the
class of sentences between those that are true and those that are
false.
The deflationary approach to truth has seemed to be inconsistent with
the gap, and this has been thought by some (e.g., Dummett 1959 [1978,
4] and Holton 2000) to be an objection. The reason for the apparent
inconsistency flows from a natural way to extend the deflationary
approach from truth to falsity. The most natural thing for a
deflationist to do is to introduce a falsity schema like:
(F-sent)
'\(p\)' is false iff \({\sim}p.\)
Following Holton (1993, 2000), we consider (F-sent) to be the relevant
schema for falsity, rather than some propositional schema, since the
standard understanding of a gappy sentence is as one that does not
express a proposition (cf. Jackson, *et al.* 1994).
With a schema like (F-sent) in hand, deflationists could say things
about falsity similar to what they say about truth: (F-sent) exhausts
the notion of falsity, there is no substantive property of falsity,
the utility of the concept of falsity is just a matter of facilitating
the expression of certain generalizations, etc.
However, there is a seeming incompatibility between (F-sent) and the
gap. Suppose, for *reductio,* that 'S' is a
sentence that is neither true nor false. In that case, it is not the
case that 'S' is true, and it is not the case that
'S' is false. But then, by (ES-sent) and (F-sent), we can
infer that it is not the case that S, and it is not the case that
not-S; in short: \({\sim}\)S and \({\sim}{\sim}\)S, which is a
classical contradiction. Clearly, then, we must give up one of these
things. But which one can we give up consistently with
deflationism?
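The reductio just described can be laid out explicitly (our reconstruction of the steps in the text; the line labels are ours):

```latex
% A gappy sentence 'S' plus (ES-sent) and (F-sent) yields a
% classical contradiction.
\begin{align*}
&1.\ \neg\mathrm{True}(\text{`S'}) \wedge \neg\mathrm{False}(\text{`S'})
    && \text{assumption: `S' is gappy}\\
&2.\ \mathrm{True}(\text{`S'}) \leftrightarrow S
    && \text{instance of (ES-sent)}\\
&3.\ \mathrm{False}(\text{`S'}) \leftrightarrow \neg S
    && \text{instance of (F-sent)}\\
&4.\ \neg S && \text{from 1, 2}\\
&5.\ \neg\neg S && \text{from 1, 3}\\
&6.\ \bot && \text{from 4, 5}
\end{align*}
```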
In the context of ethical non-cognitivism, one possible response to
the apparent dilemma is to distinguish between a deflationary account
of truth and a deflationary account of truth-*aptitude* (cf.
Jackson, *et al.* 1994). By accepting an inflationary account
of the latter, one can claim that ethical statements fail the robust
criteria of "truth-aptitude" (reidentified in terms of
expression of belief), even if a deflationary view of truth still
allows the application of the truth predicate to them, *via*
instances of (ES). In the case of
vagueness,
one might adopt epistemicism about it and claim that vague sentences
actually have truth-values; we just can't know them (cf.
Williamson 1994; for an alternative, see Field 1994b).
With respect to the Liar Paradox, the apparent conflict between
deflationism and the gap has led some (e.g., Simmons 1999) to conclude
that deflationism is hobbled with respect to dealing with the problem,
since most prominent approaches to doing so, stemming from the work of
Saul Kripke (1975), involve an appeal to truth-value gaps. One
alternative strategy a deflationist might pursue in attempting to
resolve the Liar is to offer a non-classical logic. Field 2008 adopts
this approach and restricts the law of the excluded middle. JC Beall
(2002) combines truth-value gaps with Kleene logic (see the entry on
many-valued logic)
and makes use of both weak and strong
negation.
Armour-Garb and Beall (2001, 2003) argue that deflationists can and
should be
dialetheists
and accept that some truthbearers are both true and not true (see
also, Woodbridge 2005, 152-3, on adopting a
paraconsistent logic
that remains "quasi-classical"). By contrast, Armour-Garb
and Woodbridge (2013, 2015) develop a version of the
"meaningless strategy" with respect to the Liar (based on
Grover 1977), which they claim a deflationist can use to dissolve that
paradox and semantic pathology more generally, without accepting
genuine truth-value gaps or giving up classical logic.
### 4.5 The Generalization Problem
Since deflationists place such heavy emphasis on the role of the
concept of truth in expressing generalizations, it seems somewhat
ironic that certain versions of deflationism have been criticized for
being incapable of accounting for generalizations involving
*truth* (Gupta 1993a, 1993b; Field 1994a, 2008; Horwich 1998a
(137-8), 2001; Halbach 1999 and 2011 (57-9); Soames 1999,
Armour-Garb 2004, 2010, 2011). The "Generalization
Problem" (henceforth, GP) captures the worry that a
deflationary account of truth is inadequate for explaining our
commitments to general facts we express with certain uses of
'true'. This raises the question of whether and, if so,
how, deflationary accounts earn the right to endorse such
generalizations.
Although Tarski (1935 [1956]) places great importance on the instances
of his (T)-schema, he comes to recognize that those instances do not
provide a fully adequate way of characterizing truth. Moreover, even
when the instances of (T) are taken as theorems, Tarski (ibid.) points
out that taken all together they are insufficient for proving a
'true'-involving generalization like
(A)
All sentences of the form 'If \(p\), then \(p\)' are
true.
since the collection of the instances of (T) is \(\omega\)-incomplete
(where a theory, \(\theta\), is *\(\omega\)-incomplete* if
\(\theta\) can prove every instance of an open formula
'\(Fx\)' but cannot prove the universal generalization,
'\(\forall xFx\)').
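A toy illustration of \(\omega\)-incompleteness (our example, not Tarski's): take \(\theta\) to be the theory whose axioms are exactly the numeral instances of an open formula.

```latex
% Axioms of theta: F(0), F(1), F(2), ...
% Then theta proves every instance:
\theta \vdash F(\overline{n}) \quad \text{for each natural number } n
% but not the generalization:
\theta \nvdash \forall x\, F(x)
% since theta has (non-standard) models containing an element,
% denoted by no numeral, at which F fails.
```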
We arrive at a related problem when we combine a reliance on the
instances of some version of (ES) with Quine's view about the
functioning and utility of the truth predicate. He (1992, 81)
considers the purpose of (A) to be to express a generalization over
sentences like the following:
(B)
If it is raining, then it is raining.
(C)
If snow is white, then snow is white.
Quine points out that we want to be able to generalize on the embedded
sentences in those conditionals, by semantically ascending,
abstracting logical form, and deriving (A). But, as Tarski (ibid.)
notes, this feat cannot be achieved, given only a commitment to (the
instances of) (T). From (T) and (A), we can prove (B) and (C) but,
given the finitude of deduction, when equipped only with the instances
of (T), we cannot prove (A). As a consequence of the Compactness
Theorem of first-order logic, anything provable from the totality of
the instances of (T) is provable from just finitely many of them, so
any theory that takes the totality of the instances of (T) to
characterize truth will be unable to prove any generalization like
(A).
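The appeal to compactness can be made explicit (a standard reconstruction, not a quotation from Tarski):

```latex
% Suppose, for reductio, that the (T)-instances proved (A). By the
% Compactness Theorem there would be a finite subset T_0 with:
T_0 \subseteq_{\mathrm{fin}} \{\text{instances of (T)}\},
\qquad T_0 \vdash \mathrm{(A)}
% But a finite set of (T)-instances constrains 'true' only on the
% finitely many sentences it mentions, so it has a model in which
% some unmentioned sentence of the form 'If p, then p' falls outside
% the extension of 'true', falsifying (A). Hence no such T_0
% exists, and the (T)-instances do not prove (A).
```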
To address the question of why we need to be able to prove these
truth-involving generalizations, suppose that we accept a proposition
like \(\langle\)Every proposition of the form \(\langle\)if \(p\),
then \(p\rangle\) is true\(\rangle\). Call this proposition
"\(\beta\)". Now take '\(\Gamma\)' to stand
for the collection of propositions that are the instances of
\(\beta\). Horwich (2001) maintains that an account of the meaning of
'true' will be adequate only if it aids in explaining why
we accept the members of \(\Gamma\), where such explanations amount to
proofs of those propositions by, among other things, employing an
explanatory premise that does not explicitly concern the truth
predicate. So, one reason it is important to be able to prove a
'true'-involving generalization is because this is a
condition of adequacy for an account of the meaning of that term. One
might argue that anyone who grasps the concept of truth, and that of
the relevant conditional, should be said to know \(\beta\). But if a
given account of truth, together with an account of the conditional
(along, perhaps, with an account of other logical notions), does not
entail \(\beta\), then it does not provide an acceptable account of
truth.
Here is another reason for thinking that generalizations like
\(\beta\) must be proved. A theory of the meaning of
'true' should explain our acceptance of propositions like
\(\beta\), which, as Gupta (1993a) and Hill (2002) emphasize, should
be knowable *a priori* by anyone who possesses the concept of
truth (and who grasps the relevant logical concepts). But if such a
proposition can be known *a priori* on the basis of a grasp of
the concept of truth (and of the relevant logical concepts), then a
theory that purports to specify the meaning of 'true'
should be able to explain our acceptance of that proposition. But if
an account of the meaning of 'true' is going to do this,
it must be possible to derive the proposition from one or more of the
clauses that constitute our grasp of the concept of truth.
This creates a problem for a Horwichian minimalist. Let us suppose
that \(\beta\) is one of the general propositions that must be
provable. Restricted to the resources available through
Horwich's minimalism, we can show that \(\beta\) cannot be
derived.
If a Horwichian minimalist could derive \(\beta\), it would have to be
derived from the instances of
(E)
\(\langle p\rangle\) is true iff \(p.\)
But there cannot be a valid derivation of a universal generalization
from a set of particular propositions unless that set is inconsistent.
Since, according to Horwich (1998a), every instance of (E) that is
part of his theory of truth is consistent, it follows that there
cannot be a derivation of \(\beta\) from the instances of (E). This is
a purely logical point. As such, considerations of pure logic dictate
that our acceptance of \(\beta\) cannot be explained by
Horwich's account of truth. Since Horwich takes all instances of
the propositional version of (T) (i.e., (ES-prop)) as axioms, he can
prove each of those instances. But, as we have seen, restricted to the
instances of the equivalence schema, he cannot prove the
generalization, \(\beta\), i.e., \(\langle\)Every proposition of the
form \(\langle\)if \(p\) then \(p\rangle\) is true\(\rangle\).
Some deflationists respond to the GP by using a version of (GT) to
formulate their approach:
(GT)
\(\forall x\) \((x\) is true iff \(\Sigma p(x = \langle p\rangle
\wedge p)).\)
In this context, there are two things to notice about (GT). First, it
is not a schema but a universally quantified formula. For this reason,
it is possible to derive a generalization like \(\beta\) from it.
Second, the existential quantifier, '\(\Sigma\)', in (GT)
must be a higher-order quantifier (see the entry on
second-order and higher-order logic)
that quantifies into sentential positions. We mentioned above an
approach that takes this quantifier as a substitutional one, where the
substitution class consists of sentences. We also mentioned
Hill's (2002) alternative version that takes the substitution
class to be the set of all propositions. Künne (2003) suggests a
different approach that takes '\(\Sigma\)' to be an
objectual (domain and values) quantifier ranging over propositions.
However, parallel to Horwich's rejection of (GT) discussed in
Section 3.1, all of these approaches have drawn criticism on the
grounds that the use of higher-order quantifiers to define truth is
circular (cf. Platts 1980, McGrath 2000), and may get the extension of
the concept of truth wrong (cf. Sosa 1993).
An alternative deflationist approach to the GP attempts to show that,
despite appearances, certain deflationary theories do have the
resources to derive the relevant generalizations. Field (1994a,
2001a), for example, suggests that we allow reasoning with schemas
directly and proposes rules that would allow the derivation of
generalizations. Horwich (1998a, 2001) suggests a more informal
approach according to which we are justified in deriving \(\beta\)
since an informal inspection of a derivation of some instance of
\(\beta\) shows us that we could derive any instance of it. For
replies to Horwich, see Armour-Garb 2004, 2010, 2011; Gupta 1993a,
1993b; and Soames 1999. For responses to Armour-Garb's attack on
Horwich 2001, see Oms 2019 and Cieśliński 2018.
### 4.6 Conservativeness
An ideal theory of truth will be both consistent (e.g., avoid the Liar
Paradox) and adequate (e.g., allow us to derive all the essential laws
of truth, such as those at issue in the Generalization Problem). Yet
it has recently been argued that even if deflationists can provide a
consistent theory of truth and avoid the GP, they still cannot provide
an adequate theory.
This argument turns on the notion of a conservative extension of a
theory. Informally, a conservative extension of a theory is one that
does not allow us to prove anything that could not be proved from the
original, unextended theory. More formally, and applied to theories of
truth, a truth theory, \(Tr\), is conservative over some theory \(T\)
formulated in language \(L\) if and only if for every sentence
\(\phi\) of \(L\) in which the truth predicate does not occur, if \(Tr
\cup T \vdash \phi\), then \(T \vdash \phi\) (where
'\(\vdash\)' represents *provability*). Certain
truth theories are conservative over arithmetic - e.g., theories
that implicitly define truth using only the instances of some version
of (ES) - and certain truth theories are not - e.g.,
Tarski's (1935 [1956], 1944) compositional theory. Specifically,
the addition of certain truth theories allows us to prove that
arithmetic is consistent, something that we cannot do if we are
confined to arithmetic itself.
It has been argued (a) that conservative truth theories are inadequate
and (b) that deflationists are committed to conservative truth
theories. (See Shapiro 1998 and Ketland 1999; Horsten 1995 provides an
earlier version of this argument.) We will explain the arguments for
(a) below but to get a flavor of the arguments for (b), consider
Shapiro's rhetorical question: "How thin can the notion of
arithmetic truth be, if by invoking it we can learn more about the
natural numbers?" Shapiro is surely right to press deflationists
on their frequent claims that truth is "thin" or
"insubstantial". It might also be a worry for
deflationists if any adequate truth theory allowed us to derive
non-logical truths, if they endorse the thesis that truth is merely a
"logical property". On the other hand, deflationists
themselves insist that truth is an expressively useful device, and so
they cannot be faulted for promoting a theory of truth that allows us
to say more about matters not involving truth.
To see an argument for (a), consider a Gödel sentence, \(G\),
formulated within the language of Peano Arithmetic (henceforth,
PA). \(G\) is not a theorem of PA if PA is consistent (cf. the
entry on
Gödel's incompleteness theorems).
But \(G\) becomes a theorem when PA is expanded by adding certain
plausible principles that appear to govern a truth predicate. Thus,
the resultant theory of arithmetical truth is strong enough to prove \(G\)
and appears therefore to be non-conservative over arithmetic. If, as
has been argued by a number of theorists, any adequate account of
truth will be non-conservative over a base theory, then deflationists
appear to be in trouble.
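The non-conservativeness at issue can be sketched as follows (the standard argument, compressed; "CT" here names PA plus compositional truth axioms, with induction extended to formulas containing the truth predicate):

```latex
% 1. By induction on the length of proofs, CT proves that every
%    PA-theorem is true (global reflection):
\mathrm{CT} \vdash \forall \varphi\,
    (\mathrm{Prov}_{\mathrm{PA}}(\varphi) \rightarrow T(\varphi))
% 2. The compositional axioms rule out the truth of '0 = 1':
\mathrm{CT} \vdash \neg T(\ulcorner 0 = 1 \urcorner)
% 3. So CT proves PA's consistency, which PA itself cannot prove
%    (by Gödel's second incompleteness theorem):
\mathrm{CT} \vdash \mathrm{Con}(\mathrm{PA}),
\qquad \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
```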
Understood in this way, the "Conservativeness Argument"
(henceforth, CA) is a variant of the objection considered in
Section 4.1, claiming that truth plays an explanatory role that
deflationism cannot accommodate. There are several deflationary
responses to the CA. Field (1999) argues that the worries that arise
from the claim that deflationists are in violation of explanatory
conservativeness are unfounded. He (ibid., 537) appeals to the
expressive role of the truth predicate and maintains that
deflationists are committed to a form of "explanatory
conservativeness" only insofar as there are no explanations in
which the truth predicate is not playing its generalizing role. As a
result, he (ibid.) notes that "any use of 'true' in
explanations which derives solely from its role as a device of
generalization should be perfectly acceptable". For responses to
Field, see Horsten 2011 (61) and Halbach 2011 (315-6).
Responding to the CA, Daniel Waxman (2017) identifies two readings of
'conservativeness', one semantic and the other syntactic,
which correspond to two conceptions of arithmetic. On the first
conception, arithmetic is understood *categorically* as given
by the standard model. On the second conception, arithmetic is
understood *axiomatically* and is captured by the acceptance of
some first-order theory, such as PA. Waxman argues that deflationism
can be conservative given either conception, so that the CA does not
go through.
Julien Murzi and Lorenzo Rossi (2018) argue that Waxman's
attempt at marrying deflationism with conservativeness - his
"conservative deflationism" - is unsuccessful. They
(ibid.) reject the adoption of this view on the assumption that
one's conception of arithmetic is axiomatic, claiming, in
effect, that a deflationist's commitment to a conservative
conception of truth is misguided (cf. Halbach 2011, Horsten 2011,
Cieśliński 2015, and Galinon 2015).
Jody Azzouni (1999) defends the "first-order
deflationist", viz., a deflationist who endorses what Waxman
(ibid.) calls "the axiomatic conception of arithmetic" and
whose subsequent understanding cannot rule out the eligibility of
non-standard models. Azzouni accepts the need to prove certain
'true'-involving generalizations, but he maintains that
there are some generalizations that are *about* truths that a
first-order deflationist need not prove. He further contends that if
one does extend her theory of truth in a way that allows her to
establish these generalizations, she should not expect her theory to
be conservative, nor should she continue describing it as a
*deflationary* view of truth. For a response to Azzouni
(ibid.), see Waxman (2017, 453).
In line with Field's response to the CA, Lavinia Picollo and
Thomas Schindler (2020) argue that the conservativeness constraint
imposed by Horsten 1995, Shapiro 1998, Ketland 1999, and others is not
a reasonable requirement to impose on deflationary accounts. They
contend that the insistence on conservativeness arises from making too
much of the metaphor of "insubstantiality" and that it
fails to see what the function of the truth predicate really amounts
to. Their leading idea is that, from a deflationist's
perspective, the logico-linguistic function of the truth predicate is
to simulate sentential and predicate quantification in a first-order
setting (cf. Horwich 1998a, 4, n. 1). They maintain that, for a
deflationist, in conjunction with first-order quantifiers, the truth
predicate has the same function as sentential and predicate
quantifiers. So, we should not expect the deflationist's truth
theory to conservatively extend its base theory.
### 4.7 Normativity
It is commonly said that our beliefs and assertions aim at truth, or
present things as being true, and that truth is therefore a
*norm* of assertion and belief. This putative fact about truth
and assertion in particular has been seen to suggest that deflationism
must be false (cf. Wright 1992 and Bar-On and Simmons 2007). However,
the felt incompatibility between normativity and deflationism is
difficult to make precise.
The first thing to note is that there is certainly a sense in which
deflationism is consistent with the idea that truth is a norm of
assertion. To illustrate this, notice (as we saw in examining
truth's putative explanatory role) that we can obtain an
intuitive understanding of this idea without mentioning truth at all,
so long as we focus on a particular case. Suppose that for whatever
reason Mary sincerely believes that snow is green, has good evidence
for this belief, and on the basis of this belief and evidence asserts
that snow is green. We might say that there is a norm of assertion
that implies that Mary is still open to criticism in this case. After
all, since snow is not green, there must be something
*incorrect* or *defective* about Mary's assertion
(and similarly for her belief). It is this incorrectness or
defectiveness that the idea that truth is a norm of assertion (and
belief) is trying to capture.
To arrive at a general statement of the norm that lies behind this
particular case, consider that here, what we recognize is
(13)
If someone asserts that snow is green when snow is not green, then
that assertion is open to criticism.
To generalize on this, what we want to do is generalize on the
positions occupied by 'snow is green' and express
something along the lines of
(14)
For all \(p\), if someone asserts that \(p\) when \({\sim}p\),
then that assertion is open to criticism.
The problem of providing a general statement like (14) is the same
issue first raised in Section 1.3, and the solution by now should be
familiar. To state the norm in general we would need to be able to do
something we seem unable to do in ordinary language, namely, employ
sentential variables and quantifiers for them. But this is where the
notion of truth comes in, because (ES) gives us its
contraposition,
(ES-con)
\({\sim}p\) iff \(\langle p\rangle\) is not true.
Reading '\(\langle p\rangle\)' as 'that
\(p\)', we can reformulate (14) as
(15)
For all \(p\), if someone asserts that \(p\) when that \(p\) is
not true, then that assertion is open to criticism.
But since the variable '\(p\)' occurs only in nominalized
contexts in (15), we can replace it with an object variable,
'\(x\)', and bind this with an ordinary objectual
quantifier, to get
(16)
For all \(x\), if someone asserts \(x\), and \(x\) is not true,
then that assertion is open to criticism.
Or, to put it as some philosophers might:
(N)
Truth is a norm of assertion.
In short, then, deflationists need not deny that we appeal to the
notion of truth to *express* a norm of assertion; on the
contrary, the concept of truth seems required to state that very
generalization.
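The passage from (14) to (16) can be displayed compactly as follows (a schematic restatement of the steps above, with illustrative predicate letters):

```latex
% (14): sentential quantification (unavailable in ordinary language)
\forall p\,[\mathrm{Asserts}(s, p) \wedge {\sim}p
  \rightarrow \mathrm{OpenToCriticism}]
% (15): rewrite '~p' via (ES-con) as 'that p is not true'
\forall p\,[\mathrm{Asserts}(s, \langle p\rangle) \wedge
  {\sim}\mathrm{True}(\langle p\rangle)
  \rightarrow \mathrm{OpenToCriticism}]
% (16): 'p' now occurs only in nominalized contexts, so trade it
% for an objectual variable bound by an ordinary quantifier
\forall x\,[\mathrm{Asserts}(s, x) \wedge {\sim}\mathrm{True}(x)
  \rightarrow \mathrm{OpenToCriticism}]
```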
If deflationists can account for the fact that we must apply the
notion of truth to express a norm of assertion, then does normativity
pose any problem for deflationism? Crispin Wright (1992, 15-23)
argues that it does, claiming that deflationism is inherently unstable
because there is a distinctive norm for assertoric practice that goes
beyond the norms for warranted assertibility - that the norms of
truth and warranted assertibility are potentially extensionally
divergent. This separate norm of truth, he claims, is already implicit
just in acceptance of the instances of (ES). He points out that not
having warrant to assert some sentence does not yield having warrant
to assert its negation. However, because (ES) gives us (ES-con), we
have, in each instance, an inference (going from right to left) from
the sentence mentioned not being true to the negation of the sentence.
But the instance of (ES) for the negation of any sentence,
(ES-neg)
\(\langle{\sim} p \rangle\) is true iff \({\sim}p,\)
takes us (again, going from right to left) from the negated sentence
to an ascription of truth to that negated sentence. Thus, some
sentence not being true *does* yield that the negation of the
sentence is true, in contrast with warranted assertibility. This
difference, Wright (ibid., 18) claims, reveals that, by
deflationism's own lights, the truth predicate expresses a
distinct norm governing assertion, which is incompatible with the
deflationary contention "that 'true' is only
grammatically a predicate whose role is not to attribute a substantial
characteristic".
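Wright's contrast can be set out schematically (with \(W\) an illustrative warrant predicate, not notation from Wright's text):

```latex
% Warranted assertibility: lacking warrant for a sentence does not
% yield warrant for its negation
{\sim}W(\langle p\rangle) \;\not\Rightarrow\; W(\langle{\sim}p\rangle)
% Truth: the analogous inference does go through
{\sim}\mathrm{True}(\langle p\rangle) \;\Rightarrow\; {\sim}p
  % (ES-con), right to left
{\sim}p \;\Rightarrow\; \mathrm{True}(\langle{\sim}p\rangle)
  % (ES-neg), right to left
\therefore\;\; {\sim}\mathrm{True}(\langle p\rangle)
  \;\Rightarrow\; \mathrm{True}(\langle{\sim}p\rangle)
```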
Rejecting Wright's argument for the instability of deflationism,
Ian Rumfitt (1995, 103) notes that if we add the ideas of denying
something and of having warrant for doing so
("anti-warrant") to Wright's characterization of
deflationism, this would make 'is not true' simply a
device of rejection governed by the norm that "[t]he predicate
'is not true' may be applied to any sentence for which one
has an anti-warrant". But then truth-talk's behavior with
negation would not have to be seen as indicating that it marks a
distinct norm beyond justified assertibility *and justifiable
deniability*, which would be perfectly compatible with
deflationism.
Field (1994a, 264-5) offers a deflationary response to
Wright's challenge (as well as to a similar objection regarding
normativity from Putnam (1983a, 279-80)), pointing again to the
generalizing role of the truth predicate in expressing such normative
desires as the desire to utter only true sentences or to have only
true beliefs.
Field agrees with Wright that truth-talk expresses a norm beyond
warranted assertibility, but he (1994a, 265) also maintains that
"there is no difficulty in desiring that all one's beliefs
be disquotationally true; and not only can each of us desire such
things, there can be a general practice of badgering others into
having such desires". Horwich (1996, 879-80) argues that
Wright's rejection of deflationism does not follow from showing
that one can use the truth predicate to express a norm beyond
warranted assertibility. Like Field, Horwich claims that Wright missed
the point that, in the expression of such a norm, the truth predicate
is just playing its generalizing role. For other objections to
deflationism based on truth's normative role, see Price 1998,
2003 and McGrath 2003.
### 4.8 Inflationist Deflationism?
Another objection to deflationism begins by drawing attention to a
little-known doctrine about truth that G.E. Moore held at the
beginning of the 20th Century. Richard Cartwright (1987, 73) describes
the view as follows: "a true proposition is one that has a
certain simple unanalyzable property, and a false proposition is one
that lacks the property". This doctrine about truth is to be
understood as the analogue of the doctrine that Moore held about
goodness, namely that goodness is a simple, unanalyzable quality.
The potential problem that this Moorean view about truth presents for
deflationism might best be expressed in the form of a question: What
is the difference between the Moorean view and deflationism? One might
reply that, according to deflationary theories, the concept of truth
has an important logical role, i.e., expressing certain
generalizations, whereas the concept of goodness does not. However,
this doesn't really answer our question. For one thing, it
isn't clear that Moore's notion of truth does not also
capture generalizations, since it too will yield all of the instances
of (ES). For another, the idea that the concept of truth plays an
important logical role doesn't distinguish the metaphysics of
deflationary conceptions from the metaphysics of the Moorean view, and
it is the metaphysics of the matter that the present objection really
brings into focus. Alternatively, one might suggest that the
distinction between truth according to Moore's view and
deflationary conceptions of truth is the distinction between having a
simple unanalyzable nature, and not having any underlying nature at
all. But what is that distinction? It is certainly not obvious that
there is any distinction between having a nature about which nothing
can be said and having no nature at all.
How might a deflationist respond to this alleged problem? The key move
will be to focus on the property of being true. For the Moorean, this
property is a simple unanalyzable one. But deflationists need not be
committed to this. As we have seen, some deflationists think that
there is no truth property at all. And even among deflationists who
accept that there is some insubstantial truth property, it is not
clear that this is the sort of property that the Moorean has in mind.
To say that a property is unanalyzable suggests that the property is a
fundamental property. One might understand this in something like the
sense that Lewis proposes, i.e., as a property that is sparse and
perfectly natural. Or one might understand a fundamental property as
one that is groundable but not grounded in anything. But deflationists
need not understand a purported property of being true in either of
these ways. As noted in Section 1.2, they may think of it as an
abundant property rather than a sparse one, or as one that is
ungroundable. In this way, there are options available for
deflationists who want to distinguish themselves from the Moorean view
of truth.
### 4.9 Truth and Meaning
The final objection that we will consider concerns the relation
between deflationism and theories of meaning, i.e., theories about how
sentences get their meanings.
The orthodox approach to this last question appeals to the notion of
truth, suggesting, roughly, that a sentence \(S\) means that \(p\)
just in case \(S\) is true if and only if \(p\). This approach to
meaning, known widely as "truth-conditional semantics", is
historically associated with Donald Davidson's (1967) thesis that
the "(T)-sentences" (i.e., the
instances of the (T)-schema) generated by a Tarski-truth-definition
for a language give the meanings of the sentences they mention, by
specifying their truth conditions. In contemporary linguistics, a
prominent approach to sentence meaning explains it in terms of
propositions, understood (following Lewis 1970 and Stalnaker 1970) as
sets of possible worlds, which amount to encapsulations of truth
conditions (Elbourne 2011, 50-51).
This has led a number of philosophers to argue that, on pain of
circularity, deflationism cannot be combined with theories of meaning
that make use of the notion of truth to explain meaning - in
other words, that deflationism is incompatible with truth-conditional
theories of meaning. This assessment of deflationism stems from
Dummett's (1959 [1978, 7]) claim that the (T)-sentences (or the
instances of any other version of (ES)) cannot both tell us what the
sentences they nominalize mean and give us an account of
'true'. As Horwich (1998a, 68) puts it, "we would be
faced with something like a single equation and two unknowns"
(cf. Davidson 1990, 1996; Horwich 1998b; Kalderon 1999; Collins
2002).
The first thing to say regarding this objection is that, even if
deflationism were inconsistent with truth-conditional theories of
meaning, this would not *automatically* constitute an objection
to deflationism. After all, there are alternative theories of meaning
available, and most deflationists reject truth-conditional semantics
in favor of some "truth-free" alternative. The main
alternatives available include Brandom's (1994) inferentialism,
developed following Wilfrid Sellars (1974, 1979); Horwich's
(1998b) use-theory of meaning, inspired
by Wittgenstein 1953; and Field's (1994a, 2001b)
computational-role + indication-relations account. There is, however,
a lot of work to be done before any of these theories can be regarded
as adequate. Devitt 2001 makes this point in rejecting all of these
alternative approaches to meaning, claiming that the only viable
approach is (referential/)truth-conditional semantics.
While the orthodoxy of truth-conditional accounts of meaning adds
weight to this challenge to deflationism, it is still, contra Devitt,
an open question whether any non-truth-conditional account of meaning
will turn out to be adequate. Moreover, even if there is no viable
"truth-free" account, several philosophers (e.g., Bar-On
*et al*. 2005) have argued that deflationism has no more
problem with a truth-conditional theory of meaning than any other
approach to truth does. Others have gone further, arguing positively
that there is no incompatibility between deflationism and
truth-conditional theories. Alexis Burgess (2011, 407-410)
argues that at least some versions of deflationism are compatible with
mainstream model-theoretic semantics in linguistics (understood as
providing explanations of truth conditions) and the recognition of
"the manifest power and progress of truth-conditional
semantics". Claire Horisk (2008) argues that
"circularity" arguments for incompatibility (or, as she
prefers, "immiscibility") fail. The only circularity
involved here, she claims, is a harmless kind, so long as one is (like
some proponents of truth-conditional semantics) offering a reciprocal,
rather than a reductive, analysis of meaning. Mark Lance (1997,
186-7) claims that any version of deflationism based on an
anaphoric reading of 'is true' (as in Brandom 1988, 1994)
is independent of, and thus compatible with, any underlying account of
meaning, including a truth-conditional one. Williams 1999 likewise
claims that the role of truth in meaning theories for particular
languages is just the expressive role that deflationists claim
exhausts the notion of truth's function. Of course, all of these
claims have been contested, but they seem to show that the thesis that
deflationism is inconsistent with a truth-conditional theory of
meaning is neither a foregone conclusion nor necessarily an objection
to deflationism, even if the thesis is correct. (For further
discussion of this issue, see Gupta 1993b, Field 2005, Gupta and
Martinez-Fernandez 2005, Patterson 2005, and Horisk 2007.) That said,
one outstanding question that remains regarding the viability of any
deflationary approach to truth is whether that approach can be squared
with an adequate account of meaning.
## 1. Definition and Preliminary Exposition
Declarative sentences seem to take truth-values, for we say things
like
(1)
"Socrates is wise" is true.
But sentences are apparently not the only bearers of truth-values: for
we also seem to allow that what such sentences express, or mean, may
be true or false, saying such things as
(2)
"Socrates is wise" means *that Socrates is
wise*,[1]
and
(3)
*That Socrates is wise* is true
or
(4)
It is true *that Socrates is wise*.
If, provisionally, we call the things that declarative sentences
express, or mean, their *contents*--again provisionally,
these will be such things as *that Socrates is wise*--then
the identity theory of truth, in its most general form, states that
(cf. Baldwin 1991: 35):
(5)
A declarative sentence's content is true just if that
content is (identical with) a fact.
A fact is here to be thought of as, very generally, *a way things
are*, or *a way the world is*. On this approach, the
identity theory secures an intimate connection between language (what
language expresses) and world. Of course there would in principle be
theoretical room for a view that identified not the content of, say,
the true declarative sentence "Socrates is wise"--let
us assume from now on that this sentence is true--with the fact
*that Socrates is wise*, but rather that sentence itself. But
this is not a version of the theory that anyone has ever advanced, nor
does it appear that it would be plausible to do so (see Candlish
1999b: 200-2; Kunne 2003: 6). The early Wittgenstein does
regard sentences as being themselves facts, but they are not identical
with the facts that make them true.
Alternatively, and using a different locution, one might say that, to
continue with the same example,
(6)
*That Socrates is wise* is true just if *that Socrates
is wise* is the case.
The idea here is that (6) makes a connection between language and
reality: on the left-hand side we have something expressed by a piece
of language, and on the right-hand side we allude to a bit of reality.
Now (6) might look truistic, and that status has indeed been claimed
for the identity theory, at least in one of its manifestations. John
McDowell has argued that what he calls true "thinkables"
are identical with facts (1996: 27-8, 179-80). Thinkables
are things like *that Socrates is wise* regarded as possible
objects of thought. For we can think *that Socrates is wise*;
and it can also be the case *that Socrates is wise*. So the
idea is that what we can think can also be (identical with) what is
the case. That identity, McDowell claims, is truistic. On this
approach, one might prefer one's identity theory to take the
form (cf. Hornsby 1997: 2):
(7)
All true thinkables are (identical with) facts.
On this approach the identity theory explicitly aims to secure an
intimate connection between mind (what we think) and world.
A point which has perhaps been obscured in the literature on this
topic, but which should be noticed, is that (7) asserts a relation of
subordination: it says that true thinkables are a (proper or improper)
subset of facts; it implicitly allows that there might be facts that
are not identical with true thinkables. So (7) is not to be confounded
with its converse,
(8)
All facts are (identical with) true thinkables,
which asserts the opposite subordination, and says that facts are a
(proper or improper) subset of true thinkables, implicitly allowing,
this time, that there might be true thinkables that are not identical
with facts. (8) is therefore distinct from (7), and if (7) is
controversial, (8) is equally or more so, but for reasons that are at
least in part different. (8) denies the existence of facts that cannot
be grasped in thought. But many philosophers hold it to be evident
that there are, or at least could be, such facts--perhaps certain
facts involving indefinable real numbers, for example, or in some
other way going beyond the powers of human thought. So (8) could be
false; its status remains to be established; it can hardly be regarded
as truistic. Accordingly, one might expect that an identity theorist
who wished to affirm (7), and certainly anyone who wanted to say that
(7) (or (6)) was truistic, would--at least *qua* identity
theorist--steer clear of (8), and leave its status *sub
judice*. In fact, however, a good number of identity theorists,
both historical and contemporary, incorporate (8) as well as--or
even instead of--(7) into their statement of the theory. Richard
Cartwright, who published the first modern discussion of the theory in
1987, wrote that if one were formulating the theory, it would say
"that every true proposition is a fact and every fact a true
proposition" (1987: 74). McDowell states that
>
>
> true thinkables already belong just as much to the world as to minds
> [i.e., (7)], and things that are the case already belong just as much
> to minds as to the world [i.e., (8)]. It should not even seem that we
> need to choose a direction in which to read the claim of identity.
> (2005: 84)
>
>
>
Jennifer Hornsby takes the theory to state that true thinkables and
facts *coincide* (1997: 2, 9, 17, 20)--they are the same
set--so that she in effect identifies that theory with the
conjunction of (7) and (8), as also, in effect, does Julian Dodd
(2008a: *passim*). Now, (8) is certainly an interesting thesis
that merits much more consideration than it has hitherto received (at
least in the recent philosophical literature), and, as indicated, some
expositions of the identity theory have as much invested in (8) as in
(5) or (7): on this point see further §2 below. Nevertheless, it
will make for clarity of discussion if we
associate the identity theory of truth, more narrowly, with something
along the lines of (5) or (7), and omit (8) from this particular
discussion.[2]
That will be the policy here.
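The subordination point can be put in set-theoretic shorthand (a restatement for clarity, not notation used by the authors cited). Writing \(T\) for the set of true thinkables and \(F\) for the set of facts:

```latex
(7)\colon\; T \subseteq F \qquad
(8)\colon\; F \subseteq T \qquad
(7) \wedge (8)\colon\; T = F
```

On this shorthand, the Hornsby-Dodd "coincidence" reading of the identity theory amounts to the conjunction of (7) and (8), whereas the discussion here tracks (7) alone.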
Whether or not (6) is truistic, both (5) and (7) involve technical or
semi-technical vocabulary; moreover, they have been advanced as moves
in a technical debate, namely one concerning the viability of the
correspondence theory of truth. For these reasons it seems difficult
to regard them as truisms (see Dodd 2008a: 179). What (5) and (7)
mean, and which of them one will prefer as one's statement of
the identity theory of truth, if one is favorably disposed to that
theory--one may of course be happy with both--will depend,
among other things, on what exactly one thinks about the nature of
such entities as *that Socrates is wise*. In order to get clear
on this point, discussion of the identity theory has naturally been
conducted in the context of the Fregean semantical hierarchy, which
distinguishes between levels of language, sense, and reference. Frege
recognized what he called "thoughts" (*Gedanken*)
at the level of sense corresponding to (presented by) declarative
sentences at the level of language. McDowell's thinkables are
meant to be Fregean thoughts: the change of terminology is intended to
stress the fact that these entities are not thoughts in the sense of
dated and perhaps spatially located individual occurrences (thinking
events), but are abstract contents that are at least in principle
available to be grasped by different thinkers at different times and
places. So a Fregean identity theory of truth would regard both such
entities as *that Socrates is wise* and, correlatively, facts
as sense-level entities: this kind of identity theory will then state
that true such entities are identical with facts. This approach will
naturally favor (7) as its expression of the identity theory.
By contrast with Frege, Russell abjured the level of sense and (at
least around 1903-4) recognized what, following Moore, he called
"propositions" as worldly entities composed of objects and
properties. A modern Russellian approach might adopt these
propositions--or something like them: the details of
Russell's own conception are quite vague--as the referents
of declarative sentences, and identity theorists who followed this
line might prefer to take a particular reading of (5) as their slogan.
So these Russellians would affirm something along the lines of:
(9)
All true Russellian propositions are identical with facts (at the
level of reference),
by contrast with the Fregean
(10)
All true Fregean thoughts are identical with facts (at the level
of sense).
This way of formulating the relevant identity claims has the advantage
of suggesting that it would, at least in principle, be open to a
theorist to combine (9) and (10) in a hybrid position that (i)
departed from Russell and followed Frege by admitting *both* a
level of Fregean sense *and* one of reference, and also, having
admitted both levels to the semantic hierarchy, (ii) *both*
located Fregean thoughts at the level of sense *and* located
Russellian propositions at the level of reference. Sense being mode of
presentation of reference, the idea would be that declarative
sentences refer, *via* Fregean thoughts, to Russellian
propositions (for this disposition, see Gaskin 2006: 203-20;
2008: 56-127). So someone adopting this hybrid approach would
affirm both (9) and (10). Of course, the facts mentioned in (9) would
be *categorially different* from the facts mentioned in (10),
and one might choose to avoid confusion by distinguishing them
terminologically, and perhaps also by privileging one set of facts,
ontologically, over the other. If one wanted to follow this
privileging strategy, one might say, for instance, that only
reference-level facts were *genuine* facts, the *relata*
of the identity relation at the level of sense being merely
*fact-like entities*, not *bona fide* facts. That would
be to give the combination of (9) and (10) a Russellian spin.
Alternatively, someone who took the hybrid line might prefer to give
it a Fregean spin, saying that the entities with which true Fregean
thoughts were identical were the genuine facts, and that the
corresponding entities at the level of reference that true Russellian
propositions were identical with were not facts as such, but fact-like
correlates of the genuine facts. Without more detail, of course, these
privileging strategies leave the status of the entities they are
treating as merely fact-*like* unclear; and, as far as the
Fregean version of the identity theory goes, commentators who identify
facts with sense-level Fregean thoughts usually, as we shall see,
repudiate reference-level Russellian propositions altogether, rather
than merely downgrading their ontological status, and so affirm (10)
but reject (9). We shall return to these issues in §4 below.
## 2. Historical Background
The expression "the identity theory of truth" was first
used--or, at any rate, first used in the relevant sense--by
Stewart Candlish in an article on F. H. Bradley published in 1989. But
the general idea of the theory had been in the air during the 1980s:
for example, in a discussion first published in 1985, concerning John
Mackie's theory of truth, McDowell criticized that theory for
making
>
>
> truth consist in a relation of correspondence (rather than identity)
> between how things are and how things are represented as being. (1985
> [1998: 137 n. 21])
>
>
>
The implication is that identity would be the right way to conceive
the given relation. And versions of the identity theory go back at
least to Bradley (see, e.g., Bradley 1914: 112-13; for further
discussion and references, see Candlish 1989; 1995; 1999b:
209-12; T. Baldwin 1991: 36-40), and to the founding
fathers of the analytic tradition (Sullivan 2005: 56-7 n. 4).
The theory can be found in G. E. Moore's "The Nature of
Judgment" (1899), and in the entry he wrote on
"Truth" for J. Baldwin's *Dictionary of
Philosophy and Psychology* (1902-3; reprinted Moore 1993:
4-8, 20-1; see T. Baldwin 1991: 40-3). Russell
embraced the identity theory at least during the period of his 1904
discussions of Meinong (see, e.g., 1973: 75), possibly also in his
*The Principles of Mathematics* of 1903, and for a few years
after these publications as well (see T. Baldwin 1991: 44-8;
Candlish 1999a: 234; 1999b: 206-9). Frege has a statement of the
theory in his 1919 essay "The Thought", and may have held
it earlier (Frege 1918-19: 74 [1977: 25]; see Hornsby 1997:
4-6; Milne 2010: 467-8).
Wittgenstein's *Tractatus* (1922) is usually held to
propound a correspondence rather than an identity theory of truth;
however this is questionable. In the *Tractatus*, declarative
sentences (*Sätze*) are said to be facts (arrangements of
names), and states of affairs (*Sachlagen*,
*Sachverhalte*, *Tatsachen*) are also said to be facts
(arrangements of objects). If the *Tractatus* is taken to put
forward a correspondence theory of truth, then presumably the idea is
that a sentence will be true just if there is an appropriate relation
of correspondence (an isomorphism) between sentence and state of
affairs. However, the problem with this interpretation is that, in the
*Tractatus*, a relation of isomorphism between a sentence and
reality is generally conceived as a condition of the
*meaningfulness* of that sentence, not specifically of its
*truth*. False sentences, as well as true, are isomorphic with
states of affairs--only, in their case the states of affairs do
not obtain. For Wittgenstein, states of affairs may either obtain or
fail to obtain--both possibilities are, in general, available to
them.[3]
Correlatively, it has been suggested that the *Tractatus*
contains two different conceptions of fact, a factive and a
non-factive one. According to the former conception, facts necessarily
obtain or are the case; according to the latter, facts may fail to
obtain or not be the case. This non-factive conception has been
discerned at *Tractatus* 1.2-1.21, and at 2.1 (see
Johnston 2013: 382). Given that, in the *Tractatus*, states of
affairs (and perhaps facts) have two poles--obtaining or being
the case, and non-obtaining or not being the case--it seems to
follow that, while Wittgenstein is committed to a correspondence
theory of *meaning*, his theory of *truth* must be (some
version of) an identity theory, along the lines of
>
>
> A declarative sentence is true just if what it is semantically
> correlated with is identical with an obtaining state of affairs (a
> factive fact).
>
>
>
(Identity theorists normally presuppose the factive conception of
facts, so that "factive" is redundant in the phrase
"factive facts", and that is the policy which will be
followed here.) Though a bipolar conception of facts (if indeed
Wittgenstein has it) may seem odd, the bipolar conception of states of
affairs (which, it is generally agreed, he does have) seems quite
natural: here the identity theorist says that a true proposition is
identical with an obtaining state of affairs (see Candlish &
Damnjanovic 2018: 271-2).
Peter Sullivan has suggested a different way of imputing an identity
theory to the Tractarian Wittgenstein (2005: 58-9). His idea is
that Wittgenstein's simple objects are to be identified with
Fregean senses, and that in effect the *Tractatus* contains an
identity theory along the lines of (7) or (10). Sullivan's
ground for treating Tractarian objects as senses is that, like
*bona fide* Fregean senses, they are *transparent*: they
cannot be grasped in different ways. An apparent difficulty with this
view is that there is plausibly more to Fregean sense than just the
property of transparency: after all, Russell also attached the
property of transparency to his basic objects, but it has not been
suggested that Russellian basic objects are really senses, and the
suggestion would seem to have little going for it (partly, though not
only, because Russell himself disavowed the whole idea of Fregean
sense). The orthodox position, which will be presupposed here, is that
the Tractarian Wittgenstein, like Russell, finds no use for a level of
Fregean sense, so that his semantical hierarchy consists exclusively
of levels of language and reference, with nothing of a mediatory or
similar nature located between these levels. (Wittgenstein does appeal
to the *concepts* of sense and reference in the
*Tractatus*, but it is generally agreed that they do not figure
in a Fregean way, according to which both names and sentences, for
example, have both sense and reference; for Wittgenstein, by contrast,
sentences have sense but not reference, whereas names have reference
but not sense.)
## 3. Motivation
What motivates the identity theory of truth? It can be viewed as a
response to difficulties that seem to accrue to at least some versions
of the correspondence theory (cf. Dodd 2008a: 120, 124). The
correspondence theory of truth holds that truth consists in a relation
of correspondence between something linguistic or quasi-linguistic, on
the one hand, and something worldly on the other. Generally, the items
on the worldly end of the relation are taken to be facts or
(obtaining) states of affairs. For many purposes these two latter
kinds of entity (facts, obtaining states of affairs) are assimilated
to one another, and that strategy will be followed here. The exact
nature of the correspondence theory will then depend on what the other
*relatum* is taken to be. The items mentioned so far make
available three distinct versions of the correspondence theory,
depending on whether this *relatum* is taken to consist of
declarative sentences, Fregean thoughts, or Russellian propositions.
Modern correspondence theorists make a distinction between
truth-*bearers*, which would typically fall under one of these
three classifications, and
truth-*makers*,[4]
the worldly entities making truth-bearers true, when they are true.
If these latter entities are facts, then true declarative sentences,
Fregean thoughts, or Russellian propositions--whichever of these
one selects as the relata of the correspondence relation on the
language side of the language-world divide--correspond to
facts in the sense that facts are what make those sentences, thoughts,
or propositions true, when they are true. (Henceforth we shall
normally speak simply of thoughts and propositions, understanding
these to be Fregean thoughts and Russellian propositions respectively,
unless otherwise specified.)
That, according to the correspondence theorist (and the identity
theorist can agree so far), immediately gives us a constraint on the
shape of worldly facts. Take our sample sentence "Socrates is
wise", and recall that this sentence is here assumed to be true.
At the level of reference we encounter the object Socrates and
(assuming realism about
properties)[5]
the property of wisdom. Both of these may be taken to be entities in
the world, but it is plausible that neither amounts to a fact: neither
amounts to a plausible truth-maker for the sentence "Socrates is
wise", or for its expressed thought, or for its expressed
proposition. That is because the man Socrates, just as such, and the
property of wisdom, just as such, are not, so the argument goes,
propositionally structured, either jointly or severally, and so do not
amount to enough to make it true that Socrates is wise (cf. D.
Armstrong 1997: 115-16; Dodd 2008a: 7; Hofweber 2016: 288). Even
if we add in further universals, such as the relation of
instantiation, and indeed the instantiation of instantiation to any
degree, the basic point seems to be unaffected. In fact it can
plausibly be maintained (although some commentators disagree; Merricks
2007: ch. 1, *passim*, and pp. 82, 117, 168; Asay 2013:
63-4; Jago 2018: *passim*, e.g., pp. 73, 84, 185, 218,
250, though cf. p. 161) that the man Socrates, just as such, is not
even competent to make it true that Socrates exists; for that we need
the *existence* of the man Socrates. Hence, it would appear
that, if there are to be truth-makers in the world, they will have to
be structured, syntactically or quasi-syntactically, in the same
general way as declarative sentences, thoughts, and propositions. For
convenience we can refer to structure in this general sense as
"propositional structure": the point then is that neither
Socrates, nor the property of wisdom, nor (if we want to adduce it)
the relation of instantiation is, just as such, propositionally
structured. Following this line of argument through, we reach the
conclusion that nothing short of full-blown, propositionally
structured entities like the fact *that Socrates is wise* will
be competent to make the sentence "Socrates is wise", or
the thought or proposition expressed by that sentence, true. (A
question that arises here is whether tropes might be able to provide a
"thinner" alternative to such ontologically
"rich" entities as the fact *that Socrates is
wise*. One problem that seems to confront any such strategy is
that of making the proposed alternative a genuine one, that is, of
construing the relevant tropes in such a way that they do not simply
collapse into, or ontologically depend on, entities of the relatively
rich form *that Socrates is wise*. For discussion see Dodd
2008a: 7-9.)
The question facing the correspondence theorist is now: if such
propositionally structured entities are truth-makers, are they
truth-makers for sentences, thoughts, or propositions? It is at this
point that the identity theorist finds the correspondence theory
unsatisfactory. Consider first the suggestion that the worldly fact
*that Socrates is wise* is the truth-maker for the
reference-level proposition *that Socrates is wise* (see, e.g.,
Jago 2018: 72-3, and *passim*). There surely are such
facts as the fact *that Socrates is wise*: we talk about such
things all the time. The problem would seem to be not with the
existence of such facts, but rather with the relation of
correspondence which is said by the version of the correspondence
theory that we are currently considering to obtain between the fact
*that Socrates is wise* and the proposition *that Socrates
is wise*. As emerges from this way of expressing the difficulty,
there seems to be no linguistic difference between the way we talk
about propositions and the way we talk about facts, when these
entities are specified by "that" clauses. That suggests
that facts just *are* true propositions. If that is right, then
the relation between facts and true propositions is not one of
*correspondence*--which, as Frege famously observed (Frege
1918-19: 60 [1977: 3]; cf. Künne 2003: 8; Milne 2010:
467-8), implies the distinctness of the
*relata*--but *identity*.
This line of argument can be strengthened by noting the following
point about explanation. Correspondence theorists have typically
wanted the relation of correspondence to *explain* truth: they
have usually wanted to say that it is *because* the proposition
*that Socrates is wise* corresponds to a fact that it is true,
and *because* the proposition *that Socrates is
foolish*--or rather: *It is not the case that Socrates is
wise* (after all, his merely being foolish is not enough to
guarantee that he is not wise, for he might, like James I and VI, be
both wise and foolish)--does not correspond to a fact that it is
false. But the distance between the true proposition *that Socrates
is wise* and the fact *that Socrates is wise* seems to be
too small to provide for explanatory leverage. Indeed the identity
theorist's claim is that there is no distance at all. Suppose we
ask: Why is the proposition *that Socrates is wise* true? If we
reply by saying that it is true because it is a fact *that Socrates
is wise*, we seem to have explained nothing, but merely repeated
ourselves (cf. Strawson 1971: 197; Anscombe 2000: 8; Rasmussen 2014:
39-43). So correspondence apparently gives way to identity as
the relation which must hold or fail to hold between a proposition and
a state of affairs if the proposition is to be true or false: the
proposition is true just if it is identical with an obtaining state of
affairs and false if it is not (cf. Horwich 1998: 106). And it would
seem that, if the identity theorist is right about this, the
explanatory pretensions will have to be abandoned: for while it will
be correct to say that a proposition is true just if it is identical
with a fact, false otherwise, it is hard to see that much of substance
has thereby been said about truth (cf. Hornsby 1997: 2; Dodd 2008a:
135).
It might be replied here that there are circumstances in which we
tolerate statements of the form "*A* because
*B*" when an appropriate identity--perhaps even
identity of sense, or reference, or both--obtains between
"*A*" and "*B*". For example, we say
things like "He is your first cousin because he is a child of a
sibling of one of your parents" (Künne 2003: 155). But here
it is plausible that there is a definitional connection between
left-hand side and right-hand side, which seems not to hold of
> The proposition *that Socrates is wise* is true because it is a
> fact *that Socrates is wise*.
In the latter case there is surely no question of definition; rather,
we are supposed, according to the correspondence theorist, to have an
example of metaphysical explanation, and that is just what, according
to the identity theorist, we do not have. After all, the identity
theorist will insist, it seems obvious that the relation, whatever it
is, between the proposition *that Socrates is wise* and the
fact *that Socrates is wise* must, given that the proposition
is true, be an extremely close one: what could this relation be? If
the identity theorist is right that the relation cannot be one of
metaphysical explanation (in either direction), then it looks as
though it will be hard to resist the insinuation of the linguistic
data that the relation is one of identity.
It is for this reason that identity theorists sometimes insist that
their position should not be defined in terms of an identity between
truth-bearer and truth-maker: that way of expressing the theory looks
too much in thrall to correspondence theorists' talk (cf.
Candlish 1999b: 200-1, 213). For the identity theorist, to speak
of both truth-makers and truth-bearers would imply that the things
allegedly doing the truth-making were distinct from the things that
were made true. But, since in the identity theorist's view there
are no truth-makers distinct from truth-bearers, if the latter are
conceived as propositions, and since nothing can make itself true, it
follows that there are no truth-makers *simpliciter*, only
truth-bearers. It seems to follow, too, that it would be ill-advised
to attack the identity theory by pointing out that some (or all)
truths lack truth-makers (so Merricks 2007: 181): so long as truths
are taken to be propositions, that is exactly what identity theorists
themselves say. From the identity theorist's point of view,
truth-maker theory looks very much like an exercise in splitting the
level of reference in half and then finding a bogus match between the
two halves (see McDowell 1998: 137 n. 21; Gaskin 2006: 203; 2008:
119-27). For example, when David Armstrong remarks that
> What is needed is something in the world which ensures that *a*
> is *F*, some truth-maker or ontological ground for
> *a*'s being *F*. What can this be except the state of
> affairs of *a*'s being *F*? (1991: 190)
the identity theorist is likely to retort that *a*'s being
*F*, which according to Armstrong "ensures" that
*a* is *F*,*just is* the entity (whatever it is)
*that a is F*. The identity theorist maps conceptual
connections that we draw between the notions of proposition, truth,
falsity, state of affairs, and fact. These connections look trivial,
when spelt out--of course, an identity theorist will counter that
to go further would be to fall into error--so that to speak of an
identity *theory* can readily appear too grand (McDowell 2005:
83; 2007: 352. But cf. David 2002: 126). So much for the thesis that
facts are truth-makers and *propositions* truth-bearers; an
exactly parallel argument applies to the version of the correspondence
theory that treats facts as truth-makers and *thoughts* as
truth-bearers.
Consider now the suggestion that obtaining states of affairs, as the
correspondence theorist conceives them, make *declarative
sentences* (as opposed to propositions) true (cf. Horwich 1998:
106-7). In this case there appears to be no threat of triviality
of the sort that apparently plagued the previous version of the
correspondence theory, because states of affairs like *that
Socrates is wise* are genuinely distinct from linguistic items
such as the sentence "Socrates is wise". To that extent
friends of the identity theory need not jib at the suggestion that
such sentences have worldly truth-makers, if that is how the relation
of correspondence is being glossed. But they might question the
appropriateness of the gloss. For, they might point out, it does not
seem possible, without falsification, to draw detailed links between
sentences and bits of the world. After all, different sentences in the
same or different languages can "correspond" to the same
bit of the world, and these different sentences might have very
different (numbers of) components. The English sentence "There
are cows" contains three words: are there then three bits in the
world corresponding to this sentence, and making it true? (cf. Neale
2001: 177). The sentence "Cows
exist" contains only two words, but would not the correspondence
theorist want to say that it was made true by the same chunk of
reality? And when we take other languages into account, there seems in
principle to be no reason to privilege any particular number and say
that a sentence corresponding to the relevant segment of reality must
contain *that* number of words: why might there not, in
principle, be sentences of actual or possible languages such that, for
any *n* ≥ 1, there existed a sentence comprising *n*
words and meaning the same as the English "There are
cows"? (In fact, is English not already such a language? Just
prefix and then iterate *ad lib.* a vacuous operator like
"Really".)
In a nutshell, then, the identity theorist's case against the
correspondence theory is that, when the truth-making relation is
conceived as originating in a worldly fact (or similar) and having as
its other relatum a true *sentence*, the claim that this
relation is one of correspondence cannot be made out; if, on the other
hand, the relevant relation targets a *proposition* (or
thought), then that relation must be held to be one of identity, not
correspondence.
## 4. Identity, Sense, and Reference
Identity theorists are agreed that, in the case of any particular
relevant identity, a fact will constitute the worldly *relatum*
of the relation, but there is significant disagreement among them on
the question what the item on the other end of the relation
is--whether a thought or a proposition (or both). As we have
seen, there are three possible positions here: (i) one which places
the identity relation exclusively between true thoughts and facts,
(ii) one which places it exclusively between true propositions and
facts, and (iii) a hybrid position which allows identities of both
sorts (identities obtaining at the level of sense will of course be
quite distinct from identities obtaining at the level of reference).
Which of these positions an identity theorist adopts will depend on
wider metaphysical and linguistic considerations that are strictly
extraneous to the identity theory as such.
Identity theorists who favor (i) generally do so because they want to
have nothing to do with *propositions* as such. That is to say,
such theorists eschew propositions as *reference-level
entities*: of course the *word* "proposition"
may be, and sometimes is, applied to Fregean thoughts at the level of
sense, rather than to Russellian propositions at the level of
reference. For example, Hornsby (1997: 2-3) uses
"proposition" and "thinkable" interchangeably.
So far, this terminological policy might be considered neutral with
respect to the location of propositions and thinkables in the Fregean
semantic hierarchy: that is to say, if one encounters a theorist who
talks about "thinkables" and "propositions",
even identifying them, one does not, just so far, know where in the
semantic hierarchy this theorist places these entities. In particular,
we cannot assume, unless we are specifically told so, that our
theorist locates either propositions or thinkables at the level of
*sense*. After all, someone who houses propositions at the
level of reference holds that these reference-level entities are
*thinkable*, in the sense that they are *graspable in
thought* (perhaps via thoughts at the level of sense). But they
are not *thinkables* if this latter word is taken, as it is by
McDowell and Hornsby, to be a technical term referring to entities at
the level of sense. For clarity the policy here will be to continue to
apply the word "proposition" exclusively to Russellian
propositions at the level of reference. Such propositions, it is
plausible to suppose, can be grasped in thought, but by definition
they are not thoughts or thinkables, where these two latter terms
have, respectively, their Fregean and McDowellian meanings. It is
worth noting that this point, though superficially a merely
terminological one, engages significantly with the interface between
the philosophies of language and mind that was touched on in the
opening paragraph. Anyone who holds that reference-level propositions
can, in the ordinary sense, be thought--are thinkable--is
likely to be unsatisfied with any terminology that seems to limit the
domain of the thinkable and of what is thought to the level of sense.
(On this point see further below in this section, and Gaskin 2020:
101-2.)
Usually, as has been noted, identity theorists who favor (i) above
have this preference because they repudiate propositions as that term
is being employed here: that is, they repudiate propositionally
structured reference-level entities. There are several reasons why
such identity theorists feel uncomfortable with propositions when
these are understood to be reference-level entities. There is a fear
that such propositions, if they existed, would have to be construed as
truth-makers; and identity theorists, as we have seen, want to have
nothing to do with truth-makers (Dodd 2008a: 112). That fear could
perhaps be defused if facts were also located at the level of
reference for true propositions to be identical with. This move would
take us to an identity theory in the style of (ii) or (iii) above.
Another reason for suspicion of reference-level propositions is that
commentators often follow Russell in his post-1904 aversion
specifically to *false* objectives, that is, to false
propositions *in re* (Russell 1966: 152; Cartwright 1987:
79-84). Such entities are often regarded as too absurd to take
seriously as components of reality (so T. Baldwin 1991: 46; Dodd 1995:
163; 1996; 2008a: 66-70, 113-14, 162-6). More
especially, it has been argued that false propositions *in re*
could not be unities, that the price of unifying a proposition at the
level of reference would be to make it true: if this point were
correct it would arguably constitute a *reductio ad absurdum*
of the whole idea of reference-level propositions, since it is
plausible to suppose that if there cannot be false reference-level
propositions, there cannot be true ones either (see Dodd 2008a: 165).
If, on the other hand, one is happy with the existence of propositions
*in re* or reference-level propositions, both true and
false,[6]
one is likely to favor an identity theory in the style of (ii) or
(iii). And, once one has got as far as jettisoning (i) and deciding
between (ii) and (iii), there must surely be a good case for adopting
(iii): for if one has admitted propositionally structured entities
both at the level of sense (as senses of declarative sentences) and at
the level of reference (propositions), there seems no good reason not
to be maximally liberal in allowing identities between entities of
these two types and, respectively, sense- and reference-level kinds of
fact (or fact-like entities).
Against what was suggested above about Frege
(§2),
it has been objected that Frege could not have held an identity
theory of truth (Baldwin 1991: 43); the idea here is that, even if he
had acknowledged states of affairs as *bona fide* elements of
reality, Frege could not have identified true thoughts with them on
pain of confusing the levels of sense and reference. As far as the
exegetical issue is concerned, the objection might be said to overlook
the possibility that Frege identified true thoughts with facts
construed as *sense*-level entities, rather than with states of
affairs taken as *reference*-level entities; and, as we have
noted, Frege does indeed appear to have done just this (see Dodd &
Hornsby 1992). Still, the objection raises an important theoretical
issue. It would surely be a serious confusion to try to construct an
identity *across* the categorial division separating sense and
reference, in particular to attempt to identify true Fregean thoughts
with reference-level facts or states of
affairs.[7]
It has been suggested that McDowell and Hornsby are guilty of this
confusion;[8]
they have each rejected the
charge,[9]
insisting that, for them, facts are not reference-level entities, but
are, like Fregean thoughts, sense-level
entities.[10]
But, if one adheres to the Fregean version of the identity theory ((i)
above), which identifies true thoughts with facts located at the level
of sense, and admits no correlative identity, in addition, connecting
true propositions located at the level of reference with facts or
fact-like entities also located at that level, it looks as though one
faces a difficult dilemma. At what level in the semantical hierarchy
is the world to be placed? Suppose first one puts it at the level of
reference (this appears to be Dodd's favored view: see 2008a:
180-1, and *passim*). In that case the world will contain
no facts or propositions, but just objects and properties hanging
loose in splendid isolation from one another, a dispensation which
looks like a version of Kantian transcendental idealism. (Simply
insisting that the properties include not merely monadic but also
polyadic ones, such as the relation of instantiation, will not in
itself solve the problem: we will still just have a bunch of separate
objects, properties, and relations.) If there are no true
propositions--no facts--or even false propositions to be
found at the level of reference, but if also, notwithstanding that
absence, the world is located there, the objects it contains will, it
seems, have to be conceived as bare objects, not as things of certain
sorts. Some philosophers of a nominalistic bias might be happy with
this upshot; but the problem is how to make sense of the idea of a
bare object--that is, an object not characterized by any
properties. (Properties not instantiated by any objects, by contrast,
will not be problematic, at least not for a realist.)
So suppose, on the other hand, that one places the world at the level
of sense, on the grounds that the world is composed of facts, and that
that is where facts are located. This ontological dispensation is
explicitly embraced by McDowell (1996: 179). The problem with this way
out of the dilemma would seem to be that, since Fregean senses are
constitutively modes of presentation of referents, the strategy under
current consideration would take the world to be made up of modes of
presentation--but of what? Of objects and properties? These are
certainly reference-level entities, but if they are presented by items
in the realm of sense, which is being identified on this approach with
the world, then again, as on the first horn of the dilemma, they would
appear to be condemned to an existence at the level of reference in
splendid isolation from one another, rather than in propositionally
structured combinations, so that once more we would seem to be
committed to a form of Kantian transcendental idealism (see Suhm,
Wagemann, & Wessels 2000: 32; Sullivan 2005: 59-61; Gaskin
2006: 199-203). Both ways out of the dilemma appear to have this
unattractive consequence. The only difference between those ways
concerns where exactly in the semantic hierarchy we locate the world;
but it is plausible that that issue, in itself, is or ought to be of
less concern to metaphysicians than the requirement to avoid divorcing
objects from the properties that make those objects things of certain
sorts; and both ways out of the dilemma appear to flout this
requirement.
To respect the requirement, we need to nest reference-level objects
and properties in propositions, or proposition-like structures, also
located at the level of reference. And then some of these structured
reference-level entities--the true or obtaining ones--will,
it seems, be facts, or at least fact-like. Furthermore, once one
acknowledges the existence of facts, or fact-like entities, existing
at the level of *sense*, it seems in any case impossible to
prevent the automatic generation of facts, or fact-like entities,
residing at the level of *reference*. For sense is mode of
presentation of reference. So we need reference-level facts or
fact-like entities to be what sense-level facts or fact-like entities
*present*. One has to decide how to treat these variously
housed fact-like entities theoretically. If one were to insist that
the sense-level fact-like entities were the genuine and only
*facts*, the corresponding reference-level entities would be no
better than *fact-like*, and contrariwise. But, regardless
whether the propositionally structured entities automatically
generated in this way by sense-level propositionally structured
entities are to be thought of as proper facts or merely as fact-like
entities, it would seem perverse not to identify the world with these
entities.[11]
For to insist on continuing to identify the world with sense-level
rather than reference-level propositionally structured entities would
seem to fly in the face of a requirement to regard the world as
maximally objective and maximally non-perspectival. McDowell himself
hopes to avert any charge of embracing an unacceptable idealism
consequent on his location of the world at the level of sense by
relying on the point that senses present their references directly,
not descriptively, so that reference is, as it were, contained in
sense (1996: 179-80). To this it might be objected that the
requirement of maximal objectivity forces an identification of the
world with the contained, not the containing, entities in this
scenario, which in turn seems to force the upshot--if the threat
of Kantian transcendental idealism is really to be obviated--that
the contained entities be propositionally structured as such, that is,
*as contained entities*, and not simply in virtue of being
contained in propositionally structured containing entities. (For a
different objection to McDowell, see Sullivan 2005: 60 n. 6.)
## 5. Difficulties with the Theory and Possible Solutions
### 5.1 The modal problem
G. E. Moore drew attention to a point that might look (and has been
held to be) problematic for the identity theory (Moore 1953: 308; Fine
1982: 46-7; Künne 2003: 9-10). The proposition
*that Socrates is wise* exists in all possible worlds where
Socrates and the property of wisdom exist, but in some of those worlds
this proposition is true and in others it is false. The fact *that
Socrates is wise*, by contrast, only exists in those worlds where
the proposition both exists and is true. So it would seem that the
proposition *that Socrates is wise* cannot be identical with
the fact *that Socrates is wise*. They have different modal
properties, and so by the principle of the indiscernibility of
identicals they cannot be identical.
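The structure of the objection can be set out as a short schematic derivation (a sketch only; the notation is illustrative, with \(p\) abbreviating the proposition *that Socrates is wise*, \(f\) the fact *that Socrates is wise*, \(E(x,w)\) for "\(x\) exists at world \(w\)", and \(T(p,w)\) for "\(p\) is true at \(w\)"):

```latex
% Schematic form of the modal objection (illustrative notation)
\begin{align*}
&\text{(1)}\ \exists w\,\bigl(E(p,w) \wedge \neg T(p,w)\bigr)
  && \text{$p$ exists at a world where it is false}\\
&\text{(2)}\ \forall w\,\bigl(E(f,w) \rightarrow T(p,w)\bigr)
  && \text{$f$ exists only where $p$ is true}\\
&\text{(3)}\ \exists w\,\bigl(E(p,w) \wedge \neg E(f,w)\bigr)
  && \text{from (1) and (2)}\\
&\text{(4)}\ x = y \rightarrow \forall \varphi\,\bigl(\varphi(x) \leftrightarrow \varphi(y)\bigr)
  && \text{indiscernibility of identicals}\\
&\text{(5)}\ p \neq f
  && \text{from (3) and (4)}
\end{align*}
```

Laid out this way, the reply canvassed in the following paragraphs can be seen as a rejection of premise (2): the fact exists at the worlds in question, but is not *a* fact there.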
Note, first, that this problem, if it is a problem, has nothing
especially to do with the identity theory of truth or with facts. It
seems to arise already for true propositions and propositions taken
*simpliciter* before ever we get to the topic of facts. That
is, one might think that the proposition *that Socrates is
wise* is identical with the true proposition *that Socrates is
wise* (assuming, as we are doing, that this proposition
*is* true); but we then face the objection that the proposition
taken *simpliciter* and the true proposition differ in their
modal properties, since (as one might suppose) the true proposition
*that Socrates is wise* does not exist at worlds where the
proposition *that Socrates is wise* is false, but the
proposition taken *simpliciter* does. Indeed the problem, if it
is a problem, is still more general, and purported solutions to it go
back at least to the Middle Ages (when it was discussed in connection
with Duns Scotus' formal distinction; see Gaskin 2002 [with
references to further relevant literature]). Suppose that Socrates is
a cantankerous old curmudgeon. Now grumpy Socrates, one would think,
is identical with Socrates. But in some other possible worlds Socrates
is of a sunny and genial disposition. So it would seem that Socrates
cannot be identical with grumpy Socrates after all, because in these
other possible worlds, while Socrates goes on existing, grumpy
Socrates does not exist--or so one might argue.
Can the identity theorist deal with this problem, and if so how? Here
is one suggestion. Suppose we hold, staying with grumpy Socrates for a
moment, that, against the assumption made at the end of the last
paragraph, grumpy Socrates *does* in fact exist in worlds where
Socrates has a sunny disposition. The basis for this move would be the
thought that, after all, grumpy Socrates *is* identical with
Socrates, and *Socrates* exists in these other worlds. So
grumpy Socrates exists in those worlds too; it is just that he is not
grumpy in those worlds. (Suppose Socrates is *very* grumpy;
suppose in fact that grumpiness is so deeply ingrained in his
character that worlds in which he is genial are quite far away.
Someone surveying the array of possible worlds, starting from the
actual world and moving out in circles, and stumbling at long last
upon a world with a pleasant Socrates in it, might register the
discovery by exclaiming, with relief, "Oh look! Grumpy Socrates
is not grumpy over here!".) Similarly, one might contend, the
true proposition, and fact, *that Socrates is wise* goes on
existing in the worlds where Socrates is not wise, because the true
proposition, and fact, *that Socrates is wise* just *is*
the proposition *that Socrates is wise*, and that proposition
goes on existing in these other worlds, but in those worlds that true
proposition, and fact, is not a true proposition, or a fact. (In
Scotist terms one might say that the proposition *that Socrates is
wise* and the fact *that Socrates is wise* are really
identical but formally distinct.)
This solution was, in outline, proposed by Richard Cartwright in his
1987 discussion of the identity theory (Cartwright 1987: 76-8;
cf. David 2002: 128-9; Dodd 2008a: 86-8; Candlish &
Damnjanovic 2018: 265-6). According to Cartwright, the true
proposition, and fact, *that there are subways in Boston*
exists in other possible worlds where Boston does not have subways,
even though in those worlds that fact would not be a fact.
(Compare: grumpy Socrates exists in worlds where Socrates is genial
and sunny, but he is not grumpy there.) So even in worlds where it is
not *a* fact *that Boston has subways*, *that*
fact, namely the fact *that Boston has subways*, continues to
exist. Cartwright embellishes his solution with two controversial
points. First, he draws on Kripke's distinction between rigid
and non-rigid designation, suggesting that his solution can be
described by saying that the expression "The fact *that
Boston has subways*" is a non-rigid designator. But it is
plausible that that expression goes on referring to, or being
satisfied by (depending on how exactly one wants to set up the
semantics of definite descriptions: see Gaskin 2008: 56-81), the
fact *that Boston has subways* in possible worlds where Boston
does not have subways; it is just that, though that fact exists in
those worlds, it is not a fact there. But that upshot does not appear
to derogate from the rigidity of the expression in
question. Secondly, Cartwright allows for a true
reading of "The fact *that there are subways in Boston*
might not have been the fact *that there are subways in
Boston*". But it is arguable that we should say that this
sentence is just false (David 2002: 129). The fact *that there are
subways in Boston* would still have gone on being *the same
fact* in worlds where Boston has no subways, namely the fact
*that there are subways in Boston*; it is just that in those
worlds *this* fact would not have been *a* fact. You
might say: in *that* world the fact *that there are subways
in Boston* would not be correctly described as a fact, but in
talking about that world we are talking about it from the point of
view of our world, and in our world it is a fact. (Similarly with
grumpy Socrates.)
Now, an objector may want to press the following point against the
above purported solution to the difficulty. Consider again the fact
*that Socrates is wise*. Surely, it might be said, it is more
natural to maintain that that fact *does not exist* in a
possible world where Socrates is not wise, rather than that it exists
there all right, but is not a fact. After all, imagine a conversation
about a world in which Socrates is not wise and suppose that Speaker
*A* claims that Socrates is indeed wise in that world. Speaker
*B* might counter with
>
>
> No, sorry, you're wrong: there is no such fact in that world;
> the purported fact *that Socrates is wise* simply does not
> exist in that world.
>
>
>
It might seem odd to insist that *B* is not allowed to say this
and must say instead
>
>
> Yes, you're right that there is such a fact in that world,
> namely the fact *that Socrates is wise*, but in that world
> *that* fact is not *a* fact.
>
>
>
How might the identity theorist respond to this objection? One
possible strategy would be to make a distinction between *fact*
and *factuality*, as follows. *Factuality*, one might
say, is a reification of facts. Once you have a fact, you also get, as
an ontological spin-off, the *factuality* of that fact. The
fact, being a proposition, exists at all possible worlds where the
proposition exists, though in some of these worlds it may not be a
fact: it will not be a fact in worlds where the proposition is false.
The factuality of that fact, by contrast, only exists at those worlds
where the fact *is* a fact--where the proposition is true.
So factuality is a bit like a trope. Compare grumpy Socrates again.
Grumpy Socrates, the identity theorist might contend, exists at all
worlds where Socrates exists, though at some of those worlds he is not
grumpy. But *Socrates' grumpiness*--that particular
trope--exists only at worlds where Socrates is grumpy. That seems
to obviate the problem, because the suggestion being canvassed here is
that grumpy Socrates is identical not with *Socrates'
grumpiness*--so that the fact that these two entities have
different modal properties need embarrass no one--but rather with
*Socrates*. Similarly, the suggestion is that the proposition
*that Socrates is wise* is identical not with the
*factuality* of the fact *that Socrates is wise*, but
just with that *fact*. So the identity theorist would
accommodate the objector's point by insisting that
*facts* exist at possible worlds where their
*factualities* do not exist.
The reader may be wondering why this problem was ever raised against
the identity theory of truth in the first place. After all, the
identity theorist does not say that propositions *simpliciter*
are identical with facts, but that *true* propositions are
identical with facts, and now true propositions and facts surely have
exactly the *same* modal properties: for regardless how things
are with the sheer proposition *that Socrates is wise*, at any
rate the *true* proposition *that Socrates is wise* must
surely be thought to exist at the same worlds as the fact *that
Socrates is wise*, whatever those worlds are. However, as against
this quick way with the purported problem, there stands the intuition,
mentioned and exploited above, that the true proposition *that
Socrates is wise* is identical with the proposition *that
Socrates is wise*. So long as that intuition is in play, the
problem does indeed seem to arise--for true propositions, in the
first instance, and then for facts by transitivity of identity. But
the identity theorist will maintain that, as explained, the problem
has a satisfactory solution.
### 5.2 The "right fact" problem
Candlish, following Cartwright, has urged that the identity theory of
truth is faced with the difficulty of getting hold of the "right
fact" (Cartwright 1987: 74-5; Candlish 1999a: 238-9;
1999b: 202-4). Consider a version of the identity theory that
states:
(11)
The proposition *that *p** is true just if it is
identical with a
fact.[12]
Candlish's objection is now that (11)
>
>
> does not specify *which* fact has to be identical with the
> proposition for the proposition to be true. But what the identity
> theory requires is not that a true proposition be identical with
> *some fact or other*, it is that it be identical with the
> *right* fact. (1999b: 203)
>
>
>
In another paper Candlish puts the matter like this:
>
>
> But after all, any proposition might be identical with *some*
> fact or other (and there are reasons identified in the
> *Tractatus* for supposing that all propositions are themselves
> facts), and so all might be true. What the identity theory needs to
> capture is the idea that it is *by virtue of* being identical
> with the *appropriate* fact that a proposition is true. (1999a:
> 239)
>
>
>
The reference to the *Tractatus* is suggestive. Of course, it
might be objected that the *Tractatus* does not have
propositions in the sense of that word figuring here: that is, it does
not recognize Russellian propositions (propositions at the level of
reference). Nor indeed does it appear to recognize Fregean thoughts.
In the *Tractatus*, as we have noted
(§2),
declarative sentences (*Sätze*) are facts (arrangements
of names), and states of affairs (*Sachlagen*,
*Sachverhalte*, *Tatsachen*) are also facts
(arrangements of objects). Even so, Candlish's allusion to the
*Tractatus* reminds us that propositions (in our sense)
*are* Tractarian inasmuch as they are structured
*arrangements of entities*, namely objects and properties.
(Correlatively, thoughts are structured arrangements of senses.) False
propositions (and false thoughts) will equally be arrangements of
objects and properties (respectively, senses). So the difficulty that
Cartwright and Candlish have identified can be put like this.
Plausibly *any* proposition, whether or not it is true, is
identical with *some fact or other* given that a proposition is
an arrangement of entities of the appropriate sort. But if
propositions just *are* facts, then *every* proposition
is identical with *some* fact--at the very least, with
itself--whether it is true or false. So the right-to-left
direction of (11) looks incorrect.
J. C. Beall (2000) attempts to dissolve this problem on the identity
theorist's behalf by invoking the principle of the
indiscernibility of identicals. His proposal works as follows. If we
ask, in respect of (11), what the "right" fact is, it
seems that we can answer that the "right" fact must at
least have the property of *being identical with the proposition
that *p**, and the indiscernibility principle then guarantees
that there is only one such fact. This proposal is open to an obvious
retort. Suppose that the proposition *that *p** is false.
That proposition will still be identical with itself, and if we are
saying (in Wittgensteinian spirit) that propositions are facts, then
that proposition will be identical with at least one fact, namely
itself. So it will satisfy the right-hand side of (11), its falsity
notwithstanding. But reflection on this retort suggests a patch-up to
Beall's proposal: why not say that the *right* fact is
*the fact that *p**? We would then be able to gloss (11)
with
(12)
The proposition *that *p** is true just if (a) it is
a fact *that *p**, and (b) the proposition *that
*p** is identical with *that fact*.
Falsity, it seems, now no longer presents a difficulty, because if it
is false *that *p** then it is not a fact *that
*p**, so that (a) fails, and there is no appropriate
candidate for the proposition *that *p** to be identical
with.[13]
Notice that, in view of the considerations already aired in
connection with the modal problem ((i) of this section), caution is
here required. Suppose that it is true *that *p** in the
actual world, but false in some other possible world. According to the
strategy that we have been considering on the identity
theorist's behalf, it would be wrong to say that, in the
possible world where it is false *that *p**, there is no
such fact as the fact *that *p**. The strategy has it that
there is indeed such a fact, because it is (in the actual world) a
fact *that *p**, and that fact, and the true proposition,
*that *p**, go on existing in the possible world where it
is false *that *p**; it is just that *that* fact is
not *a* fact in that possible world. But (12), the identity
theorist will maintain, deals with this subtlety. In the possible
world we are considering, where it is false *that *p**,
though the fact *that *p** exists, it is not a fact
*that *p**, so (a) fails, and there is accordingly no risk
of our getting hold of the "wrong" fact. Note also that if
a Wittgensteinian line is adopted, while the (false) proposition that
*p* will admittedly be identical with *a* fact--at
the very least with itself--it will be possible, given the
failure of (a), for the identity theorist to contend with a clear
conscience that that fact is the *wrong* fact, which does not
suffice to render the proposition true.
### 5.3 The "slingshot" problem
If the notorious "slingshot" argument worked, it would
pose a problem for the identity theory of truth. The argument exists
in a number of different, though related, forms, and this is not the
place to explore all of these in
detail.[14]
Here we shall look briefly at what is one of the simplest and most
familiar versions of the argument, namely Davidson's. This
version of the argument aims to show that if true declarative
sentences refer to anything (for example to propositions or facts),
then they all refer to the same thing (to the "Great
Proposition", or to the "Great Fact"). This upshot
would be unacceptable to an identity theorist of a Russellian cast,
who thinks that declarative sentences refer to propositions, and that
true such propositions are identical with facts: any such theorist is
naturally going to want to insist that the propositions referred to by
different declarative sentences are, at least in general, distinct
from one another, and likewise that the facts with which distinct true
propositions are identical are also distinct from one another.
Davidson expresses the problem that the slingshot argument purportedly
throws up as follows:
>
>
> The difficulty follows upon making two reasonable assumptions: that
> logically equivalent singular terms have the same reference; and that
> a singular term does not change its reference if a contained singular
> term is replaced by another with the same reference. But now suppose
> that "*R*" and "*S*" abbreviate any
> two sentences alike in truth value. (1984: 19)
>
>
>
He then argues that the following four sentences have the same
reference:
(13)
\(R\)
(14)
\(\hat{z}(z\! =\! z \amp R) = \hat{z}(z\! =\! z)\)
(15)
\(\hat{z}(z\! =\! z \amp S) = \hat{z}(z\! =\! z \))
(16)
\(S\)
(The hat over a variable symbolizes the description operator: so
"\(\hat{z}\)" means *the \(z\) such that ...*)
This is because (13) and (14) are logically equivalent, as are (15)
and (16), while the only difference between (14) and (15) is that (14)
contains the expression (Davidson calls it a "singular
term") "\(\hat{z} (z\! =\! z \amp R)\)" whereas (15)
contains "\(\hat{z} (z\! =\! z \amp S)\)",
>
>
> and these refer to the same thing if *S* and *R* are alike
> in truth value. Hence any two sentences have the same reference if
> they have the same truth value. (1984: 19)
>
>
>
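The first equivalence claim can be spelled out, on a class-abstract reading of the hat notation, as follows (this gloss is ours, not Davidson's). The identity

\(\hat{z}(z\! =\! z \amp R) = \hat{z}(z\! =\! z)\)

holds just in case the conditions \(z\! =\! z \amp R\) and \(z\! =\! z\) are satisfied by exactly the same things. Since \(z\! =\! z\) is satisfied by everything, the two conditions are coextensive just in case \(R\) is true. Hence (13) and (14) agree in truth value in every model, and similarly for (15) and (16).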
The difficulty with this argument, as a number of writers have pointed
out (see, e.g., Yourgrau 1987; Gaskin 1997: 153 n. 17; Künne
2003: 133-41), and the place where the identity theorist is
likely to raise a cavil, lies in the first assumption on which it
depends. Davidson calls this assumption "reasonable", but
it has been widely questioned. It states "that logically
equivalent singular terms have the same reference". But
intuitively, the ideas of logical equivalence and reference seem to be
quite distinct, indeed to have, as such, little to do with one
another, so that it would be odd if there were some *a priori*
reason why the assumption had to hold. And it is not difficult to
think of apparent counterexamples: the sentence "It is
raining" is logically equivalent to the sentence "It is
raining and (either Pluto is larger than Mercury or it is not the case
that Pluto is larger than Mercury)", but the latter sentence
seems to carry a referential payload that the former does not. Of
course, if declarative sentences refer to truth-values, as Frege
thought, then the two sentences will indeed be co-referential, but to
assume that sentences refer to truth-values would be question-begging
in the context of an argument designed to establish that all true
sentences refer to the same thing.
### 5.4 The congruence problem
A further objection to the identity theory, going back to an
observation of Strawson's, takes its cue from the point that
canonical names of propositions and of facts are often not
straightforwardly congruent with one another: they are often not
intersubstitutable *salva congruitate* (or, if they are, they
may not be intersubstitutable *salva veritate*) (Strawson 1971:
196; cf. Künne 2003: 10-12). For example, we say that
propositions are true, not that they obtain, whereas we say that facts
obtain, not that they are true. How serious is this point? The
objection in effect presupposes that for two expressions to be
co-referential, or satisfied by one and the same thing, they must be
syntactically congruent, have the same truth-value potential, and
match in terms of general contextual suitability. The assumption of
the syntactic congruence of co-referential expressions is
controversial, and it may be possible for the identity theorist simply
to deny it (see Gaskin 2008: 106-10, for argument on the point,
with references to further literature; cf. Dodd 2008a: 83-6).
Whether co-referential expressions must be syntactically congruent
depends on one's conception of reference, a matter that cannot
be further pursued here (for discussion see Gaskin 2008: ch. 2; 2020:
chs. 3-5).
There has been a good deal of discussion in the literature concerning
the question whether an identification of facts with true propositions
is undermined not specifically by phenomena of *syntactic*
incongruence but rather by failure of relevant intersubstitutions to
preserve *truth-values* (see, e.g., King 2007: ch. 5; King in
King, Soames, & Speaks 2014: 64-70, 201-8; Hofweber
2016: 215-23; Candlish & Damnjanović 2018: 264). The
discussion has focused on examples like the following:
(17)
Daniel remembers the fact that this is a leap year;
(18)
Daniel remembers the true proposition that this is a leap
year;
(19)
The fact that my local baker has shut down is just
appalling;
(20)
The true proposition that my local baker has shut down is just
appalling.
The problem here is said to be that the substitution of "true
proposition" for "fact" or vice versa generates
different readings (in particular, readings with different
truth-values). Suppose Daniel has to memorize a list of true
propositions, of which one is the proposition that this is a leap
year. Then it is contended that we can easily imagine a scenario in
which (17) and (18) differ in truth-value. Another way of putting the
same point might be to say that (17) is equivalent to
(21)
Daniel remembers that this is a leap year,
but that (18) is not equivalent to (21), because--so the argument
goes--(18) but not (21) would be true if Daniel had memorized his
list of true propositions without realizing that they *were*
true. Similar differences can be argued to apply, *mutatis
mutandis*, to (19) and (20). Can the identity theorist deal with
this difficulty?
In the first place one might suggest that the alleged mismatch between
(17) and (18) is less clear than the objector claims. (17) surely does
have a reading like the one that is said to be appropriate for (18).
Suppose Daniel has to memorize a list of facts. (17) could then
diverge in truth-value from
(22)
Daniel remembers the fact that this is a leap year *as a
fact*.
For there is a reading of (17) on which, notwithstanding (17)'s
truth, (22) is false: this is the reading on which Daniel has indeed
memorized a list of facts, but without necessarily realizing that the
things he is memorizing *are* facts. He has memorized the
relevant fact (that this is a leap year), we might say, but not
*as* a fact. That is parallel to the reading of (18) according
to which Daniel has memorized the true proposition that this is a leap
year, but not *as* a true proposition. The identity theorist
might then aver that, perhaps surprisingly, the same point actually
applies to the simple (21), on the grounds that this sentence can mean
that Daniel remembers the propositional object *that this is a leap
year* (from a list of such objects, say, that he has been asked to
memorize), with no implication that he remembers it either *as a
proposition* or *as a fact*. So, according to this
response, the transparent reading of (18)--which has Daniel
remember the propositional object, namely *that this is a leap
year*, but not necessarily remember it *as* a fact, or even
as the propositional object *that this is a leap year* (he
remembers it under some other mode of presentation)--is also
available for (17) and for (21).
What about the opaque reading of either (17) or (21), which implies
that Daniel knows *for a fact* that this is a leap
year--is that reading available for (18) too? The identity
theorist might maintain that this reading is indeed available, and
then explain why we tend not to use sentences like (18) in the
relevant sense, preferring sentences of the form of (17) or (21), on
the basis of the relative technicality of the vocabulary of (18). The
idea would be that it is just an accident of language that we prefer
either (17) or (21) to (18) where what is in question is the sense
that implies that Daniel has propositional knowledge that this is a
leap year (is acquainted with that fact as a fact), as opposed to
having mere acquaintance, under some mode of presentation or other,
with the propositional object which happens to be *(the fact) that
this is a leap year*. And if we ask why we prefer (17) or (21)
to
(23)
Daniel remembers the proposition that this is a leap year,
then the answer will be the Gricean one that (23) conveys less
information than (17) or (21), under the reading of these two
sentences that we are usually interested in, according to which Daniel
remembers the relevant fact as a fact, for (23) is compatible with the
falsity of "This is a leap year". Hence to use (23) in a
situation where one was in a position to use (17) or (21) would carry
a misleading conversational implicature. That, at any rate, is one
possible line for the identity theorist to take. (It is worth noting
here that, if the identity theorist is right about this, it will
follow that the "know that" construction will be subject
to a similar ambiguity as the "remember that"
construction, given that remembering is a special case of knowing.
That is: "*A* knows *that *p**" will mean
either "*A* is acquainted with the fact *that
*p**, and is acquainted with it *as a fact*" or
merely "*A* is acquainted with the fact *that
*p**, but not necessarily with it *as
such*--either as a fact or even as a propositional
object".)
### 5.5 The individuation problem
It might appear that we individuate propositions more finely than
facts: for example, one might argue that the fact *that Hesperus is
bright* is the same fact as the fact *that Phosphorus is
bright*, but that the propositions in question are different (see
on this point Künne 2003: 10-12; Candlish & Damnjanović
2018: 266-7). The identity theorist has a number of strategies
in response to this objection. One would be simply to deny it, and
maintain that facts are individuated as finely as propositions: if one
is a supporter of the Fregean version of the identity theory, this is
likely to be one's response (see, e.g., Dodd 2008a: 90-3).
Alternatively, one might respond by saying that, if there is a good
point hereabouts, at best it tells only against the Fregean and
Russellian versions of the identity theory, not against the hybrid
version. The identity theory in the hybrid version can agree that we
sometimes think of facts as extensional, reference-level entities and
sometimes also individuate propositions or proposition-like entities
intensionally. Arguably, these twin points do indeed tell against
either a strict Fregean or a strict Russellian version of the identity
theory: they tell against the strict Fregean position because, as well
as individuating facts intensionally, we also, sometimes, individuate
facts extensionally; and they tell against the strict Russellian
position because, as well as individuating facts extensionally, we
also, sometimes, individuate facts intensionally. But it is plausible
that the hybrid version of the identity theory is not touched by the
objection, because that version of the theory accommodates
propositionally structured and factual entities at both levels of
sense and reference, though different sorts of these entities at these
different levels--either propositions at the level of sense and
correlative proposition-like entities at the level of reference or
*vice versa*, and similarly, *mutatis mutandis*, for
facts and fact-like entities. It will follow, then, for this version
of the identity theory, that Fregean thoughts and Russellian
propositions are available, if true, to be identical with the factual
entities of the appropriate level (sense and reference, respectively),
and the individuation problem will not then, it seems, arise.
Propositions or propositionally structured entities will be
individuated just as finely as we want them to be individuated, and at
each level of resolution there will be facts or fact-like entities,
individuated to the same resolution, for them to be identical with, if
true.[15]
### 5.6 Truth and Intrinsicism
The solutions to these problems, if judged satisfactory, seem to
direct us to a conception of truth that has been called
"intrinsicist" (Wright 1999: 207-9), and
"primitivist" (Candlish 1999b: 207). This was a conception
recognized by Moore and Russell who, in the period when they were
sympathetic to the identity theory, spoke of truth as a simple and
unanalysable property (Moore 1953: 261; 1993: 20-1; Russell
1973: 75; Cartwright 1987: 72-5; Johnston 2013: 384). The point
here would be as follows. There are particular, individual
explanations of the truth of many propositions. For example, the true
proposition that there was a fire in the building will be explained by
alluding to the presence of combustible material, enough oxygen, a
spark caused by a short-circuit, etc. So, case by case, we will (at
least often) be able to provide explanations why given propositions
are true, and science is expanding the field of such explanations all
the time. But according to the intrinsicist, there is no prospect of
providing a *general* explanation of truth, in the sense of an
account that would explain, in utterly general terms, why *any*
true proposition was true. At that general level, according to the
intrinsicist, there is nothing interesting to be said about what makes
true propositions true: there are only the detailed case-histories. An
intrinsicist may embrace one or another version of the identity theory
of truth: what has to be rejected is the idea that the truth of a true
proposition might consist in a relation to a *distinct*
fact--that the truth of the true proposition *that Socrates is
wise*, for example, might consist in *anything other than*
identity with the fact *that Socrates is wise*. On this
approach, truth is held to be both intrinsic to propositions, and
primitive.[16]
Intrinsicism was not a popular position, at least until recently:
Candlish described it as "so implausible that almost no one else
[apart from Russell, in 1903-4] has been able to take it
seriously" (1999b: 208); but it may be gaining in popularity now
(see, e.g., Asay 2013).
Candlish (1999b: 208) thinks that intrinsicism and the identity theory
are competitors, but perhaps that view is not mandatory. Intrinsicism
says that truth is a simple and unanalysable property of propositions;
the identity theory says that a true proposition is identical with a
fact (and with the right fact). These statements will seem to clash
only if the identity theory is taken to propound a heavy-duty analysis
of truth. But if, following recent exponents of the theory, we take it
rather to be merely spelling out a connection between two entities
that we have in our ontology anyway, namely true propositions and
facts, and which turn out (like Hesperus and Phosphorus) to be
identical, on the basis of a realization that an entity like *that
Socrates is wise* is both a true proposition and a fact, then any
incipient clash between intrinsicism and the identity theory will, it
seems, be averted. On this approach, a natural thing to say will be
that the identity theory describes the *way in which* truth is
a simple and unanalysable property. |
truth-pluralist | ## 1. Alethic pluralism about truth: a plurality of properties
### 1.1 Strength
The pluralist's thesis that there are many ways of being true
is typically construed as being tantamount to the claim that the number
of truth properties is greater than one. However, this basic
interpretation,
* (1) there is more than one truth property.
is compatible with both moderate as well as more radical
precisifications. According to moderate pluralism, at least one way of
being true among the multitude of others is universally shared:
* (2) there is more than one truth property, some of which are had by
all true sentences.
According to strong pluralism, however, there is no such universal
or common way of being true:
* (3) there is more than one truth property, none of which is had by
all true sentences.
Precisifying pluralism about truth in these two ways brings several
consequences to the fore. Firstly, both versions of pluralism conflict
with strong monism about truth:
* (4) there is exactly one truth property, which is had by all true
sentences.
Secondly, moderate--but not strong--pluralism is
compatible with a moderate version of monism about truth:
* (5) there is at least one truth property, which is had by all true
sentences.
(2) and (5) are compatible because (5) does not rule out the
possibility that the truth property had by all true sentences might be
one among the multitude of truth properties endorsed by the moderate
pluralist (i.e., by someone who endorses (2)). Only strong pluralism in
(3) entails the denial of the claim that all true sentences are true in
the same way. Thus, moderate pluralists and moderate monists
can in principle find common ground.
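The relationships among these claims can be made explicit in a schematic second-order notation (our gloss, not the source's; \(T\) ranges over truth properties, \(s\) over sentences, and the document's \(\amp\) symbolizes conjunction):

(2): \(\exists T'\, \exists T''\, (T' \neq T'') \amp \exists T\, \forall s\, (\mathrm{True}(s) \rightarrow Ts)\)

(3): \(\exists T'\, \exists T''\, (T' \neq T'') \amp \neg \exists T\, \forall s\, (\mathrm{True}(s) \rightarrow Ts)\)

(5): \(\exists T\, \forall s\, (\mathrm{True}(s) \rightarrow Ts)\)

So rendered, (2) is simply the conjunction of (1) with (5), which makes their compatibility transparent, while (3) conjoins (1) with the negation of (5), which is why strong pluralism excludes even moderate monism.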
### 1.2 Related kinds of pluralism and neighboring views
Not infrequently, pluralism about truth fails to be distinguished
from various other theses about associated conceptual, pragmatic,
linguistic, semantic, and normative phenomena. Each of these other theses involves
attributing plurality to a different aspect of the analysandum
(explanandum, definiendum, etc.). For instance, linguistically, one may
maintain that there is a plurality of truth predicates (Wright 1992;
Tappolet 1997; Lynch 2000; Pedersen 2006, 2010). Semantically, one may
maintain that alethic terms like 'true' have multiple
meanings (Pratt 1908; Tarski 1944; Kölbel 2008, 2013; Wright 2010).
Cognitively or conceptually, one may maintain that there is a
multiplicity of truth concepts or regimented ways of conceptualizing
truth (Künne 2003; cf. Lynch 2006). Normatively, one might think
that truth has a plurality of profiles (Ferrari 2016, 2018).
These parameters or dimensions suggest that pluralism is itself not
just a single, monolithic theory (see also Sher 1998; Wright
2013). Any fully developed version of pluralism about truth is likely
to make definitive commitments about at least some of these other
phenomena. (However, it hardly entails them; one can consistently be
an alethic pluralist about truth, for instance, without necessarily
having commitments to linguistic pluralism about truth predicates, or
about concepts like fact or actuality.) Nonetheless, theses about
these other phenomena should be distinguished from pluralism about
truth, as understood here.
Likewise, pluralism about truth must be distinguished from several
neighbouring views, such as subjectivism, contextualism, relativism, or
even nihilism about truth. For example, one can maintain some form of
subjectivism about truth while remaining agnostic about how many ways
of being true there are. Or again, one can consistently maintain that
there is exactly one way of being true, which is always and everywhere
dependent on context. Nor is it inconsistent to be both a pluralist and
an absolutist or other anti-relativist about truth. For example, one
might argue that each of the different ways of being true holds
absolutely if it holds at all (Wright 1992). Alternatively, one might
explicate a compatibilist view, in which there are at least two kinds
of truth, absolute and relative truth (Joachim 1905), or deflationist
and substantivist (Kölbel 2013). Such views would be, necessarily,
pluralistic. Occasionally, pluralists have also been lumped together
with various groups of so-called 'nihilists',
'deniers', and 'cynics', and even associated
with an 'anything goes' approach to truth (Williams 2002).
However, any version of pluralism is prima facie inconsistent with any
view that denies truth properties, such as nihilism and certain forms
of nominalism.
### 1.3 Alethic pluralism, inflationism, and deflationism
The foregoing varieties of pluralism are consistent with various
further analyses of pluralists' ideas about truth. For instance,
pluralists may--but need not--hold that truth properties are
simply one-place properties, since commitments to truth's being
monadic are orthogonal to commitments to its being monistic. However,
most pluralists converge on the idea that truth is a substantive
property and take this idea as the point of departure for articulating
their view.
A property is substantive just in case there is more to its nature
than what is given in our concept of the property. A paradigmatic
example of a substantive property is the property of being water. There
is more to the nature of water--being composed of H\(\_2\)O,
e.g.--than what is revealed in our concept of water (the
colourless, odourless liquid that comes out of taps, fills lakes,
etc.).
The issue of substantiveness connects with one of the major issues
in the truth debate: the rift between deflationary theories of truth
and their inflationary counterparts (Horwich 1990; Edwards 2013b;
Künne 2003; Sher 2016b; Wyatt 2016; Wyatt & Lynch 2016). A common way to understand the divide between
deflationists and inflationists is in terms of the question whether or
not truth is a substantive property. Inflationists endorse this idea,
while deflationists reject it. More specifically, deflationists and
inflationists can be seen as disagreeing over the following claim:
* (6) there exists some property \(F\) (coherence,
correspondence, etc.) such that any sentence, if true, is so in virtue
of being \(F\)--and this is a fact that is not transparent in
the concept of truth.
The inflationist accepts (6). According to her, it is not
transparent in the concept of truth that being true is a matter of
possessing some further property (cohering, corresponding, etc.). This
makes truth a substantive property. The deflationist, on the other
hand, rejects (6) because she is committed to the idea that everything
there is to know about truth is transparent in the concept--which,
on the deflationist's view, is exhausted by the disquotational
schema ('\(p\)' is true if, and only if, \(p\)), or some principle
like it.
Deflationists also tend to reject a further claim about
truth's explanatory role:
* (7) \(F\) is necessary and sufficient for explaining the truth
of any true sentence \(p\).
Inflationists, on the other hand, typically accept both (6) and
(7).
Strong and moderate versions of pluralism are perhaps best
understood as versions of a non-traditional inflationary theory (for an exception, see Beall 2013; for refinements, see Edwards 2012b and Ferrari & Moruzzi forthcoming).
Pluralists side with inflationists on (6) and (7), and so, their views
count as inflationary. Yet, traditional inflationary theories are also
predominantly monistic. They differ about which property
\(F\)--coherence, identity, superwarrant, correspondence,
etc.--truth consists in, but concur that there is precisely one
such property:
* (8) there is a single property \(F\) and truth
consists in a single property \(F\).
The monistic supposition in (8) is tantamount to the claim that
there is but one way of being true. In opposing that claim, pluralism
counts as non-traditional.
## 2. Motivating pluralism: the scope problem
Pluralists' rejection of (8) typically begins by rendering it as a
claim about the invariant nature of truth across all regions of
discourse (Acton 1935; Wright 1992, 1996; Lynch 2000, 2001; for more
on domains see Edwards 2018b; Kim & Pedersen 2018, Wyatt 2013; Yu
2017). Thus rendered, the claim appears to be at odds with the
following observation:
* (9) the plausibility of each inflationist's candidate for the
property \(F\) differs across different regions of discourse.
For example, some theories--such as correspondence
theories--seem intuitively plausible when applied to truths about
ladders, ladles, and other ordinary objects. However, those theories
seem much less convincing when applied to truths about comedy, fashion,
ethical mores, numbers, jurisprudential dictates, etc. Conversely,
theories that seem intuitively plausible when applied to legal, comic,
or mathematical truths--such as those suggesting that the nature
of truth is coherence--seem less convincing when applied to truths
about the empirical world.
Pluralists typically take traditional inflationary theories of truth
to be correct in analyzing truth in terms of some substantive property
\(F\). Yet, the problem with their monistic suppositions lies with
generalization: a given property \(F\) might be necessary and
sufficient for explaining why sentences about a certain subject matter
are true, but no single property is necessary and sufficient for
explaining why \(p\) is true for all sentences \(p\), whatever its subject
matter. Consequently, those theories' inability to generalize
their explanatory scope beyond the select few regions of discourse for
which they are intuitively plausible casts aspersions on their candidate
for \(F\). This problem has gone by various names, but has come to
be known as 'the scope problem' (Lynch 2004b, 2009; cf. Sher
1998).
Pluralists respond to the scope problem by first rejecting (8) and
replacing it with:
* (10) truth consists in several properties \(F\_1 ,
\ldots ,F\_n\).
With (10), pluralists contend that the nature of truth is not a
single property \(F\) that is invariant across all regions of
discourse; rather the true sentences in different regions of discourse
may consist in different properties among the plurality
\(F\_1 , \ldots ,F\_n\) that
constitute truth's nature.
The idea that truth is grounded in various properties
\(F\_1 , \ldots ,F\_n\) might be
further introduced by way of analogy. Consider water. We ordinarily
think and talk about something's being water as if it were just
one thing--able to exist in different states, but nevertheless
consisting in just one property (H\(\_2\)O). But it would be a
mistake to legislate in advance that we should be monists about water,
since the nature of water is now known to vary more than our intuitions
would initially have it. The isotopic distribution of water allows for
different molecular structures, including hydroxonium
(H\(\_3\)O), deuterium oxide (D\(\_2\)O), and so-called
'semi-heavy water' (HDO). Or again, consider sugar, the
nature of which includes glucose, fructose, lactose, cellulose, and
other such carbohydrates. For the pluralist, so too might truth
be grounded in a plurality of more basic properties.
One reason to take pluralism about truth seriously, then, is that it
provides a solution to the scope problem. In rejecting the
'one-size-fits-all' approach to truth, pluralists formulate
a theory whose generality is guaranteed by accommodating the various
properties \(F\_1 , \ldots ,F\_n\) by
which true sentences come to be true in different regions of discourse.
A second and related reason is that the view promises to be
explanatory. Variance in the nature of truth in turn explains why
theories of truth perform unequally across various regions of
discourse--i.e., why they are descriptively adequate and
appropriate in certain regions of discourse, but not others. For
pluralists, the existence of different kinds of truths is symptomatic
of the non-uniform nature of truth itself. Consequently, taxonomical
differences among truths might be better understood by formulating
descriptive models about how the nature of truth might vary between
those taxa.
## 3. Prominent versions of pluralism
### 3.1 Platitude-based strategies
Many pluralists have followed Wright (1992) in supposing that
compliance with platitudes is what regiments and characterizes the
behavior and content of truth-predicates. Given a corollary account of
how differences in truth predicates relate to differences among truth
properties, this supposition suggests a platitude-based strategy for
positing many ways of being true. Generally, a strategy will be
platitude-based if it is intended to show that a certain collection of
platitudes \(p\_1 , \ldots ,p\_n\) suffices for
understanding the analysandum or explanandum. By
'platitude', philosophers generally mean certain
uncontroversial expressions about a given topic or domain. Beyond that,
conceptions about what more something must be or have to count as
platitudinous vary widely.
#### 3.1.1 Discourse pluralism/minimalism
A well-known version of platitude-based pluralism is discourse
pluralism. The simplest versions of this view make the following four
claims. Firstly, discourse exhibits natural divisions, and so can be
stably divided into different regions \(D\_1 , \ldots ,D\_n\). Secondly, the platitudes subserving some
\(D\_i\) may be different than those subserving
\(D\_j\). Thirdly, for any pair \((D\_i,
D\_j)\), compliance with different platitudes
subserving each region of discourse can, in principle, result in
numerically distinct truth predicates \((t\_i,
t\_j)\). Finally, numerically distinct truth predicates
designate different ways of being true.
Discourse pluralism is frequently associated with Crispin Wright
(1992, 1996, 2001), although others have held similar views (see, e.g.,
Putnam 1994: 515). Wright has argued that discourse pluralism is
supported by what he calls 'minimalism'. According to
minimalism, compliance with both the disquotational schema and the
operator schema,
* (11) '\(p\)' is true if, and only if, \(p\).
* (12) it is true that \(p\) if, and only if, \(p\).
as well as other 'parent' platitudes, is both necessary
and sufficient for some term \(t\_i\) to qualify as
expressing a concept worth regarding as TRUTH (1992: 34-5).
Wright proposed that the parent platitudes, which basically serve as
very superficial formal or syntactic constraints, fall into two
subclasses: those connecting truth with assertion
('transparency'),
* (13) To assert is to present as true.
* (14) Any attitude to a proposition is an attitude towards its
truth.
and those connecting truth with logical operations
('embedding'),
* (15) Any truth-apt content has a negation which is also
truth-apt.
* (16) Aptitude for truth is preserved under basic logical operations
(disjunction, conjunction, etc.).
Any such term complying with these parent platitudes, regardless of
region of discourse, counts as what Wright called a
'lightweight' or 'minimal' truth predicate.
Yet, the establishment of some \(t\) as a minimal truth predicate
is compatible, argued Wright, with the nature of truth consisting in
different things in different domains (2001: 752).
Wright (2001) has also suggested that lightweight truth predicates
tend to comply with five additional subclasses of platitudes, including
those connecting truth with reality ('correspondence') and
eternity ('stability'),
* (17) For a sentence to be true is for it to correspond with
reality.
* (18) True sentences accurately reflect how matters stand.
* (19) To be true is to "tell it like it is".
* (20) A sentence is always true if it ever is.
* (21) Whatever may be asserted truly may be asserted truly at any
time.
and those disconnecting truth from epistemic state
('opacity'), justification ('contrast'), and
scalar degree ('absoluteness'),
* (22) A thinker may be so situated that a particular truth is beyond
her ken.
* (23) Some truths may never be known.
* (24) Some truths may be unknowable in principle.
* (25) A sentence may be true without being justified and
vice-versa.
* (26) Sentences cannot be more or less true.
* (27) Sentences are completely true if at all.
The idea is that \(t\) may satisfy additional platitudes beyond
these, and in doing so may increase its 'weight'. For
example, some \(t\_i\) may be a more heavyweight truth
predicate than \(t\_j\) in virtue of satisfying
platitudes which entail that truth be evidence-transcendent or that
there be mind-independent truth-makers. Finally, differences in what
constitutes truth in \(D\_1 , \ldots ,D\_n\) are tracked by differences in the weight of
these predicates. In this way, Wright is able to accommodate the
intuition that sentences about, e.g., macromolecules in biochemistry
are amenable to realist truth in a way that sentences about
distributive welfare in ethics may not be.
Distinctions among truth predicates, according to the discourse
pluralist, are due to more and less subtle differences among platitudes
and principles with which they must comply. For example, assuming that
accuracy of reflection is a matter of degree, predicates for truth and
truthlikeness diverge because a candidate predicate may comply either
with (18) or else with (26) or (27); to accommodate both, two
corollary platitudes must be included to make explicit that accurate
reflection in the case of truth is necessarily maximal and that degrees
of accuracy are not equivalent to degrees of truth. Indeed, it is not
unusual for platitudes to presuppose certain attendant semantic or
metaphysical views. For example,
* (28) A sentence may be characterized as true just in case it
expresses a true proposition.
requires anti-nominalist commitments, an ontological commitment to
propositions, and commitments to the expression relation (translation
relations, an account of synonymy, etc.). Discourse pluralists
requiring predicates to comply with (28) in order to count as
truth-predicates must therefore be prepared to accommodate other claims
that go along with (28) as a package-deal.
#### 3.1.2 Functionalism about truth
'Functionalism about truth' names the thesis that truth
is a functional kind. The most comprehensive and systematic development
of a platitude-based version of functionalism comes from Michael Lynch,
who has been at the forefront of ushering in pluralist themes and
theses (see Lynch 1998, 2000, 2001, 2004c, 2005a, 2005b, 2006, 2009, 2012, 2013; Devlin
2003). Lynch has urged that we need to think about truth in terms of
the 'job' or role, \(F\), that true sentences stake
out in our discursive practices (2005a: 29).
Initially, Lynch's brand of functionalism attempted to
implicitly define the denotation of 'truth' using the
quasi-formal technique of Ramsification. The technique commences by
treating 'true' as the theoretical term \(\tau\)
issued by the theory \(T\) and targeted for implicit definition.
Firstly, the platitudes and principles of the theory are amassed
\((T: p\_1 , \ldots ,p\_n)\) so that
the \(F\)-role can be specified holistically. Secondly, a certain
subset \(A\) of essential platitudes \((p\_i ,
\ldots ,p\_k)\) must be extracted from \(T\),
and are then conjoined. Thirdly, following David Lewis, \(T\) is
rewritten as
* (29) \(R(\tau\_1,\ldots ,
\tau\_n , \omicron\_1,\ldots , \omicron\_n )\)
so as to isolate the \(\tau\)-terms from the non-theoretical
('old, original, other') \(o\)-terms. Fourthly, all
instances of 'true' and other cognate or closely related
\(\tau\)-terms are then replaced by subscripted variables
\(x\_1 , \ldots ,x\_n\). The resulting
open sentence is prefixed with existential quantifiers to bind them.
Next, the Ramsey sentence is embedded in a material biconditional; this
allows functionalists to then specify the conditions by which a given
truth-apt sentence \(p\) has a property that plays the \(F\)-role:
* (30) \(p\) has some property \(\varrho\_i\) realizing the
\(F\)-role \(\equiv \exists x\_1 , \ldots ,\exists x\_n [R(x\_1 ,
\ldots ,x\_n , \omicron\_1,\ldots , \omicron\_n ) \amp p\) has
\(x\_1\)],
where, say, the variable \(x\_1\) is the one that
replaced 'true'. Having specified the conditions under
which \(p\) has some property realizing \(F\), functionalists can then
derive another material biconditional stating that \(p\) is true iff \(p\) has
some property realizing the \(F\)-role.
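The sequence of steps can be compressed into a schematic derivation. The following sketch is a simplification of (29) and (30), not Lynch's own formulation: it uses a toy theory with a single \(\tau\)-term ('true') and two illustrative platitudes standing in for the essential subset \(A\).

```latex
% Steps 1-2: amass and conjoin the essential platitudes, e.g.:
%   T: (to assert p is to present p as true) \wedge (it is true that p iff p)
% Step 3: rewrite T so that 'true' is isolated as a tau-term
% alongside the non-theoretical o-terms (cf. (29)):
T :\ R(\mathrm{true},\ o_1, \ldots, o_n)

% Step 4: replace 'true' with a variable and bind it existentially,
% yielding the Ramsey sentence:
\exists x_1\, R(x_1,\ o_1, \ldots, o_n)

% Step 5: embed the Ramsey sentence in a material biconditional (cf. (30)):
p \text{ has some property realizing the } F\text{-role}
  \;\equiv\; \exists x_1 \big[ R(x_1,\ o_1, \ldots, o_n)
  \wedge p \text{ has } x_1 \big]
```

From the final biconditional the functionalist can then derive the further biconditional mentioned above: \(p\) is true iff \(p\) has some property realizing the \(F\)-role.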
However, as Lynch (2004: 394) cautioned, biconditionals that specify
necessary and sufficient conditions for \(p\) to be true still leave open
questions about the 'deep' metaphysical nature of truth.
Thus, given the choice, Lynch--following up on a suggestion from
Pettit (1996: 886)--urged functionalists to identify truth, not
with the properties realizing the \(F\)-role in a given region of
discourse, but with the \(F\)-role itself. Doing so is one way to
try to secure the 'unity' of truth (on the presumption that
there is just one \(F\)-role). Hence, to say that truth is a
functional kind \(F\) is to say that the \(\tau\)-term
'truth' denotes the property of having a property that
plays the \(F\)-role, where the \(F\)-role is tantamount to
the single unique second-order property of being \(F\).
Accordingly, this theory proposes that something is true just in case
it is \(F\).
Two consequences are apparent. Firstly, the functionalist's
commitment to alethic properties realizing the \(F\)-role seems to
be a commitment to a grounding thesis. This explains why Lynch's
version of alethic functionalism fits the pattern typical of
inflationary theories of truth, which are committed to (6) and
(7) above. Secondly, however, like most traditional inflationary
theories, Lynch's functionalism about truth appears to be
monistic. Indeed, the functionalist commitment to identifying truth
with and only with the unique property of being \(F\) seems to
entail a commitment to strong alethic monism in (5) rather than
pluralism (Wright 2005). Nonetheless, it is clear that Lynch's
version does emphasize that sentences can have the property of being
\(F\) in different ways. The theory thus does a great deal to
accommodate the intuitions that initially motivate the pluralist thesis
that there is more than one way of being true, and to walk a fine
line between monism and pluralism.
For pluralists, this compromise may not be good enough, and critics
of functionalism about truth have raised several concerns. One
stumbling block for functionalist theories is a worry about epistemic
circularity. As Wright (2010) observes, any technique for implicit
definition, such as Ramsification, proceeds on the basis of explicit
decisions that the platitudes and principles constitutive of the
modified Ramsey sentence are themselves true, and making explicit
decisions that they are true requires already knowing in advance what
truth is. Lynch (2013a) notes that the problem is not peculiar to
functionalism about truth, generalizing to virtually all approaches
that attempt to fix the denotation of 'true' by appeal to
implicit definition. Some might want to claim that it generalizes even
further, namely to any theory of truth whatsoever. Another issue is
that the \(F\)-role becomes disunified to the extent that
\(T\) can accommodate substantially different platitudes and
principles. Recall that the individuation and identity conditions of
the \(F\)-role--with which truth is identified--are
determined holistically by the platitudes and principles constituting
\(T\). So where \(T\) is constituted by expressions of the
beliefs and commitments of ordinary folk, pluralists could try to show
that these beliefs and commitments significantly differ across
epistemic communities (see, e.g., Naess 1938a, b; Maffie 2002;
Ulatowski 2017, Wyatt 2018). In that case, Ramsification over
significantly different principles may yield implicit definitions of
numerically distinct role properties
\(F\_1, F\_2 , \ldots ,F\_n\), each of which is a warranted claimant to being
truth.
### 3.2 Correspondence pluralism
The correspondence theory is often invoked as exemplary of
traditional monistic theories of truth, and thus as a salient rival to
pluralism about truth. Prima facie, however, the two are consistent.
The most fundamental principle of any version of the correspondence
theory,
* (31) Truth consists in correspondence.
specifies what truth consists in. Since it involves no covert
commitment about how many ways of being true there are, it does not
require denying that there is more than one (Wright & Pedersen
2010). In principle, there may be different ways of consisting in
correspondence that yield different ways of being true. Consequently,
whether the two theories turn out to be genuine rivals depends on
whether further commitments are made to explicitly rule out
pluralism.
Correspondence theorists have occasionally made proposals that combine
their view with a version of pluralism. An early--although not
fully developed--proposal of this kind was made by Henry Acton
(1935: 191). Two recent proposals are noteworthy and have been
developed in detail. Gila Sher (1998, 2004, 2005, 2013, 2015, 2016a) has
picked up the project of expounding on the claim that sentences in
domains like logic correspond to facts in a different way than do
sentences in other domains, while Terence Horgan and colleagues
(Horgan 2001; Horgan & Potrč 2000, 2006; Horgan & Timmons
2002; Horgan & Barnard 2006; Barnard & Horgan 2013)
have elaborated a view that involves a defense of the claim that not
all truths correspond to facts in the same way.
For Sher, truth does not consist in different properties in
different regions of discourse (e.g., superwarrant in macroeconomics,
homomorphism in immunology, coherence in film studies, etc.). Rather,
it always and everywhere consists in correspondence. Taking
'correspondence' to generally refer to an \(n\)-place
relation \(R\), Sher advances a version of correspondence
pluralism by countenancing different 'forms', or ways of
corresponding. For example, whereas the physical form of correspondence
involves a systematic relation between the content of physical
sentences and the physical structure of the world, the logical form of
correspondence involves a systematic relation between the logical
structure of sentences and the formal structure of the world, while the
moral form of correspondence involves a relation between the moral
content of sentences and (arguably) the psychological or sociological
structure of the world.
Sher's view can be regarded as a moderate form of pluralism.
It combines the idea that truth is many with the idea that truth is
one. Truth is many on Sher's view because there are different
forms of correspondence. These are different ways of being true. At the
same time, truth is one because these different ways of being true are
all forms of correspondence.
For Sher, a specific matrix of 'factors' determines the
unique form of correspondence as well as the correspondence principles
that govern our theorizing about them. Which factors are in play
depends primarily on the satisfaction conditions of predicates. For
example, the form of correspondence for logical truths of the form
* (32) Every \(x\) either is or is not self-identical.
is determined solely by the logical factor, which is reflected by
the universality of the union of the set of self-identical things and
its complement. Or again, consider the categorical sentences
* (33) Some humans are disadvantaged.
and
* (34) Some humans are vain.
Both (33) and (34) involve a logical factor, which is reflected in
their standard form as I-statements (i.e., some \(S\) are \(P\)), as well as
the satisfaction conditions of the existential quantifier and copula; a
biological factor, which is reflected in the satisfaction conditions
for the predicate 'is human'; and a normative factor, which
is reflected in the satisfaction conditions for the predicates
'is disadvantaged' and 'is vain'. But whereas
(34) involves a psychological factor, which is reflected in the
satisfaction conditions for 'is vain', (33) does not. Also,
(33) may involve a socioeconomic factor, which is reflected in the
satisfaction conditions for 'is disadvantaged', whereas
(34) does not.
By focusing on subsentential factors instead of supersentential
regions of discourse, Sher offers a more fine-grained way to
individuate ways in which true sentences correspond. (Sher supposes
that we cannot name the correspondent of a given true sentence since
there is no single discrete hypostatized entity beyond the
\(n\)-tuples of objects, properties and relations, functions,
structures (complexes, configurations), etc. that already populate
reality.) The upshot is a putative solution to problems of mixed
discourse (see §4 below): the truth of sentences like
* (35) Some humans are disadvantaged and vain.
is determined by all of the above factors and so--despite the large
overlap--constitutes a different kind of truth than that of either
of the atomic sentences (33) and (34), according to Sher.
For their part, Horgan and colleagues propose a twist on the
correspondence theorist's claim that truth consists in a
correspondence relation \(R\) obtaining between a given
truth-bearer and a fact. They propose that there are exactly two
species of the relation \(R\): 'direct'
(\(R\_{dir}\)) and 'indirect correspondence'
(\(R\_{ind}\)), and thus exactly two ways of being true.
For Horgan and colleagues, which species of \(R\)--and thus
which way of being true--obtains will depend on the austerity of
ontological commitments involved in assessing sentences; in turn, which
commitments are involved depends on discursive context and operative
semantic standards. For example, an austere ontology commits to only a
single extant object: namely, the world (affectionately termed the
'blobject'). Truths about the blobject, such as
* (36) The world is all that is the case.
if it is one, correspond to it directly. Truths about things other
than the blobject correspond to them indirectly. For example, sentences
such as
* (37) Online universities are universities.
may be true even if the extension of the predicate
'university' is--strictly speaking--empty or what
is referred to by 'online universities' is not in the
non-empty extension of 'university'. In short, \(p\) is
true\(\_1\) iff \(p\) is \(R\_{dir}\)-related to the
blobject given contextually operative standards \(c\_i,
c\_j , \ldots ,c\_m\).
Alternatively, \(p\) is true\(\_2\) iff \(p\) is
\(R\_{ind}\)-related to non-blobject entities given
contextually operative standards \(c\_j,
c\_k , \ldots ,c\_n\). So, truth
always consists in correspondence. But the two types of correspondence
imply that there is more than one way of being true.
## 4. Objections to pluralism and responses
### 4.1 Ambiguity
Some take pluralists to be committed to the thesis that
'true' is ambiguous: since the pluralist thinks that there
is a range of alethically potent properties (correspondence, coherence,
etc.), 'true' must be ambiguous between these different
properties. This is thought to raise problems for pluralists. According
to one objection, the pluralist appears caught in a grave dilemma.
'True' is either ambiguous or unambiguous. If it is ambiguous, then
a spate of further problems awaits (see
§4.4-§4.6 below). If it is not, then there is only one
meaning of 'true' and thus only one property designated by
it; so pluralism is false.
Friends of pluralism have tended to self-consciously distance
themselves from the claim that 'true' is ambiguous (e.g.,
Wright 1996: 924, 2001; Lynch 2001, 2004b, 2005c). Generally, however,
the issue of ambiguity for pluralism has not been well-analyzed. Yet,
one response has been investigated in some detail. According to this
response, the ambiguity of 'true' is simply to be taken as
a datum. 'True' is de facto ambiguous (see, e.g., Schiller
1906; Pratt 1908; Kaufmann 1948; Lucas 1969; Kölbel 2002, 2008;
Sher 2005; Wright 2010). Alfred Tarski, for instance, wrote:
> The word 'true', like other words from our everyday
> language, is certainly not unambiguous. [...] We should reconcile
> ourselves with the fact that we are confronted, not with one concept,
> but with several different concepts which are denoted by one word; we
> should try to make these concepts as clear as possible (by means of
> definition, or of an axiomatic procedure, or in some other way); to
> avoid further confusion we should agree to use different terms for
> different concepts [...]. (1944: 342, 355)
If 'true' is ambiguous de facto, as some authors have
suggested, then the ambiguity objection may turn out to
be--again--not so much an objection or disconfirmation of the
theory, but rather just a datum about 'truth'-talk in
natural language that should be explained or explained away by theories
of truth. In that case, pluralists seem no worse off--and possibly
better--than any number of other truth theorists.
A second possible line of response from pluralists is that their
view is not necessarily inconsistent with a monistic account of either
the meaning of 'true' or the concept TRUTH. After all,
'true' is ambiguous only if it can be assigned more than
one meaning or semantic structure; and it has more than one meaning
only if there is more than one stable conceptualization or concept
TRUTH supporting each numerically distinct meaning. Yet, nothing about
the claim that there is more than one way of being true entails, by
itself, that there is more than one concept TRUTH. In principle, the
nature of properties like being true--whether homomorphism,
superassertibility, coherence, etc.--may outstrip the concept
thereof, just as the nature of properties like being water--such
as H\(\_2\)O, H\(\_3\)O, XYZ, etc.--may outstrip the
concept WATER (see, e.g., Wright 1996, 2001; Alston 2002; Lynch 2001,
2005c, 2006). Nor is monism about truth necessarily inconsistent with
semantic or conceptual pluralism. The supposition that TRUTH is both
many and one (i.e., 'moderate monism') neither rules out
the construction of multiple concepts or meanings thereof, nor rules
out the proliferation of uses to express those concepts or meanings.
For example, suppose that the only way of being true turns out to be a
structural relation \(R\) between reality and certain
representations thereof. Such a case is consistent with the existence
of competing conceptions of what \(R\) consists in: weak
homomorphism, isomorphism, 'seriously dyadic'
correspondence, a causal \(n\)-place correspondence relation, etc.
A more sensitive conclusion, then, is just that the objection from
ambiguity is an objection to conceptual or semantic pluralism, not to
any alethic theory--pluralism or otherwise.
### 4.2 The scope problem as a pseudo-problem
According to the so-called 'Quine-Sainsbury objection',
pluralists' postulation of ambiguity in metalinguistic alethic
terms is not actually necessary, and thus not well-motivated. This is
because taxonomical differences among kinds of truths in different
domains can be accounted for simply by doing basic ontology in
object-level languages.
> [E]ven if it is one thing for 'this tree is an oak' to
> be true, another thing for 'burning live cats is cruel' to
> be true, and yet another for 'Buster Keaton is funnier than
> Charlie Chaplin' to be true, this should not lead us to suppose
> that 'true' is ambiguous; for we get a better explanation
> of the differences by alluding to the differences between trees,
> cruelty, and humor. (Sainsbury 1996: 900; see also Quine 1960: 131)
Generally, pluralists have not yet developed a response to the
Quine-Sainsbury objection. And for some, this is because the real force
of the Quine-Sainsbury objection lies in its exposure of the scope
problem as a pseudo-problem (Dodd 2013; see also Asay 2018). Again, the idea is that
traditional inflationary theories postulate some candidate for
\(F\) but the applicability and plausibility of \(F\) differs
across regions of discourse. No such theory handles the truths of
moral, mathematical, comic, legal, etc. discourse equally well; and
this suggests that these theories, by their monism, face limitations on
their explanatory scope. Pluralism offers a non-deflationary solution.
Yet, why think that these differences among domains mark an alethic
difference in truth per se, rather than semantic or discursive
differences among the sentences comprising those domains? There is more
than one way to score a goal in soccer, for example (via a corner kick,
a ricochet off the foot of an opposing player or the head of a teammate,
obstruction of the goalkeeper, etc.), but it is far from clear that this
entails pluralism about the property of scoring a goal in soccer.
(The analogy is due to an anonymous referee.) Pluralists have yet to
adequately address this criticism (although see Blackburn 2013; Lynch 2013b, 2018; Wright 1998 for further discussion).
### 4.3 The criteria problem
Pluralists who invoke platitude-based strategies bear the burden of
articulating inclusion and exclusion criteria for determining which
expressions do, or do not, count as members of the essential subset of
platitudes upon which this strategy is based (Wright 2005). Candidates
include: ordinariness, intuitiveness, uninformativeness, wide use or
citation, uncontroversiality, a prioricity, analyticity,
indefeasibility, incontrovertibility, and sundry others. But none has
proven to be uniquely adequate, and there is nothing close to a
consensus about which criteria to rely on.
For instance, consider the following two conceptions. One conception
takes platitudes about \(x\) to be expressions that must be
endorsed on pain of being linguistically incompetent with the
application of the terms \(t\_1 , \ldots ,t\_n\) used to talk about \(x\) (Nolan 2009).
However, this conception does not readily allow for disagreement: prima
facie, it is not incoherent to think that two individuals, each of whom
is competent with the application of
\(t\_1 (x), \ldots ,t\_n (x)\), may differ as to whether some \(p\)
must be endorsed or whether some expression is genuinely platitudinous.
For instance, consider the platitude in (17), which connects being
true with corresponding with reality. Being linguistically competent
with terms for structural relations like correspondence does not force
endorsement of claims that connect truth with correspondence; no one
not already in the grip of the correspondence theory would suppose that
they must endorse (17), and those who oppose it would certainly suppose
otherwise. Further inadequacies beleaguer this conception. It makes no
provision for degrees of either endorsement or linguistic incompetence.
It makes no distinction between theoretical and non-theoretical terms,
much less restricts \(t\_1 (x), \ldots ,t\_n (x)\) to non-theoretical terms. Nor does
it require that platitudes themselves be true. On one hand, this
consequently leaves open the possibility that universally-endorsed but
false or otherwise alethically defective expressions are included in
the platitude-based analysis of 'true'. An old platitude
about whales, for example--one which, prior to whales being
classified as cetaceans, was universally endorsed on pain of being
linguistically incompetent--was that they are big fish. The worry,
then, is that the criteria may allow us to screen in certain
'fish stories' about truth. This would be a major problem
for advocates of Ramsification and other forms of implicit definition,
since those techniques work only on the presupposition that all input
being Ramsified over or implicitly defined is itself true (Wright
2010). On the other hand, making explicit that platitudes must also be
true seems to entail that they are genuine 'truisms'
(Lynch 2005c), though discovering which ones are truly indefeasible is
a further difficulty--one made more difficult by the possibility
of error theories (e.g., Devlin 2003) suggesting that instances of the
\(T\)-schema are universally false. Indeed, we are inclined to say
instances of disquotational, equivalence, and operator schemas are
surely candidates for being platitudinous if anything is; but to say
that they must be endorsed on pain of being linguistically incompetent
is to rule out a priori error theories about instances of the
\(T\)-schema.
A second, closely related conception is that platitudes are
expressions, which--in virtue of being banal, vacuous, elementary,
or otherwise trivial--are acceptable by anyone who understands
them (Horwich 1990). The interaction of banality or triviality with
acceptance does rule out a wide variety of candidate expressions,
however. For instance, claims that are acceptable by anyone who
understands them may still be too substantive or informative to count
as platitudinous, depending on what they countenance. Similarly, claims
that are too 'thin' or neutral to vindicate any particular
theory \(T\) may still be too substantive or informative to count
as genuinely platitudinous on this conception (Wright 1999). This is
particularly so given that nothing about a conception of platitudes as
'pretheoretical claims' strictly entails that they reduce
to mere banalities (Vision 2004). Nevertheless, criteria like banality
or triviality plus acceptance might also screen in too few expressions
(perhaps as few as one, such as a particular instance of the
\(T\)-schema). Indeed, it is an open question whether any of the
principles in (11)-(28) would count as platitudes on this
conception.
An alternative conception emphasizes that the criteria should
instead be the interaction of informality, truth, a prioricity, or
perhaps even analyticity (Wright 2001: 759). In particular, platitudes
need not take the form of an identity claim, equational definition, or
a material biconditional. At the extreme, expressions can be as
colloquial as you please so long as they remain true a priori (or
analytically). These latter criteria are commonly appealed to, but are
also not without problems. Firstly, a common worry is whether there are
any strictly analytic truths about truth, and, if there are, whether
they can perform any serious theoretical work. Secondly, these latter
criteria would exclude certain truths that are a posteriori but no less
useful to a platitude-based strategist.
### 4.4 The instability challenge
Another objection to pluralism is that it is an inherently unstable
view: i.e., as soon as the view is formulated, simple reasoning renders
it untenable (Pedersen 2006, 2010; see also Tappolet 1997, 2000; Wright
2012). This so-called *instability challenge* can be presented
as follows. According to the moderate pluralist, there is more than one
truth property \(F\_1 , \ldots ,F\_n\). Yet, given \(F\_1 , \ldots ,F\_n\), it seems we should recognize another truth
property:
* (38)
\(\forall p[F\_U (p) \leftrightarrow F\_1 (p) \vee \cdots \vee F\_n (p)]\).
Observe that \(F\_U\) is not merely some property
possessed by every \(p\) which happens to have one of
\(F\_1 , \ldots ,F\_n\). (The property
of being a sentence is one such property, but it poses no trouble to
the pluralist.) Rather, \(F\_U\) must be an alethic
property whose extension perfectly positively covaries with the
combined extension of the pluralist truth properties
\(F\_1 , \ldots ,F\_n\). And since
nothing is required for the existence of this new property other than
the truth properties already granted by the pluralist, (38) gives a
necessary and sufficient condition for \(F\_U\) to be had
by some \(p\): a sentence \(p\) is \(F\_U\) just in case \(p\) is
\(F\_1 \vee \cdots \vee F\_n\). Thus,
any sentence that is any of \(F\_1 , \ldots ,F\_n\) may be true in some more generic or universal
way, \(F\_U\). This suggests, at best, that strong
pluralism is false, and moderate monism is true; and at worst, there
seems to be something unstable, or self-refuting, about pluralism.
Pluralists can make concessive or non-concessive responses to the
instability challenge. A concessive response grants that such a truth
property exists, but maintains that it poses no serious threat to
pluralism. A non-concessive response is one intended to rebut the
challenge, e.g., by rejecting the existence of a common or universal
truth property. One way of trying to motivate this rejection of
\(F\_U\) is by attending to the distinction between
sparse and abundant properties, and then demonstrating that alethic
properties like truth must be sparse while arguing, additionally, that the
would-be troublemaker \(F\_U\) is an abundant property.
According to sparse property theorists, individuals must be unified by
some qualitative similarity in order to share a property. For example,
all even numbers are qualitatively similar in that they share the
property of being divisible by two without remainder. Now, consider a
subset of very diverse properties \(G\_1 , \ldots ,G\_n\) possessed by an individual \(a\). Is there
some further, single property of being \(G\_1\), or
..., or \(G\_n\) that \(a\) has? Such a further
property, were it to exist, would be highly disjunctive; and it may
seem unclear what, if anything, individuals that were
\(G\_1\), or ..., or \(G\_n\) would
have in common--other than being \(G\_1\), or
..., or \(G\_n\). According to sparse property
theorists, the lack of qualitative similarity means that this putative
disjunctive property is not a property properly so-called. Abundant
property theorists, on the other hand, deny that qualitative similarity
is needed in order for a range of individuals to share a property.
Properties can be as disjunctive as you like. Indeed, for any set
\(A\) there is at least one property had by all members of
\(A\)--namely, being a member of \(A\). And since there
is a set of all things that have some disjunctive property, there is a
property--abundantly construed--had by exactly those things.
It thus seems difficult to deny the existence of \(F\_U\)
if the abundant conception of properties is adopted. So pluralists who
want to give a non-concessive response to the metaphysical instability
challenge may want to endorse the sparse conception (Pedersen 2006).
This is because the lack of uniformity in the nature of truth across
domains is underwritten by a lack of qualitative similarity between the
different truth properties that apply to specific domains of discourse.
The truth property \(F\_U\) does not exist, because truth
properties are to be thought of in accordance with the sparse
conception.
Even if the sparse conception fails to ground pluralists'
rejection of the existence of the universal truth property
\(F\_U\), a concessive response to the instability
challenge is still available. Pluralists can make a strong case that
the truth properties \(F\_1 , \ldots ,F\_n\) are more fundamental than the universal truth
property \(F\_U\) (Pedersen 2010). This is because
\(F\_U\) is metaphysically dependent on
\(F\_1 , \ldots ,F\_n\), in the sense
that \(F\_U\) is introduced in virtue of its being one of
\(F\_1 , \ldots ,F\_n\), and not
vice-versa. Hence, even if the pluralist commits to the existence of
\(F\_U\)--and hence, to moderate metaphysical
monism--there is still a clear sense in which her view is
distinctively more pluralist than monist.
### 4.5 Problems regarding mixed discourse
#### 4.5.1 Mixed atomic sentences
The content of some atomic sentences seems to hail exclusively from
a particular region of discourse. For instance, 'lactose is a
sugar' concerns chemical reality, while '\(7 + 5 = 12\)'
is solely about the realm of numbers (and operations on these). Not all
discourse is pure or exclusive, however; we often engage in so-called
'mixed discourse', in which contents from different regions
of discourse are combined. For example, consider:
* (39) Causing pain is bad.
Mixed atomic sentences such as (39) are thought to pose problems for
pluralists. Sentence (39) seems to implicate concepts from the physical domain
(causation), the mental domain (pain), and the moral domain (badness)
(Sher 2005: 321-22). Yet, if pluralism is correct, then in which
way is (39) true? Is it true in the way appropriate to talk of the
physical, the mental, or the moral? Is it true in neither of these
ways, or in all of these three ways, or in some altogether different
way?
The source of the problem may be the difficulty in classifying
discursive content--a classificatory task that is an urgent one
for pluralists. For it is unclear how they can maintain that regions of
discourse \(D\_1 , \ldots ,D\_n\)
partially determine the ways in which sentences can be true without a
procedure for determining which region of discourse
\(D\_i\) a given \(p\) belongs to.
One suggestion is that a mixed atomic sentence \(p\) belongs to
no particular domain. Another is that it belongs to several (Wyatt 2013). Lynch (2005b:
340-41) suggested paraphrasing mixed atomic sentences as
sentences that are classifiable as belonging to particular domains. For
example, (39) might be paraphrased as:
* (40) We ought not cause pain.
Unlike (39), the paraphrased (40) appears to be a pure atomic
sentence belonging to the domain of morals. This proposal remains
underdeveloped, however. It is not at all clear that (40) counts as a
felicitous paraphrase of (39), and, more generally, unclear whether all
mixed atomic sentences can be paraphrased such that they belong to just
one domain without thereby altering their meaning, truth-conditions, or
truth-values.
Another possible solution addresses the problem head-on by
questioning whether atomic sentences really are mixed, thereby denying
the need for any such paraphrases. Consider the following
sentences:
* (41) The Mona Lisa is beautiful.
* (42) Speeding is illegal.
Prima facie, what determines the domain-membership of (41) and (42)
is the aesthetic and legal predicates 'is beautiful' and
'is illegal', respectively. It is an aesthetic matter
whether the Mona Lisa is beautiful; this is because (41) is true in
some way just in case the Mona Lisa falls in the extension of the
aesthetic predicate 'is beautiful' (and mutatis mutandis
for (42)). In the same way, we might take (39) to belong exclusively to
the moral domain given that it features the moral predicate 'is
bad'. (This solution was presented in the first 2012 version of this entry; see Edwards 2018a for a later, more detailed treatment.)
It is crucial to the latter two proposals that any given mixed
atomic sentence \(p\) has its domain membership essentially, since such
membership is what determines the relevant kind of truth. Sher (2005,
2011) deals with the problem of mixed atomic sentences differently. On
her view, the truth of a mixed atomic sentence is not accounted for by
membership in some specific domain; rather, the 'factors'
involved in the sentence determine a specific form of correspondence,
and this specific form of correspondence is what accounts for the truth
of \(p\). The details about which specific form of correspondence obtains
is determined at the sub-sentential levels of reference, satisfaction,
and fulfillment. For example, the form of correspondence that accounts
for the truth of (39) obtains as a combination of the physical
fulfillment of 'the causing of \(x\)', the mental
reference of 'pain', and the moral satisfaction of
'\(x\) is bad' (2005: 328). No paraphrase is
needed.
#### 4.5.2 Mixed compounds
Another related problem pertains to two or more sentences joined by
one or more logical connectives, as in
* (43) Killing innocent people is wrong and \(7 + 5 = 12\).
Unlike atomic sentences, the mixing here takes place at the
sentential rather than sub-sentential level: (43) is a conjunction,
which mixes the pure sentence '\(7 + 5 = 12\)' with the pure
sentence 'killing innocent people is wrong'. (There are, of
course, also mixed compounds that involve mixed atomic sentences.) For
many theorists, each conjunct seems to be true in a different way, if
true at all: the first conjunct in whatever way is appropriate to moral
theory, and the second conjunct in whatever way is appropriate to
arithmetic. But then, how is the pluralist going to account for the
truth of the conjunction (Tappolet 2000: 384)? Pluralists owe an answer
to the question of which way, exactly, a conjunction is true when its
conjuncts are true in different ways.
Additional complications arise for pluralists who commit to facts
being what make sentences true (e.g., Lynch 2001: 730), or other such
truth-maker or -making theses. Prima facie, we would reasonably expect
there to be different kinds of facts that make the conjuncts of (43)
true, and which subsequently account for the differences in their
different ways of being true. However, what fact or facts make the
mixed compound true? Regarding (43), is it the mathematical fact, the
moral fact, or some further kind of fact? On one hand, the claims that
mathematical or moral facts, respectively, make \(p\) true seem to betray
the thought that both facts contribute equally to the truth of the
mixed compound. On the other hand, the claim that some third
'mixed' kind of fact makes \(p\) true leaves the pluralist with
the uneasy task of telling a rather alchemist story about
fact-mixtures.
Functionalists about truth (e.g., Lynch 2005b: 396-97) propose
to deal with compounds by distinguishing between two kinds of realizers
of the \(F\)-role. The first is an atomic realizer, such that an
atomic proposition \(p\) is true iff \(p\) has a property that realizes the
\(F\)-role. The second is a compound realizer, such that a
compound \(q \* r\) (where \(q\) and \(r\) may themselves be complex) is true
iff
* (44) \({\*} = \wedge : q \wedge r\) has the property of being an instance of
the truth-function for conjunction with conjuncts that both have a
property that realizes the \(F\)-role.
* (45) \({\*} = \vee : q \vee r\) has the property of being an instance of the
truth-function for disjunction with at least one disjunct that has a
property that realizes the \(F\)-role.
* (46) \({\*} = \rightarrow : q \rightarrow r\) has the property of being an
instance of the truth-function for material conditional with an
antecedent that does not have the property that realizes the
\(F\)-role for its domain or a consequent that has a property that
realizes the \(F\)-role.
The realizers for atomic sentences are properties like
correspondence, coherence, and superwarrant. The realizer properties
for compounds are special, in the sense that realizer properties for a
given kind of compound are only had by compounds of that kind. Witness
that each of these compound realizer properties requires any of its
bearers to be an instance of a specific truth-function. Pure and mixed
compounds are treated equally on this proposal: when true, they are
true because they instantiate the truth-function for conjunction,
having two or more conjuncts that have a property that realizes the
\(F\)-role (and mutatis mutandis for disjunctions and material
conditionals).
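The recursive character of clauses (44)-(46) can be made vivid with a toy encoding. The sketch below is illustrative only and not from the entry: atoms are modeled as the name of a realizer property (e.g., 'correspondence') or `None` when nothing realizes the \(F\)-role, and the realizer names are hypothetical.

```python
# Toy sketch of the functionalist clauses (44)-(46): a compound is true iff
# it instantiates the right truth-function over parts that have some property
# realizing the F-role. Atoms are realizer-name strings (or None for untrue).

def realized(x):
    """True iff x has some property realizing the F-role for its domain."""
    if isinstance(x, tuple) and x[0] in ("and", "or", "->"):
        return evaluate(x)          # compounds: recurse on clauses (44)-(46)
    return x is not None            # atoms: any realizer property will do

def evaluate(compound):
    op, q, r = compound
    if op == "and":                 # clause (44): both conjuncts realized
        return realized(q) and realized(r)
    if op == "or":                  # clause (45): at least one disjunct realized
        return realized(q) or realized(r)
    if op == "->":                  # clause (46): antecedent unrealized or consequent realized
        return (not realized(q)) or realized(r)

# (43): 'killing innocent people is wrong' (say, superwarrant) conjoined
# with '7 + 5 = 12' (say, correspondence) -- true despite mixed realizers.
assert evaluate(("and", "superwarrant", "correspondence"))
```

The point the sketch makes is that the compound realizer properties care only about the truth-functional shape of the compound plus the *existence* of some realizer for each part, not about which realizer it is.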
However, this functionalist solution to the problem of mixed
compounds relies heavily on that theory's monism--i.e., its
insistence that the single role property \(F\) is a universal
truth property. This might leave one wondering whether a solution is
readily available to someone who rejects the existence of such a
property. One strategy is simply to identify the truth of conjunctions, disjunctions, and conditionals with the kinds of properties specified by (44), (45), and (46), respectively (as opposed to taking them to be realizers of a single truth property). Thus, e.g., the truth of any conjunction simply *is* being an instance of the truth-function for conjunction with conjuncts that have the property that plays the \(F\)-role for them (Kim & Pedersen 2018; Pedersen & Lynch 2018: Sect. 20.6.2.1). Another strategy is to try to use the resources of multi-valued logic. For example, one can posit an ordered set of designated
values for each way of being true \(F\_1 , \ldots ,F\_n\) (perhaps according to their status as
'heavyweight' or 'lightweight'), and then take
conjunction to be a minimizing operation and disjunction a maximizing
one, i.e., \(v(p \wedge q) = \min\{v(p), v(q)\}\)
and \(v(p \vee q) = \max\{v(p), v(q)\}\).
As a result, each conjunction and disjunction--whether pure or
mixed--will be either true in some way or false in some way
straightforwardly determined by the values of the constituents. For
example, consider the sentences
* (47) Heat is mean molecular kinetic energy.
* (48) Manslaughter is a felony.
Suppose that (47) is true in virtue of corresponding to physical
reality, while (48) is true in virtue of cohering with a body of law; and
suppose further that correspondence \((F\_1)\) is more
'heavyweight' than coherence \((F\_2)\).
Since conjunction is a minimizing operation and \(F\_2 \lt F\_1\), then 'heat is mean molecular
kinetic energy and manslaughter is a felony' will be
\(F\_2\). Since disjunction is a maximizing operation,
then 'heat is mean molecular kinetic energy or manslaughter is a
felony' will be \(F\_1\).
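The minimizing and maximizing operations can be sketched with a numeric encoding of the ordering. The ranks below are an illustrative assumption, not from the entry: 'heavyweight' correspondence \((F\_1)\) is ranked above 'lightweight' coherence \((F\_2)\), with falsity below both.

```python
# Illustrative sketch: encode the ordered truth values numerically.
# Hypothetical ranks -- correspondence (F1) outranks coherence (F2).
F1, F2, FALSE = 2, 1, 0

def conj(v_p, v_q):
    """Conjunction as a minimizing operation: v(p & q) = min{v(p), v(q)}."""
    return min(v_p, v_q)

def disj(v_p, v_q):
    """Disjunction as a maximizing operation: v(p | q) = max{v(p), v(q)}."""
    return max(v_p, v_q)

# (47) 'Heat is mean molecular kinetic energy' -- true by correspondence (F1)
# (48) 'Manslaughter is a felony'              -- true by coherence (F2)
v47, v48 = F1, F2

assert conj(v47, v48) == F2   # the mixed conjunction comes out F2
assert disj(v47, v48) == F1   # the mixed disjunction comes out F1
```

On this encoding every pure or mixed compound receives exactly one value, which is what makes the proposal formally adequate; the philosophical work of interpreting the ordering remains.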
The many-valued solution to the problem of mixed compounds just
outlined is formally adequate because it determines a way that each
compound is true. However, while interesting, the proposal needs to be
substantially developed in several respects. For example, how is
negation treated--are there several negations, one for each way of
being true, or is there a single negation? Also, taking 'heat is
mean molecular kinetic energy and manslaughter is a felony' to be
true in the way appropriate to law betrays a thought that seems at
least initially compelling, *viz.* that both conjuncts
contribute to the truth of the conjunction. Alternatively, one could
take mixed compounds to be true in some third way. However, this would
leave the pluralist with the task of telling some story about how this
third way of being true relates to the other two. Again substantial
work needs to be done.
Edwards (2008) proposed another solution to the problem of mixed
conjunctions, the main idea of which is to appeal to the following
biconditional schema:
* (49) \(p\) is true\(\_i\) and \(q\) is true\(\_j\) iff \(p \wedge q\)
is true\(\_k\).
Edwards suggests that pluralists can answer the challenge that mixed
conjunctions pose by reading the stated biconditional as having an
order of determination: \(p \wedge q\) is true\(\_k\) in
virtue of \(p\)'s being true\(\_i\) and \(q\)'s being
true\(\_j\), but not vice-versa. This, he maintains,
explains what kind of truth a conjunction \(p \wedge q\) has when its
conjuncts are true in different ways; for the conjunction is
true\(\_k\) in virtue of having conjuncts that are both
true, where it is inessential whether the conjuncts are true in the
same way. Truth\(\_k\) is a further way of being true
that depends on the conjuncts being true in some way without reducing
to either of them. The property true\(\_k\) is thus not a
generic or universal truth property that applies to the conjuncts as
well as the conjunction.
As Cotnoir (2009) emphasizes, Edwards' proposal provides too
little information about the nature of true\(\_k\). What
little is provided makes transparent the commitment to
true\(\_k\)'s being a truth property had only by
conjunctions, in which case it is unclear whether Edwards's
solution can generalize. In this regard, Edwards' proposal is
similar to Lynch's functionalist proposal, which is committed to
there being a specific realizer property for each type of logical
compound.
#### 4.5.3 Mixed inferences
Mixed inferences--inferences involving truth-apt sentences from
different domains--appear to be yet another problem for the
pluralist (Tappolet 1997, 2000; Pedersen 2006). One can illustrate the
problem by supposing, with the pluralist, that there are two ways of
being true, one of which is predicated of the antecedent of a
conditional and the other as its consequent. It can be left open in
what way the conditional itself is true. Consider the following
inference:
* (50) Satiated dogs are lazy.
* (51) Our dog is satiated.
* (52) Our dog is lazy.
This inference would appear to be valid. However, it is not clear
that pluralists can account for its validity by relying on the standard
characterization of validity as necessary truth preservation from
premises to conclusion. Given that the truth properties applicable to
respectively (51) and (52) are different, what truth property is
preserved in the inference? The pluralist owes an explanation of how
the thesis that there are many ways of being true can account for the
validity of mixed inferences.
Beall (2000) argued that the account of validity used in
multi-valued logics gives pluralists the resources to deal with the
problem of mixed inferences. For many-valued logics, validity is
accounted for in terms of preservation of designated value, where
designated values can be thought of as ways of being true, while
non-designated values can be thought of as ways of being false.
Adopting a designated-value account of validity, pluralists can simply
take \(F\_1 , \ldots ,F\_n\) to be the
relevant designated values and define an inference as valid just in
case the conclusion is designated if each premise is designated (i.e.,
one of \(F\_1 , \ldots ,F\_n)\). On
this account, the validity of (mixed) arguments whose premises and
conclusion concern different regions of discourse is evaluable in terms
of more than one of \(F\_1 , \ldots ,F\_n\); the validity of (pure) arguments whose premises
and conclusion pertain to the same region of discourse is evaluable in
terms of the same \(F\_i\) (where \(1 \le i \le n)\). An immediate rejoinder is that the term
'true' in 'ways of being true' refers to a
universal way of being true--i.e., being designated simpliciter
(Tappolet 2000: 384). If so, then the multi-valued solution comes at
the cost of inadvertently acknowledging a universal truth property. Of
course, as noted, the existence of a universal truth property poses a
threat only to strong pluralism.
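Beall's designated-value proposal can be given a schematic rendering. The sketch below is an illustrative assumption rather than anything from the entry: it checks, for a single assignment of values, whether designation is preserved from premises to conclusion; validity proper would quantify over all admissible assignments.

```python
# Schematic designated-value check. The value names are hypothetical stand-ins
# for the pluralist's ways of being true F1, ..., Fn.
DESIGNATED = {"F1", "F2"}   # e.g., correspondence, coherence

def preserves_designation(premise_values, conclusion_value):
    """The condition an assignment must meet for a valid inference:
    if every premise is designated, the conclusion is designated too."""
    if all(v in DESIGNATED for v in premise_values):
        return conclusion_value in DESIGNATED
    return True  # condition is vacuously met when some premise is undesignated

# Mixed inference (50)-(52): premises true in different ways (F1, F2),
# conclusion true in some way -- no single truth property is needed.
assert preserves_designation(["F1", "F2"], "F2")
assert not preserves_designation(["F1", "F2"], "false")
```

The sketch shows why the account handles mixed inferences: the check asks only whether each value is *some* member of the designated set, never which member it is.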
### 4.6 The problem of generalization
Alethic terms are useful devices for generalizing. For instance,
suppose we wish to state the law of excluded middle. A tedious way
would be to produce a long--indeed,
infinite--conjunction:
* (53) Everything is either actual or non-actual, and thick or not
thick, and red or not red, and ...
However, given the equivalence schema for propositions,
* (54) The proposition that \(p\) is true iff \(p\).
there is a much shorter formula, which captures what (53) is meant
to express by using 'true', but without loss of explanatory
power (Horwich 1990: 4):
* (55) Every proposition of the form \(\langle\)everything is \(G\) or
not\(-G\rangle\) is true.
Alethic terms are also useful devices for generalizing over what
speakers say, as in
* (56) What Chen said is true.
The utility of a generalization like (56) is not so much that it
eliminates the need to rely on an infinite conjunction, but that it is
'blind' (i.e., made under partial ignorance of what was
said).
Pluralists seem to have difficulty accounting for truth's use
as a device for generalization. One response is to simply treat uses of
'is true' as elliptical for 'is true in one way or
another'. In doing so, pluralists account for generalization
without sacrificing their pluralism. A possible drawback, however, is
that it may commit pluralists to the claim that 'true'
designates the disjunctive property of being \(F\_1
\vee \cdots \vee F\_n\). Granting the existence of
such a property gives pluralists a story to tell about generalizations
like (55) and (56), but the response is a concessive one available only
to moderate pluralists. However, as noted in §4.2.3, the existence
of such a property is not a devastating blow to all pluralists, since
the domain-specific truth properties \(F\_1 , \ldots ,F\_n\) remain explanatorily basic in relation to the
property of being \(F\_1 \vee \cdots \vee F\_n\).
### 4.7 The problem of normativity
As is often noted, truth appears to be normative--i.e., a
positive standard governing immanent content (Sher 2004:
26). According to one prominent tradition (Engel 2002, 2013;
Wedgwood 2002; Boghossian 2003; Shah 2003; Gibbard 2005; Ferrari 2018; Lynch 2009; Whiting 2013),
truth is a doxastic norm because it is the norm of correctness for
belief:
* (57) \(\forall p\)(a belief that \(p\) is correct iff \(p\) is true).
Indeed, many take it to be constitutive of belief that its norm of
correctness is truth--i.e., part of what makes belief the kind of
attitude that it is. If correctness is understood in
prescriptive--rather than descriptive--terms, then (57)
presumably gives way to the following schema:
* (58) \(\forall p\)(one ought to believe
that \(p\) when \(p\) is true).
A third normative schema linking truth and belief classifies truth
as a good of belief (Lynch 2004a, 2005b: 390, 2009: 10; David
2005):
* (59) \(\forall p\)(it is prima facie good to
believe that \(p\) when \(p\) is true).
What these schemas suggest is that the doxastic and assertoric
normativity of truth appears to be entirely general, in a manner
analogous to the way in which winning is a general norm that applies
to any competitive game (Dummett 1978: 8; Lynch 2005b: 390).
Hence, if \(p\) is true, then it is correct and good to believe that \(p\), and
one should believe that \(p\)--regardless of whether \(p\) concerns
fashion or physics, comedy or chemistry. And again, the generalized
normativity of truth appears to make trouble for pluralists, insofar as
the thesis that there are several ways of being true apparently implies
a proliferation of doxastic truth norms. Yet, instead of truth being
the single normative property mentioned in (57), (58), and (59), the
pluralist commits to a wide variety of norms--one for each
domain-specific truth property \(F\_1 , \ldots ,F\_n\). For example, for any given trigonometric
statement \(p\), it is prima facie good to believe \(p\) when \(p\) is true in the
way appropriate to trigonometry, while the prima facie goodness of
believing a truth about antibodies is tied to whatever truth property
is apropos to immunology. Hence, whereas the normative aspects of truth
seem characterized by unity, pluralism renders disunity.
As before, a concessive response can be given by granting the
existence of a disjunctive, universal truth property: the normative
property of being \(F\_1 \vee \cdots \vee F\_n\). Although this amounts to an endorsement
of moderate pluralism, it poses no threat to the importance of the
domain-specific norms \(F\_1 , \ldots ,F\_n\), insofar as these properties are explanatorily
more basic than the normative property of being \(F\_1 \vee \cdots \vee F\_n\). At the same time,
this disjunctive property provides the unity needed to maintain that the predicate *is
true* in (57), (58), and (59) denotes a single, universally
applicable norm:
* (60) \(\forall p\)(a belief that \(p\) is correct iff \(p\) is
\(F\_1 \vee \cdots \vee F\_n)\).
* (61) \(\forall p\)(one ought to believe \(p\) when \(p\) is
\(F\_1 \vee \cdots \vee F\_n)\).
* (62) \(\forall p\)(it is prima facie good to believe that \(p\) when \(p\) is
\(F\_1 \vee \cdots \vee F\_n)\).
Likewise, functionalists once again respond to the challenge by
invoking the monist aspect of their view. There is a single normative
property--the property of having a property that realizes the
\(F\)-role--that delivers a uniform understanding of (57),
(58), and (59).
## 1. History of the Pragmatic Theory of Truth
The history of the pragmatic theory of truth is tied to the history of
classical American pragmatism. According to one standard account, C.S.
Peirce gets credit for first proposing a pragmatic theory of truth,
William James is responsible for popularizing the pragmatic theory,
and John Dewey subsequently reframed truth in terms of warranted
assertibility (for this reading of Dewey see Burgess & Burgess
2011: 4). More specifically, Peirce is associated with the idea that
true beliefs are those that will withstand future scrutiny; James with
the idea that true beliefs are dependable and useful; Dewey with the
idea that truth is a property of well-verified claims (or
"judgments").
### 1.1 Peirce's Pragmatic Theory of Truth
The American philosopher, logician and scientist Charles Sanders
Peirce (1839-1914) is generally recognized for first proposing a
"pragmatic" theory of truth. Peirce's pragmatic
theory of truth is a byproduct of his pragmatic theory of meaning. In
a frequently-quoted passage in "How to Make Our Ideas
Clear" (1878), Peirce writes that, in order to pin down the
meaning of a concept, we must:
>
>
> Consider what effects, which might conceivably have practical
> bearings, we conceive the object of our conception to have. Then, our
> conception of these effects is the whole of our conception of the
> object. (1878 [1986: 266])
>
>
>
The meaning of the concept of "truth" then boils down to
the "practical bearings" of using this term: that is, of
describing a belief as true. What, then, is the practical difference
of describing a belief as "true" as opposed to any number
of other positive attributes such as "creative",
"clever", or "well-justified"? Peirce's
answer to this question is that true beliefs eventually gain general
acceptance by withstanding future inquiry. (Inquiry, for Peirce, is
the process that takes us from a state of doubt to a state of stable
belief.) This gives us the pragmatic meaning of truth and leads Peirce
to conclude, in another frequently-quoted passage, that:
>
>
> All the followers of science are fully persuaded that the processes of
> investigation, if only pushed far enough, will give one certain
> solution to every question to which they can be applied....The
> opinion which is fated to be ultimately agreed to by all who
> investigate, is what we mean by the truth. (1878 [1986: 273])
>
>
>
Peirce realized that his reference to "fate" could be
easily misinterpreted. In a less-frequently quoted footnote to this
passage he writes that "fate" is not meant in a
"superstitious" sense but rather as "that which is
sure to come true, and can nohow be avoided" (1878 [1986: 273]).
Over time Peirce moderated his position, referring less to fate and
unanimous agreement and more to scientific investigation and general
consensus (Misak 2004). The result is an account that views truth as
what would be the result of scientific inquiry, if scientific inquiry
were allowed to go on indefinitely. In 1901 Peirce writes that:
>
>
> Truth is that concordance of an abstract statement with the ideal
> limit towards which endless investigation would tend to bring
> scientific belief. (1901a [1935: 5.565])
>
>
>
Consequently, truth does not depend on actual unanimity or an actual
end to inquiry:
>
>
> If Truth consists in satisfaction, it cannot be any *actual*
> satisfaction, but must be the satisfaction which *would*
> ultimately be found if the inquiry were pushed to its ultimate and
> indefeasible issue. (1908 [1935: 6.485], emphasis in original)
>
>
>
As these references to inquiry and investigation make clear,
Peirce's concern is with how we come to have and hold the
opinions we do. Some beliefs may in fact be very durable but would not
stand up to inquiry and investigation (this is true of many cognitive
biases, such as the Dunning-Kruger effect, where people remain
blissfully unaware of their own incompetence). For Peirce, a true
belief is not simply one we will hold onto obstinately. Rather, a true
belief is one that has and will continue to hold up to sustained
inquiry. In the practical terms Peirce prefers, this means that to
have a true belief is to have a belief that is dependable in the face
of all future challenges. Moreover, to describe a belief as true is to
point to this dependability, to signal the belief's scientific
bona fides, and to endorse it as a basis for action.
By focusing on the practical dimension of having true beliefs, Peirce
plays down the significance of more theoretical questions about the
nature of truth. In particular, Peirce is skeptical that the
correspondence theory of truth--roughly, the idea that true
beliefs correspond to reality--has much useful to say about the
concept of truth. The problem with the correspondence theory of truth,
he argues, is that it is only "nominally" correct and
hence "useless" (1906 [1998: 379, 380]) as far as
describing truth's practical value. In particular, the
correspondence theory of truth sheds no light on what makes true
beliefs valuable, the role of truth in the process of inquiry, or how
best to go about discovering and defending true beliefs. For Peirce,
the importance of truth rests not on a "transcendental"
(1901a [1935: 5.572]) connection between beliefs on the one hand and
reality on the other, but rather on the practical connection between
doubt and belief, and the processes of inquiry that take us from the
former to the latter:
> If by truth and falsity you mean something not definable in terms of
> doubt and belief in any way, then you are talking of entities of whose
> existence you can know nothing, and which Ockham's razor would
> clean shave off. Your problems would be greatly simplified, if,
> instead of saying that you want to know the "Truth", you
> were simply to say that you want to attain a state of belief
> unassailable by doubt. (1905 [1998: 336])
For Peirce, a true belief is one that is indefeasible and
unassailable--and indefeasible and unassailable for all the right
reasons: namely, because it will stand up to all further inquiry and
investigation. In other words,
> if we were to reach a stage where we could no longer improve upon a
> belief, there is no point in withholding the title "true"
> from it. (Misak 2000: 101)
### 1.2 James' Pragmatic Theory of Truth
Peirce's contemporary, the psychologist and philosopher William
James (1842-1910), often gets credit for popularizing the
pragmatic theory of truth. In a series of popular lectures and
articles, James offers an account of truth that, like Peirce's,
is grounded in the practical role played by the concept of truth.
James, too, stresses that truth represents a kind of satisfaction:
true beliefs are satisfying beliefs, in some sense. Unlike Peirce,
however, James suggests that true beliefs can be satisfying short of
being indefeasible and unassailable: short, that is, of how they would
stand up to ongoing inquiry and investigation. In the lectures
published as *Pragmatism: A New Name for Some Old Ways of
Thinking* (1907) James writes that:
> Ideas...become true just in so far as they help us get into
> satisfactory relation with other parts of our experience, to summarize
> them and get about among them by conceptual short-cuts instead of
> following the interminable succession of particular phenomena. (1907
> [1975: 34])
True ideas, James suggests, are like tools: they make us more
efficient by helping us do the things that need to get done. James
adds to the previous quote by making the connection between truth and
utility explicit:
> Any idea upon which we can ride, so to speak; any idea that will carry
> us prosperously from any one part of our experience to any other part,
> linking things satisfactorily, working securely, simplifying, saving
> labor; is true for just so much, true in so far forth, true
> *instrumentally.* This is the 'instrumental' view
> of truth. (1907 [1975: 34])
While James, here, credits this view to John Dewey and F.C.S.
Schiller, it is clearly a view he endorses as well. To understand
truth, he argues, we must consider the practical difference--or
the pragmatic "cash-value" (1907 [1975: 97]) of having
true beliefs. True beliefs, he suggests, are useful and dependable in
ways that false beliefs are not:
> you can say of it then either that "it is useful because it is
> true" or that "it is true because it is useful".
> Both these phrases mean exactly the same thing. (1907 [1975: 98])
Passages such as this have cemented James' reputation for
equating truth with mere utility (something along the lines of:
"< *p* > is true just in case it is useful to
believe that *p*" [see Schmitt 1995: 78]). (James does
offer the qualification "in the long run and on the whole of
course" (1907 [1975: 106]) to indicate that truth is different
from instant gratification, though he does not say how long the long
run should be.) Such an account might be viewed as a watered-down
version of Peirce's account that substitutes
"cash-value" or subjective satisfaction for
indefeasibility and unassailability in the face of ongoing inquiry and
investigation. Such an account might also be viewed as obviously
wrong, given the undeniable existence of useless truths and useful
falsehoods.
In the early twentieth century Peirce's writings were not yet
widely available. As a result, the pragmatic theory of truth was
frequently identified with James' account--and, as we will
see, many philosophers did view it as obviously wrong. James, in turn,
accused his critics of willful misunderstanding: that because he wrote
in an accessible, engaging style his critics "have boggled at
every word they could boggle at, and refused to take the spirit rather
than the letter of our discourse" (1909 [1975: 99]). However, it
is also the case that James tends to overlook or intentionally
blur--it is hard to say which--the distinction between (a)
giving an account of true ideas and (b) giving an account of the
concept of truth. This means that, while James' theory might
give a psychologically realistic account of why we care about the
truth (true ideas help us get things done) his theory fails to shed
much light on what the concept of truth exactly is or on what makes an
idea true. And, in fact, James often seems to encourage this reading.
In the preface to *The Meaning of Truth* he doubles down by
quoting many of his earlier claims and noting that "when the
pragmatists speak of truth, they mean exclusively something about the
*ideas*, namely their workableness" (1909 [1975: 6],
emphasis added). James' point seems to be this: from a practical
standpoint, we use the concept of truth to signal our confidence in a
particular idea or belief; a true belief is one that can be acted
upon, that is dependable and that leads to predictable outcomes; any
further speculation is a pointless distraction.
What then about the concept of truth? It often seems that James
understands the concept of truth in terms of verification: thus,
"true is the name for whatever idea starts the
verification-process, useful is the name for its completed function in
experience" (1907 [1975: 98]). And, more generally:
> Truth for us is simply a collective name for verification-processes,
> just as health, wealth, strength, etc., are names for other processes
> connected with life, and also pursued because it pays to pursue them.
> (1907 [1975: 104])
James seems to claim that being verified is what makes an idea true,
just as having a lot of money is what makes a person wealthy. To be
true is to be verified:
> Truth *happens* to an idea. It *becomes* true, is
> *made* true by events. Its verity *is* in fact an event,
> a process: the process namely of its verifying itself, its
> veri-*fication*. Its validity is the process of its
> valid-*ation*. (1907 [1975: 97], emphasis in original)
Like Peirce, James argues that a pragmatic account of truth is
superior to a correspondence theory because it specifies, in concrete
terms, what it means for an idea to correspond or "agree"
with reality. For pragmatists, this agreement consists in being led
"towards that reality and no other" in a way that yields
"satisfaction as a result" (1909 [1975: 104]). By
sometimes defining truth in terms of verification, and by unpacking
the agreement of ideas and reality in pragmatic terms, James'
account attempts to both criticize and co-opt the correspondence
theory of truth.
### 1.3 Dewey's Pragmatic Theory of Truth
John Dewey (1859-1952), the third figure from the golden era of
classical American pragmatism, had surprisingly little to say about
the concept of truth especially given his voluminous writings on other
topics. On an anecdotal level, as many have observed, the index to his
527 page *Logic: The Theory of Inquiry* (1938 [2008]) has only
one reference to "truth", and that to a footnote
mentioning Peirce. Otherwise the reader is advised to "*See
also* assertibility".
At first glance, Dewey's account of truth looks like a
combination of Peirce and James. Like Peirce, Dewey emphasizes the
connection between truth and rigorous scientific inquiry; like James,
Dewey views truth as the verified result of past inquiry rather than
as the anticipated result of inquiry proceeding into an indefinite
future. For example, in 1911 he writes that:
> From the standpoint of scientific inquiry, truth indicates not just
> accepted beliefs, but beliefs accepted in virtue of a certain
> method....To science, truth *denotes* verified beliefs,
> propositions that have emerged from a certain procedure of inquiry and
> testing. By that I mean that if a scientific man were asked to point
> to samples of what he meant by truth, he would pick...beliefs
> which were the outcome of the best technique of inquiry available in
> some particular field; and he would do this no matter what his
> conception of the Nature of Truth. (1911 [2008: 28])
Furthermore, like both Peirce and James, Dewey charges correspondence
theories of truth with being unnecessarily obscure because these
theories depend on an abstract (and unverifiable) relationship between
a proposition and how things "really are" (1911 [2008:
34]). Finally, Dewey also offers a pragmatic reinterpretation of the
correspondence theory that operationalizes the idea of
correspondence:
> Our definition of truth...uses correspondence as a mark of a
> meaning or proposition in exactly the same sense in which it is used
> everywhere else...as the parts of a machine correspond. (1911
> [2008: 45])
Dewey has an expansive understanding of "science". For
Dewey, science emerges from and is continuous with everyday processes
of trial and error--cooking and small-engine repair count as
"scientific" on his account--which means he should
not be taken too strictly when he equates truth with scientific
verification. (Peirce and James also had expansive understandings of
science.) Rather, Dewey's point is that true propositions, when
acted on, lead to the sort of predictable and dependable outcomes that
are hallmarks of scientific verification, broadly construed. From a
pragmatic standpoint, scientific verification boils down to the
process of matching up expectations with outcomes, a process that
gives us all the "correspondence" we could ask for.
Dewey eventually came to believe that conventional philosophical terms
such as "truth" and "knowledge" were burdened
with so much baggage, and had become so fossilized, that it was
difficult to grasp the practical role these terms had originally
served. As a result, in his later writings Dewey largely avoids
speaking of "truth" or "knowledge" while
focusing instead on the functions played by these concepts. By his
1938 *Logic: The Theory of Inquiry* Dewey was speaking of
"warranted assertibility" as the goal of inquiry, using
this term in place of both "truth" and
"knowledge" (1938 [2008: 15-16]). In 1941, in a
response to Russell entitled "Propositions, Warranted
Assertibility, and Truth", he wrote that "warranted
assertibility" is a "definition of the nature of knowledge
in the honorific sense according to which only true beliefs are
knowledge" (1941: 169). Here Dewey suggests that
"warranted assertibility" is a better way of capturing the
function of both knowledge and truth insofar as both are goals of
inquiry. His point is that it makes little difference, pragmatically,
whether we describe the goal of inquiry as "acquiring more
knowledge", "acquiring more truth", or better yet,
"making more warrantably assertible judgments".
Because it focuses on truth's function as a goal of inquiry,
Dewey's pragmatic account of truth has some unconventional
features. To begin with, Dewey reserves the term "true"
only for claims that are the product of controlled inquiry. This means
that claims are neither true nor false before they are tested but
that, rather, it is the process of verification that makes them
true:
> truth and falsity are properties only of that subject-matter which is
> the *end*, the close, of the inquiry by means of which it is
> reached. (1941: 176)
Second, Dewey insists that only "judgments"--not
"propositions"--are properly viewed as truth-bearers.
For Dewey, "propositions" are the proposals and working
hypotheses that are used, via a process of inquiry, to generate
conclusions and verified judgments. As such, propositions may be more
or less relevant to the inquiry at hand but they are not, strictly
speaking, true or false (1941: 176). Rather, truth and falsity are
reserved for "judgments" or "the settled outcome of
inquiry" (1941: 175; 1938 [2008: 124]; Burke 1994): reserved for
claims, in other words, that are warrantedly assertible. Third, Dewey
continues to argue that this pragmatic approach to truth is "the
only one entitled to be called a correspondence theory of truth"
(1941: 179) using terms nearly identical to those he used in 1911:
> My own view takes correspondence in the operational sense...of
> *answering*, as a key answers to conditions imposed by a lock,
> or as two correspondents "answer" each other; or, in
> general, as a reply is an adequate answer to a question or
> criticism--; as, in short, a *solution* answers the
> requirements of a *problem*. (1941: 178)
Thanks to Russell (e.g., 1941: Ch. XXIII) and others, by 1941 Dewey
was aware of the problems facing pragmatic accounts of truth. In
response, we see him turning to the language of "warranted
assertibility", drawing a distinction between
"propositions" and "judgments", and grounding
the concept of truth (or warranted assertibility) in scientific
inquiry (Thayer 1947; Burke 1994). These adjustments were designed to
extend, clarify, and improve on Peirce's and James'
accounts. Whether they did so is an open question. Certainly many,
such as Quine, concluded that Dewey was only sidestepping important
questions about truth: that Dewey's strategy was "simply
to avoid the truth predicate and limp along with warranted
belief" (Quine 2008: 165).
Peirce, James, and Dewey were not the only ones to propose or defend a
pragmatic theory of truth in the nineteenth and early twentieth
centuries. Others, such as F.C.S. Schiller (1864-1937), also put
forward pragmatic theories (though Schiller's view, which he
called "humanism", also attracted more than its share of
critics, arguably for very good reasons). Pragmatic theories of truth
also received the attention of prominent critics, including Russell
(1909, 1910 [1994]), Moore (1908), and Lovejoy (1908a,b) among others.
Several of these criticisms will be considered later; suffice it to
say that pragmatic theories of truth soon came under pressure that led
to revisions and several successor approaches over the next
hundred-plus years.
Historically Peirce, James, and Dewey had the greatest influence in
setting the parameters for what makes a theory of truth
pragmatic--this despite the sometimes significant differences
between their respective accounts, and that over time they modified
and clarified their positions in response to both criticism and
over-enthusiastic praise. While this can make it difficult to pin down
a single definition of what, historically, counted as a pragmatic
theory of truth, there are some common themes that cut across each of
their accounts. First, each account begins from a pragmatic analysis
of the meaning of the truth predicate. On the assumption that
describing a belief, claim, or judgment as "true" must
make some kind of practical difference, each of these accounts
attempts to describe what this difference is. Second, each account
then connects truth specifically to processes of inquiry: to describe
a claim as true is to say that it either has or will stand up to
scrutiny. Third, each account rejects correspondence theories of truth
as overly abstract, "transcendental", or metaphysical. Or,
more accurately, each attempts to redefine correspondence in pragmatic
terms, as the agreement between a claim and a predicted outcome. While
the exact accounts offered by Peirce, James, and Dewey found few
defenders--by the mid-twentieth century pragmatic theories of
truth were largely dormant--these themes did set a trajectory for
future versions of the pragmatic theory of truth.
## 2. Neo-Pragmatic Theories of Truth
Pragmatic theories of truth enjoyed a resurgence in the last decades
of the twentieth century. This resurgence was especially visible in
debates between Hilary Putnam (1926-2016) and Richard Rorty
(1931-2007) though broadly pragmatic ideas were defended by
other philosophers as well (Bacon 2012: Ch. 4). (One example is
Crispin Wright's superassertibility theory (1992, 2001) which he
claims is "as well equipped to express the aspiration for a
developed pragmatist conception of truth as any other candidate"
(2001: 781) though he does not accept the pragmatist label.) While
these "neo-pragmatic" theories of truth sometimes
resembled the classical pragmatic accounts of Peirce, James, or Dewey,
they also differed significantly, often by framing the concept of
truth in explicitly epistemic terms such as assertibility or by
drawing on intervening philosophical developments.
At the outset, neo-pragmatism was motivated by a renewed
dissatisfaction with correspondence theories of truth and the
metaphysical frameworks supporting them. Some neo-pragmatic theories
of truth grew out of a rejection of metaphysical realism (e.g., Putnam
1981; for background see Khlentzos 2016). If metaphysical realism
cannot be supported then this undermines a necessary condition for the
correspondence theory of truth: namely, that there be a
mind-independent reality to which propositions correspond. Other
neo-pragmatic approaches emerged from a rejection of
representationalism: if knowledge is not the mind representing
objective reality--if we cannot make clear sense of how the mind
could be a "mirror of nature" to use Rorty's (1979)
term--then we are also well-advised to give up thinking of truth
in realist, correspondence terms. Despite these similar starting
points, neo-pragmatic theories took several different and evolving
forms over the final decades of the twentieth century.
At one extreme some neo-pragmatic theories of truth seemed to endorse
relativism about truth (whether and in what sense they did remains a
point of contention). This view was closely associated with
influential work by Richard Rorty (1982, 1991a,b). The rejection of
representationalism and the correspondence theory of truth could lead
to the conclusion that inquiry is best viewed as aiming at agreement
or "solidarity", not knowledge or truth as these terms are
traditionally understood. This had the radical consequence of
suggesting that truth is no more than "what our peers will,
*ceteris paribus*, let us get away with saying" (Rorty
1979: 176; Rorty [2010a: 45] admits this phrase is provocative) or
just "an expression of commendation" (Rorty 1991a: 23).
Not surprisingly, many found this position deeply problematic since it
appears to relativize truth to whatever one's audience will
accept (Baghramian 2004: 147). A related concern is that this position
also seems to conflate truth with justification, suggesting that if a
claim meets contextual standards of acceptability then it also counts
as true (Gutting 2003). Rorty for one often admitted as much, noting
that he tended to "swing back and forth between trying to reduce
truth to justification and propounding some form of minimalism about
truth" (1998: 21).
A possible response to the accusation of relativism is to claim that
this neo-pragmatic approach does not aim to be a full-fledged theory
of truth. Perhaps truth is actually a rather light-weight concept and
does not need the heavy metaphysical lifting implied by putting
forward a "theory". If the goal is not to describe what
truth is but rather to describe how "truth" is used, then
these uses are fairly straightforward: among other things, to make
generalizations ("everything you said is true"), to
commend ("so true!"), and to caution ("what you said
is justified, but it might not be true") (Rorty 1998: 22; 2000:
4). None of these uses requires that we embark on a possibly fruitless
hunt for the conditions that make a proposition true, or for a proper
definition or theory of truth. If truth is "indefinable"
(Rorty 2010b: 391) then Rorty's approach should not be described as
a definition or theory of truth, relativist or otherwise.
This approach differs in some noteworthy ways from earlier pragmatic
accounts of truth. For one thing it is able to draw on, and draw
parallels with, a range of well-developed non-correspondence theories
of truth that begin (and sometimes end) by stressing the fundamental
equivalence of "*S* is *p*" and
"'*S* is *p*' is true". These
theories, including disquotationalism, deflationism, and minimalism,
simply were not available to earlier pragmatists (though Peirce does
at times discuss the underlying notions). Furthermore, while Peirce
and Dewey, for example, were proponents of scientific inquiry and
scientific processes of verification, on this neo-pragmatic approach
science is no more objective or rational than other disciplines: as
Rorty put it, "the only sense in which science is exemplary is
that it is a model of human solidarity" (1991b: 39). Finally, on
this approach Peirce, James, and Dewey simply did not go far enough:
they failed to recognize the radical implications of their accounts of
truth, or else failed to convey these implications adequately. In turn
much of the critical response to this kind of neo-pragmatism is that
it goes too far by treating truth merely as a sign of commendation
(plus a few other minor functions). In other words, this type of
neo-pragmatism can be accused of going to unpragmatic extremes
(e.g., Haack 1998; also the exchange in Rorty & Price 2010).
A less extreme version of neo-pragmatism attempts to preserve
truth's objectivity and independence while still rejecting
metaphysical realism. This version was most closely associated with
Hilary Putnam, though Putnam's views changed over time (see
Hildebrand 2003 for an overview of Putnam's evolution). While
this approach frames truth in epistemic terms--primarily in terms
of justification and verification--it amplifies these terms to
ensure that truth is more than mere consensus. For example, this
approach might identify "being true with being warrantedly
assertible under ideal conditions" (Putnam 2012b: 220). More
specifically, it might demand "that truth is independent of
justification here and now, but not independent of *all*
justification" (Putnam 1981: 56).
Rather than play up assertibility before one's peers or
contemporaries, this neo-pragmatic approach frames truth in terms of
ideal warranted assertibility: namely, warranted assertibility in the
long run and before all audiences, or at least before all
well-informed audiences. Not only does this sound much less relativist
but it also bears a strong resemblance to Peirce's and
Dewey's accounts (though Putnam, for one, resisted the
comparison: "my admiration for the classical pragmatists does
not extend to any of the different theories of truth that Peirce,
James, and Dewey advanced" [2012c: 70]).
To repeat, this neo-pragmatic approach is designed to avoid the
problems facing correspondence theories of truth while still
preserving truth's objectivity. In the 1980s this view was
associated with Putnam's broader program of "internal
realism": the idea that "*what objects does the world
consist of?* is a question that it only makes sense to ask
*within* a theory or description" (Putnam 1981: 49,
emphasis in original). Internal realism was designed as an alternative
to metaphysical realism that dispensed with achieving an external
"God's Eye Point of View" while still preserving
truth's objectivity, albeit internal to a given theory. (For
additional criticisms of metaphysical realism see Khlentzos 2016.) In
the mid-1990s Putnam's views shifted toward what he called
"natural realism" (1999; for a critical discussion of
Putnam's changing views see Wright 2000). This shift came about
in part because of problems with defining truth in epistemic terms
such as ideal warranted assertibility. One problem is that it is
difficult to see how one can verify either what these ideal conditions
are or whether they have been met: either one might attempt to do so
by taking an external "god's eye view", which would
be inconsistent with internal realism, or one might come to this
determination from within one's current theory, which would be
circular and relativistic. (As Putnam put it, "to talk of
epistemically 'ideal' connections must either be
understood outside the framework of internal realism or it too must be
understood in a solipsistic manner" (2012d: 79-80).)
Since neither option seems promising this does not bode well for
internal realism or for any account of truth closely associated with
it.
If internal realism cannot be sustained then a possible fallback
position is "natural realism"--the view "that
the objects of (normal 'veridical') perception are
'external' things, and, more generally, aspects of
'external' reality" (Putnam 1999: 10)--which
leads to a reconciliation of sorts with the correspondence theory of
truth. A natural realism suggests "that true empirical
statements correspond to states of affairs that actually obtain"
(Putnam 2012a: 97), though this does not commit one to a
correspondence theory of truth across the board. In other words,
natural realism leaves open the possibility that not all true
statements "correspond" to a state of affairs, and even
those that do (such as empirical statements) do not always correspond
in the same way (Putnam 2012c: 68-69; 2012a: 98). While not a
ringing endorsement of the correspondence theory of truth, at least as
traditionally understood, this neo-pragmatic approach is not a
flat-out rejection either.
Viewing truth in terms of ideal warranted assertibility has obvious
pragmatic overtones of Peirce and Dewey. In contrast, viewing truth in
terms of a commitment to natural realism is not so clearly pragmatic
though some parallels still exist. Because natural realism allows for
different types of truth-conditions--some but not all statements
are true in virtue of correspondence--it is compatible with the
truth-aptness of normative discourse: just because ethical statements,
for example, do not correspond in an obvious way to ethical states of
affairs is no reason to deny that they can be true (Putnam 2002). In
addition, like earlier pragmatic theories of truth, this neo-pragmatic
approach redefines correspondence: in this case, by taking a pluralist
approach to the correspondence relation itself (Goodman 2013; see also
Howat 2021 and Shields (forthcoming) for recent attempts to show the
compatibility of pragmatic and correspondence theories of truth).
These two approaches--one tending toward relativism, the other
tending toward realism--represented the two main currents in late
twentieth century neo-pragmatism. Both approaches, at least initially,
framed truth in terms of justification, verification, or
assertibility, reflecting a debt to the earlier accounts of Peirce,
James, and Dewey. Subsequently they evolved in opposite directions.
The first approach, associated with Rorty, flirts with relativism and
implies that truth is not the important philosophical concept it has
long been taken to be. Here, to take a neo-pragmatic stance toward
truth is to recognize the relatively mundane functions this concept
plays: to generalize, to commend, to caution and not much else. To ask
for more, to ask for something "beyond the here and now",
only commits us to "the banal thought that we might be
wrong" (Rorty 2010a: 45). The second neo-pragmatic approach,
associated with Putnam, attempts to preserve truth's objectivity
and the important role it plays across scientific, mathematical,
ethical, and political discourse. This could mean simply "that
truth is independent of justification here and now" or
"that to call a statement of any kind...true is to say that
it has the sort of correctness appropriate to the kind of statement it
is" (2012a: 97-98). On this account truth points to
standards of correctness more rigorous than simply what our peers will
let us get away with saying.
## 3. Truth as a Norm of Inquiry and Assertion
More recently--since roughly the turn of the twenty-first
century--pragmatic theories of truth have focused on
truth's role as a norm of assertion or inquiry. These theories
are sometimes referred to as "new pragmatic" theories to
distinguish them from both classical and neo-pragmatic accounts (Misak
2007b; Legg and Hookway 2021). Like neo-pragmatic accounts, these
theories often build on, or react to, positions besides the
correspondence theory: for example, deflationary, minimal, and
pluralistic theories of truth. Unlike some of the neo-pragmatic
accounts discussed above, these theories give relativism a wide berth,
avoid defining truth in terms of concepts such as warranted
assertibility, and treat correspondence theories of truth with deep
suspicion.
On these accounts truth plays a unique and necessary role in
assertoric discourse (Price 1998, 2003, 2011; Misak 2000, 2007a,
2015): without the concept of truth there would be no difference
between making assertions and, to use Frank Ramsey's nice
phrase, "comparing notes" (1925 [1990: 247]). Instead,
truth provides the "convenient friction" that "makes
our individual opinions engage with one another" (Price 2003:
169) and "is internally related to inquiry, reasons, and
evidence" (Misak 2000: 73).
Like all pragmatic theories of truth, these "new"
pragmatic accounts focus on the use and function of truth. However,
while classical pragmatists were responding primarily to the
correspondence theory of truth, new pragmatic theories also respond to
contemporary disquotational, deflationary, and minimal theories of
truth (Misak 1998, 2007a). As a result, new pragmatic accounts aim to
show that there is more to truth than its disquotational and
generalizing function (for a dissenting view see Freedman 2006).
Specifically, this "more" is that the concept of truth
also functions as a norm that places clear expectations on speakers
and their assertions. In asserting something to be true, speakers take
on an obligation to specify the consequences of their assertion, to
consider how their assertions can be verified, and to offer reasons in
support of their claims:
> once we see that truth and assertion are intimately
> connected--once we see that to assert that *p* is true is
> to assert *p*--we can and must look to our practices of
> assertion and to the commitments incurred in them so as to say
> something more substantial about truth. (Misak 2007a: 70)
>
>
>
This would mean that truth is not just a goal of inquiry, as Dewey
claimed, but actually a norm of inquiry that sets expectations for how
inquirers conduct themselves.
More specifically, without the norm of truth assertoric discourse
would be degraded nearly beyond recognition. Without the norm of
truth, speakers could be held accountable only for either insincerely
asserting things they don't themselves believe (thus violating
the norm of "subjective assertibility") or for asserting
things they don't have enough evidence for (thus violating the
norm of "personal warranted assertibility") (Price 2003:
173-174). The norm of truth is a condition for genuine
disagreement between people who speak sincerely and with, from their
own perspective, good enough reasons. It provides the
"friction" we need to treat disagreements as genuinely
needing resolution: otherwise, "differences of opinion would
simply slide past one another" (Price 2003: 180-181). In
sum, the concept of truth plays an essential role in making assertoric
discourse possible, ensuring that assertions come with obligations and
that conflicting assertions get attention. Without truth, it is no
longer clear to what degree assertions would still be assertions, as
opposed to impromptu speculations or musings. (Correspondence theories
should find little reason to object: they too can recognize that truth
functions as a norm. Of course, correspondence theorists will want to
add that truth also requires correspondence to reality, a step
"new" pragmatists will resist taking.)
It is important that this account of truth is not a definition or
theory of truth, at least in the narrow sense of specifying necessary
and sufficient conditions for a proposition being true. (That is,
there is no proposal along the lines of "*S* is true
iff..."; though see Brown (2015: 69) for a Deweyan
definition of truth and Heney (2015) for a Peircean response.) As
opposed to some versions of neo-pragmatism, which viewed truth as
"indefinable" in part because of its supposed simplicity
and transparency, this approach avoids definitions because the concept
of truth is implicated in a complex range of assertoric practices.
Instead, this approach offers something closer to a "pragmatic
elucidation" of truth that gives "an account of the role
the concept plays in practical endeavors" (Misak 2007a: 68; see
also Wiggins 2002: 317).
The proposal to treat truth as a norm of inquiry and assertion can be
traced back to both classical and neo-pragmatist accounts. In one
respect, this account can be viewed as adding on to neo-pragmatic
theories that reduce truth to justification or "personal
warranted assertibility". In this respect, these newer pragmatic
accounts are a response to the problems facing neo-pragmatism. In
another respect, new pragmatic accounts can be seen as a return to the
insights of classical pragmatists updated for a contemporary audience.
For example, while Peirce wrote of beliefs being "fated"
to be agreed upon at the "ideal limit" of
inquiry--conditions that to critics sounded metaphysical and
unverifiable--a better approach is to treat true beliefs as those
"that would withstand doubt, were we to inquire as far as we
fruitfully could on the matter" (Misak 2000: 49). On this
account, to say that a belief is true is shorthand for saying that it
"gets thing right" and "stands up and would continue
to stand up to reasons and evidence" (Misak 2015: 263, 265).
This pragmatic elucidation of the concept of truth thus attempts to
capture both what speakers say and what they do when they describe a
claim as true. In a narrow sense the meaning of truth--what
speakers are saying when they use this word--is that true beliefs
are indefeasible. However, in a broader sense the meaning of truth is
also what speakers are doing when they use this word, with the
proposal here that truth functions as a norm that is constitutive of
assertoric discourse.
As we have seen, pragmatic accounts of truth focus on the function the
concept plays: specifically, the practical difference made by having
and using the concept of truth. Early pragmatic accounts tended to
analyze this function in terms of the practical implications of
labeling a belief as true: depending on the version, to say that a
belief is true is to signal one's confidence, or that the belief
is widely accepted, or that it has been scientifically verified, or
that it would be assertible under ideal circumstances, among other
possible implications. These earlier accounts focus on the function of
truth in conversational contexts or in the context of ongoing
inquiries. The newer pragmatic theories discussed in this section take
a broader approach to truth's function, addressing its role not
just in conversations and inquiries but in making certain kinds of
conversations and inquiries possible in the first place. By viewing
truth as a norm of assertion and inquiry, these more recent pragmatic
theories make the function of truth independent of what individual
speakers might imply in specific contexts. Truth is not just what is
assertible or verifiable (under either ideal or non-ideal
circumstances), but sets objective expectations for making assertions
and engaging in inquiry. Unlike neo-pragmatists such as Rorty and
Putnam, new pragmatists such as Misak and Price argue that truth plays
a role entirely distinct from justification or warranted
assertibility. This means that, without the concept of truth and the
norm it represents, assertoric discourse (and inquiry in general)
would dwindle into mere "comparing notes".
## 4. Common Features
Pragmatic theories of truth have evolved to the point where a variety
of different approaches are described as "pragmatic". These
theories often disagree significantly with each other, making it
difficult either to define pragmatic theories of truth in a simple and
straightforward manner or to specify the necessary conditions that a
pragmatic theory of truth must meet. As a result, one way to clarify
what makes a theory of truth pragmatic is to say something about what
pragmatic theories of truth are not. Given that pragmatic theories of
truth have often been put forward in contrast to prevailing
correspondence and other "substantive" theories of truth
(Wyatt & Lynch, 2016), this suggests a common commitment shared by
the pragmatic theories described above.
One way to differentiate pragmatic accounts from other theories of
truth is to distinguish the several questions that have historically
guided discussions of truth. While some have used decision trees to
categorize different theories of truth (Lynch 2001a; Künne 2003),
or have proposed family trees showing relations of influence and
affinity (Haack 1978), another approach is to distinguish separate
"projects" that examine different dimensions of the
concept of truth (Kirkham 1992). (These projects also break into
distinct subprojects; for a similar approach see Frápolli 1996.) On
this last approach the first, "metaphysical", project aims
to identify the necessary and sufficient conditions for "what it
is for a statement...to be true" (Kirkham 1992: 20; Wyatt
& Lynch call this the "essence project" [2016: 324]).
This project often takes the form of identifying what makes a
statement true: e.g., correspondence to reality, or coherence with
other beliefs, or the existence of a particular state of affairs. A
second, "justification", project attempts to specify
"some characteristic, possessed by most true
statements...by reference to which the probable truth or falsity
of the statement can be judged" (Kirkham 1992: 20). This often
takes the form of giving a criterion of truth that can be used to
determine whether a given statement is true. Finally, the
"speech-act" project addresses the question of "what
are we *doing* when we make utterances" that
"ascribe truth to some statement?" (Kirkham 1992: 28).
Unfortunately, truth-theorists have not always been clear on which
project they are pursuing, which can lead to confusion about what
counts as a successful or complete theory of truth. It can also lead
to truth-theorists talking past each other when they are pursuing
distinct projects with different standards and criteria of
success.
In these terms, pragmatic theories of truth are best viewed as
pursuing the speech-act and justification projects. As noted above,
pragmatic accounts of truth have often focused on how the concept of
truth is used and what speakers are doing when describing statements
as true: depending on the version, speakers may be commending a
statement, signaling its scientific reliability, or committing
themselves to giving reasons in its support. Likewise, pragmatic
theories often focus on the criteria by which truth can be judged:
again, depending on the version, this may involve linking truth to
verifiability, assertibility, usefulness, or long-term durability.
With regard to the speech-act and justification projects pragmatic
theories of truth seem to be on solid ground, offering plausible
proposals for addressing these projects. They are on much less solid
ground when viewed as addressing the metaphysical project (Capps
2020). As we will see, it is difficult to defend the idea, for
example, that either utility, verifiability, or widespread acceptance
are necessary and sufficient conditions for truth or are what make a
statement true (though, to be clear, few pragmatists have defended
their positions in these exact terms).
This would suggest that the opposition between pragmatic and
correspondence theories of truth is partly a result of their pursuing
different projects. From a pragmatic perspective, the problem with the
correspondence theory is its pursuit of the metaphysical project that,
as its name suggests, invites metaphysical speculation about the
conditions which make sentences true--speculation that can
distract from more central questions of what makes true beliefs
valuable, how the truth predicate is used, and how true beliefs
are best recognized and acquired. (Pragmatic theories of truth are not
alone in raising these concerns (David 2022).) From the standpoint of
correspondence theories and other accounts that pursue the
metaphysical project, pragmatic theories will likely seem incomplete,
sidestepping the most important questions (Howat 2014). But from the
standpoint of pragmatic theories, projects that pursue or prioritize
the metaphysical project are deeply misguided and misleading.
This supports the following truism: a common feature of pragmatic
theories of truth is that they focus on the practical function that
the concept of truth plays. Thus, whether truth is a norm of inquiry
(Misak), a way of signaling widespread acceptance (Rorty), stands for
future dependability (Peirce), or designates the product of a process
of inquiry (Dewey), among other things, pragmatic theories shed light
on the concept of truth by examining the practices through which
solutions to problems are framed, tested, asserted, and
defended--and, ultimately, come to be called true. Pragmatic
theories of truth can thus be viewed as making contributions to the
speech-act and justification projects by focusing especially on the
practices people engage in when they solve problems, make assertions,
and conduct scientific inquiry. (For another example, Chang has argued
that claims are true "to the extent that there are operationally
coherent activities that can be performed by relying on it" (2022:
167).) Of course, even though pragmatic theories of truth largely
agree on which questions to address and in what order, this does not
mean that they agree on the answers to these questions, or on how to
best formulate the meaning and function of truth.
Another common commitment of pragmatic theories of truth--besides
prioritizing the speech-act and justification projects--is that
they do not restrict truth to certain topics or types of inquiry. That
is, regardless of whether the topic is descriptive or normative,
scientific or ethical, pragmatists tend to view it as an opportunity
for genuine inquiry that incorporates truth-apt assertions. The
truth-aptness of ethical and normative statements is a notable feature
across a range of pragmatic approaches, including Peirce's (at
least in some of his moods, e.g., 1901b [1958: 8.158]), Dewey's
theory of valuation (1939), Putnam's questioning of the
fact-value dichotomy (2002), and Misak's claim that "moral
beliefs must be in principle responsive to evidence and
argument" (2000: 94; for a dissenting view see Frega 2013). This
broadly cognitivist attitude--that normative statements are
truth-apt--is related to how pragmatic theories of truth
de-emphasize the metaphysical project. As a result, from a pragmatic
standpoint one of the problems with the correspondence theory of truth
is that it can undermine the truth-aptness of normative claims. If, as
the correspondence theory proposes, a necessary condition for the
truth of a normative claim is the existence of a normative fact to
which it corresponds, and if the existence of normative facts is
difficult to account for (normative facts seem ontologically distinct
from garden-variety physical facts), then this does not bode well for
the truth-aptness of normative claims or the point of posing, and
inquiring into, normative questions (Lynch 2009). If the
correspondence theory of truth leads to skepticism about normative
inquiry, then this is all the more reason, according to pragmatists,
to sidestep the metaphysical project in favor of the speech-act and
justification projects.
As we have seen, pragmatic theories of truth take a variety of
different forms. Despite these differences, and despite often being
averse to being called a "theory", pragmatic theories of
truth do share some common features. To begin with, and unlike many
theories of truth, these theories focus on the pragmatics of
truth-talk: that is, they focus on how truth is used as an essential
step toward an adequate understanding of the concept of truth (indeed,
given this aversion to theorizing, a pragmatic "theory" of
truth comes close to being an oxymoron). More specifically, pragmatic
theories look to how truth is used in epistemic contexts where people
make assertions, conduct inquiries, solve problems, and act on their
beliefs. By prioritizing the speech-act and justification projects,
pragmatic theories of truth attempt to ground the concept of truth in
epistemic practices as opposed to the abstract relations between
truth-bearers (such as propositions or statements) and truth-makers
(such as states of affairs) appealed to by correspondence theories
(MacBride 2022). Pragmatic theories also recognize that truth can play
a fundamental role in shaping inquiry and assertoric
discourse--for example, by functioning as a norm of these
practices--even when it is not explicitly mentioned. In this
respect pragmatic theories are less austere than deflationary theories
which limit the use of truth to its generalizing and disquotational
roles. And, finally, pragmatic theories of truth draw no limits, at
least at the outset, to the types of statements, topics, and inquiries
where truth may play a practical role. If it turns out that a given
topic is not truth-apt, this is something that should be discovered as
a characteristic of that subject matter, not something determined by
having chosen one theory of truth or another (Capps 2017).
## 5. Critical Assessments
Pragmatic theories of truth have faced several objections since first
being proposed. Some of these objections can be rather narrow,
challenging a specific pragmatic account but not pragmatic theories in
general (this is the case with objections raised by other pragmatic
accounts). This section will look at more general objections: either
objections that are especially common and persistent, or objections
that pose a challenge to the basic assumptions underlying pragmatic
theories more broadly.
### 5.1 Three Classic Objections and Responses
Some objections are as old as the pragmatic theory of truth itself.
The following objections were raised in response to James'
account in particular. While James offered his own responses to many
of these criticisms (see especially his 1909 [1975]), versions of
these objections often apply to other and more recent pragmatic
theories of truth (for further discussion see Haack 1976; Tiercelin
2014).
One classic and influential line of criticism is that, if the
pragmatic theory of truth equates truth with utility, this definition
is (obviously!) refuted by the existence of useful but false beliefs,
on the one hand, and by the existence of true but useless beliefs on
the other (Russell 1910 [1994] and Lovejoy 1908a,b). In short, there
seems to be a clear and obvious difference between describing a belief
as true and describing it as useful:
>
>
> when we say that a belief is true, the thought we wish to convey is
> not the same thought as when we say that the belief furthers our
> purposes; thus "true" does not mean "furthering our
> purposes". (Russell 1910 [1994: 98])
>
>
>
While this criticism is often aimed especially at James' account
of truth, it plausibly carries over to any pragmatic theory. So
whether truth is defined in terms of utility, long-term durability or
assertibility (etc.), it is still an open question whether a useful or
durable or assertible belief is, in fact, really true. In other words,
whatever concept a pragmatic theory uses to define truth, there is
likely to be a difference between that concept and the concept of
truth (e.g., Bacon 2014 questions the connection between truth and
indefeasibility).
A second and related criticism builds on the first. Perhaps utility,
long-term durability, and assertibility (etc.) should be viewed not as
definitions but rather as criteria of truth, as yardsticks for
distinguishing true beliefs from false ones. This seems initially
plausible and might even serve as a reasonable response to the first
objection above. Falling back on an earlier distinction, this would
mean that appeals to utility, long-term durability, and assertibility
(etc.) are best seen as answers to the justification and not the
metaphysical project. However, without some account of what truth is,
or what the necessary and sufficient conditions for truth are, any
attempt to offer criteria of truth is arguably incomplete: we cannot
have criteria of truth without first knowing what truth is. If so,
then the justification project relies on and presupposes a successful
resolution to the metaphysical project; the latter cannot be
sidestepped or bracketed, and any theory which attempts to do so will
give at best a partial account of truth (Creighton 1908; Stebbing
1914).
And a third objection builds on the second. Putting aside the question
of whether pragmatic theories of truth adequately address the
metaphysical project (or address it at all), there is also a problem
with the criteria of truth they propose for addressing the
justification project. Pragmatic theories of truth seem committed, in
part, to bringing the concept of truth down to earth, to explaining
truth in concrete, easily confirmable, terms rather than the abstract,
metaphysical correspondence of propositions to truth-makers, for
example. The problem is that assessing the usefulness (etc.) of a
belief is no more clear-cut than assessing its truth: beliefs may be
more or less useful, useful in different ways and for different
purposes, or useful in the short- or long-run. Determining whether a
belief is really useful is no easier than determining whether it
is really true: "it is so often harder to determine whether a
belief is useful than whether it is true" (Russell 1910 [1994:
121]; also 1946: 817). Far from making the concept of truth more
concrete, and the assessment of beliefs more straightforward,
pragmatic theories of truth thus seem to leave the concept as opaque
as ever.
These three objections have been around long enough that pragmatists
have, at various times, proposed a variety of responses. One response
to the first objection, that there is a clear difference between
utility (etc.) and truth, is to deny that pragmatic approaches are
aiming to define the concept of truth in the first place. It has been
argued that pragmatic theories are not about finding a word or concept
that can substitute for truth but that they are, rather, focused on
tracing the implications of using this concept in practical contexts.
This is what Misak (2000, 2007a) calls a "pragmatic
elucidation". Noting that it is "pointless" to offer
a definition of truth, she concludes that "we ought to attempt
to get leverage on the concept, or a fix on it, by exploring its
connections with practice" (2007a: 69; see also Wiggins 2002).
It is even possible that James--the main target of Russell and
others--would agree with this response. As with Peirce, it often
seems that James' complaint is not so much with the
correspondence theory of truth, *per se*, as with the assumption that the
correspondence theory, by itself, says much interesting or important
about the concept of truth. (For charitable interpretations of what
James was attempting to say see Ayer 1968, Chisholm 1992, Bybee 1984,
Cormier 2001, 2011, Chang 2022, Pihlström 2021, and Perkins
1952; for a reading that emphasizes Peirce's commitment to
correspondence idioms see Atkins 2010.)
This still leaves the second objection: that the metaphysical project
of defining truth cannot be avoided by focusing instead on finding the
criteria for truth (the "justification project"). To be
sure, pragmatic theories of truth have often been framed as providing
criteria for distinguishing true from false beliefs. The distinction
between offering a definition as opposed to offering criteria would
suggest that criteria are separate from, and largely inferior to, a
definition of truth. However, one might question the underlying
distinction: as Haack (1976) argues,
>
>
> the pragmatists' view of meaning is such that a dichotomy
> between definitions and criteria would have been entirely unacceptable
> to them. (1976: 236)
>
>
>
If meaning is related to use (as pragmatists generally claim) then
explaining how a concept is used, and specifying criteria for
recognizing that concept, may provide all one can reasonably expect
from a theory of truth. Deflationists have often made a similar point
though, as noted above, pragmatists tend to find deflationary accounts
excessively austere.
Even so, there is still the issue that pragmatic criteria of truth
(whatever they are) do not provide useful insight into the concept of
truth. If this concern is valid, then pragmatic criteria, ironically,
fail the pragmatic test of making a difference to our understanding of
truth. This objection has some merit: for example, if a pragmatic
criterion of truth is that true beliefs will stand up to indefinite
inquiry then, while it is possible to have true beliefs, "we are
never in a position to judge whether a belief is true or not"
(Misak 2000: 57). In that case it is not clear what good it serves to
have a pragmatic criterion of truth. Pragmatic theories of truth might
try to sidestep this objection by stressing their commitment to both
the justification and the speech-act project. While pragmatic
approaches to the justification project spell out what truth means in
conversational contexts--to call a statement true is to cite its
usefulness, durability, etc.--pragmatic approaches to the
speech-act project point to what speakers do in using the concept of
truth. This has the benefit of showing how the concept of
truth--operating as a norm of assertion, say--makes a real
difference to our understanding of the conditions on assertoric
discourse. Pragmatic theories of truth are, as a result, wise to
pursue both the justification and the speech-act projects. By
themselves, pragmatic approaches to the justification project are
likely to disappoint.
These classic objections to the pragmatic theory of truth raise
several important points. For one thing, they make it clear that
pragmatic theories of truth, or at least some historically prominent
versions of it, do a poor job if viewed as providing a strict
definition of truth. As Russell and others noted, defining truth in
terms of utility or similar terms is open to obvious counter-examples.
This does not bode well for pragmatic attempts to address the
metaphysical project. As a result, pragmatic theories of truth have
evolved often by focusing on the justification and speech-act projects
instead. This is not to say that each of the above objections has
been met. It is still an open question whether the metaphysical
project can be avoided as many pragmatic theories attempt to do (e.g.,
Fox 2008 argues that epistemic accounts such as Putnam's fail to
explain the value of truth as well as more traditional approaches do).
It is also an open question whether, as they evolve in response to
these objections, pragmatic theories of truth will invite new
lines of criticism.
### 5.2 The Fundamental Objection
One long-standing and still ongoing objection is that pragmatic
theories of truth are anti-realist and, as such, violate basic
intuitions about the nature and meaning of truth: call this "the
fundamental objection". The source of this objection rests with
the tendency of pragmatic theories of truth to treat truth
epistemically, by focusing on verifiability, assertibility, and other
related concepts. Some (see, e.g., Schmitt 1995; Nolt 2008) have
argued that, by linking truth with verifiability or assertibility,
pragmatic theories make truth too subjective and too dependent on our
contingent ability to figure things out, as opposed to theories that,
for example, appeal to objective facts as truth-makers. Others have
argued that pragmatic theories cannot account for what Peirce called
buried secrets: statements that would seem to be either true or false
despite our inability to figure out which (see de Waal 1999, Howat
2013, and Talisse & Aikin 2008 for discussions of this). For
similar reasons, some have accused pragmatic theories of denying
bivalence (Allen-Hermanson 2001). Whatever form the objection takes,
it raises a common concern: that pragmatic theories of truth are
insufficiently realist, failing both to account for truth's
objectivity and to distinguish truth from the limitations of actual
epistemic practice. What results, accordingly, is not a theory of
truth, but rather a theory of justification, warranted assertibility,
or some other epistemic concept.
This objection has persisted despite inspiring a range of responses.
At one extreme some, such as Rorty, have largely conceded the point
while attempting to defuse its force. As noted earlier, Rorty grants
that truth is not objective in the traditional sense while also
attempting to undercut the very distinction between objectivity and
relativism. Others, such as Putnam, have argued against metaphysical
realist intuitions (such as "the God's Eye view"
1981: 55), while defending the idea of a more human-scale objectivity:
"objectivity and rationality humanly speaking are what we have;
they are better than nothing" (1981: 55). Another response is to
claim that pragmatic accounts of truth are fully compatible with
realism; any impression to the contrary is a result of confusing
pragmatic "elucidations" of truth with more typical
"definitions". For example Peirce's steadfast
commitment to realism is perfectly compatible with his attempting to
describe truth in terms of its practical role: hence, his notion of
truth
>
>
> is the ordinary notion, but he insists on this notion's being
> philosophically characterized from the viewpoint of the practical
> first order investigator. (Hookway 2002: 319; see also Hookway 2012
> and Legg 2014)
>
>
>
Even James claimed "my account of truth is realistic"
(1909 [1975: 117]). Likewise, Chang argues that this objection
"ignores the realist dimension of pragmatism, in terms of how
pragmatism demands that our ideas answer to experience, and to
realities" (2022: 203). Finally, others attempt to undercut the
distinction between realism and antirealism though without making
concessions to antirealism. Hildebrand argues for embracing a
"practical starting point" (Hildebrand 2003: 185) as a way
of going "beyond" the realism-antirealism debate (see also
Fine 2007). Similarly, Price, while admitting that his theory might
seem "fictionalist" about truth, argues that its bona
fides are "impeccably pragmatist" (2003: 189) and, in
fact, "deprive both sides of the realism-antirealism debate of
conceptual resources on which the debate seems to depend" (2003:
188; but see Atkin 2015 for some caveats and Lynch 2015 for a
pluralist amendment). Da Costa and French (2003) offer a formal
account of pragmatic truth that, they argue, can benefit both sides of
the realism-antirealism debate (though they themselves prefer
structural realism).
We find, in other words, an assortment of replies that run the gamut
from embracing antirealism to defending realism to attempting to
undermine the realist-antirealist distinction itself. Evidently, there
is no consensus among pragmatic theories of truth as to the best line
of response against this objection. In a way, this should be no
surprise: the objection boils down to the charge that pragmatic
theories of truth are too epistemic, when it is precisely their
commitment to epistemic concepts that characterizes pragmatic theories
of truth. Responding to this objection may involve concessions and
qualifications that compromise the pragmatic nature of these
approaches. Or responding may mean showing how pragmatic accounts have
certain practical benefits--but these benefits as well as their
relative importance are themselves contentious topics. As a result, we
should not expect this objection to be easily resolvable, if it can be
resolved at all.
Despite being the target of significant criticism from nearly the
moment of its birth, the pragmatic theory of truth has managed to
survive and, at times, even flourish for over a century. Because the
pragmatic theory of truth has come in several different versions, and
because these versions often diverge significantly, it can be
difficult to pin down and assess generally. Adding to the possible
confusion, not all those identified as pragmatists have embraced a
pragmatic theory of truth (e.g., Brandom 2011), while similar theories
have been held by non-pragmatists (e.g., Dummett 1959; Wright 1992).
Viewed more positively, pragmatic theories have evolved and matured to
become more refined and, perhaps, more plausible over time. With the
benefit of hindsight we can see how pragmatic theories of truth have
stayed focused on the practical function that the concept of truth
plays: first, the role truth plays within inquiry and assertoric
discourse by, for example, signaling those statements that are
especially useful, well-verified, durable, or indefeasible and,
second, the role truth plays in shaping inquiry and assertoric
discourse by providing a necessary goal or norm. (While pragmatic
theories agree on the importance of focusing on truth's
practical function, they often disagree over what this practical
function is.)
The pragmatic theory of truth began with Peirce raising the question
of truth's "practical bearings". It is also possible
to ask this question of the pragmatic theory of truth itself: what
difference does this theory make? Or to put it in James' terms,
what is its "cash value"? One answer is that, by focusing
on the practical function of the concept of truth, pragmatic theories
highlight how this concept makes certain kinds of inquiry and
discourse possible. In contrast, as Lynch (2009) notes, some accounts
of truth make it difficult to see how certain claims are
truth-apt:
> consider propositions like *two and two are four* or
> *torture is wrong*. Under the assumption that truth is always
> and everywhere causal correspondence, it is a vexing question how
> these true thoughts *can* be true. (Lynch 2009: 34, emphasis in
> original)
If that is so, then pragmatic theories have the advantage of
preserving the possibility and importance of various types of inquiry
and discourse. While this does not guarantee that inquiry will always
reach a satisfying or definite conclusion, it does suggest that
pragmatic theories of truth do make a difference: in the spirit of
Peirce's "first rule of reason", they "do not
block the way of inquiry" (1898 [1992: 178]). |
## 1. Semiformal introduction
Let's take a closer look at the sentence (1), given above:
(1)
(1) is not true.
It will be useful to make the paradoxical reasoning explicit. First,
suppose that:
(2)
(1) is not true.
It seems an intuitive principle concerning truth that, for any
sentence \(p\), we have the so-called T-biconditional
(3)
'\(p\)' is true iff \(p\).
(Here we are using 'iff' as an abbreviation for 'if
and only if'.) In particular, we should have
(4)
'(1) is not true' is true iff (1) is not true.
Thus, from (2) and (4), we get
(5)
'(1) is not true' is true.
Then we can apply the identity,
(6)
(1) = '(1) is not true.'
to conclude that (1) is true. This all shows that if (1) is not true,
then (1) is true. Similarly, we can also argue that if (1) is true
then (1) is not true. So (1) seems to be both true and not true: hence
the paradox. As stated above, the three-valued approach to the paradox
takes the liar sentence, (1), to be neither true nor false. Exactly
how, or even whether, this move blocks the above reasoning is a matter
for debate.
The RTT is not designed to block reasoning of the above kind, but to
model it -- or most of
it.[2]
As stated above, the central idea is the idea of a *revision
process*: a process by which we *revise* hypotheses about
the truth-value of one or more sentences.
Consider the reasoning regarding the liar sentence, (1) above. Suppose
that we *hypothesize* that (1) is not true. Then, with an
application of the relevant T-biconditional, we might revise our
hypothesis as follows:
| | |
| --- | --- |
| Hypothesis: | (1) is not true. |
| T-biconditional: | '(1) is not true' is true iff (1) is not true. |
| Therefore: | '(1) is not true' is true. |
| Known identity: | (1) = '(1) is not true'. |
| Conclusion: | (1) is true. |
| New *revised* hypothesis: | (1) is true. |
We could continue the revision process, by revising our hypothesis
once again, as follows:
| | |
| --- | --- |
| New hypothesis: | (1) is true. |
| T-biconditional: | '(1) is not true' is true iff (1) is not true. |
| Therefore: | '(1) is not true' is not true. |
| Known identity: | (1) = '(1) is not true'. |
| Conclusion: | (1) is not true. |
| *New* new revised hypothesis: | (1) is not true. |
As the revision process continues, we flip back and forth between
taking the liar sentence to be true and not true.
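This flip-flopping can be simulated directly. In the minimal sketch below (our own illustration, not part of the RTT formalism; the function name is hypothetical), a hypothesis about the liar is a boolean, and one revision step applies the T-biconditional together with the known identity:

```python
def revise_liar(hypothesis: bool) -> bool:
    # (1) says "(1) is not true".  By the T-biconditional and the
    # identity (1) = '(1) is not true', the revised verdict on (1) is
    # the truth value, under the old hypothesis, of what (1) says:
    # namely, that (1) is not true.
    return not hypothesis

h = False          # initial hypothesis: (1) is not true
history = []
for _ in range(6):
    history.append(h)
    h = revise_liar(h)

print(history)     # [False, True, False, True, False, True]
```

Whichever hypothesis we start from, the sequence never settles down: the liar is unstable under revision.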
**Example 1.1**
It is worth seeing how this kind of revision reasoning works in a case
with several interconnected sentences. Let's apply the revision
idea to the following three sentences:
(7)
(8) is true or (9) is true.
(8)
(7) is true.
(9)
(7) is not true.
Informally, we might reason as follows. Either (7) is true or (7) is
not true. Thus, either (8) is true or (9) is true. Thus, (7) is true.
Thus (8) is true and (9) is not true, and (7) is still true. Iterating
the process once again, we get (8) is true, (9) is not true, and (7)
is true. More formally, consider any initial hypothesis, \(h\_0\),
about the truth values of (7), (8) and (9). Either \(h\_0\) says that
(7) is true or \(h\_0\) says that (7) is not true. In either case, we
can use the T-biconditional to generate our revised hypothesis
\(h\_1\): if \(h\_0\) says that (7) is true, then \(h\_1\) says that
'(7) is true' is true, i.e. that (8) is true; and if
\(h\_0\) says that (7) is not true, then \(h\_1\) says that '(7)
is not true' is true, i.e. that (9) is true. So \(h\_1\) says
that either (8) is true or (9) is true. So \(h\_2\) says that
'(8) is true or (9) is true' is true. In other words,
\(h\_2\) says that (7) is true. So no matter what hypothesis \(h\_0\) we
start with, two iterations of the revision process lead to a
hypothesis that (7) is true. Similarly, three *or more*
iterations of the revision process, lead to the hypothesis that (7) is
true, (8) is true and (9) is not true -- regardless of our
initial hypothesis. In Section 3, we will reconsider this example in a
more formal context.
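The semiformal argument just given can also be checked mechanically. In the sketch below (our own encoding, read off what sentences (7)-(9) say; not part of the formalism), one revision step replaces the hypothesized value of each sentence with the truth value, under the old hypothesis, of what that sentence says:

```python
from itertools import product

def revise(h):
    # One revision step: each sentence's new value is the truth
    # value, under the old hypothesis h, of what that sentence says.
    return {
        7: h[8] or h[9],   # (7) says: (8) is true or (9) is true
        8: h[7],           # (8) says: (7) is true
        9: not h[7],       # (9) says: (7) is not true
    }

# From each of the 8 possible initial hypotheses, three revisions
# reach the same verdict: (7) true, (8) true, (9) not true.
for values in product([True, False], repeat=3):
    h = dict(zip((7, 8, 9), values))
    for _ in range(3):
        h = revise(h)
    assert h == {7: True, 8: True, 9: False}
```

Further revisions leave this hypothesis fixed, matching the stable truth values reached by the informal reasoning at the start of the example.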
One thing to note is that, in Example 1.1, the revision process yields
*stable* truth values for all three sentences. The notion of a
sentence *stably true in all revision sequences* will be a
central notion for the RTT. The revision-theoretic treatment
contrasts, in this case, with the three-valued approach: on most ways
of implementing the three-valued idea, all three sentences, (7), (8)
and (9), turn out to be neither true nor
false.[3]
In this case, the RTT arguably better captures the correct informal
reasoning than does the three-valued approach: the RTT assigns to the
sentences (7), (8) and (9) the truth-values that were assigned to them
by the informal reasoning given at the beginning of the example.
## 2. Framing the problem
### 2.1 Truth languages
The goal of the RTT is *not* to give a paradox-free account of
truth. Rather, the goal of the RTT is to give an account of our often
unstable and often paradoxical reasoning about truth. The RTT seeks, more
specifically, to give a two-valued account that assigns stable
classical truth values to sentences when intuitive reasoning would
assign stable classical truth values. We will present a formal
semantics for a formal language: we want that language to have both a
truth predicate and the resources to refer to its own sentences.
Let us consider a first-order language \(L\), with connectives &,
\(\vee\), and \(\neg\), quantifiers \(\forall\) and \(\exists\), the
equals sign =, variables, and some stock of names, function symbols
and relation symbols. We will say that \(L\) is a *truth
language*, if it has a distinguished predicate \(\boldsymbol{T}\)
and quotation marks ' and ', which will be used to form
*quote names*: if \(A\) is a sentence of \(L\), then
'\(A\)' is a name. Let \(\textit{Sent}\_L = \{A : A\) is a
sentence of \(L\}\).
It will be useful to identify the \(\boldsymbol{T}\)-free fragment of
a truth language \(L\): the first-order language \(L^-\) that has the
same names, function symbols and relation symbols as \(L\),
*except* the unary predicate \(\boldsymbol{T}\). Since \(L^-\)
has the same names as \(L\), including the same quote names, \(L^-\)
will have a quote name '\(A\)' for every sentence \(A\) of
\(L\). Thus \(\forall x\boldsymbol{T}x\) is not a sentence of \(L^-\),
but '\(\forall x\boldsymbol{T}x\)' is a name of \(L^-\)
and \(\forall x(x =\) '\(\forall x\boldsymbol{T}x\)') is a
sentence of \(L^-\).
### 2.2 Ground models
Other than the truth predicate, we will assume that our language is
interpreted classically. More precisely, let a *ground model*
for \(L\) be a classical model \(M = \langle D,I\rangle\) for \(L^-\),
the \(\boldsymbol{T}\)-free fragment of \(L\), satisfying the
following:
1. \(D\) is a nonempty domain of discourse;
2. \(I\) is a function assigning
1. to each name of \(L\) a member of \(D\);
2. to each \(n\)-ary function symbol of \(L\) a function from \(D^n\)
to \(D\); and
3. to each \(n\)-ary relation symbol, other than \(\boldsymbol{T}\),
of \(L\) a function from \(D^n\) to one of the two truth-values in the
set \(\{\mathbf{t},
\mathbf{f}\}\);[4]
3. \(\textit{Sent}\_L\) \(\subseteq\) \(D\); and
4. \(I\)('\(A\)') \(= A\) for every \(A \in\)
\(\textit{Sent}\_L\).
Clauses (1) and (2) simply specify what it is for \(M\) to be a
classical model of the \(\boldsymbol{T}\)-free fragment of \(L\).
Clauses (3) and (4) ensure that \(L\), when interpreted, can talk
about its own sentences. Given a ground model, we will consider the
prospects of providing a satisfying interpretation of
\(\boldsymbol{T}\). The most obvious desideratum is that the ground
model, expanded to include an interpretation of \(\boldsymbol{T}\),
satisfy Tarski's T-biconditionals, i.e., the biconditionals of
the form
\[
\boldsymbol{T}\lsquo A\rsquo \text{ iff } A
\]
for each \(A \in\) \(\textit{Sent}\_L\).
Some useful terminology: Given a ground model \(M\) for \(L\) and a
name, function symbol or relation symbol \(X\), we can think of
\(I(X)\) as the *interpretation* or, to borrow a term from
Gupta and Belnap, the *signification* of \(X\). Gupta and
Belnap characterize an expression's or concept's
*signification* in a world \(w\) as "an abstract
something that carries all the information about all the
expression's [or concept's] extensional relations in
\(w\)." If we want to interpret \(\boldsymbol{T}x\) as
'\(x\) is true', then, given a ground model \(M\), we
would like to find an appropriate signification, or an appropriate
range of significations, for \(\boldsymbol{T}\).
### 2.3 Three ground models
We might try to assign to \(\boldsymbol{T}\) a *classical*
signification, by expanding \(M\) to a classical model \(M' = \langle
D',I'\rangle\) for all of \(L\), including \(\boldsymbol{T}\). Also
recall that we want \(M'\) to satisfy the T-biconditionals: for our
immediate purposes, let us interpret these classically. Let us say
that an expansion \(M'\) of a ground model \(M\) is *Tarskian*
iff \(M'\) is a classical model and all of the T-biconditionals,
interpreted classically, are true in \(M'\). We would like to expand
ground models to Tarskian models. We consider three ground models in
order to assess our prospects for doing this.
**Ground model** \(M\_1\)
Our first ground model is a formalization of Example 1.1, above.
Suppose that \(L\_1\) contains three non-quote names, \(\alpha ,
\beta\), and \(\gamma\), and no predicates other than
\(\boldsymbol{T}\). Let \(M\_1 = \langle D\_1 ,I\_1 \rangle\) be as
follows:
\[\begin{align}
D\_1 &= \textit{Sent}\_{L\_1} \\
I\_1(\alpha) &= \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
I\_1 (\beta) &= \boldsymbol{T}\alpha \\
I\_1 (\gamma) &= \neg \boldsymbol{T}\alpha
\end{align}\]
**Ground model** \(M\_2\)
Suppose that \(L\_2\) contains one non-quote name, \(\tau\), and
no predicates other than \(\boldsymbol{T}\). Let \(M\_2 = \langle D\_2
,I\_2 \rangle\) be as follows:
\[\begin{align}
D\_2 &= \textit{Sent}\_{L\_2} \\
I\_2 (\tau) &= \boldsymbol{T}\tau
\end{align}\]
**Ground model** \(M\_3\)
Suppose that \(L\_3\) contains one non-quote name, \(\lambda\),
and no predicates other than \(\boldsymbol{T}\). Let \(M\_3 = \langle
D\_3 ,I\_3 \rangle\) be as follows:
\[\begin{align}
D\_3 &= \textit{Sent}\_{L\_3} \\
I\_3 (\lambda) &= \neg \boldsymbol{T}\lambda
\end{align}\]
**Theorem 2.1**
(1)
\(M\_1\) can be expanded to exactly one Tarskian model: in this
model, the sentences \((\boldsymbol{T}\beta \vee
\boldsymbol{T}\gamma)\) and \(\boldsymbol{T}\alpha\) are true, while
the sentence \(\neg \boldsymbol{T}\alpha\) is false.
(2)
\(M\_2\) can be expanded to exactly two Tarskian models, in one of
which the sentence \(\boldsymbol{T}\tau\) is true and in the other of
which the sentence \(\boldsymbol{T}\tau\) is false.
(3)
\(M\_3\) cannot be expanded to a Tarskian model.
The proofs of (1) and (2) are beyond the scope of this article, but
some remarks are in order.
Re (1): The fact that \(M\_1\) can be expanded to a Tarskian model is
not surprising, given the reasoning in Example 1.1, above: any initial
hypothesis about the truth values of the three sentences in question
leads, after three iterations of the revision process, to a stable
hypothesis that \((\boldsymbol{T}\beta \vee \boldsymbol{T}\gamma)\)
and \(\boldsymbol{T}\alpha\) are true, while \(\neg
\boldsymbol{T}\alpha\) is false. The fact that \(M\_1\) can be expanded
to *exactly* one Tarskian model needs the so-called
*Transfer Theorem*, Gupta and Belnap 1993, Theorem 2D.4.
Remark: In the introductory remarks, above, we claim that there are
consistent classical interpreted languages that refer to their own
sentences and have their own truth predicates. Clause (1) of Theorem
2.1 delivers an example. Let \(M\_1 '\) be the unique Tarskian
expansion of \(M\_1\). Then the language \(L\_1\), interpreted by \(M\_1
'\), is an interpreted language that has its own truth predicate
satisfying the T-biconditionals classically understood, obeys the
rules of standard classical logic, and has the ability to refer to
each of its own sentences. Thus Tarski was not quite right in his view
that any language that obeys the rules of standard classical logic and
can refer to each of its own sentences must be inconsistent if it
contains its own truth predicate.
Re (2): The only potentially problematic self-reference is in the
sentence \(\boldsymbol{T}\tau,\) the so-called *truth teller*,
which says of itself that it is true. Informal reasoning suggests that
the truth teller can consistently be assigned either classical truth
value: if you assign it the value \(\mathbf{t}\) then no paradox is
produced, since the sentence now truly says of itself that it is true;
and if you assign it the value \(\mathbf{f}\) then no paradox is
produced, since the sentence now falsely says of itself that it is
true. Theorem 2.1 (2) formalizes this point, i.e., \(M\_2\) can be
expanded to one Tarskian model in which \(\boldsymbol{T}\tau\) is true
and one in which \(\boldsymbol{T}\tau\) is false. The fact that
\(M\_2\) can be expanded to *exactly* two Tarskian models needs
the Transfer Theorem, alluded to above. Note that the language
\(L\_2\), interpreted by either of these expansions, provides another
example of an interpreted language that has its own truth predicate
satisfying the T-biconditionals classically understood, obeys the
rules of standard classical logic, and has the ability to refer to
each of its own sentences.
Proof of (3). Suppose that \(M\_3 ' = \langle D\_3 ,I\_3 '\rangle\) is a
classical expansion of \(M\_3\) to all of \(L\_3\). Since \(M\_3 '\) is
an expansion of \(M\_3\), \(I\_3\) and \(I\_3 '\) agree on all the names of
\(L\_3\). So
\[
I\_3 '(\lambda) = I\_3 (\lambda) = \neg \boldsymbol{T}\lambda =
I\_3(\lsquo \neg \boldsymbol{T}\lambda \rsquo)
= I\_3 '(\lsquo \neg \boldsymbol{T}\lambda\rsquo).
\]
So the sentences \(\boldsymbol{T}\lambda\) and
\(\boldsymbol{T}\)'\(\neg \boldsymbol{T}\lambda\)' have
the same truth value in \(M\_3 '\). So the T-biconditional
\[
\boldsymbol{T}\lsquo \neg \boldsymbol{T}\lambda\rsquo
\equiv \neg \boldsymbol{T}\lambda
\]
is false in \(M\_3 '\).
Remark: The language \(L\_3\) interpreted by the ground model \(M\_3\)
formalizes the liar's paradox, with the sentence \(\neg
\boldsymbol{T}\lambda\) as the offending liar's sentence. Thus,
despite Clauses (1) and (2) of Theorem 2.1, Clause (3) strongly suggests
that in a semantics for languages capable of expressing their own
truth concepts, \(\boldsymbol{T}\) cannot, in general, have a
classical signification; and the 'iff' in the
T-biconditionals will not be read as the classical biconditional. We
take these suggestions up in Section 4, below.
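Restricted to the finitely many distinguished sentences, Theorem 2.1 can be checked by brute force: a classical expansion satisfies the T-biconditionals exactly when the hypothesized extension of \(\boldsymbol{T}\) is a fixed point of classical evaluation. The sketch below is our own encoding (it ignores the rest of the domain and is no substitute for the Transfer Theorem's uniqueness argument):

```python
from itertools import product

# For each ground model, map each distinguished sentence to a function
# computing its classical truth value from a hypothesis h.
M1 = {  # A = Tb v Tg,  B = Ta,  C = ~Ta
    'A': lambda h: h['B'] or h['C'],
    'B': lambda h: h['A'],
    'C': lambda h: not h['A'],
}
M2 = {'S': lambda h: h['S']}      # the truth teller Tt
M3 = {'X': lambda h: not h['X']}  # the liar ~Tl

def tarskian_expansions(model):
    # A hypothesis yields a Tarskian expansion iff, for every
    # sentence, its hypothesized value equals its computed value.
    names = sorted(model)
    found = []
    for vals in product([True, False], repeat=len(names)):
        h = dict(zip(names, vals))
        if all(h[n] == model[n](h) for n in names):
            found.append(h)
    return found

print(len(tarskian_expansions(M1)),  # 1: A and B true, C false
      len(tarskian_expansions(M2)),  # 2: truth teller goes either way
      len(tarskian_expansions(M3)))  # 0: the liar admits none
```

The counts 1, 2 and 0 match Clauses (1), (2) and (3) of the theorem.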
## 3. Basic notions of the RTT
### 3.1 Revision rules
In Section 1, we informally sketched the central thought of the RTT,
namely, that we can use the T-biconditionals to generate a
*revision rule* -- a rule for revising a hypothesis about
the extension of the truth predicate. Here we will formalize this
notion, and work through an example from Section 1.
In general, let \(L\) be a truth language and \(M\) be a ground model for
\(L\). An *hypothesis* is a function \(h:D \rightarrow
\{\mathbf{t}, \mathbf{f}\}\). A hypothesis will in effect be a
hypothesized classical interpretation for \(\boldsymbol{T}\).
Let's work with an example that combines features from the
ground models \(M\_1\) and \(M\_3\). We will state the example formally,
but reason in a semiformal way, to transition from one hypothesized
extension of \(\boldsymbol{T}\) to another.
**Example 3.1** Suppose that \(L\) contains four
non-quote names, \(\alpha , \beta , \gamma\) and \(\lambda\) and no
predicates other than \(\boldsymbol{T}\). Also suppose that \(M =
\langle D,I\rangle\) is as follows:
\[\begin{align}
D &= \textit{Sent}\_L \\
I(\alpha) &= \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
I(\beta) &= \boldsymbol{T}\alpha \\
I(\gamma) &= \neg \boldsymbol{T}\alpha \\
I(\lambda) &= \neg \boldsymbol{T}\lambda
\end{align}\]
It will be convenient to let
\[\begin{align}
&A \text{ be the sentence } \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
&B \text{ be the sentence } \boldsymbol{T}\alpha \\
&C \text{ be the sentence } \neg \boldsymbol{T}\alpha \\
&X \text{ be the sentence } \neg \boldsymbol{T}\lambda
\end{align}\]
Thus:
\[\begin{align}
D &= \textit{Sent}\_L \\
I(\alpha) &= A \\
I(\beta) &= B \\
I(\gamma) &= C \\
I(\lambda) &= X
\end{align}\]
Suppose that the hypothesis \(h\_0\) hypothesizes that \(A\) is false,
\(B\) is true, \(C\) is false and \(X\) is false. Thus
\[\begin{align}
h\_0 (A) &= \mathbf{f} \\
h\_0 (B) &= \mathbf{t} \\
h\_0 (C) &= \mathbf{f} \\
h\_0 (X) &= \mathbf{f}
\end{align}\]
Now we will engage in some semiformal reasoning, *on the basis of
hypothesis* \(h\_0\). Among the four sentences \(A, B, C\) and
\(X\), the hypothesis \(h\_0\) puts only \(B\) in the extension of \(\boldsymbol{T}\).
Thus, reasoning from \(h\_0\), we conclude that:
\[\begin{align}
\neg \boldsymbol{T}\alpha
&\text{ since the referent of } \alpha \text{ is not in the extension of }
\boldsymbol{T} \\
\boldsymbol{T}\beta
&\text{ since the referent of } \beta \text{ is in the extension of }
\boldsymbol{T} \\
\neg \boldsymbol{T}\gamma
&\text{ since the referent of } \gamma \text{ is not in the extension of }
\boldsymbol{T} \\
\neg \boldsymbol{T}\lambda
&\text{ since the referent of } \lambda \text{ is not in the extension of }
\boldsymbol{T}.
\end{align}\]
The T-biconditionals for the four sentences \(A, B, C\) and \(X\) are
as follows:
\[\begin{align}
\tag{T$\_A$} A \text{ is true iff }& \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
\tag{T$\_B$} B \text{ is true iff }& \boldsymbol{T}\alpha \\
\tag{T$\_C$} C \text{ is true iff }& \neg \boldsymbol{T}\alpha \\
\tag{T$\_X$} X \text{ is true iff }& \neg \boldsymbol{T}\lambda
\end{align}\]
Thus, reasoning from \(h\_0\), we conclude that:
\[\begin{align}
&A \text{ is true} \\
&B \text{ is not true} \\
&C \text{ is true} \\
&X \text{ is true}
\end{align}\]
This produces our new hypothesis \(h\_1\):
\[\begin{align}
h\_1 (A) &= \mathbf{t} \\
h\_1 (B) &= \mathbf{f} \\
h\_1 (C) &= \mathbf{t} \\
h\_1 (X) &= \mathbf{t}
\end{align}\]
Let's revise our hypothesis once again. So now we will engage in
some semiformal reasoning, *on the basis of hypothesis*
\(h\_1\). Hypothesis \(h\_1\) puts \(A, C\) and \(X\), but not \(B\), in
the extension of \(\boldsymbol{T}\). Thus, reasoning from \(h\_1\),
we conclude that:
\[\begin{align}
\boldsymbol{T}\alpha
&\text{ since the referent of } \alpha \text{ is in the extension of }
\boldsymbol{T} \\
\neg \boldsymbol{T}\beta
&\text{ since the referent of } \beta \text{ is not in the extension of }
\boldsymbol{T} \\
\boldsymbol{T}\gamma
&\text{ since the referent of } \gamma \text{ is in the extension of }
\boldsymbol{T} \\
\boldsymbol{T}\lambda
&\text{ since the referent of } \lambda \text{ is in the extension of }
\boldsymbol{T}.
\end{align}\]
Recall the T-biconditionals for the four sentences \(A, B, C\) and
\(X\), given above. Reasoning from \(h\_1\) and these T-biconditionals,
we conclude that:
\[\begin{align}
&A \text{ is true} \\
&B \text{ is true} \\
&C \text{ is not true} \\
&X \text{ is not true}
\end{align}\]
This produces our *new* new hypothesis \(h\_2\):
\[\begin{align}
h\_2 (A) &= \mathbf{t} \\
h\_2 (B) &= \mathbf{t} \\
h\_2 (C) &= \mathbf{f} \\
h\_2 (X) &= \mathbf{f}
\end{align}\]
\(\Box\)
Let's formalize the semiformal reasoning carried out in Example
3.1. First we hypothesized that certain sentences were, or were not,
in the extension of \(\boldsymbol{T}\). Consider ordinary classical
model theory. Suppose that our language has a predicate \(G\) and a
name \(a\), and that we have a model \(M = \langle D,I\rangle\) which
places the referent of \(a\) inside the extension of \(G\):
\[
I(G)(I(a)) = \mathbf{t}
\]
Then we conclude, classically, that the sentence \(Ga\) is true in
\(M\). It will be useful to have some notation for the classical truth
value of a sentence \(S\) in a classical model \(M\). We will write
\(\textit{Val}\_M (S)\). In this case, \(\textit{Val}\_M (Ga) =
\mathbf{t}\). In Example 3.1, we did not start with a classical model
of the whole language \(L\), but only a classical model of the
\(\boldsymbol{T}\)-free fragment of \(L\). But then we added a
hypothesis, in order to get a classical model of all of \(L\).
Let's use the notation \(M + h\) for the classical model of all
of \(L\) that you get when you extend \(M\) by assigning
\(\boldsymbol{T}\) an extension via the hypothesis \(h\). Once you
have assigned an extension to the predicate \(\boldsymbol{T}\), you
can calculate the truth values of the various sentences of \(L\). That
is, for each sentence \(S\) of \(L\), we can calculate
\[
\textit{Val}\_{M + h}(S)
\]
In Example 3.1, we started with hypothesis \(h\_0\) as follows:
\[\begin{align}
h\_0 (A) &= \mathbf{f} \\
h\_0 (B) &= \mathbf{t} \\
h\_0 (C) &= \mathbf{f} \\
h\_0 (X) &= \mathbf{f}
\end{align}\]
Then we calculated as follows:
\[\begin{align}
\textit{Val}\_{M+h\_0}(\boldsymbol{T}\alpha) &= \mathbf{f} \\
\textit{Val}\_{M+h\_0}(\boldsymbol{T}\beta) &= \mathbf{t} \\
\textit{Val}\_{M+h\_0}(\boldsymbol{T}\gamma) &= \mathbf{f} \\
\textit{Val}\_{M+h\_0}(\boldsymbol{T}\lambda) &= \mathbf{f}
\end{align}\]
And then we concluded as follows:
\[\begin{align}
\textit{Val}\_{M+h\_0}(A) &= \textit{Val}\_{M+h\_0}(\boldsymbol{T}\beta \lor \boldsymbol{T}\gamma) = \mathbf{t} \\
\textit{Val}\_{M+h\_0}(B) &= \textit{Val}\_{M+h\_0}(\boldsymbol{T}\alpha) = \mathbf{f} \\
\textit{Val}\_{M+h\_0}(C) &= \textit{Val}\_{M+h\_0}(\neg\boldsymbol{T}\alpha) = \mathbf{t} \\
\textit{Val}\_{M+h\_0}(X) &= \textit{Val}\_{M+h\_0}(\neg\boldsymbol{T}\lambda) = \mathbf{t}
\end{align}\]
These conclusions generated our new hypothesis, \(h\_1\):
\[\begin{align}
h\_1 (A) &= \mathbf{t} \\
h\_1 (B) &= \mathbf{f} \\
h\_1 (C) &= \mathbf{t} \\
h\_1 (X) &= \mathbf{t}
\end{align}\]
Note that, in general,
\[
h\_1 (S) = \textit{Val}\_{M+h\_0}(S).
\]
We are now prepared to define the *revision rule* given by a
ground model \(M = \langle D,I\rangle\). In general, given an
hypothesis \(h\), let \(M + h = \langle D,I'\rangle\) be the model of
\(L\) which agrees with \(M\) on the \(\boldsymbol{T}\)-free fragment
of \(L\), and which is such that \(I'(\boldsymbol{T}) = h\). So \(M +
h\) is just a classical model for all of \(L\). For any model \(M +
h\) of all of \(L\) and any sentence \(S\) of \(L\), let
\(\textit{Val}\_{M+h}(S)\) be the ordinary classical truth value of
\(S\) in \(M + h\).
**Definition 3.2**
Suppose that \(L\) is a truth language and that \(M = \langle
D,I\rangle\) is a ground model for \(L\). The *revision rule*,
\(\tau\_M\), is the function mapping hypotheses to hypotheses, as
follows:
\[\tau\_M (h)(d) =
\begin{cases}
\mathbf{t}, \text{ if } d\in D \text{ is a sentence of } L \text{ and } \textit{Val}\_{M+h}(d) = \mathbf{t} \\
\mathbf{f}, \text{ otherwise}
\end{cases}\]
The 'otherwise' clause tells us that if \(d\) is not a
sentence of \(L\), then, after one application of revision, we stick
with the hypothesis that \(d\) is not
true.[5]
Note that, in Example 3.1, \(h\_1 = \tau\_M (h\_0)\) and \(h\_2 = \tau\_M
(h\_1)\). We will often drop the subscripted '\(M\)' when
the context makes it clear which ground model is at issue.
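Definition 3.2 can be sketched in code for the ground model of Example 3.1. In this minimal illustration (our own encoding, restricted to the four distinguished sentences), `tau` computes \(\textit{Val}\_{M+h}(S)\) for each sentence \(S\):

```python
# A = Tb v Tg,  B = Ta,  C = ~Ta,  X = ~Tl  (Example 3.1)
def tau(h):
    # One application of the revision rule: the new value of each
    # sentence is its classical truth value in M + h.
    return {
        'A': h['B'] or h['C'],
        'B': h['A'],
        'C': not h['A'],
        'X': not h['X'],
    }

h0 = {'A': False, 'B': True, 'C': False, 'X': False}
h1 = tau(h0)
h2 = tau(h1)
print(h1)  # {'A': True, 'B': False, 'C': True, 'X': True}
print(h2)  # {'A': True, 'B': True, 'C': False, 'X': False}
```

These are exactly the hypotheses \(h\_1\) and \(h\_2\) computed semiformally in Example 3.1.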
### 3.2 Revision sequences
Let's pick up Example 3.1 and see what happens when we iterate
the application of the revision rule.
**Example 3.3** (Example 3.1 continued)
Recall that \(L\) contains four non-quote names, \(\alpha , \beta ,
\gamma\) and \(\lambda\) and no predicates other than
\(\boldsymbol{T}\). Also recall that \(M = \langle D,I\rangle\) is as
follows:
\[\begin{align}
D &= \textit{Sent}\_L \\
I(\alpha) &= A = \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
I(\beta) &= B = \boldsymbol{T}\alpha \\
I(\gamma) &= C = \neg \boldsymbol{T}\alpha \\
I(\lambda) &= X = \neg \boldsymbol{T}\lambda
\end{align}\]
The following table indicates what happens with repeated applications
of the revision rule \(\tau\_M\) to the hypothesis \(h\_0\) from Example
3.1. In this table, we will write \(\tau\) instead of \(\tau\_M\):
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| \(S\) | \(h\_0 (S)\) | \(\tau(h\_0)(S)\) | \(\tau^2 (h\_0)(S)\) | \(\tau^3 (h\_0)(S)\) | \(\tau^4 (h\_0)(S)\) | \(\cdots\) |
| \(A\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(B\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(C\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(X\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) |
So \(h\_0\) generates a *revision sequence* (see Definition 3.7,
below). And \(A\) and \(B\) are *stably true* in that revision
sequence (see Definition 3.6, below), while \(C\) is *stably
false*. The liar sentence \(X\) is, unsurprisingly, neither stably
true nor stably false: the liar sentence is *unstable*. A
similar calculation would show that \(A\) is stably true, regardless
of the initial hypothesis: thus \(A\) is *categorically true*
(see Definition 3.8).
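The stability verdicts of Example 3.3 can be checked mechanically. The sketch below is our own encoding of the same revision rule; the cutoff of 20 iterations and the settling point are arbitrary finite stand-ins for the full revision sequence:

```python
def tau(h):
    # Revision rule for Example 3.3: A = Tb v Tg, B = Ta, C = ~Ta, X = ~Tl.
    return {'A': h['B'] or h['C'], 'B': h['A'],
            'C': not h['A'], 'X': not h['X']}

def classify(h, steps=20, settle=4):
    # Iterate the revision rule, then inspect the tail of the
    # sequence: a constant tail means a stable truth value.
    seq = [h]
    for _ in range(steps):
        seq.append(tau(seq[-1]))
    tails = {s: {hyp[s] for hyp in seq[settle:]} for s in h}
    return {s: ('stably true' if t == {True}
                else 'stably false' if t == {False}
                else 'unstable')
            for s, t in tails.items()}

h0 = {'A': False, 'B': True, 'C': False, 'X': False}
print(classify(h0))
# {'A': 'stably true', 'B': 'stably true', 'C': 'stably false', 'X': 'unstable'}
```

The verdicts match the table: \(A\) and \(B\) stably true, \(C\) stably false, and the liar \(X\) unstable.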
Before giving a precise definition of a *revision sequence*, we
give an example where we would want to carry the revision process
beyond the finite stages, \(h, \tau^1 (h), \tau^2 (h), \tau^3 (h)\),
and so on.
**Example 3.4**
Suppose that \(L\) contains nonquote names \(\alpha\_0, \alpha\_1,
\alpha\_2, \alpha\_3,\ldots\), and unary predicates \(G\) and
\(\boldsymbol{T}\). Now we will specify a ground model \(M = \langle
D,I\rangle\) where the name \(\alpha\_0\) refers to some tautology, and
where
\[\begin{align}
&\text{the name } \alpha\_1 \text{ refers to the sentence } \boldsymbol{T}\alpha\_0 \\
&\text{the name } \alpha\_2 \text{ refers to the sentence } \boldsymbol{T}\alpha\_1 \\
&\text{the name } \alpha\_3 \text{ refers to the sentence } \boldsymbol{T}\alpha\_2 \\
&\ \vdots
\end{align}\]
More formally, let \(A\_0\) be the sentence \(\boldsymbol{T}\alpha\_0
\vee \neg \boldsymbol{T}\alpha\_0\), and for each \(n \ge 0\), let
\(A\_{n+1}\) be the sentence \(\boldsymbol{T}\alpha\_n\). Thus \(A\_1\)
is the sentence \(\boldsymbol{T}\alpha\_0\), and \(A\_2\) is the
sentence \(\boldsymbol{T}\alpha\_1\), and \(A\_3\) is the sentence
\(\boldsymbol{T}\alpha\_2\), and so on. Our ground model \(M = \langle
D,I\rangle\) is as follows:
\[\begin{align}
D &= \textit{Sent}\_L \\
I(\alpha\_n) &= A\_n \\
I(G)(A) &= \mathbf{t} \text{ iff } A = A\_n \text{ for some } n
\end{align}\]
Thus, the extension of \(G\) is the following set of sentences:
\[\{A\_0, A\_1, A\_2, A\_3 , \ldots \} = \{(\boldsymbol{T}\alpha\_0 \vee
\neg \boldsymbol{T}\alpha\_0), \boldsymbol{T}\alpha\_0,
\boldsymbol{T}\alpha\_1, \boldsymbol{T}\alpha\_2, \boldsymbol{T}\alpha\_3 ,
\ldots \}.
\]
Finally let \(B\) be the sentence \(\forall x(Gx \supset
\boldsymbol{T}x)\). Let \(h\) be any hypothesis for which we have, for
each natural number \(n\),
\[
h(A\_n) = h(B) = \mathbf{f}.
\]
The following table indicates what happens with repeated applications
of the revision rule \(\tau\_M\) to the hypothesis \(h\). In this
table, we will write \(\tau\) instead of \(\tau\_M\):
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| \(S\) | \(h(S)\) | \(\tau(h)(S)\) | \(\tau^2 (h)(S)\) | \(\tau^3 (h)(S)\) | \(\tau^4 (h)(S)\) | \(\cdots\) |
| \(A\_0\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_1\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_2\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_3\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_4\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) |
| \(B\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
At the \(0^{\text{th}}\) stage, each \(A\_n\) is outside the
hypothesized extension of \(\boldsymbol{T}\). But from the
\(n{+}1^{\text{th}}\) stage onwards, \(A\_n\) is *in* the
hypothesized extension of \(\boldsymbol{T}\). So, for each \(n\), the
sentence \(A\_n\) is eventually stably hypothesized to be true. Despite
this, there is no *finite* stage at which all the
\(A\_n\)'s are hypothesized to be true: as a result the sentence
\(B = \forall x(Gx \supset \boldsymbol{T}x)\) remains false at each
finite stage. This suggests extending the process as follows:
| | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(S\) | \(h(S)\) | \(\tau(h)(S)\) | \(\tau^2 (h)(S)\) | \(\tau^3 (h)(S)\) | \(\cdots\) | \(\omega\) | \(\omega +1\) | \(\omega +2\) | \(\cdots\) |
| \(A\_0\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_1\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_2\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_3\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_4\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) |
| \(B\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
Thus, if we allow the revision process to proceed beyond the finite
stages, then the sentence \(B = \forall x(Gx \supset
\boldsymbol{T}x)\) is stably true from the \(\omega{+}1^{\text{th}}\)
stage onwards. \(\Box\)
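The finite part of Example 3.4 can be probed in code. In the sketch below (our own illustration; the closed form for \(A\_n\) is read off the table above, starting from the all-false hypothesis, and `horizon` is a finite stand-in for the infinite list of \(A\_n\)'s), each \(A\_n\) eventually stabilizes, yet \(B\) is false at every finite stage:

```python
def A_true_at(n, stage):
    # Reading off the table: starting from the all-false hypothesis,
    # A_n is hypothesized true exactly from stage n+1 onwards.
    return stage >= n + 1

def B_true_at(stage, horizon=10_000):
    # B = forall x (Gx -> Tx): true at a stage iff every A_n was
    # hypothesized true at the previous stage.  Since A_{s-1} is
    # still false at stage s-1, B fails at every finite stage s.
    if stage == 0:
        return False
    return all(A_true_at(n, stage - 1) for n in range(horizon))

assert all(A_true_at(n, n + 1) for n in range(100))   # each A_n stabilizes
assert not any(B_true_at(s) for s in range(100))      # B never true (finitely)
```

Only at stage \(\omega\), where every \(A\_n\) has already stabilized to true, is the quantifier in \(B\) verified, so that \(B\) comes out true from stage \(\omega{+}1\) onwards.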
In Example 3.4, the intuitive verdict is that not only should each
\(A\_n\) receive a stable truth value of \(\mathbf{t}\), but so should
the sentence \(B = \forall x(Gx \supset \boldsymbol{T}x)\). The only
way to ensure this is to carry the revision process beyond the finite
stages. So we will consider revision sequences that are very long: not
only will a revision sequence have an \(n^{\text{th}}\) stage for each
finite number \(n\), but also an \(\eta^{\text{th}}\) stage for every
*ordinal* number \(\eta\). (The next paragraph is to help the
reader unfamiliar with ordinal numbers.)
One way to think of the ordinal numbers is as follows. Start with the
finite natural numbers:
\[
0, 1, 2, 3, \ldots
\]
Add a number, \(\omega\), greater than all of these but not the
immediate successor of any of them:
\[
0, 1, 2, 3, \ldots ,\omega
\]
And then take the successor of \(\omega\), its successor, and so
on:
\[
0, 1, 2, 3, \ldots ,\omega , \omega +1, \omega +2, \omega +3\ldots
\]
Then add a number \(\omega +\omega\), or \(\omega \times 2\), greater
than all of these (and again, not the immediate successor of any), and
start over, reiterating this process over and over:
\[\begin{align}
&0, 1, 2, 3, \ldots \\
&\omega , \omega +1, \omega +2, \omega +3,\ldots, \\
&\omega \times 2, (\omega \times 2)+1, (\omega \times 2)+2, (\omega \times 2)+3,\ldots, \\
&\omega \times 3, (\omega \times 3)+1, (\omega \times 3)+2, (\omega \times 3)+3,\ldots \\
&\ \vdots
\end{align}\]
At the end of this, we add an ordinal number \(\omega \times \omega\)
or \(\omega^2\):
\[\begin{align}
&0, 1, 2, \ldots ,\omega , \omega +1, \omega +2, \ldots ,\omega \times 2, (\omega \times 2)+1,\ldots, \\
&\omega \times 3, \ldots ,\omega \times 4, \ldots ,\omega \times 5,
\ldots ,\omega^2, \omega^2 +1,\ldots
\end{align}\]
The ordinal numbers have the following structure: every ordinal number
has an immediate successor known as a *successor ordinal*; and
for any infinitely ascending sequence of ordinal numbers, there is a
*limit ordinal* which is greater than all the members of the
sequence and which is not the immediate successor of any member of the
sequence. Thus the following are successor ordinals: \(5, 178, \omega
+12, (\omega \times 5)+56, \omega^2 +8\); and the following are limit
ordinals: \(\omega , \omega \times 2, \omega^2 , (\omega^2 +\omega)\),
etc. Given a limit ordinal \(\eta\), a sequence \(S\) of objects is an
\(\eta\)-*long* sequence if there is an object \(S\_{\delta}\)
for every ordinal \(\delta \lt \eta\). We will denote the class of
ordinals as \(\textsf{On}\). Any sequence \(S\) of objects is an
\(\textsf{On}\)-*long* sequence if there is an object
\(S\_{\delta}\) for every ordinal \(\delta\).
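For readers who want a concrete handle on this structure, ordinals below \(\omega^2\) can be coded as pairs of natural numbers. The following sketch is purely illustrative (the pair coding and the function names are our own, not part of the RTT):

```python
# Illustrative sketch (not part of the RTT formalism): ordinals below
# omega^2 coded as pairs (a, b) standing for (omega * a) + b.
# Python's tuple comparison is lexicographic, which matches the
# ordinal order under this coding.

def successor(o):
    """Immediate successor of the ordinal coded by (a, b)."""
    a, b = o
    return (a, b + 1)

def is_limit(o):
    """Limit ordinals below omega^2 are exactly the pairs (a, 0), a > 0."""
    a, b = o
    return b == 0 and a > 0

omega = (1, 0)
print((0, 100) < omega)   # every finite ordinal is below omega -> True
print(is_limit((2, 0)))   # omega * 2 is a limit ordinal -> True
print(successor(omega))   # omega + 1 -> (1, 1)
```

Note that a limit ordinal such as \(\omega\) is not `successor(o)` for any pair `o`, matching the informal description above.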
When assessing whether a sentence receives a stable truth value, the
RTT considers sequences of hypotheses of length \(\textsf{On}\). So
suppose that \(S\) is an \(\textsf{On}\)-long sequence of hypotheses,
and let \(\zeta\) and \(\eta\) range over ordinals. Clearly, in order
for \(S\) to represent the revision process, we need the
\(\zeta{+}1^{\text{th}}\) hypothesis to be generated from the
\(\zeta^{\text{th}}\) hypothesis by the revision rule. So we insist
that \(S\_{\zeta +1} = \tau\_M(S\_{\zeta})\). But what should we do at a
limit stage? That is, how should we set \(S\_{\eta}(d)\) when \(\eta\)
is a limit ordinal? Clearly any object that is stably true [false]
*up to* that stage should be true [false] *at* that stage.
Thus consider Example 3.4. The sentence \(A\_2\), for example, is true
up to the \(\omega^{\text{th}}\) stage; so we set \(A\_2\) to be true
*at* the \(\omega^{\text{th}}\) stage. For objects that do not
stabilize up to that stage, Gupta and Belnap 1993 adopt a liberal
policy: when constructing a revision sequence \(S\), if the value of
the object \(d \in D\) has not stabilized by the time you get to the
limit stage \(\eta\), then you can set \(S\_{\eta}(d)\) to be whichever
of \(\mathbf{t}\) or \(\mathbf{f}\) you like. Before we give the
precise definition of a *revision sequence*, we continue with
Example 3.3 to see an application of this idea.
**Example 3.5** (Example 3.3 continued)
Recall that \(L\) contains four non-quote names, \(\alpha , \beta ,
\gamma\) and \(\lambda\) and no predicates other than
\(\boldsymbol{T}\). Also recall that \(M = \langle D,I\rangle\) is as
follows:
\[\begin{align}
D &= \textit{Sent}\_L \\
I(\alpha) &= A = \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
I(\beta) &= B = \boldsymbol{T}\alpha \\
I(\gamma) &= C = \neg \boldsymbol{T}\alpha \\
I(\lambda) &= X = \neg \boldsymbol{T}\lambda
\end{align}\]
The following table indicates what happens with repeated applications
of the revision rule \(\tau\_M\) to the hypothesis \(h\_0\) from Example
3.3. For each ordinal \(\eta\), we will indicate the
\(\eta^{\text{th}}\) hypothesis by \(S\_{\eta}\) (suppressing the index
\(M\) on \(\tau\)). Thus \(S\_0 = h\_0,\) \(S\_1 = \tau(h\_0),\) \(S\_2 =
\tau^2 (h\_0),\) \(S\_3 = \tau^3 (h\_0),\) and \(S\_{\omega},\) the
\(\omega^{\text{th}}\) hypothesis, is determined in some way from the
hypotheses leading up to it. So, starting with \(h\_0\) from Example
3.3, our revision sequence begins as follows:
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| \(S\) | \(S\_0 (S)\) | \(S\_1 (S)\) | \(S\_2 (S)\) | \(S\_3 (S)\) | \(S\_4 (S)\) | \(\cdots\) |
| \(A\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(B\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(C\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(X\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) |
What happens at the \(\omega^{\text{th}}\) stage? \(A\) and \(B\) are
stably true *up to* the \(\omega^{\text{th}}\) stage, and \(C\)
is stably false *up to* the \(\omega^{\text{th}}\) stage. So
\(at\) the \(\omega^{\text{th}}\) stage, we must have the
following:
| | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(S\) | \(S\_0 (S)\) | \(S\_1 (S)\) | \(S\_2 (S)\) | \(S\_3 (S)\) | \(S\_4 (S)\) | \(\cdots\) | \(S\_{\omega}(S)\) |
| \(A\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) |
| \(B\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) |
| \(C\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{f}\) |
| \(X\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) | **?** |
But the entry for \(S\_{\omega}(X)\) can be either \(\mathbf{t}\) or
\(\mathbf{f}\). In other words, the initial hypothesis \(h\_0\)
generates at least two revision sequences. Every revision sequence
\(S\) that has \(h\_0\) as its initial hypothesis must have
\(S\_{\omega}(A) = \mathbf{t}, S\_{\omega}(B) = \mathbf{t}\), and
\(S\_{\omega}(C) = \mathbf{f}\). But there is some revision sequence
\(S\), with \(h\_0\) as its initial hypothesis, and with
\(S\_{\omega}(X) = \mathbf{t}\); and there is some revision sequence
\(S'\), with \(h\_0\) as its initial hypothesis, and with
\(S'\_{\omega}(X) = \mathbf{f}\). \(\Box\)
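The finite stages of this table can be checked mechanically. The following sketch hand-codes the revision rule \(\tau\_M\) of Example 3.5 (the dictionary encoding and the name `revise` are illustrative assumptions, with `True`/`False` standing for \(\mathbf{t}\)/\(\mathbf{f}\)):

```python
# Finite stages of the revision sequence of Example 3.5. Each hypothesis
# maps the four sentences to classical truth values. The clauses below
# transcribe the revision rule tau_M for this ground model.

def revise(h):
    return {
        'A': h['B'] or h['C'],  # A = T(beta) v T(gamma)
        'B': h['A'],            # B = T(alpha)
        'C': not h['A'],        # C = ~T(alpha)
        'X': not h['X'],        # X = ~T(lambda)
    }

h = {'A': False, 'B': True, 'C': False, 'X': False}  # h_0 of Example 3.3
stages = [h]
for _ in range(4):
    h = revise(h)
    stages.append(h)

for n, s in enumerate(stages):
    print(n, s)
```

From stage 2 on, \(A\) and \(B\) stay true and \(C\) stays false, while \(X\) keeps flipping, matching the table above.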
We are now ready to define the notion of a *revision
sequence*:
**Definition 3.6**
Suppose that \(L\) is a truth language, and that \(M = \langle
D,I\rangle\) is a ground model. Suppose that \(S\) is an
\(\textsf{On}\)-long sequence of hypotheses. Then we say that \(d \in
D\) is *stably* \(\mathbf{t} [\mathbf{f}]\) in \(S\) iff for
some ordinal \(\theta\) we have
\[
S\_{\zeta}(d) = \mathbf{t}\ [\mathbf{f}], \text{ for every ordinal } \zeta \ge \theta.
\]
Suppose that \(S\) is an \(\eta\)-long sequence of hypotheses for some
limit ordinal \(\eta\). Then we say that \(d \in D\) is
*stably* \(\mathbf{t}\) \([\mathbf{f}]\) in \(S\) iff for some
ordinal \(\theta \lt \eta\) we have
\[
S\_{\zeta}(d) = \mathbf{t}\ [\mathbf{f}], \text{ for every ordinal } \zeta
\text{ such that } \zeta \ge \theta \text{ and } \zeta \lt \eta.
\]
If \(S\) is an \(\textsf{On}\)-long sequence of hypotheses and
\(\eta\) is a limit ordinal, then \(S|\_{\eta}\) is the initial segment
of \(S\) up to but not including \(\eta\). Note that \(S|\_{\eta}\) is
an \(\eta\)-long sequence of hypotheses.
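Over a finite prefix of a revision sequence, stability in the sense of Definition 3.6 can be approximated by a simple check. This is a heuristic sketch only (the helper name `stable_value` is invented, and a finite prefix cannot certify genuinely infinite stability):

```python
# Heuristic sketch of Definition 3.6 over a finite prefix: d counts as
# stably v from stage theta on iff the recorded values are constantly v
# from theta onward. Returns that v, or None if the tail is not constant
# (note: a return of False means "stably false", not "no answer").

def stable_value(values, theta=0):
    tail = values[theta:]
    return tail[0] if all(v == tail[0] for v in tail) else None

# Values from Example 3.5, stages 0-4:
print(stable_value([False, True, True, True, True], theta=1))    # A: stably true
print(stable_value([False, True, False, False, False], theta=2)) # C: stably false
print(stable_value([False, True, False, True, False]))           # X: unstable
```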
**Definition 3.7**
Suppose that \(L\) is a truth language, and that \(M = \langle
D,I\rangle\) is a ground model. Suppose that \(S\) is an
\(\textsf{On}\)-long sequence of hypotheses. \(S\) is a *revision
sequence for* \(M\) iff
* \(S\_{\zeta +1} = \tau\_M(S\_{\zeta})\), for each \(\zeta \in
\textsf{On}\), and
* for each limit ordinal \(\eta\) and each \(d \in D\), if \(d\) is
stably \(\mathbf{t}\) \([\mathbf{f}\)] in \(S|\_{\eta}\), then
\(S\_{\eta}(d) = \mathbf{t} [\mathbf{f}\)].
**Definition 3.8**
Suppose that \(L\) is a truth language, and that \(M = \langle
D,I\rangle\) is a ground model. We say that the sentence \(A\) is
*categorically true* [*false*] *in* \(M\) iff
\(A\) is stably \(\mathbf{t}\) \([\mathbf{f}]\) in every revision
sequence for \(M\). We say that \(A\) is *categorical in* \(M\)
iff \(A\) is either categorically true or categorically false in
\(M\).
We now illustrate these concepts with an example. The example will
also illustrate a new concept to be defined afterwards.
**Example 3.9**
Suppose that \(L\) is a truth language containing nonquote names
\(\beta , \alpha\_0, \alpha\_1, \alpha\_2, \alpha\_3,\ldots\), and unary
predicates \(G\) and \(\boldsymbol{T}\). Let \(B\) be the sentence
\[
\boldsymbol{T}\beta \vee \forall x\forall y(Gx \amp \neg \boldsymbol{T}x \amp Gy \amp \neg \boldsymbol{T}y \supset x=y).
\]
Let \(A\_0\) be the sentence \(\exists x(Gx \amp \neg
\boldsymbol{T}x)\). And for each \(n \ge 0\), let \(A\_{n+1}\) be the
sentence \(\boldsymbol{T}\alpha\_n\). Consider the following ground
model \(M = \langle D,I\rangle\)
\[\begin{align}
D &= \textit{Sent}\_L \\
I(\beta) &= B \\
I(\alpha\_n) &= A\_n \\
I(G)(A) &= \mathbf{t} \text{ iff } A = A\_n \text{ for some } n
\end{align}\]
Thus, the extension of \(G\) is the following set of sentences:
\(\{A\_0, A\_1, A\_2, A\_3 , \ldots \} = \{\exists x(Gx \amp \neg
\boldsymbol{T}x), \boldsymbol{T}\alpha\_0, \boldsymbol{T}\alpha\_1,
\boldsymbol{T} \alpha\_2, \boldsymbol{T}\alpha\_3 , \ldots \}\). Let
\(h\) be any hypothesis for which we have \(h(B) = \mathbf{f}\) and
for each natural number \(n\),
\[
h(A\_n) = \mathbf{f}.
\]
And let \(S\) be a revision sequence whose initial hypothesis is
\(h\), i.e., \(S\_0 = h\). The following table indicates some of the
values of \(S\_{\gamma}(C)\), for sentences \(C \in \{B, A\_0, A\_1, A\_2,
A\_3 , \ldots \}\). In the top row, we indicate only the ordinal number
representing the stage in the revision process.
| | | | | | | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | 0 | 1 | 2 | 3 | \(\cdots\) | \(\omega\) | \(\omega{+}1\) | \(\omega{+}2\) | \(\omega{+}3\) | \(\cdots\) | \(\omega{\times}2\) | \((\omega{\times}2){+}1\) | \((\omega{\times}2){+}2\) | \(\cdots\) |
| \(B\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_0\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_1\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(A\_2\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_3\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A\_4\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\ddots\) |
It is worth contrasting the behaviour of the sentence \(B\) and the
sentence \(A\_0\). From the \(\omega{+}1^{\text{th}}\) stage on, \(B\)
stabilizes as true. In fact, \(B\) is stably true in every revision
sequence for \(M\). Thus, \(B\) is categorically true in \(M\). The
sentence \(A\_0\), however, never quite stabilizes: it is usually true,
but within a few finite stages of a limit ordinal, the sentence
\(A\_0\) can be false. In these circumstances, we say that \(A\_0\) is
*nearly stably true*. (See Definition 3.10, below.) In fact,
\(A\_0\) is nearly stably true in every revision sequence for \(M\).
\(\Box\)
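The finite stages of the table above can be reproduced computationally if we truncate the model to finitely many sentences \(A\_0 ,\ldots ,A\_{N-1}\). The truncation is an assumption of this sketch: it reproduces the finite stages but not the genuine limit stages, where the limit policy comes into play:

```python
# Finite stages of Example 3.9, truncated to N sentences A_0, ..., A_{N-1}.
# A hypothesis h maps 'B' and each index n (for A_n) to a truth value.

N = 6

def revise(h):
    new = {}
    # A_0 = "some G-sentence is not true": some A_n is false under h
    new[0] = any(not h[n] for n in range(N))
    # A_{n+1} = T(alpha_n): takes the value h gives A_n
    for n in range(N - 1):
        new[n + 1] = h[n]
    # B = T(beta) v "at most one G-sentence is not true"
    new['B'] = h['B'] or sum(not h[n] for n in range(N)) <= 1
    return new

h = {**{n: False for n in range(N)}, 'B': False}  # the hypothesis h
for stage in range(4):
    print(stage, [h[n] for n in range(N)], h['B'])
    h = revise(h)
```

As in the table, one more \(A\_n\) turns true at each finite stage, while \(B\) stays false throughout the finite stages.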
Example 3.9 illustrates not only the notion of stability in a revision
sequence, but also of near stability, which we define now:
**Definition 3.10**
Suppose that \(L\) is a truth language, and that \(M = \langle
D,I\rangle\) is a ground model. Suppose that \(S\) is an
\(\textsf{On}\)-long sequence of hypotheses. Then we say that \(d \in
D\) is *nearly stably* \(\mathbf{t}\) \([\mathbf{f}]\)
*in* \(S\) iff for some ordinal \(\theta\) we have
for every \(\zeta \ge \theta\), there is a natural number \(n\) such
that, for every \(m \ge n\), \(S\_{\zeta +m}(d) = \mathbf{t}\)
\([\mathbf{f}]\).
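Near stability, too, can be approximated over finite data. In the sketch below, the values just after each limit ordinal are grouped into finite "blocks" (an encoding assumed purely for illustration, as are the function names):

```python
# Heuristic sketch of Definition 3.10. Each block lists the values of d
# at stages eta, eta+1, eta+2, ... just after one limit ordinal eta
# (past the initial ordinal theta of the definition, which the sketch
# ignores). A block "settles" if its later half is constant; d is nearly
# stably v when every block settles to the same value v.

def settles_to(block):
    tail = block[len(block) // 2:]
    return tail[0] if all(v == tail[0] for v in tail) else None

def nearly_stable_value(blocks):
    v = settles_to(blocks[0])
    if v is None:
        return None
    return v if all(settles_to(b) == v for b in blocks) else None

# A_0 from Example 3.9: false briefly after each limit ordinal, then true.
a0 = [[True, False, True, True, True],   # stages w, w+1, w+2, ...
      [True, False, True, True, True]]   # stages w*2, (w*2)+1, ...
print(nearly_stable_value(a0))           # nearly stably true

# The liar-like X from Example 3.5 keeps oscillating within each block.
x = [[False, True, False, True, False]]
print(nearly_stable_value(x))            # no near-stable value
```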
Gupta and Belnap 1993 characterize the difference between stability
and near stability as follows: "Stability *simpliciter*
requires an element [in our case a sentence] to settle down to a value
\(\mathbf{x}\) [in our case a truth value] after some initial
fluctuations say up to [an ordinal \(\eta\)]... In contrast, near
stability allows fluctuations after \(\eta\) also, but these
fluctuations must be confined to finite regions just after limit
ordinals" (p. 169). Gupta and Belnap 1993 introduce two theories
of truth, \(\boldsymbol{T}^\*\) and \(\boldsymbol{T}^{\#}\), based on
stability and near stability. Theorems 3.12 and 3.13, below,
illustrate an advantage of the system \(\boldsymbol{T}^{\#}\), i.e.,
the system based on near stability.
**Definition 3.11**
Suppose that \(L\) is a truth language, and that \(M = \langle
D,I\rangle\) is a ground model. We say that a sentence \(A\) is
*valid in* \(M\) *by* \(\boldsymbol{T}^\*\) iff \(A\) is
stably true in every revision
sequence[6].
And we say that a sentence \(A\) is *valid in* \(M\)
*by* \(\boldsymbol{T}^{\#}\) iff \(A\) is nearly stably true in
every revision sequence.
**Theorem 3.12**
Suppose that \(L\) is a truth language, and that \(M = \langle
D,I\rangle\) is a ground model. Then, for every sentence \(A\) of
\(L\), the following is valid in \(M\) by \(\boldsymbol{T}^{\#}\):
\[
\boldsymbol{T}\lsquo \neg A\rsquo \equiv \neg \boldsymbol{T}\lsquo A\rsquo.
\]
**Theorem 3.13**
There is a truth language \(L\) and a ground model \(M = \langle
D,I\rangle\) and a sentence \(A\) of \(L\) such that the following is
*not* valid in \(M\) by \(\boldsymbol{T}^\*\):
\[
\boldsymbol{T}\lsquo \neg A\rsquo \equiv \neg \boldsymbol{T}\lsquo A\rsquo.
\]
Gupta and Belnap 1993, Section 6C, note similar advantages of
\(\boldsymbol{T}^{\#}\) over \(\boldsymbol{T}^\*\). For example,
\(\boldsymbol{T}^{\#}\) does, but \(\boldsymbol{T}^\*\) does not,
validate the following semantic principles:
\[\begin{align}
\boldsymbol{T}\lsquo A \amp B\rsquo &\equiv \boldsymbol{T}\lsquo A\rsquo \amp \boldsymbol{T}\lsquo B\rsquo \\
\boldsymbol{T}\lsquo A \vee B\rsquo &\equiv \boldsymbol{T}\lsquo A\rsquo \vee \boldsymbol{T}\lsquo B\rsquo
\end{align}\]
Gupta and Belnap remain noncommittal about which of
\(\boldsymbol{T}^{\#}\) and \(\boldsymbol{T}^\*\) (and a further
alternative that they define, \(\boldsymbol{T}^c)\) is preferable.
## 4. Interpreting the formalism
The main formal notions of the RTT are the notion of a *revision
rule* (Definition 3.2), i.e., a rule for revising hypotheses; and
a *revision sequence* (Definition 3.7), a sequence of
hypotheses generated in accordance with the appropriate revision rule.
Using these notions, we can, given a ground model, specify when a
sentence is *stably* (or *nearly stably*) *true* or
*stably* (or *nearly stably*) *false* (Definitions 3.6 and
3.10, respectively) in a particular revision sequence. Thus we could
define two theories of truth, \(\boldsymbol{T}^\*\) and
\(\boldsymbol{T}^{\#}\) (Definition 3.11), based on stability and near
stability (respectively). The final idea is that each of these
theories delivers a verdict on which sentences of the language are
*valid* (Definition 3.11), given a ground model.
Recall the suggestions made at the end of Section 2:
>
> In a semantics for languages capable of expressing their own truth
> concepts, \(\boldsymbol{T}\) will not, in general, have a classical
> signification; and the 'iff' in the T-biconditionals will
> not be read as the classical biconditional.
>
Gupta and Belnap fill out these suggestions in the following way.
### 4.1 The signification of \(\boldsymbol{T}\)
First, they suggest that the signification of \(\boldsymbol{T}\),
given a ground model \(M\), is the revision rule \(\tau\_M\) itself. As
noted in the preceding paragraph, we can give a fine-grained analysis
of sentences' statuses and interrelations on the basis of
notions generated directly and naturally from the revision rule
\(\tau\_M\). Thus, \(\tau\_M\) is a good candidate for the signification
of \(\boldsymbol{T}\), since it does seem to be "an abstract
something that carries all the information about all [of
\(\boldsymbol{T}\)'s] extensional relations" in \(M\).
(See Gupta and Belnap's characterization of an
expression's *signification*, given in Section 2,
above.)
### 4.2 The 'iff' in the T-biconditionals
Gupta and Belnap's related suggestion concerning the
'iff' in the T-biconditionals is that, rather than being
the classical biconditional, this 'iff' is the distinctive
biconditional used to *define* a previously undefined concept.
In 1993, Gupta and Belnap present the revision theory of truth as a
special case of a revision theory of *circularly defined
concepts*. Suppose that \(L\) is a language with a unary predicate
\(F\) and a binary predicate \(R\). Consider a new concept expressed
by a predicate \(G\), introduced through a definition like this:
\[
Gx =\_{df} \forall y(Ryx \supset Fx) \vee \exists y(Ryx \amp Gx).
\]
Suppose that we start with a domain of discourse, \(D\), and an
interpretation of the predicate \(F\) and the relation symbol \(R\).
Gupta and Belnap's revision-theoretic treatment of concepts thus
circularly introduced allows one to give categorical verdicts, for
certain \(d \in D\) about whether or not \(d\) satisfies \(G\). Other
objects will be unstable relative to \(G\): we will be able
categorically to assert neither that \(d\) satisfies \(G\) nor that \(d\)
does not satisfy \(G\). In the case of truth, Gupta and Belnap take
the set of T-biconditionals of the form
\[\tag{10}
\boldsymbol{T}\lsquo A\rsquo =\_{df} A
\]
together to give the definition of the concept of truth. It is their
treatment of '\(=\_{df}\)' (the 'iff' of
definitional concept introduction), together with the T-biconditionals
of the form (10), that determines the revision rule \(\tau\_M\).
### 4.3 The paradoxical reasoning
Recall the liar sentence, (1), from the beginning of this article:
(1)
(1) is not true
In Section 1, we claimed that the RTT is designed to model, rather
than block, the kind of paradoxical reasoning regarding (1). But we
noted in footnote 2 that the RTT does avoid contradictions in these
situations. There are two ways to see this. First, while the RTT does
endorse the biconditional
(1) is true iff (1) is not true,
the relevant 'iff' is not the material biconditional, as
explained above. Thus, it does not follow that both (1) is true and
(1) is not true. Second, note that on no hypothesis can we conclude
that both (1) is true and (1) is not true. If we keep it firmly in
mind that revision-theoretical reasoning is hypothetical rather than
categorical, then we will not infer any contradictions from the
existence of a sentence such as (1), above.
### 4.4 The signification thesis
Gupta and Belnap's suggestions, concerning the signification of
\(\boldsymbol{T}\) and the interpretation of the 'iff' in
the T-biconditionals, dovetail nicely with two closely related
intuitions articulated in Gupta & Belnap 1993. The first
intuition, loosely expressed, is "that the T-biconditionals are
analytic and *fix* the meaning of 'true'" (p.
6). More tightly expressed, it becomes the "Signification
Thesis" (p. 31): "The T-biconditionals fix the
signification of truth in every world [where a world is represented by
a ground
model]."[7]
Given the revision-theoretic treatment of the definition
'iff', and given a ground model \(M\), the
T-biconditionals (10) do, as noted, fix the suggested signification of
\(\boldsymbol{T}\), i.e., the revision rule \(\tau\_M\).
### 4.5 The supervenience of semantics
The second intuition is *the supervenience of the signification of
truth*. This is a descendant of M. Kremer's 1988 proposed
*supervenience of semantics*. The idea is simple: which
sentences fall under the concept *truth* should be fixed by (1)
the interpretation of the nonsemantic vocabulary, and (2) the
empirical facts. In non-circular cases, this intuition is particularly
strong: the standard interpretation of "snow" and
"white" and the empirical fact that snow is white, are
enough to determine that the sentence "snow is white"
falls under the concept *truth*. The supervenience of the
signification of truth is the thesis that the signification of truth,
whatever it is, is fixed by the ground model \(M\). Clearly, the RTT
satisfies this principle.
It is worth seeing how a theory of truth might violate this principle.
Consider a truth-teller sentence, i.e., the sentence that says of
itself that it is true:
(11)
(11) is true
As noted above, Kripke's three-valued semantics allows three
truth values, true \((\mathbf{t})\), false \((\mathbf{f})\), and
neither \((\mathbf{n})\). Given a ground model \(M = \langle
D,I\rangle\) for a truth language \(L\), the candidate interpretations
of \(\boldsymbol{T}\) are three-valued interpretations, i.e.,
functions \(h:D \rightarrow \{\mathbf{t}, \mathbf{f}, \mathbf{n}\}\).
Given a three-valued interpretation of \(\boldsymbol{T}\), and a
scheme for evaluating the truth value of composite sentences in terms
of their parts, we can specify a truth value \(\textit{Val}\_{M+h}(A) =
\mathbf{t}, \mathbf{f}\) or \(\mathbf{n}\), for every sentence \(A\)
of \(L\). The central theorem of the three-valued semantics is that,
given any ground model \(M\), there is a three-valued interpretation \(h\)
of \(\boldsymbol{T}\) so that, for every sentence \(A\), we have
\(\textit{Val}\_{M+h}(\boldsymbol{T}\lsquo A\rsquo) =
\textit{Val}\_{M+h}(A)\).[8]
We will call such an interpretation of \(\boldsymbol{T}\) an
*acceptable* interpretation. Our point here is this: if
there's a truth-teller, as in (11), then there is not only one
acceptable interpretation of \(\boldsymbol{T}\); there are three: one
according to which (11) is true, one according to which (11) is false,
and one according to which (11) is neither. Thus, there is no single
"correct" interpretation of \(\boldsymbol{T}\) given a
ground model \(M\). Thus the three-valued semantics seems to violate the
supervenience of
semantics.[9]
The RTT does not assign a truth value to the truth-teller, (11).
Rather, it gives an analysis of the kind of reasoning that one might
engage in with respect to the truth-teller: If we start with a
hypothesis \(h\) according to which (11) is true, then upon revision
(11) remains true. And if we start with a hypothesis \(h\) according
to which (11) is not true, then upon revision (11) remains not true.
And that is all that the concept of truth leaves us with. Given this
behaviour of (11), the RTT tells us that (11) is neither categorically
true nor categorically false, but this is quite different from a
verdict that (11) is neither true nor false.
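The contrast between the liar (1) and the truth-teller (11) comes out clearly if each is reduced to the revision behaviour of its own truth value. A minimal sketch (the function names are invented for illustration):

```python
# Revision behaviour of the liar and the truth-teller, each reduced to
# its own truth value. The liar's rule negates the hypothesized value;
# the truth-teller's rule returns it unchanged.

def revise_liar(v):
    return not v          # (1) says: (1) is not true

def revise_teller(v):
    return v              # (11) says: (11) is true

def run(rule, start, n=6):
    values, v = [], start
    for _ in range(n):
        values.append(v)
        v = rule(v)
    return values

print(run(revise_liar, True))     # oscillates whatever we start with
print(run(revise_liar, False))
print(run(revise_teller, True))   # constant, but the value depends on
print(run(revise_teller, False))  # the initial hypothesis
```

The liar is unstable in every sequence; the truth-teller is stable in every sequence, but stably true in some and stably false in others, so it is not categorical.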
### 4.6 A nonsupervenient interpretation of the formalism
We note an alternative interpretation of the revision-theoretic
formalism. Yaqub 1993 agrees with Gupta and Belnap that the
T-biconditionals are definitional rather than material biconditionals,
and that the concept of truth is therefore circular. But Yaqub
interprets this circularity in a distinctive way. He argues that,
>
> since the truth conditions of some sentences involve reference to
> truth in an essential, irreducible manner, these conditions can only
> obtain or fail in a world that already includes an extension of the
> truth predicate. Hence, in order for the revision process to determine
> an extension of the truth predicate, an *initial* extension of
> the predicate must be posited. This much follows from circularity and
> bivalence. (1993, 40)
>
Like Gupta and Belnap, Yaqub posits no privileged extension for
\(\boldsymbol{T}\). And like Gupta and Belnap, he sees the revision
sequences of extensions of \(\boldsymbol{T}\), each sequence generated
by an initial hypothesized extension, as "capable of
accommodating (and diagnosing) the various kinds of problematic and
unproblematic sentences of the languages under consideration"
(1993, 41). But, unlike Gupta and Belnap, he concludes from these
considerations that "*truth in a bivalent language is not
supervenient*" (1993, 39). He explains in a footnote: for
truth to be supervenient, the truth status of each sentence must be
"fully determined by nonsemantical facts". Yaqub does
not explicitly use the notion of a concept's
*signification*. But Yaqub seems committed to the claim
that the signification of \(\boldsymbol{T}\) -- i.e., that which
determines the truth status of each sentence -- is given by a
particular revision sequence itself. And no revision sequence is
determined by the nonsemantical facts, i.e., by the ground model,
alone: a revision sequence is determined, at best, by a ground model
and an initial
hypothesis.[10]
### 4.7 Classifying sentences
To obtain a signification of \(\boldsymbol{T}\) and a notion of
validity based on the concept of stability (or near stability) is by
no means the only use we can make of revision sequences. For one
thing, we could use revision-theoretic notions to make rather
fine-grained distinctions among sentences: Some sentences are unstable
in every revision sequence; others are stable in every revision
sequence, though stably true in some and stably false in others; and
so on. Thus, we can use revision-theoretic ideas to give a
fine-grained analysis of the status of various sentences, and of the
relationships of various sentences to one another. Hsiung (2017)
explores this possibility further by generalizing the notion of a
revision sequence to a *revision mapping* on a digraph, in
order to extend this analysis to *sets* of sentences of a
certain kind, called *Boolean paradoxes*. As shown in Hsiung
2022, this procedure can also be reversed, at least to some extent: not
only can we use revision sequences (or mappings) to classify
paradoxical sentences by means of their revision-theoretic patterns,
but we can also construct "new" paradoxes from given
revision-theoretic patterns. Rossi (2019) combines the
revision-theoretic technique with graph-theoretic tools and
fixed-point constructions in order to represent a threefold
classification of paradoxical sentences (liar-like, truth-teller-like,
and revenge sentences) within a single model.
## 5. Further issues
### 5.1 Three-valued semantics
We have given only the barest exposition of the three-valued
semantics, in our discussion of the supervenience of the signification
of truth, above. Given a truth language \(L\) and a ground model
\(M\), we defined an *acceptable* three-valued interpretation
of \(\boldsymbol{T}\) as an interpretation \(h:D \rightarrow
\{\mathbf{t}, \mathbf{f}, \mathbf{n}\}\) such that
\(\textit{Val}\_{M+h}(\boldsymbol{T}\lsquo A\rsquo) =
\textit{Val}\_{M+h}(A)\) for each sentence \(A\) of \(L\). In general,
given a ground model \(M\), there are many acceptable interpretations
of \(\boldsymbol{T}\). Suppose that each of these is indeed a truly
acceptable interpretation. Then the three-valued semantics violates
the supervenience of the signification of \(\boldsymbol{T}\).
Suppose, on the other hand, that, for each ground model \(M\), we can
isolate a privileged acceptable interpretation as *the* correct
interpretation of \(\boldsymbol{T}\). Gupta and Belnap present a
number of considerations against the three-valued semantics, so
conceived. (See Gupta & Belnap 1993, Chapter 3.) One principal
argument is that the central theorem, i.e., that for each ground model
there is an acceptable interpretation, only holds when the underlying
language is expressively impoverished in certain ways: for example,
the three-valued approach fails if the language has a connective
\({\sim}\) with the following truth table:
| | |
| --- | --- |
| \(A\) | \({\sim}A\) |
| \(\mathbf{t}\) | \(\mathbf{f}\) |
| \(\mathbf{f}\) | \(\mathbf{t}\) |
| \(\mathbf{n}\) | \(\mathbf{t}\) |
The only negation operator that the three-valued approach can handle
has the following truth table:
| | |
| --- | --- |
| \(A\) | \(\neg A\) |
| \(\mathbf{t}\) | \(\mathbf{f}\) |
| \(\mathbf{f}\) | \(\mathbf{t}\) |
| \(\mathbf{n}\) | \(\mathbf{n}\) |
But consider the liar that says of itself that it is 'not'
true, in this latter sense of 'not'. Gupta and Belnap urge
the claim that this sentence "ceases to be intuitively
paradoxical" (1993, 100). The claimed advantage of the RTT is
its ability to describe the behaviour of genuinely paradoxical
sentences: the genuine liar is unstable under semantic evaluation:
"No matter what we hypothesize its value to be, semantic
evaluation refutes our hypothesis." The three-valued semantics
can only handle the "weak liar", i.e., a sentence that
only weakly negates itself, but that is not guaranteed to be
paradoxical: "There are appearances of the liar here, but they
deceive."
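The difference between the two negations can be put as a fixed-point check: Kleene negation has \(\mathbf{n}\) as a fixed point, so a liar built from it can consistently receive the value \(\mathbf{n}\), whereas the exclusion negation \({\sim}\) has no fixed point at all. A minimal sketch, with `'t'`, `'f'`, `'n'` encoding the three values:

```python
# The two three-valued negations from the tables above.

def kleene_neg(v):
    """Kleene negation: n maps to n."""
    return {'t': 'f', 'f': 't', 'n': 'n'}[v]

def exclusion_neg(v):
    """The connective ~ discussed in the text: n maps to t."""
    return {'t': 'f', 'f': 't', 'n': 't'}[v]

# A liar built with negation neg needs a value v with neg(v) == v.
print([v for v in 'tfn' if kleene_neg(v) == v])      # the weak liar can be n
print([v for v in 'tfn' if exclusion_neg(v) == v])   # no consistent value
```

This is why the central theorem fails for languages containing \({\sim}\): no three-valued interpretation can satisfy the acceptability condition for a liar formed with exclusion negation.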
We've thus far reviewed two of Gupta and Belnap's
complaints against three-valued approaches, and now we raise a third:
in the three-valued theories, truth typically behaves like a
nonclassical concept even when there's no vicious reference in
the language. Without defining terms here, we note that one popular
precisification of the three-valued approach is to take the correct
interpretation of \(\boldsymbol{T}\) to be that given by the 'least fixed
point' of the 'strong Kleene scheme': putting aside
details, this interpretation always assigns the truth value
\(\mathbf{n}\) to the sentence \(\forall x(\boldsymbol{T}x \vee
\neg \boldsymbol{T}x)\), even when the ground model allows no
circular, let alone vicious, reference. Gupta and Belnap claim an
advantage for the RTT: according to the revision-theoretic approach, they
claim, truth always behaves like a classical concept when there is no
vicious reference.
Kremer 2010 challenges this claim by precisifying it as a formal claim
against which particular revision theories (e.g. \(\boldsymbol{T}^\*\)
or \(\boldsymbol{T}^{\#}\), see Definition 3.11, above) and particular
three-valued theories can be tested. As it turns out, on many
three-valued theories, truth does in fact behave like a classical
concept when there's no vicious reference: for example, the
least fixed point of a natural variant of the supervaluation scheme
always assigns \(\boldsymbol{T}\) a classical interpretation in the
absence of vicious reference. When there's no vicious reference,
it is granted that truth behaves like a classical concept if we adopt
Gupta and Belnap's theory \(\boldsymbol{T}^\*\); however, so
Kremer argues, this is not the case if we instead adopt Gupta and
Belnap's theory \(\boldsymbol{T}^{\#}\). This discussion is
taken up further by Wintein 2014. A general assessment of the relative
merits of the three-valued approaches and the revision-theoretic
approaches, from a metasemantic point of view, is at the core of
Pinder 2018.
### 5.2 Two values?
A contrast presupposed by this entry is between allegedly two-valued
theories, like the RTT, and allegedly three-valued or other
many-valued rivals. One might think of the RTT itself as providing
infinitely many semantic values, for example one value for every
possible revision sequence. Or one could extract three semantic values
for sentences: categorical truth, categorical falsehood, and
uncategoricalness.
In reply, it must be granted that the RTT generates many
*statuses* available to sentences. Similarly, three-valued
approaches also typically generate many statuses available to
sentences. The claim of two-valuedness is not a claim about statuses
available to sentences, but rather a claim about the *truth
values* presupposed in the whole enterprise.
### 5.3 Amendments to the RTT
We note three ways to amend the RTT. First, we might put constraints
on which hypotheses are acceptable. For example, Gupta and Belnap 1993
introduce a theory, \(\mathbf{T}^c\), of truth based on
*consistent* hypotheses: an hypothesis \(h\) is
*consistent* iff the set \(\{A:h(A) = \mathbf{t}\}\) is a
complete consistent set of sentences. The relative merits of
\(\mathbf{T}^\*, \mathbf{T}^{\#}\) and \(\mathbf{T}^c\) are discussed
in Gupta & Belnap 1993, Chapter 6.
Second, we might adopt a more restrictive *limit policy* than
Gupta and Belnap adopt. Recall the question asked in Section 3: How
should we set \(S\_{\eta}(d)\) when \(\eta\) is a limit ordinal? We
gave a partial answer: any object that is stably true [false] *up
to* that stage should be true [false] *at* that stage. We
also noted that for an object \(d \in D\) that does not stabilize up
to the stage \(\eta\), Gupta and Belnap 1993 allow us to set
\(S\_{\eta}(d)\) as either \(\mathbf{t}\) or \(\mathbf{f}\). In a
similar context, Herzberger 1982a and 1982b assign the value
\(\mathbf{f}\) to the unstable objects. And Gupta originally
suggested, in Gupta 1982, that unstable elements receive whatever
value they received at the initial hypothesis \(S\_0\).
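To make the role of limit policies concrete, here is a minimal finite-stage sketch (not part of Gupta and Belnap's formalism; all names are our own). It revises hypotheses for a liar and a truth-teller sentence and classifies each as stable or unstable over the observed stages; a limit policy is then precisely a decision about what value an unstable sentence receives at a limit stage.

```python
# Toy finite-stage revision for two self-referential sentences:
#   "liar":        true iff T("liar") is false
#   "truthteller": true iff T("truthteller") is true
# A hypothesis assigns True/False to each sentence; the revision rule
# recomputes each sentence's value under the current hypothesis.
# (Illustrative only: real revision sequences run through the
# transfinite ordinals; here we watch finitely many stages.)

def revise(h):
    return {"liar": not h["liar"], "truthteller": h["truthteller"]}

def run(h0, stages=10):
    seq = [h0]
    for _ in range(stages):
        seq.append(revise(seq[-1]))
    return seq

def stable_value(seq, sentence):
    """Value the sentence settles on in the tail, or None if unstable."""
    tail = [h[sentence] for h in seq[len(seq) // 2:]]
    return tail[0] if all(v == tail[0] for v in tail) else None

seq = run({"liar": True, "truthteller": True})
print(stable_value(seq, "truthteller"))  # True: stable at its initial value
print(stable_value(seq, "liar"))         # None: flips forever (unstable)
```

Stable sentences keep their value at a limit stage; the policies just cited differ only on the unstable ones (Herzberger: \(\mathbf{f}\); Gupta: the value from the initial hypothesis).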
These first two ways of amending the RTT both, in effect, restrict the
notion of a revision sequence, by putting constraints on which of
*our* revision sequences really count as acceptable revision
sequences. The constraints are, in some sense local: the first
constraint is achieved by putting restrictions on which hypotheses can
be used, and the second constraint is achieved by putting restrictions
on what happens at limit ordinals. A third option would be to put more
global constraints on which putative revision sequences count as
acceptable. Yaqub 1993 suggests, in effect, a limit rule whereby
acceptable verdicts on unstable sentences at some limit stage \(\eta\)
depend on verdicts rendered at *other* limit stages. Yaqub
argues that these constraints allow us to avoid certain
"artifacts". For example, suppose that a ground model \(M
= \langle D,I\rangle\) has two independent liars, by having two names
\(\alpha\) and \(\beta\), where \(I(\alpha) = \neg
\boldsymbol{T}\alpha\) and \(I(\beta) = \neg \boldsymbol{T}\beta\).
Yaqub argues that it is a mere "artifact" of the
revision semantics, naively presented, that there are revision
sequences in which the sentence \(\neg \boldsymbol{T}\alpha \equiv
\neg \boldsymbol{T}\beta\) is stably true, since the two liars are
independent. His global constraints are developed to rule out such
sequences. (See Chapuis 1996 for further discussion.)
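The "artifact" can be illustrated with a small finite-stage sketch (hypothetical names; illustration only): the two liars revise independently, yet any sequence whose initial hypothesis assigns them the same value makes \(\neg \boldsymbol{T}\alpha \equiv \neg \boldsymbol{T}\beta\) true at every stage, hence stably true.

```python
# Two "independent" liars: I(alpha) = ¬T(alpha), I(beta) = ¬T(beta).
# The revision rule flips each independently; but if the initial
# hypothesis gives both the same value, they flip in lockstep, so
# ¬T(alpha) <-> ¬T(beta) holds at every stage -- the "artifact"
# Yaqub's global constraints are meant to rule out.
# (Finite-stage illustration only.)

def revise(h):
    return {"alpha": not h["alpha"], "beta": not h["beta"]}

def biconditional_stably_true(h0, stages=20):
    h = dict(h0)
    for _ in range(stages):
        # ¬T(alpha) ≡ ¬T(beta) is true at a stage iff h agrees on the two
        if (not h["alpha"]) != (not h["beta"]):
            return False
        h = revise(h)
    return True

print(biconditional_stably_true({"alpha": True, "beta": True}))   # True
print(biconditional_stably_true({"alpha": True, "beta": False}))  # False
```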
The first and the second way of amending the RTT are in some sense put
together by Campbell-Moore (2019). Here, the notion of stability for
objects is extended to sets of hypotheses: A set \(H\) of hypotheses
is stable in a sequence \(S\) of hypotheses if for some ordinal
\(\theta\) all hypotheses \(S\_{\zeta}\), with \(\zeta \ge \theta\),
belong to \(H\). With this, we can introduce the notion of
\(P\)-revision sequence: If \(P\) is a class of sets of hypotheses, a
sequence \(S\) is a \(P\)-revision sequence just in case, at
every limit ordinal \(\eta\), if a set of hypotheses \(H\) belongs to
\(P\) and is stable in \(S\), then \(S\_{\eta}\) belongs to \(H\). It
can be shown that, for a suitable choice of \(P\), all limit stages of
\(P\)-revision sequences are *maximal consistent*
hypotheses.
### 5.4 Revision theory for circularly defined concepts
As indicated in our discussion, in Section 4, of the 'iff'
in the T-biconditionals, Gupta and Belnap present the RTT as a special
case of a revision theory of circularly defined concepts. To see
this, reconsider the example from Section 4. Suppose that \(L\) is a
language with a unary predicate \(F\) and a binary predicate \(R\). Consider a
new concept expressed by a predicate \(G\), introduced through a
definition, \(D\), like this:
\[
Gx =\_{df} A(x,G)
\]
where \(A(x,G)\) is the formula
\[
\forall y(Ryx \supset Fx) \vee \exists y(Ryx \amp Gx).
\]
In this context, a *ground model* is a classical model \(M =
\langle D,I\rangle\) of the language \(L\): we start with a domain of
discourse, \(D\), and an interpretation of the predicate \(F\) and the
relation symbol \(R\). We would like to extend \(M\) to an
interpretation of the language \(L + G\). So, in this context, an
hypothesis will be thought of as an hypothesized extension for the
newly introduced concept \(G\). Formally, a hypothesis is simply a
function \(h:D \rightarrow \{\mathbf{t}, \mathbf{f}\}\). Given a
hypothesis \(h\), we take \(M+h\) to be the classical model \(M+h =
\langle D,I'\rangle\), where \(I'\) interprets \(F\) and \(R\) in the
same way as \(I\), and where \(I'(G) = h\). Given a hypothesized
interpretation \(h\) of \(G\), we generate a new interpretation of
\(G\) as follows: an object \(d \in D\) is in the new extension of
\(G\) just in case the defining formula \(A(x,G)\) is true of \(d\) in
the model \(M+h\). Formally, we use the ground model \(M\) and the
definition \(D\) to define a *revision rule*, \(\delta\_{D,M}\),
mapping hypotheses to hypotheses, i.e., hypothetical interpretations
of \(G\) to hypothetical interpretations of \(G\). In particular, for
any formula \(B\) with one free variable \(x\), and \(d \in D\), we
can define the truth value \(\textit{Val}\_{M+h,d}(B)\) in the standard
way. Then,
\[
\delta\_{D,M}(h)(d) = \textit{Val}\_{M+h,d}(A)
\]
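As a concrete illustration, here is a sketch of the revision rule \(\delta\_{D,M}\) over a small hypothetical ground model (the domain and the extensions of \(F\) and \(R\) are our own choices; the defining formula is implemented exactly as displayed above).

```python
# Revision rule delta_{D,M} for the circular definition
#   Gx =df  forall y (Ryx -> Fx)  or  exists y (Ryx & Gx)
# over a small, hypothetical ground model M = <D, I>.
# A hypothesis is a dict h: D -> bool, a guessed extension of G.

D = {1, 2, 3}
F = {1}                 # I(F): hypothetical extension of F
R = {(1, 2), (2, 3)}    # I(R): pairs (y, x) such that Ryx, for illustration

def A(x, h):
    """Truth value of the defining formula A(x, G) in the model M + h."""
    clause1 = all(x in F for y in D if (y, x) in R)   # forall y (Ryx -> Fx)
    clause2 = any((y, x) in R and h[x] for y in D)    # exists y (Ryx & Gx)
    return clause1 or clause2

def delta(h):
    """One revision step: the new hypothetical extension of G."""
    return {d: A(d, h) for d in D}

h = {d: False for d in D}        # start from the empty hypothesis
for _ in range(4):
    h = delta(h)
print({d for d in D if h[d]})    # the extension of G after four revisions
```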
Given a revision rule \(\delta\_{D,M}\), we can generalize the notion
of a *revision sequence*, which is now a sequence of
hypothetical extensions of \(G\) rather than \(\boldsymbol{T}\). We
can generalize the notion of a sentence \(B\) being *stably
true*, *nearly stably true*, etc., relative to a revision
sequence. Gupta and Belnap introduce the systems \(\mathbf{S}^\*\) and
\(\mathbf{S}^{\#}\), analogous to \(\mathbf{T}^\*\) and
\(\mathbf{T}^{\#}\), as
follows:[11]
>
> **Definition 5.1.**
> * A sentence \(B\) is *valid on the definition* \(D\) *in
> the ground model* \(M\) *in the system* \(\mathbf{S}^\*\)
> (notation \(M \vDash\_{\*,D} B)\) iff \(B\) is stably true relative to
> each revision sequence for the revision rule \(\delta\_{D,M}\).
> * A sentence \(B\) is *valid on the definition* \(D\) *in
> the ground model* \(M\) *in the system* \(\mathbf{S}^{\#}\)
> (notation \(M \vDash\_{\#,D} B)\) iff \(B\) is *nearly* stably
> true relative to each revision sequence for the revision rule
> \(\delta\_{D,M}\).
> * A sentence \(B\) is *valid on the definition* \(D\) *in
> the system* \(\mathbf{S}^\*\) (notation \(\vDash\_{\*,D} B)\) iff for
> all classical ground models \(M\), we have \(M \vDash\_{\*,D} B\).
> * A sentence \(B\) is *valid on the definition* \(D\) *in
> the system* \(\mathbf{S}^{\#}\) (notation \(\vDash\_{\#,D} B)\) iff
> for all classical ground models \(M\), we have \(M \vDash\_{\#,D}
> B\).
>
>
>
One of Gupta and Belnap's principal open questions is whether
there is a complete calculus for these systems: that is, whether, for
each definition \(D\), either of the following two sets of sentences
is recursively axiomatizable: \(\{B:\vDash\_{\*,D} B\}\) and
\(\{B:\vDash\_{\#,D} B\}\). Kremer 1993 proves that the answer is no:
he shows that there is a definition \(D\) such that each of these sets
of sentences is of complexity at least \(\Pi^{1}\_2\), thereby putting
a lower limit on the complexity of \(\mathbf{S}^\*\) and
\(\mathbf{S}^{\#}\). (Antonelli 1994a and 2002 shows that this is also
an upper limit.)
Kremer's proof exploits an intimate relationship between
circular definitions understood *revision*-theoretically and
circular definitions understood as *inductive* definitions: the
theory of inductive definitions has been quite well understood for
some time. In particular, Kremer proves that every inductively defined
concept can be revision-theoretically defined. The expressive power
and other aspects of the revision-theoretic treatment of circular
definitions are the topic of much interesting work: see Welch 2001,
Löwe 2001, Löwe and Welch 2001, and Kühnberger *et
al*. 2005.
Alongside Kremer's limitative result there is the positive
observation that, for some semantic system of definitions based on
restricted kinds of revision sequences, sound and complete calculi
*do* exist. For instance, Gupta and Belnap give some examples
of calculi and revision-theoretic systems which use only finite
revision sequences. Further investigation on proof-theoretic calculi
capturing some revision-theoretic semantic systems is done in Bruni
2013, Standefer 2016, Bruni 2019, and Fjellstad 2020.
### 5.5 Axiomatic Theories of Truth and the Revision Theory
The RTT is a clear example of a semantically motivated theory of
truth. Quite a different tradition seeks to give a satisfying
axiomatic theory of truth. Granted, we cannot retain all of classical
logic and all of our intuitive principles regarding truth, especially
if we allow vicious self-reference. But maybe we can arrive at
satisfying axiom systems for truth, that, for example, maintain
consistency and classical logic, but give up only a little bit when it
comes to our intuitive principles concerning truth, such as the
T-biconditionals (interpreted classically); or maintain consistency
and all of the T-biconditionals, but give up only a little bit of
classical logic. Halbach 2011 comprehensively studies such axiomatic
theories (mainly those that retain classical logic), and Horsten 2011
is in the same tradition. Both Chapter 14 of Halbach 2011 and Chapter
8 of Horsten 2011 study the relationship between the Friedman-Sheard
theory FS and the revision semantics, with some interesting results.
For more work on axiomatic systems and the RTT, see Horsten *et al*.
2012.
Field 2008 makes an interesting contribution to axiomatic theorizing
about truth, even though most of the positive work in the book
consists of model building and is therefore semantics. In particular,
Field is interested in producing a theory as close to classical logic
as possible, which retains all T-biconditionals (the conditional
itself will be nonclassical) and which at the same time can express,
in some sense, the claim that such and such a sentence is defective.
Field uses tools from multivalued logic, fixed-point semantics, and
revision theory to build models showing, in effect, that a very
attractive axiomatic system is consistent. Field's construction
is an intricate interplay between using fixed-point constructions for
successively interpreting T, and revision sequences for successively
interpreting the nonclassical conditional -- the final
interpretation being determined by a sort of super-revision-theoretic
process.
The connection between revision and Field's theory is explored
further in Standefer 2015b and in Gupta and Standefer 2017.
### 5.6 Applications
Given Gupta and Belnap's general revision-theoretic treatment of
circular definitions -- of which their treatment of
*truth* is a special case -- one would expect
revision-theoretic ideas to be applied to other concepts. Antonelli
1994b applies these ideas to non-well-founded sets: a non-well-founded
set \(X\) can be thought of as circular, since, for some \(X\_0 ,
\ldots ,X\_n\) we have \(X \in X\_0 \in \ldots \in X\_n \in X\). Chapuis
2003 applies revision-theoretic ideas to rational decision making.
This connection is further developed by Bruni 2015 and by Bruni and
Sillari 2018. For a discussion of revision theory and abstract objects
see Wang 2011. For a discussion of revision theory and vagueness, see
Asmus 2013.
Standefer (2015a) studies the connection between the circular
definitions of revision theory and a particular modal logic RT (for
"Revision Theory"). Campbell-Moore *et al.* 2019
and Campbell-Moore 2021 use revision sequences to model probabilities
and credences, respectively. Cook 2019 employs a revision-theoretic
analysis to find a new possible solution of Benardete's version
of the Zeno paradox.
In recent times, there has been increasing interest in bridging the
gap between classic debates on the nature of truth --
deflationism, the correspondence theory, minimalism, pragmatism, and
so on -- and formal work on truth, motivated by the liar's
paradox. The RTT is tied to pro-sententialism by Belnap 2006;
deflationism, by Yaqub 2008; and minimalism, by Restall 2005.
We must also mention Gupta 2006. In this work, Gupta argues that an
experience provides the experiencer, not with a straightforward
entitlement to a proposition, but rather with a hypothetical
entitlement: as explicated in Berker 2011, if subject S has experience
\(e\) and is entitled to hold view \(v\) (where S's
*view* is the totality of S's concepts, conceptions, and
beliefs), then S is entitled to believe a certain class of perceptual
judgements, \(\Gamma(v)\). (Berker uses "propositions"
instead of "perceptual judgements" in his formulation.)
But this generates a problem: how is S entitled to hold a view? There
seems to be a circular interdependence between entitlements to views
and entitlements to perceptual judgements. Here, Gupta appeals to a
general form of revision theory -- generalizing beyond both the
revision theory of truth and the revision theory of circularly defined
concepts (Section 5.4, above) -- to give an account of how
"hypothetical perceptual entitlements could yield categorical
entitlements" (Berker 2011).
### 5.7 An open question
We close with an open question about \(\mathbf{T}^\*\) and
\(\mathbf{T}^{\#}\). Recall Definition 3.11, above, which defines when
a sentence \(A\) of a truth language \(L\) is *valid in the ground
model* \(M\) *by* \(\mathbf{T}^\*\) or *by*
\(\mathbf{T}^{\#}\). We will say that \(A\) is *valid by*
\(\mathbf{T}^\*\) [alternatively, *by* \(\mathbf{T}^{\#}\)] iff
\(A\) is valid in the ground model \(M\) by \(\mathbf{T}^\*\)
[alternatively, by \(\mathbf{T}^{\#}\)] for every ground model \(M\).
Our open question is this: What is the complexity of the set of
sentences valid by \(\mathbf{T}^\*\) [\(\mathbf{T}^{\#}\)]?
## 1. The 1933 programme and the semantic conception
In the late 1920s Alfred Tarski embarked on a project to give rigorous
definitions for notions useful in scientific methodology. In 1933 he
published (in Polish) his analysis of the notion of a true sentence.
This long paper undertook two tasks: first to say what should count as
a satisfactory definition of 'true sentence' for a given
formal language, and second to show that there do exist satisfactory
definitions of 'true sentence' for a range of formal
languages. We begin with the first task; Section 2 will consider the
second.
We say that a language is *fully interpreted* if all its
sentences have meanings that make them either true or false. All the
languages that Tarski considered in the 1933 paper were fully
interpreted, with one exception described in Section 2.2 below. This
was the main difference between the 1933 definition and the later
model-theoretic definition of 1956, which we shall examine in Section
3.
Tarski described several conditions that a satisfactory definition of
truth should meet.
### 1.1 Object language and metalanguage
If the language under discussion (the *object language*) is
\(L\), then the definition should be given in another language known
as the *metalanguage*, call it \(M\). The metalanguage should
contain a copy of the object language (so that anything one can say in
\(L\) can be said in \(M\) too), and \(M\) should also be able to talk
about the sentences of \(L\) and their syntax. Finally Tarski allowed
\(M\) to contain notions from set theory, and a 1-ary predicate symbol
*True* with the intended reading 'is a true sentence of
\(L\)'. The main purpose of the metalanguage was to formalise
what was being said about the object language, and so Tarski also
required that the metalanguage should carry with it a set of axioms
expressing everything that one needs to assume for purposes of
defining and justifying the truth definition. The truth definition
itself was to be a definition of *True* in terms of the other
expressions of the metalanguage. So the definition was to be in terms
of syntax, set theory and the notions expressible in \(L\), but not
semantic notions like 'denote' or 'mean'
(unless the object language happened to contain these notions).
Tarski assumed, in the manner of his time, that the object language
\(L\) and the metalanguage \(M\) would be languages of some kind of
higher order logic. Today it is more usual to take some kind of
informal set theory as one's metalanguage; this would affect a
few details of Tarski's paper but not its main thrust. Also
today it is usual to define syntax in set-theoretic terms, so that for
example a string of letters becomes a sequence. In fact one must use a
set-theoretic syntax if one wants to work with an object language that
has uncountably many symbols, as model theorists have done freely for
over half a century now.
### 1.2 Formal correctness
The definition of *True* should be 'formally
correct'. This means that it should be a sentence of the
form
For all \(x\), *True*\((x)\) if and only if \(\phi(x)\),
where *True* never occurs in \(\phi\); or failing this, that
the definition should be provably equivalent to a sentence of this
form. The equivalence must be provable using axioms of the
metalanguage that don't contain *True*. Definitions of
the kind displayed above are usually called *explicit*, though
Tarski in 1933 called them *normal*.
### 1.3 Material adequacy
The definition should be 'materially adequate'
(*trafny* - a better translation would be
'accurate'). This means that the objects satisfying
\(\phi\) should be exactly the objects that we would intuitively count
as being true sentences of \(L\), and that this fact should be
provable from the axioms of the metalanguage. At first sight this is a
paradoxical requirement: if we can prove what Tarski asks for, just
from the axioms of the metalanguage, then we must already have a
materially adequate formalisation of 'true sentence of
\(L\)' within the metalanguage, suggesting an infinite regress.
In fact Tarski escapes the paradox by using (in general) infinitely
many sentences of \(M\) to express truth, namely all the sentences of
the form
\[
\phi(s) \text{ if and only if } \psi
\]
whenever \(s\) is the name of a sentence \(S\) of \(L\) and \(\psi\)
is the copy of \(S\) in the metalanguage. So the technical problem is
to find a single formula \(\phi\) that allows us to deduce all these
sentences from the axioms of \(M\); this formula \(\phi\) will serve
to give the explicit definition of *True*.
Tarski's own name for this criterion of material adequacy was
*Convention T*. More generally his name for his approach to
defining truth, using this criterion, was *the semantic conception
of truth*.
As Tarski himself emphasised, Convention \(T\) rapidly leads to the
liar paradox if the language \(L\) has enough resources to talk about
its own semantics. (See the entry on
the revision theory of truth.)
Tarski's own conclusion was that a truth definition for a
language \(L\) has to be given in a metalanguage which is essentially
stronger than \(L\).
There is a consequence for the foundations of mathematics. First-order
Zermelo-Fraenkel set theory is widely regarded as the standard of
mathematical correctness, in the sense that a proof is correct if and
only if it can be formalised as a formal proof in set theory. We would
like to be able to give a truth definition for set theory; but by
Tarski's result this truth definition can't be given in
set theory itself. The usual solution is to give the truth definition
informally in English. But there are a number of ways of giving
limited formal truth definitions for set theory. For example Azriel
Levy showed that for every natural number \(n\) there is a
\(\Sigma\_n\) formula that is satisfied by all and only the
set-theoretic names of true \(\Sigma\_n\) sentences of set theory. The
definition of \(\Sigma\_n\) is too technical to give here, but three
points are worth making. First, every sentence of set theory is
provably equivalent to a \(\Sigma\_n\) sentence for any large enough
\(n\). Second, the class of \(\Sigma\_n\) formulas is closed under
adding existential quantifiers at the beginning, but not under adding
universal quantifiers. Third, the class is not closed under negation;
this is how Levy escapes Tarski's paradox. (See the entry on
set theory.)
Essentially the same devices allow Jaakko Hintikka to give an
internal truth definition for his
independence friendly logic;
this logic shares the second and third properties of Levy's
classes of formulas.
## 2. Some kinds of truth definition on the 1933 pattern
In his 1933 paper Tarski went on to show that many fully interpreted
formal languages do have a truth definition that satisfies his
conditions. He gave four examples in that paper. One was a trivial
definition for a finite language; it simply listed the finitely many
true sentences. One was a definition by quantifier elimination; see
Section 2.2 below. The remaining two, for different classes of
language, were examples of what people today think of as the standard
Tarski truth definition; they are forerunners of the 1956
model-theoretic definition.
### 2.1 The standard truth definitions
The two standard truth definitions are at first glance not definitions
of truth at all, but definitions of a more complicated relation
involving assignments \(a\) of objects to variables:
\[
a \text{ satisfies the formula } F
\]
(where the symbol '\(F\)' is a placeholder for a name of a
particular formula of the object language). In fact satisfaction
reduces to truth in this sense: \(a\) satisfies the formula \(F\) if
and only if taking each free variable in \(F\) as a name of the object
assigned to it by \(a\) makes the formula \(F\) into a true sentence.
So it follows that our intuitions about when a sentence is true can
guide our intuitions about when an assignment satisfies a formula. But
none of this can enter into the formal definition of truth, because
'taking a variable as a name of an object' is a semantic
notion, and Tarski's truth definition has to be built only on
notions from syntax and set theory (together with those in the object
language); recall Section 1.1. In fact Tarski's reduction goes
in the other direction: if the formula \(F\) has no free variables,
then to say that \(F\) is true is to say that every assignment
satisfies it.
The reason why Tarski defines satisfaction directly, and then deduces
a definition of truth, is that satisfaction obeys *recursive
conditions* in the following sense: if \(F\) is a compound
formula, then to know which assignments satisfy \(F\), it's
enough to know which assignments satisfy the immediate constituents of
\(F\). Here are two typical examples:
* The assignment \(a\) satisfies the formula '\(F\)
*and* \(G\)' if and only if \(a\) satisfies \(F\) and
\(a\) satisfies \(G\).
* The assignment \(a\) satisfies the formula '*For all*
\(x\), \(G\)' if and only if for every individual \(i\), if
\(b\) is the assignment that assigns \(i\) to the variable \(x\) and
is otherwise exactly like \(a\), then \(b\) satisfies \(G\).
We have to use a different approach for atomic formulas. But for
these, at least assuming for simplicity that \(L\) has no function
symbols, we can use the metalanguage copies \(\#(R)\) of the predicate
symbols \(R\) of the object language. Thus:
* The assignment \(a\) satisfies the formula \(R(x,y)\) if and only
if \(\#(R)(a(x),a(y))\).
(Warning: the expression \(\#\) is in the metametalanguage, not in the
metalanguage \(M\). We may or may not be able to find a formula of
\(M\) that expresses \(\#\) for predicate symbols; it depends on
exactly what the language \(L\) is.)
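The recursive clauses above can be sketched directly in code. What follows is a hypothetical mini-language (one binary predicate, conjunction, and universal quantification), not Tarski's own formalism; it only illustrates how satisfaction for a compound formula is computed from satisfaction for its immediate constituents.

```python
# A minimal Tarski-style satisfaction relation for a toy first-order
# language with one binary predicate R, conjunction, and "forall".
# Formulas are nested tuples; an assignment maps variable names to
# individuals of the domain. (Hypothetical mini-language, for illustration.)

DOMAIN = {0, 1, 2}
R = {(0, 1), (1, 2), (0, 2)}    # metalanguage copy #(R) of the predicate

def satisfies(a, formula):
    op = formula[0]
    if op == "R":                          # atomic: a satisfies R(x, y)
        _, x, y = formula
        return (a[x], a[y]) in R
    if op == "and":                        # a satisfies (F and G)
        return satisfies(a, formula[1]) and satisfies(a, formula[2])
    if op == "forall":                     # a satisfies (forall x, G)
        _, x, body = formula
        # b is exactly like a except that it assigns i to x
        return all(satisfies({**a, x: i}, body) for i in DOMAIN)
    raise ValueError(op)

def true_sentence(formula):
    # A sentence (no free variables) is true iff every assignment
    # satisfies it; checking one assignment suffices here.
    return satisfies({}, formula)

print(satisfies({"x": 0, "y": 1}, ("R", "x", "y")))                      # True
print(true_sentence(("forall", "x", ("forall", "y", ("R", "x", "y")))))  # False
```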
Subject to the mild reservation in the next paragraph, Tarski's
definition of satisfaction is *compositional*, meaning that the
class of assignments which satisfy a compound formula \(F\) is
determined solely by (1) the syntactic rule used to construct \(F\)
from its immediate constituents and (2) the classes of assignments
that satisfy these immediate constituents. (This is sometimes phrased
loosely as: satisfaction is defined recursively. But this formulation
misses the central point, that (1) and (2) don't contain any
syntactic information about the immediate constituents.)
Compositionality explains why Tarski switched from truth to
satisfaction. You can't define whether '*For all*
\(x, G\)' is true in terms of whether \(G\) is true, because in
general \(G\) has a free variable \(x\) and so it isn't either
true or false.
The reservation is that Tarski's definition of satisfaction in
the 1933 paper doesn't in fact mention the class of assignments
that satisfy a formula \(F\). Instead, as we saw, he defines the
relation '\(a\) satisfies \(F\)', which determines what
that class is. This is probably the main reason why some people
(including Tarski himself in conversation, as reported by Barbara
Partee) have preferred not to describe the 1933 definition as
compositional. But the class format, which is compositional on any
reckoning, does appear in an early variant of the truth definition in
Tarski's paper of 1931 on definable sets of real numbers. Tarski
had a good reason for preferring the format '\(a\) satisfies
\(F\)' in his 1933 paper, namely that it allowed him to reduce
the set-theoretic requirements of the truth definition. In sections 4
and 5 of the 1933 paper he spelled out these requirements
carefully.
The name 'compositional(ity)' first appears in papers of
Putnam in 1960 (published 1975) and Katz and Fodor in 1963 on natural
language semantics. In talking about compositionality, we have moved
to thinking of Tarski's definition as a semantics, i.e. a way of
assigning 'meanings' to formulas. (Here we take the
meaning of a sentence to be its truth value.) Compositionality means
essentially that the meanings assigned to formulas give *at
least* enough information to determine the truth values of
sentences containing them. One can ask conversely whether
Tarski's semantics provides *only as much information as we
need* about each formula, in order to reach the truth values of
sentences. If the answer is yes, we say that the semantics is
*fully abstract* (for truth). One can show fairly easily, for
any of the standard languages of logic, that Tarski's definition
of satisfaction is in fact fully abstract.
As it stands, Tarski's definition of satisfaction is not an
explicit definition, because satisfaction for one formula is defined
in terms of satisfaction for other formulas. So to show that it is
formally correct, we need a way of converting it to an explicit
definition. One way to do this is as follows, using either higher
order logic or set theory. Suppose we write \(S\) for a binary
relation between assignments and formulas. We say that \(S\) is a
*satisfaction relation* if for every formula \(G, S\) meets the
conditions put for satisfaction of \(G\) by Tarski's definition.
For example, if \(G\) is '\(G\_1\) *and* \(G\_2\)',
\(S\) should satisfy the following condition for every assignment
\(a\):
\[
S(a,G) \text{ if and only if } S(a,G\_1) \text{ and } S(a,G\_2).
\]
We can define 'satisfaction relation' formally, using the
recursive clauses and the conditions for atomic formulas in
Tarski's recursive definition. Now we prove, by induction on the
complexity of formulas, that there is exactly one satisfaction
relation \(S\). (There are some technical subtleties, but it can be
done.) Finally we define
\(a\) satisfies \(F\) if and only if: there is a satisfaction relation
\(S\) such that \(S(a,F)\).
It is then a technical exercise to show that this definition of
satisfaction is materially adequate. Actually one must first write out
the counterpart of Convention \(T\) for satisfaction of formulas, but
I leave this to the reader.
### 2.2 The truth definition by quantifier elimination
The remaining truth definition in Tarski's 1933 paper -
the third as they appear in the paper - is really a bundle of
related truth definitions, all for the same object language \(L\) but
in different interpretations. The quantifiers of \(L\) are assumed to
range over a particular class, call it \(A\); in fact they are second
order quantifiers, so that really they range over the collection of
subclasses of \(A\). The class \(A\) is not named explicitly in the
object language, and thus one can give separate truth definitions for
different values of \(A\), as Tarski proceeds to do. So for this
section of the paper, Tarski allows one and the same sentence to be
given different interpretations; this is the exception to the general
claim that his object language sentences are fully interpreted. But
Tarski stays on the straight and narrow: he talks about
'truth' only in the special case where \(A\) is the class
of all individuals. For other values of \(A\), he speaks not of
'truth' but of 'correctness in the domain
\(A\)'.
These truth or correctness definitions don't fall out of a
definition of satisfaction. In fact they go by a much less direct
route, which Tarski describes as a 'purely accidental'
possibility that relies on the 'specific peculiarities' of
the particular object language. It may be helpful to give a few more
of the technical details than Tarski does, in a more familiar notation
than Tarski's, in order to show what is involved. Tarski refers
his readers to a paper of Thoralf Skolem in 1919 for the
technicalities.
One can think of the language \(L\) as the first-order language with
predicate symbols \(\subseteq\) and =. The language is interpreted as
talking about the subclasses of the class \(A\). In this language we
can define:
* '\(x\) is the empty set' (viz. \(x \subseteq\) every
class).
* '\(x\) is an atom' (viz. \(x\) is not empty, but every
subclass of \(x\) not equal to \(x\) is empty).
* '\(x\) has exactly \(k\) members' (where \(k\) is a
finite number; viz. there are exactly \(k\) distinct atoms \(\subseteq
x)\).
* 'There are exactly \(k\) elements in \(A\)' (viz.
there is a class with exactly \(k\) members, but there is no class
with exactly \(k+1\) members).
Now we aim to prove:
*Lemma*. Every formula \(F\) of \(L\) is equivalent to (i.e. is
satisfied by exactly the same assignments as) some boolean combination
of sentences of the form 'There are exactly \(k\) elements in
\(A\)' and formulas of the form 'There are exactly \(k\)
elements that are in \(v\_1\), not in \(v\_2\), not in \(v\_3\) and in
\(v\_4\)' (or any other combination of this type, using only
variables free in \(F)\).
The proof is by induction on the complexity of formulas. For atomic
formulas it is easy. For boolean combinations of formulas it is easy,
since a boolean combination of boolean combinations is again a boolean
combination. For formulas beginning with \(\forall\), we take the
negation. This leaves just one case that involves any work, namely the
case of a formula beginning with an existential quantifier. By
induction hypothesis we can replace the part after the quantifier by a
boolean combination of formulas of the kinds stated. So a typical case
might be:
\(\exists z\) (there are exactly two elements that are in \(z\) and
\(x\) and not in \(y)\).
This holds if and only if there are at least two elements that are in
\(x\) and not in \(y\). We can write this in turn as: The number of
elements in \(x\) and not in \(y\) is not 0 and is not 1; which is a
boolean combination of allowed formulas. The general proof is very
similar but more complicated.
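The elimination step in this example can be checked by brute force on a small hypothetical domain: the existentially quantified statement holds just in case at least two elements are in \(x\) and not in \(y\).

```python
# Brute-force check, over a small domain of atoms, of the elimination step:
#   exists z (exactly two elements are in z and x and not in y)
# holds iff at least two elements are in x and not in y.
from itertools import combinations

A = frozenset(range(5))          # a small hypothetical domain of atoms

def subclasses(s):
    return (frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r))

def lhs(x, y):
    # quantify over every subclass z of A
    return any(len(z & x - y) == 2 for z in subclasses(A))

def rhs(x, y):
    return len(x - y) >= 2

assert all(lhs(x, y) == rhs(x, y)
           for x in subclasses(A) for y in subclasses(A))
print("equivalence verified on all subclasses of A")
```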
When the lemma has been proved, we look at what it says about a
sentence. Since the sentence has no free variables, the lemma tells us
that it is equivalent to a boolean combination of statements saying
that \(A\) has a given finite number of elements. So if we know how
many elements \(A\) has, we can immediately calculate whether the
sentence is 'correct in the domain \(A\)'.
One more step and we are home. As we prove the lemma, we should gather
up any facts that can be stated in \(L\), are true in every domain,
and are needed for proving the lemma. For example we shall almost
certainly need the sentence saying that \(\subseteq\) is transitive.
Write \(T\) for the set of all these sentences. (In Tarski's
presentation \(T\) vanishes, since he is using higher order logic and
the required statements about classes become theorems of logic.) Thus
we reach, for example:
*Theorem*. If the domain \(A\) is infinite, then a sentence
\(S\) of the language \(L\) is correct in \(A\) if and only if \(S\)
is deducible from \(T\) and the sentences saying that the number of
elements of \(A\) is not any finite number.
The class of *all* individuals is infinite (Tarski asserts), so
the theorem applies when \(A\) is this class. And in this case Tarski
has no inhibitions about saying not just 'correct in
\(A\)' but 'true'; so we have our truth
definition.
The method we have described revolves almost entirely around removing
existential quantifiers from the beginnings of formulas; so it is
known as *the method of quantifier elimination*. It is not as
far as you might think from the two standard definitions. In all cases
Tarski assigns to each formula, by induction on the complexity of
formulas, a description of the class of assignments that satisfy the
formula. In the two previous truth definitions this class is described
directly; in the quantifier elimination case it is described in terms
of a boolean combination of formulas of a simple kind.
At around the same time as he was writing the 1933 paper, Tarski gave
a truth definition by quantifier elimination for the first-order
language of the field of real numbers. In his 1931 paper it appears
only as an interesting way of characterising the set of relations
definable by formulas. Later he gave a fuller account, emphasising
that his method provided not just a truth definition but an algorithm
for determining which sentences about the real numbers are true and
which are false.
## 3. The 1956 definition and its offspring
In 1933 Tarski assumed that the formal languages that he was dealing
with had two kinds of symbol (apart from punctuation), namely
constants and variables. The constants included logical constants, but
also any other terms of fixed meaning. The variables had no
independent meaning and were simply part of the apparatus of
quantification.
Model theory
by contrast works with three levels of symbol. There are the logical
constants \((=, \neg\), & for example), the variables (as before),
and between these a middle group of symbols which have no fixed
meaning but get a meaning through being applied to a particular
structure. The symbols of this middle group include the nonlogical
constants of the language, such as relation symbols, function symbols
and constant individual symbols. They also include the quantifier
symbols \(\forall\) and \(\exists\), since we need to refer to the
structure to see what set they range over. This type of three-level
language corresponds to mathematical usage; for example we write the
addition operation of an abelian group as +, and this symbol stands
for different functions in different groups.
So one has to work a little to apply the 1933 definition to
model-theoretic languages. There are basically two approaches: (1)
Take one structure \(A\) at a time, and regard the nonlogical
constants as constants, interpreted in \(A\). (2) Regard the
nonlogical constants as variables, and use the 1933 definition to
describe when a sentence is satisfied by an assignment of the
ingredients of a structure \(A\) to these variables. There are
problems with both these approaches, as Tarski himself describes in
several places. The chief problem with (1) is that in model theory we
very frequently want to use the same language in connection with two
or more different structures - for example when we are defining
elementary embeddings between structures (see the entry on
first-order model theory).
The problem with (2) is more abstract: it is disruptive and bad
practice to talk of formulas with free variables being
'true'. (We saw in Section 2.2 how Tarski avoided talking
about truth in connection with sentences that have varying
interpretations.) What Tarski did in practice, from the appearance of
his textbook in 1936 to the late 1940s, was to use a version of (2)
and simply avoid talking about model-theoretic sentences being true in
structures; instead he gave an indirect definition of what it is for a
structure to be a 'model of' a sentence, and apologised
that strictly this was an abuse of language. (Chapter VI of Tarski
1994 still contains relics of this old approach.)
By the late 1940s it had become clear that a direct model-theoretic
truth definition was needed. Tarski and colleagues experimented with
several ways of casting it. The version we use today is based on that
published by Tarski and Robert Vaught in 1956. See the entry on
classical logic
for an exposition.
The right way to think of the model-theoretic definition is that we
have sentences whose truth value varies according to the situation
where they are used. So the nonlogical constants are not variables;
they are definite descriptions whose reference depends on the context.
Likewise the quantifiers have this indexical feature, that the domain
over which they range depends on the context of use. In this spirit
one can add other kinds of indexing. For example a Kripke structure is
an indexed family of structures, with a relation on the index set;
these structures and their close relatives are fundamental for the
semantics of modal,
temporal
and
intuitionist
logic.
Already in the 1950s model theorists were interested in formal
languages that include kinds of expression different from anything in
Tarski's 1933 paper. Extending the truth definition to
infinitary logics was no problem at all. Nor was there any serious
problem about most of the generalised quantifiers proposed at the
time. For example there is a quantifier \(Qxy\) with the intended
meaning:
\(QxyF(x,y)\) if and only if there is an infinite set \(X\) of
elements such that for all \(a\) and \(b\) in \(X, F(a,b)\).
This definition itself shows at once how the required clause in the
truth definition should go.
In 1961 Leon Henkin pointed out two sorts of model-theoretic language
that didn't immediately have a truth definition of
Tarski's kind. The first had infinite strings of
quantifiers:
\[
\forall v\_1 \exists v\_2 \forall v\_3 \exists v\_4\ldots R(v\_1,v\_2,v\_3, v\_4,\ldots).
\]
The second had quantifiers that are not linearly ordered. For ease of
writing I use Hintikka's later notation for these:
\[
\forall v\_1 \exists v\_2 \forall v\_3 (\exists v\_4 /\forall v\_1) R(v\_1,v\_2,v\_3, v\_4).
\]
Here the slash after \(\exists v\_4\) means that this quantifier is
outside the scope of the earlier quantifier \(\forall v\_1\) (and also
outside that of the earlier existential quantifier).
Henkin pointed out that in both cases one could give a natural
semantics in terms of Skolem functions. For example the second
sentence can be paraphrased as
\[
\exists f\exists g \forall v\_1 \forall v\_3 R(v\_1,f(v\_1),v\_3,g(v\_3)),
\]
which has a straightforward Tarski truth condition in second order
logic. Hintikka then observed that one can read the Skolem functions
as winning strategies in a game, as in the entry on
logic and games.
In this way one can build up a compositional semantics, by assigning
to each formula a game. A sentence is true if and only if the player
Myself (in Hintikka's nomenclature) has a winning strategy for
the game assigned to the sentence. This game semantics agrees with
Tarski's on conventional first-order sentences. But it is far
from fully abstract; probably one should think of it as an operational
semantics, describing how a sentence is verified rather than whether
it is true.
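On a finite domain, Henkin's Skolem-function reading can be tested by brute force. The sketch below is a hypothetical illustration (the function name `henkin_true` and the tabular encoding of candidate Skolem functions are my assumptions, not Henkin's or Hintikka's): it searches for a pair \(f, g\) witnessing the second-order paraphrase \(\exists f\exists g \forall v\_1 \forall v\_3 R(v\_1,f(v\_1),v\_3,g(v\_3))\).

```python
from itertools import product

def henkin_true(domain, R):
    """Search for Skolem functions f, g such that for all v1, v3:
    R(v1, f(v1), v3, g(v3)). Each candidate function is encoded as a
    tuple of its values on the sorted domain."""
    d = sorted(domain)
    idx = {a: i for i, a in enumerate(d)}
    funcs = list(product(d, repeat=len(d)))   # all functions domain -> domain
    return any(
        all(R(v1, f[idx[v1]], v3, g[idx[v3]]) for v1 in d for v3 in d)
        for f in funcs for g in funcs
    )

# f = g = identity witnesses this R, so the sentence comes out true:
print(henkin_true({0, 1, 2}, lambda v1, v2, v3, v4: v2 == v1 and v4 == v3))
```

The independence expressed by the slash shows up directly: with \(R(v\_1,v\_2,v\_3,v\_4)\) taken as \(v\_4 = v\_1\), the linearly ordered sentence \(\forall v\_1 \exists v\_2 \forall v\_3 \exists v\_4 (v\_4 = v\_1)\) is true, but `henkin_true` returns false on any domain with two or more elements, since \(g\) may depend only on \(v\_3\).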
The problem of giving a Tarski-style semantics for Henkin's two
languages turned out to be different in the two cases. With the first,
the problem is that the syntax of the language is not well-founded:
there is an infinite descending sequence of subformulas as one strips
off the quantifiers one by one. Hence there is no hope of giving a
definition of satisfaction by recursion on the complexity of formulas.
The remedy is to note that the *explicit* form of
Tarski's truth definition in Section 2.1 above didn't
require a recursive definition; it needed only that the conditions on
the satisfaction relation \(S\) pin it down uniquely. For
Henkin's first style of language this is still true, though the
reason is no longer the well-foundedness of the syntax.
For Henkin's second style of language, at least in
Hintikka's notation (see the entry on
independence friendly logic),
the syntax is well-founded, but the displacement of the quantifier
scopes means that the usual quantifier clauses in the definition of
satisfaction no longer work. To get a compositional and fully abstract
semantics, one has to ask not what assignments of variables satisfy a
formula, but what *sets* of assignments satisfy the formula
'uniformly', where 'uniformly' means
'independent of assignments to certain variables, as shown by
the slashes on quantifiers inside the formula'. (Further details
of revisions of Tarski's truth definition along these lines are
in the entry on
dependence logic.)
Henkin's second example is of more than theoretical interest,
because clashes between the semantic and the syntactic scope of
quantifiers occur very often in natural languages.
## 1. The Logical Problem
Truth, perhaps even more than beauty and goodness, has been the target
of an extraordinary amount of philosophical dissection and
speculation. By comparison with truth, the more complex and much more
interesting concept of truthlikeness has only recently become the
subject of serious investigation. The logical problem of truthlikeness
is to give a consistent and materially adequate account of the
concept. But first, we have to make it plausible that there is a
coherent concept in the offing to be investigated.
### 1.1 What's the problem?
Suppose we are interested in the number of planets in the
solar system. With the demotion of Pluto to planetoid status, the
truth of this matter is that there are precisely 8 planets. Now, the
proposition *the number of planets in our solar system is 9*
may be false, but quite a bit closer to the truth than the proposition
that *the number of planets in our solar system is 9 billion*.
(One falsehood may be closer to the truth than another falsehood.) The
true proposition *the number of the planets is between 7 and 9
inclusive* is closer to the truth than the true proposition that
*the number of the planets is greater than or equal to 0*. (So
a truth may be closer to the truth than another truth.) Finally, the
proposition that *the number of the planets is either less than or
greater than 9* may be true but it is arguably not as close to the
*whole* truth as its highly accurate but strictly false
negation: *that there are 9 planets.*
This particular numerical example is admittedly simple, but a wide
variety of judgments of relative likeness to truth crop up both in
everyday parlance as well as in scientific discourse. While some
involve the relative accuracy of claims concerning the value of
numerical magnitudes, others involve the sharing of properties,
structural similarity, or closeness among putative laws.
Consider a non-numerical example, also highly simplified but quite
topical in the light of the recent rise in status of the concept of
*fundamentality*. Suppose you are interested in the truth about
which particles are fundamental. At the outset of your inquiry all you
know are various logical truths, like the tautology *either
electrons are fundamental or they are not*. Tautologies are pretty
much useless in helping you locate the truth about fundamental
particles. Suppose that the standard model is actually on the right
track. Then, learning that electrons are fundamental edges you a
little bit closer to your goal. It is by no means the complete truth
about fundamental particles, but it is a piece of it. If you go on to
learn that electrons, along with muons and tau particles, are a kind
of lepton and that all leptons are fundamental, you have presumably
edged closer.
If this is right, then some truths are closer to the truth about
fundamental particles than others.
The discovery that atoms are not fundamental, that they are in fact
composite objects, displaced the earlier hypothesis that *atoms are
fundamental*. For a while the proposition that *protons,
neutrons and electrons are the fundamental components of atoms*
was embraced, but unfortunately it too turned out to be false. Still,
this latter falsehood seems closer to the truth than its predecessor
(assuming, again, that the standard model is true). And even if the
standard model contains errors, as surely it does, it is presumably
closer to the truth about fundamental particles than these other
falsehoods.
So again, some falsehoods may be closer to the truth about fundamental
particles than other falsehoods.
As we have seen, a tautology is not a terrific truth locator, but if
you moved from the tautology that *electrons either are or are not
fundamental* to embrace the false proposition *that electrons
are not fundamental* you would have moved further from your
goal.
So, some truths are closer to the truth than some falsehoods.
But it is by no means obvious that all truths about fundamental
particles are closer to the whole truth than any falsehood. The false
proposition that *electrons, protons and neutrons are the
fundamental components of atoms*, for instance, may well be an
improvement over the tautology.
If this is right, certain falsehoods are closer to the truth than some
truths.
Investigations into the concept of truthlikeness only began in earnest
in the early nineteen sixties. Why was *truthlikeness* such a
latecomer to the philosophical scene? It wasn't until the latter
half of the twentieth century that mainstream philosophers gave up on
the Cartesian goal of infallible knowledge. The idea that we are quite
possibly, even probably, mistaken in our most cherished beliefs, that
they might well be just *false*, was mostly considered
tantamount to capitulation to the skeptic. By the middle of the
twentieth century, however, it was clear that many of our commonsense
beliefs, as well as previous scientific theories, are strictly
speaking, false. Further, the increasingly rapid turnover of
scientific theories suggested that, far from being certain, they are
ever vulnerable to refutation, and typically are eventually refuted
and replaced by some new theory. The history of inquiry is one of a
parade of refuted theories, replaced by other theories awaiting their
turn at the guillotine. (This is the "dismal induction",
see also the entry
on realism and theory change in science.)
*Realism* holds that the constitutive aim of inquiry is the
truth of some matter. *Optimism* holds that the history of
inquiry is one of progress with respect to its constitutive aim. But
*fallibilism* holds that our theories are false or very likely
to be false, and to be replaced by other false theories. To combine
these three ideas, we must affirm that some false propositions better
realize the goal of truth - are closer to the truth - than
others. We are thus stuck with the logical problem of
truthlikeness.
While a multitude of apparently different solutions to the problem
have been proposed, they can be classified into three main
approaches, each with its own heuristic - the *content*
approach, the *consequence* approach and the *likeness*
approach. Before exploring these possible solutions to the logical
problem, it could be useful to dispel a couple of common confusions,
since truthlikeness should not be conflated with either epistemic
probability or with vagueness. We discuss this latter notion in the
supplement Why truthlikeness is not probability or vagueness (see
also the entry on
vagueness);
as for the former, we shall discuss the difference between (expected)
truthlikeness and probability when discussing the epistemological
problem (§2).
### 1.2 The content approach
Karl Popper was the first philosopher to take the logical problem of
truthlikeness seriously enough to make an assay on it. This is not
surprising, since Popper was also the first prominent realist to
embrace a very radical fallibilism about science while also trumpeting
the epistemic superiority of the enterprise. In his early work, he
implied that the only kind of progress an inquiry can make consists in
falsification of theories. This is a little depressing, to say the
least. It is almost as depressing as the pessimistic induction. What
it lacks is a positive account of how a succession of falsehoods might
constitute positive cognitive progress. Perhaps this is why
Popper's early work received pretty short shrift from other
philosophers. If a miss is as good as a mile, and all we can ever
establish with confidence is that our inquiry has missed its target
once again, then epistemic pessimism seems inevitable. Popper
eventually realized that falsificationism is compatible with optimism
provided we have an acceptable notion of verisimilitude (or
truthlikeness). If some false hypotheses are closer to the truth than
others, then the history of inquiry may turn out to be one of progress
towards the goal of truth. Moreover, it may even be reasonable, on the
basis of our evidence, to conjecture that our theories are in fact
making such progress, even though we know they are all false or highly
likely to be false.
Popper saw clearly that the concept of truthlikeness should not be
confused with the concept of epistemic probability, and that it has
often been so confused. (See Popper 1963 for a history of the
confusion and the supplement
Why truthlikeness is not probability or vagueness
for an explanation of the difference between the two concepts.)
Popper's insight here was facilitated by his deep but largely
unjustified antipathy to epistemic probability. He thought that his
starkly falsificationist account favored bold, contentful theories.
Degree of informative content varies inversely with probability
- the greater the content the less likely a theory is to be
true. So if you are after theories which seem, on the evidence, to be
true, then you will eschew those which make bold - that is,
highly improbable - predictions. On this picture, the quest for
theories with high probability is simply misguided.
To see this distinction between truthlikeness and probability clearly,
and to articulate it, was one of Popper's most significant
contributions, not only to the debate about truthlikeness, but to
philosophy of science and logic in general. However, his deep
antagonism to probability, combined with his love of boldness, was
both a blessing and a curse. The blessing: it led him to produce not
only the first interesting and important account of truthlikeness, but
to initiate an approach to the problem in terms of content. The curse:
content alone, as Popper envisaged it, is insufficient to characterize
truthlikeness.
Popper made the first attempt to solve the problem in his famous
collection *Conjectures and Refutations*. As a great admirer of
Tarski's assay on the concept of truth, he modelled his theory
of truthlikeness on Tarski's theory. First, let a matter for
investigation be circumscribed by a formalized language \(L\) adequate
for discussing it. Tarski showed us how each possible world, or model
of the language, induces a partition of sentences of \(L\) into those
that are true and those that are false. The set of all sentences true
in the actual world is thus a complete true account of the world, as
far as that language goes. It is aptly called the Truth, \(T\). \(T\)
is the target of the investigation couched in \(L\). It is the theory
(relative to the resources in \(L)\) that we are seeking. If
truthlikeness is to make sense, theories other than \(T\), even false
theories, come more or less close to capturing \(T\).
\(T\), the Truth, is a theory only in the technical Tarskian sense,
not in the ordinary everyday sense of that term. It is a set of
sentences closed under the consequence relation: \(T\) may not be
finitely axiomatizable, or even axiomatizable at all. However, it is a
perfectly good set of sentences all the same. In general, we will
follow the Tarski-Popper usage here and call any set of sentences
closed under consequence a *theory*, and we will assume that
each proposition we deal with is identified with a theory in this
sense. (Note that theory \(A\) logically entails theory \(B\) just in
case \(B\) is a subset of \(A\).)
The complement of \(T\), the set of false sentences \(F\), is not a
theory even in this technical sense. Since falsehoods always entail
truths, \(F\) is not closed under the consequence relation. (This may
be the reason why we have no expression like *the Falsth*: the
set of false sentences does not describe a possible alternative to the
actual world.) But \(F\) too is a perfectly good set of sentences. The
consequences of any theory \(A\) that can be formulated in \(L\) will
thus divide between \(T\) and \(F\). Popper called the intersection of
\(A\) and \(T\), the *truth content* of \(A\) (\(A\_T\)), and
the intersection of \(A\) and \(F\), the *falsity content* of
\(A\) (\(A\_F\)). Any theory \(A\) is thus the union of its
non-overlapping truth content and falsity content. Note that since
every theory entails all logical truths, these will constitute a
special set, at the center of \(T\), which will be included in every
theory, whether true or false.
![Diagram 1](diagram1.png)
Diagram 1. Truth and falsity contents of
false theory \(A\)
A false theory will cover some of \(F\), but because every false
theory has true consequences, including all logical truths, it will
also overlap with \(T\) (Diagram 1).
A true theory, however, will only overlap \(T\) (Diagram 2):
![Diagram 2](diagram2.png)
Diagram 2. True theory \(A\) is
identical to its own truth content
Amongst true theories, then, it seems that the more true sentences
that are entailed, the closer we get to \(T\), hence the more
truthlike. Set theoretically that simply means that, where \(A\) and
\(B\) are both true, \(A\) will be more truthlike than \(B\) just in
case \(B\) is a proper subset of \(A\) (which for true theories means
that \(B\_T\) is a proper subset of \(A\_T\)). Call this principle:
*the value of content for truths*.
![Diagram 3](diagram3.png)
Diagram 3. True theory \(A\) has more
truth content than true theory \(B\)
This essentially syntactic account of truthlikeness has some nice
features. It induces a partial ordering of truths, with the whole
Truth \(T\) at the top of the ordering: \(T\) is closer to the Truth
than any other true theory. The set of logical truths is at the
bottom: further from the Truth than any other true theory. In between
these two extremes, true theories are ordered simply by logical
strength: the more logical content, the closer to the Truth. Since
probability varies inversely with logical strength, amongst truths the
theory with the greatest truthlikeness \((T)\) must have the smallest
probability, and the theory with the largest probability (the logical
truth) is the furthest from the Truth. Popper made a simple and
perhaps plausible generalization of this. Just as truth content
(coverage of \(T)\) counts in favor of truthlikeness, falsity content
(coverage of \(F)\) counts against. In general then, a theory \(A\) is
closer to the truth if it has more truth content without engendering
more falsity content, or has less falsity content without sacrificing
truth content (diagram 4):
![Diagram 4](diagram4.png)
Diagram 4. False theory \(A\) closer to
the Truth than false theory \(B\)
The generalization of the truth content comparison, by incorporating
falsity content comparisons, also has some nice features. It preserves
the comparisons of true theories mentioned above. The truth content
\(A\_T\) of a false theory \(A\) (itself a theory in the Tarskian
sense) will clearly be closer to the truth than \(A\) (Diagram 1). And
the whole truth \(T\) will be closer to the truth than any falsehood
\(B\) because the truth content of \(B\) must be contained within
\(T\), and the falsity content of \(T\) (the empty class) must be
properly contained within the non-empty falsity content of \(B\).
Despite its attractive features, the account has a couple of
disastrous consequences. Firstly, since a falsehood has some false
consequences, and no truth has any, it follows that no falsehood can
be as close to the truth as a logical truth - the weakest of all
truths. A logical truth leaves the location of the truth wide open, so
it is rather worthless as an approximation to the whole truth. On
Popper's account, no falsehood can ever be more worthwhile than
a worthless logical truth. (We could call this result *the absolute
worthlessness of falsehoods*).
Furthermore, it is impossible to add a true consequence to a false
theory without thereby adding additional false consequences (or
subtract a false consequence without subtracting true consequences).
So the account entails that no false theory is closer to the truth
than any other. We could call this result *the relative
worthlessness of all falsehoods*. These worthlessness results were
proved independently by Pavel Tichý and David Miller (Miller
1974, and Tichý 1974) - for a proof, see the supplement
on Why Popper's definition of truthlikeness fails: the Tichý-Miller theorem.
It is tempting (and Popper was so tempted) to retreat in the face of
these results to something like the comparison of truth contents
alone. That is to say, \(A\) is as close to the truth as \(B\) if
\(B\_T\) is contained in \(A\_T\), and \(A\) is closer to the truth than
\(B\) just in case \(B\_T\) is properly contained in \(A\_T\). Call this
the *Simple Truth Content account*.
This Simple Truth Content account preserves what many consider to be
the chief virtue of Popper's account: the value of content for
truths. And while it delivers the absolute worthlessness of falsehoods
(no falsehood is closer to the truth than a tautology) it avoids the
relative worthlessness of falsehoods. If \(A\) and \(B\) are both
false, then \(A\_T\) may well properly contain \(B\_T\). But that holds
if and only if \(A\) is logically stronger than \(B\). That is to say,
a false proposition is the closer to the truth the stronger it is.
According to this principle - call it *the value of content
for falsehoods* - the false proposition that *there are
nine planets, and all of them are made of green cheese* is more
truthlike than the false proposition *there are nine planets*.
And so once one knows that a certain theory is false one can be
confident that tacking on any old arbitrary proposition, no matter how
inaccurate it is, will lead us inexorably closer to the truth. This is
sometimes called the *child's play* objection. Among
false theories, *brute logical strength* becomes the sole
criterion of a theory's likeness to truth.
Even though Popper's particular proposals were flawed, his idea
of comparing truth-content and falsity-content is nevertheless worth
exploring. Several philosophers have developed variations on the idea.
Some stay within Popper's essentially syntactic paradigm,
elucidating content in terms of consequence classes (e.g.,
Newton-Smith 1981; Schurz and Weingartner 1987, 2010; Cevolani and Festa
2020). Others have switched to a semantic conception of content,
construing semantic content in terms of classes of possibilities, and
searching for a plausible theory of distance between those.
One variant of this approach takes the class of models of a language as
a surrogate for possible states of affairs (Miller 1978a). The other
utilizes a semantics of incomplete possible states like those favored
by structuralist accounts of scientific theories (Kuipers 1987b,
Kuipers 2019). The idea which these accounts have in common is that
the distance between two propositions \(A\) and \(B\) is measured by
the *symmetric difference* \(A\mathbin{\Delta} B\) of the two
sets of possibilities: \((A - B)\cup(B - A)\). Roughly speaking, the
larger the symmetric difference, the greater the distance between the
two propositions. Symmetric differences might be compared
qualitatively - by means of set-theoretic inclusion - or
quantitatively, using some kind of probability measure. Both can be
shown to have the general features of a measure of distance.
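As a toy illustration of the symmetric-difference idea (a sketch under illustrative assumptions: a finite space of eight "weather" worlds and a uniform measure, neither of which appears in Miller's or Kuipers' texts), propositions construed as sets of possibilities can be compared both qualitatively, by inclusion of symmetric differences, and quantitatively, by their total measure:

```python
def sym_diff(A, B):
    """Symmetric difference (A - B) ∪ (B - A) of two sets of possibilities."""
    return (A - B) | (B - A)

def qualitatively_closer(A, B, T):
    """A is closer to T than B iff sym_diff(A, T) is properly included in sym_diff(B, T)."""
    return sym_diff(A, T) < sym_diff(B, T)

def quantitative_distance(A, T, measure):
    """Distance as the total measure of the symmetric difference with the truth."""
    return sum(measure[w] for w in sym_diff(A, T))

# Toy space: 8 worlds coded by (hot, rainy, windy) truth values.
worlds = {(h, r, w) for h in (0, 1) for r in (0, 1) for w in (0, 1)}
uniform = {w: 1 / 8 for w in worlds}
T = {(1, 1, 1)}                                   # the whole truth: hot, rainy, windy
hot_and_rainy = {w for w in worlds if w[0] and w[1]}
tautology = worlds

print(qualitatively_closer(hot_and_rainy, tautology, T))   # True
print(quantitative_distance(hot_and_rainy, T, uniform))    # 0.125
print(quantitative_distance(tautology, T, uniform))        # 0.875
```

On both readings the strong truth *hot and rainy* comes out closer to the whole truth than the tautology, in line with the value of content for truths.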
The fundamental problem with the content approach lies not in the way
it has been articulated, but rather in the basic underlying
assumption: that truthlikeness is a function of just two variables
- content and truth value. This assumption has several rather
problematic consequences.
Firstly, any given proposition \(A\) can have only two degrees of
verisimilitude: one in case it is false and the other in case it is
true. This is obviously wrong. A theory can be false in very many
different ways. The proposition that *there are eight planets*
is false whether there are nine planets or a thousand planets, but its
degree of truthlikeness is much higher in the first case than in the
latter. Secondly, if we combine the value of content for truths and
the value of content for falsehoods, then if we fix truth value,
verisimilitude will vary only according to amount of content. So, for
example, two equally strong false theories will have to have the same
degree of verisimilitude. That's pretty far-fetched. That
*there are ten planets* and that *there are ten billion
planets* are (roughly) equally strong, and both are false in fact,
but the latter seems much further from the truth than the former.
Finally, how might strength determine verisimilitude amongst false
theories? There seem to be just two plausible candidates: that
verisimilitude increases with increasing strength (the principle of
the value of content for falsehoods) or that it decreases with
increasing strength (the principle of the disvalue of content for
falsehoods). Both proposals are at odds with attractive judgements and
principles, which suggest that the original content approach is in
need of serious revision (see, e.g., Kuipers 2019 for a recent
proposal).
### 1.3 The Consequence Approach
Popper crafted his initial proposal in terms of the true and false
consequences of a theory. Any sentence at all that follows from a
theory is counted as a consequence that, if true, contributes to its
overall truthlikeness, and if false, detracts from that. But it has
struck many that this both involves an enormous amount of double
counting, and that it is the indiscriminate counting of arbitrary
consequences that lies behind the Tichý-Miller trivialization
result.
Consider a very simple framework with three primitive sentences: \(h\)
(for the state *hot*), \(r\) (for *rainy*) and \(w\)
(for *windy*). This framework generates a very small space of
eight possibilities. The eight maximal conjunctions (like \(h \amp r
\amp w, {\sim}h \amp r \amp w,\) etc.) of the three primitive
sentences and of their negations express those possibilities.
Suppose that in fact it is hot, rainy and windy (expressed by the
maximal conjunction \(h \amp r \amp w)\). Then the claim that it is
cold, dry and still (expressed by the sentence \({\sim}h \amp{\sim}r
\amp{\sim}w)\) is further from the truth than the claim that it is
cold, rainy and windy (expressed by the sentence \({\sim}h \amp r \amp
w)\). And the claim that it is cold, dry and windy (expressed by the
sentence \({\sim}h \amp{\sim}r \amp w)\) is somewhere between the two.
These kinds of judgements, which seem both innocent and intuitively
correct, Popper's theory cannot accommodate. And if they are to
be accommodated we cannot treat all true and false consequences alike.
For the three false claims mentioned here have exactly the same number
of true and false consequences (this is the problem we called *the
relative worthlessness of all falsehoods*).
Clearly, if we are going to measure closeness to truth by counting
true and false consequences, some true consequences should count more
than others. For example, \(h\) and \(r\) are both true, and
\({\sim}h\) and \({\sim}r\) are false. The former should surely count
in favor of a claim, and the latter against. But
\({\sim}h\rightarrow{\sim}r\) is true and \(h\rightarrow{\sim}r\) is
false. After we have counted the truth \(h\) in favor of a
claim's truthlikeness and the falsehood \({\sim}r\) against it,
should we also count the true consequence
\({\sim}h\rightarrow{\sim}r\) in favor, and the falsehood
\(h\rightarrow{\sim}r\) against? Surely this is both unnecessary and
misleading. And it is precisely counting sentences like these that
renders Popper's account susceptible to the Tichy-Miller
argument.
According to the consequence approach, Popper was right in thinking
that truthlikeness depends on the relative sizes of classes of true
and false consequences, but erred in thinking that all consequences of
a theory count the same. Some consequences are *relevant*, some
aren't. Let \(R\) be some criterion of relevance of
consequences; let \(A\_R\) be the set of *relevant* consequences
of \(A\). Whatever the criterion \(R\) is it has to satisfy the
constraint that \(A\) be recoverable from (and hence equivalent to)
\(A\_R\). Popper's account is the limiting one - all
consequences are relevant. (Popper's relevance criterion is the
empty one, \(P\), according to which \(A\_P\) is just \(A\) itself.)
The *relevant truth content of A* (abbreviated \(A\_R^T\)) can
be defined as \(A\_R\cap T\) (or \(A\cap T\_R\)), and similarly the
*relevant falsity content of* \(A\) can be defined as \(A\_R\cap
F\). Since \(A\_R = (A\_R\cap T)\cup(A\_R\cap F)\) it follows that the
union of true and false relevant consequences of \(A\) is equivalent
to \(A\). And where \(A\) is true \(A\_R\cap F\) is empty, so that \(A\) is
equivalent to \(A\_R\cap T\) alone.
With this restriction to relevant consequences we can basically apply
Popper's definitions: one theory is more truthlike than another
if its relevant truth content is larger and its relevant falsity
content no larger; or its relevant falsity content is smaller, and its
relevant truth content is no smaller.
This idea was first explored by Mortensen in his 1983, but he
abandoned the basic idea as unworkable. Subsequent proposals within
the broad program have been offered by Burger and Heidema 1994, Schurz
and Weingartner 1987 and 2010, and Gemes 2007. (Gerla 2007 also uses
the notion of the relevance of a "test" or factor, but his
account is best located more squarely within the likeness
approach.)
One possible relevance criterion that the \(h\)-\(r\)-\(w\) framework
might suggest is *atomicity*. This amounts to identifying
relevant consequences as *basic* ones, i.e., atomic sentences
or their negations (Cevolani, Crupi and Festa 2011; Cevolani, Festa
and Kuipers 2013). But even if we could avoid the problem of saying
what it is for a sentence to be atomic, since many distinct
propositions imply the same atomic sentences, this criterion would not
satisfy the requirement that \(A\) be equivalent to \(A\_R\). For
example, \((h\vee r)\) and \(({\sim}h\vee{\sim}r)\), like tautologies,
imply no atomic sentences at all. This latter problem can be solved by
resorting to the notion of *partial consequence*;
interestingly, the resulting account becomes virtually identical to
one version of the likeness approach (Cevolani and Festa 2020).
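The failure of recoverability just noted can be checked directly in
the \(h\)-\(r\)-\(w\) framework. The following Python sketch (an
illustration, not part of the original entry; the encoding of
sentences as predicates over the eight worlds is an artifact of the
sketch) computes the basic consequences of a sentence under the
atomicity criterion, and confirms that \((h\vee r)\) and
\(({\sim}h\vee{\sim}r)\) have none at all:

```python
from itertools import product

# Worlds in the h-r-w framework: truth-value assignments to (h, r, w).
WORLDS = list(product([True, False], repeat=3))

def rng(sentence):
    """Range of a sentence: the set of worlds at which it is true."""
    return frozenset(w for w in WORLDS if sentence(*w))

def entails(A, B):
    """A entails B iff the range of A is included in the range of B."""
    return rng(A) <= rng(B)

# The six basic sentences: the atoms and their negations.
BASIC = {
    'h': lambda h, r, w: h,    '~h': lambda h, r, w: not h,
    'r': lambda h, r, w: r,    '~r': lambda h, r, w: not r,
    'w': lambda h, r, w: w,    '~w': lambda h, r, w: not w,
}

def basic_consequences(A):
    """Relevant consequences of A under the atomicity criterion."""
    return {name for name, lit in BASIC.items() if entails(A, lit)}

# (h v r) and (~h v ~r) entail no basic sentences, so neither is
# recoverable from its class of "relevant" consequences...
print(basic_consequences(lambda h, r, w: h or r))              # set()
print(basic_consequences(lambda h, r, w: (not h) or (not r)))  # set()
# ...whereas a conjunction of literals is fully recoverable:
print(basic_consequences(lambda h, r, w: h and not r))         # {'h', '~r'}
```

Since the two disjunctions have the same (empty) set of basic
consequences without being equivalent, the constraint that \(A\) be
recoverable from \(A\_R\) fails for atomicity.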
Burger and Heidema 1994 compare theories by positive and negative
sentences. A positive sentence is one that can be constructed out of
\(\amp,\) \(\vee\) and any true basic sentence. A negative sentence is
one that can be constructed out of \(\amp,\) \(\vee\) and any false
basic sentence. Call a sentence *pure* if it is either positive
or negative. If we take the relevance criterion to be *purity*,
and combine that with the relevant consequence schema above, we have
Burger and Heidema's proposal, which yields a reasonable set of
intuitive judgments. Unfortunately purity (like atomicity) does not
quite satisfy the constraint that \(A\) be equivalent to the class of
its relevant consequences. For example, if \(h\) and \(r\) are both
true then \(({\sim}h\vee r)\) and \((h\vee{\sim}r)\) both have the
same pure consequences (namely, none).
Schurz and Weingartner 2010 use the following notion of relevance
\(S\): being equivalent to a disjunction of atomic propositions or
their negations. With this criterion they can accommodate a range of
intuitive judgments in the simple weather framework that
Popper's account cannot.
For example, where \(\gt\_S\) is the relation of *greater
S-truthlikeness* we capture the following relations among false
claims, which, on Popper's account, are mostly
incommensurable:
\[ (h \amp{\sim}r) \gt\_S ({\sim}r) \gt\_S ({\sim}h \amp{\sim}r). \]
and
\[ (h\vee r) \gt\_S ({\sim}r) \gt\_S ({\sim}h\vee{\sim}r) \gt\_S ({\sim}h \amp{\sim}r). \]
The relevant consequence approach faces three major hurdles.
The first is an extension problem: the approach does produce some
intuitively acceptable results in a finite propositional framework,
but it needs to be extended to more realistic frameworks - for
example, first-order and higher-order frameworks (see Gemes 2007 for
an attempt along these lines).
The second is that, like Popper's original proposal, it judges
no false proposition to be closer to the truth than any truth,
including logical truths. Schurz and Weingartner (2010) have answered
this objection by quantitatively extending their qualitative account
by assigning weights to relevant consequences and summing; one problem
with this is that it assumes finite consequence classes.
The third involves the language-dependence of any adequate relevance
criterion. This problem will be outlined and discussed below in
connection with the likeness approach (§1.4.3).
### 1.4 The Likeness Approach
In the wake of the difficulties facing Popper's approach,
two philosophers, working quite independently, suggested a radically
different approach: one which takes the *likeness* in
truthlikeness seriously (Tichy 1974, Hilpinen 1976). This shift
from content to likeness was also marked by an immediate shift from
Popper's essentially syntactic approach (something it shares
with the consequence program) to a semantic approach.
Traditionally the semantic contents of sentences have been taken to be
non-linguistic, or rather non-syntactic, items -
*propositions*. What propositions are is highly contested, but
most agree that a proposition carves the class of possibilities into
two sub-classes - those in which the proposition is true and
those in which it is false. Call the class of worlds in which the
proposition is true its *range*. Some have proposed that
propositions be *identified* with their ranges (for example,
David Lewis, in his 1986). This is implausible since, for example, the
informative content of \(7+5=12\) seems distinct from the informative
content of \(12=12\), which in turn seems distinct from the
informative content of Gödel's first incompleteness theorem
- and yet all three have the same range: they are all true in
all possible worlds. Clearly, if semantic content is supposed to be
sensitive to informative content, classes of possible worlds are
not discriminating enough. We need something more fine-grained for a
full theory of semantic content.
Despite this, the range of a proposition is certainly an important
aspect of informative content, and it is not immediately obvious why
truthlikeness should be sensitive to differences in the way a
proposition picks out its range. (Perhaps there are cases of logical
falsehoods some of which seem further from the truth than others. For
example \(7+5=113\) might be considered further from the truth than
\(7+5=13\) though both have the same range - namely, the empty
set of worlds; see Sorensen 2007.) But as a first approximation, we
will assume that it is not hyperintensional and that logically
equivalent propositions have the same degree of truthlikeness. The
proposition that *the number of planets is eight* for example,
should have the same degree of truthlikeness as the proposition that
*the square of the number of the planets is sixty four*.
Leaving aside the controversy over the nature of possible worlds, we
shall call the complete collection of possibilities, given some array
of features, the *logical space*, and call the array of
properties and relations which underlie that logical space, the
*framework* of the space. Familiar logical relations and
operations correspond to well-understood set-theoretic relations and
operations on ranges. The range of the conjunction of two propositions
is the intersection of the ranges of the two conjuncts. Entailment
corresponds to the subset relation on ranges. The actual world is a
single point in logical space - a complete specification of
every matter of fact (with respect to the framework of features)
- and a proposition is true if its range contains the actual
world, false otherwise. The whole Truth is a true proposition that is
also complete: it entails all true propositions. The range of the
Truth is none other than the singleton of the actual world. That
singleton is the target, the bullseye, the thing at which the most
comprehensive inquiry is aiming.
Without additional structure on the logical space we have just three
factors for a theorist of truthlikeness to work with - the size
of a proposition (content factor), whether it contains the actual
world (truth factor), and which propositions it implies (consequence
factor). The likeness approach requires some additional structure to
the logical space. For example, worlds might be more or less
*like* other worlds. There might be a betweenness relation
amongst worlds, or even a fully-fledged distance metric. If
that's the case, we can start to see how one proposition might
be closer to the Truth - the proposition whose range contains
just the actual world - than another. The core of the likeness
approach is that truthlikeness supervenes on the likeness between
worlds.
The likeness theorist has two tasks: firstly, making it plausible that
there is an appropriate likeness or distance function on worlds; and
secondly, extending likeness between individual worlds to likeness of
propositions (i.e., sets of worlds) to the actual world. Suppose, for
example, that worlds are arranged in similarity spheres nested around
the actual world, familiar from the Stalnaker-Lewis approach to
counterfactuals. Consider Diagram 5.
![Diagram 5: nested similarity spheres around the actual world](diagram5.png)
Diagram 5. Verisimilitude by similarity circles
The bullseye is the actual world and the small sphere which includes
it is \(T\), the Truth. The nested spheres represent likeness to the
actual world. A world is less like the actual world the larger the
first sphere of which it is a member. Propositions \(A\) and \(B\) are
false, \(C\) and \(D\) are true. \(A\) carves out a class of worlds which
are rather close to the actual world - all within spheres two to
four - whereas \(B\) carves out a class rather far from the
actual world - all within spheres five to seven. Intuitively
\(A\) is closer to the bullseye than is \(B\).
The largest sphere which does not overlap at all with a proposition is
plausibly a measure of how close the proposition is to being true.
Call that the *truth factor*. A proposition \(X\) is closer to
being true than \(Y\) if the truth factor of \(X\) is included in the
truth factor of \(Y\). The truth factor of \(A\), for example, is the
smallest non-empty sphere, \(T\) itself, whereas the truth factor of
\(B\) is the fourth sphere, of which \(T\) is a proper subset: so
\(A\) is closer to being true than \(B\).
If a proposition includes the bullseye, then of course it is true
simpliciter, and its truth factor is the best possible one: the empty set.
So all true propositions are equally close to being true. But
truthlikeness is not just a matter of being close to being true. The
tautology, \(D,\) \(C\) and the Truth itself are equally true, but in
that order they increase in their closeness to the whole truth.
Taking a leaf out of Popper's book, Hilpinen argued that
closeness to the whole truth is in part a matter of degree of
informativeness of a proposition. In the case of the true
propositions, this correlates roughly with the smallest sphere which
totally includes the proposition. The further out the outermost
sphere, the less informative the proposition is, because the larger
the area of the logical space which it covers. So, in a way which
echoes Popper's account, we could take truthlikeness to be a
combination of a truth factor (given by the likeness of that world in
the range of a proposition that is closest to the actual world) and a
content factor (given by the likeness of that world in the range of a
proposition that is furthest from the actual world):
>
> \(A\) is closer to the truth than \(B\) if and only if \(A\) does as
> well as \(B\) on both truth factor and content factor, and better on
> at least one of those.
>
Applying Hilpinen's definition we capture two more particular
judgements, in addition to those already mentioned, that seem
intuitively acceptable: that \(C\) is closer to the truth than \(A\),
and that \(D\) is closer than \(B\). (Note, however, that we have here
a partial ordering: \(A\) and \(D\), for example, are not ranked.) We
can derive from this various apparently desirable features of the
relation *closer to the truth*: for example, that the relation
is transitive, asymmetric and irreflexive; that the Truth is closer to
the Truth than any other theory; that the tautology is at least as far
from the Truth as any other truth; that one cannot make a true theory
worse by strengthening it by a truth (a weak version of the value of
content for truths); that a falsehood is not necessarily improved by
adding another falsehood, or even by adding another truth (a
repudiation of the value of content for falsehoods).
But there are also some worrying features here. While it avoids the
relative worthlessness of falsehoods, Hilpinen's account, just
like Popper's, entails the absolute worthlessness of all
falsehoods: no falsehood is closer to the truth than any truth. So,
for example, Newton's theory is deemed to be no more truthlike,
no closer to the whole truth, than the tautology.
Characterizing Hilpinen's account as a combination of a truth
factor and an information factor seems to mask its quite radical
departure from Popper's account. The incorporation of similarity
spheres signals a fundamental break with the pure content approach,
and opens up a range of possible new accounts: what such accounts have
in common is that the truthlikeness of a proposition is a
*non-trivial function of the likeness to the actual world of worlds
in the range of the proposition*.
There are three main problems for any concrete proposal within the
likeness approach. The first concerns an account of likeness between
states of affairs - in what does this consist and how can it be
analyzed or defined? The second concerns the dependence of the
truthlikeness of a proposition on the likeness of worlds in its range
to the actual world: what is the correct function? (This can be called
"the extension problem".) And finally, there is the famous
problem of "translation variance" or "framework
dependence" of judgements of likeness and of truthlikeness. This
last problem will be taken up in §1.4.3.
#### 1.4.1 Likeness of worlds in a simple propositional framework
One objection to Hilpinen's proposal (like Lewis's
proposal for counterfactuals) is that it assumes the similarity
relation on worlds as a primitive, there for the taking. At the end of
his 1974 paper, Tichy not only suggested the use of similarity
rankings on worlds, but also provided a ranking in propositional
frameworks and indicated how to generalize this to more complex
frameworks.
Examples and counterexamples in Tichy 1974 are exceedingly
simple, utilizing the little propositional framework introduced above,
with three primitives - \(h\) (for the state *hot*),
\(r\) (for *rainy*) and \(w\) (for *windy*).
Corresponding to the eight members of the logical space generated by
distributions of truth values through the three basic conditions,
there are eight maximal conjunctions (or constituents):
| | | | | |
| --- | --- | --- | --- | --- |
| \(w\_1\) | \(h \amp r \amp w\) | | \(w\_5\) | \({\sim}h \amp r \amp w\) |
| \(w\_2\) | \(h \amp r \amp{\sim}w\) | | \(w\_6\) | \({\sim}h \amp r \amp{\sim}w\) |
| \(w\_3\) | \(h \amp{\sim}r \amp w\) | | \(w\_7\) | \({\sim}h \amp{\sim}r \amp w\) |
| \(w\_4\) | \(h \amp{\sim}r \amp{\sim}w\) | | \(w\_8\) | \({\sim}h \amp{\sim}r \amp{\sim}w\) |
Worlds differ in the distributions of these traits, and a natural,
albeit simple, suggestion is to measure the likeness between two
worlds by the number of agreements on traits. This is tantamount to
taking distance to be measured by the size of the symmetric difference
of generating states - the so-called city-block measure. As is
well known, this will generate a genuine metric, in particular one
satisfying the triangle inequality. If \(w\_1\) is the actual world
this immediately induces a system of nested spheres, but one in which
the spheres come with numbers attached:
![Diagram 6: numbered similarity circles for the weather space](diagram6.png)
Diagram 6. Similarity circles for the weather space
Those worlds orbiting on the sphere \(n\) are of distance \(n\) from
the actual world.
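The distance measure and the resulting sphere structure can be
verified with a short Python sketch (an illustration, not part of the
original entry; worlds are encoded as Boolean triples):

```python
from itertools import product

# Worlds as truth-value triples for (h, r, w); the actual world is w1.
WORLDS = list(product([True, False], repeat=3))
ACTUAL = (True, True, True)   # h & r & w

def distance(u, v):
    """City-block distance: number of atomic states on which u, v differ."""
    return sum(1 for x, y in zip(u, v) if x != y)

# Group worlds into "spheres" by their distance from the actual world.
spheres = {}
for w in WORLDS:
    spheres.setdefault(distance(w, ACTUAL), []).append(w)

for d in sorted(spheres):
    print(d, len(spheres[d]))
# sphere 0 holds 1 world, spheres 1 and 2 hold 3 worlds each,
# and sphere 3 holds the single antipodal world ~h & ~r & ~w.
```

The 1-3-3-1 pattern of sphere sizes is just the cube structure of
Diagram 7 viewed distance-by-distance from one vertex.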
In fact, the structure of the space is better represented not by
similarity circles, but rather by a three-dimensional cube:
![Diagram 7: the weather space as a three-dimensional cube](diagram7.png)
Diagram 7. The three-dimensional weather space
This way of representing the space makes a clearer connection between
distances between worlds and the role of the atomic propositions in
generating those distances through the city-block metric. It also
eliminates the inaccuracies that the similarity-circle diagram
suggests about the relations among the worlds not at the center.
#### 1.4.2 The likeness of a proposition to the truth
Now that we have numerical distances between worlds, numerical
measures of propositional likeness to, and distance from, the truth
can be defined as some function of the distances, from the actual
world, of worlds in the range of a proposition. But which function is
the right one? This is the *extension problem*.
Suppose that \(h \amp r \amp w\) is the whole truth about the weather.
Following Hilpinen, we might consider overall distance of a
proposition from the truth to be some function of the distances from
actuality of two extreme worlds. Let *truth*\((A)\) be the
truth value of \(A\) in the actual world. Let *min*\((A)\) be
the distance from actuality of that world in \(A\) closest to the
actual world, and *max*\((A)\) be the distance from actuality
of that world in \(A\) furthest from the actual world. Table 1 displays
the values of the *min* and *max* functions for some
representative propositions.
| | | | |
| --- | --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***min*\((\boldsymbol{A})\)** | ***max*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 | 0 |
| \(h \amp r\) | true | 0 | 1 |
| \(h \amp r \amp{\sim}w\) | false | 1 | 1 |
| \(h\) | true | 0 | 2 |
| \(h \amp{\sim}r\) | false | 1 | 2 |
| \({\sim}h\) | false | 1 | 3 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2 | 2 |
| \({\sim}h \amp{\sim}r\) | false | 2 | 3 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3 | 3 |
Table 1. The *min* and
*max* functions.
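The *min* and *max* values of Table 1 can be reproduced mechanically,
and *min-max-average* is then just their mean. The following Python
sketch (illustrative only, not part of the original entry) does this
for some of the propositions in the table:

```python
from itertools import product

WORLDS = list(product([True, False], repeat=3))  # truth values of (h, r, w)
ACTUAL = (True, True, True)                      # h & r & w is the truth

def distance(u, v):
    """City-block distance between two worlds."""
    return sum(1 for x, y in zip(u, v) if x != y)

def min_max(sentence):
    """Distances of the nearest and farthest worlds in the range."""
    ds = [distance(w, ACTUAL) for w in WORLDS if sentence(*w)]
    return min(ds), max(ds)

def min_max_average(sentence):
    """Niiniluoto's simplest proposal: the mean of min and max."""
    lo, hi = min_max(sentence)
    return (lo + hi) / 2

print(min_max(lambda h, r, w: h and r))          # (0, 1)
print(min_max(lambda h, r, w: h))                # (0, 2)
print(min_max(lambda h, r, w: not h))            # (1, 3)
print(min_max(lambda h, r, w: not h and not r))  # (2, 3)
```

Note that *min-max-average* renders all propositions comparable, as the
text observes, since it assigns every proposition a single number.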
The simplest proposal (made first in Niiniluoto 1977) would be to take
the average of the *min* and the *max* (call this
measure *min-max-average*). This would remedy a rather glaring
shortcoming which Hilpinen's qualitative proposal shares with
Popper's proposal, namely that no falsehood is closer to the
truth than any truth (even the worthless tautology). This numerical
equivalent of Hilpinen's proposal renders all propositions
comparable for truthlikeness, and some falsehoods it deems more
truthlike than some truths.
But now that we have distances between all worlds, why take only the
extreme worlds in a proposition into account? Why shouldn't
every world in a proposition potentially count towards its overall
distance from the actual world?
A simple measure which does count all worlds is average distance from
the actual world. *Average* delivers all of the particular
judgements we used above to motivate Hilpinen's proposal in the
first place, and in conjunction with the simple metric on worlds it
delivers the following ordering of propositions in our simple
framework:
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***average*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 |
| \(h \amp r\) | true | 0.5 |
| \(h \amp r \amp{\sim}w\) | false | 1.0 |
| \(h\) | true | 1.0 |
| \(h \amp{\sim}r\) | false | 1.5 |
| \({\sim}h\) | false | 2.0 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2.0 |
| \({\sim}h \amp{\sim}r\) | false | 2.5 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3.0 |
Table 2. The *average*
function.
This ordering looks promising. Propositions are closer to the truth
the more they get the basic weather traits right, further away the
more mistakes they make. A false proposition may be made either worse
or better by strengthening \((h \amp r \amp{\sim}w\) is better than
\({\sim}w\) while \({\sim}h \amp{\sim}r \amp{\sim}w\) is worse). A
false proposition (like \(h \amp r \amp{\sim}w)\) can be at least as
close to the truth as some true propositions (like \(h)\). And so on.
These judgments may be sufficient to show that *average* is
superior to *min-max-average*, at least on this group of
propositions, but they are clearly not sufficient to show that
averaging is the right procedure. What we need are some
straightforward and compelling general desiderata which jointly yield
a single correct function. In the absence of such a proof, we can only
resort to case by case comparisons.
Furthermore, *average* has not found universal favor. Notably,
there are pairs of true propositions such that *average* deems
the stronger of the two to be the further from the truth. According to
*average*, the tautology is not the true proposition furthest
from the truth. Averaging thus violates the Popperian principle of the
value of content for truths; see Table 3 for an example.
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***average*\((\boldsymbol{A})\)** |
| \(h \vee{\sim}r \vee w\) | true | 1.4 |
| \(h \vee{\sim}r\) | true | 1.5 |
| \(h \vee{\sim}h\) | true | 1.5 |
Table 3. *average* violates the
value of content for truths.
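The violation in Table 3 can be checked by computing plain averages of
city-block distances. In this Python sketch (an illustration, not part
of the original entry), note that \(h\vee{\sim}r\) entails
\(h\vee{\sim}r\vee w\), so the stronger truth scores strictly worse:

```python
from itertools import product

WORLDS = list(product([True, False], repeat=3))  # truth values of (h, r, w)
ACTUAL = (True, True, True)                      # h & r & w is the truth

def distance(u, v):
    """City-block distance between two worlds."""
    return sum(1 for x, y in zip(u, v) if x != y)

def average(sentence):
    """Mean distance from actuality of the worlds in the range."""
    ds = [distance(w, ACTUAL) for w in WORLDS if sentence(*w)]
    return sum(ds) / len(ds)

# A falsehood can do better than a truth (here, the tautology)...
print(average(lambda h, r, w: h and r and not w))   # 1.0
print(average(lambda h, r, w: True))                # tautology: 1.5
# ...and the stronger of two truths can do worse than the weaker:
print(average(lambda h, r, w: h or not r or w))     # about 1.43
print(average(lambda h, r, w: h or not r))          # 1.5
```

The last two lines are the counterexample of Table 3 to the value of
content for truths.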
Let's then consider other measures, like the *sum*
function - the sum of the distances of worlds in the range of a
proposition from the actual world (Table 4).
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***sum*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 |
| \(h \amp r\) | true | 1 |
| \(h \amp r \amp{\sim}w\) | false | 1 |
| \(h\) | true | 4 |
| \(h \amp{\sim}r\) | false | 3 |
| \({\sim}h\) | false | 8 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2 |
| \({\sim}h \amp{\sim}r\) | false | 5 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3 |
Table 4. The *sum* function.
The *sum* function is an interesting measure in its own right.
While, like *average*, it is sensitive to the distances of
all worlds in a proposition from the actual world, it is not plausible
as a measure of distance from the truth, and indeed no one has
proposed it as such a measure. What *sum* does measure is a
certain kind of distance-weighted *logical weakness*. In
general the weaker a proposition is, the larger its *sum*
value. But adding worlds far from the actual world makes the
*sum* value larger than adding worlds closer to it. This
guarantees, for example, that of two truths the *sum* of the
logically weaker is always greater than the *sum* of the
stronger. Thus *sum* might play a role in capturing the value
of content for truths. But it also delivers the implausible value of
content for falsehoods. If you think that there is anything to the
likeness program it is hardly plausible that the falsehood \({\sim}h
\amp{\sim}r \amp{\sim}w\) is closer to the truth than its consequence
\({\sim}h\). Niiniluoto argues that *sum* is a good
likeness-based candidate for measuring Hilpinen's
"information factor". It is obviously much more sensitive
than is *max* to the proposition's informativeness about
the location of the truth.
Niiniluoto thus proposes, as a measure of distance from the truth, the
average of this information factor and Hilpinen's truth factor:
*min-sum-average*. Averaging the more sensitive information
factor (*sum*) and the closeness-to-being-true factor
(*min*) yields some interesting results (see Table 5).
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***min-sum-average*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 |
| \(h \amp r\) | true | 0.5 |
| \(h \amp r \amp{\sim}w\) | false | 1 |
| \(h\) | true | 2 |
| \(h \amp{\sim}r\) | false | 2 |
| \({\sim}h\) | false | 4.5 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2 |
| \({\sim}h \amp{\sim}r\) | false | 3.5 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3 |
Table 5. The *min-sum-average*
function.
For example, this measure deems \(h \amp r \amp w\) more truthlike
than \(h \amp r\), and the latter more truthlike than \(h\). And in
general *min-sum-average* delivers the value of content for
truths. For any two truths the *min* factor is the same (0),
and the *sum* factor increases as content decreases.
Furthermore, unlike the symmetric difference measures,
*min-sum-average* doesn't deliver the objectionable value
of contents for falsehoods. For example, \({\sim}h \amp{\sim}r
\amp{\sim}w\) is deemed further from the truth than \({\sim}h\). But
*min-sum-average* is not quite home free, at least from an
intuitive point of view. For example, \({\sim}h \amp{\sim}r
\amp{\sim}w\) is deemed closer to the truth than \({\sim}h
\amp{\sim}r\). This is because what \({\sim}h \amp{\sim}r
\amp{\sim}w\) loses in closeness to the actual world (*min*) it
makes up for by an increase in strength (*sum*).
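The trade-off just described can be made concrete with a short Python
sketch (illustrative only, not part of the original entry) computing
*sum* and *min-sum-average* in the weather framework:

```python
from itertools import product

WORLDS = list(product([True, False], repeat=3))  # truth values of (h, r, w)
ACTUAL = (True, True, True)                      # h & r & w is the truth

def distance(u, v):
    """City-block distance between two worlds."""
    return sum(1 for x, y in zip(u, v) if x != y)

def dists(sentence):
    return [distance(w, ACTUAL) for w in WORLDS if sentence(*w)]

def sum_measure(sentence):
    """Total distance: a distance-weighted measure of logical weakness."""
    return sum(dists(sentence))

def min_sum_average(sentence):
    """Niiniluoto's proposal: mean of the min and sum factors."""
    ds = dists(sentence)
    return (min(ds) + sum(ds)) / 2

# The anomaly in the text: the stronger falsehood ~h & ~r & ~w is
# deemed closer to the truth than its consequence ~h & ~r.
print(min_sum_average(lambda h, r, w: not h and not r and not w))  # 3.0
print(min_sum_average(lambda h, r, w: not h and not r))            # 3.5
# And sum alone, read as distance from truth, would rank ~h & ~r & ~w
# above its consequence ~h (value of content for falsehoods):
print(sum_measure(lambda h, r, w: not h and not r and not w))      # 3
print(sum_measure(lambda h, r, w: not h))                          # 8
```

The loss in the *min* factor (3 rather than 2) is outweighed here by
the gain in strength registered by the *sum* factor (3 rather than 5).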
In deciding how to proceed here we confront a methodological problem.
The methodology favored by Tichy is very much bottom-up. For
the purposes of deciding between rival accounts it takes the intuitive
data very seriously. Popper and Popperians like Miller favor
a more top-down approach. They are suspicious of folk intuitions, and
sometimes appear to be in the business of constructing a new concept
rather than explicating an existing one. They place enormous weight on
certain plausible general principles, largely those that fit in with
other principles of their overall theory of science: for example, the
principle that strength is a virtue and that the stronger of two true
theories (and maybe even of two false theories) is the closer to the
truth. A third approach, one which lies between these two extremes, is
that of reflective equilibrium. This recognizes the claims of both
intuitive judgements on low-level cases, and plausible high-level
principles, and enjoins us to bring principle and judgement into
equilibrium, possibly by tinkering with both. Neither intuitive
low-level judgements nor plausible high-level principles are given
advance priority. The protagonist in the truthlikeness debate who has
argued most consistently for this approach is Niiniluoto.
How might reflective equilibrium be employed to help resolve the
current dispute? Consider a different space of possibilities,
generated by a single magnitude like the number of the planets
\((N).\) Suppose that \(N\) is in fact 8 and that the further \(n\) is
from 8, the further the proposition that \(N=n\) is from the Truth.
Consider the three sets of propositions in Table 6. In the left-hand
column we have a sequence of false propositions which, intuitively,
decrease in truthlikeness while increasing in strength. In the middle
column we have a sequence of corresponding true propositions, in each
case the strongest true consequence of its false counterpart on the
left (Popper's "truth content"). Again members of
this sequence steadily increase in strength. Finally on the right we
have another column of falsehoods. These are also steadily increasing
in strength, and like the left-hand falsehoods, seem (intuitively) to
be decreasing in truthlikeness as well.
| | | |
| --- | --- | --- |
| **Falsehood (1)** | **Strongest True Consequence** | **Falsehood (2)** |
| \(10 \le N \le 20\) | \(N=8\) or \(10 \le N \le 20\) | \(N=9\) or \(10 \le N \le 20\) |
| \(11 \le N \le 20\) | \(N=8\) or \(11 \le N \le 20\) | \(N=9\) or \(11 \le N \le 20\) |
| ...... | ...... | ...... |
| \(19 \le N \le 20\) | \(N=8\) or \(19 \le N \le 20\) | \(N=9\) or \(19 \le N \le 20\) |
| \(N= 20\) | \(N=8\) or \(N= 20\) | \(N=9\) or \(N= 20\) |
Table 6.
Judgements about the closeness of the true propositions in the center
column to the truth may be less intuitively clear than are judgments
about their left-hand counterparts. However, it would seem highly
incongruous to judge the truths in Table 6 to be steadily increasing
in truthlikeness, while the falsehoods to the left and the right,
each only marginally different from them in overall likeness to the
truth, steadily decrease in truthlikeness. This suggests that all
three are sequences of steadily increasing strength combined with
steadily *decreasing* truthlikeness. And if that's right,
it might be enough to overturn Popper's principle that amongst
true theories strength and truthlikeness must covary (even while
granting that this is not so for falsehoods).
If this argument is sound, it removes an objection to averaging
distances, but it does not settle the issue in its favor, for there
may still be other more plausible counterexamples to averaging that we
have not considered.
Schurz and Weingartner argue that this extension problem is the main
defect of the likeness approach:
>
> the problem of extending truthlikeness from possible worlds to
> propositions is intuitively underdetermined. Even if we are granted an
> ordering or a measure of distance on worlds, there are many very
> different ways of extending that to propositional distance, and
> apparently no objective way to decide between them. (Schurz and
> Weingartner 2010, 423)
>
One way of answering this objection head on is to identify principles
that, given a distance function on worlds, constrain the distances
between worlds and sets of worlds, principles perhaps powerful enough
to identify a unique extension.
Apart from the extension problem, two other issues affect the likeness
approach. The first is how to apply it beyond simple propositional
examples such as the ones considered above (Popper's content
approach, whatever else its shortcomings, can be applied in principle
to theories expressible in any language, no matter how sophisticated).
We discuss this in the supplement on
Extending the likeness approach to first-order and higher-order frameworks.
The second has to do with the fact that assessments of relative
likeness are sensitive to how the framework underlying the logical
space is defined. This "framework dependence" issue is
discussed in the next section.
#### 1.4.3 The framework dependence of likeness
The single most powerful and influential argument against the whole
likeness approach is the charge that it is "language
dependent" or "framework dependent" (Miller 1974a,
1974b, 1976, and most recently defended, vigorously as usual, in his
2006). Early formulations of the likeness approach (Tichy 1974,
1976, Niiniluoto 1976) proceeded in terms of syntactic surrogates for
their semantic correlates - sentences for propositions,
predicates for properties, constituents for partitions of the logical
space, and the like. The question naturally arises, then, whether we
obtain the same measures if all the syntactic items are translated
into an essentially equivalent language - one capable of
expressing the same propositions and properties with a different set
of primitive predicates. Newton's theory can be formulated with
a variety of different primitive concepts, but these formulations are
typically taken to be equivalent. If the degree of truthlikeness of
Newton's theory were to vary from one such formulation to
another, then while such a concept might still have useful
applications, it would hardly help to vindicate realism.
Take our simple weather-framework above. This traffics in three
primitives - *hot*, *rainy*, and *windy*.
Suppose, however, that we define the following two new weather
conditions:
>
>
> *minnesotan* \( =\_{df}\) *hot* if and only if
> *rainy*
>
>
>
> *arizonan* \( =\_{df}\) *hot* if and only if
> *windy*
>
>
>
Now it appears as though we can describe the same sets of weather
states in the new \(h\)-\(m\)-\(a\)-ese language based on the above
conditions. Table 7 shows the translations of four representative
theories between the two languages.
| | *\(h\)-\(r\)-\(w\)-ese* | *\(h\)-\(m\)-\(a\)-ese* |
| --- | --- | --- |
| \(T\) | \(h \amp r \amp w\) | \(h \amp m \amp a\) |
| \(A\) | \({\sim}h \amp r \amp w\) | \({\sim}h \amp{\sim}m \amp{\sim}a\) |
| \(B\) | \({\sim}h \amp{\sim}r \amp w\) | \({\sim}h \amp m \amp{\sim}a\) |
| \(C\) | \({\sim}h \amp{\sim}r \amp{\sim}w\) | \({\sim}h \amp m \amp a\) |
Table 7.
If \(T\) is the truth about the weather then theory \(A\), in
\(h\)-\(r\)-\(w\)-ese, seems to make just one error concerning the
original weather states, while \(B\) makes two and \(C\) makes three.
However, if we express these theories in \(h\)-\(m\)-\(a\)-ese,
then this is reversed: \(A\) appears to make three errors, \(B\)
still makes two, and \(C\) makes only one. But that means
the account makes truthlikeness, unlike truth, radically
language-relative.
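The reversal can be checked mechanically. Below is a minimal sketch in which weather states are encoded as bit-triples and the number of errors is just the Hamming distance to the truth; the encoding and function names are our own illustration, not part of the entry:

```python
# Weather states as (h, r, w) bit-triples; "errors" = number of
# conjuncts a theory gets wrong = Hamming distance to the truth.
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def to_hma(state):
    """Translate an (h, r, w) state into (h, m, a), where
    m = (h iff r) and a = (h iff w)."""
    h, r, w = state
    return (h, int(h == r), int(h == w))

T = (1, 1, 1)   # h & r & w   (the truth)
A = (0, 1, 1)   # ~h & r & w
B = (0, 0, 1)   # ~h & ~r & w
C = (0, 0, 0)   # ~h & ~r & ~w

for name, s in [("A", A), ("B", B), ("C", C)]:
    print(name, hamming(T, s), hamming(to_hma(T), to_hma(s)))
# A makes 1 error in h-r-w-ese but 3 in h-m-a-ese; B makes 2 in both;
# C makes 3 in h-r-w-ese but only 1 in h-m-a-ese.
```

The translation function makes vivid that the reversal is forced purely by the choice of primitives: nothing about the states themselves changes.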
There are two live responses to this criticism. But before detailing
them, note a dead one: the likeness theorist cannot object that
\(h\)-\(m\)-\(a\) is somehow logically inferior to \(h\)-\(r\)-\(w\),
on the grounds that the primitives of the latter are essentially
"biconditional" whereas the primitives of the former are
not. This is because there is a perfect symmetry between the two sets
of primitives. Starting within \(h\)-\(m\)-\(a\)-ese we can arrive at
the original primitives by exactly analogous definitions:
>
> *rainy* \( =\_{df}\) *hot* if and only if
> *minnesotan*
>
>
> *windy* \( =\_{df}\) *hot* if and only if
> *arizonan*
>
Thus if we are going to object to \(h\)-\(m\)-\(a\)-ese it will have
to be on other than purely logical grounds.
Firstly, then, the likeness theorist could maintain that certain
predicates (presumably "hot", "rainy" and
"windy") are primitive in some absolute, realist, sense.
Such predicates "carve reality at the joints" whereas
others (like "minnesotan" and "arizonan") are
gerrymandered affairs. With the demise of predicate nominalism as a
viable account of properties and relations this approach is not as
unattractive as it might have seemed in the middle of the last
century. Realism about universals is certainly on the rise. While this
version of realism presupposes a sparse theory of properties -
that is to say, it is not the case that to every definable predicate
there corresponds a genuine universal - such theories have been
championed both by those doing traditional a priori metaphysics of
properties (e.g. Bealer 1982) as well as those who favor a more
empiricist, scientifically informed approach (e.g. Armstrong 1978,
Tooley 1977). According to Armstrong, for example, which predicates
pick out genuine universals is a matter for developed science. The
primitive predicates of our best fundamental physical theory will give
us our best guess at what the genuine universals in nature are. They
might be predicates like electron or mass, or more likely something
even more abstruse and remote from the phenomena - like the
primitives of String Theory.
One apparently cogent objection to this realist solution is that it
would render the task of empirically estimating degree of
truthlikeness completely hopeless. If we knew a priori which
primitives should be used in the computation of distances between
theories, estimating truthlikeness would be difficult, but not
impossible. For example, we might compute the distance of a theory
from the various possibilities for the truth, and then make a weighted
average, weighting each possible true theory by its probability on the
evidence. That would be the credence-mean estimate of truthlikeness
(see §2). However, if we don't even know which features
should count towards the computation of similarities and distances
then it appears that we cannot get off first base.
To see this consider our simple weather frameworks. Suppose that all I
learn is that it is rainy. Do I thereby have some grounds for thinking
\(A\) is closer to the truth than \(B\)? I would if I also knew that
\(h\)-\(r\)-\(w\)-ese is the language for calculating distances. For
then, whatever the truth is, \(A\) makes one fewer mistake than \(B\)
makes. \(A\) gets it right on the rain factor, while \(B\)
doesn't, and they must score the same on the other two factors
whatever the truth of the matter. But if we switch to
\(h\)-\(m\)-\(a\)-ese then \(A\)'s epistemic superiority is no
longer guaranteed. If, for example, \(T\) is the truth then \(B\) will
be closer to the truth than \(A\). That's because in the
\(h\)-\(m\)-\(a\) framework raininess as such doesn't count in
favor or against the truthlikeness of a proposition.
This objection would fail if there were empirical indicators not just
of which atomic states obtain, but also of which are the genuine ones,
the ones that really carve reality at the joints. Obviously the
framework would have to contain more than just \(h, m\) and \(a\). It
would have to contain resources for describing the states that
indicate whether these were genuine universals. Maybe whether they
enter into genuine causal relations will be crucial, for example. Once
we can distribute probabilities over the candidates for the real
universals, then we can use those probabilities to weight the various
possible distances which a hypothesis might be from any given
theory.
The second live response is both more modest and more radical. It is
more modest in that it is not hostage to the objective priority of a
particular conceptual scheme, whether that priority is accessed *a
priori* or *a posteriori*. It is more radical in that
it denies a premise of the invariance argument that at first blush is
apparently obvious. It denies the equivalence of the two conceptual
schemes. It denies that \(h \amp r \amp w\), for example, expresses
the very same proposition as \(h \amp m \amp a\) expresses. If we deny
translatability then we can grant the invariance principle, and grant
the judgements of distance in both cases, but remain untroubled. There
is no contradiction (Tichy 1978).
At first blush this response seems somewhat desperate. Haven't
the respective conditions been *defined* in such a way that
they are simple equivalents by fiat? That would, of course, be the
case if \(m\) and \(a\) had been introduced as defined terms into
\(h\)-\(r\)-\(w\). But if that were the intention then the likeness
theorist could retort that the calculation of distances should proceed
in terms of the primitives, not the introduced terms. However, that is
not the only way the argument can be read. We are asked to contemplate
two partially overlapping sequences of conditions, and two spaces of
possibilities generated by those two sequences. We can thus think of
each possibility as a point in a simple three dimensional space. These
points are ordered triples of 0s and 1s, the \(n\)th entry being 1 if
the \(n\)th condition is satisfied and 0 if it isn't. Thinking
of possibilities in this way, we already have rudimentary geometrical
features generated simply by the selection of generating conditions.
Points are adjacent if they differ on only one dimension. A path is a
sequence of adjacent points. A point \(q\) is between two points \(p\)
and \(r\) if \(q\) lies on a shortest path from \(p\) to \(r\). A
region of possibility space is convex if it is closed under the
betweenness relation - anything between two points in the region
is also in the region (Oddie 1987, Goldstick and O'Neill
1988).
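These geometric notions are easy to make concrete. Here is a minimal sketch over bit-triples (our own rendering of the definitions just given; on a hypercube, \(q\) lies on a shortest path from \(p\) to \(r\) exactly when the distances add up):

```python
from itertools import product

def hamming(x, y):
    # Number of dimensions on which two points differ.
    return sum(a != b for a, b in zip(x, y))

def adjacent(p, q):
    # Adjacent points differ on exactly one dimension.
    return hamming(p, q) == 1

def between(q, p, r):
    # q is on a shortest path from p to r iff d(p,q) + d(q,r) = d(p,r).
    return hamming(p, q) + hamming(q, r) == hamming(p, r)

def convex(region):
    # Convex: anything between two points of the region is in the region.
    space = list(product([0, 1], repeat=3))
    return all(q in region
               for p in region for r in region
               for q in space if between(q, p, r))

# The proposition h (first coordinate true) carves out a convex region:
h_region = {s for s in product([0, 1], repeat=3) if s[0] == 1}
print(convex(h_region))
```

As the entry notes below (following Burger and Heidema 1994), the convex regions in such a space correspond to the atomic generating conditions and their conjunctions.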
Evidently we have two spaces of possibilities, S1 and S2, and the
question now arises whether a sentence interpreted over one of these
spaces expresses the very same thing as any sentence interpreted over
the other. Does \(h \amp r \amp w\) express the same thing as \(h \amp
m \amp a\)? \(h \amp r \amp w\) expresses (the singleton of) \(u\_1\)
(which is the entity \(\langle 1,1,1\rangle\) in S1 or \(\langle
1,1,1\rangle\_{S1})\) and \(h \amp m \amp a\) expresses \(v\_1\) (the
entity \(\langle 1,1,1\rangle\_{S2}). {\sim}h \amp r \amp w\) expresses
\(u\_2 (\langle 0,1,1\rangle\_{S1})\), a point adjacent to that
expressed by \(h \amp r \amp w\). However \({\sim}h \amp{\sim}m
\amp{\sim}a\) expresses \(v\_8 (\langle 0,0,0\rangle\_{S2})\), which is
not adjacent to \(v\_1 (\langle 1,1,1\rangle\_{S2})\). So now we can
construct a simple proof that the two sentences do not express the
same thing.
* \(u\_1\) is adjacent to \(u\_2\).
* \(v\_1\) is not adjacent to \(v\_8\).
* *Therefore*, either \(u\_1\) is not identical to \(v\_1\), or
\(u\_2\) is not identical to \(v\_8\).
* *Therefore*, either \(h \amp r \amp w\) and \(h \amp m \amp
a\) do not express the same proposition, or \({\sim}h \amp r \amp w\)
and \({\sim}h \amp{\sim}m \amp{\sim}a\) do not express the same
proposition.
Thus at least one of the two required intertranslatability claims
fails, and \(h\)-\(r\)-\(w\)-ese is not intertranslatable with
\(h\)-\(m\)-\(a\)-ese. The important point here is that a space of
possibilities already comes with a structure and the points in such a
space cannot be individuated without reference to the rest of the space
and its structure. The identity of a possibility is bound up with its
geometrical relations to other possibilities. Different relations,
different possibilities. This kind of response has also been endorsed
in the very different truth-maker proposal put forward in Fine (2021,
2022).
This kind of rebuttal to the Miller argument would have radical
implications for the comparability of actual theories that appear to
be constructed from quite different sets of primitives. Classical
mechanics can be formulated using mass and position as basic, or it
can be formulated using mass and momentum. The classical concepts of
velocity and of mass are different from their relativistic
counterparts, even if they were "intertranslatable" in the
way that the concepts of \(h\)-\(r\)-\(w\)-ese are intertranslatable
with \(h\)-\(m\)-\(a\)-ese.
This idea meshes well with recent work on conceptual spaces in
Gardenfors (2000). Gardenfors is concerned both with the
semantics and the nature of genuine properties, and his bold and
simple hypothesis is that properties carve out convex regions of an
\(n\)-dimensional quality space. He supports this hypothesis with an
impressive array of logical, linguistic and empirical data. (Looking
back at our little spaces above it is not hard to see that the convex
regions are those that correspond to the generating (or atomic)
conditions and conjunctions of those. See Burger and Heidema 1994.)
While Gardenfors is dealing with properties, it is not hard to
see that similar considerations apply to propositions, since
propositions can be regarded as 0-ary properties.
Ultimately, however, this response may seem less than entirely
satisfactory by itself. If the choice of a conceptual space is merely
a matter of taste then we may be forced to embrace a radical kind of
incommensurability. Those who talk \(h\)-\(r\)-\(w\)-ese and
conjecture \({\sim}h \amp r \amp w\) on the basis of the available
evidence will be close to the truth. Those who talk
\(h\)-\(m\)-\(a\)-ese while exposed to the "same"
circumstances would presumably conjecture \({\sim}h \amp{\sim}m
\amp{\sim}a\) on the basis of the "same" evidence (or the
corresponding evidence that they gather). If in fact \(h \amp r \amp
w\) is the truth (in \(h\)-\(r\)-\(w\)-ese) then the \(h\)-\(r\)-\(w\)
weather researchers will be close to the truth. But the
\(h\)-\(m\)-\(a\) researchers will be very far from the truth.
This may not be an explicit contradiction, but it should be worrying
all the same. Realists started out with the ambition of defending a
concept of truthlikeness which would enable them to embrace both
fallibilism and optimism. But what the likeness theorists seem to have
ended up with here is something that suggests a rather unpalatable
incommensurability of competing conceptual frameworks. To avoid this,
the realist will need to affirm that some conceptual frameworks really
are better than others. Some really do "carve reality at the
joints" and others don't. But is that something the
realist should be reluctant to affirm?
## 2. The Epistemological Problem
The quest to nail down a viable concept of truthlikeness is motivated,
at least in part, by fallibilism (§1.1). It is certainly true
that a viable notion of distance from the truth
renders progress in an inquiry through a succession of false
theories at least possible. It is also true that if there is no such
viable notion, then truth can be retained as the goal of inquiry only
at the cost of making partial progress towards it virtually
impossible. But does the mere *possibility* of making progress
towards the truth improve our epistemic lot? Some have argued that it
doesn't (see for example Laudan 1977, Cohen 1980, Newton-Smith
1981). One common argument can be recast in the form of a simple
dilemma. Either we can ascertain the truth, or we can't. If we
can ascertain the truth then we do not need a concept of truthlikeness
- it is an entirely useless addition to our intellectual
repertoire. But if we cannot ascertain the truth, then we cannot
ascertain the degree of truthlikeness of our theories either. So
again, the concept is useless for all practical purposes. (See the
entry on
scientific progress,
especially §2.4.)
Consider the second horn of this dilemma. Is it true that if we
can't know what the (whole) truth of some matter is, we also
cannot ascertain whether or not we are making progress towards it?
Suppose you are interested in the truth about the weather tomorrow.
Suppose you learn (from a highly reliable source) that it will be hot.
Even though you don't know the *whole* truth about the
weather tomorrow, you do know that you have added a truth to your
existing corpus of weather beliefs. One does not need to be able to
ascertain the whole truth to ascertain some less-encompassing truths.
And it seems to follow that you can also know you have made at least
some progress towards the whole weather truth.
This rebuttal is too swift. It presupposes that the addition of a new
truth \(A\) to an existing corpus \(K\) guarantees that your revised
belief \(K\)\*\(A\) constitutes progress towards the truth. But whether
or not \(K\)\*\(A\) is closer to the truth than \(K\) depends not only
on a theory of truthlikeness but also on a theory of belief revision.
(See also the entry on the
logic of belief revision.)
Let's consider a simple case. Suppose \(A\) is some newly
discovered truth, and that \(A\) is compatible with \(K\). Assume that
belief revision in such cases is simply a matter of so-called
*expansion* - i.e., conjoining \(A\) to \(K\). Consider
the case in which \(K\) also happens to be true. Then any account of
truthlikeness that endorses the value of content for truths (e.g.
Niiniluoto's *min-sum-average*) guarantees that
\(K\)\*\(A\) is closer to the truth than \(K\). That's a welcome
result, but it has rather limited application. Typically one
doesn't know that \(K\) is true: so even if one knows that \(A\)
is true, one cannot use this fact to celebrate progress.
The situation is more dire when it comes to falsehoods. If \(K\) is in
fact false then, without the disastrous principle of the value of
content for falsehoods, there is certainly no guarantee that
\(K\)\*\(A\) will constitute a step toward the truth. (And even if one
endorsed the disastrous principle one would hardly be better off. For
then the addition of any proposition, whether true or false, would
constitute an improvement on a false theory.) Consider again the
number of the planets, \(N\). Suppose that the truth is \(N=8\), and
that your existing corpus \(K\) is \((N=7 \vee N=100)\). Suppose you
somehow acquire the truth \(A\): \(N\gt 7\). Then \(K\)\*\(A\) is
\(N=100\), which (on *average*, *min-max-average* and
*min-sum-average*) is further from the truth than \(K\). So
revising a false theory by adding truths by no means guarantees
progress towards the truth.
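The arithmetic of the planets example can be checked directly. A toy sketch, assuming the *average* measure is rendered as the mean of the absolute differences \(|n - \text{truth}|\) over a theory's disjuncts (our simplifying choice of underlying distance):

```python
def avg_distance(theory, truth):
    """Average-measure distance of a disjunctive theory (a set of
    candidate values of N) from the true value of N."""
    return sum(abs(n - truth) for n in theory) / len(theory)

truth = 8
K = {7, 100}        # K: N=7 or N=100
K_star_A = {100}    # K*A after learning the truth A: N > 7

print(avg_distance(K, truth))         # (1 + 92) / 2 = 46.5
print(avg_distance(K_star_A, truth))  # 92.0: further from the truth than K
```

Adding the truth \(N \gt 7\) prunes the disjunct that was doing all the work of keeping \(K\) close to the truth.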
For theories that reject the value of content for truths (e.g., the
*average* proposal) the situation is worse still. Even if \(K\)
happens to be true, there is no guarantee that expanding \(K\) with
truths will constitute progress. Of course, there will be certain
general conditions under which the value of content for truths holds.
For example, on the *average* proposal, the expansion of a true
\(K\) by an atomic truth (or, more generally, by a convex truth) will
guarantee progress toward the truth.
So under very special conditions, one can know that the acquisition of
a truth will enhance the overall truthlikeness of one's
theories, but these conditions are exceptionally narrow and provide at
best a very weak defense against the dilemma. (See Niiniluoto 2011.
For rather more optimistic views of the relation between truthlikeness
and belief revision see Kuipers 2000, Lavalette, Renardel & Zwart
2011, Cevolani, Crupi and Festa 2011, and Cevolani, Festa and Kuipers
2013.)
A different tack is to deny that a concept is useless if there is no
effective empirical decision procedure for ascertaining whether it
applies. For even if we cannot know for sure what the value of a
certain unobservable magnitude is, we might well have better or worse
*estimates* of the value of the magnitude on the evidence. And
that may be all we need for the concept to be of practical value.
Consider, for example, the propensity of a certain coin-tossing set-up
to produce heads - a magnitude which, for the sake of the
example, we assume to be not directly observable. Any non-extreme
value of this magnitude is compatible with any number of heads in a
sequence of \(n\) tosses. So we can never know with certainty what the
actual propensity is, no matter how many tosses we observe. But we can
certainly make rational estimates of the propensity on the basis of
the accumulating evidence. Suppose one's initial state of
ignorance of the propensity is represented by an even distribution of
credences over the space of possibilities for the propensity (i.e.,
the unit interval). Using Bayes' theorem and the Principal Principle,
after a fairly small number of tosses we can become quite confident
that the propensity lies in a small interval around the observed
relative frequency. Our *best estimate* of the value of the
magnitude is its *expected value* on the evidence.
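This estimation story can be made concrete with a standard conjugate-prior calculation. A minimal sketch assuming the uniform prior over the propensity is the Beta(1, 1) distribution, so that after \(k\) heads in \(n\) tosses the posterior is Beta(1 + k, 1 + n - k) with mean (1 + k)/(2 + n):

```python
def posterior_mean(heads, n):
    """Expected value of the heads-propensity after observing
    `heads` heads in `n` tosses, starting from a uniform Beta(1,1) prior."""
    return (1 + heads) / (2 + n)

print(posterior_mean(0, 0))     # 0.5: the prior expectation
print(posterior_mean(60, 100))  # pulled toward the observed frequency 0.6
```

As the evidence accumulates, the posterior mean converges on the observed relative frequency, which is exactly the behavior the passage describes.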
Similarly, suppose we don't and perhaps cannot know which
constituent is in fact true. But suppose that we do have a good
measure of distance between constituents (or the elements of some
salient partition of the space) \(C\_i\) and we have selected the right
extension function. So we have a measure \(TL(A\mid C\_i)\) of the
truthlikeness of a proposition \(A\) given that constituent \(C\_i\) is
true. Provided we also have a measure of epistemic probability \(P\)
(where \(P(C\_i\mid e)\) is the degree of rational credence in \(C\_i\)
given evidence \(e)\) we also have a measure of the *expected*
degree of truthlikeness of \(A\) on the evidence (call this
\(\mathbf{E}TL(A\mid e))\) which we can identify with the best
epistemic estimate of truthlikeness. (Niiniluoto, who first explored
this concept in his 1977, calls the epistemic estimate of degree of
truthlikeness on the evidence, or expected degree of truthlikeness,
*verisimilitude*. Since *verisimilitude* is typically
taken to be a synonym for *truthlikeness,* we will not follow
him in this, and will stick instead with *expected
truthlikeness* for the epistemic notion. See also Maher
(1993).)
\[ \mathbf{E}TL(A\mid e) = \sum\_i TL(A\mid C\_i)\times P(C\_i \mid e). \]
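Given the two ingredients, a truthlikeness measure and a credence function over the constituents, the expectation is a straightforward weighted sum. A toy sketch with a three-constituent partition; the particular numbers are illustrative assumptions of ours:

```python
def expected_tl(tl_given, credence):
    """Expected truthlikeness of A on evidence e:
    sum over constituents C_i of TL(A | C_i) * P(C_i | e)."""
    return sum(tl_given[c] * credence[c] for c in credence)

# Assumed values for TL(A | C_i) and P(C_i | e):
tl_of_A = {"C1": 1.0, "C2": 0.5, "C3": 0.0}
cred    = {"C1": 0.6, "C2": 0.3, "C3": 0.1}

print(expected_tl(tl_of_A, cred))  # 0.6*1.0 + 0.3*0.5 + 0.1*0.0 = 0.75
```

Unlike the objective degree of truthlikeness, every quantity on the right-hand side is epistemically accessible, which is what makes the expected value usable as an estimate.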
Clearly, the expected degree of truthlikeness of a proposition *is*
epistemically accessible, and it can serve as our best empirical
estimate of the objective degree of truthlikeness. Progress occurs in
an inquiry when actual truthlikeness increases. And apparent progress
occurs when the expected degree of truthlikeness increases. (See the
entry on
scientific progress,
§2.5.) This notion of expected truthlikeness is comparable to,
but sharply different from, that of epistemic probability: the
supplement on
Expected Truthlikeness
discusses some instructive differences between the two concepts.
With this proposal, Niiniluoto also made a connection with the
application of decision theory to epistemology. Decision theory is an
account of what it is rational to do in light of one's beliefs
and desires. One's goal, it is assumed, is to maximize utility.
But given that one does not have perfect information about the state
of the world, one cannot know for sure how to accomplish that. Given
uncertainty, the rule to maximize utility or value cannot be applied
in normal circumstances. So under conditions of uncertainty, what it
is rational to do, according to the theory, is to maximize
*subjective expected utility*. Starting with Hempel's
classic 1960 essay, epistemologists conjectured that
decision-theoretic tools might be applied to the problem of theory
*acceptance* - which hypothesis it is rational to accept
on the basis of the total available evidence to hand. But, as Hempel
argued, the values or utilities involved in a decision to accept a
hypothesis cannot be simply regular practical values. These are
typically thought to be generated by one's desires for various
states of affairs to obtain in the world. But the fact that one would
very much like a certain favored hypothesis to be true does not
increase the cognitive value of accepting that hypothesis.
>
> This much is clear: the utilities should reflect the value or disvalue
> which the different outcomes have from the point of view of pure
> scientific research rather than the practical advantages or
> disadvantages that might result from the application of an accepted
> hypothesis, according as the latter is true or false. Let me refer to
> the kind of utilities thus vaguely characterized as purely scientific,
> or epistemic, utilities. (Hempel 1960, 465)
>
If we had a decent theory of *epistemic utility* (also known as
*cognitive utility* or *cognitive value*), perhaps what
hypotheses one ought to accept, or what experiments one ought to
perform, or how one ought to revise one's corpus of belief in
the light of new information, could be determined by the rule:
maximize expected epistemic utility (or maximize expected cognitive
value). Thus decision-theoretic epistemology was born.
Hempel went on to ask what epistemic utilities are implied in the
standard conception of scientific inquiry -
"...construing the proverbial 'pursuit of
truth' in science as aimed at the establishment of a maximal
system of true statements..." (Hempel 1960, 465).
Already we have here the germ of the idea central to the truthlikeness
program: that the goal of an inquiry is to end up accepting (or
"establishing") the theory that yields the whole truth of
some matter. It is interesting that, around the same time that Popper
was trying to articulate a content-based account of truthlikeness,
Hempel was attempting to characterize partial fulfillment of the goal
(that is, of maximally contentful truth) in terms of some combination
of truth and content. These decision-theoretic considerations lead
naturally to a brief consideration of the axiological problem of
truthlikeness.
## 3. The Axiological Problem
Our interest in the concept of truthlikeness seems grounded in the
value of highly truthlike theories. And that, in turn, seems grounded
in the value of truth.
Let's start with the putative value of truth. Truth is not, of
course, a good-making property of the objects of belief states. The
proposition \(h \amp r \amp w\) is not a better *proposition*
when it happens to be hot, rainy and windy than when it happens to be
cold, dry and still. Rather, the cognitive state of *believing*
\(h \amp r \amp w\) is often deemed to be a good state to be in if the
proposition is true, not good if it is false. So the state of
*believing truly* is better than the state of *believing
falsely*.
At least some, perhaps most, of the value of believing truly can be
accounted for instrumentally. Desires mesh with beliefs to produce
actions that will best achieve what is desired. True beliefs will
generally do a better job of this than false beliefs. If you are
thirsty and you believe the glass in front of you contains safe
drinking water rather than lethal poison then you will be motivated to
drink. And if you do drink, you will be better off if the belief you
acted on was true (you quench your thirst) rather than false (you end
up dead).
We can do more with decision theory utilizing purely practical or
non-cognitive values. For example, there is a well-known,
decision-theoretic, *value-of-learning theorem*, the
Ramsey-Good theorem, which partially vindicates the value of gathering
new information, and it does so in terms of "practical"
values, without assuming any purely cognitive values. (Good 1967.)
Suppose you have a choice to make, and you can either choose now, or
choose after doing some experiment. Suppose further that you are
rational (you always choose by expected value) and that the experiment
is cost-free. It follows that performing the experiment and then
choosing always has at least as much expected value as choosing
without further ado. Further, doing the experiment has higher expected
value if one possible outcome of the experiment would alter the
relative values of some of your options. So, you should do the experiment
just in case the outcome of the experiment could make a difference to
what you choose to do. Of course, the expected gain of doing the
experiment has to be worth the expected cost. Not all information is
worth pursuing when you have limited time and resources.
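The core of the Ramsey-Good theorem can be illustrated with a toy two-state, two-act decision problem; the payoffs and credences below are our own illustrative assumptions:

```python
# Two states with prior credences, and two acts with state-dependent payoffs.
prior = {"S1": 0.5, "S2": 0.5}
payoff = {"act_a": {"S1": 10, "S2": 0},
          "act_b": {"S1": 0, "S2": 10}}

def exp_value(act, cred):
    return sum(payoff[act][s] * cred[s] for s in cred)

# Choosing now: take the act with the highest prior expected value.
value_now = max(exp_value(a, prior) for a in payoff)

# A cost-free experiment that reveals the state: in each state you
# condition on it and pick the act that is best there, so the prospect
# is the prior-weighted value of the best act in each state.
value_after = sum(prior[s] * max(payoff[a][s] for a in payoff)
                  for s in prior)

print(value_now, value_after)  # 5.0 vs 10.0
```

Here the experiment is maximally valuable because each possible outcome would change which act you choose; if no outcome could change your choice, the two values would coincide, which is exactly the theorem's condition for the experiment being worth performing.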
David Miller notes the following serious limitation of the Ramsey-Good
value-of-learning theorem:
>
> The argument as it stands simply doesn't impinge on the question
> of why evidence is collected in those innumerable cases in theoretical
> science in which no decisions will be, or anyway are envisaged to be,
> affected. (Miller 1994, 141).
>
There are some things that we think are worth knowing about, the
knowledge of which would not change the expected value of any action
that one might be contemplating performing. We spent billions of
dollars conducting an elaborate experiment to determine whether the
Higgs boson exists, and the results were promising enough that Higgs
was awarded the Nobel Prize for his prediction. The discovery may also
yield practical benefits, but it is the *cognitive* change it
induces that makes it worthwhile. It is valuable simply for what it
did to our credal state - not just our credence in the existence
of the Higgs, but our overall credal state. We may of course be wrong
about the Higgs - the results might be misleading - but
from our new epistemic viewpoint it certainly appears to us that we
have made cognitive gains. We have a little bit more evidence for the
truth, or at least the truthlikeness, of the standard model.
What we need, it seems, is an account of pure cognitive value, one
that embodies the idea that getting at the truth is itself valuable
whatever its practical benefits. As noted this possibility was first
explicitly raised by Hempel, and developed in various different ways
in the 1960s and 1970s by Levi (1967) and Hilpinen (1968), amongst
others. (For recent developments see the entry
epistemic utility arguments for probabilism,
§6.)
We could take the cognitive value of believing a single proposition
\(A\) to be positive when \(A\) is true and negative when \(A\) is
false. But acknowledging that belief comes in degrees, the idea might
be that the stronger one's belief in a truth (or of one's
disbelief in a falsehood), the greater the cognitive value. Let \(V\)
be the characteristic function of answers, both complete and partial,
to the question \(\{C\_1, C\_2 , \ldots ,C\_n , \ldots \}\). That is to
say, where \(A\) is equivalent to some disjunction of complete
answers:
* \(V\_i (A) = 1\) if \(A\) is entailed by the complete answer
\(C\_i\) (i.e., \(A\) is true according to \(C\_i)\);
* \(V\_i (A) = 0\) if the negation of \(A\) is entailed by the
complete answer \(C\_i\) (i.e., \(A\) is false according to
\(C\_i)\).
Then the simple view is that the cognitive value of believing \(A\) to
degree \(P(A)\) (where \(C\_i\) is the complete true answer) is greater
the closer \(P(A)\) is to the actual value of \(V\_i (A)\) - that
is, the smaller \(|V\_i (A)-P(A)|\) is. Variants of this idea have been
endorsed, for example by Horwich (1982, 127-9), and Goldman
(1999), who calls this "veristic value".
\(|V\_i (A)-P(A)|\) is a measure of how far \(P(A)\) is from \(V\_i
(A)\), and \(-|V\_i (A)-P(A)|\) of how close it is. So here is the
simplest linear realization of this desideratum:
\[ Cv^1 \_i (A, P) = - |V\_i (A)-P(A)|. \]
But there are of course many other measures satisfying the basic idea.
For example we have the following quadratic measure:
\[ Cv^2\_i (A, P) = -(V\_i (A)- P(A))^2. \]
Both \(Cv^1\) and \(Cv^2\) reach a maximum when \(P(A)\) is maximal
and \(A\) is true, or \(P(A)\) is minimal and \(A\) is false; and a
minimum when \(P(A)\) is maximal and \(A\) is false, or \(P(A)\) is
minimal and \(A\) is true.
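Both local measures are two-liners; the implementations below are our own direct rendering of the formulas just given:

```python
def cv1(v, p):
    """Linear local cognitive value: -|V_i(A) - P(A)|."""
    return -abs(v - p)

def cv2(v, p):
    """Quadratic local cognitive value: -(V_i(A) - P(A))**2."""
    return -(v - p) ** 2

# Both measures reward moving credence P(A) toward the truth value V_i(A):
assert cv1(1, 0.9) > cv1(1, 0.5) > cv1(1, 0.1)
assert cv2(1, 0.9) > cv2(1, 0.5) > cv2(1, 0.1)
```

The two measures agree on the ordering of credences in a single proposition; they come apart, as the next paragraphs show, when local values are agglomerated into the value of a total credal state.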
These are measures of *local* value - the value of
investing a certain degree of belief in a single answer \(A\) to the
question \(Q\). But we can agglomerate these local values into the
value of a total credal state \(P\). This involves a substantive
assumption: that the value of a credal state is some additive function
of the values of the individual beliefs states it underwrites. A
credal state, relative to inquiry \(Q\), is characterized by its
distribution of credences over the elements of the partition \(Q\). An
*opinionated* credal state is one that assigns credence 1 to
just one of the complete answers. The best credal state to be in,
relative to the inquiry \(Q\), is the opinionated state that assigns
credence 1 to the complete correct answer. This will also assign
credence 1 to every true answer, and 0 to every false answer, whether
partial or complete. In other words, if the correct answer to \(Q\) is
\(C\_i\) then the best credal state to be in is the one identical to
\(V\_i\): for each partial or complete answer \(A\) to \(Q, P(A) = V\_i
(A)\). This credal state is the state of believing the truth, the
whole truth and nothing but the truth about \(Q\). Other total credal
states (whether opinionated or not) should turn out to be less good
than this perfectly accurate credal state. But there will, of course,
have to be more constraints on the accuracy and value of total
cognitive states.
Assuming additivity, the value of the credal state \(P\) can be taken
to be the (weighted) sum of the cognitive values of local belief
states.
>
> \(CV\_i(P) = \sum\_A \lambda\_A Cv\_i(A,P)\) (where \(A\) ranges over all
> the answers, both complete and incomplete, to question \(Q\)), and the
> \(\lambda\)-terms assign a fixed (non-contingent) weight to the
> contribution of each answer \(A\) to overall accuracy.
>
Plugging in either of the above local cognitive value measures, total
cognitive value is maximal for an assignment of maximal probability to
the true complete answer, and falls off as confidence in the correct
answer falls away.
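The agglomeration step can be sketched as follows. This is my own simplified illustration: it restricts the sum to the complete answers of a three-cell partition and assumes unit \(\lambda\)-weights, neither of which the entry mandates.

```python
# Sketch of the agglomerated value CV_i(P) = sum over A of
# lambda_A * Cv_i(A, P), using the quadratic local measure and
# (a simplifying assumption) only the complete answers of the partition.

def cv2(v, p):
    return -(v - p) ** 2

def total_value(true_index, credences, weights=None):
    """Weighted sum of local quadratic values over complete answers."""
    weights = weights or [1.0] * len(credences)
    return sum(w * cv2(1.0 if i == true_index else 0.0, p)
               for i, (w, p) in enumerate(zip(weights, credences)))

# The opinionated state on the true answer (index 0) is optimal...
print(total_value(0, [1.0, 0.0, 0.0]))        # 0.0
# ...and total value falls off as confidence in the truth drains away:
print(total_value(0, [0.5, 0.3, 0.2]) < 0.0)  # True
```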
Since there are many different measures that reflect the basic idea of
the value of true belief, how are we to decide between them? We have
here a familiar problem of underdetermination. Joyce (1998) lays down
a number of desiderata, most of them very plausible, for a measure of
what he calls the *accuracy* of a cognitive state. These
desiderata are satisfied by \(Cv^2\) but not by \(Cv^1\). In fact
all the members of a family of closely related measures, of which
\(Cv^2\) is a member, satisfy Joyce's desiderata:
\[ CV\_i(P) = \sum\_A -\lambda\_A [(V\_i (A) - P(A))^2]. \]
Giving equal weight to all propositions \((\lambda\_A =\) some non-zero
constant \(c\) for all \(A)\), this is equivalent to the
*Brier* measure (see the entry on
epistemic utility arguments for probabilism,
§6). Given that the \(\lambda\)-weightings can vary, there is
considerable flexibility in the family of admissible measures.
Our main interest here is whether any of these so-called *scoring
rules* (rules which score the overall accuracy of a credal state)
might constitute an acceptable measure of the cognitive value of a
credal state.
Absent specific refinements to the \(\lambda\)-weightings, the
quadratic measures seem unsatisfactory as a solution either to the
problem of accuracy or to the problem of cognitive value. Suppose you
are inquiring into the number of the planets and you end up fully
believing that the number of the planets is 9. Given that the correct
answer is 8, your credal state is not perfect. But it is pretty good,
and it is surely a much better credal state to be in than the
opinionated state that sets probability 1 on the number of planets
being 9 billion. It seems rather natural to hold that the cognitive
value of an opinionated credal state is sensitive to the degree of
*truthlikeness* of the complete answer that it fixes on, not
just to its *truth value*. But this is not endorsed by
quadratic measures.
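The complaint can be checked numerically. The sketch below is my own illustration, assuming equal \(\lambda\)-weights: the Brier-style quadratic measure assigns exactly the same score to any opinionated false complete answer, however far from the truth it lies.

```python
# With unit lambda-weights, the quadratic (Brier-style) measure gives
# the SAME score to every opinionated false complete answer: it is
# blind to how far from the truth the chosen answer lies.

def brier(true_index, credences):
    return sum(-((1.0 if i == true_index else 0.0) - p) ** 2
               for i, p in enumerate(credences))

# Partition: the number of planets is 7, 8, 9 or 9 billion; truth = 8.
TRUTH = 1
sure_of_nine    = [0.0, 0.0, 1.0, 0.0]
sure_of_billion = [0.0, 0.0, 0.0, 1.0]
print(brier(TRUTH, sure_of_nine) == brier(TRUTH, sure_of_billion))  # True
```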
Joyce's desiderata build in the thesis that accuracy is
insensitive to the distances of various different complete answers
from the complete true answer. The crucial principle is
*Extensionality*.
>
> Our next constraint stipulates that the "facts" which a
> person's partial beliefs must "fit" are exhausted by
> the truth-values of the propositions believed, and that the only
> aspect of her opinions that matter is their strengths. (Joyce 1998,
> 591)
>
We have already seen that this seems wrong for opinionated states that
confine all probability to a false hypothesis. Being convinced that
the number of planets in the solar system is 9 is better than being
convinced that it is 9 billion. But the same idea holds for truths.
Suppose you believe truly that the number of the planets is either 7,
8, 9 or 9 billion. This can be true in four different ways. If the
number of the planets is 8 then, intuitively, your belief is a little
bit closer to the truth than if it is 9 billion. (This judgment is
endorsed by both the *average* and the *min-sum-average*
proposals.) So even in the case of a true answer to some query, the
value of believing that truth can depend not just on its truth and the
strength of the belief, but on where the truth lies. If this is right
*Extensionality* is misguided.
Joyce considers a variant of this objection: namely, that given
Extensionality Kepler's beliefs about planetary motion would be
judged to be no more accurate than Copernicus's. His response is
that there will always be propositions which distinguish between the
accuracy of these falsehoods:
>
> I am happy to admit that Kepler held more accurate beliefs than
> Copernicus did, but I think the sense in which they were more accurate
> is best captured by an extensional notion. While Extensionality rates
> Kepler and Copernicus as equally inaccurate when their false beliefs
> about the earth's orbit are considered apart from their effects
> on other beliefs, the advantage of Kepler's belief has to do
> with the other opinions it supports. An agent who strongly believes
> that the earth's orbit is elliptical will also strongly believe
> many more truths than a person who believes that it is circular (e.g.,
> that the average distance from the earth to the sun is different in
> different seasons). This means that the overall effect of
> Kepler's inaccurate belief was to improve the extensional
> accuracy of his system of beliefs as a whole. Indeed, this is why his
> theory won the day. I suspect that most intuitions about falsehoods
> being "close to the truth" can be explained in this way,
> and that they therefore pose no real threat to Extensionality. (Joyce
> 1998, 592)
>
Unfortunately, this contention - that considerations of accuracy
over the whole set of answers which the two theories give will sort
them into the right ranking - isn't correct (Oddie
2019).
Wallace and Greaves (2005) assert that the weighted quadratic measures
"can take account of the value of verisimilitude." They
suggest this can be done "by a judicious choice of the
coefficients" (i.e., the \(\lambda\)-values). They go on
to say: "... we simply assign high \(\lambda\_A\) when \(A\)
is a set of 'close' states." (Wallace and Greaves
2005, 628). 'Close' here presumably means 'close to
the actual world'. But whether an answer \(A\) contains complete
answers that are close to the actual world - that is, whether
\(A\) is *truthlike* - is clearly a world-dependent
matter. The coefficients \(\lambda\_A\) were not intended by the
authors to be world-dependent. (But whether the coefficients are
world-dependent or world-independent, the quadratic measures cannot
capture truthlikeness. See Oddie 2019.)
One defect of the quadratic class of measures, combined with
world-independent weights, corresponds to a problem familiar from the
investigation of attempts to capture truthlikeness simply in terms of
classes of true and false consequences, or in terms of truth value and
content alone. Any two complete false answers yield precisely the same
number of true consequences and the same number of false consequences.
The local quadratic measure accords the same value to each true answer
and the same disvalue to each false answer. So if, for example, all
answers are given the same \(\lambda\)-weighting in the global
quadratic measure, any two opinionated false answers will be accorded
the same degree of accuracy by the corresponding global measure.
It may be that there are ways of tinkering with the
\(\lambda\)-terms to avoid the objection, but on the face of it
the notions of local cognitive value embodied in both the linear and
quadratic rules seem to be deficient because they omit considerations
of truthlikeness even while valuing truth. It seems that any adequate
measure of the cognitive value of investing a certain degree of belief
in \(A\) should take into account not just whether \(A\) is true or
false, but where the truth lies in relation to the worlds in \(A\).
Whatever our account of cognitive value, each credal state \(P\)
assigns an expected cognitive value to every credal state \(Q\)
(including \(P\) itself of course).
\[ \mathbf{E}CV\_P(Q) = \sum\_i P(C\_i) CV\_i(Q). \]
Suppose we accept the injunction that one ought to maximize expected
value as calculated from the perspective of one's current credal
state. Let us say that a credal state \(P\) is *self-defeating*
if to maximize expected value from the perspective of \(P\) itself one
would have to adopt some distinct credal state \(Q\), *without the
benefit of any new information*:
>
> \(P\) is *self-defeating* = for some \(Q\) distinct from \(P\),
> \(\mathbf{E}CV\_P(Q) \gt \mathbf{E}CV\_P(P)\).
>
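This contrast between the linear and quadratic rules can be exhibited numerically. The sketch below is my own two-cell illustration: under the linear rule \(Cv^1\) a non-extreme credal state is self-defeating, while under the quadratic rule \(Cv^2\) it is not.

```python
# Numerical propriety check on a two-cell partition {C1, C2}.
# q is the credence that a candidate state Q assigns to C1.

def expected_value(local, p, q):
    """E CV_P(Q) = P(C1)*CV_1(Q) + P(C2)*CV_2(Q), where CV_i sums the
    local scores over both complete answers."""
    cv_at_w1 = local(1, q) + local(0, 1 - q)   # world where C1 is true
    cv_at_w2 = local(0, q) + local(1, 1 - q)   # world where C2 is true
    return p * cv_at_w1 + (1 - p) * cv_at_w2

linear = lambda v, c: -abs(v - c)
quadratic = lambda v, c: -(v - c) ** 2

p = 0.6  # current credence in C1
# Linear rule: leaping to certainty beats staying put, with no new
# information -- so P is self-defeating under Cv^1.
print(expected_value(linear, p, 1.0) > expected_value(linear, p, p))  # True
# Quadratic rule: no alternative q on a fine grid beats q = p.
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda q: expected_value(quadratic, p, q))
print(best)  # 0.6
```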
The requirement of *propriety* requires that no credal state be
self-defeating. No credal state demands that you shift to another
credal state without new information. One feature of the quadratic
family of scoring rules is that its members guarantee
*propriety*, and *propriety* is an extremely attractive
feature of cognitive value. From *propriety* alone, one can
construct arguments for the various elements of probabilism. For
example, *propriety* effectively guarantees that the credence
function must obey the standard axioms governing additivity (Leitgeb
and Pettigrew 2010); *propriety* guarantees a version of the
value-of-learning theorem in purely cognitive terms (Oddie 1997);
*propriety* provides an argument that conditionalization is the
only method of updating compatible with the maximization of expected
cognitive value (Oddie 1997, Greaves and Wallace 2005, and Leitgeb and
Pettigrew 2010). Not only does propriety deliver these goods for
probabilism, but any account of cognitive value that doesn't
obey *propriety* entails that self-defeating probability
distributions should be eschewed *a priori*. (And that might
appear to violate a fundamental commitment of empiricism.)
There is however a powerful argument that any measure of cognitive
value that satisfies *propriety* cannot deliver a fundamental
desideratum on an adequate theory of truthlikeness (or accuracy): the
principle of *proximity*. Every serious account of
truthlikeness satisfies *proximity*. But *proximity* and
*propriety* turn out to be incompatible, given only very weak
assumptions (see Oddie 2019 and, for an extended critique, also
Schoenfield 2022). If this is right then the best account of
*genuine* accuracy cannot provide a vindication of pure
probabilism.
## 4. Conclusion
We are all fallibilists now, but we are not all skeptics, or
antirealists or nihilists. Most of us think inquiries can and do
progress even when they fall short of their goal of locating the truth
of the matter. We think that an inquiry can progress by moving from
one falsehood to another falsehood, or from one imperfect credal state
to another. To reconcile epistemic optimism with realism in the teeth
of the dismal induction we need a viable concept of truthlikeness, a
viable account of the empirical indicators of truthlikeness, and a
viable account of the role of truthlikeness in cognitive value. And
all three accounts must fit together appropriately.
There are a number of approaches to the logical problem of
truthlikeness but, unfortunately, there is as yet little consensus on
what constitutes the best or most promising approach, and prospects
for combining the best features of each approach do not at this stage
seem bright. In fact, recent work on whether the three main approaches
to the logical problem of truthlikeness presented above are compatible
seems to point to a negative answer (see the supplement on
The compatibility of the approaches).
There is, however, much work to be done on both the epistemological
and the axiological aspects of truthlikeness, and it may well be that
new constraints will emerge from those investigations that will help
facilitate a fully adequate solution to the logical problem as
well.
## 1. What is a Truth-maker?
Truth-makers are often introduced in the following terms (Bigelow
1988: 125; Armstrong 1989c: 88):
(*Virtue-T*)
a truth-maker is that in virtue of which something is true
The sense in which a truth-maker "makes" something true is
said to be different from the causal sense in which, e.g., a potter
makes a pot. It is added that the primary notion of a truth-maker is
that of a *minimal* one: a truth-maker for a truth-bearer
*p* none of whose proper parts or constituents are truth-makers
for *p*. (Whether every proposition has a minimal truth-maker
is contentious.) We are also cautioned that even though people often
speak as if there is a unique truth-maker for each truth, it is
usually the case that one truth is made true by many things
(collectively or severally).
Whether (*Virtue-T*) provides a satisfactory elucidation of
truth-making depends on whether we have a clear understanding of
"in virtue of". Some philosophers argue this notion is an
unavoidable primitive (Rodriguez-Pereyra 2006a: 960-1). But
others are wary. Thus, for example, Bigelow finds the locution
"in virtue of" both obscure and, as we will see, avoidable:
"we should not rest content with an explanation which turns on
the notion of *virtue*!" (Bigelow 1988). These kinds of
concern provide a significant motivation for establishing whether it
is possible to elucidate the concept of *truth-making* in terms
of notions that enjoy a life independently of the circle of notions to
which *in virtue of* and *truth-making* belong.
### 1.1 Truth-making as Entailment
One influential proposal for making an elucidatory advance upon
(*Virtue-T*) appeals to the notion of *entailment* (Fox
1987: 189; Bigelow 1988: 125-7):
(*Entailment-T*)
a truth-maker is a thing the very existence of which entails that
something is true.
So *x* is a truth-maker for a truth *p* iff *x*
exists and the representation that says *x* exists entails
the representation that *p*. It is an attraction of this
principle that the key notion it deploys, namely entailment, is
ubiquitous, unavoidable and enjoys a rich life outside
philosophy--both in ordinary life and in scientific and
mathematical practice.
Unfortunately this account threatens to over-generate truth-makers for
necessary truths--at least if the notion of entailment it employs
is classical. It's a feature of this notion that anything
whatsoever entails a necessary truth *p*. It follows as a
special case of this that any claim that a given object exists must
entail *p* too. So it also follows--if
(*Entailment-T*) is granted--that any object makes any
necessary truth true. But this runs counter to the belief that, e.g.,
the leftovers in your refrigerator aren't truth-makers for the
representation that 2+2=4. Even worse, Restall has shown how to
plausibly reason from (*Entailment-T*) to "truth-maker
monism": the doctrine that every truth-maker makes every truth
true (whether necessary or contingent). Every claim of the form
*p* ∨ ~*p* is a necessary truth. So every existing
thing *s* is a truth-maker for each instance of this form (see
preceding paragraph). Now let *p* be some arbitrary truth
(grass is green) and *s* any truth-maker for *p* ∨
~*p* (a particular ice floe in the Antarctic ocean). Now it is
intuitively plausible that something makes a disjunction true
*either* by making one disjunct true *or* by making the
other disjunct true (Mulligan, Simons, & Smith 1984: 316). This is
what Restall calls "the disjunction thesis". It follows
from this thesis that either *s* makes *p* true or
*s* makes ~*p* true. Given that *p* is true,
neither *s* nor anything else makes it true that ~*p*.
So *s* (the ice floe) must make it true that *p* (grass
is green)! But since *s* and *p* were chosen arbitrarily
it follows that all truth-makers are on a par, making true every truth
(Restall 1996: 333-4).
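The over-generation problem is easy to exhibit in a toy possible-worlds model. The model below is my own illustration, not the entry's formalism: propositions are sets of worlds, classical entailment is the subset relation, and "*x* exists" is the set of worlds whose inventory contains *x*.

```python
# Toy possible-worlds model of (Entailment-T)'s over-generation problem:
# every existence claim entails every necessary truth, so by
# (Entailment-T) every object makes every necessary truth true.

WORLDS = {'w1', 'w2', 'w3'}

def entails(a, b):
    return a <= b  # classical entailment modelled as subset inclusion

def exists(obj, inventory):
    """The proposition that obj exists: the worlds containing obj."""
    return {w for w in WORLDS if obj in inventory[w]}

inventory = {
    'w1': {'leftovers'},
    'w2': {'leftovers', 'ice_floe'},
    'w3': set(),
}
necessary_truth = set(WORLDS)  # e.g. that 2+2=4: true at every world

# The leftovers' existing entails the necessary truth, so (Entailment-T)
# counts the leftovers in your refrigerator as a truth-maker for 2+2=4.
print(entails(exists('leftovers', inventory), necessary_truth))  # True
```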
The unacceptability of these results indicates that insofar as we have
an intuitive grip upon the concept of a truth-maker it is constrained
by the requirement that a truth-maker for a truth must be relevant to
or about what it represents as being the case. For example, the truths
of pure arithmetic are not about what's at the back of your
refrigerator; what's there isn't relevant to their being
true. So we're constrained to judge that what's there
cannot be truth-makers for them. This suggests that operating at the
back of our minds when we issue these snap judgements there must be
something like the principle:
(*Relevance*)
what makes something true must--*in some
sense*--be what it is "about".
Of course the notions of "about" and
"relevance" are notoriously difficult to pin down (Goodman
1961). And a speaker can know something is true without knowing
everything about what makes it true. Often it will require empirical
research to settle what makes a statement true. Moreover, what is
determined *a posteriori* to be a truth-maker may exhibit a
complexity quite different from that of the statement it makes true
(Mulligan, Simons, & Smith 1984: 299). Nevertheless, it is clear
that unless (*Entailment-T*) is constrained in some way it will
generate truth-makers that are unwanted because their presence
conflicts with (*Relevance*).
One way to restore accord between them would be to abandon the
Disjunction Thesis that together with (*Entailment-T*) led us
down the path to Truth-maker Monism. Indeed isn't the
Disjunction Thesis dubious anyway? Consider examples involving the
open future. Can't we imagine a situation arising that makes it
true that one or other of the horses competing in a race will win, but
which neither makes it true that one of the horses in particular will
win nor makes it true that another will (Read 2000; Restall 2009)? For
further discussion of issues surrounding truthmaking entailment, the
Disjunctive Thesis and the Conjunctive Thesis, i.e. the principle that
a truthmaker for a conjunction is a truthmaker for its conjuncts, see
Rodriguez-Pereyra 2006a, 2009; Jago 2009; Lopez de Sa 2009; and
Talasiewicz et al. 2012.
But even if the Disjunction Thesis is given up still this leaves in
place the embarrassing consequence of defining what it is to be a
truth-maker in terms of classical entailment, *viz.* that
*any* existing thing turns out to be a truth-maker for
*any* necessary truth. Some more radical overhaul of
(*Entailment-T*) is needed to avoid over-generation. One
possibility is to redefine what is to be a truth-maker in terms of a
more restrictive notion of "relevant entailment" (in the
tradition of Anderson & Belnap) that requires what is entailed to
be relevant to what it is entailed by (Restall 1996, 2000; Armstrong
2004: 10-12). Whether exploring this avenue will take us very
far remains a matter of dispute. Simons, for example, reflects,
>
>
> In truth however I suspect our intuitions about truth-makers may be at
> least as robust as our intuitions about what is good for a logical
> system of implication, and I would not at present attempt to explicate
> truth-making via the arrow of the relevance logic system
> **R**. (2008: 13)
>
>
>
Nevertheless since truth-making concerns the bestowal of truth,
entailment its preservation, there must *at some level* be an
important connection to be made out between truth-making and
entailment; the effort expended to make out such a connection will be
effort spent to the advantage of metaphysicians and logicians alike.
But even granted this is so there are reasons to be doubtful that any
overhaul of (*Entailment-T*), however radical, will capture
what it is to be a truth-maker. One vital motivation for believing in
truth-makers is this. Positing truth-makers enables us to make sense
of the fact that the truth of something depends on how things stand
with an independently given reality. This is how Bigelow reports what
happens to him when he stops believing in truth-makers:
>
>
> I find I have no adequate anchor to hold me from drifting onto the
> shoals of some sort of pragmatism or idealism. And that is altogether
> uncongenial to me; I am a congenital realist. (1988: 123)
>
>
>
Truth-makers are posited to provide the point of semantic contact
whereby true representations touch upon an independent reality, upon
something non-representational. Since entailment is a relation between
representations it follows that the notion of a truth-maker cannot be
fully explicated in terms of the relation of
entailment--regardless of whether representations are best
understood as sentences or propositions or some other candidate
truth-bearer (Heil 2000: 233-4; 2003: 62-5; Merricks 2007:
12-13). Ultimately (*Entailment-T*), or a relevance logic
version of it, will leave us wanting an account of what makes a
representation of the existence of a truth-maker--whatever it
entails--itself beholden to an independent reality.
### 1.2 Truth-making as Necessitation
This difficulty arises because entailment is a relation that lights
upon representations at both ends. Appreciating this Armstrong
recognised that truth-making
>
>
> cannot be any form of entailment. Both terms of an entailment relation
> must be propositions, but the truth-making term of the truth-maker
> relation is a portion of reality, and, in general at least, portions
> of reality are not propositions. (2004: 5-6; see also Lowe 2006:
> 185)
>
>
>
Accordingly Armstrong made a bold maneuver. He posited a
metaphysically primitive relation of necessitation (i.e., one not
itself defined in terms of possible worlds because Armstrong is
"opposed to the extensional view proposed by those who put
metaphysical faith in possible worlds" preferring an
'intensional account' of modality instead (1997: 151,
2004: 96)). The relation in question lights upon a portion of reality
at one end and upon a truth at the other. In the simplest case that
means that the truth making relation is one that "holds between
any truthmaker, *T*, which is something in the world, and the
proposition" that *T* exists (2004: 6). Armstrong then
defined what is to be a truth-maker in terms of this metaphysical
bridging relation:
(*Necessitation-T*)
a truth-maker is a thing that necessitates something's
being true.
This conception of truth making avoids the "category"
mistake that results from attempting to define truth-making in terms
of entailment. It also makes some advance upon (*Virtue-T*) and
its primitive use of "in virtue of" because at least
(*Necessitation-T*) relates the notion of truth making to other
modal notions, like that of necessity, upon which we have some
independent handle. But what is there positively to be said in favour
of conceiving of truth-makers in terms of "necessitation"?
Suppose that *T* is a candidate truth-maker for a truth
*p* even though *T* fails to necessitate *p*.
Then it is possible for *T* to exist even when *p* is
false. Armstrong now reflects "we will surely think that the
alleged truth-maker was insufficient by itself and requires to be
supplemented in some way" (1997: 116). Suppose this
supplementary condition is the existence of another entity,
*U*. Then "*T*+*U* would appear to be the
true and necessitating truth-maker for *p*" (2004: 7).
Armstrong concludes that a truth-maker for a truth must necessitate
the truth in question.
Unfortunately this argument takes us nowhere except around in a
circle. (*Necessitation-T*) embodies the doctrine that it is
both necessary and sufficient for being a truth-maker that a thing
necessitates the truth it makes true. Armstrong's argument for
this doctrine relies upon the dual assumptions: (1) anything that
fails to necessitate *p* (witness *T*) cannot be a
truth-maker for *p*, whereas (2) anything that succeeds in
necessitating *p* (witness *T*+*U*) must be. But
(1) just is the claim that it is necessary, and (2) just the claim
that it is sufficient for being a truth-maker that a thing
necessitates the truth it makes true. Since it relies upon (1) and
(2), and (1) and (2) are just equivalent to
(*Necessitation-T*), it follows that Armstrong's argument
is incapable of providing independent support for the conception he
favours of what it is to be a truth-maker.
Even though this argument may be circular, does
(*Necessitation-T*) at least have the favourable feature that
adopting it enables us to avoid the other difficulty that beset
(*Entailment-T*), *viz*. over-generation? Not if there
are things that necessitate a truth whilst still failing to be
sufficiently relevant to be plausible truth-makers for it. If the
necessitation relation is so distributed that it holds between any
contingently existing portion of reality, e.g., an ice-floe, and any
necessary truth, e.g., 2+2=4, then we shall be no further forward than
we were before. So Armstrong needs to tell us more about the
cross-categorial relation in question to assure us that such cases
cannot arise. Smith suggests another problem case for
(*Necessitation-T*).
>
>
> Suppose that God wills that John kiss Mary now. God's willing
> act thereby necessitates the truth of "John is kissing
> Mary". (For Malebranche, all necessitation is of this sort.) But
> God's act is not a truth-maker for this judgement. (Smith 1999:
> 6)
>
>
>
If such cases are possible then (*Necessitation-T*) fails to
provide a sufficient condition for being a truth-maker.
In fact a commitment to propositions as truth-bearers already emerges from (*Necessitation-T*)
if the necessitation relation it embodies is conceived as internal in
the following sense: "An internal relation is one where the
existence of the terms entails the existence of the relation"
strung out between the terms; otherwise a relation is external
(Armstrong 1997: 87). Armstrong argues that the relation of
truth-making has to be internal (in just this sense) because if it
weren't then we would have to allow that, absurdly,
"anything may be a truth-maker for any truth" (1997: 198).
But then nothing but propositions--conceived in the
self-interpreting sense--can be truth-bearers that are internally
related to their truth-makers. Any other candidate for this
representational role, as we've already reflected--a token
belief state or utterance--could have been endowed with a
different representational significance than the one it possesses. So
the other eligible candidates, by contrast to propositions,
aren't internally related to what makes them true.
Armstrong's commitment to the truth-making relation's
being internal clashes with his naturalism (David 2005: 156-9).
According to Armstrong,
>
>
> Truth attaches in the first place to propositions, those propositions
> which have a truth-maker. But no Naturalist can be happy with a realm
> of propositions. (Armstrong 1997: 131)
>
>
>
So Armstrong counsels that we don't take propositions with
"ontological seriousness":
>
>
> What exists are classes of intentionally equivalent tokens. The
> *fundamental* correspondence, therefore, is not between
> entities called truths and their truth-makers, but between the token
> beliefs and thoughts, on the one hand, and truth-makers on the other.
> (Armstrong 1997: 131)
>
>
>
But naturalistically kosher token beliefs and thoughts aren't
internally related to what makes them true. So Armstrong's
naturalism commits him to denying that the truth-making relation is
internal after all. Heil has also inveighed in a naturalistic spirit
against incurring a commitment to propositions that are designed to
have their own "built-in intentionality" whilst continuing
to maintain that truth-making is an internal relation (2006:
240-3). Since we have no idea of what a naturalistic
representation would look like that was internally related to what
made it true, this appears to be an impossible combination of views.
Our inability to conjure up a credible class of truth-bearers that are
internally related to their truth-makers provides us with a very
strong incentive for supposing that the truth-making relation is
external. The long and short of it: if we are wary, as many
naturalists are, of the doctrine that truth-bearers are propositions,
then we should also be wary of thinking that the truth-making relation
is internal. This spells trouble for (*Necessitation-T*) and
any other account of truth-making which invokes internal relations to
propositions.
### 1.3 Truth-making as Projection
Smith tries to get around this problem by adding to
(*Necessitation-T*) the further constraint that a truth-maker
for a given truth should be part of what that truth is *about*:
"roughly: it should fall within the mereological fusion of all
the objects to which reference is made in the judgment" (1999:
6). His suggestion is that a typical contingent judgement *p*
not only makes singular reference to the objects whose antics it
describes but also "generic reference" to the events,
processes and states that are associated with the main verb of the
sentence that is used to express the judgement in question. So, for
example, the judgement that John is kissing Mary incorporates not only
singular reference to John and Mary but also generic reference to all
kisses. Smith identifies the *projection* of a judgement with
the fusion of all these things to which it refers (singularly or
generically). He then defines what is to be a truth-maker in terms of
this notion:
(*Projection-T*)
a truth-maker for a judgement *p* is something that
necessitates *p* and falls within its projection
Since God's act of willing isn't one of the things to
which the judgement that John is kissing Mary makes singular or
generic reference--it doesn't fall within the projection of
this judgement--His act isn't a truth-maker for this
judgement. But that fateful kiss does fall within the generic
reference of the judgement that John is kissing Mary *and*
necessitates the truth of that judgement; so the kiss is a truth-maker
for it.
Whether taking up (*Projection-T*) will avoid classifying
malignant cases of necessitating things as truth-makers will depend
upon whether the notion of projection can be made sufficiently precise
to vindicate the intuition that the net cast by a judgement
won't catch these unwanted fishes with the rest of its haul;
whether if the judgement that ph*x* incorporates singular
reference to *x* and generic reference to ph-ings, it
doesn't incorporate (even more) generic reference to anything
else that may (waywardly) necessitate ph*x*. (Here
"ph" indicates the place for a verb phrase and
"*x*" a place for a name.)
Smith attempts to make the notion of projection precise using just
mereology and classical entailment. Something belongs to the
projection of a judgement that ph*x* only if it is one of
the things to which the judgement is existentially committed (Smith
1999: 7). In order to capture the "generic" as well as the
singular consequences of a judgement, Smith includes amongst its
existential consequences the fusions that result from applying the
mereological comprehension principle (*T*∃)
(∃*x*φ*x* iff there exists a unique fusion of
all and only those *x* such that φ*x*) to the
immediate commitments of the judgement. For example, the judgement
that Nikita meows is immediately committed to the existence of a meow
and thereby to the unique fusion of all and only meows. Because
God's act of willing John to kiss Mary doesn't fall within
the class of logical-cum-mereological consequences of the judgement
that John is kissing Mary, His act isn't a truth-maker for
it.
But this still threatens to over-generate truth-makers (Gregory 2001).
The judgement that John is kissing Mary has amongst its existential
consequences that someone exists. Since Smith assumes that
"*x* exists", like "*x* meows"
and "*x* kisses Mary", is a predicate, it also
follows via (*T*∃) that the projection of this judgement
includes the fusion of all and only existing things (Smith 1999: 10).
But since (*ex hypothesi*) God's act exists--and so
belongs to the aforementioned fusion and thereby the projection of the
judgement in question--and necessitates that John is kissing
Mary, it fails to rule out God's act as a truth-maker for
it!
The projection of a judgement needs to be made far more relevant to
what a judgement is intuitively about; the logical-mereological notion
of a projection supplied doesn't even provide a basis for
affirming that singular judgements don't generically refer to
every existing thing. An initial attractive move to avoid this
particular problem would be to deny that "exists" is a
predicate. But more generally, it remains to be seen whether an
appropriate notion of a judgement's referential net, its
projection, can be made out that isn't too
permissive--thereby including illegitimate
truth-makers--without having to deploy resources that are not
obviously clearer or less problematic than that of truth-making itself
(see Smith 2002 and Schnieder 2006b for contrasting prognoses).
### 1.4 Truthmaking in terms of Essentialism
Others attempt to avoid over-generating truth-makers by appealing to
the notion of *essence*, a notion that purports to be far more
discriminating than *necessitation* (Fine 1994; Lowe 1994). The
difference in grain between these notions becomes evident when we
reflect that although it is (supposed to be) necessarily the case that
if Socrates exists then his singleton does too, it isn't part of
the essence of Socrates that he belongs to this set. If these notions
really are different then we will need to distinguish between those
entities that merely necessitate a true claim on the one hand and
those that are also part of its essence on the other. This suggests a
strategy for ruling out spurious truth-makers: they're the ones
that only necessitate true claims, whereas the real ones are also
implicated (somehow) in the essences of the claims they make true.
This brings us back to the question of truth-bearers. If truth-makers
are implicated in the essences of truth-bearers then truth-bearers can
neither be sentences nor judgements. Truth-bearers of these kinds only
bear their representational features accidentally; they could have
been used to say or think something different, or occurred in contexts
where they lacked significance altogether. Since they could have meant
something different, or nothing at all, the truth-makers of these
truth-bearers can hardly be implicated in their essences. Accordingly
truth-bearers that essentially implicate their truth-makers must be
creatures that could not have shifted or lacked their representational
features. They must be *propositions* in the deep sense of
being items that are incapable of meaning anything other than they do.
Conceiving of propositions only in this sense and appealing to their
essences, what is to be a truth-maker admits of the following
definition (Mulligan 2003: 547, 2006: 39, 2007; Lowe 2006:
203-10, 2009: 209-15):
(*Essential-T*)
a truth-maker of a proposition is something such that it is part
of the essence of that proposition that it is true if that thing
exists.
If this definition is adopted then it is plausible to maintain that
many of the spurious cases of truth-makers, which have afflicted
accounts of truth-making in terms of entailment, necessitation or
projection, will thereby be weeded out (Lowe 2006: 202-3). It
isn't part of the essence of the proposition that John is
kissing Mary that it is true if there exists an act of God's
willing it. Nor is it part of the essence of the proposition that
2+2=4 that it is true if there is a particular ice floe in the
Antarctic. It isn't even part of the essence of the proposition
that 2+2=4 that it is true if p exists. So none of these things are
(spuriously) classified, if (*Essential-T*) is our touchstone,
as truth-makers for these propositions. In this respect, the
essentialist conception of truth-making has conspicuous advantages
over its aforementioned rivals.
Nevertheless, on the downside, it may be questioned whether our grip
upon the notion of the essence of a proposition is any firmer than the
notion of truth-maker for it. Moreover, the benefits of adopting
(*Essential-T*) come at a cost. The idea of a truth-maker is
introduced as "intuitively attractive" (Lowe 2006: 207).
But (*Essential-T*) requires propositions that are not only
abstract--already anathema to naturalists--but also
mysterious, because they are, so to speak, self-interpreting, i.e.
propositions that mean what they do irrespective of what speakers or
thinkers ever do with the signs or judgements that express them. So
the idea of a truth-maker turns out to be far less intuitive and
attractive than it initially seemed. Nonetheless, this is a commitment
some may accept anyway (Lowe 2009: 178-80), or may be willing to
accept if it gives them a good account of truth-making.
Essentialist conceptions of truth-making have proved less influential
in the recent literature than grounding conceptions (1.6 below). This
is surprising in light of the affinities between the notions of ground
and essence, especially given the possibility, favoured by some, of
explaining grounding in terms of essence (Correia 2013). This also
means that at least some of the attractions and some of the objections
to a grounding approach to truth-making, discussed below, are liable
to carry over to an essentialist approach.
### 1.5 Axiomatic Truth-making
The difficulties that have beset the definitions of truth-making
proffered so far have suggested to some philosophers that a
pessimistic induction is in order: that the project of defining
truth-making in more basic terms is misconceived, much as the project
of defining knowledge in more basic terms has come to seem
misconceived because e.g., of the Gettier cases. But there's no
gain to be had from doing nothing more than declaring the notion of a
truth-maker to be primitive. If it's primitive then we also need
to know how the notion may be fruitfully applied in association with
other concepts we already deploy--entailment, existence, truth
*etc.*--to describe the interplay of truth-bearers and the
world. As Simons remarks,
>
>
> The signs are that truth-making is not analysable in terms of anything
> more primitive, but we need to be able to say more than just that. So
> we ought to consider it as specified by principles of truth-making.
> (2000: 20)
>
>
>
In other words, the notion needs to be introduced non-reductively but
still informatively and this is to be achieved by appealing to its
systematic liaisons with other concepts.
This is the approach that was originally shared by Mulligan, Simons
and Smith (1984: 312-8). The principal schemata they employed to
convey to us an articulate grasp of what *truth-making* means
in non-reductive terms included:
(i)
(Factive) If *A* makes it true that *p*, then
*p*
(ii)
(Existence) If *A* exists, then *A* makes it true
that *A* exists
(iii)
(Entailment) If *A* makes it true that *p*, and
that *p* entails that *q*, then *A* makes it true
that *q*
Each of these schemata specifies a definite linkage between the
application of the notion of *truth-making* and some other
condition. *Truth-making* is introduced as the notion that
sustains all of these linkages. Putting the schemata together what it
is to be a truth-maker is then definable intra-theoretically as
follows,
(*Axiomatic-T*)
A truth-maker is something *x* such that (i) if *x*
makes it true that *p* then *p*, (ii) if *x*
exists then *x* makes it true that *x* exists...
and so on for each of the axiom schemata of our favoured theory of
truth-makers.
It is important to appreciate that adopting this approach to
truth-making doesn't have the benefits of theft over honest
toil. For one thing it doesn't obviate the threat of superfluous
truth-makers for necessary truths. (*Entailment*) is the
principle that truth-making tracks entailment: if *A* makes
*p* true then it makes all the consequences of *p* true
too. It's a principle that recommends itself irrespective of
whether truth-making can be defined. This is because it dovetails
smoothly with the idea that one truth-maker can make many truths true.
For example, suppose that a particular *a* has some absolutely
determinate mass. It is entailed by this description that various
determinable descriptions are also truly predicable of *a*.
Some of these truths say more than others, nonetheless they all have
the same truth-maker. Why so? Because they are entailed by
*a*'s having the mass it does (Armstrong 1997: 130; 2004:
10-11). To answer so is to appeal to (*Entailment*). But
if the entailment that truth-making tracks is classical then we are
back to flouting (*Relevance*). If *q* is necessary then
any contingent *p* classically entails *q*. So if
something at the back of your refrigerator makes it true that
*p* then by (*Entailment*) it makes *q* true too.
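The clash can be put schematically (writing "⊨" for classical entailment and "⊩" for truth-making; the notation is ours):

```latex
% If q is necessary, then classically any p entails q:
\Box q \;\Rightarrow\; p \vDash q \quad \text{for arbitrary } p.

% (Entailment): truth-making tracks entailment:
t \Vdash p \;\wedge\; p \vDash q \;\;\Rightarrow\;\; t \Vdash q.

% Hence whatever makes any contingent p true also counts as making
% the necessary q true -- the over-generation at issue.
```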
To avoid flouting (*Relevance*) in this way
(*Entailment*) had better be a principle that links
truth-making to a more restrictive, non-classical notion of entailment
(Mulligan, Simons, & Smith 1984: 316). So we won't be saved
the logical labour of figuring out which non-classical connective it
is that contributes to capturing what it is to be a truth-maker. Nor
will appealing to (*Axiomatic-T*) save us the hard work of
figuring out what truth-makers and truth-bearers must be like in order
to collectively realise the structure described by the axiom schemata
for truth-makers we favour. But there is nothing about these schemata
that demands truth-bearer and truth-maker be internally related.
It is striking that the Axiomatic approach to truth-making has proved
less popular in the recent literature than other more
metaphysically-loaded conceptions, in terms of necessitation, essence
or grounding. A principal attraction of the approach is its
metaphysical neutrality. By contrast, in more recent debate,
philosophers have preferred approaches which rely upon a distinctive
metaphysical vision.
### 1.6 Truth-Making as Grounding
Many recent approaches define truth-making in terms of
*grounding* or what is sometimes described as *non-causal
metaphysical dependence*. The notion of truth-making is typically
introduced, as we have seen, in terms of the ideology of "in
virtue of". As we have also seen, many philosophers then ask:
but what independent content can be given to this notion? In response
to this challenge, a theory of grounding may be conceived as a general
theory of "in virtue of" in terms of which truth-making
may then be explained. A further consideration which motivates those
already committed to grounding is the general methodological principle
that favours theoretical unification. Their idea is that by conceiving
of truth-making as a kind of grounding, we are able to understand
truth-making better, i.e. not as an isolated phenomenon but as an
instance of a more general pattern, thereby illuminating not only the
nature of truth-making but grounding too.
Early exponents of grounding within analytic philosophy date from the
1960s (Bergmann 1961: 229, Hochberg 1967: 416-7). Recently,
grounding has been explained in a variety of different ways (Correia
and Schnieder 2012, Raven 2015). But, broadly speaking, we can
distinguish between two conceptions which assign different logical
forms to grounding statements: (1) the "predicate" or
"relational" view whereby grounding statements are
assigned the logical form "*x* grounds *y*"
where "*x*" and "*y*" mark singular
term positions and the verb expresses a relation; (2) the
"operator" view whereby the canonical form of a grounding
statement is characterised by a sentence operator. So statements about
something grounding something else have the logical form
"*p* because *q*" where "*p*"
and "*q*" mark positions for sentences. The predicate
view of grounding is held by, amongst others, Schaffer 2009, Audi
2012, Raven 2012; the operator view is advanced by Fine 2001, 2012,
Schnieder 2006a, Correia 2011 [2014]. (A third approach, owed to Mulligan
2007, recognises a plurality of different forms of grounding
statements.) The choice between (1) and (2) is often made either on
the basis of an assessment of the logical form of typical grounding
claims, or on the basis of ontological commitment, for example, the
claim that the operator approach to grounding is "ontologically
neutral" because (allegedly) operators don't stand for
anything as (allegedly) predicates do (Correia 2010: 254, Fine 2012:
47).
We can now distinguish two broad conceptions of truth-making, defined,
respectively, in terms of the operator and predicate approach to
grounding.
*(Grounding-Predicate-T)*
A truth-maker is an entity *x* which makes a proposition
*y* true iff the fact that *x* exists grounds the fact
that *y* is true.
*(Grounding-Operator-T)*
A truth-maker is an entity *x* which makes a proposition
*y* true iff *y* is true because *x* exists.
For the former, see Rodriguez-Pereyra 2005, Schaffer 2009, Jago 2018:
184-202, for the latter, Correia 2005 (sec. 3.2), 2011 [2014],
Schnieder 2006a, Mulligan 2007, Caputo 2007.
One attraction of conceiving truth-making as based on grounding, aside
from the more general methodological virtues of theoretical
unification mentioned above, arises from taking grounding to be
"hyperintensional" in nature. Grounding is intended to be a
notion that's fine-grained enough to distinguish those entities,
e.g., the cardinal numbers, that ground a necessary truth, such as
2+2=4, from those that don't, e.g. a contingent existent such as
the aforementioned ice floe. Because grounding is so fine-grained
neither *Grounding-Predicate-T* nor
*Grounding-Operator-T* will give rise to the over-generation
that beset *Entailment-T* and *Necessitation-T* but
without having to appeal, like *Projection-T*, to an
underdeveloped notion of "about" or
"relevance".
Of course, in light of secs. 1.2 and 1.4 above, we can foresee a
downside to this. Both *Grounding-Predicate-T* and
*Grounding-Operator-T* quantify over propositions that stand
hyperintensionally to their truth-makers. It follows that both
*Grounding-Predicate-T* and *Grounding-Operator-T* share
a commitment with *Necessitation-T* and
*Essential-T*, viz. an ontology of abstract propositions
which have their meanings essentially. But, again, this may be a
commitment many grounding theorists have already made or be willing to
accept if it gives them a good account of truth-making. Another issue
is that there is no consensus upon the formal properties of grounding.
Whilst, for example, Schaffer (2009: 376) has maintained that
grounding is irreflexive, asymmetric and transitive, Rodriguez-Pereyra
(2015) has argued that the relation of truth-making isn't any of
these things. But Rodriguez-Pereyra also argues that grounding
isn't any of those things either.
So far we have considered efforts to define truth-making in terms of
grounding. But there are other approaches in the literature which seek
to endorse one notion at the expense of the other. One way accepts
grounding as a welcome theoretical innovation but argues that we are
better off without truth-making, at least in the forms which have
become familiar to us. A second way continues to endorse truth-making
whilst rejecting grounding.
The first approach, taken by Fine, involves embracing grounding but
rejecting truth-making. It's truth-making, he argues, we can do
without; grounding does a better job (Fine 2012: 43-6). Here
truth-making is conceived as a relation between a worldly entity and a
representation: the existence of the entity guarantees the truth of
the representation. But, Fine argues, it is more theoretically
fruitful to take grounding as the central notion for metaphysics. This
is partly because, Fine maintains, grounding is a less restrictive
notion than truth-making, because grounding does not require that the
ultimate source of what is true should lie in what exists, rather we
can remain open about what form grounding takes. Moreover, whilst
there may be genuine questions about what the ground is for the truth
of the representation that *p* there may be further questions
about what makes it the case that *p* and these questions will
typically have nothing to do with representation as such. Because
truth-making is typically understood in modal terms, i.e.
necessitation is conceived to be sufficient for truth-making, Fine is
also sceptical that there is any way of avoiding the over-generation
of truth-makers we have already discussed (see 1.1 and 1.2 above).
Finally, Fine thinks that we should be able to conceive of the world
hierarchically, whereby (e.g.) the normative is grounded in the
natural, the natural in the physical and the physical in the
micro-physical. But this is difficult to do if we think in terms of
truth-making, because if (e.g.) we conceive of a normative
representation being made true by the existence of something natural,
that natural thing is just the wrong sort of thing to be made true by
something at the physical level because it is not a representation.
Liggins (2012) maintains, like Fine, that the theory of truth-making
is inevitably less general than a theory of grounding. But Liggins
also argues that appealing to grounding obviates the need to appeal to
truth-makers in the sense that (e.g.) Armstrong maintained. This is
because one significant reason for positing truth-makers is that doing
so enables us to explain the asymmetric sense in which truth depends
upon being (see sec. 3.2 below). But, Liggins maintains, this can
already be done in terms of grounding, by saying that the fact that *p*
grounds the fact that a certain proposition is true, namely the
proposition that *p*. According to Liggins, this isn't a case of
truth-making because the relation of grounding invoked holds solely
between two facts, albeit one of them a fact about a proposition, that
it is true, rather than, as Liggins reads the requirements of
truth-maker theory, a relation holding between something and a
proposition.
Fine's critique of truth-making presupposes that we can only avail
ourselves of truth-making if we conceive of truth-making in monolithic
terms, as providing a unique method for metaphysics. But whilst, for
example, Armstrong may have had a tendency to think in such terms,
this doesn't appear to be mandated by the nature of truth-making
itself. One might, as Asay (2017) argues, instead conceive of
grounding and truth-making as complementary projects, whereby a theory
of truth-making doesn't purport to do everything a theory of
grounding does and so reciprocal illumination remains a live
possibility. Indeed one might already think of the aforementioned
definitions or elucidations of truth-making, which are designed in
terms of a hyperintensional notion of grounding in order to avoid
over-generation of truth-makers, as one way in which grounding is
already being used to illuminate truth-making.
The second approach, outlined above, rejecting grounding but favouring
truth-making, is advanced by Heil (2016). Heil is generally suspicious
of grounding because, he points out, grounding is often characterized
in different and incompatible ways, so he is sceptical that there is a
univocal concept of grounding to be had. But, more specifically, Heil
has long maintained that hierarchical conceptions of reality spell
trouble because of the difficulties explaining the causal and
nomological relationship between the layers (Heil 2003: 31-9). He
argues that this was the problem identified for supervenience and
realization in the '90s and it's a problem inherited by the
hierarchical conception of reality to which grounding gives rise. So
Heil does a modus tollens where Fine does a modus ponens. Whilst Fine
conceives of the hierarchical styles of explanation grounding provides
as one of the principal virtues of grounding, Heil considers this is
one of the deepest drawbacks of grounding. By contrast with Fine, Heil
instead recommends truth-making as the proper methodology for
metaphysics because it enables us to give a non-hierarchical
description of reality whereby fundamental physics gives us the
truth-makers for truths that have truth-makers. Further and
independent objections to grounding include whether the notion of
grounding is intelligible (Daly 2012), theoretically unified (MacBride
2013b), theoretically illuminating (Wilson 2014) or the result of a
cognitive illusion (Miller and Norton 2017).
Other available positions include: that truth-making is a special kind
of grounding which involves a unique form of dependence (Griffith
2013), and that truth-making isn't grounding but that grounding
is key to truth-making (Saenz 2018).
## 2. Which range of truths are eligible to be made true (if any are)?
Even when truth-maker panegyrists agree about what it is to be a
truth-maker, they often still disagree about the range of truths that
are eligible to be made true. This results in further disagreement
about what kinds of entities truth-makers are. There is potential for
disagreement here because of the appearance that different ranges of
truths require different kinds of truth-makers. Sometimes, discovering
themselves unable to countenance the existence of one or other kind of
truth-maker, panegyrists may find themselves obliged to reconsider
what truths really require truth-makers or to reconsider what it is to
be a truth-maker. We can get a sense of the complex interplay of
forces at work here by starting out from the most simple and general
principle about truth-making (maximalism) and then seeing what
pressures are generated to make us step back from it.
### 2.1 Maximalism
Truth-maker *maximalists* demand that every truth has a
truth-maker--no exceptions granted. So they advance the
completely general principle,
(*Maximalism*)
For every truth, there must be something in the world that
makes it true.
The principle lies at one end of the spectrum of positions we can
potentially occupy. At the other end, we find *truth-maker
nihilism*, the idea that no truth needs to be made true because
(roughly) the very idea of a truth-maker is a corrupt one: there is no
such role as making something true for anything to perform.
*Truth-maker optimalism* is the intermediate position that only
some truths stand in need of truth-makers: not so few that truth fails
to be anchored in reality but not so many that we strain credulity
about the kinds of things there are.
#### 2.1.1 The Liar?
Milne (2005) has offered the following knock-down, if not knock-out,
argument against maximalism. Take the sentence,
(*M*)
This sentence has no truth-maker.
Suppose that (*M*) has a truth-maker. Since it's made
true, *M* must be true. And since it's true, what
(*M*) says must also be the case: that *M* has no
truth-maker. So if (*M*) has a truth-maker then it
doesn't have a truth-maker. By *reductio ad absurdum*
(*M*) therefore has no truth-maker. But this is just what
(*M*) says. So, *contra* (Maximalism), there is at least
one sentence, (*M*), that is true without benefit of a
truth-maker.
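Milne's reductio can be laid out step by step (a sketch; "Tm" for "has a truth-maker" is our abbreviation, not Milne's):

```latex
% M := "This sentence has no truth-maker", i.e. M says that \neg Tm(M).
% 1. Tm(M)            assumption, for reductio
% 2. M is true        from 1: truth-making is factive
% 3. \neg Tm(M)       from 2, plus what M says
% 4. \bot             1, 3
% 5. \neg Tm(M)       reductio, discharging 1
% 6. M is true        from 5, plus what M says
% So M is true yet, by 5, lacks a truth-maker: a counterexample to (Maximalism).
```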
Rodriguez-Pereyra (2006c) has responded to this argument by
maintaining that (*M*) is like the Liar sentence
(*L*)
This sentence is false.
It's a familiar position in the philosophy of logic to respond
to the inconsistency that arises from supposing that (*L*) is
either true or false--if it is, it isn't and if it
isn't, it is--that the *sentence* (*L*)
isn't meaningful, despite superficial appearances (see, for a
useful introduction to these issues, Sainsbury 1995: 111-33).
According to Rodriguez-Pereyra, because (*M*) is akin to the
Liar sentence there's no reason to suppose that (*M*) is
meaningful either. But if (*M*) isn't meaningful then
(*M*) certainly can't be true; in which case (*M*)
can't be a counterexample to maximalism either.
This response is flawed because, as Milne points out, (*M*) is
importantly unlike (*L*): (*L*) gives rise to an
outright inconsistency when only elementary logic rules are applied to
it; whereas (*M*) isn't inconsistent *per se* but
only when combined with a substantive metaphysical principle:
maximalism. If, by contrast, you don't believe in truth-makers
then you have every right to treat (*M*) as just another true
sentence--just as you have every right to think (*P*) is
true if you don't believe in propositions,
(*P*)
This sentence does not express a proposition.
It's only if one was (absurdly) to think that maximalism was a
logical truth that (*M*) could be intelligibly thought to be
"Liar-like". Of course, one person's *modus
ponens* is another's *modus tollens*. So if one
already had very strong independent reasons for being committed to
maximalism, Milne's argument would provide a reason for thinking
that (*M*) is meaningless (even though it isn't
Liar-like). But even if there are such reasons--or appear to
be--we can't claim to have control of our subject matter
until we have established what it is about the constitutive
connections that obtain between truth-making, truth and truth-bearers
that determines (*M*) to be meaningless (if it is); until
we've established the lie of the land, we can't be sure
that we're not kidding ourselves thinking (*M*) to be
meaningless rather than maximalism to be false. The lesson repeats
itself: a convincing theory of truth-makers requires a coeval theory
of truth-bearers.
For further discussion of the Liar Paradox in relation to
truth-making, see Milne 2013 and Barrio and Rodriguez-Pereyra
2015.
#### 2.1.2 Could there be nothing rather than something?
Here's another shot across the bows, this time from Lewis. Take
the most encompassing negative existential of all: absolutely nothing
exists. Surely this statement is possibly true. But if it were true
then something would have to exist to make it true if the principle
that every truth has a truth-maker is to be upheld. But then there
would have to be something rather than nothing. So combining
maximalism with the conviction that there could have been nothing
rather than something leads to contradiction (Lewis 1998: 220, 2001:
611). So unless we already have reason to think there must be
something rather than nothing--as both Armstrong (1989b:
24-5) and Lewis (1986: 73-4) think they
do--maximalism is already in trouble.
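The structure of Lewis's argument, in outline (our reconstruction):

```latex
% 1. \Diamond N                  where N = "nothing exists" (premise)
% 2. Suppose a world at which N is true.
% 3. (Maximalism): at that world, \exists x\,(x \text{ makes } N \text{ true}).
% 4. So something exists there, contradicting N.
% Conclusion: (Maximalism) entails \neg\Diamond N, i.e. that there
% could not have been nothing rather than something.
```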
#### 2.1.3 Maximalism is distinct from 1-1 Correspondence
What is there to be said in defence of maximalism? Even though he
favours it, Armstrong finds himself obliged to admit that "I do
not have any direct argument" for recommending the position
(2004: 7). Instead he expresses the hope that
>
>
> philosophers of realist inclinations will be immediately attracted to
> the idea that a truth, any truth, should depend for its truth on
> something "outside" it, in virtue of which it is true.
>
>
>
>
Let us follow Armstrong's lead and treat maximalism as a
"hypothesis to be tested". (See Schaffer 2008b: 308 for a
more direct argument that relies upon (*Grounding-T*).)
Maximalism needs to be distinguished from the even stronger claim,
(*Correspondence*)
For each truth there is exactly one thing that makes it true and
for each truth-maker there is exactly one truth made true by it.
But that there's clear blue water between these claims is
evident if we combine maximalism with (*Entailment*)--that
whatever makes a truth *p* true makes what *p* entails
true too. So *p* and all of its consequences are not only all
made true (as maximalism demands) but they also share a truth-maker
(as *Correspondence* denies). This takes us halfway to
appreciating that so far from being an inevitably profligate
doctrine--as its name suggests--maximalism is compatible
with denying that some logically complex claims have their own bespoke
truth-makers. (The inspiration for thinking this way comes from the
logical atomism of Russell (1918-19) that admitted some
logically complex facts but not others--it is to be contrasted
with Wittgenstein's version of the doctrine (1921) which
admitted only atomic facts.)
Suppose, for the sake of expounding the view, that some truth-bearers
are atomic. Also suppose that *P* and *Q* are atomic and
*t* makes *P* true. Then, by *Entailment*,
*t* makes *P* ∨ *Q* true too. Similarly, if
*s* makes *P* true and *s*\* makes *Q* true
then, by *Entailment*, *s* and *s\** together make
*P* & *Q* true. Since the task of making *P*
∨ *Q* and *P* & *Q* true has already been
discharged by the truth-makers for the atomic truth bearers, there is
no need to posit additional truth-makers for making these disjunctive
and conjunctive truths true. Similar reasoning suggests that there is
no need to posit bespoke truth-makers for existential generalisations
or truths of identity either (Mulligan, Simons, & Smith 1984: 313;
Simons 1992: 161-3; Armstrong 2004: 54-5). But maximalism
is not thereby compromised: even though disjunctive and conjunctive
truths lack specific truth-makers of their own, they're still
made true by the truth-makers of the basic claims from which
they're compounded by the logical operations of disjunction and
conjunction.
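The reasoning of the last paragraph can be compressed (again writing "⊩" for "makes true", our notation):

```latex
% Disjunction: one truth-maker suffices, via (Entailment):
t \Vdash P,\quad P \vDash P \lor Q
  \;\;\Rightarrow\;\; t \Vdash P \lor Q

% Conjunction: the two atomic truth-makers together suffice:
s \Vdash P,\quad s^{*} \Vdash Q
  \;\;\Rightarrow\;\; s, s^{*} \Vdash P \mathbin{\&} Q

% No bespoke truth-maker for P \lor Q or P \& Q need be posited.
```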
#### 2.1.4 The need for bespoke truth-makers
But positing truth-makers for atomic truths doesn't obviate the
need--supposing maximalism--to posit additional truth-makers
for negative and universal truths. This becomes apparent in the case
of negative truths when we compare the truth-tables for conjunction
and disjunction with the truth-table for negation. The former tell us
that the truth of a disjunctive formula is determined by the truth of
one or other of its disjuncts, whilst the truth of a conjunctive
formula is determined by the truth of both its conjuncts. But the
truth-table for negation doesn't tell us how the truth of
~*P* is determined by the truth of some other atomic formula
*Q* from which ~*P* follows; it only tells us that
~*P* is true iff *P* is *false*. The strategy for
avoiding bespoke truth-makers for logically complex truths can't
get a grip in this case: there's no *Q* such that
supplying a truth-maker for it obviates the necessity of positing an
additional truth-maker for ~*P* (Russell 1918-19:
209-11; Hochberg 1969: 325-7).
The problem is even starker for universal truths: there's no
truth-table for them because there is no set of atomic formulae whose
truth determines that a universally quantified formula is true too.
Why so? Because whatever true atomic formulae we light upon
(*Fa*, *Fb*... *Fn*), it doesn't
follow from them that ∀*xFx* is true. To extract the
general conclusion one would need to add to the premises that
*a*, *b*... *n* are *all* the things
there are; that there's no extra thing waiting in the wings to
appear on stage that isn't *F*. But this extra premise is
itself universally quantified, not atomic. So it can't be argued
that the truth-makers for *Fa*, *Fb*... *Fn*
put together already discharge the task of making ∀*xFx*
true because they entail it (Russell 1918-19: 236-7;
Hochberg 1969: 335-7).
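The point can be stated as an entailment failure ("⊨" classical):

```latex
% The atomic truths do not entail the generalisation:
\{Fa, Fb, \ldots, Fn\} \;\nvDash\; \forall x\,Fx

% Adding a totality premise closes the gap:
\{Fa, Fb, \ldots, Fn,\;
  \forall x\,(x = a \lor x = b \lor \ldots \lor x = n)\}
  \;\vDash\; \forall x\,Fx

% but the totality premise is itself universally quantified, not atomic.
```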
##### 2.1.4.1 Truth-makers for Negative Truths
The most straightforward response--since we are treating
maximalism as a working hypothesis--is to find more fitting
truth-makers for those truths that aren't already made true by
the truth-makers for the atomic truths that entail them. Whilst
looking around for truth-makers for negative truths Russell reflected,
>
>
> There is implanted in the human breast an almost unquenchable desire
> to find some way of avoiding the admission that negative facts are as
> ultimate as those that are positive. (1919: 287)
>
>
>
He was right that our desire for positive facts and things makes us
awkward about acknowledging that negative facts or things are the
truth-makers of negative truths. Nonetheless, discussions about
whether there are truth-makers for a given range of truth bearers on
one or the other side of the positive-negative divide are apt to
appear nebulous. This is because, as Russell had himself previously
noted, there is "no formal test" or "general
definition" for being a negative fact; we "must go into
the meanings of words" (1918-19: 215-6). Statements
of the form "*a* is *F*" aren't
invariably positive ("so-and-so is dead"), nor are
statements of the form "*a* isn't *F*"
("so-and-so isn't blind") always negative. But it
doesn't follow from the fact that a syntactic test cannot be
given that there is nothing to the contrast between positive and
negative. Molnar suggests that the contrast can be put on a sound
scientific footing. For Molnar, natural kinds are paradigm instances
of the positive, to be identified on *a posteriori* grounds
(2000: 73). To say that a thing belongs to a natural kind identified
in this way is to state a positive fact. To state a negative fact is
to negate a statement of a positive fact.
It's a very natural suggestion that if the negative claim that
*a* isn't *F* is true it's made true by the
existence of something positive that's *incompatible*
with *a*'s being *F* (Demos 1917). For example,
the truth-maker for the claim that kingfishers aren't yellow is
the fact that they're blue because their being blue is
incompatible with their being yellow. But what makes it true that
these colours are incompatible? The notion of *incompatibility*
appears itself to be negative--a relation that obtains between
two states when it's *not* possible for them to obtain
together. So this proposal threatens to become regressive: we'll
need to find another positive truth-maker for the further negative
claim that yellow and blue are incompatible, something whose obtaining
is incompatible with the state of yellow and blue's being
compatible, and so on (Russell 1918-9: 213-5, 1919:
287-9; Taylor 1952: 438-40; Hochberg 1969: 330-1;
Grossman 1992: 130-1; Molnar 2000: 74-5; Simons 2008).
There's another worry: it's not obvious that there are
enough positive states out there to underwrite all the negative truths
there are. Even though it may be true that this liquid is odourless
this needn't be because there's something further about it
that excludes its being odorous (Taylor 1952: 447; Mulligan, Simons,
& Smith 1984: 314; Molnar 2000: 75; Armstrong 2004: 62-3;
Dodd 2007: 387).
One could circumvent the threatened regress by denying that the
incompatibilities in question require truth-makers of their own
because they're necessary truths and such truths are a
legitimate exception to maximalism--because "they are true
come (or exist) what may" (Simons 2005: 254; Mellor 2003: 213).
There is some plausibility to the idea that formal truths
(tautologies) don't stand in need of truth-makers; their truth
is settled by the truth-tables of the logical constants. But material
necessary truths--such as that expressed by "yellow is
incompatible with blue"--appear to make just as substantive
demands upon the world as contingent truths do (Molnar 2000: 74). In a
sense they appear to make even more of a demand since the world must
be so endowed that it could not in any circumstances have failed to
live up to the expectations of material necessary truths. It's a
peculiar feature of our philosophical culture that even though
it's almost universally acknowledged that Wittgenstein's
plan (1921: 6.37) to show all necessity is logical necessity ended in
failure--indeed foundered upon the very problem of explaining
colour incompatibilities--so many philosophers continue to
think and talk as though the only necessities were formal ones so that
necessary truths don't need truth-makers (MacBride 2011:
176-7).
Russell reluctantly chose to acknowledge negative facts as
truth-makers for negative truths. He just couldn't see any way
of living without them. But negative facts are an unruly bunch. Try to
think of all the ways you are. Contrast that with the even harder task
of thinking of all the ways you aren't! If negative facts are
acknowledged as truth-makers they will have to be indefinitely
numerous, unbounded in their variety; choosing to live with them is a
heavy commitment to make (Armstrong 2004: 55). What's worse, if
negative facts are akin to positive facts--as their name
suggests--then they must be made up out of things, properties and
relations arranged together. But, *prima facie*, many of these
things, properties and relations aren't existing elements of
reality. So unless, like Meinong, we believe in the non-existent,
we'll have to admit that negative facts aren't
configurations of their constituents and so an entirely different kind
of entity from positive facts altogether (Molnar 2000: 77; Dodd 2007:
388).
It is for such reasons that Armstrong counsels us to adopt a more
parsimonious account of what makes negative truths true (2004:
56-9). Armstrong's own account lies at the opposite
extreme to Russell's. Whereas Russell posited indefinitely
*many* negative facts to make negative truths true, Armstrong
posits just *one* thing that's responsible for making
them *all* true, *viz*. a totality fact.
We should register C.B. Martin's worry that this is throwing out
the baby with the bath water (1996: 59). According to Martin, we
already recognise in ordinary discourse that different negative truths
have different truth-makers--not just one as Armstrong proposes.
For example, we recognise that what makes it true that there is no oil
in this engine is different from what makes it true that there are no
dodos left. What makes claims like these true are absences, lacks,
limits, holes and voids, where these are conceived not as things but
as "*localised* states of the world", robustly
first-order and "causally relevant" to what goes on
(Martin 1996: 58, 65-6; Taylor 1952: 443-5). But, as many
philosophers have argued, when we talk about an absence having causal
effects what we're really saying can be understood without
reifying negative states and appealing instead to the actual effects,
or the counterfactual effects, of a positive state (Molnar 2000:
77-80; Armstrong 2004: 64-7; Lewis 2004; Beebee 2004).
##### 2.1.4.2 Truth-makers for general truths
Armstrong's grand design is to sweep away the difficulties that
attend the admission of negative facts by positing a special kind of
general fact that also serves as the truth-maker for general truths.
Russell admitted general facts too but he acknowledged that "I
do not profess to know what the right analysis of general facts
is" (Russell 1918-9: 236-7). But Armstrong has gone
further and assigned to general facts the following structure: a
general, or totality fact consists in a binary relation *T* of
*totality* that holds between an aggregate on the one hand and
a property on the other when the aggregate comprises *all* the
items that fall under the property in question (Armstrong 1989b:
93-4, 1997: 199-200, 2004: 72-5).
For example, China, France, the Russian Federation, the U.K. and the
U.S.A. comprise the permanent membership of the UN Security Council.
So the aggregate (*A*) of them bears the *T* relation to
the property (*P*) of being a permanent member of the Council
(*T*(*A*, *P*)). Since the aggregate bears that
relation to that property, there can be no other permanent members of the
Council who aren't already included in it. So the totality fact
*T*(*A*, *P*) suffices for the general truth that
China, France, the Russian Federation, the U.K. and the U.S.A. are
*all* its members. It also suffices for the truth of the
negative existential that there are *no* other members of the
Council. So once we've recognised that *T*(*A, P*)
exists, there's no need to recognise additional bespoke
truth-makers for these negative truths.
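Armstrong's proposal lends itself to a toy model (a sketch only: the class and method names are illustrative, not Armstrong's own formalism). A totality fact pairs an aggregate with a property, and its mere existence settles both the general truth and the corresponding negative existential:

```python
# Toy model of Armstrong's totality relation T(A, P): a sketch, not a
# formal reconstruction. Names and representation are illustrative.

class TotalityFact:
    """T(A, P): the aggregate A comprises ALL items falling under P."""
    def __init__(self, aggregate, property_name):
        self.aggregate = frozenset(aggregate)
        self.property_name = property_name

    def makes_true_all_of(self, items):
        # The general truth: these items are *all* the P's.
        return frozenset(items) == self.aggregate

    def makes_true_no_other(self, item):
        # The negative existential: this item is not a further P.
        return item not in self.aggregate

# The permanent membership of the UN Security Council.
A = {"China", "France", "Russian Federation", "U.K.", "U.S.A."}
T_AP = TotalityFact(A, "permanent member of the UN Security Council")

print(T_AP.makes_true_all_of(A))          # the general truth
print(T_AP.makes_true_no_other("India"))  # the negative existential
```

The point of the sketch is that a single posit does double duty: once `T_AP` exists, no separate bespoke truth-maker is needed for the negative existential.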
In a similar way Armstrong endeavours to sweep away the need for
negative facts by affirming "the biggest totality state of all,
the one embracing all lower-order states of affairs", i.e., the
existence of a totality state that consists in an aggregate of all the
(1st order) states of affairs there are related by
*T* to the property of *being a (1st order)
state of affairs* (2004: 75). It's one of Armstrong's
abiding contentions that the world is "a world of states of
affairs, with particulars and universals only having existence within
states of affairs" (1989: 94). Consequently this totality state
comprises a vast swathe of what exists--whether particulars,
universals or states of affairs that consist in particulars having
universals. So it follows from the existence of this totality fact
that there are no more (1st order) states of affairs that
are not already included in the aggregate of states that *T*
relates to the property of *being a (1st order) state
of affairs*. Nor are there any particulars or (1st
order) universals had by those particulars that are not constituents
of the state of affairs included in that aggregate. It also follows
that there are no more particulars or (1st order)
universals. So this totality fact serves as truth-maker for all these
negative truths.
It is sometimes objected that such totality facts are just negative
facts in disguise: "Totality statements state the non-existence
of certain entities, they state 'no more facts'"; so
we should reject totality facts if we are dissatisfied with negative
ones (Molnar 2000: 81-2; Dodd 2007: 389). Armstrong responds to
this charge with equanimity: "It is not denied, of course, that
the totalling or alling relation involves negation. It sets a limit to
the things of that sort" (2004: 73). But if negation has indeed
been smuggled into the description of the role that *T*
performs in comprising a totality state then it is difficult to avoid
the suspicion that Armstrong has simply exchanged many negative facts
for one big one. But we may think of Armstrong's contribution in
a different way. There are two ostensible concerns that negative
states of affairs present. First, there is a concern about their
number. Second, there is a concern about, so to speak, their
*negativity*. Armstrong has addressed the first concern by
showing how we may reduce the number of negative states of affairs.
But the second concern cannot be met--because, as
Armstrong reflects, we cannot eliminate negation from our description
of the world.
It has also been objected that Armstrong's position gives rise
to a "paradox of totality" (Armstrong 1989b: 94, 1997:
198-9, 2004: 78-9; Cox 1997: 53-60; Molnar 2000:
81). Take the totality state of affairs that comprises all the
(1st order) states of affairs. Since this (2nd
order) state is itself a state of affairs it follows that the initial
aggregate of (1st order) states of affairs failed to
comprise all the states of affairs there are. So there must be a
further totality state that comprised the aggregate of all of them.
But this (3rd order) state is also a state of affairs so it
will need to be added to the mix too, and so *ad
infinitum*.
Armstrong responds to this objection with equanimity too:
>
>
> we can afford to be casual about this infinite series. For after the
> first fact of totality these "extra" states of affairs are
> all supervenient. As such, we do not have to take them with
> ontological seriousness. (1989b: 94)
>
>
>
To understand this we need to appreciate that Armstrong's notion
of *supervenience* is non-standard: "an entity *Q*
supervenes upon entity *P* if and only if it is impossible that
*P* should exist and *Q* not exist" or, in other words, that
the existence of *P* entails the existence of *Q*.
(Armstrong 1997: 11). He also holds a non-standard conception of
ontological commitment, *viz*. that "What supervenes is
no addition of being" (Armstrong 1997: 12, 2004: 23-4).
Armstrong's idea is (roughly) that to be a genuine addition to
being is to be a net (indispensable) contributor to the schedule of
truth-makers for all the truths. But supervenient entities are
superfluous as truth-makers. If the existence of an *R* entails
the existence of a certain *S* which in turn entails the truth
of *P*, then *R* already makes *P* true so there
is no need to include *S* in the inventory of truth-makers. It
follows that supervenient entities, like *S,* are no addition
of being (Lewis 1992: 202-3).
Now bring this non-standard conception of ontological commitment to
bear upon the envisaged infinite series of totality facts. It is
impossible that the 2nd order totality fact comprising all
the 1st order states of affairs exist and the 3rd order
totality fact comprising all the 1st and 2nd
order states not exist. So the 3rd totality fact--or
any other state higher-order than it--is entailed by the
existence of the 2nd order totality state. So all of these
*n* > 2 higher-order totality states supervene on it.
That's why Armstrong doesn't think we need to take any of
them with ontological seriousness.
What should we make of this non-standard conception of ontological
commitment in terms of truth-making: that to be is to be a
truth-maker? Cameron (2008c) has proposed that this conception should
*replace* the standard one that to be is to be the value of a
variable. But this threatens to cut off the branch the advocates of
truth-maker theory are sitting on: if they disavow that existential
quantification is ontologically committing then they will be left
without a means of determining the ontological commitments of
truth-maker theory itself, i.e., the theory that gives the inventory,
using existential quantification, of what makes all the truths true
(Schaffer 2010: 16-7).
Of course, if someone grants that existential quantification is
ontologically committing *in* the context of a theory of
truth-makers then they won't stultify themselves in this way
(Heil 2003: 9, 2009: 316-7, Simons 2010: 200). But this just
seems like special pleading. An argument is owed that we can't
legitimately commit ourselves to the existence of things that perform
theoretical roles outside the theory of truth-makers (MacBride 2011:
169). Why should there be only one theoretical dance allowed in town?
Why shouldn't we allow that there are other theoretical roles
for existing things to perform? Indeed it's a very real
possibility that when we come to understand the capacity of the
truth-makers to make truth-bearers true we will find ourselves
embroiled in commitment to the existence of other things in their
explanatory wake that aren't truth-makers themselves. In fact
Armstrong himself should have been one of the first to recognise this.
For it has been an abiding feature of Armstrong's world view
that we are obliged to acknowledge not only states of affairs, which are
truth-makers, but also properties and relations, constituents of
states of affairs, which aren't.
More generally, can we make any sense of the idea of "an
ontological free lunch"? Why is something supervenient no
addition to being? Even if we only ever came to recognise the
existence of one supervening entity would we not thereby have
*added* at least one extra item to our inventory of things that
exist? Of course in the special case where the supervening entity is
*constituted* by the entities it supervenes upon--i.e.,
the things that are already there--it makes some sense to say
that it's not adding anything new; but it's not at all
obvious that what Armstrong et al. declare to be ontological free
lunches are constituted from the entities upon which they supervene
(Melia 2005: 74-5). Accordingly Schulte (2011a, 2011b) argues
that no necessitarian or grounding account of truth-making can make
sense of the notion of an ontological free lunch, maintaining instead
that truth-maker theory needs to be augmented with the idea that we are
offering a reductive explanation when we explain in terms
of truth-makers.
Where does this leave us? So long as Armstrong's non-standard
conception of ontological commitment remains controversial, it also
remains controversial whether the infinite series of totality facts to
which Armstrong is "committed" may be dismissed as mere
ontological frippery.
### 2.2 Optimalism
One way to respond to these difficulties is to abandon maximalism in
favour of optimalism, to deny that universal and negative statements
need truth-makers. But Merricks argues the optimalist "way
out" is blocked. Negative truths need truth-makers if any truths
do, but they can't have them. So we must give up thinking that
truths need truth-makers in the first place (2007: 39-67).
Here's why Merricks thinks so. In order to avoid the
over-generation of truth-makers for necessary truths (*etc*.)
Merricks imposes a relevance constraint: "a truth-maker must be
that which its truth is about" (2007: 28). But can this
constraint be satisfied in the case of negative existentials, such as
the statement there are no hobbits? This statement isn't about
hobbits, because there aren't any, and other apparent
candidates, such as the universe's exhibiting the global
property of *being such that there are no hobbits in it*,
appear hoaxed up and artificial (2007: 43-55). Even if one
follows Merricks this far, one may still think that negative
existentials are the principled exception that proves the rule that a
truth-maker for a positive truth must be what its truth is about. But,
so far as Merricks is concerned, this is throwing the baby out with
the bathwater:
>
>
> I deny that if we set aside the intuition that "a truth, any
> truth" depends on being we are left with the equally compelling
> intuition that all truths *except negative existentials* depend
> on being. (2007: 41)
>
>
>
He also suggests that this position is theoretically disingenuous
because no one would consider retreating to it from full-blown
maximalism unless he or she had already been "spooked" by
his or her failure to find truth-makers for negative truths; or if
they held onto the view that truth is correspondence (Merricks 2007:
40-1; Dodd 2007: 394; Cameron 2008a: 411). Merricks surmises
that if we have any reason to commit to truth-makers, we have only
reason to commit outright to a truth-maker for every truth
(maximalism). But since maximalism cannot be sustained because of the
lack of things for negative existentials to be about, Merricks
recommends the rejection of truth-makers altogether.
But optimalists aren't just "spooked" or
"timid" maximalists. They stand on their own two feet with a
principled position of their own that need neither be based upon
"gerrymandered intuition" nor adopted as a consequence of
a forced retreat from maximalism. If maximalism is intellectual heir
to Russell's logical atomism, then optimalism (at least in the
form under consideration) is heir to Wittgenstein's version of
the doctrine according to which it is only atomic propositions that
represent the existence of states of affairs. The optimalists'
idea is that once truth-makers have been supplied for the atomic
truths there is simply *no need* to posit further truth-makers
for the molecular ones. All we need to recognise is that an atomic
statement *P* is true whenever a truth-maker for *P*
exists, and that *P* is false if and only if no truth-maker for
*P* exists. Once the existence and non-existence of the
truth-makers has settled the truth-values of all the atomic
statements, the logical operations described by the truth-tables then
settle the truth and falsity of all the molecular statements (another
story must be told about what the truth-makers are for the
non-extensional constructions--another elephant in the room). In
particular, the truth-table for negation--that tells us what
"~" means--assures us that if *P* is false
then ~*P* is true. So all it takes to make ~*P* true is
that no truth-maker for *P* exists. Thus Mulligan, Simons, and
Smith:
>
>
> it seems more adequate to regard sentences of the given kind as true
> not in virtue of any truth-maker of their own, but simply in virtue of
> the fact that the corresponding positive sentences have no
> truth-maker. (1984: 315; see also Mellor 2003: 213-4, 2009;
> Simons 2000: 7-8, 2005: 255-6)
>
>
>
Simons offers the illuminating reflection:
>
>
> This is the truth-maker end of Wittgenstein's insight that
> propositions are bi-polar: if a proposition has one truth-value,
> however it gets it, its contradictory opposite has the opposite
> truth-value without further ado;
>
>
>
Simons dubs the truth functional mechanism whereby the negation of an
atomic statement gets the value true, "truth by default"
(2008: 14).
Optimalists also think that general truths are true by default so
there is no need for bespoke truth-makers for them (like totality
facts). Universal quantifications ∀*xFx* are logically
equivalent to negative statements of the form
~∃*x*~*Fx*. Since the latter are negative,
they're true, if they are, only by default. Then because the
former statements are logically equivalent to them, the optimalists
surmise that universal quantifications are true by default too (Mellor
2003: 214; Simons 2008: 14-5).
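The optimalist machinery can likewise be sketched as a toy evaluator over a finite domain (illustrative only; representing truth-makers as predicate-object pairs is my assumption, not the optimalists'). Atomic statements are true iff a truth-maker for them exists, negations are true by default, and universal claims are evaluated via the equivalence with ~∃*x*~*Fx*:

```python
# Toy model of "truth by default" (a sketch; the representation of
# truth-makers as (predicate, object) pairs is illustrative).

domain = {"a", "b", "c"}
# The only truth-makers there are: facts making atomic statements true.
truth_makers = {("F", "a"), ("F", "b"), ("F", "c"), ("G", "a")}

def atomic(pred, obj):
    # An atomic statement is true iff a truth-maker for it exists.
    return (pred, obj) in truth_makers

def neg(value):
    # Negation is true "by default": no truth-maker of its own needed.
    return not value

def exists(pred_fn):
    return any(pred_fn(x) for x in domain)

def forall(pred_fn):
    # AxFx is evaluated as ~Ex~Fx: true by default when no
    # false-maker (counter-instance) exists.
    return neg(exists(lambda x: neg(pred_fn(x))))

print(atomic("G", "b"))                  # False: no truth-maker exists
print(neg(atomic("G", "b")))             # True by default
print(forall(lambda x: atomic("F", x)))  # True: no counter-instance
print(forall(lambda x: atomic("G", x)))  # False: b and c are false-makers
```

Note that nothing in the model makes negations or generalisations true by the *existence* of anything further: their truth-values fall out of the truth-tables once the atomic facts are settled.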
Further challenges for the optimalist or non-maximalist are discussed
in Jago 2012, Simpson 2014, Jago 2018 (81-102).
### 2.3 Truth Supervenes Upon Being
Optimalism retains the original demand for truth-makers but restricts
it to atomic statements. Optimalism accordingly disavows a commitment
to truth-makers for negative statements and statements of generality.
But Bigelow--also wary of the commitments that maximalism
engenders to negative and totality facts--weakens what
truth-making means to a point where negative and general statements
don't require bespoke truth-makers of their own (1988:
131-3). He offers the following principle to capture the kernel
of truth in truth-making worth saving.
(*Truth Supervenes on Being*)
If something is true then it would not be possible for it to be
false unless either certain things were to exist which don't, or
else certain things had not existed which do (1988: 133).
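One way the principle might be regimented (a possible formalization, not Bigelow's own notation, with \(T_{@}\)/\(T_{w}\) for truth at the actual world and at a world \(w\), and \(E_{@}\)/\(E_{w}\) for existence there):

```latex
% If p is actually true, any world w at which p is false differs
% from actuality in what exists.
T_{@}(p) \rightarrow \forall w\,\bigl[\neg T_{w}(p) \rightarrow
  \bigl(\exists x\,(E_{w}x \wedge \neg E_{@}x)
  \vee \exists x\,(E_{@}x \wedge \neg E_{w}x)\bigr)\bigr]
```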
This principle allows atomic truths to have truth-makers--because
it would only be possible for them to be false if certain things had
not existed: their truth-makers. But it also allows negative truths to
be true without them. The statement that there are no dragons gets to
be true because it would only be possible for that statement to be
false if something which hadn't existed (dragons) did exist.
Such statements are true not because they have truth-makers but
because they have no counterexamples, "they lack
false-makers" (Lewis 1992: 216, 2001: 610).
General truths are also true for lack of false-makers. The statement
that everything is physical (if true) is true because it would only be
possible for the statement to be false if something that hadn't
existed did, *viz*. something that wasn't physical. So
when truth-making is understood in this weakened sense there is no
need to acknowledge, e.g., an additional totality fact to make it true
that there are only five coins in my pocket; it's enough that if
I hadn't stopped adding coins that statement would have been
false because then there would have been at least one other coin in my
pocket (Heil 2000: 236-240, 2003: 68-72, 2006:
238-40; Bricker 2006).
Despite their differences, optimalism and (*Truth Supervenes Upon
Being*) share a key idea in common--that a negative
existential truth isn't true because something exists, but
because something doesn't, *viz*. a truth-maker for its
contradictory, a false-maker for it. C.B. Martin has objected that
this doesn't obviate the need to posit truth-makers for negative
existentials. This is because a statement that there are no
false-makers for a negative existential truth is itself a negative
existential truth. So this statement "can't be used to
explain or show how the latter needs no truth-making state of the
world for it to be true" (1996: 61). Lewis takes Martin's
objection to be that this account of the truth of negative truths
isn't informative. Take the statement that there are no
unicorns. Why is it true? Well because there are no
unicorns. That's not much of an explanation! But, Lewis retorts,
the positive existential statement that there is a cat is true because
there is a cat. That's "No explanation at all, and none
the worse for that" (Lewis 2001: 611-12). Fair enough; but
perhaps Martin meant that optimalists and their fellow-travellers were
presupposing what they set out to show--because they assumed
without argument that a special class of negative claims stood in no
need of truth-makers, those according to which other negative
statements lacked false-makers.
### 2.4 Humean Scepticism
Lewis has anyway argued that the doctrine that truth supervenes upon
being--and, by implication, optimalism--are an uncomfortable
halfway house. He began by trying to persuade us that the retreat from
maximalism was already mandated. He pointed out how deeply
counterintuitive it is to suppose that negative existentials are true
because their truth-makers exist. It seems, offhand, that they are
true not because some things do exist but because some don't.
"Why defy this first impression?" (1992: 204). But, more
importantly, Lewis pointed out that maximalism was incompatible with
what he took to be the master principle governing our thought about
modality: Hume's denial of necessary connection between distinct
existences. It is a consequence of this principle that anything can
co-exist with anything else that's distinct. But it is the
*raison d'etre* of any truth-maker for the negative
existential truth that there are no unicorns--if there is one and
whatever else it is like--that even though it is distinct from
all unicorns it cannot co-exist with any of them, else it would make
it true that there are no unicorns even in circumstances where
unicorns existed. Similarly it is the *raison
d'etre* of any truth-maker for a general truth that
such-and-such are all the facts there are, that it refuses to co-exist
with any other facts even though it is distinct from them. So if
we're to hang onto Hume's denial of necessary connexions,
we'd better give up the demand for truth-makers for negative and
general truths, i.e., maximalism (Lewis 1992: 205, 2001:
610-11).
Lewis also argues that we can't stop here--just giving up
maximalism for optimalism--because even the truth-makers for
atomic statements conflict with Humeanism. Suppose that the statement
that Dog Harry is golden is atomic. Also suppose that Harry is only
accidentally golden, so the statement is contingent. What is its
truth-maker? It can't be Harry because it's possible for
him to exist even in circumstances where the statement is false, i.e.,
when he has a different coloured coat. But it can't be the
property of *being golden* either since there are possible
circumstances in which plenty of things are golden but not Harry. It
can't even be the mereological fusion Harry + *being
golden* because it's a feature of fusions that they exist
even in circumstances where their parts are otherwise
unrelated--e.g., when Harry is black whilst Harriet is golden.
For this reason, many maximalists make common cause with optimalists
to posit another fundamental kind of thing to perform the truth-making
role in the case of contingent (atomic) predications: facts that
consist in objects, properties and relations bound together; in this
case the fact *Harry's being golden* (Armstrong 1989a:
41-2, 1997: 115-6; Mellor 1995: 24). Now facts in general
and *Harry's being golden* in particular cannot be built
up mereologically from Harry and *being golden*; otherwise
*Harry's being golden* would be no more fit a candidate
for making the statement that Harry is golden true than the fusion
Harry + *being golden*. Since *Harry's being
golden* isn't built up mereologically from Harry and
*being golden*, they cannot be parts of it. But if they
aren't parts of it they must be entirely distinct from this
state of affairs. This is where Lewis pounced. Even though
*Harry's being golden* is entirely distinct from its
"constituents" it cannot obtain without Harry and
*being golden* existing. More generally, the obtaining of a
fact necessitates the existence of its constituents even though the
fact and its constituents are entirely distinct. So we cannot posit
facts as truth-makers for contingent (atomic) statements without going
against Humeanism (Lewis 1992: 200, 1998: 215-6, 2001: 611).
Other maximalists and optimalists, often those wary of facts, posit
(non-transferrable) tropes as truth-makers for contingent predications
(Martin 1980; Mulligan, Simons, & Smith 1984: 295-304; Lowe
2006: 186-7, 204-5). Tropes, in the non-transferrable
sense, are particular instances of properties that are existentially
dependent upon their bearers. For example, the particular golden
colour *g* of Harry's coat is a non-transferrable trope
because *g* could not have existed except as the colour of
*his* coat. Since *g* is non-transferrable, this trope
only exists in circumstances where Harry's coat is golden; hence
*g*'s eligibility to be a truth-maker for the statement
that Harry is golden. (If *g* was transferrable, then
*g* could have existed in circumstances where, whilst *g* was borne by
Harriet, Harry bore a distinct black trope *d*; so if *g*
is transferrable it isn't eligible to be a truth-maker for the
statement that Harry is golden.) But Lewis' reasoning can be
easily extended to show that non-transferrable tropes cannot serve as
the truth-makers for contingent statements without contradicting
Humeanism. If trope *g* is a property that Harry bears rather
than a part of him, then *g* is wholly distinct from Harry.
Nevertheless, the existence of *g* necessitates the existence
of Harry even though they are distinct (MacBride 2005: 121).
Alternatively if *g* is a part of the bundle of tropes that
constitute Harry, if *g* is non-transferrable, then the
existence of *g* necessitates the existence of some other
distinct tropes that are also parts of this bundle. So Humeanism is
violated either way.
#### 2.4.1 Subject Matter
In order to avoid contradicting Humeanism, Lewis recommended a further
weakening of (*Truth Supervenes upon Being*). According to
Lewis, the kernel of truth in truth-making is the idea that
propositions have a *subject matter*. They are *about*
things so whether they are true or false depends on how those things
stand. This led Lewis to endorse (2003: 25):
(*Subject matter*)
Any proposition has a subject matter, on which its truth value
supervenes.
Equivalently, *there cannot be a difference in the truth-value of a
proposition without a difference in its subject matter*. This
might consist in (1) an increase or decrease in the population of
things that fall within the subject matter; or (2) a shift in the
pattern of fundamental properties and relations those things exhibit
(Lewis 2001: 612). Now the truth value of "Harry is
golden" supervenes upon its subject matter without there needing
to be any existing thing distinct from Harry or *being golden*
that necessitates their existence; it is enough that the statement
would have been false if Harry had lost his hair or been dyed. So we
can avoid contradicting Humeanism by abandoning optimalism in favour
of (*Subject matter*).
In an intriguing twist to the plot, Lewis subsequently appeared to
withdraw his doubts about truth-makers (2003: 30, Lewis & Rosen
2003: 39). When building the case for (*Subject Matter*) Lewis
had deliberately remained neutral about the metaphysics of modality
(2001: 605). But Lewis recognised that if he gave up his neutrality
and availed himself of counterpart theory then he could supply
truth-makers aplenty whilst still adhering to his Humeanism.
According to Lewis's counterpart theory, "*a* is
essentially *F* just in case all of *a*'s
counterparts (including *a* itself) are *F*"
(2003: 27). Counterparts of *x* are objects that are similar to
*x* in certain salient respects. Different conversational
contexts make different respects salient. This means that the truth
(or falsity) of essentialist judgements is always relative to the
counterpart relation conversational cues select; so it will often be
the case that whilst something is essentially *F* with respect
to one counterpart relation it won't be with respect to another
(Lewis 1968). In a conversation that makes it salient to think about
Harry under the counterpart relation of *being a golden
Labrador* then every relevant counterpart of him will be golden.
So relative to this counterpart relation he is essentially golden. But
in a different conversation that selects the counterpart relation of
*being a dog*, whilst all of his counterparts will
be dogs, Harry won't be essentially golden because some of his
counterparts will be other colours. What's important is that
it's one and the same thing, our beloved Harry, that's
essentially golden with respect to the former counterpart relation but
only accidentally so with respect to the latter. To make these
essentialist judgements count as true in the right contexts we
don't need one dog that's essentially golden, and another
that's only accidentally so; we just need to think about Harry
in two different (but compatible) ways.
According to (*Necessitarian-T*) and other related views, the
property a truth-maker *x* has of making a given proposition
*P* true is an essential feature of it. Why so? Because
*x* is supposed to be something the mere existence of which
suffices for the truth of *P*; it couldn't have existed
without *P* being true (Armstrong 1997: 115). But Lewis'
proposal is to treat essentialist attributions about truth-making in
the same relativistic spirit that counterpart theory treats every
other modal attribution (2003: 27-32). Think of Harry
*qua* golden Labrador, i.e., under the counterpart relation of
being a golden Labrador. All his counterparts selected by this
relation are golden. So every world in which Harry or one of his
counterparts exists is a world in which "Harry is golden"
is true. In other words, Harry *qua* golden is truth-maker for
the proposition that Harry is golden. But this doesn't commit us
to necessary connexions between distinct existences--to a dog
that couldn't have failed to be golden. In another context it
will be just as legitimate to talk about Harry in a different way, as
Harry *qua* dog. In that context his counterparts will include
dogs that aren't golden. So Harry *qua* dog will be the
truth-maker for the statement that Harry is a dog, but not the
statement that he is golden.
In the same way that ordinary objects like Harry can serve as
truth-makers for contingent predications, Lewis & Rosen (2003)
suggest that the entire world--"the totality of everything
there actually is"--can serve as the truth-maker for
negative existentials. Take the world *qua* unaccompanied by
unicorns. In the conversational context just set up, the world has no
counterparts that are inhabited by unicorns. So the world *qua*
unaccompanied by unicorns is essentially lacking in unicorns and
therefore qualifies as a truth-maker for the statement that there are
no unicorns.
Has Lewis shown how it is possible to garner truth-makers for
contingent predications and negative existentials without positing
totality facts or tropes that ensnare us in a web of necessary
connections? Indeed has he shown how we can get by using only ordinary
objects and collections of them to serve as truth-makers? We need to
be clear about what Lewis is trying to do in this paper. He
isn't reporting upon a damascene conversion, belatedly
recognising what he had previously denied, that the truth-making role
is genuine, but then ingeniously coming up with the idea that
*qua*-versions of things perform this role just as effectively
as states of affairs, only without necessary connections between
distinct existences. Rather, Lewis' aim in this paper is to damn
the very idea of truth-makers with faint praise. By showing how
*qua* versions of things performed the truth-making role just
as effectively as facts or tropes Lewis aimed to show how
explanatorily bankrupt the truth-making role truly was (MacBride 2005:
134-6, Bricker 2015: 180-3).
There are various criticisms of detail that might be made. MacBride
(2005: 130-2) points out that Lewis failed to provide any
account of the truthmakers for relational predications and argues that
it is very difficult to see how the lacuna can be filled when the
class of eligible truthmakers is restricted by Lewis to the class of
ordinary things and only intrinsic counterpart relations can be
evoked. Lewis considers a relaxation of the latter requirement when it
came to negative existentials (*qua unaccompanied by unicorns*
does not express an intrinsic counterpart relation). But MacBride
argues that removing the latter requirement risks trivialising
Lewis's account: for any true *p* and any object
*a*, *a* satisfies the description *F*:
"inhabiting a world where *p* is true", and hence
*a qua F* makes it true that *p*.
Bricker (2015: 179-80) argues that Lewis can provide for
relational predications by allowing sequences to be truthmakers, i.e.
not just ordinary things as Lewis required. But Bricker finds this
solution ultimately unattractive because it conflicts with another
truthmaking principle he finds plausible: that distinct atomic truths
have distinct truthmakers.
One might also object to Lewis' controversial modal metaphysical
assumptions of counterpart theory more generally (Rodriguez-Pereyra
2006b: 193; Dodd 2007: 385). More broadly, one might question whether
Lewis was right to place so much weight upon Hume's denial of
necessary connections between distinct existences or question whether
Lewis had interpreted it correctly (Daly 2000: 97-8; MacBride
2005: 123-7; Hoffman 2006; Cameron 2008b). It's one thing
to say that we shouldn't multiply necessities without necessity
but it's another thing to demand that we purge our world-view of
them altogether. What about dispositions, chances, laws *etc*?
Hume relied upon an empiricist theory of content to underpin his
rejection of necessary connections. But Lewis certainly didn't
advocate such a view. One is left wondering, therefore, whether
Hume's denial of necessary connexions should be abandoned, along
with the theory of content that inspired it, as an antiquated relic. If
so, then we have yet to be given a reason to retreat from either
optimalism or (*Truth Supervenes Upon Being*) to (*Subject
Matter*) (MacBride 2005: 126-7).
## 3. What Motivates the Doctrine of Truth-makers?
### 3.1 To catch a cheater
What is the motivation for adopting a theory of truth-makers, whether
maximalist or optimalist? Following Sider, it has become customary to
describe the "whole point" of adopting a theory of
truth-makers thus: to "Catch the cheaters" who don't
believe in truth-makers (Sider 2001: 40-1; see also Merricks
2007: 2-3). But this is a bit like saying that the point of a
benefit system is to catch out benefit frauds. It's only if
there is some antecedent point in favour of truth-makers that there
can be anything wrong with leaving them out. Nonetheless, it is
possible that we can come to appreciate the need for truth-makers by
appreciating what is lacking in theories that neglect them. This is
how C.B. Martin and Armstrong came to recognise the necessity for
admitting truth-makers (Armstrong 1989b: 8-11, 2004: 1-3).
Consider phenomenalism: the view that the physical world is a
construction out of sense-impressions. One obstacle for this view is
the need to make sense in sense-impression terms of true claims we
make about the unobserved world. Martin noticed that the phenomenalist
could only appeal to brute, ungrounded counterfactuals about possible
experience to do so. But counterfactuals do not float free in a void.
They must be responsive to an underlying reality, the reality that
makes them true. For Martin this was the phenomenalists' fatal
error: their inability to supply credible truth-makers for truths
about unobserved objects. The same error afflicted Ryle's
behaviourism with its brute counterfactuals about non-existent
behaviour and Prior's presentism according to which there is
nothing outside the present and so there is nothing for past-tensed
and future-tensed truths to be held responsible to. We come to
appreciate the need for truth-makers as the common need these
different theories fail to fulfil.
As Lewis points out, this demand for truth-makers represented an
overreaction to what was undoubtedly a flaw in phenomenalism (1992:
217). We can already understand what's wrong with phenomenalism
when we appreciate that it fails to provide an account of the things
upon which the truth of counterfactuals about sense impressions
supervenes, i.e., when we see that it fails to satisfy (*Subject
Matter*). This leaves under-motivated the claim that phenomenalism
should also be faulted for failing to supply truth-makers to ground
counterfactuals about sense-impressions. The same reasoning applies to
behaviourism and presentism; they also fail to satisfy (*Subject
Matter*). So there's no need to adopt truth makers to catch
cheats. All we need to do is to recognise the strictures (*Subject
Matter*) places upon us.
It's a further flaw of Martin and Armstrong's reasoning
that even if we accede to the demand for truth-makers it doesn't
follow that even idealism is ruled out. According to Armstrong if we
forsake the demand for truth-makers we thereby challenge "the
realistic insight that there is a world that exists independently of
our thoughts and statements, making the latter true or false"
(1997: 128). But even an idealist could accept that there are
truth-makers whilst thinking of them as mind-dependent entities. So
acceding to the demand for truth-makers doesn't tell us
what's wrong with idealism (Daly 2005: 95-7). Bergmann,
the grandfather of the contemporary truth-maker movement, was explicit
about this: "the truth of *S* must be grounded
ontologically. On this first move idealists and realists agree"
(Bergmann 1961: 229). The realist-idealist distinction cuts across the
distinction between philosophers who admit truth-makers and those that
don't (Dodd 2002: 83-4).
The demand for truth-makers doesn't help "catch
cheaters" at all, for a cheater can simply acknowledge, e.g., some brute
counterfactual facts about sense data, or brute dispositions to behave
in one way or another, or brute facts about what happened in the past
or will happen in the future. What's unsatisfactory about these
posits isn't that if they existed they'd fail to make true
statements about unobserved objects or statements about mental states
that failed to manifest themselves in actual behaviour or statements
about the past or future. The problem is that we have difficulty in
understanding how such facts or dispositions could be explanatorily
sound (Horwich 2009: 195-7). Whatever we find lacking in
theories that posit such items, it isn't that they fail to
provide truth-makers, because they do. Of course if we are already
committed to the need for truth makers then we will likely conceive of
the demand for them as, e.g., drawing the phenomenalist out into the
open to reveal the unfitness of their explanatory posits. But unless
we already have independent reasons for recognising the demand for
truth makers, catching cheaters cannot provide a motivation for
positing them.
### 3.2 Truth Depends on Being
Does this mean that we should join Bigelow in his retreat to
(*Truth Supervenes Upon Being*) or step back with Lewis to
(*Subject Matter*)? The problem is that these supervenience
principles don't seem a satisfactory resting place either, not
if our concern is to understand how true representations can touch
upon an independent reality, i.e., something non-representational.
First, it is a familiar point that just appealing to a systematic
pattern of modal co-variation between truths and their subject
matters--so that there is no difference in the former without a
difference in the latter--doesn't provide us with any
insight into the underlying mechanism or mechanisms that sustain this
dependency (Molnar 2000: 82-3; Heil 2003: 67; Daly 2005:
97-8; Melia 2005: 82-3). Second, if possible worlds are
conceived as maximal representations then these supervenience
principles (understood in the idiom of possible worlds) fail to
articulate what it is for a representation to touch upon being,
something non-representational: "For the idea that every truth
*depends on being* is not the idea that every truth is
*entailed by propositions of a certain sort*" (Merricks
2007: 86).
A third reason for being unsatisfied with Lewis' (*Subject
Matter*) is based upon the observation that the supervenience
involved is symmetric. Just as there is no difference in the
distribution of truth-values amongst the body of propositions without
a reciprocating difference in their subject matter, there is no
difference in their subject matter without a difference in the
distribution of their truth-values (Armstrong 2004: 7-8). But we
also have the firm intuition that the truth or falsity of a
proposition depends upon the state of the world in a way in which the
state of the world doesn't depend upon the proposition's
truth or falsity. This means that a supervenience principle like
(*Subject Matter*) cannot be used to articulate the asymmetric
way in which truth so depends upon being; for this, it is argued, we
need to rely upon a robust asymmetric notion of truth-making
(Rodriguez-Pereyra 2005: 18-19). Bricker (2015: 180-183)
replies on Lewis's behalf that the oft-heard complaint that
supervenience by itself is not a dependence or ontological priority
relation is beside the point. Rather, Bricker argues, the
metaphysical punch of Lewis's doctrine that truth supervenes
upon being "has little to do with the 'supervenience
part' and everything to do with the 'being' part and
the conception of fundamentality that informs it", where by
'fundamental' Bricker means Lewis's theory of
natural properties and relations.
But is there really anything so problematic or mysterious about the
asymmetric dependence of truth upon being that we need anything as
heavy duty as truth-making or supervenience+naturalness to explain it?
A number of proposals have been made for explaining the felt asymmetry
in other terms. Horwich (1998: 105; 2008: 265-66) argues that the
asymmetric dependence of truth upon being arises from the fact that we
can deduce the proposition that *p is true* from *p*. But we
can equally deduce *p* from *p is true*, so this fails to
establish an asymmetric dependence (Künne 2003: 151-52;
Rodriguez-Pereyra 2005: 27).
Hornsby argues instead that there is explanatory asymmetry between its
being the case that *p* and the proposition that *p is
true*, because the latter requires more to be the case than the
former, e.g., that there are propositions (Hornsby 2005: 44). Dodd
suggests a cognate alternative: that the felt asymmetry of truth upon
being arises from the fact that the identity of the proposition that
*p* is partially determined by the worldly items it's about, so we
can't understand what is required for the proposition that *p*
to be true without already having an understanding of what it is for
*p* to be the case (Dodd 2007: 398-400).
MacBride (2014) offers a purely semantic account of the felt asymmetry
in terms of the interlocking mechanism of reference and satisfaction
that explains the truth of a proposition. Whilst we can deduce from
the truth of the proposition that *p* that *p* is the case,
MacBride argues that we can't (ultimately) explain its being the
case that *p* in terms of the truth of the proposition that
*p* because its being the case that *p* is already assumed
by the truth of that proposition, that is, has already performed a
role in the semantic mechanism whereby the truth-value is determined.
By contrast, MacBride continues, its being the case that *p*
isn't determined by a semantic mechanism but by whatever worldly
factors, if any, that gave rise to *p*'s being the case. Saenz
(2018) replies that whilst reference and satisfaction may be used to
explain that propositions are true, they still "do not, in any
sense, account for the world 'making it' [*sic*] that
propositions are some way" (a similar point is made by Mulligan
et al 1984: 288, see sec. 3.5 below for further discussion). But Saenz
risks begging the question by making the additional demand on the
character of the felt asymmetry of truth and being that it consists in
more than an asymmetric relationship between truth and being. Saenz
holds that the felt asymmetry consists in the requirement that the
world makes propositions true but whether we need to appeal to the
idea of the world making propositions true in the first place is the
point at issue. Finally, Liggins (2012) argues that the felt asymmetry
between truth and being is best explained in terms of the asymmetry of
grounding, although, as we have seen, the asymmetry of grounding is
contested (see sec. 1.6 above for further discussion).
### 3.3 Correspondence
"Catching cheats" doesn't supply a motivation for
adopting truth-makers. Are there other motivations for so doing? One
such motivation is that truth-making falls out naturally from the
correspondence theory of truth.
>
>
> Anybody who is attracted to the correspondence theory of truth should
> be drawn to the truth-maker [*sic*]. Correspondence demands a
> correspondent, and a correspondent for a truth is a truth-maker.
> (Armstrong 1997: 14; see also 1997: 128-9, 2004: 16-7)
>
>
>
>
According to Armstrong, the "truth-maker principle" (aka
maximalism) just is what we are left with once we drop the assumption
from the correspondence theory that the relation between truth bearers
and truth-makers is one-one. Of course, the truth-making relation,
however it is explicated, cannot be the relation of correspondence in
terms of which *truth* is defined--for whereas the latter
is symmetric, the former isn't (Beebee & Dodd 2005b:
13-4). But this doesn't prevent truth being defined using
the *truth-making* relation, albeit in a more roundabout way
according to the following pattern,
(C)
*S* is true = (∃*x*)(*S* is-made-true-by *x*).
Of course such a definition will only succeed if the relation
expressed by "is-made-true-by" can itself be explicated
without relying upon the notion of *truth* itself or other
notions that implicitly presuppose it, e.g., entailment (Merricks
2007: 15). This is no doubt one of the reasons that principles of
truth-making are typically not put forward as definitions of truth
(David 2009: 144). Another objection made by Horwich is that it is
implausible to suppose that an ordinary person understands the notion
of truth *via* the heterogeneous principles that govern the
provision of truth-makers for different types of propositions.
It's more plausible to suppose that we first grasp what truth
is, and only subsequently figure out what truth-makers are required
for the various kinds of propositions there are (Horwich 2009:
188).
### 3.4 Deflationism
Several philosophers have argued that so far from presupposing the
notion of *truth*, the various truth-making principles we have
discussed operate at a far more subterranean level. Bigelow writes,
"The force behind Truth-maker lies deeper than worries about the
nature of truth and of truth-bearers" (Bigelow 1988: 127, 2009:
396-7; Robinson 2000: 152; Lewis 2001: 605-6; Horwich
2009: 188-9, Bricker 2015: 169). Suppose we understand what it
is to be a truth-maker in terms of entailment. Then whatever the range
of truths we think are capable of being rendered true, the
"guts" of our truth-maker principle can be stated using
the schema,
(*Schema*)
If *P*, then there must be something in the world whose
existence entails that *P*.
This schema is equivalent to the infinitely many conditionals that
fall under it:
(*i*1)
If the donkey is brown, then there must be something in the world
whose existence entails that the donkey is brown;
(*i*2)
if the Parthenon is on the Acropolis then there must be something
in the world whose existence entails that the Parthenon is on the
Acropolis; and so on.
Note that neither "true" nor "truth" shows up
anywhere in the schema or its instances. Arguably the only role that
the notion of *truth* performs is the one that
minimalist or deflationary theories of truth emphasize, that of
enabling us to capture all of that unending string of conditionals in
a single slogan,
(*Slogan*)
For *any truth*, there must be something in the world whose
existence entails *that truth*.
But since truth is serving here *only* as a device of
generalization, the real subject matter of (*Slogan*) is
already expressed by the conditionals (*i*1), (*i*2)
*etc*. Since they make no explicit mention of either
propositions or truth, a theory of truth-makers is neither a theory
about propositions nor a theory about truth. If so, then the theory of
truth-makers can neither gain inspiration from, nor be tarred by the
same brush as the correspondence theory of truth. Nor need the theory
of truth-makers be bedeviled by concerns about the nature of truth
bearers.
One may nevertheless wonder whether Bigelow and Lewis have thrown the
baby out with the bathwater. MacBride (2013a) argues that if we do not
allow ourselves the general thought that truth-bearers are true or
false depending upon how things stand in the world then it is unclear
what, if any, credibility (*i*1), (*i*2) etc. have. For
why should there be something in the world whose existence
necessitates that (e.g.) cats purr? Why can't cats purr, even though
there is nothing whose existence entails that truth? Since
*Slogan* is intended to be just shorthand for its instances,
(*i*1), (*i*2) etc., it follows that *Slogan* can't
be any more credible or motivated than its instances. If, however, we
allow ourselves a conception of truth which isn't deflationary, i.e.
a substantial conception whereby truth is conceived as a relation
borne by a truth-bearer to something worldly that exists independently
of us, then, MacBride argues, we will have an independent reason for
positing something worldly the existence of which necessitates the
truth that cats purr etc. (see also Armstrong 1997: 128). For further
discussion of deflationism in connection with truth-making see McGrath
2003, Vision 2005 and Thomas 2011.
### 3.5 Truth-making and Conceptual Explanation
Other philosophers appeal to the idea that when we say that a
statement is true *because* a truth-maker for it exists, the
"because" we employ is a connective (Künne 2003:
150-6; Hornsby 2005: 35-7; Melia 2005: 78-9;
Schnieder 2006a: 29-37; Mulligan 2007). They think of this
connective on the model of logical operators ("&",
"∨" etc.), i.e., expressions that link sentences but
without expressing a relation that holds between the states of
affairs, facts or tropes that these sentences denote; but unlike
"&", "∨" etc., "because"
is not truth-functional. According to these philosophers, the
truth-maker panegyrists have misconstrued the logical form of
"makes true". They have taken it to be a verb like
"*x* hits *y*" when really it is akin to the
connective "→" or "because" (Melia 2005:
78). There's no need to introduce truth-makers as the special
things that stand at one end of the *truth-making* relation
with true statements at the other, no need to because it's only
superficial features of the grammar of our language that suggest there
is a truth-making relation to stand at the ends of.
Typically philosophers who maintain that "makes true" is a
connective do not argue for this conclusion directly but encourage us
to dwell upon the fact that it natural to hear the
"because" that occurs in equivalent constructions as a
connective. When we hear, e.g., "It is true that the rose is red
because the rose is red" we don't naturally hear it as
expressing a binary relation between two things. So we shouldn't
hear, or at least don't have to hear, "It's true
that the rose is red in virtue of the rose's being red" or
"the rose's being red makes it true that the rose is
red" as expressing a binary relation between a truth and a
truth-maker either.
Künne has gone further and suggested that the truth-making
connective is really the "because" of conceptual
explanation (2003: 154-5; see also Schnieder 2006a: 36-7).
He encourages us to hear the equivalence between,
(*R*)
He is a child of a sibling of one of your parents, which makes
him your first cousin
and,
(*R\**)
He is your first cousin because he is a child of a sibling of one
of your parents.
He tells us that the "because" in (*R\**) is the
"because" of "conceptual explanation". Since
(*R*) and (*R\**) are equivalent, he concludes that the
"makes" construction in (*R*) is just a cumbersome
way of expressing the "because" of conceptual explanation
in (*R\**). Künne then invites us to take the use of
"makes" in (*R*) as a model for understanding the
truth-making construction,
(*S*)
The fact that snow is white makes the statement that snow is
white true.
This enables us to see that (*S*) does "not affirm a
relation of any kind between a truth vehicle and something in the
world" (Künne 2003: 155). Why? Because it is equivalent
to,
(*S\**)
The statement that snow is white is true because snow is
white.
a claim that makes no (explicit) mention of facts. Moreover, the
second clause of (*S\**), like the second clause of
(*R\**), tells us when it is correct to affirm its first clause.
In both cases we have an explanation that draws respectively upon our
understanding of the concepts *cousin* and *truth.*
Drawing upon these materials, we are then able to construct an
explanation of the felt asymmetry whereby the truth of what we say
about the world depends upon what the world is like, but not the other
way around. What is written on the left-hand-side of (*S\**) is
conceptually more sophisticated than what is written upon the
right-hand-side because the former requires us to be able to grasp the
concepts of *statement* and *truth* to understand it
whereas the latter does not. Accordingly the latter requires less of
both the world and us than the former to be both understood and true.
That's why it makes sense to deploy the right-hand-side (a claim
about reality) to explain the left-hand-side (a claim about a truth
bearer) but not the other way around (Hornsby 2005: 44; Dodd 2007:
399-400). It may still be that this is just wishful thinking,
until it has been established that the "because" of truth
making is a connective rather than a relational expression--i.e.,
not just because it would be technically convenient for us to believe
it to be so or because it sounds like a connective to someone with the
ears of a naive grammarian. As we saw in section 1.6, some advocates
of truth-making as grounding also consider "because" to be
a connective, although there is much disagreement about its
implications. Some early exponents of the connective view of
truth-making have embraced grounding (e.g. Schnieder), while others
have not (e.g. Hornsby and Dodd).
### 3.6 Truth Making and Quantification
A final supporting suggestion: that the grand truth-maker projects of
the late twentieth century arose (partly) out of a failure to
appreciate the logical variety of natural language quantifiers that we
unreflectively employ when expressing the intuitions that speak in
favour of truth-makers.
Consider how Armstrong expressed himself when he started out:
>
>
> It seems obvious that for every true contingent proposition there must
> be something in the world (in the largest sense of
> "something") which makes the proposition true. For
> consider any true contingent proposition and imagine that it is false.
> We must automatically imagine some difference in the world. (Armstrong
> 1973: 11)
>
>
>
Bigelow asks us to compare two worlds, one in which *A* is
true, the other in which it is false. According to Bigelow,
>
>
> There must surely be some difference between these two possible
> worlds! They are clearly different in that *A* is true in one
> but not the other. So there must be something in one of these worlds
> which is lacking in the other and which accounts for the difference in
> truth. (1988: 126)
>
>
>
Armstrong and Bigelow make the same assumption about the
"some" they employ in their expression of what makes the
difference between the world in which *A* is true and the world
in which *A* is false; they assume it is an objectual
quantifier in name position.
With this assumption in place it is an easy move to think that
there must be some thing that we quantify over that
*constitutes* the difference between these circumstances,
*viz.* the truth-maker for *A*. But as Williamson
remarks "We should not assume that all quantification is either
objectual or substitutional" (1999: 262-3; *cf*.
Prior 1971: 31-4). Williamson argues that the truth-maker
principle in fact
>
>
> involves irreducible non-substitutional quantification into sentence
> position... We should not even assume that all non-substitutional
> quantification is interpreted in terms of assignments of values to the
> variables. For "value" is a noun, not a sentence. (1999:
> 263)
>
>
>
But suppose the "some" Armstrong and Bigelow employ is
understood not as an objectual quantifier that comes equipped with a
domain of entities over which it ranges but is understood in some
other way. Then it would be open to us to acknowledge the force of
Armstrong and Bigelow's intuition that the circumstances in
which *A* is true are somehow different from those in which it
isn't, but without our having to think that there exists
something that we've quantified over that makes it so. Evidently
it will not be until we have arrived at a settled view of the
admissible interpretations our quantifiers may bear that this issue
can be settled in a principled rather than dogmatic manner.
## 1. Truth values as objects and referents of sentences
### 1.1 Functional analysis of language and truth values
The approach to language analysis developed by Frege rests essentially
on the idea of a strict discrimination between two main kinds of
expressions: proper names (singular terms) and functional
expressions. Proper names designate (signify, denote, or refer to)
singular objects, and functional expressions designate (signify,
denote, or refer to) functions. [Note: In the literature, the
expressions 'designation', 'signification',
'denotation', and 'reference' are usually
taken to be synonymous. This practice is used throughout the present
entry.] The name 'Ukraine', for example, refers to a
certain country, and the expression 'the capital of'
denotes a one-place function from countries to cities, in particular,
a function that maps Ukraine to Kyiv (Kiev). Whereas names are
"saturated" (complete) expressions, functional expressions
are "unsaturated" (incomplete) and may be saturated by
applying them to names, producing in this way new names. Similarly,
the objects to which singular terms refer are saturated and the
functions denoted by functional expressions are unsaturated. Names to
which a functional expression can be applied are called
the *arguments* of this functional expression, and entities to
which a function can be applied are called the *arguments* of
this function. The object which serves as the reference for the name
generated by an application of a functional expression to its
arguments is called the *value* of the function for these
arguments. Particularly, the above mentioned functional expression
'the capital of' remains incomplete until applied to some
name. An application of the function denoted by 'the capital
of' to Ukraine (as an argument) returns Kyiv as the object
denoted by the compound expression 'the capital of
Ukraine' which, according to Frege, is a proper name of
Kyiv. Note that Frege distinguishes between an \(n\)-place function
\(f\) as an unsaturated entity that can be completed by and applied to
arguments \(a\_1\),..., \(a\_n\) and its *course of values*,
which can be seen as the set-theoretic representation of this
function: the set
\[\{\langle a\_1, \ldots, a\_n, a\rangle \mid a =
f(a\_1,\ldots , a\_n)\}.\]
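The contrast between a function and its course of values can be rendered in a short sketch. The country-to-capital example is the entry's own; the Python modelling, and the toy two-country mapping it uses, are merely illustrative:

```python
# Frege's "unsaturated" function, modelled as a Python function: it
# yields a value only once it is applied to (saturated by) an argument.
# The two-country mapping is a toy stand-in, not part of Frege's text.
CAPITALS = {"Ukraine": "Kyiv", "France": "Paris"}

def capital_of(country):
    """The function denoted by 'the capital of'."""
    return CAPITALS[country]

# Its *course of values*: the saturated, set-theoretic object
# {<a, f(a)> | a an argument of f} that represents the function
# without being identical to it.
course_of_values = {(c, capital_of(c)) for c in CAPITALS}

print(capital_of("Ukraine"))        # applying the function saturates it
print(sorted(course_of_values))
```

The point of the sketch is only that the function itself (something one applies) and its course of values (a completed set of argument-value pairs) are different kinds of item, as the preceding paragraph distinguishes them.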
Pursuing this kind of analysis, one is very quickly confronted with
two intricate problems. *First*, how should one treat
declarative *sentences*? Should one perhaps separate them into
a specific linguistic category distinct from the ones of names and
function symbols? And *second*, how--from a functional
point of view--should one deal with *predicate
expressions* such as 'is a city', 'is
tall', 'runs', 'is bigger than',
'loves', etc., which are used to denote classes of
objects, properties of objects, or relations between them and which
can be combined with (applied to) singular terms to obtain sentences?
If one considers predicates to be a kind of functional expressions,
what sort of names are generated by applying predicates to their
arguments, and what can serve as the referents of these names and,
correspondingly, as the values of these functions?
A uniform solution of both problems is obtained by introducing the
notion of a *truth value*. Namely, by applying the criterion of
"saturatedness" Frege provides a negative answer to the
first of the above problems. Since sentences are a kind of complete
entities, they should be treated as a sort of proper names, but names
destined to denote some specific objects, namely the truth values:
*the True* and *the False*. In this way one also obtains
a solution of the second problem. Predicates are to be interpreted as
some kind of functional expressions, which being applied to these or
those names generate sentences referring to one of the two truth
values. For example, if the predicate 'is a city' is
applied to the name 'Kyiv', one gets the sentence
'Kyiv is a city', which designates *the True*
(i.e., 'Kyiv is a city' *is true*). On the other
hand, by using the name 'Mount Everest', one obtains the
sentence 'Mount Everest is a city' which clearly
designates *the False*, since 'Mount Everest is a
city' *is false*.
Functions whose values are truth values are called *propositional
functions*. Frege also referred to them as concepts
(*Begriffe*). A typical kind of such functions (besides the
ones denoted by predicates) are the functions denoted by propositional
connectives. Negation, for example, can be interpreted as a unary
function converting *the True* into *the False* and
*vice versa*, and conjunction is a binary function that returns
*the True* as a value when both its argument positions are
filled in by *the True*, etc. Propositional functions mapping
\(n\)-tuples of truth values into truth values are also called
*truth-value functions*.
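The truth-value functions just described can be rendered directly as functions on the two truth values (a minimal sketch; the names `THE_TRUE` and `THE_FALSE` are of course this sketch's own):

```python
# Frege's truth-value functions: negation as a unary function and
# conjunction as a binary function on the two truth values.

THE_TRUE, THE_FALSE = True, False

def negation(v):
    """Converts the True into the False and vice versa."""
    return THE_FALSE if v is THE_TRUE else THE_TRUE

def conjunction(v, w):
    """Returns the True exactly when both argument positions are
    filled in by the True."""
    return THE_TRUE if (v is THE_TRUE and w is THE_TRUE) else THE_FALSE

print(negation(THE_TRUE))                # False
print(conjunction(THE_TRUE, THE_FALSE))  # False
```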
Frege thus in a first step extended the familiar notion of a numerical
function to functions on singular objects in general and, moreover,
introduced a new kind of singular objects that can serve as arguments
and values of functions on singular objects, the truth values. In a
further step, he considered propositional functions taking functions
as their arguments. The quantifier phrase 'every city',
for example, can be applied to the predicate 'is a
capital' to produce a sentence. The argument of the
*second-order* function denoted by 'every city' is
the *first-order* propositional function on singular objects
denoted by 'is a capital'. The functional value denoted by
the sentence 'Every city is a capital' is a truth value,
*the False*.
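The hierarchy of first- and second-order functions can be sketched in the same style (the sample cities and the extension of 'is a capital' are invented for illustration):

```python
# A second-order propositional function: 'every city' takes a
# first-order predicate as its argument and returns a truth value.

cities = {"Kyiv", "Lviv", "Odesa"}
capitals = {"Kyiv"}  # hypothetical extension of 'is a capital'

def is_a_capital(x):
    """First-order propositional function on singular objects."""
    return x in capitals

def every_city(predicate):
    """Second-order function: applied to a first-order predicate,
    it yields a truth value."""
    return all(predicate(city) for city in cities)

# 'Every city is a capital' denotes the False:
print(every_city(is_a_capital))  # False
```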
Truth values thus prove to be an extremely effective instrument for a
logical and semantical analysis of
language.[1]
Moreover, Frege provides truth values (as proper referents of
sentences) not merely with a pragmatical motivation but also with a
strong theoretical justification. The idea of such justification, that
can be found in Frege 1892, employs the principle
of *substitutivity* of co-referential terms, according to which
the reference of a complex singular term must remain unchanged when
any of its sub-terms is replaced by an expression having the same
reference. This is actually just an instance of the compositionality
principle mentioned above. If sentences are treated as a kind of
singular terms which must have designations, then assuming the
principle of substitutivity one "almost inevitably" (as
Kurt Gödel (1944: 129) explains) is forced to recognize truth
values as the most suitable entities for such
designations. Accordingly, Frege asks:
>
>
> What else but the truth value could be found, that belongs quite
> generally to every sentence if the reference of its components is
> relevant, and remains unchanged by substitutions of the kind in
> question? (Geach and Black 1952: 64)
>
>
>
The idea underlying this question has been neatly reconstructed by
Alonzo Church in his *Introduction to Mathematical Logic*
(1956: 24-25) by considering the following sequence of four
sentences:
C1.
Sir Walter Scott is the author of *Waverley*.
C2.
Sir Walter Scott is the man who wrote 29 *Waverley*
Novels altogether.
C3.
The number, such that Sir Walter Scott is the man who wrote that
many *Waverley* Novels altogether is 29.
C4.
The number of counties in Utah is 29.
C1-C4 present a number of conversion steps each producing
co-referential sentences. It is claimed that C1 and C2 must have the
same designation by substitutivity, for the terms 'the author of
*Waverley*' and 'the man who wrote 29
*Waverley* Novels altogether' designate one and the same
object, namely Walter Scott. And so must C3 and C4, because the
number, such that Sir Walter Scott is the man who wrote that many
*Waverley* Novels altogether is the same as the number of
counties in Utah, namely 29. Next, Church argues, it is plausible to
suppose that C2, even if not completely synonymous with C3, is at
least close enough to it "so as to ensure its having the same
denotation". If this is indeed the case, then C1 and C4 must
have the same denotation (designation) as well. But it seems that the
only (semantically relevant) thing these sentences have in common is
that both are true. Thus, granted that there must be something that
the sentences designate, one concludes that it is just their truth value.
As Church remarks, a parallel example involving false sentences can be
constructed in the same way (by considering, e.g., 'Sir Walter
Scott is not the author of *Waverley*').
This line of reasoning is now widely known as the "slingshot
argument", a term coined by Jon Barwise and John Perry (in
Barwise and Perry 1981: 395), who thereby stressed the extraordinary
simplicity of the argument and the minimality of the presuppositions
involved. Stated generally, the pattern of the argument goes as
follows (cf. Perry 1996). One starts with a certain sentence, and then
moves, step by step, to a completely different sentence. At every
step, the two sentences involved presumably designate one and the same thing.
Hence, the starting and the concluding sentences of the argument must
have the same designation as well. But the only semantically
significant thing they have in common seems to be their truth value.
Thus, what any sentence designates is just its truth value.
A formal version of this argument, employing the term-forming,
variable-binding class abstraction (or property abstraction) operator
\(\lambda x\) ("the class of all \(x\) such that" or
"the property of being such an \(x\) that"), was first
formulated by Church (1943) in his review of Carnap's
*Introduction to Semantics*. Quine (1953), too, presents a
variant of the slingshot using class abstraction, see also (Shramko
and Wansing 2009a). Other remarkable variations of the argument are
those by Kurt Gödel (1944) and Donald Davidson (1967, 1969),
which make use of the formal apparatus of a theory of definite
descriptions dealing with the description-forming, variable-binding
iota-operator (\(\iota x\), "the \(x\) such that"). It is
worth noticing that the formal versions of the slingshot show how to
move--using steps that ultimately preserve reference--from
*any* true (false) sentence to *any* other such
sentence. In view of this result, it is hard to avoid the conclusion
that what the sentences refer to are just truth values.
The slingshot argument has been analyzed in detail by many authors
(see especially the comprehensive study by Stephen Neale (Neale 2001)
and references therein) and has caused much controversy notably on the
part of fact-theorists, i.e., adherents of facts, situations,
propositions, states of affairs, and other fact-like entities
conceived as alternative candidates for denotations of declarative
sentences. Also see the
supplement on the slingshot argument.
### 1.2 Truth as a property versus truth as an object
Truth values evidently have something to do with a general concept of
truth. Therefore it may seem rather tempting to try to incorporate
considerations on truth values into the broader context of traditional
truth-theories, such as correspondence, coherence, anti-realistic, or
pragmatist conceptions of truth. Yet, it is unlikely that such
attempts can give rise to any considerable success. Indeed, the
immense fruitfulness of Frege's introduction of truth values
into logic to a large extent is just due to its philosophical
neutrality with respect to theories of truth. It does not commit one
to any specific metaphysical doctrine of truth. In one significant
respect, however, the idea of truth values contravenes traditional
approaches to truth by bringing to the forefront the problem of its
categorial classification.
In most of the established conceptions, truth is usually treated as a
property. It is customary to talk about a "truth
predicate" and its attribution to sentences, propositions,
beliefs or the like. Such an understanding corresponds also to a
routine linguistic practice, when one operates with the adjective
'true' and asserts, e.g., 'That 5 is a prime number
is true'. By contrast with this apparently quite natural
attitude, the suggestion to interpret truth as an object may seem very
confusing, to say the least. Nevertheless this suggestion is also
equipped with a profound and strong motivation demonstrating that it
is far from being just an oddity and has to be taken seriously (cf.
Burge 1986).
First, it should be noted that the view of truth as a property is not
as natural as it appears on the face of it. Frege brought into play an
argument to the effect that characterizing a sentence as *true*
adds nothing new to its content, for 'It is true that 5 is a
prime number' says exactly the same as just '5 is a prime
number'. That is, the adjective 'true' is in a sense
*redundant* and thus is not a real predicate expressing a real
property such as the predicates 'white' or
'prime' which, on the contrary, cannot simply be
eliminated from a sentence without an essential loss for its content.
In this case a superficial grammatical analogy is misleading. This
idea gave an impetus to the deflationary conception of truth
(advocated by Ramsey, Ayer, Quine, Horwich, and others, see the entry
on
the deflationary theory of truth).
However, even admitting the redundancy of truth as a property, Frege
emphasizes its importance and indispensable role in some other
respect. Namely, truth, accompanying every act of judgment as its
ultimate goal, secures an objective *value of cognition* by
arranging for every assertive sentence a transition from the level of
sense (the thought expressed by a sentence) to the level of denotation
(its truth value). This circumstance specifies the significance of
taking truth as a particular object. As Tyler Burge explains:
>
>
> Normally, the point of using sentences, what "matters to
> us", is to claim truth for a thought. The object, in the sense
> of the point or *objective*, of sentence use was truth. It is
> illuminating therefore to see truth as an object. (Burge 1986: 120)
>
>
>
>
As has been observed repeatedly in the literature (cf., e.g., Burge
1986, Ruffino 2003), the stress Frege laid on the notion of a truth
value was, to a great extent, pragmatically motivated. Besides an
intended gain for his system of "Basic Laws" (Frege
1893/1903) reflected in enhanced technical clarity, simplicity, and
unity, Frege also sought to substantiate in this way his view on logic
as a theoretical discipline with truth as its main goal and primary
subject-matter. Incidentally, Gottfried Gabriel (1986) demonstrated
that in the latter respect Frege's ideas can be naturally linked
up with a value-theoretical tradition in German philosophy of the
second half of the 19th century; see also (Gabriel 2013) on
the relation between Frege's value-theoretically inspired
conception of truth values and his theory of judgement. More
specifically, Wilhelm Windelband, the founder and the principal
representative of the Southwest school of Neo-Kantianism, was actually
the first who employed the term "truth value"
("*Wahrheitswert*") in his essay "What is
Philosophy?" published in 1882 (see Windelband 1915: 32), i.e.,
nine years before Frege 1891, even if he was very far from treating a
truth value as a value of a function.
Windelband defined philosophy as a "critical science about
universal values". He considered philosophical statements to be
not mere judgements but rather *assessments*, dealing with some
fundamental values, *the value of truth* being one of the most
important among them. This latter value is to be studied by logic as a
special philosophical discipline. Thus, from a value-theoretical
standpoint, the main task of philosophy, taken generally, is to
establish the principles of logical, ethical and aesthetical
assessments, and Windelband accordingly highlighted the triad of basic
values: "true", "good" and
"beautiful". Later this triad was taken up by Frege in
1918 when he defined the subject-matter of logic (see below). Gabriel
points out (1984: 374) that this connection between logic and a value
theory can be traced back to Hermann Lotze, whose seminars in
Göttingen were attended by both Windelband and Frege.
The decisive move made by Frege was to bring together a philosophical
and a mathematical understanding of values on the basis of a
generalization of the notion of a function on numbers. While Frege may
have been inspired by Windelband's use of the word
'value' (and, even more concretely, of 'truth
value'), it is clear that he uses the word in its mathematical
sense. If predicates are construed as a kind of functional expression
which, being applied to singular terms as arguments, produce
sentences, then the values of the corresponding functions must be
references of sentences. Taking into account that the range of any
function typically consists of objects, it is natural to conclude that
references of sentences must be objects as well. And if one now just
takes it that sentences refer to truth values (*the True* and
*the False*), then it turns out that truth values are indeed
objects, and it seems quite reasonable to generally explicate truth
and falsity as objects and not as properties. As Frege explains:
>
>
> A statement contains no empty place, and therefore we must take its
> *Bedeutung* as an object. But this *Bedeutung* is a
> truth-value. Thus the two truth-values are objects. (Frege 1891,
> trans. Beaney 1997: 140)
>
>
>
Frege's theory of sentences as names of truth values has been
criticized, for example, by Michael Dummett who stated rather
dramatically:
>
>
> This was the most disastrous of the effects of the misbegotten
> doctrine that sentences are a species of complex singular terms, which
> dominated Frege's later period: to rob him of the insight that
> sentences play a unique role, and that the role of almost every other
> linguistic expression ... consists in its part in forming
> sentences. (Dummett 1981: 196)
>
>
>
But even Dummett (1991: 242) concedes that "to deny that
truth-values are objects ... seems a weak response".
### 1.3 The ontology of truth values
If truth values are accepted and taken seriously as a special kind of
objects, the obvious question as to the nature of these entities
arises. The above characterization of truth values as objects is far
too general and requires further specification. One way of such
specification is to qualify truth values as *abstract* objects.
Note that Frege himself never used the word 'abstract'
when describing truth values. Instead, he has a conception of
so-called "logical objects", truth values being primary and
the most fundamental of them (Frege 1976: 121). Among the other
logical objects Frege pays particular attention to are sets and
numbers, emphasizing thus their logical nature (in accordance with his
logicist view).
Church (1956: 25), when considering truth values, explicitly
attributes to them the property of being abstract. Since then it has
been customary to label truth values as abstract objects, thus allocating
them into the same category of entities as mathematical objects
(numbers, classes, geometrical figures) and propositions. One may pose
here an interesting question about the correlation between Fregean
logical objects and abstract objects in the modern sense (see the
entry on
abstract objects).
Obviously, the universe of abstract objects is much broader than the
universe of logical objects as Frege conceives them. The latter are
construed as constituting an ontological foundation for logic, and
hence for mathematics (pursuant to Frege's logicist program).
Generally, the class of *abstracta* includes a wide diversity
of platonic universals (such as redness, youngness, justice or
triangularity) and not only those of them which are logically
necessary. Nevertheless, it may safely be said that logical objects
can be considered as paradigmatic cases of abstract entities, or
abstract objects in their purest form.
It should be noted that finding an adequate definition of abstract
objects is a matter of considerable controversy. According to a common
view, abstract entities lack spatio-temporal properties and relations,
as opposed to concrete objects which exist in space and time (Lowe
1995: 515). In this respect truth values obviously *are*
abstract as they clearly have nothing to do with physical spacetime.
In a similar fashion truth values fulfill another requirement often
imposed upon abstract objects, namely the one of a causal inefficacy
(see, e.g., Grossmann 1992: 7). Here again, truth values are very much
like numbers and geometrical figures: they have no causal power and
make nothing happen.
Finally, it is of interest to consider how truth values can be
introduced by applying so-called *abstraction principles*,
which are used for supplying abstract objects with *criteria of
identity*. The idea of this method of characterizing abstract
objects is also largely due to Frege, who wrote:
>
>
> If the symbol *a* is to designate an object for us, then we must
> have a criterion that decides in all cases whether *b* is the
> same as *a*, even if it is not always in our power to apply this
> criterion. (Frege 1884, trans. Beaney 1997: 109)
>
>
>
More precisely, one obtains a new object by abstracting it from some
given kind of entities, in virtue of certain criteria of identity for
this new (abstract) object. This abstraction is performed in terms of
an equivalence relation defined on the given entities (see Wrigley
2006: 161). The celebrated slogan by Quine (1969: 23) "No entity
without identity" is intended to express essentially the same
understanding of an (abstract) object as an "item falling under
a sortal concept which supplies a well-defined criterion of identity
for its instances" (Lowe 1997: 619).
For truth values such a criterion has been suggested in Anderson and
Zalta (2004: 2), stating that for any two sentences \(p\) and \(q\),
the truth value of \(p\) is identical with the truth value of \(q\) if
and only if \(p\) is (non-logically) equivalent with \(q\) (cf. also
Dummett 1959: 141). This idea can be formally explicated following the
style of presentation in Lowe (1997: 620):
\[ \forall p\forall q[(\textit{Sentence}(p) \mathbin{\&}
\textit{Sentence}(q)) \Rightarrow(tv(p)=tv(q)
\Leftrightarrow(p\leftrightarrow q))], \]
where &, \(\Rightarrow, \Leftrightarrow, \forall\) stand
correspondingly for 'and', 'if... then',
'if and only if' and 'for all' in the
*metalanguage*, and \(\leftrightarrow\) stands for some
*object language* equivalence connective (biconditional).
Incidentally, Carnap (1947: 26), when introducing truth-values as
extensions of sentences, is guided by essentially the same idea.
Namely, he points out a strong analogy between extensions of
predicators and truth values of sentences. Carnap considers a wide
class of designating expressions ("designators") among
which there are predicate expressions ("predicators"),
functional expressions ("functors"), and some others.
Applying the well-known technique of interpreting sentences as
predicators of degree 0, he generalizes the fact that two predicators
of degree \(n\) (say, \(P\) and \(Q)\) have the same extension if and
only if \(\forall x\_1\forall x\_2 \ldots \forall x\_n(Px\_1 x\_2\ldots x\_n
\leftrightarrow Qx\_1 x\_2\ldots x\_n)\) holds. Then, analogously, two
sentences (say, \(p\) and \(q)\), being interpreted as predicators of
degree 0, must have the same extension if and only if
\(p\leftrightarrow q\) holds, that is if and only if they are
equivalent. And then, Carnap remarks, it seems quite natural to take
truth values as extensions for sentences.
Note that this criterion employs a *functional dependency*
between an introduced abstract object (in this case a truth value) and
some other objects (sentences). More specifically, what is considered
is the truth value of a sentence (or proposition, or the like).
The criterion of identity for truth values is formulated then through
the logical relation of equivalence holding between these other
objects--sentences, propositions, or the like (with an explicit
quantification over them).
It should also be remarked that the properties of the object language
biconditional depend on the logical system in which the biconditional
is employed. Biconditionals of different logics may have different
logical properties, and it surely matters what kind of the equivalence
connective is used for defining truth values. This means that the
concept of a truth value introduced by means of the identity criterion
that involves a biconditional between sentences is also
logic-relative. Thus, if '\(\leftrightarrow\)' stands for
material equivalence, one obtains classical truth values, but if the
intuitionistic biconditional is employed, one gets truth values of
intuitionistic logic, etc. Taking into account the role truth values
play in logic, such an outcome seems to be not at all unnatural.
Anderson and Zalta (2004: 13), making use of an object theory from
Zalta (1983), propose the following definition of 'the truth
value of proposition \(p\)' ('\(tv(p)\)' [notation
adjusted]):
\[ tv(p) =\_{df} \iota x(A!x \wedge \forall F(xF \leftrightarrow
\exists q(q\leftrightarrow p \wedge F= [\lambda y\ q]))), \]
where \(A!\) stands for a primitive theoretical predicate 'being
abstract', \(xF\) is to be read as "\(x\) encodes
\(F\)" and \([\lambda y\ q]\) is a propositional
property ("being such a \(y\) that \(q\)"). That is,
according to this definition, "the extension of \(p\) is the
abstract object that encodes all and only the properties of the form
\([\lambda y\ q]\) which are constructed out of
propositions \(q\) materially equivalent to \(p\)" (Anderson and
Zalta 2004: 14).
The notion of a truth value in general is then defined as an object
which is the truth value of some proposition:
\[TV(x) =\_{df} \exists p(x = tv(p)).\]
Using this apparatus, it is possible to explicitly define the Fregean
truth values *the True* \((\top)\) and *the False*
\((\bot)\):
\[ \begin{align} \top &=\_{df} \iota x(A!x \wedge \forall F(xF
\leftrightarrow \exists p(p \wedge F= [\lambda y\ p])));\\ \bot
&=\_{df} \iota x (A!x \wedge \forall F(xF \leftrightarrow \exists
p(\neg p \wedge F= [\lambda y\ p]))).\\ \end{align} \]
Anderson and Zalta prove then that \(\top\) and \(\bot\) are indeed
truth values and, moreover, that there are exactly two such objects.
The latter result is expected, if one bears in mind that what the
definitions above actually introduce are the *classical* truth
values (as the underlying logic is classical). Indeed,
\(p\leftrightarrow q\) is classically equivalent to \((p\wedge
q)\vee(\neg p\wedge \neg q)\), and \(\neg(p\leftrightarrow q)\) is
classically equivalent to \((p\wedge \neg q)\vee(\neg p\wedge q)\).
That is, the connective of material equivalence divides sentences into
two distinct collections. Due to the law of excluded middle these
collections are exhaustive, and by virtue of the law of
non-contradiction they are exclusive. Thus, we get exactly two
equivalence classes of sentences, each being a hypostatized
representative of one of two classical truth values.
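The partition just described can be illustrated with a small sketch: relative to a fixed classical valuation, grouping sentences by material equivalence yields exactly two classes, one per truth value (the stock of sample sentences and the particular assignment are invented for illustration):

```python
# Sentences are modeled as functions from an assignment of atoms to a
# classical truth value; this encoding is an assumption of the sketch.

sentences = {
    "p":          lambda a: a["p"],
    "not p":      lambda a: not a["p"],
    "p and q":    lambda a: a["p"] and a["q"],
    "p or not p": lambda a: a["p"] or not a["p"],
}

assignment = {"p": True, "q": False}  # one fixed classical valuation

def materially_equivalent(s1, s2):
    """s1 <-> s2 holds iff the two sentences share a truth value."""
    return sentences[s1](assignment) == sentences[s2](assignment)

# Partition the sentences into equivalence classes under <->.
classes = {}
for name, sentence in sentences.items():
    classes.setdefault(sentence(assignment), []).append(name)

print(materially_equivalent("p", "p or not p"))  # True: same class
print(len(classes))  # 2: the true sentences and the false ones
```

By excluded middle every sentence lands in one of the two classes, and by non-contradiction in only one, mirroring the argument in the text.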
## 2. Truth values as logical values
### 2.1 Logic as the science of logical values
In a late paper Frege (1918) claims that the word 'true'
determines the subject-matter of logic in exactly the same way as the
word 'beautiful' does for aesthetics and the word
'good' for ethics. Thus, according to such a view, the
proper task of logic consists, ultimately, in investigating "the
laws of being true" (Sluga 2002: 86). By doing so, logic is
interested in truth as such, understood objectively, and not in what
is merely taken to be true. Now, if one admits that truth is a
specific abstract object (the corresponding truth value), then logic
in the first place has to explore the features of this object and its
interrelations to other entities of various other kinds.
A prominent adherent of this conception was Jan Łukasiewicz. As
he paradigmatically put it:
>
>
> All true propositions denote one and the same object, namely truth,
> and all false propositions denote one and the same object, namely
> falsehood. I consider truth and falsehood to be *singular*
> objects in the same sense as the number 2 or 4 is. ...
> Ontologically, truth has its analogue in being, and falsehood, in
> non-being. The objects denoted by propositions are called *logical
> values*. Truth is the positive, and falsehood is the negative
> logical value. ... Logic is the science of objects of a special
> kind, namely a science of *logical values*. (Łukasiewicz
> 1970: 90)
>
>
>
This definition may seem rather unconventional, for logic is usually
treated as the science of correct reasoning and valid inference. The
latter understanding, however, calls for further justification. This
becomes evident, as soon as one asks, *on what grounds* one
should qualify this or that pattern of reasoning as correct or
incorrect.
In answering this question, one has to take into account that any
valid inference should be based on logical rules which, according to a
commonly accepted view, should at least guarantee that in a valid
inference the conclusion(s) is (are) true if all the premises are
true. Translating this demand into the Fregean terminology, it would
mean that in the course of a correct inference the possession of the
truth value *The True* should be *preserved* from the
premises to the conclusion(s). Thus, granting the realistic treatment
of truth values adopted by Frege, the understanding of logic as the
science of truth values in fact provides logical rules with an
ontological justification placing the roots of logic in a certain kind
of ideal entities (see Shramko 2014).
These entities constitute a certain uniform domain, which can be
viewed as a subdomain of Frege's so-called "third
realm" (the realm of the objective content of thoughts, and
generally abstract objects of various kinds, see Frege 1918, cf.
Popper 1972 and also Burge 1992: 634). Among the subdomains of this
third realm one finds, e.g., the collection of mathematical objects
(numbers, classes, etc.). The set of truth values may be regarded as
forming another such subdomain, namely the one of *logical
values*, and logic as a branch of science rests essentially on
this *logical domain* and on exploring its features and
regularities.
### 2.2 Many-valued logics, truth degrees and valuation systems
According to Frege, there are exactly two truth values, *the
True* and *the False*. This opinion appears to be rather
restrictive, and one may ask whether it is really indispensable for
the concept of a truth value. One should observe that in elaborating
this conception, Frege assumed specific requirements of his system of
the *Begriffsschrift*, especially the principle of bivalence
taken as a metatheoretical principle, viz. that there exist only two
distinct logical values. On the object-language level this principle
finds its expression in the famous classical laws of excluded middle
and non-contradiction. The further development of modern logic,
however, has clearly demonstrated that classical logic is only one
particular theory (although maybe a very distinctive one) among the
vast variety of logical systems. In fact, the Fregean ontological
interpretation of truth values depicts logical principles as a kind of
ontological postulations, and as such they may well be modified or
even abandoned. For example, by giving up the principle of bivalence,
one is naturally led to the idea of postulating *many truth
values*.
It was Łukasiewicz who, as early as 1918, proposed to take
seriously logical values other than truth and falsehood (see
Łukasiewicz 1918, 1920). Independently of Łukasiewicz, Emil
Post in his dissertation from 1920, published as Post 1921, introduced
\(m\)-valued truth tables, where \(m\) is any positive integer.
Whereas Post's interest in *many-valued logic* (where
"many" means "more than two") was almost
exclusively mathematical, Łukasiewicz's motivation was
philosophical (see the entry on
many-valued logic).
He contemplated the semantical value of sentences about the
contingent future, as discussed in Aristotle's *De
interpretatione*. Łukasiewicz introduced a third truth value
and interpreted it as "possible". By generalizing this
idea and also adopting the above understanding of the subject-matter
of logic, one naturally arrives at the representation of particular
logical systems as a certain kind of *valuation systems* (see,
e.g., Dummett 1981, 2000; Ryan and Sadler 1992).
Consider a propositional language \(\mathcal{L}\) built upon a set of
atomic sentences \(\mathcal{P}\) and a set of propositional
connectives \(\mathcal{C}\) (the set of sentences of \(\mathcal{L}\)
being the smallest set containing \(\mathcal{P}\) and being closed
under the connectives from \(\mathcal{C})\). Then a *valuation
system* \(\mathbf{V}\) for the language \(\mathcal{L}\) is a
triple \(\langle \mathcal{V}, \mathcal{D}, \mathcal{F}\rangle\), where
\(\mathcal{V}\) is a non-empty set with at least two elements,
\(\mathcal{D}\) is a subset of \(\mathcal{V}\), and
\(\mathcal{F} = \{f\_{c \_1},\ldots, f\_{c \_m}\}\) is a set of functions
such that \(f\_{c \_i}\) is an \(n\)-place function on \(\mathcal{V}\)
if \(c\_i\) is an \(n\)-place connective. Intuitively, \(\mathcal{V}\)
is the set of truth values, \(\mathcal{D}\) is the set of
*designated* truth values, and \(\mathcal{F}\) is the set of
truth-value functions interpreting the elements of \(\mathcal{C}\). If
the set of truth values of a valuation system \(\mathbf{V}\) has \(n\)
elements, \(\mathbf{V}\) is said to be \(n\)-valued. Any valuation
system can be equipped with an assignment function which maps the set
of atomic sentences into \(\mathcal{V}\). Each assignment \(a\)
relative to a valuation system \(\mathbf{V}\) can be extended to all
sentences of \(\mathcal{L}\) by means of a valuation function \(v\_a\)
defined in accordance with the following conditions:
\[
\begin{align}
\forall p &\in \mathcal{P} , &v\_a (p) &= a(p) ; \tag{1}\\
\forall c\_i &\in \mathcal{C} , & v\_a ( c\_i ( A\_1 ,\ldots , A\_n )) &= f\_{c\_i} ( v\_a ( A\_1 ),\ldots , v\_a ( A\_n )) \tag{2} \\
\end{align}
\]
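The definition above can be made concrete with a minimal sketch of one particular valuation system, Łukasiewicz's three-valued one (the encoding of formulas as nested tuples is an assumption of this sketch, not part of the definition):

```python
# Łukasiewicz's 3-valued valuation system: V = {0, 1/2, 1}, D = {1},
# with truth-value functions for negation and implication.
from fractions import Fraction

HALF = Fraction(1, 2)
V = {Fraction(0), HALF, Fraction(1)}   # the set of truth values
D = {Fraction(1)}                      # the designated values

# F: truth-value functions interpreting the connectives
F = {
    "not": lambda x: 1 - x,
    "imp": lambda x, y: min(1, 1 - x + y),
}

def valuation(formula, a):
    """Extends an assignment a on atoms to all formulas, following
    clauses (1) and (2) above."""
    if isinstance(formula, str):        # atomic sentence: clause (1)
        return a[formula]
    connective, *args = formula         # compound sentence: clause (2)
    return F[connective](*(valuation(arg, a) for arg in args))

a = {"p": HALF}
print(valuation(("imp", "p", "p"), a) == 1)   # True: designated
print(valuation(("not", "p"), a) == HALF)     # True: 1 - 1/2 = 1/2
```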
It is interesting to observe that the elements of \(\mathcal{V}\) are
sometimes referred to as *quasi truth values*. Siegfried
Gottwald (1989: 2) explains that one reason for using the term
'quasi truth value' is that there is no convincing and
uniform interpretation of the truth values that in many-valued logic
are assumed in addition to the classical truth values *the
True* and *the False*, an understanding that, according to
Gottwald, associates the additional values with the naive
understanding of being true or, respectively, with the naive
understanding of *degrees of being true* (cf. also the remark by Font (2009:
383) that "[o]ne of the main problems in many-valued logic, at
least in its initial stages, was the interpretation of the
'intermediate' or 'non-classical'
values", et seq.). In later publications, Gottwald has changed
his terminology and states that
>
>
> [t]o avoid any confusion with the case of classical logic one prefers
> in many-valued logic to speak of *truth degrees* and to use the
> word "truth value" only for classical logic. (Gottwald
> 2001: 4)
>
>
>
Nevertheless in what follows the term 'truth values' will
be used even in the context of many-valued logics, without any
commitment to a philosophical conception of truth as a graded notion
or a specific understanding of semantical values in addition to the
classical truth values.
Since the cardinality of \(\mathcal{V}\) may be greater than 2, the
notion of a valuation system provides a natural foundational framework
for the very idea of a many-valued logic. The set \(\mathcal{D}\) of
designated values is of central importance for the notion of a
valuation system. This set can be seen as a generalization of the
classical truth value *the True* in the sense that it
determines many central logical notions and thereby generalizes some
of the important roles played by Frege's *the True* (cf.
the introductory remarks about uses of truth values). For example, the
set of tautologies (logical laws) is directly specified by the given
set of designated truth values: a sentence \(A\) is a
*tautology* in a valuation system \(\mathbf{V}\) iff for every
assignment \(a\) relative to \(\mathbf{V}\), \(v\_a(A) \in
\mathcal{D}\). Another fundamental logical notion--that of an
entailment relation--can also be defined by referring to the set
\(\mathcal{D}\). For a given valuation system \(\mathbf{V}\) a
corresponding entailment relation \((\vDash\_V)\) is usually defined by
postulating the preservation of designated values from the premises to
the conclusion:
\[ \tag{3} D\vDash\_V A \textrm{ iff }\forall a[(\forall B \in
D: v\_a (B) \in \mathcal{D}) \Rightarrow v \_a (A) \in
\mathcal{D}]. \]
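For illustration, clauses (1)-(2) and definition (3) can be made executable. The following Python sketch is ours, not part of the entry's formal apparatus: formulas are encoded as strings (atoms) or nested tuples, and all function names are illustrative choices.

```python
# A minimal sketch of a valuation system: values, designated values, and
# truth functions. Assignments map atoms to values and are extended to all
# formulas by clauses (1)-(2); definition (3) gives entailment.
from itertools import product

def valuate(formula, assignment, functions):
    """Extend an atomic assignment to all formulas (clauses (1) and (2))."""
    if isinstance(formula, str):                 # clause (1): atomic sentence
        return assignment[formula]
    c, *args = formula                           # clause (2): connective case
    return functions[c](*(valuate(a, assignment, functions) for a in args))

def is_tautology(formula, atoms, values, designated, functions):
    """A is a tautology iff v_a(A) is designated under every assignment a."""
    return all(
        valuate(formula, dict(zip(atoms, vals)), functions) in designated
        for vals in product(values, repeat=len(atoms))
    )

def entails(premises, conclusion, atoms, values, designated, functions):
    """Definition (3): designated premises force a designated conclusion."""
    for vals in product(values, repeat=len(atoms)):
        a = dict(zip(atoms, vals))
        if all(valuate(p, a, functions) in designated for p in premises):
            if valuate(conclusion, a, functions) not in designated:
                return False
    return True

# The classical valuation system over {T, F} with D = {T}.
CL = {
    '~': lambda x: 'F' if x == 'T' else 'T',
    '&': lambda x, y: 'T' if x == y == 'T' else 'F',
    'v': lambda x, y: 'T' if 'T' in (x, y) else 'F',
}
lem = ('v', 'p', ('~', 'p'))                     # p v ~p
print(is_tautology(lem, ['p'], ['T', 'F'], {'T'}, CL))        # True
print(entails([('&', 'p', ('~', 'p'))], 'q', ['p', 'q'],
              ['T', 'F'], {'T'}, CL))                         # True (explosion)
```

The same machinery works unchanged for any finite valuation system; only the value set, the set of designated values, and the truth functions need to be replaced.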
A pair \(\mathcal{M} = \langle \mathbf{V}, v\_a\rangle\), where
\(\mathbf{V}\) is an (\(n\)-valued) valuation system and \(v\_a\) a
valuation in \(\mathbf{V}\), may be called an (\(n\)-valued)
*model* based on \(\mathbf{V}\). Every model \(\mathcal{M} =
\langle \mathbf{V}, v\_a\rangle\) comes with a corresponding entailment
relation \(\vDash\_{\mathcal{M}}\) by defining
\(D\vDash\_{\mathcal{M} }A\textrm{ iff }(\forall B \in D:
v\_a (B) \in \mathcal{D}) \Rightarrow v\_a(A) \in \mathcal{D}\).
Suppose \(\mathfrak{L}\) is a syntactically defined logical system
with a consequence relation \(\vdash\_{ \mathfrak{L}
}\), specified as a relation between the power-set of \(\mathcal{L}\)
and \(\mathcal{L}\). Then a valuation system \(\mathbf{V}\) is said
to be *strictly characteristic* for \(\mathfrak{L}\) just in
case \(D\vDash\_V A \textrm{ iff } D\vdash\_{ \mathfrak{L}
}A\) (see Dummett 1981: 431). Conversely, one says that
\(\mathfrak{L}\) *is characterized* by \(\mathbf{V}\). Thus, if
a valuation system is said to *determine* a logic, the
valuation system *by itself* is, properly speaking,
*not* a logic, but only serves as a semantic basis for some
logical system. Valuation systems are often referred to as
(*logical*) *matrices*. Note that in Urquhart 1986, the
set \(\mathcal{D}\) of designated elements of a matrix is required to
be non-empty, and in Dunn & Hardegree 2001, \(\mathcal{D}\) is
required to be a non-empty proper subset of \(\mathbf{V}\). With a
view on semantically defining a many-valued logic, these restrictions
are very natural and have been taken up in Shramko & Wansing 2011
and elsewhere. For the characterization of consequence relations (see
the supplementary document
Suszko's Thesis), however, the restrictions do not apply.
In this way Fregean, i.e., classical, logic can be presented as
determined by a particular valuation system based on exactly two
elements: \(\mathbf{V}\_{cl} = \langle \{T, F\}, \{T\}, \{ f\_{\wedge},
f\_{\vee}, f\_{\rightarrow}, f\_{\sim}\}\rangle\), where \(f\_{\wedge},
f\_{\vee}, f\_{\rightarrow},f\_{\sim}\) are given by the classical truth
tables for conjunction, disjunction, material implication, and
negation.
As examples of valuation systems based on more than two elements,
consider two well-known valuation systems which determine
Kleene's (strong) "logic of indeterminacy" \(K\_3\)
and Priest's "logic of paradox" \(P\_3\). In a
propositional language without implication, \(K\_3\) is specified by
the *Kleene matrix* \(\mathbf{K}\_3 = \langle \{T, I, F\},
\{T\}, \{ f\_c: c \in \{\sim , \wedge , \vee \}\} \rangle\), where the
functions \(f\_c\) are defined as follows:
\[
\begin{array}{c|c}
f\_\sim & \\\hline
T & F \\
I & I \\
F & T \\
\end{array}\quad
\begin{array}{c|c|c|c}
f\_\wedge & T & I & F \\\hline
T & T & I & F \\
I & I & I & F \\
F & F & F & F \\
\end{array}\quad
\begin{array}{c|c|c|c}
f\_\vee & T & I & F \\\hline
T & T & T & T \\
I & T & I & I \\
F & T & I & F \\
\end{array}
\]
The *Priest matrix* \(\mathbf{P}\_3\) differs from
\(\mathbf{K}\_3\) only in that \(\mathcal{D} = \{T, I\}\). Entailment
in \(\mathbf{K}\_3\) as well as in \(\mathbf{P}\_3\) is defined by means
of
(3).
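To see the effect of the different sets of designated values, the law of excluded middle can be checked mechanically. The sketch below is ours for illustration: it reads the negation and conjunction functions off the tables above and derives disjunction via the De Morgan relation they satisfy.

```python
from itertools import product

# Strong Kleene truth functions over {T, I, F}, read off the tables above.
NOT = {'T': 'F', 'I': 'I', 'F': 'T'}
AND = {('T','T'):'T', ('T','I'):'I', ('T','F'):'F',
       ('I','T'):'I', ('I','I'):'I', ('I','F'):'F',
       ('F','T'):'F', ('F','I'):'F', ('F','F'):'F'}
# Disjunction via De Morgan: x v y = ~(~x & ~y), matching the table above.
OR = {(x, y): NOT[AND[NOT[x], NOT[y]]] for x, y in product('TIF', repeat=2)}

def tautology(designated):
    """Is p v ~p a tautology when `designated` is the set D?"""
    return all(OR[v, NOT[v]] in designated for v in 'TIF')

print(tautology({'T'}))       # K_3: False -- v(p) = I gives value I
print(tautology({'T', 'I'}))  # P_3: True  -- I is designated in P_3
```

With the same truth functions, merely enlarging \(\mathcal{D}\) from \(\{T\}\) to \(\{T, I\}\) thus changes which sentences count as logical laws.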
There are natural intuitive interpretations of \(I\) in
\(\mathbf{K}\_3\) and in \(\mathbf{P}\_3\) as the
*underdetermined* and the *overdetermined* value
respectively--a truth-value gap and a truth-value glut. Formally
these interpretations can be modeled by presenting the values as
certain subsets of the set of classical truth values \(\{T, F\}\).
Then \(T\) turns into \(\mathbf{T} = \{T\}\) (understood as
"true only"), \(F\) into \(\mathbf{F} = \{F\}\)
("false only"), \(I\) is interpreted in \(K\_3\) as
\(\mathbf{N} = \{\} = \varnothing\) ("*neither* true nor
false"), and in \(P\_3\) as \(\mathbf{B} = \{T, F\}\)
("*both* true and false"). (Note that Asenjo
(1966) also considers the same truth-tables with an interpretation of the
third value as "antinomic".) The designatedness of a truth
value can be understood in both cases as containment of the classical
\(T\) as a member.
If one combines all these new values into a joint framework, one
obtains the four-valued logic \(B\_4\) introduced by Dunn and Belnap
(Dunn 1976; Belnap 1977a,b). A Gentzen-style formulation can be found
in Font (1997: 7). This logic is determined by the *Belnap
matrix* \(\mathbf{B}\_4 = \langle \{\mathbf{N}, \mathbf{T},
\mathbf{F}, \mathbf{B}\}, \{\mathbf{T}, \mathbf{B}\}, \{ f\_c: c \in
\{\sim , \wedge , \vee \}\}\rangle\), where the functions \(f\_c\) are
defined as follows:
\[
\begin{array}{c|c}
f\_\sim & \\\hline
T & F \\
B & B \\
N & N \\
F & T \\
\end{array}\quad
\begin{array}{c|c|c|c|c}
f\_\wedge & T & B & N & F \\\hline
T & T & B & N & F \\
B & B & B & F & F \\
N & N & F & N & F \\
F & F & F & F & F\\
\end{array}\quad
\begin{array}{c|c|c|c|c}
f\_\vee & T & B & N & F \\\hline
T & T & T & T & T\\
B & T & B & T & B \\
N & T & T & N & N \\
F & T & B & N & F \\
\end{array}
\]
Definition (3) applied to the Belnap matrix
determines the entailment relation of \(\mathbf{B}\_4\). This
entailment relation is formalized as the well-known logic of
"first-degree entailment" (\(E\_{fde}\)) introduced in
Anderson & Belnap 1975 (see also Omori and Wansing 2017).
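A characteristic feature of first-degree entailment is that *ex contradictione quodlibet* fails. This can be verified by brute force over the Belnap matrix; the sketch below is ours and uses the subset representation of the four values described above (the helper names are illustrative).

```python
from itertools import product

# Belnap's four values as subsets of {T, F}: N = {}, T = {T}, F = {F}, B = {T, F}.
N, T, F, B = map(frozenset, ('', 'T', 'F', 'TF'))
VALUES = [N, T, F, B]
DESIGNATED = {T, B}                          # the values containing classical T

def neg(v):
    return frozenset({'T': 'F', 'F': 'T'}[e] for e in v)

def conj(v, w):
    out = set()
    if 'T' in v and 'T' in w: out.add('T')   # true iff both are true
    if 'F' in v or 'F' in w:  out.add('F')   # false iff at least one is false
    return frozenset(out)

def explodes():
    """Does p & ~p entail q in the Belnap matrix (definition (3))?"""
    for vp, vq in product(VALUES, repeat=2):
        if conj(vp, neg(vp)) in DESIGNATED and vq not in DESIGNATED:
            return False                     # counterexample found
    return True

print(explodes())   # False: e.g., v(p) = B makes p & ~p designated, v(q) = N
```

The counterexample is exactly the "overdetermined" value: if \(p\) is both true and false, so is \(p \wedge \sim p\), which is designated, while \(q\) may carry the undesignated value \(\mathbf{N}\).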
The syntactic notion of a single-conclusion consequence relation has
been extensively studied by representatives of the Polish school of
logic, most notably by Alfred Tarski, who in fact initiated this line
of research (see Tarski 1930a,b; cf. also Wójcicki 1988). In
view of certain key features of a standard consequence relation it is
quite remarkable--as well as important--that any entailment
relation \(\vDash\_V\) defined as above has the following structural
properties (see Ryan and Sadler 1992: 34):
\[\begin{align}
\tag{4} D\cup \{A\}&\vDash\_V A &\textrm{(Reflexivity)} \\
\tag{5} \textrm{If } D\vDash\_V A \textrm{ then } D\cup G &\vDash\_V A &\textrm{(Monotonicity)}\\
\tag{6} \textrm{If } D\vDash\_V A \textrm{ for every } A \in G \textrm{ and } G\cup D \vDash\_V B, \textrm{ then } D &\vDash\_V B &\textrm{(Cut)}
\end{align}\]
Moreover, for every \(A \in \mathcal{L}\), every \(D \subseteq
\mathcal{L}\), and every uniform substitution function \(s\) on
\(\mathcal{L}\) the following *Substitution* property holds
(\(s(D)\) stands for \(\{ s(B) \mid B \in
D\})\):
\[
\tag{7}
D\vDash\_V A \textrm{ implies } s(D)\vDash\_Vs(A).
\]
(The function of uniform substitution \(s\) is defined as follows.
Let \(B\) be a formula in \(\mathcal{L}\), let \(p\_1,\ldots, p\_n\) be
all the propositional variables occurring in \(B\), and let
\(s(p\_1) = A\_1,\ldots , s(p\_n) = A\_n\) for some formulas
\(A\_1 ,\ldots ,A\_n\). Then \(s(B)\) is the formula that results
from \(B\) by substituting simultaneously \(A\_1\),..., \(A\_n\) for
all occurrences of \(p\_1,\ldots, p\_n\), respectively.)
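The simultaneity of the substitution matters: variables occurring in the substituted formulas are not themselves replaced again. A small sketch (the tuple encoding of formulas is ours):

```python
# Uniform substitution on formulas built as nested tuples ('c', A1, ..., An),
# with atoms as strings (an illustrative encoding, not fixed by the text).
def substitute(formula, s):
    """Simultaneously replace every variable p by s[p] (s(p) = p if unlisted)."""
    if isinstance(formula, str):
        return s.get(formula, formula)
    c, *args = formula
    return (c, *(substitute(a, s) for a in args))

# s(p) = (q & r), s(q) = ~p, applied to p v q.  Note that the q inside s(p)
# and the p inside s(q) are NOT replaced again: the substitution is one-shot.
print(substitute(('v', 'p', 'q'), {'p': ('&', 'q', 'r'), 'q': ('~', 'p')}))
# -> ('v', ('&', 'q', 'r'), ('~', 'p'))
```

A consequence relation is structural, in the sense of (7), exactly when validity is preserved under every such map.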
If \(\vDash\_V\) in the conditions
(4)-(6)
is replaced by \(\vdash\_{ \mathfrak{L} }\), then one obtains what is
often called a *Tarskian consequence relation*. If additionally
a consequence relation has the substitution property
(7),
then it is called *structural*. Thus, any entailment relation
defined for a given valuation system \(\mathbf{V}\) presents an
important example of a consequence relation, in that \(\mathbf{V}\) is
strictly characteristic for some logical system \(\mathfrak{L}\) with
a structural Tarskian consequence relation.
Generally speaking, the framework of valuation systems not only
suits the conception of logic as the science of truth values
perfectly, but also turns out to be an effective technical tool for
resolving various important problems of modern logic, such as proving
soundness, completeness, or the independence of axioms.
### 2.3 Truth values, truth degrees, and vague concepts
The term 'truth degrees', used by Gottwald and many other
authors, suggests that truth comes by degrees, and these degrees may
be seen as truth values in an extended sense. The idea of truth as a
graded notion has been applied to model vague predicates and to obtain
a solution to the Sorites Paradox, the Paradox of the Heap (see the
entry on the
Sorites Paradox).
However, the success of applying many-valued logic to the problem of
vagueness is highly controversial. Timothy Williamson (1994: 97), for
example, holds that the phenomenon of higher-order vagueness
"makes most work on many-valued logic irrelevant to the problem
of vagueness".
In any case, the vagueness of concepts has been much debated in
philosophy (see the entry on
vagueness)
and it was one of the major motivations for the development of
*fuzzy logic* (see the entry on
fuzzy logic).
In the 1960s, Lotfi Zadeh (1965) introduced the notion of a *fuzzy
set*. A characteristic function of a set \(X\) is a mapping which
is defined on a superset \(Y\) of \(X\) and which indicates membership
of an element in \(X\). The range of the characteristic function of a
classical set \(X\) is the two-element set \(\{0,1\}\) (which may be
seen as the set of classical truth values). The function assigns the
value 1 to elements of \(X\) and the value 0 to all elements of \(Y\)
not in \(X\). A fuzzy set has a membership function ranging over the
real interval [0,1]. A vague predicate such as 'is much earlier
than March 20th, 1963', 'is beautiful',
or 'is a heap' may then be regarded as denoting a fuzzy
set. The membership function \(g\) of the fuzzy set denoted by
'is much earlier than March 20th, 1963' thus
assigns values (seen as truth degrees) from the interval [0, 1] to
moments in time, for example \(g\)(1p.m., August 1st, 2006)
\(= 0\), \(g\)(3a.m., March 19th, 1963) \(= 0\),
\(g\)(9:16a.m., April 9th, 1960) \(= 0.005\), \(g\)(2p.m.,
August 13th, 1943) \(= 0.05\), \(g\)(7:02a.m., December
2nd, 1278) \(= 1\).
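A membership function of this kind is easy to write down. The following sketch is ours: the entry only fixes that the values lie in [0, 1] and gives a few sample values, so the linear shape and the 20-year window below are assumptions made purely for illustration, and the function will not reproduce the exact sample values quoted above.

```python
# A sketch of a fuzzy membership function for 'is much earlier than
# March 20th, 1963'.  The clipped-linear shape and the 20-year window are
# illustrative assumptions; only the range [0, 1] is fixed by the text.
from datetime import datetime

CUTOFF = datetime(1963, 3, 20)
WINDOW_YEARS = 20.0            # assumed: full membership 20+ years before

def g(moment):
    years_before = (CUTOFF - moment).days / 365.25
    return max(0.0, min(1.0, years_before / WINDOW_YEARS))

print(g(datetime(2006, 8, 1)))   # 0.0 -- after the cutoff
print(g(datetime(1278, 12, 2)))  # 1.0 -- centuries earlier
```

Moments well before the cutoff receive degree 1, moments at or after it degree 0, and moments in between intermediate degrees, as required of a fuzzy set.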
The application of continuum-valued logics to the Sorites Paradox has
been suggested by Joseph Goguen (1969). The Sorites Paradox in its
so-called conditional form is obtained by repeatedly applying
*modus ponens* in arguments such as:
* A collection of 100,000 grains of sand is a heap.
* If a collection of 100,000 grains of sand is a heap, then a
collection of 99,999 grains of sand is a heap.
* If a collection of 99,999 grains of sand is a heap, then a
collection of 99,998 grains of sand is a heap.
* ...
* If a collection of 2 grains of sand is a heap, then a collection
of 1 grain of sand is a heap.
* Therefore: A collection of 1 grain of sand is a heap.
Whereas it seems that all premises are acceptable, because the first
premise is true and one grain does not make a difference to a
collection of grains being a heap or not, the conclusion is, of
course, unacceptable. If the predicate 'is a heap' denotes
a fuzzy set and the conditional is interpreted as implication in
Łukasiewicz's continuum-valued logic, then the Sorites
Paradox can be avoided. The truth-function \(f\_{\rightarrow}\) of
Łukasiewicz's implication \(\rightarrow\) is defined by
stipulating that if \(x \le y\), then \(f\_{\rightarrow}(x, y) = 1\),
and otherwise \(f\_{\rightarrow}(x, y) = 1 - (x - y)\). If, say, the
truth value of the sentence 'A collection of 500 grains of sand
is a heap' is 0.8 and the truth value of 'A collection of
499 grains of sand is a heap' is 0.7, then the truth value of
the implication 'If a collection of 500 grains of sand is a
heap, then a collection of 499 grains of sand is a heap' is 0.9.
Moreover, if the acceptability of a statement is defined as having a
value greater than \(j\) for \(0 \lt j \lt 1\) and all the conditional
premises of the Sorites Paradox do not fall below the value \(j\),
then *modus ponens* does not preserve acceptability, because
the conclusion of the Sorites Argument, being evaluated as 0, is
unacceptable.
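The diagnosis can be reproduced numerically. The sketch below is ours: the implication truth function is the one just defined, while the linear truth-degree profile assigned to 'n grains form a heap' is an assumption made for illustration (on it, the conclusion receives a value close to 0 rather than exactly 0).

```python
# Łukasiewicz implication applied to the Sorites chain.
def imp(x, y):
    """Truth function of Łukasiewicz implication, as defined above."""
    return 1.0 if x <= y else 1.0 - (x - y)

def heap(n, top=100_000):
    return min(1.0, n / top)     # assumed degree profile: linear in n

# Each conditional premise 'if n grains are a heap, then n-1 grains are':
premise_values = [imp(heap(n), heap(n - 1)) for n in range(2, 100_001)]
print(min(premise_values) > 0.9999)   # every premise is very nearly fully true
print(heap(1))                        # but the conclusion's value is ~0
```

Each application of *modus ponens* can lower the degree of truth by the tiny amount \(1 - v(A \rightarrow B)\), and a hundred thousand such steps accumulate into the drop from near 1 to near 0; acceptability (value above \(j\)) is therefore not preserved along the chain.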
Alasdair Urquhart (1986: 108) stresses
>
>
> the extremely artificial nature of the attaching of precise numerical
> values to sentences like ... "Picasso's
> *Guernica* is beautiful".
>
>
>
To overcome the problem of assigning precise values to predications of
vague concepts, Zadeh (1975) introduced *fuzzy truth values* as
distinct from the numerical truth values in [0, 1], the former being
fuzzy subsets of the set [0, 1], understood as *true*, *very
true*, *not very true*, etc.
The interpretation of continuum-valued logics in terms of fuzzy set
theory has for some time been seen as defining the field of mathematical
fuzzy logic. Susan Haack (1996) refers to such systems of mathematical
fuzzy logic as "base logics" of fuzzy logic and reserves
the term 'fuzzy logics' for systems in which the truth
values themselves are fuzzy sets. Fuzzy logic in Zadeh's latter
sense has been thoroughly criticized from a philosophical point of
view by Haack (1996) for its "methodological
extravagances" and its linguistic incorrectness. Haack
emphasizes that her criticisms of fuzzy logic do not apply to the base
logics. Moreover, it should be pointed out that mathematical fuzzy
logics are nowadays studied not in the first place as continuum-valued
logics, but as many-valued logics related to residuated lattices (see
Hájek 1998; Cignoli *et al.* 2000; Gottwald 2001; Galatos
*et al.* 2007), whereas fuzzy logic in the broad sense is to a
large extent concerned with certain engineering methods.
A fundamental concern about the semantical treatment of vague
predicates is whether an adequate semantics should be
truth-functional, that is, whether the truth value of a complex
formula should depend functionally on the truth values of its
subformulas. Whereas mathematical fuzzy logic is truth-functional,
Williamson (1994: 97) holds that "the nature of vagueness is not
captured by any approach that generalizes truth-functionality".
According to Williamson, the degree of truth of a conjunction, a
disjunction, or a conditional just fails to be a function of the
degrees of truth of vague component sentences. The sentences
'John is awake' and 'John is asleep', for
example, may have the same degree of truth. By truth-functionality the
sentences 'If John is awake, then John is awake' and
'If John is awake, then John is asleep' are alike in truth
degree, indicating for Williamson the failure of
degree-functionality.
One way of reasoning about vagueness that is, in a certain sense,
non-truth-functional is supervaluationism. The method of supervaluations has been
developed by Henryk Mehlberg (1958) and Bas van Fraassen (1966) and
has later been applied to vagueness by Kit Fine (1975), Rosanna Keefe
(2000) and others.
Van Fraassen's aim was to develop a semantics for sentences
containing non-denoting singular terms. Even if one grants that atomic
sentences containing non-denoting singular terms and some
attributions of vague predicates are neither true nor false, it
nevertheless seems natural not to preclude that compound sentences of
a certain shape containing non-denoting terms or vague predications
*are* either true or false, e.g., sentences of the form
'If \(A\), then \(A\)'. Supervaluational semantics
provides a solution to this problem. A three-valued assignment \(a\)
into \(\{T, I, F\}\) may assign a truth-value gap (or rather the value
\(I)\) to the vague sentence 'Picasso's *Guernica*
is beautiful'. Any classical assignment \(a'\) that agrees with
\(a\) whenever \(a\) assigns \(T\) or \(F\) may be seen as a
precisification (or superassignment) of \(a\). A sentence may then be
said to be supertrue under assignment \(a\) if it is true under every
precisification \(a'\) of \(a\). Thus, if \(a\) is a three-valued
assignment into \(\{T, I, F\}\) and \(a'\) is a two-valued assignment
into \(\{T, F\}\) such that \(a(p) = a'(p)\) if \(a(p) \in \{T, F\}\),
then \(a'\) is said to be a *superassignment* of \(a\). It
turns out that if \(a\) is an assignment extended to a valuation
function \(v\_a\) for the Kleene matrix \(\mathbf{K}\_3\), then for
every formula \(A\) in the language of \(\mathbf{K}\_3\), \(v\_a (A) =
v\_{a'}(A)\) if \(v\_a (A) \in \{T, F\}\). Therefore, the function
\(v\_{a'}\) may be called a *supervaluation* of \(v\_a\). A
formula is then said to be *supertrue* under a valuation
function \(v\_a\) for \(\mathbf{K}\_3\) if it is true under every
supervaluation \(v\_{a'}\) of \(v\_a\), i.e., if \(v\_{a'}(A) = T\) for
every supervaluation \(v\_{a'}\) of \(v\_a\). The property of being
*superfalse* is defined analogously.
Since every supervaluation is a classical valuation, every classical
tautology is supertrue under every valuation function in
\(\mathbf{K}\_3\). Supervaluationism is, however, not truth-functional
with respect to supervalues. The supervalue of a disjunction, for
example, does not depend on the supervalue of the disjuncts. Suppose
\(a(p) = I\). Then \(a(\neg p) = I\) and \(v\_{a'} (p\vee \neg p) = T\)
for every supervaluation \(v\_{a'}\) of \(v\_a\). Whereas \((p\vee \neg
p)\) is thus supertrue under \(v\_a\), \((p\vee p)\) is *not*, because
there are superassignments \(a'\) of \(a\) with \(a'(p) = F\). An
argument against the charge that supervaluationism requires a
non-truth-functional semantics of the connectives can be found in
MacFarlane (2008) (cf. also other references given there).
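The contrast between \(p\vee \neg p\) and \(p\vee p\) can be checked by enumerating precisifications directly. The sketch below is ours; the formula encoding and helper names are illustrative.

```python
from itertools import product

# Supervaluation over a three-valued assignment into {T, I, F}: a sentence
# is supertrue iff it is true under every classical precisification.
def precisifications(a):
    """All classical assignments agreeing with a wherever a gives T or F."""
    gaps = [p for p, val in a.items() if val == 'I']
    for vals in product('TF', repeat=len(gaps)):
        yield {**a, **dict(zip(gaps, vals))}

def v(formula, a):
    """Classical evaluation (strong Kleene restricted to {T, F})."""
    op, *args = formula
    if op == 'atom': return a[args[0]]
    if op == 'not':  return 'F' if v(args[0], a) == 'T' else 'T'
    if op == 'or':   return 'T' if 'T' in (v(args[0], a), v(args[1], a)) else 'F'

def supertrue(formula, a):
    return all(v(formula, b) == 'T' for b in precisifications(a))

a = {'p': 'I'}
p = ('atom', 'p')
print(supertrue(('or', p, ('not', p)), a))   # True:  p v ~p
print(supertrue(('or', p, p), a))            # False: p v p
```

Both disjunctions have disjuncts whose supervalue is undefined in the same way, yet one is supertrue and the other is not, which is precisely the failure of truth-functionality at the level of supervalues.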
Although the possession of supertruth is preserved from the premises
to the conclusion(s) of valid inferences in supervaluationism, and
although it might be tempting to consider supertruth an abstract
object on its own, it seems that it has never been suggested to
hypostatize supertruth in this way, comparable to Frege's
*the True*. A sentence supertrue under a three-valued valuation
\(v\) just takes the Fregean value *the True* under every
supervaluation of \(v\). The advice not to confuse supertruth with
"real truth" can be found in Belnap (2009).
### 2.4 Suszko's thesis and anti-designated values
One might, perhaps, think that the mere existence of many-valued
logics shows that there exist infinitely many, indeed uncountably
many, truth values. However, this is not at all clear (recall the more
cautious use of terminology advocated by Gottwald).
In the 1970s Roman Suszko (1977: 377) declared many-valued
logic to be "a magnificent conceptual deceit". Suszko
actually claimed that "there are but two logical values, true
and false" (Caleiro *et al*. 2005: 169), a statement now
called *Suszko's Thesis*. For Suszko, the set of truth
values assumed in a logical matrix for a many-valued logic is a set of
"admissible referents" (called "algebraic
values") of formulas but not a set of logical values. Whereas
the algebraic values are elements of an algebraic structure and
referents of formulas, the logical value *true* is used to
define valid consequence: If every premise is true, then so is (at
least one of) the conclusion(s). The other logical value,
*false*, is preserved in the opposite direction: If the (every)
conclusion is false, then so is at least one of the premises. The
logical values are thus represented by a bi-partition of the set of
algebraic values into a set of designated values (truth) and its
complement (falsity).
Essentially the same idea has been taken up earlier by Dummett (1959)
in an influential paper, where he asks
>
>
> what point there may be in distinguishing between different ways in
> which a statement may be true or between different ways in which it
> may be false, or, as we might say, between degrees of truth and
> falsity. (Dummett 1959: 153)
>
>
>
Dummett observes that, first,
>
>
> the sense of a sentence is determined wholly by knowing the case in
> which it has a designated value and the cases in which it has an
> undesignated one,
>
>
>
and moreover,
>
>
> finer distinctions between different designated values or different
> undesignated ones, however naturally they come to us, are justified
> only if they are needed in order to give a truth-functional account of
> the formation of complex statements by means of operators. (Dummett
> 1959: 155)
>
>
>
Suszko's claim evidently echoes this observation by Dummett.
Suszko's Thesis is substantiated by a rigorous proof (the Suszko
Reduction) showing that every structural Tarskian consequence relation
and therefore also every structural Tarskian many-valued propositional
logic is characterized by a bivalent semantics. (Note also that
Richard Routley (1975) has shown that every logic based on a
\(\lambda\)-categorical language has a sound and complete bivalent
possible worlds semantics.) The dichotomy between designated values
and values which are not designated and its use in the definition of
entailment plays a crucial role in the Suszko Reduction. Nevertheless,
while it seems quite natural to construe the set of designated values
as a generalization of the classical truth value \(T\) in some of its
significant roles, it would not always be adequate to interpret the
set of non-designated values as a generalization of the classical
truth value \(F\). The point is that in a many-valued logic, unlike in
classical logic, "not true" does not always mean
"false" (cf., e.g., the above interpretation of
Kleene's logic, where sentences can be neither true nor
false).
In the literature on many-valued logic it is sometimes proposed to
consider a set of *antidesignated* values which does not
necessarily constitute the complement of the set of designated values
(see, e.g., Rescher 1969, Gottwald 2001). The set of antidesignated
values can be regarded as representing a generalized concept of
falsity. This distinction leaves room for values that are
*neither* designated *nor* antidesignated and even for
values that are *both* designated *and*
antidesignated.
Grzegorz Malinowski (1990, 1994) takes advantage of this proposal to
give a counterexample to Suszko's Thesis. He defines the notion
of a single-conclusion *quasi*-consequence \((q\)-consequence)
relation. The semantic counterpart of \(q\)-consequence is called
\(q\)-entailment. Single-conclusion \(q\)-entailment is defined by
requiring that if no premise is antidesignated, the conclusion is
designated. Malinowski (1990) proved that for every structural
\(q\)-consequence relation, there exists a characterizing class of
\(q\)-matrices, matrices which in addition to a subset
\(\mathcal{D}^{+}\) of designated values comprise a disjoint subset
\(\mathcal{D}^-\) of antidesignated values. Not every
\(q\)-consequence relation has a bivalent semantics.
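To see how \(q\)-entailment departs from a Tarskian consequence relation, consider the condition it imposes at a single valuation. The sketch below is ours, using three values with disjoint \(\mathcal{D}^{+} = \{T\}\) and \(\mathcal{D}^{-} = \{F\}\), the middle value \(I\) being neither; the names are illustrative.

```python
# The q-condition at one valuation: if no premise is antidesignated,
# the conclusion must be designated.
DPLUS, DMINUS = {'T'}, {'F'}    # disjoint designated / antidesignated sets

def q_ok(premise_vals, conclusion_val):
    if any(val in DMINUS for val in premise_vals):
        return True             # condition vacuously satisfied
    return conclusion_val in DPLUS

# q-entailment requires q_ok at every valuation.  Already the inference
# from p to p fails: at v(p) = I the premise is not antidesignated,
# yet the conclusion is not designated.
print([val for val in 'TIF' if not q_ok([val], val)])   # ['I']
```

So \(q\)-entailment is not even reflexive, which immediately shows that it falls outside the class of Tarskian consequence relations covered by the Suszko Reduction.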
In the supplementary document
Suszko's Thesis,
Suszko's reduction is introduced, Malinowski's
counterexample to Suszko's Thesis is outlined, and a short
analysis of these results is presented.
Can one provide evidence for a multiplicity of logical values? More
concretely, *is* there more than one logical value, each of which
may be taken to determine its own (independent) entailment relation? A
positive answer to this question emerges from considerations on truth
values as structured entities which, by virtue of their internal
structure, give rise to natural partial orderings on the set of
values.
## 3. Ordering relations between truth-values
### 3.1 The notion of a logical order
As soon as one admits that truth values come with valuation
*systems*, it is quite natural to assume that the elements of
such a system are somehow *interrelated*. And indeed, already
the valuation system for classical logic constitutes a well-known
algebraic structure, namely the two-element Boolean algebra with
\(\cap\) and \(\cup\) as meet and join operators (see the entry on the
mathematics of Boolean algebra).
In its turn, this Boolean algebra forms a lattice with a *partial
order* defined by \(a\le\_t b \textrm{ iff } a\cap b = a\). This
lattice may be referred to as *TWO*. It is easy to see that the
elements of *TWO* are ordered as follows: \(F\le\_t T\). This
ordering is sometimes called the *truth order* (as indicated by
the corresponding subscript), for intuitively it expresses an increase
in truth: \(F\) is "less true" than \(T\). It can be
schematically presented by means of a so-called Hasse-diagram as in
Figure 1.
![[a horizontal line segment with the left endpoint labeled 'F' and the right endpoint labeled 'T', below an arrow goes from left to right with the arrowhead labeled 't'.]](Figure1.png)
Figure 1: Lattice *TWO*
It is also well-known that the truth values of both Kleene's and
Priest's logic can be ordered to form a lattice
(*THREE*), which is diagrammed in Figure 2.
![[The same as figure 1 except the line segment has a point near the middle labeled 'I'.]](Figure2.png)
Figure 2: Lattice *THREE*
Here \(\le\_t\) orders \(T, I\) and \(F\) so that the intermediate
value \(I\) is "more true" than \(F\), but "less
true" than \(T\).
The relation \(\le\_t\) is also called a *logical order*,
because it can be used to determine key logical notions: logical
connectives and an entailment relation. Namely, if the elements of the
given valuation system \(\mathbf{V}\) form a lattice, then the
operations of meet and join with respect to \(\le\_t\) are usually seen
as the functions for conjunction and disjunction, whereas negation can
be represented by the inversion of this order. Moreover, one can
consider an entailment relation for \(\mathbf{V}\) as expressing
agreement with the truth order, that is, the conclusion should be at
least as true as the premises taken together:
\[
\tag{8}
D\vDash B\textrm{ iff }\forall v\_a[\Pi\_t\{ v\_a (A) \mid A \in D\} \le\_t v\_a (B)],
\]
where \(\Pi\_t\) is the lattice meet in the corresponding lattice.
The Belnap matrix \(\mathbf{B}\_4\) considered above can also be
represented as a partially ordered valuation system. The set of truth
values \(\{\mathbf{N}, \mathbf{T}, \mathbf{F}, \mathbf{B}\}\) from
\(\mathbf{B}\_4\) constitutes a specific algebraic structure--the
*bilattice* *FOUR*\(\_2\) presented in Figure 3 (see, e.g.,
Ginsberg 1988, Arieli and Avron 1996, Fitting 2006).
![[a graph with the y axis labeled 'i' and the x axis labeled 't'. A square with the corners labeled 'B' (top), 'T' (right), 'N' (bottom), and 'F' (left).]](Figure3.png)
Figure 3: The bilattice
*FOUR*\(\_2\)
This bilattice is equipped with *two* partial orderings; in
addition to a truth order, there is an information order \((\le\_i )\)
which is said to order the values under consideration according to the
information they give concerning a formula to which they are assigned.
Lattice meet and join with respect to \(\le\_t\) coincide with the
functions \(f\_{\wedge}\) and \(f\_{\vee}\) in the Belnap matrix
\(\mathbf{B}\_4\), \(f\_{{\sim}}\) turns out to be the truth order
inversion, and an entailment relation, which happens to coincide with
the matrix entailment, is defined by
(8).
*FOUR*\(\_2\) arises as a combination of two structures: the
approximation lattice \(A\_4\) and the logical lattice \(L\_4\) which
are discussed in Belnap 1977a and 1977b (see also, Anderson, Belnap
and Dunn 1992: 510-518).
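Using the subset representation of the four values, both orderings of *FOUR*\(\_2\) admit simple definitions: the information order is plain set inclusion, and the truth order increases with \(T\)-content and decreases with \(F\)-content. The sketch below is ours; it recovers the conjunction of the Belnap matrix as the meet under \(\le\_t\).

```python
# The two orderings of the bilattice FOUR_2 on subsets of {T, F}.
N, T, F, B = map(frozenset, ('', 'T', 'F', 'TF'))
VALUES = [N, T, F, B]

def le_t(x, y):
    """Truth order: no less T-content, no more F-content."""
    return ('T' in x) <= ('T' in y) and ('F' in y) <= ('F' in x)

def le_i(x, y):
    """Information order: plain set inclusion."""
    return x <= y

def meet_t(x, y):
    """Greatest lower bound under le_t, found by brute force."""
    lower = [z for z in VALUES if le_t(z, x) and le_t(z, y)]
    return max(lower, key=lambda z: sum(le_t(w, z) for w in lower))

print(meet_t(B, N) == F)   # True: matches f_& in the Belnap matrix
print(le_i(N, B), le_t(N, B))   # True False: the two orders differ
```

\(\mathbf{N}\) and \(\mathbf{B}\) are incomparable under \(\le\_t\) but lie at the bottom and top of \(\le\_i\), which is exactly the square arrangement shown in Figure 3.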
### 3.2 Truth values as structured entities. Generalized truth values
Frege (1892: 30) points out the possibility of "distinctions of
parts within truth values". Although he immediately specifies
that the word 'part' is used here "in a special
sense", the basic idea seems nevertheless to be that truth
values are not something amorphous, but possess some inner structure.
It is not quite clear how serious Frege is about this view, but it
seems to suggest that truth values may well be interpreted as complex,
structured entities that can be divided into parts.
There exist several approaches to semantic constructions where truth
values are represented as being made up from some primitive
components. For example, in some explications of Kripke models for
intuitionistic logic propositions (identified with sets of
"worlds" in a model structure) can be understood as truth
values of a certain kind. Then the empty proposition is interpreted as
the value *false*, and the maximal proposition (the set of all
worlds in a structure) as the value *true*. Moreover, one can
consider non-empty subsets of the maximal proposition as intermediate
truth values. Clearly, the intuitionistic truth values so conceived
are composed from some simpler elements and as such they turn out to
be complex entities.
Another prominent example of structured truth values are the
"truth-value objects" in topos models from category theory
(see the entry on
category theory).
For any topos \(C\) and for a \(C\)-object \(\Omega\) one can define a
truth value of \(C\) as an arrow \(1 \rightarrow \Omega\) ("a
subobject classifier for \(C\)"), where 1 is a terminal object
in \(C\) (cf. Goldblatt 2006: 81, 94). The set of truth values so
defined plays a special role in the logical structure of \(C\), since
arrows of the form \(1 \rightarrow \Omega\) determine central
semantical notions for the given topos. And again, these truth values
evidently have some inner structure.
One can also mention in this respect the so-called "factor
semantics" for many-valued logic, where truth values are defined
as ordered \(n\)-tuples of classical truth values \((T\)-\(F\)
sequences, see Karpenko 1983). Then the value \(3/5\), for example,
can be interpreted as a \(T\)-\(F\) sequence of length 5 with exactly
3 occurrences of \(T\). Here the classical values \(T\) and \(F\) are
used as "building blocks" for non-classical truth
values.
Moreover, the idea of truth values as compound entities nicely
conforms with the modeling of truth values considered above in
three-valued (Kleene, Priest) and four-valued (Belnap) logics as
certain subsets of the set of classical truth values. The latter
approach stems essentially from Dunn (1976), where a generalization of
the notion of a classical truth-value function has been proposed to
obtain so-called "underdetermined" and
"overdetermined" valuations. Namely, Dunn considers a
valuation to be a function not from sentences to elements of the set
\(\{T, F\}\) but from sentences to subsets of this set (see also Dunn
2000: 7). By developing this idea, one arrives at the concept of a
*generalized truth value function*, which is a function from
sentences into the *subsets* of some *basic set of truth
values* (see Shramko and Wansing 2005). The values of generalized
truth value functions can be called *generalized truth
values*.
By employing the idea of generalized truth value functions, one can
obtain a hierarchy of valuation systems starting with a certain
set-theoretic representation of the valuation system for classical
logic. The representation in question is built on a single initial
value which serves then as the designated value of the resulting
valuation system. More specifically, consider the singleton
\(\{\varnothing \}\) taken as the basic set subject to a further
generalization procedure. Set-theoretically the basic set can serve as
the universal set (the complement of the empty set) for the valuation
system \(\mathbf{V}^{\varnothing}\_{cl}\) introduced below. At the
first stage \(\varnothing\) comes with no specific intuitive
interpretation; it is only important to take it as some
distinct *unit*. Consider then the power-set of \(\{\varnothing
\}\) consisting of exactly two elements: \(\{\{\varnothing \},
\varnothing \}\). Now, these elements can be interpreted as
Frege's *the True* and *the False*, and thus it is
possible to construct a valuation system for classical logic,
\(\mathbf{V}^{\varnothing}\_{cl} = \langle \{\{\varnothing \},
\varnothing \}, \{\{\varnothing \}\}, \{f\_{\wedge}, f\_{\vee},
f\_{\rightarrow}, f\_{\sim}\}\rangle\), where the functions
\(f\_{\wedge}, f\_{\vee}, f\_{\rightarrow}, f\_{\sim}\) are defined as
follows (for \(X, Y \in \{\{\varnothing \}, \varnothing \}\)):
\[
\begin{align}
f\_{\wedge}(X, Y) &= X\cap Y; \\
f\_{\vee}(X, Y) &= X\cup Y; \\
f\_{\rightarrow}(X, Y) &= X^{c}\cup Y; \\
f\_{\sim}(X) &= X^{c}.
\end{align}
\]
It is not difficult to see that for any
assignment \(a\) relative to \(\mathbf{V}^{\varnothing}\_{cl}\), and
for any formulas \(A\) and \(B\), the following holds:
\[\begin{align}
v\_a (A\wedge B) = \{\varnothing \}&\Leftrightarrow v\_a (A) = \{\varnothing \} \text{ and } v\_a (B) = \{\varnothing \}; \\
v\_a (A\vee B) = \{\varnothing \}&\Leftrightarrow v\_a (A) = \{\varnothing \} \text{ or } v\_a (B) = \{\varnothing \}; \\
v\_a (A\rightarrow B) = \{\varnothing \}&\Leftrightarrow v\_a (A) = \varnothing \text{ or } v\_a (B) = \{\varnothing \}; \\
v\_a (\sim A) = \{\varnothing \}&\Leftrightarrow v\_a (A) = \varnothing.
\end{align}\]
This shows that \(f\_{\wedge}, f\_{\vee}, f\_{\rightarrow}\) and
\(f\_{\sim}\) determine exactly the propositional connectives of
classical logic. One can conveniently mark the elements
\(\{\varnothing \}\) and \(\varnothing\) in the valuation system
\(\mathbf{V}^{\varnothing}\_{cl}\) by the classical labels \(T\) and
\(F\). Note that within \(\mathbf{V}^{\varnothing}\_{cl}\) it is fully
justifiable to associate \(\varnothing\) with falsity, taking into
account the virtual *monism of truth* characteristic for
classical logic, which treats falsity not as an independent entity but
merely as the absence of truth.
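This set-theoretic reading of the classical connectives can be sketched in code (an illustrative sketch, not from the entry; all names are ad hoc): truth values are the subsets of the basic set \(\{\varnothing \}\), and the truth functions are plain set operations, with \(\{\varnothing \}\) designated.

```python
# Sketch of the valuation system V^∅_cl: truth values are subsets of
# the basic singleton {∅}, and the connectives are set operations.
EMPTY = frozenset()            # ∅, playing the role of F (the False)
T_VAL = frozenset({EMPTY})     # {∅}, playing the role of T (the True)
UNIVERSE = T_VAL               # the basic set {∅} serves as universal set

def complement(x):
    return UNIVERSE - x

def f_and(x, y): return x & y                  # intersection
def f_or(x, y):  return x | y                  # union
def f_imp(x, y): return complement(x) | y      # complement-union
def f_not(x):    return complement(x)          # complement

# Check: these functions reproduce the classical truth tables,
# with {∅} as the designated value.
for x in (T_VAL, EMPTY):
    for y in (T_VAL, EMPTY):
        assert (f_and(x, y) == T_VAL) == (x == T_VAL and y == T_VAL)
        assert (f_or(x, y) == T_VAL) == (x == T_VAL or y == T_VAL)
        assert (f_imp(x, y) == T_VAL) == (x == EMPTY or y == T_VAL)
    assert (f_not(x) == T_VAL) == (x == EMPTY)
```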
Then, by taking the set \(\mathbf{2} = \{F, T\}\) of these classical
values as the basic set for the next valuation system, one obtains the
four truth values of Belnap's logic as the power-set of the set
of classical values \(\mathcal{P}(\mathbf{2}) = \mathbf{4}: \mathbf{N}
= \varnothing\), \(\mathbf{F} = \{F\} (= \{\varnothing \})\),
\(\mathbf{T} = \{T\} (= \{\{\varnothing \}\})\) and \(\mathbf{B} =
\{F, T\} (= \{\varnothing, \{\varnothing \}\})\). In this way,
Belnap's four-valued logic emerges as a certain generalization
of classical logic with its two Fregean truth values. In
Belnap's logic truth and falsity are considered to be
full-fledged, self-sufficient entities, and therefore \(\varnothing\)
is now to be interpreted not as falsity, but as a real truth-value gap
(*neither* true *nor* false). The dissimilarity of
Belnap's truth and falsity from their classical analogues is
naturally expressed by passing from the corresponding classical values
to their singleton-sets, indicating thus their new interpretations as
*false only* and *true only*. Belnap's
interpretation of the four truth values has been critically discussed
in Lewis 1982 and Dubois 2008 (see also the reply to Dubois in Wansing
and Belnap 2010).
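The passage from the two classical values to Belnap's four values can likewise be sketched as one application of the power-set operation (an illustrative sketch; the labels follow the entry, the helper `powerset` is ad hoc):

```python
from itertools import combinations

def powerset(s):
    """All subsets of a frozenset, as a set of frozensets."""
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

F, T = "F", "T"
TWO = frozenset({F, T})        # the set 2 = {F, T} of classical values
FOUR = powerset(TWO)           # Belnap's 4 = P(2)

N      = frozenset()           # neither true nor false (truth-value gap)
ONLY_F = frozenset({F})        # false only
ONLY_T = frozenset({T})        # true only
B      = frozenset({F, T})     # both true and false (glut)

assert FOUR == {N, ONLY_F, ONLY_T, B}
```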
Generalized truth values have a strong intuitive background,
especially as a tool for the rational explication of incomplete and
inconsistent information states. In particular, Belnap's
heuristic interpretation of truth values as information that
"has been told to a computer" (see Belnap 1977a,b; also
reproduced in Anderson, Belnap and Dunn 1992, §81) has been
widely acknowledged. As Belnap points out, a computer may receive data
from *various* (maybe independent) sources. Belnap's
computers have to take into account various kinds of information
concerning a given sentence. Besides the standard (classical) cases,
when a computer obtains information either that the sentence is (1)
true or that it is (2) false, two other (non-standard) situations are
possible: (3) nothing is told about the sentence or (4) the sources
supply inconsistent information, information that the sentence is true
and information that it is false. And the four truth values from
\(\mathbf{B}\_4\) naturally correspond to these four situations: there
is no information that the sentence is false and no information that
it is true \((\mathbf{N})\), there is *merely* information that
the sentence is false \((\mathbf{F})\), there is *merely*
information that the sentence is true \((\mathbf{T})\), and there is
information that the sentence is false, but there is also information
that it is true \((\mathbf{B})\).
Joseph Camp (2002: 125-160) provides Belnap's four values
with quite a different intuitive motivation by developing what he
calls a "semantics of confused thought". Consider a
rational agent, who happens to mix up two very similar objects (say,
\(a\) and \(b)\) and ambiguously uses one name (say,
'\(C\)') for both of them. Now let such an agent assert
some statement, saying, for instance, that \(C\) has some property.
How should one evaluate this statement if \(a\) has the property in
question whereas \(b\) lacks it? Camp argues against ascribing truth
values to such statements and puts forward an "epistemic
semantics" in terms of "profitability" and
"costliness" as suitable characterizations of sentences. A
sentence \(S\) is said to be "profitable" if one would
profit from acting on the belief that \(S\), and it is said to be
"costly" if acting on the belief that \(S\) would generate
costs, for example as measured by failure to achieve an intended goal.
If our "confused agent" asks some external observers
whether \(C\) has the discussed property, the following four answers
are possible: 'yes' (mark the corresponding sentence with
\(\mathbf{Y})\), 'no' (mark it with \(\mathbf{N})\),
'cannot say' (mark it with **?**),
'yes' and 'no' (mark it with
**Y&N**). Note that the external observers, who
provide answers, are "non-confused" and have different
objects in mind as to the referent of '\(C\)', in view of
all the facts that may be relevant here. Camp conceives these four
possible answers concerning epistemic properties of sentences as a
kind of "semantic values", interpreting them as follows:
the value \(\mathbf{Y}\) is an indicator of profitability, the value
\(\mathbf{N}\) is an indicator of costliness, the value
**?** is no indicator either way, and the value
**Y&N** is both an indicator of profitability and an
indicator of costliness. A strict analogy between this
"semantics of confused reasoning" and Belnap's four
valued logic is straightforward. And indeed, as Camp (2002: 157)
observes, the set of implications valid according to his semantics is
exactly the set of implications of the entailment system \(E\_{fde}\).
In Zaitsev and Shramko 2013 it is demonstrated how ontological and
epistemic aspects of truth values can be combined within a joint
semantical framework. Kapsner (2019) extends Belnap's framework
by two additional values "Contestedly-True" and
"Contestedly-False" which allows for new outcomes for
disjunctions and conjunctions between statements with values
\(\mathbf{B}\) and \(\mathbf{N}\).
The conception of generalized truth values has its purely logical
import as well. If one continues the construction and applies the idea
of generalized truth value functions to Belnap's four truth
values, then one obtains further valuation systems which can be
represented by various *multilattices*. One arrives, in
particular, at *SIXTEEN*\(\_3\), the *trilattice*
of 16 truth-values, which can be viewed as a basis for a logic of
computer networks (see Shramko and Wansing 2005, 2006; Kamide and
Wansing 2009; Odintsov 2009; Wansing 2010; Odintsov and Wansing 2015;
cf. also Shramko, Dunn, Takenaka 2001). The notion of a multilattice
and *SIXTEEN*\(\_3\) are discussed further in the supplementary
document
Generalized truth values and multilattices.
A comprehensive study of the conception of generalized logical values
can be found in Shramko and Wansing 2011.
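The hierarchy itself can be generated by iterating the power-set operation on the singleton \(\{\varnothing \}\) (a minimal illustrative sketch; the helper `powerset` is ad hoc): one step yields the two classical values, a second yields Belnap's four values, and a third yields the sixteen values underlying *SIXTEEN*\(\_3\).

```python
from itertools import combinations

def powerset(s):
    """All subsets of a frozenset, as a frozenset of frozensets."""
    items = list(s)
    return frozenset(frozenset(c) for r in range(len(items) + 1)
                     for c in combinations(items, r))

level = frozenset({frozenset()})   # the basic singleton {∅}
sizes = []
for _ in range(4):
    sizes.append(len(level))
    level = powerset(level)

# cardinalities along the hierarchy: {∅} -> 2 -> 4 -> 16
assert sizes == [1, 2, 4, 16]
```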
## 4. Concluding remarks
Gottlob Frege's notion of a truth value has become part of the
standard philosophical and logical terminology. The notion of a truth
value is an indispensable instrument of realistic, model-theoretic
approaches to semantics. Indeed, truth values play an essential role
in applications of model-theoretic semantics in areas such as, for
example, knowledge representation and theorem proving based on
semantic tableaux, which could not be treated in the present entry.
Moreover, considerations on truth values give rise to deep ontological
questions concerning their own nature, the feasibility of fact
ontologies, and the role of truth values in such ontological theories.
There also exist well-motivated theories of generalized truth values
that lead far beyond Frege's classical values *the True*
and *the False*. (For various directions of further logical and
philosophical investigations in the area of truth values see Shramko &
Wansing 2009b, 2009c.)
## 1. Tsongkhapa's Life
### 1.1 Overview
Biographical information about Tsongkhapa comes from clues in his own
writing, and primarily from the hagiography written by his student
Kedrup Pelzangpo (*mKhas grub dpal bzang po*) (1385-1438)
called *Stream of Faith* (*Dad pa'i 'jug ngog*) (partial
translation in Thurman
1982).[1]
Tsongkhapa's life falls roughly into an earlier and later period. The
later period is defined by a series of publications, beginning in
about 1400, which systematically present his mature philosophy.
Tsongkhapa, influenced by the divisions used by the editors of the
Kanjur, treats non-tantric and tantric sources separately. His
philosophical views on tantra, and, to a certain extent his work on
ethics fall naturally into separate categories. The later period of
his life includes a period of institution building, possibly with an
eye to the founding of a new school or sect.
### 1.2 Detailed account
The name Tsongkhapa derives from Tsong kha, an ancient name for a part
of the A mdo region of Greater Tibet (*Bod chen*) now included
in Qinghai Province of the People's Republic of China, and the Tibetan
suffix *pa* that works as an agentive nominalizing particle.
His given name is Losang (sometimes written Lozang) Drakpa (*bLo
bzang grags pa*). He is also known by the honorific title Je
Rinpoche (*rJe rin po che*) ("Precious Lord"). He
was born, probably to semi-nomadic farmers, in a settlement now
incorporated into the outskirts of the Chinese city of Xining. His
birthplace is marked by the popular Kumbum (*sKu 'bum*)
monastery.
In his teens (1372-73), Tsongkhapa travelled from Tsong kha to
Central (*dBus*) Tibet (*Bod*) where he remained until
his death in 1419. He arrived as a young man in Central Tibet at the
end of a long flowering of intellectual activity called "the
later diffusion [of Buddhism]" (Tib. *spyi dar*) that
began with translator Rin chen bzang po (958-1055) and the
scholar-saint Dipamkara-srijnana
Atisa (980-1054), and ended with the most important editor
of the Kanjur, Buton (*Bu ston Rin chen grub*),
(1290-1364).
According to Kedrup, on his arrival in Central Tibet Tsongkhapa first
studied Tibetan medicine, then a traditional Buddhist curriculum of
*abhidharma*, tenet systems (Tib. *grub mtha'*) focusing
on the Middle Way and Mind Only (Sk. *cittamatra* also
called *yogacara*) philosophy, Buddhist ethics (Tib.
*sdom gsum*), and epistemology (Sk.
*pramana*). His early studies were pursued mainly in
institutions affiliated with the two dominant scholarly traditions
(Tib. *lugs*) of the time: the Sangphu (*gSang phu ne'u
tog*) tradition founded by Ngok (*rNgog bLo ldan shes rab*)
(1059-1109), and the Sakya (*Sa skya*) epistemological
tradition based primarily on the works of the Sa skya
pandi ta (1182-1251). He also studied and practiced
tantric Buddhism.
In his early years he wrote a number of essays on topics in
*abhidharma* (Apple 2008), a detailed investigation of the
*alaya-vijnana* (Sk.) ("foundation,
storehouse, basis-of all consciousness") (Sparham 1993), and an
important treatise, *Golden Garland* (*Legs bshad gser
phreng*), on the *Perfection of Wisdom* (Sk.
*prajna-paramita*) literature based on
the codification of its topics in the *Ornament for the Clear
Realizations* (*Abhisamayalamkara*)
(Sparham 2007-2013).
After completing his *Golden Garland* in 1388-89,
Tsongkhapa spent a period of some ten years removed from the hub of
intellectual activity. He engaged in meditation and ritual religious
exercises in Southern Tibet (*Lho kha*) where, as evidenced in
his short autobiographical writing from this transitional period
(Thurman 1982), and corroborated by traditional sources (Kaschewsky
1971, Vostrikov 1970), he resorted to a communion or dialogue with
Manjusri--the iconographic representation
of perfect knowledge in the Buddhist pantheon. Through the voice of
Manjusri he articulated a hierarchical system of
philosophies culminating in what he characterized as a pristine,
Middle Way, \*Prasangika-madhyamaka (Tib. *dBu ma thal
'gyur pa*) that avoided over-reification (of the absolute) and
over-negation (of the conventional).
Tsongkhapa framed his insights not as original contributions, but as a
rediscovery of meanings already revealed by the Buddha. In all his
works he characterizes his philosophy as identical to the Buddha's.
Further, he says his philosophy is based on Nagarjuna's and
Nagarjuna's follower Aryadeva's (third-fourth century)
explanation of what the Buddha said. His philosophy gets its name
\*Prasangika ("those who reveal the unwelcome
consequences [inherent in other's assertions]") from the
importance it gives to Buddhapalita's (late fifth century)
explication of Nagarjuna's work, and to Candrakirti's
defense of Buddhapalita's explanation, in the face of criticism
by Bhaviveka.
Tsongkhapa presented his mature philosophy in a series of volumes
during the later period of his life, beginning with the publication of
his most famous work, the *Great Exposition of the Stages of the
Path* (*Lam rim chen mo*) in 1402, at the age of 46. This
was followed by *Essence of Eloquence* (*Legs bshad snying
po*) and *Ocean of Reasoning* (*Rigs pa'i rgya
mtsho*) (1407-1408), the *Medium-Length Exposition of the
Stages of the Path* (*Lam rim 'bring*) in 1415, and, in the
last years of his life, his *Elucidation of the Intention*
(*dGongs pa rab gsal*).
Together, in this series of works written over a seventeen-year
period, he succeeded, to a great extent, in setting the agenda for the
following centuries, with later writers taking positions for and
against his arguments. As is so often the case, his stature pushed
much of earlier Tibetan intellectual history into the shadows, until
its rediscovery by the Eclectic Movement (Tib. *Ris med*) in
Derge (*sDe dge*) in the late 18th century.
Tsongkhapa wrote his major works on tantra and ethics at the same time
as his major philosophical works. Volumes 3-12 of his collected
works in 18 or 19 volumes deal exclusively with topics based on
tantric sources. In 1405 he completed his *Great Exposition of
Tantra* (*sNgags rim chen mo*) (Hopkins 1980, Yarnall 2013)
a companion volume to his *Great Exposition*, where, in harmony
with his philosophy, he argued that tantra is defined neither by
special insight (Sk. *vipasyana*) nor by Mahayana
altruism (Sk. *bodhicitta*), but solely by deity yoga (Tib.
*lha'i rnal 'byor*) (Hopkins 1980).
In 1402, Tsongkhapa, with his teacher Rendawa (*Red mda' ba gzhon
nu blo gros*), (1349-1412) (Roloff 2009) and others, at the
temple of Namstedeng (*rNam rtsed ldeng*/*lding*)
convened a gathering of monks with the intention of reinvigorating the
Buddhist order. From this came Tsongkhapa's short, but influential
works on basic Buddhist morality (Sk. *pratimoksa*)
that in turn laid the foundation for the importance some Gelukpa
monasteries would place on a stricter adherence to the monastic code.
In 1403 his influential works on ethical codes for bodhisattvas
(*Byang chub gzhung lam*) (Tatz 1987) and for tantric
practitioners (*rTsa ltung rnam bshad*) (Sparham 2005)
appeared. His separate explanations of the three ethical codes (Tib.
*sdom gsum*) are distinguished by the centrality they give to
basic morality, the importance the bodhisattva's code retains in
tantra, and in the antinomian tantras the presence of a separate
*pratimoksa*-like ordination ritual built around a
codification of ethical conduct specific to tantric
practitioners.
By 1409, at the age of 52, his place in Tibetan society was
sufficiently established that he could garner the support and
sponsorship necessary for a successful rejuvenation of the central
temple in Lhasa, for establishing a new-year prayer festival (*smon
lam chen mo*), and for completion of a large new monastery, Ganden
(*dGa' ldan*), where he lived for much of his subsequent life
until his death there in 1419. He inspired two of his students Tashi
Palden (*bKra shis dpal ldan*) (1379-1449) and Shakya
Yeshey (*Ye shes*) (1354-1435) to found Drepung ('Bras
spungs) monastery in 1416, and Sera Monastery in 1419, respectively.
These, with Ganden, would later become the three largest and most
powerful Gelukpa monasteries, and indeed, the biggest monasteries in
the world. In 1407 and 1413 the Ming dynasty Yongle Emperor recognized
Tsongkhapa's growing fame and importance by inviting him to the
Chinese court, as was the custom.
## 2. Early Period
### 2.1 *Explanation of the Difficult Points* (*Kun gzhi dka' 'grel*)
Tsongkhapa's most important works from his early period are his
*Explanation of the Difficult Points* (*Kun gzhi dka'
'grel*) and *Golden Garland* (*Legs bshad gser
phreng*).
In the former he gives a detailed explanation of the
*alaya-vijnana* ("foundation
consciousness")--according to Tsongkhapa the
distinctive eighth consciousness in the eight-consciousness system of
Asanga's (fourth century) Indian Yogacara
Buddhism.
Tsongkhapa opens his work with a brief discussion of the relationship
between the views of Nagarjuna and Asanga. Tsongkhapa
then discovers in the different usages of the term
*alaya-vijnana* and its near synonyms a
negative connotation conveying the cause and effect nature of repeated
suffering existences (Sk. *samsara*). He surveys the
relevant literature, limiting his discussion of *gotra*
("lineage," in the sense of the innate ability a person
has to grow up into a fully enlightened being) to the presentation
given in Asanga's *Bodhisattva Levels*
(*Bodhisattva-bhumi*). There the "residual
impression left by listening [to the correct exposition of the
truth]" (Sk. *sruta-vasana*) is the
"seed" (Sk. *bija*) that matures into
enlightenment. It is little more than the innate clarity of mind when
it is freed from limitation by habituation to a correct vision of
reality.
Tsongkhapa limits the sources for his explanation to non-tantric
works, almost all by persons he will later describe as Indian
Yogacara writers, demonstrating thereby the influence the
categories used in the recently redacted Kanjur and Tenjur (Tib.
*bsTan 'gyur*, the name for the commentaries in Tibetan
translation included in the canon) had on his thinking.
When Tsongkhapa arrived in Central Tibet, the pressing philosophical
issue of the day was Dolpopa's (*Dol po pa Shes rab rgyal
mtshan*) (1292-1361) Great Middle Way (Tib. *dBu ma chen
po*, Sk. \**Maha-madhyamaka*) philosophy. It
propounded a hermeneutics based on the principle that a Buddha will
always tell you clearly what he means. Based on a wide array of
sources, and without making a clear distinction between tantric and
non-tantric works, the central tenet of the Great Middle Way is that
an absolute, pure, transcendental mind endowed with all good qualities
exists radically other than the ordinary world of appearance. It is
the archetypical Tibetan Buddhist *gzhan stong* (pronounced
shentong, "emptiness of other") philosophy.
Dolpopa, stressing the hermeneutic stated above, says
Nagarjuna and Asanga, writing during the golden age,
have the same philosophy, the Great Middle Way clearly articulated by
the Buddha. Dolpopa distinguishes a pure, transcendental knowledge
(Tib. *kun gzhi ye shes*) that is quite other than the defiled
foundation consciousness (Tib. *kun gzhi rnam shes*), and
equates the former with his single absolute, endowed with all the
qualities of a Buddha.
Later, in his mature period, as explained below, Tsongkhapa will be
explicit. He will state categorically that Asanga's
Yogacara Buddhism is quite separate from, and in profundity
inferior to, Nagarjuna's Middle Way school (Wangchuk 2013),
removing all references to a Great Middle Way school. He will say the
*alaya-vijnana*, devoid of subject-object
bifurcation, provides the essential mind-stuff for defilement
(=*samsara*) and carries seeds for purification
(=*nirvana*) in a Yogacara system that is
fundamentally wrong, and at odds with the way things actually are. He
will give the *alaya-vijnana* only a heuristic
value, and say a correct view (the \*Prasangika-madhyamaka
view) of the workings of psycho-physical reality (the Buddhist
*skandha*s) totally invalidates it. As Jeffrey Hopkins (2002,
2006b) has shown conclusively, such statements constitute an explicit
rejection of Dolpopa's view, a position Tsongkhapa still is
formulating in early works like the *Explanation of the Difficult
Points*.
In his *Explanation of the Difficult Points*, Tsongkhapa is
certain he does not accept the view of Dolpopa, but he is still unsure
exactly what he will put in its place. He ignores Dolpopa's pure,
transcendental knowledge called *alaya*, mentioning it
obliquely, only in passing, during a final brief survey of views based
on Chinese sources; he leaves unanswered the question of the final
ontological status of the eighth consciousness, but appears to accept
it as having a provisional reality; and he allows the views of
Asanga set forth in the *Bodhisattva-bhumi* to
retain an authority he will later explicitly deny.
### 2.2 *Golden Garland* (*Legs bshad gser phreng*)
Tsongkhapa's *Golden Garland* is his most important early work.
It takes the form of a long explanation of the *Perfection of
Wisdom* sutras, given pride of place in the Kanjur as the
foremost words of the Buddha, after the Vinaya section (the
codifications of ethical conduct). It is a word-by-word commentary on
the topics in the *Ornament for the Clear Realizations*
(*Abhisamayalamkara*). Tsongkhapa bases his
explanation on two sub-commentaries by Arya Vimuktisena (sixth
century?) and Hari Bhadra (end of the eighth century). It propounds a
philosophy that later Gelukpas, following the taxonomy developed in
the mature works of Tsongkhapa, call
Yogacara-svatantrika-madhyamaka, in essence a Middle
Way that incorporates many of the categories of Yogacara
Buddhism, yet does not have the authority of Candrakirti's
Prasangika interpretation.
A great deal has been written on the difference between
Prasangika and Svatantrika (Dreyfus and McClintock
2003). Historically, the Gelukpa's
Yogacara-svatantrika-madhyamaka is another name for the
mainstream philosophical school that developed in Tibet, based
primarily on the teaching of Santaraksita and his
student Kamalasila (both fl. end of the eighth century), the
most influential of all the Indian panditas involved in
the dissemination of Buddhist ideas to Tibet. Later Gelukpa
scholastics, basing themselves on the mature works of Tsongkhapa, are
unequivocal that Yogacara-svatantrika-madhyamaka, like
the Yogacara philosophy of Asanga, is wrong, and has
only heuristic value. At the same time, they tacitly acknowledge its
centrality by taking this "wrong" philosophy as the
foundation of their studies.
In the *Golden Garland*, Tsongkhapa, following
Santaraksita's school as it developed in Tibet,
asserts that all bases (Sk. *vastu*) (the psycho-physical
factors locating the person as they begin their philosophical career),
all mental states along the course of the path (Sk.
*marga*), and the final result, are illusory, because they
lack any essential nature. But it is not yet clear how the basis
relates to the final outcome (the result). As explained below,
Tsongkhapa will solve the problem in his mature works by a clear,
radical, nihilistic leap denying any essential nature whatsoever,
anywhere, to anything, thereby implicitly and explicitly devaluing (in
absolutist terms) the ontological status of the final, pure, unified
state, relegating it to just one more conventional,
dependently-originated thing.
In the *Golden Garland*, Tsongkhapa has not yet fully developed
his mature, systematic philosophy. He is still content with
discrediting Dolpopa and surveying different points of view. He uses
Yogacara terminology to present his own opinion, and uses a
language to describe enlightened (and enlightening) knowledge that
retains for it a separateness (through its freedom from all mental
construction) from all other mental states. The primary object of
philosophical inquiry in the *Golden Garland* is the unity of
the perceiving subject in enlightenment, a topic that reflects the
language and concerns of the
Yogacara-svatantrika-madhyamaka school.
Tsongkhapa in this early work is still comfortable with characterizing
the ultimate as an absence of elaboration (Tib. *spros bral*,
Sk. *nisprapanca*) beyond the four extremes, later
set forth as the orthodox view of the Sakya sect. Tsongkhapa's
rejection of this characterization in his later work occasions a
strong rejection of his view by writers like Gorampa (*Go rab
'byams pa bsod nams sge ge*) (1429-1489) (Cabezon and Dargyay
2006).
The *Golden Garland* is driven in no small measure by an agenda
dedicated to discrediting Dolpopa's *Perfection of Wisdom*
commentaries. This is evident from a comparison of Dolpopa and
Tsongkhapa's sources and views. While Dolpopa excoriates Arya
Vimuktisena and Hari Bhadra and says they belong to the degenerate
age, propounding doctrines harmful to golden age Buddhism, the
*Golden Garland* privileges the commentaries by Arya
Vimuktisena and Hari Bhadra. Dolpopa says the two *Detailed
Explanations of the Perfection of Wisdom Sutras* (Sk.
*Brhattika*, Tib. *Gnod
'jom*) by "the foremost, Great Middle Way master
Vasubandhu" stand as the primary scriptural authority for the
Great Middle Way doctrine, superior even to the partial truths
revealed by Nagarjuna. The *Golden Garland* disputes
that Vasubandhu is the author of the two *Detailed
Explanations*, and says, regardless of their author, they are
simply restatements of Nagarjuna's Middle Way. Finally,
Dolpopa propounds a doctrine that holds the basis, path, and result
are eternally the same, and all else is thoroughly imaginary, but the
*Golden Garland* is certain that such a view is wrong, and
directly cites a passage from Dolpopa's work, though without naming
its author, saying, "since no other great path-breaker, [i.e.,
founder of an original philosophy] besides him has ever asserted [what
Dolpopa says], learned persons are right to cast out [what he says]
like a gob of spit" (Sparham 2007-2013 vol. 1, 425).
## 3. Mature Period
Tsongkhapa formulated the philosophy for which he is best known some
ten years after finishing the *Golden Garland*. He
characterized the vision that led him to his philosophy as a pristine
Middle Way. Reflecting back on his insight he would write in his
*In Praise of Dependent Origination* (*brTen 'brel bstod
pa*) (translation by Tupten Jinpa, undated)
>
>
> "Nonetheless, before the stream of this life
>
>
> Flowing towards death has come to cease
>
>
> That I have found slight faith in you--
>
>
> Even this I think is fortunate.
>
>
> Among teachers, the teacher of dependent origination,
>
>
> Amongst wisdoms, the knowledge of dependent origination--
>
>
> You, who're most excellent like the kings in the worlds,
>
>
> Know this perfectly well, not others."
>
>
>
Tsongkhapa first sets forth this mature philosophy linking dependent
origination and emptiness in a special section at the end of his
*Great Exposition*. There, in the context of an investigation
into the end-product of an authentic, intellectual investigation into
the truly real (Sk. *tattva*, Tib. *de kho na*), and
into the way things finally are at their deepest level (Sk.
*tathata*, Tib. *de bzhin
nyid*),[2]
he says you have to identify the object of negation (Tib. *dgag
bya*), i.e., the last false projection to appear as reality, by
avoiding two errors: going too far (Tib. *khyab che ba*) and
not going far enough (Tib. *khyab chung ba*) (Dreyfus et al.
2011, chapter 5).
That his project does not presuppose a privileged, soul-like, state of
consciousness that surveys and categorizes phenomena is evident from
his assertion that these two errors originate in a latent
psychological tendency in philosophers: first, to hold on to vestiges
of truth (reality) where there is in fact an absence of it, and
second, to fall back on some version of truth (reality) in ordinary
appearance after failing to avoid nihilism in a futile quest for
meaning.
This section of Tsongkhapa's work is based on the opening verse of
Nagarjuna's *Root Verses on the Middle Way*
(*Mula-madhyamaka-karika*). Tsongkhapa's
*Ocean of Reasoning* (Garfield and Samten 2006) is a detailed
exegesis of this work. In it, Nagarjuna says nothing is
produced because it is not produced from itself, other, both or
neither. In Tsongkhapa's *Great Exposition*, the subject of
Nagarjuna's syllogism is primarily the embodied person,
reflecting the trend that runs throughout Buddhist philosophical
literature, in general, to stress the application of philosophical
analysis to praxis. Furthermore, for Tsongkhapa, Nagarjuna's
statement is about a first moment in the continuum of the
investigating person. By implication, reflecting the continuing
influence of Santaraksita, it is the person engaged in
the philosophical investigation in the immediate moment. It is
axiomatic for Buddhist philosophers that there is no independent
subject (soul) other than the five heaps (*skandhas*), so the
subject of the syllogism becomes the complex or continuum of
sense-faculties in the present instant. Such a subject makes
excellent sense of Tsongkhapa's otherwise confusing juxtaposition of
Nagarjuna's analysis with Dharmakirti's epistemology,
in which it is axiomatic that direct sense perception is an
authoritative means of knowledge (Sk.
*pramana*).
Nagarjuna's statement that such a person was never produced
seems paradoxical, to say the least, because it appears to undercut
the reality of the very intellectual act in which the thinker is
self-evidently engaged. Tsongkhapa makes it abundantly clear that in
his view not only is the intellectual act itself utterly devoid of any
essential reality, even the sense-faculties (the complex eye, ear and
so on) also lack any essential reality, never mind the sense data they
convey to the knowing individual.
Tsongkhapa's choice of the opening lines of Nagarjuna's
work as a point of departure for his philosophy stems from his belief
that he has gained a true insight into dependent origination
(*pratitya-samutpada*). This insight, he believes,
resolves the paradox between his apparent nihilism and his insistence
on the weight of authoritative statements and cognitions. Thus he says
in his *Great Exposition*,
>
>
> "Therefore, the intelligent should develop an unshakeable
> certainty that *the very meaning of emptiness is
> dependent-arising*... Specifically, this is the subtle point
> that the noble Nagarjuna and his spiritual son Aryadeva
> had in mind and upon which the master Buddhapalita and the
> glorious Candrakirti gave fully comprehensive commentary. This is
> how dependent-arising bestows certain knowledge of the absence of
> intrinsic existence; this is how it dawns on you that it is things
> which are devoid of intrinsic existence that are causes and
> effects" (Lamrim Chenmo Translation Committee Vol. 3, 139).
>
>
>
Tsongkhapa set out the ramifications of his view in "eight
points that are difficult to understand" (Tib. *dka' gnas
brgyad*) first enumerated in notes (*rjes byang*) to one of
his lectures by his contemporary and disciple, Darma rin chen (also
called *rGyal tshab rje* "the regent"). Tillemans
(1998) has nicely rendered the opening of the text as follows,
>
>
> "Concerning the [ontological] bases, there are the following
> [three points]: (1) the conventional nonacceptance of particulars and
> of (2) the storehouse consciousness, and (3) the acceptance of
> external objects. Concerning the path, there are the following [four
> points]: (4) the nonacceptance of autonomous reasonings as being means
> for understanding reality and (5) the nonacceptance of self-awareness;
> (6) the way in which the two obscurations exist; (7) how it is
> accepted that the Buddha's disciples and those who become awakened
> without a Buddha's help realize that things are without any
> own-nature. Concerning the result, there is: (8) the way in which the
> *buddhas* know [conventional] things in their full extent.
> Thus, there are four accepted theses and four unaccepted
> theses."
>
>
>
They have been discussed by a number of writers (Ruegg 2002, Cabezon
1992, 397, Cabezon and Dargyay 2006, 76-93, 114-201).
I have surveyed Tsongkhapa's early work on the
*alaya-vijnana* (Tillemans's "storehouse
consciousness") above. For Tsongkhapa, until the correct view is
understood, it is necessary to assert the
*alaya-vijnana* in order to save cause and
effect, and avoid falling into nihilism. When the correct view is
obtained, the *alaya-vijnana* stands
invalidated (Tillemans's second point). For Tsongkhapa, it is only
possible to understand cause and effect (in particular, as experienced
in the immediate moment by an embodied person), by correctly
understanding that dependent origination precludes any essential
existence whatsoever.
Tsongkhapa asserts that a specific mark (Sk.
*sva-laksana*), for instance a mark that makes blue
blue, instead of red or any other color, has not even a nominal
existence. This is the first of Tillemans's eight difficult
points.
In his monograph *Essence of Eloquence*, Tsongkhapa employs a
hermeneutics that treats language and knowledge as equally semiotic in
nature. This is consistent with Tsongkhapa's view that any
intellectual act is itself utterly devoid of any essential reality,
yet functions on a conventional level through the natural workings of
dependent origination (Sk. *dharmata*). This view allows
him to conclude that Dignaga and Dharmakirti's
Logico-epistemological school is equivalent to Asanga's
Yogacara school insofar as the former school asserts a
specific mark, seen by direct sense perception, that is necessary in
order to retain the reality of the conventional world. Tsongkhapa
takes the strong position that no datum that appears to (or is there
but unknown to) thought or sense perception, has any essential
reality. All are, equally, simply labeled by thought construction
(Tib. *rtog pas btags tsam*). Only convention makes the actual
sense-faculties, for example, real, and success or failure experienced
in a dream, for example, false. This dependent origination (between a
label and what is labeled) precludes the essential existence implicit
in the Epistemologist's *sva-laksana*.
Tsongkhapa's strong rejection of Yogacara idealism leads him
to assert the existence of external objects (the third point). The
(discredited) Yogacara school, based on either the works of
Asanga or the Epistemologists, explains the absence of
subject-object bifurcation as non-dual with thought or mind (Sk.
*citta*). Realizing this in a non-conceptual, meditational
state constitutes a liberating vision. Ultimately, therefore, external
objects are projections of a deluded mind. Tsongkhapa rejects this,
though he suggests only those who have understood the emptiness of
inherent existence through reflecting on the natural workings of
dependent origination can set it aside. As he says in his *Ocean of
Reasoning*,
>
>
> "The meaning of the statement that the conventional designation
> of subject and objects stops is that the designation of these two
> stops from the perspective of meditative equipoise, but it does not
> mean that the insight in meditative equipoise and the ultimate truth
> are rejected as subject and object. This is because their being
> subject and object is not posited from the perspective of analytic
> insight, but from the perspective of conventional understanding"
> (Garfield and Samten 2006, p. 26).
>
>
>
From the perspective of ordinary convention there are external
objects, so it is sufficient, on that level, to assert that they are
there.
Tsongkhapa does not accept *svatantra*
("autonomous") reasoning (the fourth point). He asserts
that it is enough, when proving that any given subject is empty of
intrinsic existence, to lead the interlocutor, through reasoning, to
the unwelcome consequences (*prasanga*) in their own
untenable position; it is not necessary to demonstrate the thesis
based on reasoning that presupposes any sort of intrinsic
(=autonomous) existence. This gives Tsongkhapa's philosophy its name
*Prasangika-madhyamaka*, i.e., a philosophy of a middle way
(between nihilism and eternalism) arrived at through demonstrating the
unwelcome consequences (in any given position that presupposes
intrinsic existence).
In the context of this assertion, Tsongkhapa offers a distinctive
explanation of the well-known Naiyayika objection to
Nagarjuna's philosophy, namely, if the statements he uses to
prove his thesis are themselves without any final intrinsic reality,
they will be ineffective as proofs. Tsongkhapa says
Prasangika-madhyamakas do not simply find faults in all
positions and reject all positions as their own. They only deny any
thesis that presupposes an intrinsic existence. They do hold a
specific thesis, to wit, that all phenomena lack an intrinsic
existence. They use reasoning and logic that lacks any essential
reality to establish this thesis. Such reasoning derives its
efficacy on a conventional level through the natural workings of
dependent origination. This is one of the most contentious assertions
of Tsongkhapa.
Tsongkhapa's rejection of any form of self-referential consciousness
(Sk. *sva-samvitti*, *sva-samvedana*) (the
fifth point) is in essence a rejection of
Santaraksita's position that such self-referential or
reflexive awareness is necessary to explain the self-evidential nature
of consciousness, and to explain the privileged access the conscious
person has to their own consciousness as immediate and veridical
(Garfield
2006).[3]
At issue is the status of the knowing subject engaged in the
intellectual pursuit of the truly real. Tsongkhapa holds that such a
knowing subject has no essential reality at all.
Such a position requires of Tsongkhapa an explanation of memory. His
solution is to deny that when you remember seeing object *X*
you also remember the conscious act of seeing it. (Were you to do so,
there would have to be an aspect of the earlier consciousness of the
object that was equally conscious of itself, i.e., self-referential.)
Instead, Tsongkhapa argues, memory is simply the earlier consciousness
of object *X*, now designated "past." When
designated past, inexorably (or inferentially, as it were) the
presence of a consciousness of the past object *X* is required
to make sense of the present reality.
Tsongkhapa characterizes basic ignorance (Sk. *avidya*),
the root cause of suffering in Buddhist philosophy, not as a latent
tendency, but as an active defiling agency (Sk.
*klesavarana*) that projects a reality onto
objects that is in fact absent from them. This ignorance affects even
sense perception, explaining why it seems veridical when it is, in
fact, in error. The residual impressions left by distortion (literally
"perfumings" Sk. *vasana*) explain the
mere appearance of things as real. Beyond that, he asserts that
habituation to this distortion prevents conventional and ultimate
reality from appearing united in an appearing object. This explanation
of the psychology of error differs markedly from earlier Tibetan
explanations and constitutes the sixth difficult point.
In Tsongkhapa's mature philosophy the Mahayana altruistic
principle (*bodhicitta*) is the sole criterion for
distinguishing authentic Mahayana views and practices from
non-Mahayana ones. By privileging the principle in this way
he is able to assert that any authentic realization of truth is a
realization of the way things are, namely, a realization of no
*sva-bhava* (Sk.) ("own-being, own-nature, intrinsic
identity"). For Tsongkhapa, therefore, Hinayanists (by
which he intends followers of the basic Buddhist doctrine of the Four
Noble Truths set forth in the earliest scriptures), necessarily have
the same authentic knowledge of reality. Were they not to have such
knowledge, he argues, they could not have reached the goals they
reached (the seventh point).
Finally, Tsongkhapa has a robust explanation of the difference between
true and false on the covering or conventional level. He denies any
difference between a false object (a dream lottery ticket, for
example) and a real one; as appearances, he asserts, both are equally
false; only convention decides which is true. All phenomena equally
lack truth. In Tsongkhapa's mature philosophy, therefore, *all*
appearance is false--to appear is to appear as being truly
what the appearance is of, and the principle of dependent origination
precludes such truth from according with the way things actually
are.
Tsongkhapa holds that sentences and their content and minds and what
appears to them function in the same way. Saying (1) of a set of
"true" and "false" statements that they are
all equally untrue, and (2) saying of unmediated sense-based
perception or mistaken ideas that they are equally untrue is to say
the same thing. The truth in both is decided by convention, not by
something inhering in the true statement (or its content), or the
valid perception (or its object). For Tsongkhapa, therefore, since all
appearance is false, the Buddha knows, but without any appearance of
truth (the eighth difficult point).
Towards the very end of his life, in his *Medium-Length Exposition
of the Path* and *Elucidation of the Intention* Tsongkhapa
defends his views against Rongton's (*rong stong shes bya kun
rig*) (1367-1449) criticism that his earlier and later works are
inconsistent. Tsongkhapa extends his comparison of Candrakirti's
and Bhaviveka's interpretations of Nagarjuna to more
clearly identify a less subtle object of negation in the works of
Yogacara-svatantrika-madhyamaka writers, particularly
Jnanagarbha (8th century) and Santaraksita
(Hopkins 2008, Part One, Yi 2015). Tsongkhapa finds in their works an
analysis leading beyond the Yogacara's mere absence of a
substantial difference between known object and knowing subject, but
limited insofar as it negates the ultimate truth of all objects
appearing to conventional awareness while still allowing room for an
object of negation that does not appear to sense perception.
Dreyfus et al (2011) and Falls (2010) suggest parallels between the
issues raised by the Svatantrika-Prasangika
difference and some recent work in Western philosophy. They are
helpful for those familiar with Western philosophy but unfamiliar with
Tsongkhapa, surveying questions that arise from the interaction of
philosophical traditions from different cultures. Falls argues
Tsongkhapa's philosophy is a *therapeia*, not a *theoria*, and draws
parallels between issues raised by Tsongkhapa and John McDowell's
interpretation of Wilfrid Sellars in particular.
## 4. Hermeneutics
A distinctive feature of Tsongkhapa's mature philosophy is the
centrality he accords hermeneutics, and the particular stratification
of philosophical systems it occasions. His focus on hermeneutics stems
in no small measure from his acceptance of the Kanjur as a true record
of authentic statements of the Buddha, and also from his wish to
discredit Dolpopa's views. According to Tsongkhapa, and worked out in
detail in his *Essence of Eloquence*, the given record of the
Buddha's diverse statements seems to contain contradictions, so a
reader must decide on criteria for interpreting them. No statement of
the Buddha can serve as a primary hermeneutic principle, so that
principle necessarily becomes human reason (Sk. *yukti*, Tib.
*rigs pa*).
When human reason is brought to bear on the diverse statements of the
Buddha, it concludes that all statements asserting that an essential
or intrinsic identity exists, or any statement that presupposes this,
cannot be taken literally, at face value. This is because human
reason, when it analyzes the ultimate truth of any object or
statement, finds it empty of anything intrinsic to it that would make
it true. For Tsongkhapa, the line of reasoning that leads most clearly
to this conclusion is dependent origination.
Statements that require interpretation Tsongkhapa groups into those of
different Buddhist philosophical schools: in ascending order, Listener
(Sk. *Sravaka*) schools based on older Buddhist
sources (the Vaibhasika and Sautrantika),
Yogacara schools following Asanga and
Dignaga/Dharmakirti, and non-Prasangika
Madhyamaka schools following Bhaviveka and
Santaraksita. The Prasangika-madhyamaka
school of Candrakirti alone captures the final intention of the
Buddha. Tsongkhapa was certainly not the first doxographer, but his
clear and definitive categories had immense influence on later
writers, so much so that they have anachronistically been read back into
works that were written before his time.
Tsongkhapa's particular hermeneutics, the primary means he employs to
lead readers to his philosophical insight, allows him to characterize
particular, authentic, Buddhist philosophies as wrong (because they
are wrong from a Prasangika perspective), yet right from
the perspective of those particular systems. They are right because
the philosophies have particular roles to play in a larger, grander
scheme. This scheme is part of the larger philosophy of a perfect
person whose views are in perfect accord (Sk. *tathagata*)
with the way things are, i.e., dependently originated, and whose
statements are motivated solely by the benefit they have to those who
hear them.
In this way, Tsongkhapa's hermeneutics lead to, or incorporate, a
second principle, namely, *bodhicitta* (Sk.). The word
*bodhicitta* has at least six different, but interrelated
meanings in different contexts (Wangchuk 2007). In his philosophical
works Tsongkhapa uses it to mean a universal, altruistic principle
(not unlike the *logos*) that explains, primarily, the genesis
of the Buddha's diverse statements, i.e., explains why a person with
perfect intellect and powers of expression would make statements that
seem to contain contradictions.
This principle plays a central role in Tsongkhapa's assertion that all
authentic attainments, without distinction, are based on an authentic
insight into emptiness (the seventh of the eight difficult points
listed above), and it leads him to assert that the
"origin" of the Mahayana is located in
*bodhicitta*, and *bodhicitta* alone.
## 5. Ethics
Tsongkhapa is part of a long, shared, Indo-Tibetan Buddhist tradition
that conceives of ethical conduct not in absolute terms, but in the
context of different individuals in different situations. There is an
unspoken presupposition that ethical statements are grounded in
reality, in the sense that suffering (Sk. *duhkha*) comes
from actions (Sk. *karma*) that are not in accord with the way
things actually are (Tib. *dngos po'i gnas lugs*). The person
is ultimately not present as an ethical subject, but is so, on a
conventional or covering level, through the natural working of
dependent origination.
Tsongkhapa groups such individuals into three separate categories
(those who privilege basic Buddhist codes, bodhisattvas, and
tantrikas). Each is governed by an ethical code (Tib. *sdom
gsum*), each superior code subsuming the points of the lesser.
Beyond these three, his *Great Exposition* suggests ordinary
ethical conduct is codified as well, in the main, in the ten ethical
points (Sk. *dasa-kusala-patha*) basic to any human
life that rises above the mere animal.
The first of the three specifically Buddhist codes, the basic code,
primarily governs the behavior of monks and nuns. Since for Tsongkhapa
each higher code incorporates all the rules in the lower code, he
conceives of the seven (or even twelve) sub-codes making up the basic
code as, in descending order, containing fewer and fewer of the rules
that constitute the full code. Each of the sub-codes is designed for
particular people in particular human situations. Tsongkhapa does not
expect a butcher to be governed by the rule of not killing, for
example, and he does not expect a lawyer to be governed by the rule of
not lying. He avoids gross devaluation of the basic code by
privileging spiritual elites. In this, he takes a pragmatic approach
to ethics in line with his non-absolutist stance. For Tsongkhapa, it
is axiomatic that human life is not *ipso facto* privileged
above any other form of life, but his detailed explanation gives
greater "weight" to the karmic retribution that comes from
killing humans, and amongst humans, noble persons, for example, than
less fortunate forms of life.
Tsongkhapa's explanation of the bodhisattva moral code presented in
the *Bodhisattva-bhumi* (Tatz 1987) follows naturally from
the importance he gives to the altruistic principle
(*bodhicitta*) in his other writing. He reconciles rules in the
bodhisattva code that enjoin behavior contradicting the basic code by
positing an elite that, through a *noblesse oblige* of the
spiritually advanced, are required to do things which would be
unethical in an ordinary person. Their not doing so constitutes an
ethical lapse. For example, the basic code prohibits actions that
influence what food a donor puts in the begging bowl (with a few
exceptions for human flesh and so on), and prohibits eating after
noon. The bodhisattva code contradicts the basic code insofar as it
prohibits eating meat, even though, by so doing, the donor does not
get the opportunity to make charity. Tsongkhapa argues that if a monk
at the bodhisattva stage of development eats meat it clashes with the
dictates of the altruistic principle (*bodhicitta*).
Tsongkhapa accepts that the diverse body of literature, including the
historically latest Buddhist tantras (some of which are distinctly
antinomian in character), are the work of a fully awakened being (Sk.
*buddha*). The last of the three codes systematizes the conduct
espoused in these tantras. Tsongkhapa's presentation is distinctive
for finding a code complete with a full ordination ceremony. The
practical result of his presentation is to revalue the basic code for
monks, and devalue the ritual of tantric consecration as it pertains
to ethical conduct. It stresses ethical conduct even in the context of
works that appear, in line with the nihilistic drift of Buddhist
philosophy, to count ethical codes as a block to the highest spiritual
and philosophical attainment.
For Tsongkhapa the tantric code is only for the very highest spiritual
elite, mainly monks and nuns. Tsongkhapa divides tantras into two
sections, and says the lower section is governed exclusively by the
bodhisattva code, supplemented by specific ritual injunctions (Sk.
*vrata*). He gives two interpretations of the rules for the
higher tantras, reserving the truly antinomian behavior for a
theoretical elite whose altruism is so strong, and whose understanding
and status is so ennobled, that they engage in what ordinarily would
be condemned as gross immorality. According to this code, it is an
ethical lapse not to eat meat, perhaps even human flesh, the logic
being that in certain specific and unusual circumstances, in a person
at this stage of development, such behavior would constitute a
skillful means to benefit others.
## 6. Tantra
Tsongkhapa does not have a different tantric philosophy. His
Prasangika Middle Way is the philosophical position he
articulates in his works on tantra. He does, however, unlike in his
non-tantric works, accept that those propounding Idealistic
philosophies (Yogacara, Svatantrika Middle Way) can
have success in their practice. In this he reveals again the
importance of the central Tibetan philosophical tradition going back
to Santaraksita.
Tsongkhapa conceives of tantra as a subset of the Mahayana,
and to that extent all authentic Buddhist tantric activities are,
necessarily, authentically altruistic. What differentiates tantric
activity from other ordinary Mahayana activities is deity
yoga (Tib. *lha'i rnal 'byor*), i.e., whether or not, from a
first person perspective, the person is acting as a perfect,
"divine" subject when engaging in (primarily ritual)
behavior.
Tsongkhapa wrote works on the *Vajrabhairava*,
*Cakrasamvara* (Gray 2017), and *Kalacakra*
tantras, but is best known for his exposition of the
*Guhyasamaja* tantra based on Indian commentaries
associated with the names of Nagarjuna and Aryadeva,
particularly his short but influential *Commentary on the
Jnana-vajra-samuccaya* (*Ye shes rdo rje kun las
'dus pa'i ti ka*) (1410), and magisterial
*Clear Exposition of the Five Stages of Guhyasamaja*
(*gSang 'dus rim lnga gsal sgron*) (1411) (Thurman 2011,
Kilty 2013). He accords great importance to esoteric yoga. In this
respect, his work is firmly located in the mainstream Tibetan
tradition, influenced in particular by the translator Mar pa (Mar pa
Chos kyi blo gros) (1012-1097), who spread the *Six Teachings
of Naropa* (*Na ro chos drug)*, a later Indian
synthesis of diverse tantric practices, in Tibet. Based on this
Tsongkhapa gives detailed explanations of the
*nadi* (channels for the energy or feelings
that run through the tantric practitioner's body), *cakra*
(circles of channels in the heart, throat, and other central points up
the center of the body from the bottom of the spine, or tip of the sex
organ, to the top of the head) and *candali*
(an intense pleasure experienced as heat that spreads through the
channels and fills the body). Tsongkhapa is praised, in particular,
for his explanation of theory and praxis associated with the illusory
body (Sk. *maya-kaya*, Tib. *sgyu lus*)
and clear light (Sk. *prabhasvara*, Tib. *'od
gsal*), a skillful adaptation of his understanding of the two
truths to yogic praxis (Natanya 2017). This is summed up nicely in the
statement, "The
*Pancakramasamgrahaprakasa (Illumination of
the Summary of the Five Steps)*, a short treatise attributed to
Naropa (956-1040), combining the Six Teachings with the
'Five Steps' (*pancakrama*) of the Arya
tradition provided Tsong kha pa with the basic ideas of his Tantric
system" (Tillemans 1998).
## 1. Outline of Life
Alan Turing's short and extraordinary life has attracted wide interest.
It has inspired his mother's memoir (E. S. Turing 1959), a detailed
biography (Hodges 1983), a play and television film (Whitemore 1986),
and various other works of fiction and art.
There are many reasons for this interest, but one is that in every
sphere of his life and work he made unexpected connections between
apparently unrelated areas. His central contribution to science and
philosophy came through his treating the subject of symbolic logic as a
new branch of applied mathematics, giving it a physical and engineering
content. Unwilling or unable to remain within any standard role or
department of thought, Alan Turing continued a life full of
incongruity. Though a shy, boyish, man, he had a pivotal role in world
history through his role in Second World War cryptology. Though the
founder of the dominant technology of the twentieth century, he
variously impressed, charmed or disturbed people with his unworldly
innocence and his dislike of moral or intellectual compromise.
Alan Mathison Turing was born in London, 23 June 1912, to
upper-middle-class British parents. His schooling was of a traditional
kind, dominated by the British imperial system, but from earliest life
his fascination with the scientific impulse--expressed by him as
finding the 'commonest in nature'--found him at odds
with authority. His scepticism, and disrespect for worldly values, were
never tamed and became ever more confidently eccentric. His moody
humour swung between gloom and vivacity. His life was also notable as
that of a gay man with strong emotions and a growing insistence on his
identity.
His first true home was at King's College, Cambridge University,
noted for its progressive intellectual life centred on J. M. Keynes.
Turing studied mathematics with increasing distinction and was elected
a Fellow of the college in 1935. This appointment was followed by a
remarkable and sudden debut in an area where he was an unknown
figure: that of mathematical logic. The paper "On Computable
Numbers..." (Turing 1936-7) was his first and perhaps greatest
triumph. It gave a definition of computation and an absolute limitation
on what computation could achieve, which makes it the founding work of
modern computer science. It led him to Princeton for more advanced work
in logic and other branches of mathematics. He had the opportunity to
remain in the United States, but chose to return to Britain in 1938,
and was immediately recruited for the British communications war.
From 1939 to 1945 Turing was almost totally engaged in the mastery
of the German enciphering machine, Enigma, and other cryptological
investigations at now-famous Bletchley Park, the British government's
wartime communications headquarters. Turing made a unique logical
contribution to the decryption of the Enigma and became the chief
scientific figure, with a particular responsibility for reading the
U-boat communications. As such he became a top-level figure in
Anglo-American liaison, and also gained exposure to the most advanced
electronic technology of the day.
Combining his ideas from mathematical logic, his experience in
cryptology, and some practical electronic knowledge, his ambition, at
the end of the war in Europe, was to create an electronic computer in
the full modern sense. His plans, commissioned by the National Physical
Laboratory, London, were overshadowed by the more powerfully supported
American projects. Turing also laboured under the disadvantage that his
wartime achievements remained totally secret. His ideas led the field
in 1946, but this was little recognised. Frustrated in his work, he
emerged as a powerful marathon runner, and almost qualified for the
British team in the 1948 Olympic games.
Turing's motivations were scientific rather than industrial or
commercial, and he soon returned to the theoretical limitations of
computation, this time focussing on the comparison of the power of
computation and the power of the human brain. His contention was that
the computer, when properly programmed, could rival the brain. It
founded the 'Artificial Intelligence' program of coming
decades.
In 1948 he moved to Manchester University, where he partly fulfilled
the expectations placed upon him to plan software for the pioneer
computer development there, but still remained a free-ranging thinker.
It was here that his famous 1950 paper, "Computing Machinery and
Intelligence," (Turing 1950b) was written. In 1951 he was elected a
Fellow of the Royal Society for his 1936 achievement, yet at the same
time he was striking into entirely new territory with a mathematical
theory of biological morphogenesis (Turing 1952).
This work was interrupted by Alan Turing's arrest in February 1952
for his sexual affair with a young Manchester man, and he was obliged,
to escape imprisonment, to undergo the injection of oestrogen intended
to negate his sexual drive. He was disqualified from continuing secret
cryptological work. His general libertarian attitude was enhanced
rather than suppressed by the criminal trial, and his intellectual
individuality also remained as lively as ever. While remaining formally
a Reader in the Theory of Computing, he not only embarked on more
ambitious applications of his biological theory, but advanced new ideas
for fundamental physics.
For this reason his death, on 7 June 1954, at his home in Wilmslow,
Cheshire, came as a general surprise. In hindsight it is obvious that
Turing's unique status in Anglo-American secret communication work
meant that there were pressures on him of which his contemporaries were
unaware; there was certainly another 'security' conflict
with government in 1953 (Hodges 1983, p. 483). Some commentators, e.g.
Dawson (1985), have argued that assassination should not be ruled out.
But he had spoken of suicide, and his death, which was by cyanide
poisoning, was most likely by his own hand, contrived so as to allow
those who wished to do so to believe it a result of his penchant for
chemistry experiments. The symbolism of its dramatic element--a
partly eaten apple--has continued to haunt the intellectual Eden
from which Alan Turing was expelled.
## 2. The Turing Machine and Computability
Alan Turing drew much between 1928 and 1933 from the work of the
mathematical physicist and populariser A. S. Eddington, from J. von
Neumann's account of the foundations of quantum mechanics, and then
from Bertrand Russell's mathematical logic. Meanwhile, his lasting
fascination with the problems of mind and matter was heightened by
emotional elements in his own life (Hodges 1983, p. 63). In 1934 he
graduated with an outstanding degree in mathematics from Cambridge
University, followed by a successful dissertation in probability theory
which won him a Fellowship of King's College, Cambridge, in 1935. This
was the background to his learning, also in 1935, of the problem which
was to make his name.
It was from the lectures of the topologist M. H. A. (Max) Newman in
that year that he learnt of Godel's 1931 proof of the formal
incompleteness of logical systems rich enough to include arithmetic,
and of the outstanding problem in the foundations of mathematics as
posed by Hilbert: the "Entscheidungsproblem" (decision problem). Was
there a method by which it could be decided, for any given mathematical
proposition, whether or not it was provable?
The principal difficulty of this question lay in giving an
unassailably correct and general definition of what was meant by such
expressions as 'definite method' or 'effective
procedure.' Turing worked on this alone for a year until April
1936; independence and isolation was to be both his strength, in
formulating original ideas, and his weakness, when it came to promoting
and implementing them.
The word 'mechanical' had often been used of the
formalist approach lying behind Hilbert's problem, and Turing seized on
the concept of the *machine*. Turing's solution lay in defining
what was soon to be named the *Turing machine.* With this he
defined the concept of 'the mechanical' in terms of simple
atomic operations. The Turing machine formalism was modelled on the
teleprinter, slightly enlarged in scope to allow a paper tape that
could move in both directions and a 'head' that could read,
erase and print new symbols, rather than only read and punch permanent
holes.
The Turing machine is 'theoretical,' in the sense that
it is not intended actually to be engineered (there being no point in
doing so), although it is essential that its atomic components (the
paper tape, movement to left and right, testing for the presence of a
symbol) are such as *could* actually be implemented. The whole
point of the formalism is to reduce the concept of 'method'
to simple operations that can unquestionably be
'effected.'
Nevertheless Turing's purpose was to embody the most general
mechanical process as carried out by a *human being.* His
analysis began not with any existing computing machines, but with the
picture of a child's exercise book marked off in squares. From the
beginning, the Turing machine concept aimed to capture what the
*human mind* can do when carrying out a procedure.
In speaking of 'the' Turing machine it should be made
clear that there are *infinitely many* Turing machines, each
corresponding to a different method or procedure, by virtue of having a
different 'table of behaviour.' Nowadays it is almost
impossible to avoid imagery which did not exist in 1936: that of the
computer. In modern terms, the 'table of behaviour' of a
Turing machine is equivalent to a computer program.
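The analogy can be made concrete with a short sketch. This is an illustrative modern rendering, not Turing's 1936 notation: a 'table of behaviour' is here just data mapping (state, scanned symbol) to (symbol to write, head movement, next state), and a few lines suffice to execute any such table.

```python
# A minimal Turing machine interpreter (a hypothetical sketch). The table of
# behaviour maps (state, scanned symbol) -> (symbol to write, move, new state).

def run(table, steps, tape=None):
    """Execute a table of behaviour for a fixed number of steps."""
    tape = tape if tape is not None else {}   # sparse tape: position -> symbol
    state, head = "start", 0
    for _ in range(steps):
        symbol = tape.get(head, " ")          # blank squares read as " "
        if (state, symbol) not in table:      # no matching entry: halt
            break
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    lo, hi = min(tape, default=0), max(tape, default=0)
    return "".join(tape.get(i, " ") for i in range(lo, hi + 1))

# A two-state table that writes "10" forever, here stopped after six steps.
table = {
    ("start", " "): ("1", "R", "zero"),
    ("zero",  " "): ("0", "R", "start"),
}
print(run(table, 6))   # -> 101010
```

The `table` dictionary plays the role of the 'table of behaviour'; `run` plays the role of the machine's fixed mechanism.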
If a Turing machine corresponds to a computer program, what is the
analogy of the computer? It is what Turing described as a
*universal* machine (Turing 1936, p. 241). Again, there are
*infinitely many* universal Turing machines, forming a subset of
Turing machines; they are those machines with 'tables of
behaviour' complex enough to read the tables of other Turing
machines, and then do what those machines would have done. If this
seems strange, note the modern parallel that any computer can be
simulated by software on another computer. The way that tables can read
and simulate the effect of other tables is crucial to Turing's theory,
going far beyond Babbage's ideas of a hundred years earlier. It also
shows why Turing's ideas go to the heart of the modern computer, in
which it is essential that programs are themselves a form of data which
can be manipulated by other programs. But the reader must always
remember that in 1936 there were no such computers; indeed the modern
computer arose *out of* the formulation of 'behaving
mechanically' that Turing found in this work.
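The point that tables are themselves a form of data can also be sketched directly. The string encoding below is an arbitrary stand-in for Turing's 'description numbers': any table can be serialised, inspected, and reconstructed by another program, which is the germ of universality.

```python
# A machine's table of behaviour is data: it can be encoded as a string
# (a crude stand-in for a "description number") and decoded by another
# program, which could then simulate it. A hypothetical sketch.

def encode(table):
    """Serialise a table of behaviour into a single description string."""
    return ";".join(f"{s},{r}->{w},{m},{n}"
                    for (s, r), (w, m, n) in sorted(table.items()))

def decode(description):
    """Rebuild a table of behaviour from its description string."""
    table = {}
    for rule in description.split(";"):
        left, right = rule.split("->")
        s, r = left.split(",")
        w, m, n = right.split(",")
        table[(s, r)] = (w, m, n)
    return table

table = {("start", "_"): ("1", "R", "start")}   # prints 1s forever ("_" = blank)
assert decode(encode(table)) == table            # the description round-trips
```

A universal machine is then nothing more than a fixed interpreter that decodes such a description and carries out the rules it finds there.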
Turing's machine formulation allowed the precise definition of the
*computable:* namely, as what can be done by a Turing machine
acting alone. More exactly, computable operations are those which can
be effected by what Turing called *automatic* machines. The
crucial point here is that the action of an automatic Turing machine is
totally determined by its 'table of behaviour'. (Turing
also allowed for 'choice machines' which call for human
inputs, rather than being totally determined.) Turing then proposed
that this definition of 'computable' captured precisely
what was intended by such words as 'definite method, procedure,
mechanical process' in stating the
*Entscheidungsproblem.*
In applying his machine concept to the
*Entscheidungsproblem,* Turing took the step of defining
*computable numbers.* These are those real numbers, considered
as infinite decimals, say, which it is possible for a Turing machine,
starting with an empty tape, to print out. For example, the Turing
machine which simply prints the digit 1 and moves to the right, then
repeats that action for ever, can thereby compute the number
.1111111... A more complicated Turing machine can compute the
infinite decimal expansion of π.
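The idea of a computable number can be sketched as a digit-generating procedure: a finite recipe that produces as many digits of the infinite expansion as one asks for. The function below (an illustrative stand-in for a Turing machine) uses schoolbook long division.

```python
# A computable real number is one whose digits a fixed, finite procedure can
# produce indefinitely. Long division is such a procedure for rationals.

def decimal_digits(numerator, denominator, n):
    """First n decimal digits of numerator/denominator (0 <= num < den),
    generated one at a time by the long-division rule."""
    digits = []
    remainder = numerator
    for _ in range(n):
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    return "".join(digits)

print("0." + decimal_digits(1, 9, 8))   # -> 0.11111111, the text's example
print("0." + decimal_digits(1, 7, 6))   # -> 0.142857
```

For π the recipe is longer but still finite; Turing's discovery is that for almost all reals no finite recipe exists at all.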
Turing machines, like computer programs, are countable; indeed they
can be ordered in a complete list by a kind of alphabetical ordering of
their 'tables of behaviour'. Turing did this by encoding
the tables into 'description numbers' which can then be
ordered in magnitude. Amongst this list, a subset of them (those with
'satisfactory' description numbers) are the machines which
have the effect of printing out infinite decimals. It is readily shown,
using a 'diagonal' argument first used by Cantor and
familiar from the discoveries of Russell and Gödel, that there can
be no Turing machine with the property of deciding whether a
description number is satisfactory or not. The argument can be
presented as follows. Suppose that such a Turing machine exists. Then
it is possible to construct a new Turing machine which works out in
turn the Nth digit from the Nth machine possessing a satisfactory
description number. This new machine then prints an Nth digit differing
from that digit. As the machine proceeds, it prints out an infinite
decimal, and therefore has a 'satisfactory' description
number. Yet this number must by construction differ from the outputs of
every Turing machine with a satisfactory description number. This is a
contradiction, so the hypothesis must be false (Turing 1936, p. 246).
From this, Turing was able to answer Hilbert's
*Entscheidungsproblem* in the negative: there can be no such
general method.
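The diagonal construction can be seen in miniature. In the toy below, three total functions stand in for the list of satisfactory machines; in Turing's proof the list is infinite, and it is the decider needed to produce that list which cannot exist.

```python
# The diagonal argument in miniature: given any list of digit-producing
# "machines", build a number differing from the Nth machine's output at
# the Nth digit. (Three toy machines stand in for the infinite list.)

machines = [lambda n: 1, lambda n: n % 10, lambda n: 7]

# The diagonal number: its Nth digit differs from machine N's Nth digit.
diag = [(machines[n](n) + 1) % 10 for n in range(len(machines))]

# By construction it disagrees with machine k at position k, for every k:
assert all(diag[k] != machines[k](k) for k in range(len(machines)))
print(diag)   # -> [2, 2, 8]
```

The contradiction in Turing's proof is that if the list of satisfactory machines were itself computable, this diagonal procedure would be a satisfactory machine differing from every entry in the list, including itself.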
Turing's proof can be recast in many ways, but the core idea depends
on the *self-reference* involved in a machine operating on
symbols, which is itself described by symbols and so can operate on its
own description. Indeed, the self-referential aspect of the theory can
be highlighted by a different form of the proof, which Turing preferred
(Turing 1936, p. 247). Suppose that such a machine for deciding
satisfactoriness does exist; then apply it to its own description
number. A contradiction can readily be obtained. However, the
'diagonal' method has the advantage of bringing out the
following: that a real number may be *defined* unambiguously,
yet be *uncomputable.* It is a non-trivial discovery that
whereas some infinite decimals (e.g. π) may be encapsulated in a
finite table, other infinite decimals (in fact, almost all) cannot.
Likewise there are decision problems such as 'is this number
prime?' in which infinitely many answers are wrapped up in a
finite recipe, while there are others (again, almost all) which are
not, and must be regarded as requiring infinitely many different
methods. 'Is this a provable proposition?' belongs to the
latter category.
This is what Turing established, and into the bargain the remarkable
fact that anything that *is* computable can in fact be computed
by *one* machine, a universal Turing machine.
It was vital to Turing's work that he justified the definition by
showing that it encompassed the most general idea of
'method'. For if it did not, the
*Entscheidungsproblem* remained open: there might be some more
powerful type of method than was encompassed by Turing computability.
One justification lay in showing that the definition included many
processes a mathematician would consider to be natural in computation
(Turing 1936, p. 254). Another argument involved a human calculator
following written instruction notes. (Turing 1936, p. 253). But in a
bolder argument, the one he placed first, he considered an
'intuitive' argument appealing to the *states of
mind* of a human computer. (Turing 1936, p. 249). The entry of
'mind' into his argument was highly significant, but at
this stage it was only a mind following a rule.
To summarise: Turing found, and justified on very general and
far-reaching grounds, a precise mathematical formulation of the
conception of a general process or method. His work, as presented to
Newman in April 1936, argued that his formulation of
'computability' encompassed 'the possible processes
which can be carried out in computing a number.' (Turing 1936,
p. 232). This opened up new fields of discovery both in practical
computation, and in the discussion of human mental processes. However,
although Turing had worked as what Newman called 'a confirmed
solitary' (Hodges 1983, p. 113), he soon learned that he was not
alone in what Gandy (1988) has called 'the confluence of ideas in
1936.'
The Princeton logician Alonzo Church had slightly outpaced Turing in
finding a satisfactory definition of what he called 'effective
calculability.' Church's definition required the logical
formalism of the *lambda-calculus.* This meant that from the
outset Turing's achievement merged with and superseded the formulation
of Church's Thesis, namely the assertion that the
lambda-calculus formalism correctly embodied the concept of effective
process or method. Very rapidly it was shown that the mathematical
scope of Turing computability coincided with Church's definition (and
also with the scope of the *general recursive functions* defined
by Gödel). Turing wrote his own statement (Turing 1939, p. 166) of
the conclusions that had been reached in 1938; it is in the Ph.D.
thesis that he wrote under Church's supervision, and so this statement
is the nearest we have to a joint statement of the 'Church-Turing
thesis':
> A function is said to be 'effectively
> calculable' if its values can be found by some purely mechanical
> process. Although it is fairly easy to get an intuitive grasp of this
> idea, it is nevertheless desirable to have some more definite,
> mathematically expressible definition. Such a definition was first
> given by Gödel at Princeton in 1934... These functions were
> described as 'general recursive' by Gödel...
> Another definition of effective calculability has been given by
> Church... who identifies it with lambda-definability. The author
> [i.e. Turing] has recently suggested a definition corresponding more
> closely to the intuitive idea... It was stated above that 'a
> function is effectively calculable if its values can be found by a
> purely mechanical process.' We may take this statement literally,
> understanding by a purely mechanical process one which could be carried
> out by a machine. It is possible to give a mathematical description, in
> a certain normal form, of the structures of these machines. The
> development of these ideas leads to the author's definition of a
> computable function, and to an identification of computability with
> effective calculability. It is not difficult, though somewhat
> laborious, to prove that these three definitions are
> equivalent.
Church accepted that Turing's definition gave a compelling, intuitive
reason for why Church's thesis was true. The recent exposition by
Davis (2000) emphasises that Gödel also was convinced by Turing's
argument that an absolute concept had been identified (Gödel
1946). The situation has not changed since 1937. (For further
comment, see the article on the
Church-Turing Thesis.
The recent selection of Turing's papers edited by Copeland (2004),
and the review of Hodges (2006), continue this discussion.)
Turing himself did little to evangelise his formulation in the world
of mathematical logic and early computer science. The textbooks of
Davis (1958) and Minsky (1967) did more. Nowadays Turing computability
is often reformulated (e.g. in terms of 'register
machines'). However, computer simulations (e.g.,
Turing's World,
from Stanford) have brought Turing's
original imagery to life.
Turing's work also opened new areas for decidability questions
within pure mathematics. From the 1970s, Turing machines also took on
new life in the development of *complexity theory,* and as such
underpin one of the most important research areas in computer science.
This development exemplifies the lasting value of Turing's special
quality of giving concrete illustration to abstract concepts.
## 3. The Logical and the Physical
As put by Gandy (1988), Turing's paper was 'a paradigm of
philosophical analysis,' refining a vague notion into a precise
definition. But it was more than being an analysis *within* the
world of mathematical logic: in Turing's thought the question that
constantly recurs both theoretically and practically is the
relationship of the logical Turing machine to the physical world.
'Effective' means *doing,* not merely imagining
or postulating. At this stage neither Turing nor any other logician
made a serious investigation into the physics of such
'doing.' But Turing's image of a teleprinter-like machine
does inescapably refer to something that could actually be physically
'done.' His concept is a distillation of the idea that one
can only 'do' one simple action, or finite number of simple
actions, at a time. How 'physical' a concept is it?
The tape never holds more than a finite number of marked squares at
any point in a computation. Thus it can be thought of as being finite,
but always capable of further extension as required. Obviously this
unbounded extendibility is unphysical, but the definition is still of
practical use: it means that anything done on a finite tape, however
large, is computable. (Turing himself took such a finitistic approach
when explaining the practical relevance of computability in his 1950
paper.) One aspect of Turing's formulation, however, involves absolute
finiteness: the table of behaviour of a Turing machine must be finite,
since Turing allows only a finite number of
'configurations' of a Turing machine, and only a finite
repertoire of symbols which can be marked on the tape. This is
essentially equivalent to allowing only computer programs with finite
lengths of code.
'Calculable by finite means' was Turing's
characterisation of computability, which he justified with the argument
that 'the human memory is necessarily limited.' (Turing
1936, p. 231). The whole point of his definition lies in encoding
infinite potential effects, (e.g. the printing of an infinite decimal)
into finite 'tables of behaviour'. There would be no point
in allowing machines with infinite 'tables of behaviour'.
It is obvious, for instance, that any real number could be printed by
such a 'machine', by letting the Nth configuration be
'programmed' to print the Nth digit, for example. Such a
'machine' could likewise store any countable number of
statements about all possible mathematical expressions, and so make the
*Entscheidungsproblem* trivial.
Church (1937), when reviewing Turing's paper while Turing was in
Princeton under his supervision, actually gave a bolder
characterisation of the Turing machine as an *arbitrary finite
machine.*
> The author [i.e. Turing] proposes as a criterion that an
> infinite sequence of digits 0 and 1 be "computable" that it shall be
> possible to devise a computing machine, occupying a finite space and
> with working parts of finite size, which will write down the sequence
> to any desired number of terms if allowed to run for a sufficiently
> long time. As a matter of convenience, certain further restrictions are
> imposed on the character of the machine, but these are of such a nature
> as obviously to cause no loss of generality--in particular, a
> human calculator, provided with pencil and paper and explicit
> instructions, can be regarded as a kind of Turing machine.
Church (1940) repeated this characterisation. Turing neither
endorsed it nor said anything to contradict it, leaving the general
concept of 'machine' itself undefined. The work of Gandy
(1980) did more to justify this characterisation, by refining the
statement of what is meant by 'a machine.' His results
support Church's statement; they also argue strongly for the view that
natural attempts to extend the notion of computability lead to
trivialisation: if Gandy's conditions on a 'machine' are
significantly weakened then every real number becomes calculable (Gandy
1980, p. 130ff.). (For a different interpretation of Church's
statement, see the article on the
Church-Turing Thesis.)
Turing did not explicitly discuss the question of the *speed*
of his elementary actions. It is left implicit in his discussion, by
his use of the word 'never,' that it is not possible for
infinitely many steps to be performed in a finite time. Others have
explored the effect of abandoning this restriction. Davies (2001), for
instance, describes a 'machine' with an infinite number of
parts, requiring components of arbitrarily small size, running at
arbitrarily high speeds. Such a 'machine' could perform
uncomputable tasks. Davies emphasises that such a machine cannot be
built in our own physical world, but argues that it could be
constructed in a universe with different physics. To the extent that it
rules out such 'machines', the Church-Turing thesis must
have at least some physical content.
True physics is quantum-mechanical, and this implies a different
idea of matter and action from Turing's purely classical picture. It is
perhaps odd that Turing did not point this out in this period, since he
was well versed in quantum physics. Instead, the analysis and practical
development of quantum computing was left to the 1980s. Quantum
computation, using the evolution of wave-functions rather than
classical machine states, is the most important way in which the Turing
machine model has been challenged. The standard formulation of quantum
computing (Deutsch 1985, following Feynman 1982) does not predict
anything beyond computable effects, although within the realm of the
computable, quantum computations may be very much more efficient than
classical computations. It is possible that a deeper understanding of
quantum mechanical physics may further change the picture of what can
be physically 'done.'
## 4. The Uncomputable
Turing turned to the exploration of the *uncomputable* for his
Princeton Ph.D. thesis (1938), which then appeared as *Systems of
Logic based on Ordinals* (Turing 1939).
It is generally the view, as expressed by Feferman (1988), that this
work was a diversion from the main thrust of his work. But from another
angle, as expressed in (Hodges 1997), one can see Turing's development
as turning naturally from considering the mind when following a rule,
to the action of the mind when *not* following a rule. In
particular this 1938 work considered the mind when seeing the truth of
one of Gödel's true but formally unprovable propositions, and
hence going beyond rules based on the axioms of the system. As Turing
expressed it (Turing 1939, p. 198), there are 'formulae, seen
intuitively to be correct, but which the Gödel theorem shows are
unprovable in the original system.' Turing's theory of
'ordinal logics' was an attempt to 'avoid as far as
possible the effects of Gödel's theorem' by studying the
effect of adding Gödel sentences as new axioms to create stronger
and stronger logics. It did not reach a definitive conclusion.
In his investigation, Turing introduced the idea of an
'oracle' capable of performing, as if by magic, an
uncomputable operation. Turing's oracle cannot be considered as some
'black box' component of a new class of machines, to be put
on a par with the primitive operations of reading single symbols, as
has been suggested by (Copeland 1998). An oracle is *infinitely more
powerful* than anything a modern computer can do, and nothing like
an elementary component of a computer. Turing defined
'oracle-machines' as Turing machines with an additional
configuration in which they 'call the oracle' so as to take
an uncomputable step. But these oracle-machines are *not purely
mechanical.* They are only partially mechanical, like Turing's
choice-machines. Indeed the *whole point* of the oracle-machine
is to explore the realm of what *cannot* be done by purely
mechanical processes. Turing emphasised (Turing 1939, p. 173):
> We shall not go any further into the nature of this oracle
> apart from saying that it cannot be a machine.
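The shape of an oracle-machine can be sketched as an ordinary program granted one extra, unexplained primitive. The oracle below is a mere stand-in (it answers a trivially decidable toy question); Turing's point is precisely that a genuine oracle cannot be any such function.

```python
# An oracle-machine, sketched as a mechanical procedure with one added
# configuration in which it "calls the oracle". The oracle passed in here
# is a decidable stand-in; a real Turing oracle "cannot be a machine".

def oracle_machine(n, oracle):
    """Ordinary mechanical steps, plus a single oracle consultation."""
    query = 2 * n               # mechanical computation...
    verdict = oracle(query)     # ...one non-mechanical step
    return "satisfactory" if verdict else "unsatisfactory"

# Relative computability: the result is conditional on whatever the oracle is.
print(oracle_machine(21, lambda k: k % 7 == 0))   # -> satisfactory
```

Questions of *relative* computability ask what such machines can do given the oracle, abstracting entirely from how the oracle's answers arise.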
Turing's oracle can be seen simply as a mathematical tool, useful
for exploring the mathematics of the uncomputable. The idea of an
oracle allows the formulation of questions of *relative* rather
than absolute computability. Thus Turing opened new fields of
investigation in mathematical logic. However, there is also a possible
interpretation in terms of human cognitive capacity. On this
interpretation, the oracle is related to the 'intuition'
involved in seeing the truth of a Gödel statement. M. H. A.
Newman, who introduced Turing to mathematical logic and continued to
collaborate with him, wrote in (Newman 1955) that the oracle resembles
a mathematician 'having an idea', as opposed to using a
mechanical method. However, Turing's oracle cannot actually be
*identified* with a human mental faculty. It is too powerful: it
immediately supplies the answer as to whether any given Turing machine
is 'satisfactory,' something no human being could do. On
the other hand, anyone hoping to see mental 'intuition'
captured completely by an oracle, must face the difficulty that Turing
showed how his argument for the incompleteness of Turing machines could
be applied with equal force to oracle-machines (Turing 1939, p. 173).
This point has been emphasised by Penrose (1994, p. 380). Newman's
comment might better be taken to refer to the different oracle
suggested later on (Turing 1939, p. 200), which has the property of
recognising 'ordinal formulae.' One can only safely say
that Turing's interest at this time in uncomputable operations appears
in the *general setting* of studying the mental
'intuition' of truths which are not established by
following mechanical processes (Turing 1939, p. 214ff.).
In Turing's presentation, intuition is in practice present in every
part of a mathematician's thought, but when mathematical proof is
formalised, intuition has an explicit manifestation in those steps
where the mathematician sees the truth of a formally unprovable
statement. Turing did not offer any suggestion as to what he considered
the brain was physically doing in a moment of such
'intuition'; indeed the word 'brain' did not
appear in his writing in this era. This question is of interest because
of the views of Penrose (1989, 1990, 1994, 1996) on just this issue:
Penrose holds that the ability of the mind to see formally unprovable
truths shows that there must be uncomputable physical operations in the
brain. It should be noted that there is widespread disagreement about
whether the human mind is really seeing the truth of a Gödel
sentence; see for instance the discussion in (Penrose 1990) and the
reviews following it. However Turing's writing at this period accepted
without criticism the concept of intuitive recognition of the
truth.
It was also at this period that Turing met Wittgenstein, and there
is a full record of their 1939 discussions on the foundations of
mathematics in (Diamond 1976). To the disappointment of many, there is
no record of any discussions between them, verbal or written, on the
problem of Mind.
In 1939 Turing's various energetic investigations were broken off
for war work. This did, however, have the positive feature of leading
Turing to turn his universal machine into the practical form of the
modern digital computer.
## 5. Building a Universal Machine
When apprised in 1936 of Turing's idea for a universal machine,
Turing's contemporary and friend, the economist David Champernowne,
reacted by saying that such a thing was impractical; it would need
'the Albert Hall.' If built from relays as then employed in
telephone exchanges, that might indeed have been so, and Turing made no
attempt at it. However, in 1937 Turing did work with relays on a
smaller machine with a special cryptological function (Hodges 1983, p.
138). World history then led Turing to his unique role in the Enigma
problem, to his becoming the chief figure in the mechanisation of
logical procedures, and to his being introduced to ever faster and more
ambitious technology as the war continued.
After 1942, Turing learnt that electronic components offered the
speed, storage capacity and logical functions required to be effective
as 'tapes' and instruction tables. So from 1945, Turing
tried to use electronics to turn his universal machine into practical
reality. Turing rapidly composed a detailed plan for a modern
stored-program computer: that is, a computer in which data and
instructions are stored and manipulated alike. Turing's ideas led the
field, although his report of 1946 postdated von Neumann's more famous
EDVAC report (von Neumann 1945). It can however be argued, as does
Davis (2000), that von Neumann gained his fundamental insight into the
computer through his pre-war familiarity with Turing's logical work. At
the time, however, these basic principles were not much discussed. The
difficulty of engineering the electronic hardware dominated
everything.
It therefore escaped observers that Turing was ahead of von Neumann
and everyone else on the future of software, or as he called it, the
'construction of instruction tables.' Turing (1946) foresaw
at once:
> Instruction tables will have to be made up by
> mathematicians with computing experiences and perhaps a certain
> puzzle-solving ability. There will probably be a great deal of work to
> be done, for every known process has got to be translated into
> instruction table form at some stage.
>
> The process of constructing instruction tables should be very
> fascinating. There need be no real danger of it ever becoming a drudge,
> for any processes that are quite mechanical may be turned over to the
> machine itself.
These remarks, reflecting the universality of the computer, and its
ability to manipulate its own instructions, correctly described the
future trajectory of the computer industry. However, Turing had in mind
something greater: 'building a brain.'
## 6. Building a Brain
The provocative words 'building a brain' from the outset
announced the relationship of Turing's technical computer engineering
to a philosophy of Mind. Even in 1936, Turing had given an
interpretation of computability in terms of 'states of
mind'. His war work had shown the astounding power of the
computable in mechanising expert human procedures and judgments. From
1941 onwards, Turing had also discussed the mechanisation of
chess-playing and other 'intelligent' activities with his
colleagues at Bletchley Park (Hodges 1983, p. 213). But more
profoundly, it appears that Turing emerged in 1945 with a conviction
that computable operations were sufficient to embrace *all*
mental functions performed by the brain. As will become clear from the
ensuing discussion, the uncomputable 'intuition' of 1938
disappeared from Turing's thought, and was replaced by new ideas all
lying within the realm of the computable. This change shows even in the
technical prospectus of (Turing 1946), where Turing referred to the
possibility of making a machine calculate chess moves, and then
continued:
> This ... raises the question 'Can a machine play
> chess?' It could fairly easily be made to play a rather bad game.
> It would be bad because chess requires intelligence. We stated ...
> that the machine should be treated as entirely without intelligence.
> There are indications however that it is possible to make the machine
> display intelligence at the risk of its making occasional serious
> mistakes. By following up this aspect the machine could probably be
> made to play very good chess.
The puzzling reference to 'mistakes' is made clear by a
talk Turing gave a year later (Turing 1947), in which the issue of
mistakes is linked to the issue of the significance of seeing the truth
of formally unprovable statements.
> ...I would say that fair play must be given to the
> machine. Instead of it giving no answer we could arrange that it gives
> occasional wrong answers. But the human mathematician would likewise
> make blunders when trying out new techniques... In other words
> then, if a machine is expected to be infallible, it cannot also be
> intelligent. There are several mathematical theorems which say almost
> exactly that. But these theorems say nothing about how much
> intelligence may be displayed if a machine makes no pretence at
> infallibility.
Turing's post-war view was that mathematicians make mistakes, and so
do not in fact see the truth infallibly. Once the possibility of
mistakes is admitted, Gödel's theorem becomes irrelevant.
Mathematicians and computers alike apply computable processes to the
problem of judging the correctness of assertions; both will therefore
sometimes err, since seeing the truth is known not to be a computable
operation, but there is no reason why the computer need do worse than
the mathematician. This argument is still very much alive. For
instance, Davis (2000) endorses Turing's view and attacks Penrose
(1989, 1990, 1994, 1996) who argues against the significance of human
error on the grounds of a Platonist account of mathematics.
Turing also pursued more constructively the question of how
computers could be made to perform operations which did not appear to
be 'mechanical' (to use common parlance). His guiding
principle was that it should be possible to simulate the operation of
human brains. In an unpublished report (Turing 1948), Turing explained
that the question was that of how to simulate 'initiative'
in addition to 'discipline'--comparable to the need
for 'intuition' as well as mechanical ingenuity expressed
in his pre-war work. He announced ideas for how to achieve this: he
thought 'initiative' could arise from systems where the
algorithm applied is not consciously designed, but is arrived at by
some other means. Thus, he now seemed to think that the mind when
*not* actually following any conscious rule or plan, was
nevertheless carrying out some computable process.
He suggested a range of ideas for systems which could be said to
modify their own programs. These ideas included nets of logical
components ('unorganised machines') whose properties could
be 'trained' into a desired function. Thus, as expressed
by (Ince 1989), he predicted neural networks. However, Turing's nets
did not have the 'layered' structure of the neural
networks that were to be developed from the 1950s onwards. By the
expression 'genetical or evolutionary search', he also
anticipated the 'genetic algorithms' which since the late
1980s have been developed as a less closely structured approach to
self-modifying programs. Turing's proposals were not well developed in
1948, and at a time when electronic computers were only barely in
operation, could not have been. Copeland and Proudfoot (1996) have
drawn fresh attention to Turing's connectionist ideas, which
have since been tried out (Teuscher 2001).
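A hypothetical miniature of such a net: Turing's unorganised machines were built from two-input NAND-like units, with the *wiring* between units being what training would adjust. The toy below updates a fixed three-unit net synchronously.

```python
# A toy "unorganised machine": a net of two-input NAND units updated in
# lockstep. The wiring (not the units) is what a training procedure would
# modify. An illustrative sketch, not Turing's exact 1948 construction.

def nand(a, b):
    return 0 if (a and b) else 1

def step(state, wiring):
    """One synchronous update: each unit reads two others through NAND."""
    return tuple(nand(state[i], state[j]) for i, j in wiring)

# Three units; wiring[k] names the two units feeding unit k.
wiring = [(0, 1), (1, 2), (2, 0)]
state = (1, 1, 0)
for _ in range(4):
    state = step(state, wiring)
print(state)   # -> (0, 1, 1); this net cycles with period 3
```

Even this tiny net has nontrivial dynamics; 'training' in Turing's sense would mean interfering with the connections until the net computed a desired function.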
It is important to note that Turing identified his prototype neural
networks and genetic algorithms as *computable.* This has to be
emphasised since the word 'nonalgorithmic' is often now
confusingly employed for computer operations that are not explicitly
planned. Indeed, his ambition was explicit: he himself wanted to
implement them as programs on a computer. Using the term Universal
Practical Computing Machine for what is now called a digital computer,
he wrote in (Turing 1948):
> It should be easy to make a model of any particular machine
> that one wishes to work on within such a UPCM instead of having to work
> with a paper machine as at present. If one also decided on quite
> definite 'teaching policies' these could also be programmed
> into the machine. One would then allow the whole system to run for an
> appreciable period, and then break in as a kind of 'inspector of
> schools' and see what progress had been made. One might also be
> able to make some progress with unorganised
> machines...
The upshot of this line of thought is that all mental operations are
*computable* and hence realisable on a universal machine: the
computer. Turing advanced this view with increasing confidence in the
late 1940s, perfectly aware that it represented what he enjoyed calling
'heresy' to the believers in minds or souls beyond material
description.
Turing was not a mechanical thinker, or a stickler for convention;
far from it. Of all people, he knew the nature of originality and
individual independence. Even in tackling the U-boat Enigma problem,
for instance, he declared that he did so because no-one else was
looking at it and he could have it to himself. Far from being trained
or organised into this problem, he took it on despite the prevailing
wisdom in 1939 that it was too difficult to attempt. His arrival at a
thesis of 'machine intelligence' was not the outcome of
some dull or restricted mentality, or a lack of appreciation of
individual human creativity.
## 7. Machine Intelligence
Turing relished the paradox of 'Machine Intelligence': an
apparent contradiction in terms. It is likely that he was already
savouring this theme in 1941, when he read a theological book by the
author Dorothy Sayers (Sayers 1941). In (Turing 1948) he quoted from
this work to illustrate his full awareness that in common parlance
'mechanical' was used to mean 'devoid of
intelligence.' Giving a date which no doubt had his highly
sophisticated Enigma-breaking machines secretly in mind, he wrote that
'up to 1940' only very limited machinery had been used, and
this 'encouraged the belief that machinery was necessarily
limited to extremely straightforward, possibly even to repetitious,
jobs.' His object was to dispel these connotations.
In 1950, Turing wrote on the first page of his Manual for users of
the Manchester University computer (Turing 1950a):
> Electronic computers are intended to carry out any definite
> rule of thumb process which could have been done by a human operator
> working in a disciplined but unintelligent manner.
This is, of course, just the 1936 universal Turing machine, now in
electronic form. On the other hand, he also wrote in the more famous
paper of that year (Turing 1950b, p. 460):
> We may hope that machines will eventually compete with men
> in all purely intellectual fields.
How could the *intelligent* arise from operations which were
themselves totally *routine and
mindless*--'entirely without intelligence'? This is the core of the
problem Turing faced, and the same problem faces Artificial
Intelligence research today. Turing's underlying argument was that the
human brain must somehow be organised for intelligence, and that the
organisation of the brain must be realisable as a finite discrete-state
machine. The implications of this view were exposed to a wider circle
in his famous paper, "Computing Machinery and Intelligence," which
appeared in *Mind* in October 1950.
The appearance of this paper, Turing's first foray into a journal of
philosophy, was stimulated by his discussions at Manchester University
with Michael Polanyi. It also reflects the general sympathy of Gilbert
Ryle, editor of *Mind,* with Turing's point of view.
Turing's 1950 paper was intended for a wide readership, and its fresh and
direct approach has made it one of the most frequently cited and republished
papers in modern philosophical literature. Not surprisingly,
the paper has attracted many critiques. Not all commentators note the
careful explication of computability which opens the paper, with an
emphasis on the concept of the universal machine. This explains why if
mental function can be achieved by any finite discrete state machine,
then the same effect can be achieved by programming a computer (Turing
1950b, p. 442). (Note, however, that Turing makes no claim that the
nervous system should resemble a digital computer in its structure.)
Turing's treatment has a severely finitistic flavour: his argument is
that the relevant action of the brain is not only computable, but
realisable as a totally finite machine, i.e. as a Turing machine that
does not use any 'tape' at all. In his account, the full
range of computable functions, defined in terms of Turing machines that
use an infinite tape, only appears as being of 'special
theoretical interest.' (Of uncomputable functions there is, *a
fortiori,* no mention.) Turing uses the finiteness of the nervous
system to give an estimate of about \(10^9\) bits
of storage required for a limited simulation of intelligence (Turing
1950b, p. 455).
The wit and drama of Turing's 'imitation game' has
attracted more fame than his careful groundwork. Turing's argument was
designed to bypass discussions of the nature of thought, mind, and
consciousness, and to give a criterion in terms of external observation
alone. His justification for this was that one only judges that other
human beings are thinking by external observation, and he applied a
principle of 'fair play for machines' to argue that the
same should hold for machine intelligence. He dramatised this viewpoint
by a thought-experiment (which nowadays can readily be tried out). A
human being and a programmed computer compete to convince an impartial
judge, using textual messages alone, as to which is the human being.
If the computer wins, it must be credited with intelligence.
Turing introduced his 'game' confusingly with a poor
analogy: a party game in which a man pretends to be a woman. His loose
wording (Turing 1950b, p. 434) has led some writers wrongly to suppose
that Turing proposed an 'imitation game' in which a machine
has to imitate a man imitating a woman. Others, like Lassègue
(1998), place much weight on this game of gender pretence and its real
or imaginary connotations. In fact, the whole point of the
'test' setting, with its remote text-message link, was to
*separate* intelligence from other human faculties and
properties. But it may fairly be said that this confusion reflects
Turing's richly ambitious concept of what is involved in human
'intelligence'. It might also be said to illustrate his own
human intelligence, in particular a delight in the Wildean reversal of
roles, perhaps reflecting, as in Wilde, his homosexual identity. His
friends knew an Alan Turing in whom intelligence, humour and sex were
often intermingled.
Turing was in fact sensitive to the difficulty of separating
'intelligence' from other aspects of human senses and
actions; he described ideas for robots with sensory attachments and
raised questions as to whether they might enjoy strawberries and cream
or feel racial kinship. In contrast, he paid scant attention to the
questions of authenticity and deception implicit in his test,
essentially because he wished to by-pass questions about the reality of
consciousness. A subtle aspect of one of his imagined
'intelligent' conversations (Turing 1950b, p. 434) is where
the computer imitates human intelligence by giving the *wrong
answer* to a simple arithmetic problem. But in Turing's setting we
are not supposed to ask whether the computer 'consciously'
deceives by giving the impression of innumerate humanity, nor why it
should wish to do so. There is a certain lack of seriousness in this
approach. Turing took on a second-rank target in countering the
published views of the brain surgeon G. Jefferson, as regards the
objectivity of consciousness. Wittgenstein's views on Mind would have
made a more serious point of departure.
Turing's imitation principle perhaps also assumes (like
'intelligence tests' of that epoch) too much of a shared
language and culture for his imagined interrogations. Neither does it
address the possibility that there may be kinds of thought, by animals
or extra-terrestrial intelligences, which are not amenable to
communication.
A more positive feature of the paper lies in its constructive
program for research, culminating in Turing's ideas for 'learning
machines' and educating 'child' machines (Turing
1950b, p. 454). It is generally thought (e.g. in Dreyfus and Dreyfus
1990) that there was always an antagonism between programming and the
'connectionist' approach of neural networks. But Turing
never expressed such a dichotomy, writing that both approaches should
be tried. Donald Michie, the British AI research pioneer profoundly
influenced by early discussions with Turing, has called this suggestion
'Alan Turing's Buried Treasure', in an allusion to a
bizarre wartime episode in which Michie was himself involved (Hodges
1983, p. 345). The question is still highly pertinent.
It is also a commonly expressed view that Artificial Intelligence
ideas only occurred to pioneers in the 1950s *after* the success
of computers in large arithmetical calculations. It is hard to see why
Turing's work, which was rooted from the outset in the question of
mechanising Mind, has been so much overlooked. But through his failure
to publish and promote work such as that in (Turing 1948) he largely
lost recognition and influence.
It is also curious that Turing's best-known paper should appear in a
journal of philosophy, for it may well be said that Turing, always
committed to materialist explanation, was not really a philosopher at
all. Turing was a mathematician, and what he had to offer philosophy
lay in illuminating its field with what had been discovered in
mathematics and physics. In the 1950 paper this was surprisingly
cursory, apart from his groundwork on the concept of computability. His
emphasis on the sufficiency of the computable to explain the action of
the mind was stated more as a hypothesis, even a manifesto, than argued
in detail. Of his hypothesis he wrote (Turing 1950b, p. 442):
> ...I believe that at the end of the century the use of
> words and general educated opinion will have altered so much that one
> will be able to speak of machines thinking without expecting to be
> contradicted. I believe further that no useful purpose is served by
> concealing these beliefs. The popular view that scientists proceed
> inexorably from established fact to established fact, never being
> influenced by any unproved conjecture, is quite mistaken. Provided it
> is made clear which are proved facts and which are conjecture, no harm
> can result. Conjectures are of great importance since they suggest
> useful lines of research.
Penrose (1994, p.21), probing into Turing's conjecture, has
presented it as 'Turing's thesis' thus:
> It seems likely that he viewed physical action in general--which would include the action of a human brain--to be
> always reducible to some kind of Turing-machine action.
The statement that all physical action is in effect computable goes
beyond Turing's explicit words, but is a fair characterisation of the
implicit assumptions behind the 1950 paper. Turing's consideration of
'The Argument from Continuity in the Nervous System,' in
particular, simply asserts that the physical system of the brain can be
approximated as closely as is desired by a computer program (Turing
1950b, p. 451). Certainly there is nothing in Turing's work in the
1945-50 period to contradict Penrose's interpretation. The more
technical precursor papers (Turing 1947, 1948) include wide-ranging
comments on physical processes, but make no reference to the
possibility of physical effects being uncomputable.
In particular, a section of (Turing 1948) is devoted to a general
classification of 'machines.' The period between 1937 and
1948 had given Turing much more experience of actual machinery than he
had in 1936, and his post-war remarks reflected this in a down-to-earth
manner. Turing distinguished 'controlling' from
'active' machinery, the latter being illustrated by
'a bulldozer'. Naturally it is the former--in modern
terms 'information-based machinery'--with which
Turing's analysis is concerned. It is noteworthy that in 1948 as in
1936, despite his knowledge of physics, Turing made no mention of how
quantum mechanics might affect the concept of
'controlling'. His concept of 'controlling'
remained entirely within the classical framework of the Turing machine
(which he called a Logical Computing Machine in this paper.)
The same section of (Turing 1948) also drew the distinction between
*discrete* and *continuous* machinery, illustrating the
latter with 'the telephone' as a continuous, controlling
machine. He made light of the difficulty of reducing continuous physics
to the discrete model of the Turing machine, and though citing
'the brain' as a continuous machine, stated that it could
probably be treated as if discrete. He gave no indication that physical
continuity threatened the paramount role of computability. In fact, his
thrust in (Turing 1947) was to promote the digital computer as *more
powerful* than analog machines such as the differential analyser.
When he discussed this comparison, he gave the following informal
version of the Church-Turing thesis:
> One of my conclusions was that the idea of a 'rule of
> thumb' process and a 'machine process' were
> synonymous. The expression 'machine process' of course
> means one which could be carried out by the type of machine I was
> considering [i.e. Turing machines]
Turing gave no hint that the discreteness of the Turing machine
constituted a real limitation, or that the non-discrete processes of
analog machines might be of any deep significance.
Turing also introduced the idea of 'random elements' but
his examples (using the digits of \(\pi\)) showed that he considered
*pseudo-random* sequences (i.e. computable sequences with
suitable 'random' properties) quite adequate for his
discussion. He made no suggestion that randomness implied something
uncomputable, and indeed gave no definition of the term
'random'. This is perhaps surprising in view of the fact
that his work in pure mathematics, logic and cryptography all gave him
considerable motivation to approach this question at a serious
level.
## 8. Unfinished Work
From 1950 Turing worked on a new mathematical theory of morphogenesis,
based on showing the consequences of non-linear equations for chemical
reaction and diffusion (Turing 1952). He was a pioneer in using a
computer for such work. Some writers have referred to this theory as
founding Artificial Life (A-life), but this is a misleading
description, apt only to the extent that the theory was intended, as
Turing saw it, to counter the Argument from Design. A-life since the
1980s has concerned itself with using computers to explore the logical
consequences of evolutionary theory without worrying about specific
physiological forms. Morphogenesis is complementary, being concerned to
show which physiological pathways are feasible for evolution to
exploit. Turing's work was developed by others in the 1970s and is now
regarded as central to this field.
It may well be that Turing's interest in morphogenesis went back to
a primordial childhood wonder at the appearance of plants and flowers.
But in another late development, Turing went back to other stimuli of
his youth. For in 1951 Turing did consider the problem, hitherto
avoided, of setting computability in the context of quantum-mechanical
physics. In a BBC radio talk of that year (Turing 1951) he discussed
the basic groundwork of his 1950 paper, but this time dealing rather
less certainly with the argument from Gödel's theorem, and this
time also referring to the quantum-mechanical physics underlying the
brain. Turing described the universal machine property, applying it to
the brain, but said that its applicability required that the machine
whose behaviour is to be imitated
> ...should be of the sort whose behaviour is in
> principle predictable by calculation. We certainly do not know how any
> such calculation should be done, and it was even argued by Sir Arthur
> Eddington that on account of the indeterminacy principle in quantum
> mechanics no such prediction is even theoretically
> possible.
Copeland (1999) has rightly drawn attention to this sentence in his
preface to his edition of the 1951 talk. However, Copeland's critical
context suggests some connection with Turing's 'oracle.'
There is in fact no mention of oracles here (nor anywhere in
Turing's post-war discussion of mind and machine.) Turing here is
discussing the possibility that, when seen as a
*quantum-mechanical machine* rather than a classical machine,
the Turing machine model is inadequate. The correct connection to draw
is not with Turing's 1938 work on ordinal logics, but with his
knowledge of quantum mechanics from Eddington and von Neumann in his
youth. Indeed, in an early speculation, influenced by Eddington, Turing
had suggested that quantum mechanical physics could yield the basis of
free-will (Hodges 1983, p. 63). Von Neumann's axioms of quantum
mechanics involve two processes: unitary evolution of the wave
function, which is predictable, and the measurement or reduction
operation, which introduces unpredictability. Turing's reference to
unpredictability must therefore refer to the reduction process. The
essential difficulty is that still to this day there is no agreed or
compelling theory of when or how reduction actually occurs. (It should
be noted that 'quantum computing,' in the standard modern
sense, is based on the predictability of the unitary evolution, and
does not, as yet, go into the question of how reduction occurs.) It
seems that this single sentence indicates the beginning of a new field
of investigation for Turing, this time into the foundations of quantum
mechanics. In 1953 Turing wrote to his friend and student Robin Gandy
that he was 'trying to invent a new Quantum Mechanics but it
won't really work.'
At Turing's death in June 1954, Gandy reported in a letter to Newman
on what he knew of Turing's current work (Gandy 1954). He wrote of
Turing having discussed a problem in understanding the reduction
process, in the form of
> ...'the Turing Paradox'; it is easy to
> show using standard theory that if a system start in an eigenstate of
> some observable, and measurements are made of that observable N times a
> second, then, even if the state is not a stationary one, the
> probability that the system will be in the same state after, say, 1
> second, tends to one as N tends to infinity; i.e. that continual
> observation will prevent motion. Alan and I tackled one or two
> theoretical physicists with this, and they rather pooh-poohed it by
> saying that continual observation is not possible. But there is nothing
> in the standard books (e.g., Dirac's) to this effect, so that at least the
> paradox shows up an inadequacy of Quantum Theory as usually
> presented.
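The effect Gandy describes is now known as the quantum Zeno effect, and the limit he states is easy to check numerically. The toy calculation below (ours, not from the letter) models a two-level system that rotates out of its initial state at an assumed angular rate \(\omega\); each of \(N\) evenly spaced projective measurements finds it still in that state with probability \(\cos^2(\omega/N)\), so the survival probability after one second is \(\cos^{2N}(\omega/N)\), which tends to 1 as \(N\) grows.

```python
import math

def survival(omega, N):
    """Probability of finding the system still in its initial state
    after one second, given N evenly spaced projective measurements,
    for a two-level system leaving its initial state at rate omega."""
    return math.cos(omega / N) ** (2 * N)

for N in (1, 10, 100, 1000):
    print(N, round(survival(1.0, N), 4))
```

With no intermediate measurement (\(N=1\)) the survival probability is about 0.29; by \(N=1000\) it is about 0.999: continual observation "prevents motion", just as the letter says.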
Turing's investigations take on added significance in view of the
assertion of Penrose (1989, 1990, 1994, 1996) that the reduction
process must involve something uncomputable. Probably Turing was aiming
at the opposite idea, of finding a theory of the reduction process that
would be predictive and computable, and so plug the gap in his
hypothesis that the action of the brain is computable. However Turing
and Penrose are alike in seeing this as an important question affecting
the assumption that all mental action is computable; in this they both
differ from the mainstream view in which the question is accorded
little significance.
Alan Turing's last postcards to Robin Gandy, in March 1954, headed
'Messages from the Unseen World' in allusion to Eddington,
hinted at new ideas in the fundamental physics of relativity and
particle physics (Hodges 1983, p. 512). They illustrate the wealth of
ideas with which he was concerned at that last point in his life, but
which apart from these hints are entirely lost. A review of such lost
ideas is given in (Hodges 2004), as part of a larger volume on
Turing's legacy (Teuscher 2004).
## 9. Alan Turing: the Unknown Mind
It is a pity that Turing did not write more about his ethical
philosophy and world outlook. As a student he was an admirer of Bernard
Shaw's plays of ideas, and to friends would openly voice both the
hilarities and frustrations of his many difficult situations. Yet the
nearest he came to serious personal writing, apart from occasional
comments in private letters, was in penning a short story about his
1952 crisis (Hodges 1983, p. 448). His last two years were particularly
full of Shavian drama and Wildean irony. In one letter (to his friend
Norman Routledge; the letter is now in the Turing Archive at King's
College, Cambridge) he wrote:
> Turing believes machines think
>
>
> Turing lies with men
>
>
> Therefore machines do not think
The syllogistic allusion to Socrates is unmistakeable, and his
demise, with cyanide rather than hemlock, may have signalled something
similar. A parallel figure in World War II, Robert Oppenheimer,
suffered the loss of his reputation during the same week that Turing
died. Both combined the purest scientific work and the most effective
application of science in war. Alan Turing was even more directly on
the receiving end of science, when his sexual mind was treated as a
machine, against his protesting consciousness and will. But amidst all
this human drama, he left little to say about what he really thought of
himself and his relationship to the world of human events.
Alan Turing did not fit easily with any of the intellectual movements
of his time, aesthetic, technocratic or marxist. In the 1950s,
commentators struggled to find discreet words to categorise him: as
'a scientific Shelley,' as possessing great 'moral
integrity'. Until the 1970s, the reality of his life was
unmentionable. He is still hard to place within twentieth-century
thought. He exalted the science that according to existentialists had
robbed life of meaning. The most original figure, the most insistent
on personal freedom, he held originality and will to be susceptible to
mechanisation. The mind of Alan Turing continues to be an
enigma.
But it is an enigma to which the twenty-first century seems
increasingly drawn. The year of his centenary, 2012, witnessed
numerous conferences, publications, and cultural events in his
honor. Some reasons for this explosion of interest are obvious. One is
that the question of the power and limitations of computation now
arises in virtually every sphere of human activity. Another is that
issues of sexual orientation have taken on a new importance in modern
democracies. More subtly, the interdisciplinary breadth of Turing's
work is now better appreciated. A landmark of the centenary period was
the publication of *Alan Turing, his work and impact*
(eds. Cooper and van Leeuwen, 2013), which made available almost all
aspects of Turing's scientific oeuvre, with a wealth of modern
commentary. In this new climate, fresh attention has been paid to
Turing's lesser-known work, and new light shed upon his
achievements. He has emerged from obscurity to become one of the most
intensely studied figures in modern science.
## 1. Definitions of the Turing Machine
### 1.1 Turing's Definition
Turing introduced Turing machines in the context of research into the
foundations of mathematics. More particularly, he used these abstract
devices to prove that there is no effective general method or
procedure to solve, calculate or compute every instance of the
following problem:
>
>
> ***Entscheidungsproblem*** The problem to decide
> for every statement in first-order logic (the so-called restricted
> functional calculus, see the entry on
> classical logic
> for an introduction) whether or not it is derivable in that
> logic.
>
>
>
Note that in its original form (Hilbert & Ackermann 1928), the
problem was stated in terms of validity rather than derivability.
Given Gödel's completeness theorem (Gödel 1929), proving that there
is (or is not) an effective procedure for derivability also solves the
problem in its validity form. In order to
tackle this problem, one needs a formalized notion of "effective
procedure" and Turing's machines were intended to do
exactly that.
A Turing machine then, or a *computing machine* as Turing
called it, in Turing's original definition is a machine capable
of a finite set of configurations \(q\_{1},\ldots,q\_{n}\) (the
states of the machine, called *m*-configurations by Turing). It
is supplied with a one-way infinite and one-dimensional tape divided
into squares each capable of carrying exactly one symbol. At any
moment, the machine is scanning the content of *one* square
*r* which is either blank (symbolized by \(S\_0\)) or contains a
symbol \(S\_{1},\ldots ,S\_{m}\) with \(S\_1 = 0\) and \(S\_2 =
1\).
The machine is an automatic machine (*a*-machine) which means
that at any given moment, the behavior of the machine is completely
determined by the current state and symbol (called the
*configuration*) being scanned. This is the so-called
*determinacy condition*
(Section 3).
These *a*-machines are contrasted with the so-called choice
machines for which the next state depends on the decision of an
external device or operator (Turing 1936-7: 232). A Turing
machine is capable of three types of action:
1. Print \(S\_i\), move one square to the left (*L*) and go to
state \(q\_{j}\)
2. Print \(S\_i\), move one square to the right (*R*) and go to
state \(q\_{j}\)
3. Print \(S\_i\), do not move (*N*) and go to state
\(q\_{j}\)
The 'program' of a Turing machine can then be written as a
finite set of quintuples of the form:
\[q\_{i}S\_{j}S\_{i,j}M\_{i,j}q\_{i,j}\]
Where \(q\_i\) is the current state, \(S\_j\) the content of the square
being scanned, \(S\_{i,j}\) the new content of the square; \(M\_{i,j}\)
specifies whether the machine is to move one square to the left, to
the right or to remain at the same square, and \(q\_{i,j}\) is the next
state of the machine. These quintuples are also called the transition
rules of a given machine. The Turing machine \(T\_{\textrm{Simple}}\)
which, when started from a blank tape, computes the sequence
\(S\_0S\_1S\_0S\_1\ldots\) is then given by
Table 1.
Table 1: Quintuple representation of
\(T\_{\textrm{Simple}}\)
\[
\begin{align}\hline
;q\_{1}S\_{0}S\_{0}Rq\_{2}\\
;q\_{1}S\_{1}S\_{0}Rq\_{2}\\
;q\_{2}S\_{0}S\_{1}Rq\_{1}\\
;q\_{2}S\_{1}S\_{1}Rq\_{1}\\\hline
\end{align}
\]
Note that \(T\_{\textrm{Simple}}\) will never enter a configuration
where it is scanning \(S\_1\) so that two of the four quintuples are
redundant. Another typical format to represent Turing machines and
which was also used by Turing is the *transition table*.
Table 2
gives the transition table of \(T\_{\textrm{Simple}}\).
Table 2: Transition table for
\(T\_{\textrm{Simple}}\)
| | | |
| --- | --- | --- |
| | \(S\_0\) | \(S\_1\) |
| \(q\_1\) | \(S\_{0}\opR q\_{2}\) | \(S\_{0}\opR q\_{2}\) |
| \(q\_2\) | \(S\_{1}\opR q\_{1}\) | \(S\_{1}\opR q\_{1}\) |
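The quintuples of Table 1 can be executed directly. The sketch below (a convenience of this presentation, not part of the entry) represents the program as a Python dictionary keyed on (state, scanned symbol) and runs it from a blank tape, reading blank squares as \(S\_0\):

```python
def run(quintuples, steps):
    """Run a quintuple program from a blank tape for a fixed number of
    steps and return the printed tape contents, left to right."""
    tape, head, state = {}, 0, "q1"
    for _ in range(steps):
        scanned = tape.get(head, "S0")        # blank squares read as S0
        printed, move, state = quintuples[(state, scanned)]
        tape[head] = printed
        head += 1 if move == "R" else -1
    return "".join(tape[i][-1] for i in sorted(tape))

# The four quintuples of T_Simple from Table 1:
T_SIMPLE = {
    ("q1", "S0"): ("S0", "R", "q2"),
    ("q1", "S1"): ("S0", "R", "q2"),
    ("q2", "S0"): ("S1", "R", "q1"),
    ("q2", "S1"): ("S1", "R", "q1"),
}

print(run(T_SIMPLE, 8))  # → 01010101
```

As expected, the machine prints the alternating sequence \(S\_0S\_1S\_0S\_1\ldots\), and the two quintuples for scanned symbol \(S\_1\) are indeed never used.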
Whereas current definitions of Turing machines usually have only one
type of symbols (usually just 0 and 1; it was proven by Shannon that
any Turing machine can be reduced to a binary Turing machine (Shannon
1956)) Turing, in his original definition of so-called *computing
machines*, used two kinds of symbols: the *figures* which
consist entirely of 0s and 1s and the so-called *symbols of the
second kind*. These are differentiated on the Turing machine tape
by using a system of alternating squares of figures and symbols of the
second kind. One sequence of alternating squares contains the figures
and is called the sequence of *F*-squares. It contains the
*sequence computed by the machine*; the other is called the
sequence of *E*-squares. The latter are used to mark
*F*-squares and are there to "assist the memory"
(Turing 1936-7: 232). The content of the *E*-squares is
liable to change. *F*-squares however cannot be changed which
means that one cannot implement algorithms whereby earlier computed
digits need to be changed. Moreover, the machine will never print a
symbol on an *F*-square if the *F*-square preceding it has
not been computed yet. This usage of *F* and *E*-squares can
be quite useful (see
Sec. 2.3)
but, as was shown by Emil L. Post, it results in a number of
complications (see
Sec. 1.2).
There are two important things to notice about the Turing machine
setup. The first concerns the definition of the machine itself, namely
that the machine's tape is potentially infinite. This
corresponds to an assumption that the memory of the machine is
(potentially) infinite. The second concerns the definition of Turing
computable, namely that a function will be Turing computable if there
exists a set of instructions that will result in a Turing machine
computing the function regardless of the amount of time it takes. One
can think of this as assuming the availability of potentially infinite
time to complete the computation.
These two assumptions are intended to ensure that the definition of
computation that results is not too narrow. That is, it ensures that
no computable function will fail to be Turing-computable solely
because there is insufficient time or memory to complete the
computation. It follows that there may be some Turing computable
functions which may not be carried out by any existing computer,
perhaps because no existing machine has sufficient memory to carry out
the task. Some Turing computable functions may not ever be computable
in practice, since they may require more memory than can be built
using all of the (finite number of) atoms in the universe. If we
moreover assume that a physical computer is a finite realization of
the Turing machine, and so that the Turing machine functions as a good
formal model for the computer, a result which shows that a function is
not Turing computable is very strong, since it implies that no
computer that we could ever build could carry out the computation. In
Section 2.4, it is shown that there are functions which are not
Turing-computable.
### 1.2 Post's Definition
Turing's definition was standardized through (some of)
Post's modifications of it in Post 1947. In that paper Post
proves that a certain problem from mathematics known as Thue's
problem or the word problem for semi-groups is not Turing computable
(or, in Post's words, recursively unsolvable). Post's main
strategy was to show that if it were decidable then the following
decision problem from Turing 1936-7 would also be decidable:
>
>
> **PRINT?** The problem to decide for every Turing machine
> *M* whether or not it will ever print some symbol (for instance,
> 0).
>
>
>
It was however proven by Turing that **PRINT?** is not
Turing computable and so the same is true of Thue's problem.
While the uncomputability of **PRINT?** plays a central
role in Post's proof, Post believed that Turing's proof of
that was affected by the "spurious Turing convention"
(Post 1947: 9), viz. the system of *F* and *E*-squares.
Thus, Post introduced a modified version of the Turing machine. The
most important differences between Post's and Turing's
definition are:
1. Post's Turing machine, when in a given state, either prints or
moves and so its transition rules are more 'atomic' (it
does not have the composite operation of moving and printing). This
results in the quadruple notation of Turing machines, where each
quadruple is in one of the three forms of
Table 3:
Table 3: Post's Quadruple
notation
\[
\begin{aligned}\hline
& q\_iS\_jS\_{i,j}q\_{i,j}\\
& q\_iS\_jLq\_{i,j}\\
& q\_iS\_jRq\_{i,j}\\\hline
\end{aligned}
\]
2. Post's Turing machine has only one kind of symbol and so
does not rely on the Turing system of *F* and
*E*-squares.
3. Post's Turing machine has a two-way infinite tape.
4. Post's Turing machine halts when it reaches a state for
which no actions are defined.
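The first difference can be made concrete: the standard translation replaces each print-and-move quintuple by a print quadruple and a move quadruple, linked by a fresh intermediate state. A sketch (the state names and tuple layout are ours):

```python
def to_quadruples(quintuples):
    """quintuples: iterable of (state, scanned, printed, move, next_state)
    with move in {"L", "R", "N"}.  Returns Post-style quadruples."""
    quads, fresh = [], 0
    for q, s, s_new, move, q_next in quintuples:
        if move == "N":
            # A non-moving quintuple is already a pure print quadruple.
            quads.append((q, s, s_new, q_next))
        else:
            p = f"p{fresh}"                  # fresh intermediate state
            fresh += 1
            quads.append((q, s, s_new, p))   # print step
            # After printing, the scanned symbol is s_new, so the move
            # quadruple is keyed on s_new.
            quads.append((p, s_new, move, q_next))
    return quads

print(to_quadruples([("q1", "S0", "S0", "R", "q2")]))
# → [('q1', 'S0', 'S0', 'p0'), ('p0', 'S0', 'R', 'q2')]
```

The translation at most doubles the number of rules and adds one state per print-and-move quintuple, so the two formats compute the same functions.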
Note that Post's reformulation of the Turing machine is very
much rooted in his own earlier formulation (Post 1936). (Some of) Post's modifications of Turing's
definition became part of the definition of the Turing machine in
standard works such as Kleene 1952 and Davis 1958. Since that time,
several (logically equivalent) definitions have been introduced.
Today, standard definitions of Turing machines are, in some respects,
closer to Post's Turing machines than to Turing's
machines. In what follows we will use a variant on the standard
definition from Minsky 1967 which uses the quintuple notation but has no *E* and
*F*-squares and includes a special halting state *H*. It
also has only two move operations, viz., *L* and *R* and so
the action whereby the machine merely prints is not used. When the
machine is started, the tape is blank except for some finite portion
of the tape. Note that the blank square can also be represented as a
square containing the symbol \(S\_0\) or simply 0. The finite content
of the tape will also be called the *dataword* on the tape.
### 1.3 The Definition Formalized
Talk of "tape" and a "read-write head" is
intended to aid the intuition (and reveals something of the time in
which Turing was writing) but plays no important role in the
definition of Turing machines. In situations where a formal analysis
of Turing machines is required, it is appropriate to spell out the
definition of the machinery and program in more mathematical terms.
Purely formally a Turing machine can be specified as a quadruple \(T =
(Q,\Sigma, s, \delta)\) where:
* *Q* is a finite set of states *q*
* \(\Sigma\) is a finite set of symbols
* *s* is the initial state \(s \in Q\)
* \(\delta\) is a transition function determining the next move:
\[\delta : (Q \times \Sigma) \rightarrow (\Sigma \times \{L,R\} \times Q)\]
The transition function for the machine *T* is a function from
computation states to computation states. If \(\delta(q\_i,S\_j) =
(S\_{i,j},D,q\_{i,j})\), then when the machine is in state \(q\_i\),
reading the symbol \(S\_j\), \(T\) replaces \(S\_j\) by \(S\_{i,j}\),
moves in direction \(D \in \{L,R\}\) and goes to state
\(q\_{i,j}\).
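This quadruple definition can be sketched directly in code (a minimal illustration of ours, not part of the entry; following the convention above, the machine halts when no action is defined for the current state-symbol pair):

```python
# T = (Q, Sigma, s, delta): here delta is a Python dict mapping
# (state, symbol) -> (symbol_to_write, direction, next_state);
# Q and Sigma are implicit in the keys. Blank squares read as 0.

def run(delta, start_state, tape, head=0, max_steps=1000):
    """Run the machine until no action is defined (the halting
    convention of Sec. 1.2) or max_steps is reached."""
    state = start_state
    for _ in range(max_steps):
        symbol = tape.get(head, 0)
        if (state, symbol) not in delta:
            break                              # halt: no action defined
        write, direction, state = delta[(state, symbol)]
        tape[head] = write                     # print
        head += 1 if direction == 'R' else -1  # move L or R
    return state, tape, head
```

For instance, a one-quintuple machine with \(\delta(s, 0) = (1, R, h)\) prints a single 1 and halts in state \(h\).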
### 1.4 Describing the Behavior of a Turing Machine
We introduce a representation which allows us to describe the behavior
or dynamics of a Turing machine \(T\_n\), relying on the notation of
the *complete configuration* (Turing 1936-7: 232) also
known today as *instantaneous description* (ID) (Davis 1982:
6). At any
stage of the computation of \(T\_n\), its ID is given by:
* (1) the content of the tape, that is, its dataword
* (2) the location of the reading head
* (3) the machine's internal state
So, given some Turing machine *T* which is in state \(q\_{i}\)
scanning the symbol \(S\_{j}\), its ID is given by \(Pq\_{i}S\_{j}Q\)
where *P* and *Q* are the finite words to the left and right
hand side of the square containing the symbol \(S\_{j}\).
Figure 1
gives a visual representation of an ID of some Turing machine
*T* in state \(q\_i\) scanning the tape.
![a horizontal strip of concatenated boxed 0s and 1s with the left and right ends of the strip being ragged. The numbers from left to right are 01001100000010000000. The sixth 0 from the left is red and label points to it stating \(q_{i} 0 : 0 R q_{i}, 0\) ](TM_basic0.svg)
Figure 1: A complete configuration of
some Turing machine *T*
The notation thus allows us to capture the developing behavior of the
machine and its tape through its consecutive IDs.
Figure 2
gives the first few consecutive IDs of \(T\_{\textrm{Simple}}\) using
a graphical representation.
![a horizontal strip of concatenated boxed 0s with the left and right ends of the strip being ragged. The third 0 from the left is red and there is a label pointing to it stating \(q_1 0 : 0 R q_2\) ](Simple1.svg)
Figure 2: The dynamics of
\(T\_{\textrm{Simple}}\) graphical representation
The animation can be started by clicking on the picture. One can also
explicitly print the consecutive IDs, using their symbolic
representations. This results in a state-space diagram of the behavior
of a Turing machine. So, for \(T\_{\textrm{Simple}}\) we get (note that
\(\overline{0}\) means an infinite repetition of 0s):
\[\begin{matrix}
\overline{0}q\_1{\bf 0}\overline{0}\\
\overline{0}{\color{blue} 0}q\_2{\bf 0}\overline{0}\\
\overline{0}{\color{blue}01}q\_1{\bf 0}\overline{0}\\
\overline{0}{\color{blue}010}q\_2{\bf 0}\overline{0}\\
\overline{0}{\color{blue}0101}q\_1{\bf 0}\overline{0}\\
\overline{0}{\color{blue}01010}q\_2{\bf 0}\overline{0}\\
\vdots
\end{matrix}\]
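The state-space diagram above can be reproduced with a short simulation (a sketch of ours, not the entry's; it uses only the two quintuples of \(T\_{\textrm{Simple}}\) that are actually exercised on an initially blank tape, cf. Figure 2, with the blank written as 0):

```python
# The two quintuples of T_Simple reachable from a blank tape.
delta = {
    ('q1', 0): (0, 'R', 'q2'),   # q1 0 : 0 R q2
    ('q2', 0): (1, 'R', 'q1'),   # q2 0 : 1 R q1
}

def successive_ids(steps):
    """Return the first `steps` instantaneous descriptions, written as
    P q_i S_j Q with the infinite blank tails left out."""
    tape, head, state, trace = {}, 0, 'q1', []
    for _ in range(steps):
        lo = min([head] + list(tape))
        hi = max([head] + list(tape))
        # insert the state symbol immediately left of the scanned square
        trace.append(''.join((state if i == head else '')
                             + str(tape.get(i, 0))
                             for i in range(lo, hi + 1)))
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return trace
```

Calling `successive_ids(5)` yields `q10`, `0q20`, `01q10`, `010q20`, `0101q10`, matching the diagram with the \(\overline{0}\) tails omitted.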
## 2. Computing with Turing Machines
As explained in
Sec. 1.1,
Turing machines were originally intended to formalize the notion of
computability in order to tackle a fundamental problem of mathematics.
Independently of Turing, Alonzo Church gave a different but logically
equivalent formulation (see
Sec. 4).
Today, most computer scientists agree that Turing's, or any
other logically equivalent, formal notion captures *all*
computable problems, viz. for any computable problem, there is a
Turing machine which computes it. This is known as the
*Church-Turing thesis*, *Turing's thesis* (when
the reference is only to Turing's work) or *Church's
thesis* (when the reference is only to Church's work).
It implies that, if accepted, any problem not computable by a Turing
machine is not computable by any finite means whatsoever. Indeed,
since it was Turing's ambition to capture "[all] the
possible processes which can be carried out in computing a
number" (Turing 1936-7: 249), it follows that, if we
accept Turing's analysis:
* Any problem not computable by a Turing machine is not
"computable" in the absolute sense (at least, absolute
relative to humans, see
Section 3).
* For any problem that we believe is computable, we should be able
to construct a Turing machine which computes it. To put it in
Turing's wording:
>
> It is my contention that [the] operations [of a computing machine]
> include all those which are used in the computation of a number.
> (Turing 1936-7: 231)
>
In this section, examples will be given which illustrate the
computational power and boundaries of the Turing machine
model. Section 3 then discusses some philosophical issues related to
Turing's thesis.
### 2.1 Some (Simple) Examples
In order to speak about a Turing machine that does something useful
from the human perspective, we will have to provide an interpretation
of the symbols recorded on the tape. For example, if we want to design
a machine which will compute some mathematical function, addition say,
then we will need to describe how to interpret the ones and zeros
appearing on the tape as numbers.
In the examples that follow we will represent the number *n* as
a block of \(n+1\) copies of the symbol '1' on the tape.
Thus we will represent the number 0 as a single '1' and
the number 3 as a block of four '1's. This is called
*unary notation*.
We will also have to make some assumptions about the configuration of
the tape when the machine is started, and when it finishes, in order
to interpret the computation. We will assume that if the function to
be computed requires *n* arguments, then the Turing machine
will start with its head scanning the leftmost '1' of a
sequence of *n* blocks of '1's. The blocks of
'1's representing the arguments must be separated by a
single occurrence of the symbol '0'. For example, to
compute the sum \(3+4\), a Turing machine will start in the
configuration shown in
Figure 3.
![a horizontal strip of concatenated boxed 0s and 1s with the left and right ends of the strip being ragged. The numbers from left to right are 00111101111100000000. The first 1 from the left is red with a label pointing to it stating \(q_{1} : j R q_{1}\) ](Initial.svg)
Figure 3: Initial configuration for a
computation over two numbers *n* and *m*
Here the supposed addition machine takes two arguments representing
the numbers to be added, starting at the leftmost 1 of the first
argument. The arguments are separated by a single 0 as required, and
the first block contains four '1's, representing the
number 3, and the second contains five '1's, representing
the number 4.
A machine must finish in standard configuration too. There must be a
single block of symbols (a sequence of 1s representing some number or
a symbol representing another kind of output) and the machine must be
scanning the leftmost symbol of that sequence. If the machine
correctly computes the function then this block must represent the
correct answer.
Adopting this convention for the terminating configuration of a Turing
machine means that we can compose machines by identifying the final
state of one machine with the initial state of the next.
##### Addition of two numbers *n* and *m*
Table 4
gives the transition table of a Turing machine \(T\_{\textrm{Add}\_2}\)
which adds two natural numbers *n* and *m*. We assume the
machine starts in state \(q\_1\) scanning the leftmost 1 of
\(n+1\).
Table 4: Transition table for
\(T\_{\textrm{Add}\_2}\)
| | | |
| --- | --- | --- |
| | 0 | 1 |
| \(q\_1\) | / | \(0\opR q\_2\) |
| \(q\_2\) | \(1\opL q\_3\) | \(1\opR q\_2\) |
| \(q\_3\) | \(0\opR q\_{4}\) | \(1\opL q\_3\) |
| \(q\_4\) | \(/\) | \(0\opR q\_{\textrm{halt}}\) |
The idea of doing an addition with Turing machines when using unary
representation is to shift the leftmost number *n* one square to
the right. This is achieved by erasing the leftmost 1 of \(n +1\)
(this is done in state \(q\_1\)) and then setting the 0 between \(n+1\)
and \(m+1\) to 1 (state \(q\_2\)). We then have \(n + m + 2\) and so we
still need to erase one additional 1. This is done by erasing the
leftmost 1 (states \(q\_3\) and \(q\_4\)).
Figure 4
shows this computation for \(3 + 4\).
![a horizontal strip of concatenated boxed 0s and 1s with the left and right ends of the strip being ragged. The numbers from left to right are 00111101111100000000. The first 1 from the left is red and labeled \(q_{1} 1 : 0 R q_{2}\) ](tm1.svg)
Figure 4: The computation of \(3+4\) by
\(T\_{\textrm{Add}\_2}\)
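The computation can also be checked by simulating Table 4 directly (a sketch of ours; the "/" entries of the table are simply omitted, so the machine halts when no action is defined):

```python
# Transition table of T_Add_2, transcribed from Table 4.
delta = {
    ('q1', 1): (0, 'R', 'q2'),
    ('q2', 0): (1, 'L', 'q3'), ('q2', 1): (1, 'R', 'q2'),
    ('q3', 0): (0, 'R', 'q4'), ('q3', 1): (1, 'L', 'q3'),
    ('q4', 1): (0, 'R', 'qhalt'),
}

def unary_tape(*numbers):
    """Initial tape: blocks of n+1 ones separated by single 0s."""
    tape, pos = {}, 0
    for n in numbers:
        for _ in range(n + 1):
            tape[pos] = 1
            pos += 1
        pos += 1                      # the separating 0
    return tape

def run(delta, tape, head=0, state='q1'):
    while (state, tape.get(head, 0)) in delta:
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return tape, head, state

tape, head, state = run(delta, unary_tape(3, 4))
# the tape now carries a single block of 8 ones, i.e. the number 7,
# with the head scanning its leftmost 1
```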
##### Addition of *n* numbers
We can generalize \(T\_{\textrm{Add}\_2}\) to a Turing machine
\(T\_{\textrm{Add}\_i}\) for the addition of an arbitrary number
*i* of integers \(n\_1, n\_2,\ldots, n\_i\). We assume
again that the machine starts in state \(q\_1\) scanning the leftmost 1
of \(n\_1+1\). The transition table for such a machine
\(T\_{\textrm{Add}\_i}\) is given in
Table 5.
Table 5: Transition table for
\(T\_{\textrm{Add}\_i}\)
| | | |
| --- | --- | --- |
| | 0 | 1 |
| \(q\_1\) | / | \(0\opR q\_2\) |
| \(q\_2\) | \(1\opR q\_3\) | \(1\opR q\_2\) |
| \(q\_3\) | \(0\opL q\_{6}\) | \(1\opL q\_4\) |
| \(q\_4\) | \(0\opR q\_5\) | \(1\opL q\_4\) |
| \(q\_5\) | / | \(0\opR q\_1\) |
| \(q\_6\) | \(0\opR q\_{\textrm{halt}}\) | \(1\opL q\_6\) |
The machine \(T\_{\textrm{Add}\_i}\) uses the principle of shifting the
addends to the right which was also used for \(T\_{\textrm{Add}\_2}\).
More particularly, \(T\_{\textrm{Add}\_i}\) computes the sum of \(n\_1 + 1\),
\(n\_2 + 1,\ldots, n\_i+1\) from left to right, viz. it merges the
blocks one by one as follows:
\[\begin{align}
N\_1 & = n\_1 + n\_2 + 1\\
N\_2 & = N\_1 + n\_3 \\
N\_3 &= N\_2 + n\_4\\
&\vdots\\
N\_{i-1} &= N\_{i-2} + n\_i
\end{align}
\]
where \(N\_j\) is the number of 1s in the single block obtained after the
addend \(n\_{j+1}+1\) has been merged in. The final block thus contains
\(N\_{i-1} = n\_1 + n\_2 + \ldots + n\_i + 1\) 1s and so represents the sum
in unary notation.
The most important difference between \(T\_{\textrm{Add}\_2}\) and
\(T\_{\textrm{Add}\_i}\) is that \(T\_{\textrm{Add}\_i}\) needs to verify,
after each merge, whether the block \(N\_j\) computed so far is already the
final block \(N\_{i-1}\). This is achieved by checking whether the first 0 to
the right of \(N\_j\) is followed by another 0 or not (states \(q\_2\) and
\(q\_3\)). If it is not, then there is at least one more addend to be merged
in. Note that, as was the case for \(T\_{\textrm{Add}\_2}\), the machine
needs to erase one additional 1 (the leftmost 1 of the newly merged block),
which is done via state \(q\_5\). It then moves back to state \(q\_1\).
If, on the other hand, \(N\_j = N\_{i-1}\), the machine moves to the leftmost
1 of the final block and halts.
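Table 5 can likewise be checked by direct simulation (a sketch of ours, here for the sum \(1 + 2 + 3\); the "/" entries are omitted, so the machine halts when no action is defined):

```python
# Transition table of T_Add_i, transcribed from Table 5.
delta = {
    ('q1', 1): (0, 'R', 'q2'),
    ('q2', 0): (1, 'R', 'q3'), ('q2', 1): (1, 'R', 'q2'),
    ('q3', 0): (0, 'L', 'q6'), ('q3', 1): (1, 'L', 'q4'),
    ('q4', 0): (0, 'R', 'q5'), ('q4', 1): (1, 'L', 'q4'),
    ('q5', 1): (0, 'R', 'q1'),
    ('q6', 0): (0, 'R', 'qhalt'), ('q6', 1): (1, 'L', 'q6'),
}

def unary_tape(*numbers):
    """Initial tape: blocks of n+1 ones separated by single 0s."""
    tape, pos = {}, 0
    for n in numbers:
        for _ in range(n + 1):
            tape[pos] = 1
            pos += 1
        pos += 1                      # the separating 0
    return tape

def run(delta, tape, head=0, state='q1'):
    while (state, tape.get(head, 0)) in delta:
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return tape, head, state

tape, head, state = run(delta, unary_tape(1, 2, 3))
# a single block of (1+2+3)+1 = 7 ones remains, head on its leftmost 1
```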
### 2.2 Computable Numbers and Problems
Turing's original paper is concerned with *computable (real)
numbers*. A (real) number is Turing computable if there exists a
Turing machine which computes an arbitrarily precise approximation to
that number. All of the algebraic numbers (roots of polynomials with
rational coefficients) and many transcendental mathematical
constants, such as *e* and \(\pi\), are Turing-computable.
Turing gave several examples of classes of numbers computable by
Turing machines (see section 10 *Examples of large classes of
numbers which are computable* of Turing 1936-7) as a
heuristic argument showing that a wide diversity of classes of numbers
can be computed by Turing machines.
One might wonder, however, in what sense computation with numbers, viz.
calculation, captures *non-numerical* but computable problems
and so how Turing machines capture *all* general and effective
procedures which determine whether something is the case or not.
Examples of such problems are:
* "decide for any given *x* whether or not *x*
denotes a prime"
* "decide for any given *x* whether or not *x* is
the description of a Turing machine".
In general, these problems are of the form:
* "decide for any given *x* whether or not *x* has
property *X*"
An important challenge of both theoretical and concrete advances in
computing (often at the interface with other disciplines) has become
the problem of providing an interpretation of *X* such that it
can be tackled computationally. To give just one concrete example, in
daily computational practices it might be important to have a method
to decide for any digital "source" whether or not it can
be trusted and so one needs a computational interpretation of
trust.
The *characteristic function* of a predicate is a function
which has the value TRUE or FALSE when given appropriate arguments. In
order for such functions to be computable, Turing relied on
Gödel's insight that these kinds of problems can be encoded
as problems about numbers (see
Gödel's incompleteness theorem
and the next
Sec. 2.3).
In Turing's wording:
>
>
> The expression "there is a general process for determining
> ..." has been used [here] [...] as equivalent to
> "there is a machine which will determine ...". This
> usage can be justified if and only if we can justify our definition of
> "computable". For each of these "general
> process" problems can be expressed as a problem concerning a
> general process for determining whether a given integer *n* has a
> property \(G(n)\) [e.g. \(G(n)\) might mean "*n* is
> satisfactory" or "*n* is the Gödel
> representation of a provable formula"], and this is equivalent
> to computing a number whose *n*-th figure is 1 if \(G(n)\) is
> true and 0 if it is false. (1936-7: 248)
>
>
>
It is the possibility of coding the "general process"
problems as numerical problems that is essential to Turing's
construction of the universal Turing machine and its use within a
proof that shows there are problems that cannot be computed by a
Turing machine.
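Turing's remark can be made concrete in miniature (an illustration of ours, not the entry's): take \(G(n)\) to mean "*n* is prime"; the "general process" of deciding \(G\) is equivalent to computing the real number whose *n*-th figure is 1 if \(G(n)\) is true and 0 otherwise.

```python
def G(n):
    """Characteristic function of the predicate 'n is prime'."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def figures(k):
    """The first k binary figures of the real number encoding G."""
    return ''.join('1' if G(n) else '0' for n in range(1, k + 1))
```

Here `figures(10)` gives `0110101000`: figures 2, 3, 5 and 7 are 1, exactly the primes up to 10.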
### 2.3 Turing's Universal Machine
The universal Turing machine, which was constructed to prove the
uncomputability of certain problems, is, roughly speaking, a Turing
machine that is able to compute what any other Turing machine
computes. Assuming that the Turing machine notion fully captures
computability (and so that Turing's thesis is valid), it is
implied that anything which can be "computed", can also be
computed by that one universal machine. Conversely, any problem that
is not computable by the universal machine is considered to be
uncomputable.
This is the rhetorical and theoretical power of the universal machine
concept, viz. that one relatively simple formal device captures all
"*the possible processes which can be carried out in
computing a number*" (Turing 1936-7). It is also one
of the main reasons why Turing has been *retrospectively*
identified as one of the founding fathers of computer science (see
Section 5).
So how to construct a universal machine *U* out of the set of
basic operations we have at our disposal? Turing's approach is
the construction of a machine *U* which is able to (1)
'understand' the program of *any* other machine
\(T\_{n}\) and, based on that "understanding", (2)
'mimic' the behavior of \(T\_{n}\). To this end, a method
is needed which allows one to treat the program and the behavior of
\(T\_n\) interchangeably since both aspects are manipulated on the same
tape and by the same machine. This is achieved by Turing in two basic
steps: the development of (1) a notational method and (2) a set of
elementary functions which treats that notation--independent of
whether it is formalizing the program or the behavior of
\(T\_n\)--as text to be compared, copied down, erased, etc. In
other words, Turing develops a technique that allows him to treat program
and behavior on the same level.
#### 2.3.1 Interchangeability of program and behavior: a notation
Given some machine \(T\_n\), Turing's basic idea is to construct
a machine \(T\_n'\) which, rather than directly printing the output
of \(T\_n\), prints out the successive complete configurations or
instantaneous descriptions of \(T\_n\). In order to achieve this,
\(T\_n'\):
>
>
> [...] could be made to depend on having the rules of operation
> [...] of [\(T\_n\)] written somewhere within itself [...]
> each step could be carried out by referring to these rules. (Turing
> 1936-7: 242)
>
>
>
In other words, \(T\_n'\) prints out the successive complete
configurations of \(T\_n\) by having the program of \(T\_n\) written on
its tape. Thus, Turing needs a notational method which makes it
possible to 'capture' two different aspects of a Turing
machine on one and the same tape in such a way they can be treated
*by the same machine*, viz.:
* (1) its description in
terms of *what it should do*--the quintuple
notation
* (2) its description in
terms of *what it is doing*--the complete configuration
notation
Thus, a first and perhaps most essential step, in the construction of
*U* are the quintuple and complete configuration notation and
the idea of putting them on the same tape. More particularly, the tape
is divided into two regions which we will call the *A* and
*B* region here. The *A* region contains a notation of the
'program' of \(T\_n\) and the *B* region a notation
for the successive complete configurations of \(T\_n\). In
Turing's paper they are separated by an additional symbol
"::".
To simplify the construction of *U* and in order to encode any
Turing machine as a unique number, Turing develops a third notation
which permits the quintuples and complete configurations to be expressed
with letters only. (Note that we use Turing's original encoding; of
course, there is a broad variety of possible encodings, including binary
encodings.) The encoding is determined by:
* Replacing each state \(q\_i\) in a quintuple of \(T\_n\) by \[D\underbrace{A\ldots A}\_i,\] so, for instance \(q\_3\) becomes \(DAAA\).
* Replacing each symbol \(S\_{j}\) in a quintuple of \(T\_n\) by \[D\underbrace{C\ldots C}\_j,\] so, for instance, \(S\_1\) becomes \(DC\).
Using this method, each quintuple of some Turing machine \(T\_n\) can
be expressed in terms of a sequence of capital letters and so the
'program' of any machine \(T\_{n}\) can be expressed by the
set of symbols *A, C, D, R, L, N* and ;. This is the so-called
*Standard Description* (S.D.) of a Turing machine. Thus, for
instance, the S.D. of \(T\_{\textrm{Simple}}\) is:
;*DADDRDAA*;*DADCDRDAA*;*DAADDCRDA*;*DAADCDCRDA*
This is, essentially, Turing's version of
Gödel numbering.
Indeed, as Turing shows, one can easily get a numerical description
representation or *Description Number* (D.N.) of a Turing
machine \(T\_{n}\) by replacing:
* "A" by "1"
* "C" by "2"
* "D" by "3"
* "L" by "4"
* "R" by "5"
* "N" by "6"
* ";" by "7"
Thus, the D.N. of \(T\_{\textrm{Simple}}\) is:
7313353117313235311731133253173113232531
Note that every machine \(T\_n\) has a unique D.N.; a D.N. represents
one and only one machine.
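The encoding is mechanical enough to be written out in a few lines (a sketch of ours; states and symbols are given as indices, so \(q\_3\) is 3 and \(S\_1\) is 1):

```python
def encode(quintuples):
    """Turing's letter encoding: q_i becomes D A^i, S_j becomes D C^j.
    quintuples: list of (state, scanned, printed, move, next_state)
    with move in {'L', 'R', 'N'}. Returns (S.D., D.N.)."""
    def q(i):
        return 'D' + 'A' * i
    def s(j):
        return 'D' + 'C' * j
    sd = ''.join(';' + q(i) + s(j) + s(jp) + m + q(k)
                 for (i, j, jp, m, k) in quintuples)
    digits = {'A': '1', 'C': '2', 'D': '3',
              'L': '4', 'R': '5', 'N': '6', ';': '7'}
    return sd, ''.join(digits[c] for c in sd)

# T_Simple's quintuples: q1 S0 S0 R q2; q1 S1 S0 R q2;
#                        q2 S0 S1 R q1; q2 S1 S1 R q1
sd, dn = encode([(1, 0, 0, 'R', 2), (1, 1, 0, 'R', 2),
                 (2, 0, 1, 'R', 1), (2, 1, 1, 'R', 1)])
# sd == ';DADDRDAA;DADCDRDAA;DAADDCRDA;DAADCDCRDA'
```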
Clearly, the method used to determine the \(S.D.\) of some machine
\(T\_n\) can also be used to write out the successive complete
configurations of \(T\_n\). Using ":" as a separator
between successive complete configurations, the first few complete
configurations of \(T\_{\textrm{Simple}}\) are:
:*DAD*:*DDAAD*:*DDCDAD*:*DDCDDAAD*:*DDCDDCDAD*
#### 2.3.2 Interchangeability of program and behavior: a basic set of functions
Having a notational method to write the program and successive
complete configurations of some machine \(T\_n\) on one and the same
tape of some other machine \(T\_n'\) is the first step in
Turing's construction of *U*. However, *U* should also
be able to "emulate" the program of \(T\_n\) as written in
region *A* so that it can actually write out its successive
complete configurations in region *B*. Moreover it should be
possible to "take out and exchange[...] [the rules of operations of some Turing machine] for
others" (Turing 1936-7: 242). Viz.,
it should be able not just to calculate but also to compute, an issue
that was also dealt with by others such as Church, Gödel and Post
using their own formal devices. It should, for instance, be able to
"recognize" whether it is in region *A* or *B*
and it should be able to determine whether or not a certain sequence
of symbols is the next state \(q\_i\) which needs to be executed.
This is achieved by Turing through the construction of a sequence of
Turing computable problems such as:
* Finding the leftmost or rightmost occurrence of a sequence of
symbols
* Marking a sequence of symbols by some symbol *a* (remember
that Turing uses two kinds of alternating squares)
* Comparing two symbol sequences
* Copying a symbol sequence
Turing develops a notational technique, called *skeleton
tables*, for these functions which serves as a kind of shorthand
notation for a complete Turing machine table but can be easily used to
construct more complicated machines from previous ones. The technique
is quite reminiscent of the recursive technique of composition (see:
recursive functions).
To illustrate how such functions are Turing computable, we discuss one
such function in more detail, viz. the compare function. It is
constructed on the basis of a number of other Turing computable
functions which are built on top of each other. In order to understand
how these functions work, remember that Turing used a system of
alternating *F* and *E*-squares where the *F*-squares
contain the actual quintuples and complete configurations and the
*E*-squares are used as a way to mark off certain parts of the
machine tape. For the comparing of two sequences \(S\_1\) and \(S\_2\),
each symbol of \(S\_1\) will be marked by some symbol *a* and each
symbol of \(S\_2\) will be marked by some symbol *b*.
Turing defined nine different functions to show how the compare
function can be computed with Turing machines:
* FIND\((q\_{i}, q\_{j},a)\): this machine function searches for the
leftmost occurrence of *a*. If *a* is found, the machine
moves to state \(q\_{i}\) else it moves to state \(q\_{j}\). This is
achieved by having the machine first move to the beginning of the tape
(indicated by a special mark) and then to have it move right until it
finds *a* or reaches the rightmost symbol on the tape.
* FINDL\((q\_{i}, q\_{j},a)\): the same as FIND but after *a* has
been found, the machine moves one square to the left. This is used in
functions which need to compute on the symbols in *F*-squares
which are marked by symbols *a* in the *E*-squares.
* ERASE\((q\_{i},q\_{j},a)\): the machine computes FIND. If *a* is
found, it erases *a* and goes to state \(q\_{i}\) else it goes to
state \(q\_{j}\)
* ERASE\_ALL\((q\_j,a) = \textrm{ERASE}(\textrm{ERASE}\\_\textrm{ALL},
q\_j,a)\): the machine computes ERASE on *a* repeatedly until all
*a*'s have been erased. Then it moves to \(q\_{j}\).
* EQUAL\((q\_i,q\_j,a)\): the machine checks whether or not the
current symbol is *a*. If yes, it moves to state \(q\_i\) else it
moves to state \(q\_j\)
* CMP\_XY\((q\_i,q\_j,b) = \textrm{FINDL(EQUAL}(q\_i,q\_j,x), q\_j, b)\):
whatever the current symbol *x*, the machine computes FINDL on
*b* (and so looks for the symbol marked by *b*). If there is
a symbol *y* marked with *b*, the machine computes
\(\textrm{EQUAL}\) on *x* and *y*, else, the machine goes to
state \(q\_j\). In other words, CMP\_XY\((q\_i,q\_j,b)\) compares whether
the current symbol is the same as the leftmost symbol marked
*b*.
* COMPARE\_MARKED\((q\_i,q\_j,q\_n,a,b)\): the machine checks whether
the leftmost symbols marked *a* and *b* respectively are the
same. If there is no symbol marked *a* nor *b*, the machine
goes to state \(q\_{n}\); if there is a symbol marked *a* and one
marked *b* and they are the same, the machine goes to state
\(q\_i\), else the machine goes to state \(q\_j\). The function is
computed as \(\textrm{FINDL(CMP}\\_XY(q\_i,q\_j,b),
\textrm{FIND}(q\_j,q\_n,b),a)\)
* \(\textrm{COMPARE}\\_\textrm{ERASE}(q\_i,q\_j,q\_n,a,b)\): the same as
COMPARE\_MARKED but when the symbols marked *a* and *b* are the same,
the marks *a* and *b* are erased. This is achieved by
computing \(\textrm{ERASE}\) first on *a* and then on
*b*.
* \(\textrm{COMPARE}\\_\textrm{ALL}(q\_j,q\_n,a,b)\) The machine compares
the sequences *A* and *B* marked with *a* and *b*
respectively. This is done by repeatedly computing COMPARE\_ERASE on *a* and
*b*. If *A* and *B* are equal, all *a*'s and
*b*'s will have been erased and the machine moves to state
\(q\_j\), else, it will move to state \(q\_n\). It is computed by
\[\textrm{COMPARE}\\_\textrm{ERASE}(\textrm{COMPARE}\\_\textrm{ALL}(q\_j,q\_n,a,b),q\_j,q\_n,a,b)\]
and so by recursively calling
\(\textrm{COMPARE}\\_\textrm{ALL}\).
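The marking machinery can be modelled at a high level (a sketch of ours, much simpler than Turing's skeleton tables): the tape is a list of (symbol, mark) pairs standing for *F*-square/*E*-square pairs, and COMPARE\_ALL repeatedly compares and erases the leftmost symbols marked *a* and *b*, as in COMPARE\_ERASE.

```python
def find(tape, mark):
    """Index of the leftmost square carrying `mark`, or None (cf. FIND)."""
    for i, (_, m) in enumerate(tape):
        if m == mark:
            return i
    return None

def compare_all(tape, a, b):
    """True iff the sequences marked a and b are equal (cf. COMPARE_ALL);
    erases the marks as it goes, as COMPARE_ERASE does."""
    while True:
        i, j = find(tape, a), find(tape, b)
        if i is None and j is None:
            return True                  # both sequences exhausted together
        if i is None or j is None:
            return False                 # one sequence is longer
        if tape[i][0] != tape[j][0]:
            return False                 # mismatch
        tape[i] = (tape[i][0], None)     # erase mark a
        tape[j] = (tape[j][0], None)     # erase mark b
```

For example, on the tape `[('D','a'), ('A','a'), ('D','b'), ('A','b')]` the sequences marked *a* and *b* are both `DA`, so `compare_all` succeeds and all marks are erased.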
In a similar manner, Turing defines the following functions:
* \(\textrm{COPY}(q\_i,a)\): copy the sequence of symbols marked with
*a*'s to the right of the last complete configuration and
erase the marks.
* \(\textrm{COPY}\_{n}(q\_i, a\_1,a\_2,\ldots ,a\_n)\): copy down the
sequences marked \(a\_1\) to \(a\_n\) to the right of the last complete
configuration and erase all marks \(a\_i\).
* \(\textrm{REPLACE}(q\_i, a,b)\): replace all letters *a* by
*b*
* \(\textrm{MARK\_NEXT\_CONFIG}(q\_i,a)\): mark the first configuration
\(q\_iS\_j\) to the right of the machine's head with the letter
*a*.
* \(\textrm{FIND}\\_\textrm{RIGHT}(q\_i,a)\): find the rightmost
symbol *a*.
Using the basic functions COPY, REPLACE and COMPARE, Turing constructs
a universal Turing machine.
Below is an outline of the universal Turing machine indicating how
these basic functions indeed make possible universal computation. It
is assumed that upon initialization, *U* has on its tape the S.D.
of some Turing machine \(T\_n\). Remember that Turing uses the system
of alternating *F* and *E*-squares and so, for instance, the
S.D. of \(T\_{\textrm{Simple}}\) will be written on the tape of
*U* as:
;\_*D*\_*A*\_*D*\_*D*\_*R*\_*D*\_*A*\_*A*\_*;*\_*D*\_*A*\_*D*\_*C*\_*D*\_*R*\_*D*\_*A*\_*A*\_*;*\_*D*\_*A*\_*A*\_*D*\_*D*\_*C*\_*R*\_*D*\_*A*\_*;*\_*D*\_*A*\_*A*\_*D*\_*C*\_*D*\_*C*\_*R*\_*D*\_*A*\_
where "\_" indicates an unmarked *E*-square.
* INIT: To the right of the rightmost quintuple of
\(T\_n\), *U* prints ::\_:\_*D*\_*A*\_, where \_
indicates an unmarked *E*-square.
* FIND\_NEXT\_STATE: The machine first marks (1) with *y* the
configuration \(q\_{CC,i}S\_{CC,j}\) of the rightmost (and so last)
complete configuration computed by *U* in the *B* part of
the tape and (2) with *x* the configuration \(q\_{q,m}S\_{q,n}\) of
the leftmost quintuple which is not preceded by a marked (with the
letter *z*) semicolon in the *A* part of the tape. The two
configurations are compared. If they are identical, the machine moves
to MARK\_OPERATIONS, if not, it marks the semicolon preceding
\(q\_{q,m}S\_{q,n}\) with *z* and goes to FIND\_NEXT\_STATE. This is
easily achieved using the function COMPARE\_ALL which means that,
whatever the outcome of the comparison, the marks *x* and
*y* will be erased. For instance, suppose that \(T\_n =
T\_{\textrm{Simple}}\) and that the last complete configuration of
\(T\_{\textrm{Simple}}\) as computed by *U* is:
\[\tag{1}
\label{CC\_univ}
:\\_\underbrace{D\\_}\_{S\_0}\underbrace{D\\_C\\_}\_{S\_1}\underbrace{D\\_}\_{S\_0}\textcolor{orange}{\underbrace{D\\_A\\_A\\_}\_{q\_{2}}\underbrace{D\\_}\_{S\_0}}
\]
Then *U* will move to region *A* and determine that the
corresponding quintuple is:
\[\tag{2}\label{quint\_univ}
\textcolor{orange}{\underbrace{D\\_A\\_A\\_}\_{q\_{2}}\underbrace{D\\_}\_{S\_{0}}}\underbrace{D\\_C\\_}\_{S\_1}\underbrace{R\\_}\underbrace{D\\_A\\_}\_{q\_1}\]
* MARK\_OPERATIONS: The machine *U* marks the operations that it
needs to execute in order to compute the next complete configuration
of \(T\_n\). The printing and move (L,R, N) operations are marked with
*u* and the next state with *y*. All marks *z* are
erased. Continuing with our example, *U* will mark
\(\eqref{quint\_univ}\) as follows:
\[D\\_A\\_A\\_D\\_\textcolor{magenta}{DuCuRu}\textcolor{green}{DyAy}\]
* MARK\_COMPCONFIG: The last complete configuration of \(T\_n\) as
computed by *U* is marked into four regions: the configuration
\(q\_{CC,i}S\_{CC,j}\) itself is left unmarked; the symbol just
preceding it is marked with an *x* and the remaining symbols to
the left are marked with *v*. Finally, all symbols to the right,
if any, are marked with *w* and a ":" is printed to
the right of the rightmost symbol in order to indicate the beginning
of the next complete configuration of \(T\_n\) to be computed by
*U*. Continuing with our example, \(\eqref{CC\_univ}\) will be
marked as follows by *U*:
\[\textcolor{red}{\underbrace{Dv}\_{S\_0}\underbrace{DvCv}\_{S\_1}}\textcolor{blue}{\underbrace{Dx}\_{S\_0}}\underbrace{D\\_A\\_A\\_}\_{q\_2}\underbrace{D\\_}\_{S\_0}:\\_\]
*U* then goes to PRINT
* PRINT. It is determined if, in the instructions that have been
marked in MARK\_OPERATIONS, there is an operation Print 0 or Print 1.
If that is the case, \(0:\) respectively \(1:\) is printed to the
right of the last complete configuration. This is not a necessary
function but Turing insisted on having *U* print out not just the
(coded) complete configurations computed by \(T\_n\) but also the
actual (binary) real number computed by \(T\_n\).
* PRINT\_COMPLETE\_CONFIGURATION. *U* prints the next complete
configuration and erases all marks *u, v, w, x, y*. It then
returns to FIND\_NEXT\_STATE. *U* first searches for the rightmost
letter *u* to check which move operation (*R*, *L*, or *N*) is
needed, and erases that mark *u*. Depending on whether the move is
*L*, *R* or *N*, it then writes down the next complete
configuration by applying COPY\(\_5\) to *u, v, w, x, y*. The move
operation (*L, R, N*) is accounted for by the particular
combination of *u, v, w, x, y*:
\[\begin{array}{ll}
\textrm{When ~} L: &
\textrm{COPY}\_5(\textrm{FIND}\\_\textrm{NEXT}\\_\textrm{STATE},
\textcolor{red}{v},\textcolor{green}{y},\textcolor{blue}{x},\textcolor{magenta}{u},\textcolor{RawSienna}{w})\\
\textrm{When ~} R: &
\textrm{COPY}\_5(\textrm{FIND}\\_\textrm{NEXT}\\_\textrm{STATE},
\textcolor{red}{v},\textcolor{blue}{x},\textcolor{magenta}{u},\textcolor{green}{y},\textcolor{RawSienna}{w})\\
\textrm{When ~} N: &
\textrm{COPY}\_5(\textrm{FIND}\\_\textrm{NEXT}\\_\textrm{STATE},
\textcolor{red}{v},\textcolor{blue}{x},\textcolor{green}{y},\textcolor{magenta}{u},\textcolor{RawSienna}{w})
\end{array}\]
Following our example, since \(T\_{\textrm{Simple}}\)
needs to move right, the new rightmost complete configuration of
\(T\_{\textrm{Simple}}\) written on the tape of *U* is:
\[\textcolor{red}{\underbrace{D\\_}\_{S\_0}\underbrace{D\\_C\\_}\_{S\_1}}\textcolor{blue}{\underbrace{D\\_}\_{S\_0}}\textcolor{magenta}{\underbrace{D\\_C\\_}\_{S\_1}}\textcolor{green}{\underbrace{D\\_A\\_}\_{q\_1}}
\]
Since we have that for this complete configuration the square being
scanned by \(T\_{\textrm{Simple}}\) is one that was not included in the
previous complete configuration (viz. \(T\_{\textrm{Simple}}\) has
reached beyond the rightmost previous point) the complete
configuration as written out by *U* is in fact incomplete. This
small defect was corrected by Post (1947) by including an additional instruction in the function
used to mark the complete configuration in the next round.
As is clear, Turing's universal machine indeed requires that
program and 'data' produced by that program are
manipulated interchangeably, viz. the program and its productions are
put next to each other and treated in the same manner, as sequences of
letters to be copied, marked, erased and compared.
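The essence of this program-as-data idea can be sketched in modern terms (ours, far simpler than Turing's own construction): one fixed routine that receives another machine purely as its Standard Description string, decodes the quintuples, and then mimics the machine.

```python
import re

def decode(sd):
    """Turn an S.D. (D A^i = q_i, D C^j = S_j) back into a transition
    table, i.e. a dict (state, scanned) -> (printed, move, next_state)."""
    program = {}
    for quint in sd.split(';')[1:]:
        m = re.fullmatch(r'D(A*)D(C*)D(C*)([LRN])D(A*)', quint)
        state, scanned = len(m.group(1)), len(m.group(2))
        printed, move, nxt = len(m.group(3)), m.group(4), len(m.group(5))
        program[(state, scanned)] = (printed, move, nxt)
    return program

def universal(sd, steps):
    """Mimic, for `steps` steps, the machine whose S.D. is given."""
    program, tape, head, state = decode(sd), {}, 0, 1
    for _ in range(steps):
        scanned = tape.get(head, 0)
        if (state, scanned) not in program:
            break                      # no action defined: halt
        write, move, state = program[(state, scanned)]
        tape[head] = write
        head += {'R': 1, 'L': -1, 'N': 0}[move]
    return tape, state

tape, state = universal(';DADDRDAA;DADCDRDAA;DAADDCRDA;DAADCDCRDA', 6)
# after six steps T_Simple has printed 0 1 0 1 0 1
```

The point is exactly the one made above: the S.D. is at once a text to be parsed and the program that drives the computation.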
Turing's particular construction is quite intricate with its
reliance on the *F* and *E*-squares, the use of a rather
large set of symbols and a rather arcane notation used to describe the
different functions discussed above. Since 1936 several modifications
and simplifications have been implemented. The removal of the
difference between *F* and *E*-squares was already discussed
in
Section 1.2
and it was proven by Shannon that any Turing machine, including the
universal machine, can be reduced to a binary Turing machine (Shannon
1956). Since the 1950s, there has been quite some research on what
could be the smallest possible universal devices (with respect to the
number of states and symbols) and quite some "small"
universal Turing machines have been found. These results are usually
achieved by relying on other equivalent models of computability such
as, for instance, tag systems. For a survey on research into small
universal devices (see Margenstern 2000; Woods & Neary 2009).
### 2.4 The Halting Problem and the Entscheidungsproblem
As explained, the purpose of Turing's paper was to show that the
Entscheidungsproblem for first-order logic is not computable. The same
result was achieved independently by Church (1936a, 1936b) using a different kind of formal device which is logically
equivalent to a Turing machine (see
Sec. 4).
The result went very much against what Hilbert had hoped to achieve
with his finitary and formalist program. Indeed, together with
Gödel's incompleteness results, it shattered
Hilbert's dream of a mathematics free of
*Ignorabimus*, a dream explicitly expressed in the
following words of Hilbert:
>
>
> The true reason why Comte could not find an unsolvable problem, lies
> in my opinion in the assertion that there exists no unsolvable
> problem. Instead of the stupid Ignorabimus, our solution should be: We
> must know. We shall know. (1930: 963) [translation by the author]
>
>
>
Note that the solvability Hilbert is referring to here concerns the
solvability of mathematical problems in general and not merely
mechanical solvability. It is shown, however, in Mancosu et al. 2009 (p.
94) that this general aim of solving every mathematical problem
underpins two particular convictions of Hilbert, namely that (1) the
axioms of number theory are complete and (2) that there are no
undecidable problems in mathematics.
#### 2.4.1 Direct and indirect proofs of uncomputable decision problems
So, how can one show, for a particular decision problem
\(\textrm{D}\_i\), that it is not computable? There are two main
methods:
* **Indirect proof:** take some problem
\(\textrm{D}\_{\textrm{uncomp}}\) which is already known to be
uncomputable and show that the problem "reduces" to
\(\textrm{D}\_{i}\).
* **Direct proof:** prove the uncomputability of
\(\textrm{D}\_{i}\) directly by assuming some version of the
Church-Turing thesis.
Today one usually relies on the first method, but it is evident that,
in the absence of a known problem \(\textrm{D}\_{\textrm{uncomp}}\),
Turing, as well as Church and Post (see
Sec. 4),
had to rely on the direct approach.
The notion of reducibility has its origins in the work of Turing and
Post who considered several variants of computability (Post 1947;
Turing 1939). The concept was later appropriated in the context of
computational complexity theory and is today one of the basic concepts
of both computability and computational complexity theory (Odifreddi
1989; Sipser 1996). Roughly speaking, a reduction of a problem \(D\_i\)
to a problem \(D\_j\) comes down to providing an effective procedure
for translating every instance \(d\_{i,m}\) of the problem \(D\_i\) to
an instance \(d\_{j,n}\) of \(D\_j\) in such a way that an effective
procedure for solving \(d\_{j,n}\) also yields an effective procedure
for solving \(d\_{i,m}\). In other words, if \(D\_i\) reduces to \(D\_j\)
then, if \(D\_i\) is uncomputable so is \(D\_j\). Note that the
reduction of one problem to another can also be used in decidability
proofs: if \(D\_i\) reduces to \(D\_j\) and \(D\_j\) is known to be
computable then so is \(D\_i\).
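The pattern can be sketched in a toy example (the problems and names are purely illustrative, not one of the reductions discussed in this entry): a computable translation of instances of \(D\_i\) into instances of \(D\_j\) turns any solver for \(D\_j\) into a solver for \(D\_i\).

```python
# Toy sketch of a reduction: "is n even?" reduces to "is m odd?"
# via the computable translation n -> n + 1.

def translate(n):
    # Effective procedure mapping an instance of D_i to one of D_j.
    return n + 1

def is_odd(m):
    # Assumed solver for the target problem D_j.
    return m % 2 == 1

def is_even(n):
    # Solver for D_i obtained from the solver for D_j via translate.
    return is_odd(translate(n))

assert is_even(4) and not is_even(7)
```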
In the absence of \(\textrm{D}\_{\textrm{uncomp}}\) a very
different approach was required and Church, Post and Turing each used
more or less the same approach to this end (Gandy 1988). First of all,
one needs a formalism which captures the notion of computability.
Turing proposed the Turing machine formalism to this end. A second
step is to show that there are problems that are not computable within
the formalism. To achieve this, a uniform process **U**
needs to be set up relative to the formalism which is able to compute
every computable number. One can then use (some form of)
diagonalization in combination with **U** to derive a
contradiction. Diagonalization was introduced by Cantor to show that
the set of real numbers is "uncountable" or not
denumerable. A variant of the method was also used by Gödel in
the proof of his
first incompleteness theorem.
#### 2.4.2 Turing's basic problem CIRC?, PRINT? and the Entscheidungsproblem
Recall that in Turing's original version of the Turing machine,
the machines are computing real numbers. This implied that a
"well-behaving" Turing machine should in fact never halt,
printing out an infinite sequence of figures. Such machines were
identified by Turing as *circle-free*; all other machines are
called *circular machines*. A number *n* which is the D.N.
of a circle-free machine is called *satisfactory*.
This basic difference is used in Turing's proof of the
uncomputability of:
>
>
> **CIRC?** The problem to decide for every number *n*
> whether or not it is satisfactory.
>
>
>
The proof of the uncomputability of **CIRC?** uses the
construction of a hypothetical and circle-free machine \(T\_{decide}\)
which computes the diagonal sequence of the set of all computable
numbers computed by the circle-free machines. Hence, it relies for its
construction on the universal Turing machine and a hypothetical
machine that is able to decide **CIRC?** for each number
*n* given to it. It is shown that the machine \(T\_{decide}\)
becomes a circular machine when it is provided with its own
description number, hence the assumption of a machine which is capable
of solving **CIRC?** must be false.
Based on the uncomputability of **CIRC?**, Turing then
shows that **PRINT?** is not computable either. More
particularly, he shows that if **PRINT?** were
computable, **CIRC?** would be decidable as well: he
rephrases **PRINT?** so that it becomes the
problem of deciding, for any machine, whether or not it will print an
infinity of symbols, which would amount to deciding
**CIRC?**.
Finally, based on the uncomputability of **PRINT?**
Turing shows that the Entscheidungsproblem is not decidable. This is
achieved by showing:
1. how for each Turing machine *T*, it is possible to construct
a corresponding formula **T** in first-order logic
and
2. if there is a general method for determining whether
**T** is provable, then there is a general method for
determining whether *T* will ever print 0. The latter is the problem
**PRINT?** and so cannot be decidable (provided we accept
Turing's thesis).
It thus follows from the uncomputability of **PRINT?**,
that the Entscheidungsproblem is not computable.
#### 2.4.3 The halting problem
Given Turing's focus on computable real numbers, his base
decision problem is about determining whether or not some Turing
machine will *not* halt and so is not quite the same as the
more well-known halting problem:
* **HALT?** The problem to decide for every Turing
machine *T* whether or not *T* will halt.
Turing's problem **PRINT?** is in fact very close
to **HALT?** (see Davis 1958: Chapter 5, Theorem
2.3).
A popular proof of the uncomputability of **HALT?** goes as follows. Assume that
**HALT?** is computable. Then it should be possible to
construct a Turing machine which decides, for each machine \(T\_i\) and
some input *w* for \(T\_i\) whether or not \(T\_i\) will halt on
*w*. Let us call this machine \(T\_{H}\). More particularly, we
have:
\[
T\_H(T\_i,w) = \left\{
\begin{array}{ll}
\textrm{HALT} & \textrm{if \(T\_i\) halts on } w\\
\textrm{LOOP} & \textrm{if \(T\_i\) loops on } w
\end{array} \right.
\]
We now define a second machine \(T\_D\) which relies on the assumption
that the machine \(T\_H\) can be constructed. More particularly, we
have:
\[
T\_D(T\_i,D.N.~of~ T\_i) = \left\{
\begin{array}{ll}
\textrm{HALT} & \textrm{if \(T\_i\) does not halt on its own} \\
& \qquad \textrm{description number}\\
\textrm{LOOP} & \textrm{if \(T\_i\) halts on its own} \\
& \qquad \textrm{description number}\\
\end{array}
\right.
\]
If we now set \(T\_i\) to \(T\_D\) we end up with a contradiction: if
\(T\_D\) halts it means that \(T\_D\) does not halt and vice versa. A
popular but quite informal variant of this proof was given by
Christopher Strachey in the context of programming (Strachey
1965).
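The structure of the argument can be sketched in program form, in the spirit of Strachey's version (the names `halts` and `t_d` are illustrative stand-ins for \(T\_H\) and \(T\_D\); no actual decider exists, so `halts` here merely marks the assumption):

```python
def halts(program, arg):
    # Hypothetical total decider T_H: would return True iff
    # program(arg) terminates. No such function can exist.
    raise NotImplementedError("assumed for contradiction only")

def t_d(program):
    # The diagonal machine T_D: does the opposite of what the
    # decider predicts for a program run on its own description.
    if halts(program, program):
        while True:      # LOOP if program halts on itself
            pass
    else:
        return           # HALT if program loops on itself

# Setting program = t_d yields the contradiction:
# t_d(t_d) halts  iff  halts(t_d, t_d) is False  iff  t_d(t_d) loops.
```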
### 2.5 Variations on the Turing machine
As is clear from
Sections 1.1
and
1.2,
there is a variety of definitions of the Turing machine. One can use
a quintuple or quadruple notation; one can have different types of
symbols or just one; one can have a two-way infinite or a one-way
infinite tape; etc. Several other less obvious modifications have been
considered and used in the past. These modifications can be of two
kinds: generalizations or restrictions. Neither results in
"stronger" or "weaker" models: the
modified machines compute no more and no less than the Turing
computable functions. This adds to the robustness of the Turing
machine definition.
##### Binary machines
In his short 1936 note, Post considers machines that either mark or unmark a square, which
means we have only two symbols, \(S\_0\) and \(S\_1\), but he did not
prove that this formulation captures exactly the Turing computable
functions. It was Shannon who proved that for any Turing machine
*T* with *n* symbols there is a Turing machine with two
symbols that simulates *T* (Shannon 1956). He also showed that
for any Turing machine with *m* states, there is a Turing machine
with only two states that simulates it.
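The encoding idea behind Shannon's reduction can be sketched as follows (the function and its details are illustrative assumptions, not Shannon's construction): each of the *n* tape symbols is represented by a fixed-width block of binary digits, and the two-symbol machine then processes the tape block by block.

```python
import math

def binary_code(symbols):
    # Width needed so that every symbol gets a distinct binary block.
    width = max(1, math.ceil(math.log2(len(symbols))))
    return {s: format(i, f"0{width}b") for i, s in enumerate(symbols)}

code = binary_code(["S0", "S1", "S2", "S3", "S4"])
assert len(set(code.values())) == 5             # codes are distinct
assert all(len(c) == 3 for c in code.values())  # 5 symbols fit in 3 bits
```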
##### Non-erasing machines
Non-erasing machines are machines that can only overprint \(S\_0\). In
Moore 1952, it was mentioned that Shannon proved that non-erasing
machines can compute what any Turing machine computes. This result was
given in the context of the actual digital computers of the 1950s, which relied
on punched tape (on which one cannot erase). Shannon's
result however remained unpublished. It was Wang who published the
result (Wang 1957).
##### Non-writing machines
It was shown by Minsky that for every Turing machine there is a
non-writing Turing machine with two tapes that simulates it.
##### Multiple tapes
Instead of one tape one can consider a Turing machine with multiple
tapes. This turned out to be very useful in several different
contexts. For instance, Minsky used two-tape non-writing Turing machines to prove that a certain decision problem defined by Post (the decision problem for tag systems) is
non-Turing computable (Minsky 1961). Hartmanis and Stearns then,
in their founding paper for computational complexity theory, proved
that any *n*-tape Turing machine reduces to a single tape Turing
machine and so anything that can be computed by an *n*-tape or
multitape Turing machine can also be computed by a single tape Turing
machine, and conversely (Hartmanis & Stearns 1965). They used
multitape machines because they were considered to be closer to actual
digital computers.
##### *n*-dimensional Turing machines
Another variant is to consider Turing machines where the tape is not
one-dimensional but *n*-dimensional. This variant too reduces to
the one-dimensional variant.
##### Non-deterministic machines
An apparently more radical reformulation of the notion of Turing
machine is that of non-deterministic Turing machines. As explained in
Section 1.1,
one fundamental condition of Turing's machines is the so-called
determinacy condition, viz. the idea that at any given moment, the
machine's behavior is completely determined by the configuration
or state it is in and the symbol it is scanning. Next to these, Turing
also mentions the idea of choice machines for which the next state is
not completely determined by the state and symbol pair. Instead, some
external device makes a random choice of what to do next.
Non-deterministic Turing machines are a kind of choice machines: for
each state and symbol pair, the non-deterministic machine makes an
arbitrary choice between a finite (possibly zero) number of states.
Thus, unlike the computation of a deterministic Turing machine, the
computation of a non-deterministic machine is a tree of possible
configuration paths. One way to visualize the computation of a
non-deterministic Turing machine is that the machine spawns an exact
copy of itself and the tape for each alternative available transition,
and each machine continues the computation. If any of the machines
terminates successfully, then the entire computation terminates and
inherits that machine's resulting tape. Notice the word
"successfully" in the preceding sentence. In this formulation, some
states are designated as *accepting states* and when the
machine terminates in one of these states, then the computation is
successful, otherwise the computation is unsuccessful and any other
machines continue in their search for a successful outcome. The
addition of non-determinism to Turing machines does not alter the
extent of Turing-computability. Non-determinism was introduced for
finite automata in Rabin & Scott 1959, where it is also
shown that adding non-determinism does not result in more powerful
automata. Non-deterministic Turing machines are an important model in
the context of
computational complexity theory.
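The tree-of-configurations picture can be sketched by a breadth-first search that abstracts away the tape and keeps only the branching control states (the machine, state names, and step bound below are illustrative assumptions, not a full Turing machine):

```python
from collections import deque

def nd_accepts(transitions, start, accept, max_steps=1000):
    # transitions maps a state to a list of possible successor states;
    # each configuration spawns one branch per available transition.
    frontier = deque([(start, 0)])
    while frontier:
        state, steps = frontier.popleft()
        if state == accept:
            return True          # some branch terminated successfully
        if steps < max_steps:
            for nxt in transitions.get(state, []):
                frontier.append((nxt, steps + 1))
    return False

# Two choices from q0; only one path reaches the accepting state.
t = {"q0": ["q1", "q2"], "q1": ["q0"], "q2": ["acc"]}
assert nd_accepts(t, "q0", "acc")
```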
##### Weak and semi-weak machines
Weak Turing machines are machines where some word over the alphabet is
repeated infinitely often to the left and right of the input.
Semi-weak machines are machines where some word is repeated infinitely
often either to the left or right of the input. These machines are
generalizations of the standard model in which the initial tape
contains some finite word (possibly nil). They were introduced to
determine smaller universal machines. Watanabe was the first to define
a universal semi-weak machine with six states and five symbols
(Watanabe 1961). Recently, a number of researchers have determined
several small weak and semi-weak universal Turing machines (e.g.,
Woods & Neary 2007; Cook 2004).
Besides these variants on the Turing machine model, there are also
variants that result in models which capture, in some well-defined
sense, more than the (Turing)-computable functions. Examples of such
models are oracle machines (Turing 1939), infinite-time Turing
machines (Hamkins & Lewis 2008) and accelerating Turing machines
(Copeland 2002). There are various reasons for introducing such
stronger models. Some are well-known models of computability or
recursion theory and are used in the theory of higher-order recursion
and relative computability (oracle machines); others, like the
accelerating machines, were introduced in the context of
supertasks
and the idea of providing physical models that "compute"
functions which are not Turing-computable.
## 3. Philosophical Issues Related to Turing Machines
### 3.1 Human and Machine Computations
In its original context, Turing's identification between the
computable numbers and Turing machines was aimed at proving that the
Entscheidungsproblem is *not* a computable problem and so not a
so-called "general process" problem (Turing 1936-7:
248). The basic assumption to be made for this result is that our
"intuitive" notion of computability can be formally
defined as Turing computability and so that there are no
"computable" problems that are not Turing computable. But
what was Turing's "intuitive" notion of
computability and how can we be sure that it really covers all
computable problems, and, more generally, all kinds of computations?
This is a very basic question in the
philosophy of computer science.
At the time Turing was writing his paper, the modern computer was not
developed yet and so rephrasings of Turing's thesis which
identify Turing computability with computability by a modern computer
are interpretations rather than historically correct statements of
Turing's thesis. The existing computing machines at the time
Turing wrote his paper, such as the differential analyzer or desk
calculators, were quite restricted in what they could compute and were
used in a context of human computational practices (Grier 2007). It is
thus not surprising that Turing did not attempt to formalize machine
computation but rather human computation and so computable problems in
Turing's paper become computable by human means. This is very
explicit in Section 9 of Turing 1936-7 where he shows that
Turing machines are a 'natural' model of (human)
computation by analyzing the process of human computation. The
analysis results in a kind of abstract human 'computor'
who fulfills a set of different conditions that are rooted in
Turing's recognition of a set of human limitations which
restrict what we can compute (of our sensory apparatus but also of our
mental apparatus). This 'computor' computes (real) numbers
on an infinite one-dimensional tape divided into squares [Note: Turing
assumed that the reduction of the 2-dimensional character of the paper
a human mathematician usually works on "is not essential of
computation" (Turing 1936-7: 249)]. It has the following
restrictions (Gandy 1988; Sieg 1994):
* **Determinacy condition D** "The behaviour of
the computer at any moment is determined by the symbols which he is
observing and his 'state of mind' at that moment."
(Turing 1936-7: 250)
* **Boundedness condition B1** "there is a bound
B to the number of symbols or squares which the computer can observe
at one moment. If he wishes to observe more, he must use successive
observations." (Turing 1936-7: 250)
* **Boundedness condition B2** "the number of
states of mind which need be taken into account is finite"
(Turing 1936-7: 250)
* **Locality condition L1** "We may [...]
assume that the squares whose symbols are changed are always
'observed' squares." (Turing 1936-7: 250)
* **Locality condition L2** "each of the new
observed squares is within *L* squares of an immediately
previously observed square." (Turing 1936-7: 250)
It is this so-called "direct appeal to intuition"
(1936-7: 249) of Turing's analysis and resulting model
that explain why the Turing machine is today considered by many as the
best standard model of computability (for a strong statement of this
point of view, see Soare 1996). Indeed, from the above set of
conditions one can quite easily derive Turing's machines. This
is achieved basically by analyzing the restrictive conditions into
"'simple operations' which are so elementary that it
is not easy to imagine them further divided" (Turing
1936-7: 250).
Note that while Turing's analysis focuses on human computation,
the application of his identification between (human) computation and
Turing machine computation to the Entscheidungsproblem suggests that
he did *not* consider the possibility of a model of computation
that somehow goes "beyond" human computation and is
capable of providing an effective and general procedure which solves
the Entscheidungsproblem. Had that been the case, he would
not have considered the Entscheidungsproblem to be uncomputable.
The focus on human computation in Turing's analysis of
computation has led researchers to extend Turing's analysis to
computation by physical devices. This results in (versions of) the
physical Church-Turing thesis. Robin Gandy focused on extending
Turing's analysis to discrete mechanical devices (note that he
did not consider analog machines). More particularly, like Turing,
Gandy starts from a basic set of restrictions of computation by
discrete mechanical devices and, on that basis, develops a new model
which he proved to be reducible to the Turing machine model. This work
is continued by Wilfried Sieg who proposed the framework of Computable
Dynamical Systems (Sieg 2008). Others have considered the possibility
of "reasonable" models from physics which
"compute" something that is not Turing computable. See for
instance Aaronson, Bavarian, & Gueltrini 2016 (Other Internet
Resources) in which it is shown that *if* closed timelike curves
existed, the halting problem would become solvable with finite
resources. Others have proposed alternative models for computation
which are inspired by the Turing machine model but capture specific
aspects of current computing practices for which the Turing machine
model is considered less suited. One example is the persistent
Turing machine, intended to capture interactive processes. Note
however that these results do not show that there are
"computable" problems that are not Turing
computable. These and other related proposals have been considered by
some authors as reasonable models of computation that somehow compute
more than Turing machines. It is the latter kind of statements that
became affiliated with research on so-called hypercomputation
resulting in the early 2000s in a rather fierce debate in the computer
science community, see, e.g., Teuscher 2004 for various positions.
### 3.2 Thesis, Definition, Axioms or Theorem
As is clear, strictly speaking, Turing's thesis is not provable,
since, in its original form, it is a claim about the relationship
between a formal and a vague or intuitive concept. Consequently,
many consider it a thesis or a definition. The thesis would be
refuted if one were able to provide an intuitively acceptable
effective procedure for a task that is not Turing-computable. Thus
far, no such counterexample has been found. Other independently
defined notions of computability based on alternative foundations,
such as
recursive functions
and abacus machines have also been shown to be equivalent to Turing
computability. These equivalences between quite different formulations
indicate that there is a natural and robust notion of computability
underlying our understanding. Given this apparent robustness of our
notion of computability, some have proposed to avoid the notion of a
thesis altogether and instead propose a set of axioms used to sharpen
the informal notion. There are several approaches, most notably, an
approach of structural axiomatization where computability itself is
axiomatized (Sieg 2008) and one whereby an axiomatization is given
from which the Church-Turing thesis can be derived (Dershowitz &
Gurevich 2008).
## 4. Alternative Historical Models of Computability
Besides the Turing machine, several other models were introduced
independently of Turing in the context of research into the foundation
of mathematics which resulted in theses that are logically equivalent
to Turing's thesis. For each of these models it was proven that
they capture the Turing computable functions. Note that the
development of the modern computer stimulated the development of other
models such as register machines or Markov algorithms. More recently,
computational approaches in disciplines such as biology or physics,
resulted in bio-inspired and physics-inspired models such as Petri
nets or quantum Turing machines. A discussion of such models, however,
lies beyond the scope of this entry.
### 4.1 General Recursive Functions
The original formulation of general
recursive functions can be
found in Gödel 1934, which built on a suggestion by Herbrand. In
Kleene 1936 a simpler definition was given and in Kleene 1943 the
standard form which uses the so-called minimization or
\(\mu\)-operator was introduced. For more information, see the entry
on
recursive functions.
Church used the definition of general recursive functions to state his
thesis:
>
>
> **Church's thesis** Every effectively calculable
> function is general recursive.
>
>
>
In the context of recursive functions one uses the notion of recursive
solvability and unsolvability rather than Turing computability and
uncomputability. This terminology is due to Post (1944).
### 4.2 λ-Definability
Church's λ-calculus has its origin in Church
1932, 1933, which were intended as a logical foundation for
mathematics. It was Church's conviction at that time that this
different formal approach might avoid Gödel incompleteness (Sieg
1997: 177). However, the logical system proposed by Church was proven
inconsistent by his two PhD students Stephen C. Kleene and Barkley
Rosser, and so they started to focus on a subpart of that logic which
was basically the λ-calculus. Church, Kleene and Rosser started
to λ-define any calculable function they could think of and
quite soon Church proposed to define effective calculability in terms
of λ-definability. However, it was only after Church, Kleene
and Rosser had established that general recursiveness and
λ-definability are equivalent that Church announced his thesis
publicly, and in terms of general recursive functions rather than
λ-definability (Davis 1982; Sieg 1997).
In the λ-calculus there are only two types of symbols: the three
primitive symbols λ, (, ), also called the improper symbols, and
an infinite list of variables. There are three rules to define the
well-formed formulas of the λ-calculus, called
λ-formulas.
1. The λ-formulas are first of all the variables
themselves.
2. If **P** is a λ-formula containing *x* as
a free variable then \(\lambda x[\textbf{P}]\) is also a
λ-formula. The λ-operator is used to bind variables and
it thus converts an expression containing free variables into one that
denotes a function.
3. If **M** and **N** are λ-formulas
then so is {**M**}(**N**), where
{**M**}(**N**) is to be understood as the
application of the function **M** to
**N**.
The λ-formulas, or well-formed formulas of the λ-calculus,
are all and only those formulas that result from (repeated)
application of these three rules.
There are three operations or rules of conversion. Let us define
\(\textrm{S}\_{\mathbf{N}}^{x}\mathbf{M}|\) as standing for the formula
that results by substitution of **N** for *x* in
**M**.
1. *Reduction*. To replace any part \(\{\lambda x
\mathbf{[M]}\} (\mathbf{N})\) of a formula by
\(\textrm{S}\_{\mathbf{N}}^{x}\mathbf{M}|\), provided that the bound
variables of **M** are distinct both from *x* and
from the free variables of **N**. For example, \(\{\lambda
x [x^{2}]\}(2)\) reduces to \(2^{2}\).
2. *Expansion*. To replace any part
\(\textrm{S}\_{\mathbf{N}}^{x}\mathbf{M}|\) of a formula by \(\{\lambda
x \mathbf{[M]}\} (\mathbf{N})\), provided that \(((\lambda x
\mathbf{M}) \mathbf{N})\) is well-formed and the bound variables of
**M** are distinct both from *x* and from the free
variables in **N**. For example, \(2^{2}\) can be
expanded to \(\{\lambda x [x^{2}]\}(2)\).
3. *Change of bound variable*. To replace any part
**M** of a formula by
\(\textrm{S}\_{\textrm{y}}^{x}\mathbf{M}|\), provided that *x* is
not a free variable of **M** and *y* does not occur
in **M**. For example, changing \(\{\lambda x [x^{2}]\}\)
to \(\{\lambda y [y^{2}]\}\).
Church introduces the following abbreviations to define the natural
numbers in the λ-calculus:
\[\begin{array}{l}
1 \rightarrow \lambda yx.yx,\\
2 \rightarrow \lambda yx.y(yx),\\
3 \rightarrow \lambda yx.y(y(yx)),\\
\ldots
\end{array}\]
Using this definition, it is possible to λ*-define*
functions over the positive integers. A function *F* of one
positive integer is λ-definable if we can find a
λ-formula **F** such that if \(F(m) = n\), and
**m** and **n** are λ-formulas
standing for the integers *m* and *n*, then the
λ-formula \(\{\mathbf{F}\} (\mathbf{m})\) can be
*converted* to **n** by applying the conversion
rules of the λ-calculus. Thus, for example, the successor function
*S*, first introduced by Church, can be λ-defined as
follows:
\[S \rightarrow \lambda abc. b(abc)\]
To give an example, applying *S* to the λ-formula standing
for 2, we get:
\[\begin{align}
\big(\lambda abc. b(abc)\big ) \big(\lambda yx. y(yx)\big) \\
\rightarrow \lambda bc. b\big( \big(\lambda yx. y(yx)\big) bc\big)\\
\rightarrow \lambda bc. b\big( \big(\lambda x. b(bx)\big) c\big)\\
\rightarrow \lambda bc. b (b(bc))
\end{align}\]
Today, the λ-calculus is considered to be a basic model in the
theory of programming.
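The numerals and the successor above can be mirrored directly in Python's lambda notation (a sketch; `to_int` is an auxiliary interpreter for reading off the result, not part of the λ-calculus itself):

```python
# Church numerals: n is the function applying its first argument n times.
two = lambda y: lambda x: y(y(x))                  # 2 -> λyx.y(yx)
succ = lambda n: lambda b: lambda c: b(n(b)(c))    # S -> λabc.b(abc)

def to_int(numeral):
    # Interpret a numeral by counting applications of +1 starting at 0.
    return numeral(lambda k: k + 1)(0)

assert to_int(two) == 2
assert to_int(succ(two)) == 3      # mirrors the conversion shown above
```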
### 4.3 Post Production Systems
Around 1920-21 Emil Post developed different but related types
of production systems in order to develop a syntactical form which
would allow him to tackle the decision problem for first-order logic.
One of these forms is the Post canonical system *C*, which later
became known as the Post production system.
A canonical system consists of a finite alphabet \(\Sigma\), a finite
set of initial words \(W\_{0,0}\), \(W\_{0,1}\),..., \(W\_{0,n}\)
and a finite set of production rules of the following form:
\[
\begin{array}{c}
g\_{11}P\_{i\_{1}^{1}}g\_{12}P\_{i\_{2}^{1}} \ldots g\_{1m\_{1}}P\_{i^{1}\_{m\_{1}}}g\_{1 {(m\_{1} + 1)}}\\
g\_{21}P\_{i\_{1}^{2}}g\_{22}P\_{i\_{2}^{2}} \ldots g\_{2m\_{2}}P\_{i^{2}\_{m\_{2}}}g\_{2 {(m\_{2} + 1)}}\\
\vdots\\
g\_{k1}P\_{i\_{1}^{k}}g\_{k2}P\_{i\_{2}^{k}} \ldots g\_{km\_{k}}P\_{i^{k}\_{m\_{k}}}g\_{k {(m\_{k} + 1)}}\\
\textit{produce}\\
g\_{1}P\_{i\_{1}}g\_{2}P\_{i\_{2}} \ldots g\_{m}P\_{i\_{m}}g\_{(m + 1)}\\
\end{array}
\]
The symbols *g* are a kind of metasymbol: they correspond to
actual sequences of letters in actual productions. The symbols
*P* are the operational variables and so can represent any
sequence of letters in a production. So, for instance, consider a
production system over the alphabet \(\Sigma = \{a,b\}\) with initial
word:
\[W\_0 = ababaaabbaabbaabbaba\]
and the following production rule:
\[
\begin{array}{c}
P\_{1,1}bbP\_{1,2}\\
\textit{produces}\\
P\_{1,3}aaP\_{1,4}\\
\end{array}
\]
Then, starting with \(W\_0\), there are three possible ways to apply
the production rule and in each application the variables \(P\_{1,i}\)
will have different values but the values of the g's are fixed.
Any set of finite sequences of words that can be produced by a
canonical system is called a *canonical set*.
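The three applications of the rule to \(W\_0\) can be enumerated mechanically; the sketch below reads the rule as rewriting one occurrence of *bb* to *aa*, with the surrounding subwords (the values of the operational variables) carried through unchanged:

```python
W0 = "ababaaabbaabbaabbaba"

def apply_rule(word):
    # Enumerate every split word = P1 + "bb" + P2 and produce
    # the corresponding word P1 + "aa" + P2.
    return [word[:i] + "aa" + word[i+2:]
            for i in range(len(word) - 1) if word[i:i+2] == "bb"]

productions = apply_rule(W0)
assert len(productions) == 3       # the three possible applications
```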
A special class of canonical forms defined by Post are normal systems.
A normal system *N* consists of a finite alphabet \(\Sigma\), one
initial word \(W\_0 \in \Sigma^{\ast}\) and a finite set of production
rules, each of the following form:
\[
\begin{array}{c}
g\_iP\\
\textit{produces}\\
Pg\_i'\\
\end{array}
\]
Any set of finite sequences of words that can be produced by a normal
system is called a *normal set*. Post was able to show that for
any canonical set *C* over some alphabet \(\Sigma\) there is a
normal set *N* over an alphabet \(\Delta\) with \(\Sigma
\subseteq \Delta\) such that \(C = N \cap \Sigma^{\ast}\). It was the
combination of (1) his conviction that any set of finite sequences
that can be generated by finite means can be generated by canonical
systems and (2) his proof that for every canonical set there is a
normal set which contains it that resulted in Post's thesis I:
>
>
> **Post's thesis I** (Davis 1982) Every set of
> finite sequences of letters that can be generated by finite processes
> can also be generated by normal systems. More particularly, any set of
> words on an alphabet \(\Sigma\) which can be generated by a finite
> process is of the form \(N \cap \Sigma^{\ast}\), with *N* a
> normal set.
>
>
>
Post realized that "[for the thesis to obtain its full
generality] a complete analysis would have to be made of all the
possible ways in which the human mind could set up finite processes
for generating sequences" (Post 1965: 408) and it is quite
probable that the formulation 1 given in Post 1936 and which is almost
identical to Turing's machines is the result of such an
analysis.
Post production systems became important formal devices in computer
science and, more particularly, formal language theory (Davis 1989;
Pullum 2011).
### 4.4 Formulation 1
In 1936 Post published a short note from which one can derive
Post's second thesis (De Mol 2013):
>
>
> **Post's thesis II** Solvability of a
> problem in the intuitive sense coincides with solvability by
> formulation 1.
>
>
>
Formulation 1 is very similar to Turing machines but the
'program' is given as a list of directions which a human
worker needs to follow. Instead of a one-way infinite tape,
Post's 'machine' consists of a two-way infinite
symbol space divided into boxes. The idea is that a worker is working
in this symbol space, being capable of a set of five primitive acts
(\(O\_{1}\) mark a box, \(O\_{2}\) unmark a box, \(O\_{3}\) move one box
to the left, \(O\_{4}\) move one box to the right, \(O\_{5}\)
determining whether the box he is in is marked or unmarked), following
a finite set of directions \(d\_{1}\),..., \(d\_{n}\) where each
direction \(d\_{i}\) always has one of the following forms:
1. Perform one of the operations (\(O\_{1}\)-\(O\_4\)) and go to
instruction \(d\_{j}\)
2. Perform operation \(O\_{5}\) and according as the box the worker is
in is marked or unmarked follow direction \(d\_{j'}\) or
\(d\_{j''}\).
3. Stop.
Post also defined a specific terminology for his formulation 1 in
order to define the solvability of a problem in terms of formulation
1. These notions are applicability, finite-1-process, 1-solution and
1-given. Roughly speaking these notions assure that a decision problem
is solvable with formulation 1 on the condition that the solution
given in the formalism always terminates with a correct solution.
## 5. Impact of Turing Machines on Computer Science
Turing is today one of the most celebrated figures of computer
science. Many consider him as the father of computer science and the
fact that the main award in the computer science community is called
the Turing award is a clear indication of that (Daylight 2015). This
was strengthened by the Turing centenary celebrations of 2012, which
were largely coordinated by S. Barry Cooper. This resulted not only in
an enormous number of scientific events around Turing but also in a
number of initiatives that brought the idea of Turing as the father of
computer science to the broader public (Bullynck, Daylight, &
De Mol 2015). Amongst Turing's contributions which are today
considered as pioneering, the 1936 paper on Turing machines stands out
as the one which has had the largest impact on computer science.
However, recent historical research also shows that one should treat
the impact of Turing machines with great care, being wary of
retrofitting the past into the present.
### 5.1 Impact on Theoretical Computer Science
Today, the Turing machine and its theory are part of the theoretical
foundations of computer science. It is a standard reference in
research on foundational questions such as:
* What is an algorithm?
* What is a computation?
* What is a physical computation?
* What is an efficient computation?
* etc.
It is also one of the main models for research into a broad range of
subdisciplines in theoretical computer science such as: variant and
minimal models of computability, higher-order computability,
computational complexity theory,
algorithmic information theory, etc. This significance of the Turing
machine model for theoretical computer science has at least two
historical roots.
First of all, there is the continuation of the work in mathematical
logic from the 1920s and 1930s by people like Martin Davis--who
is a student of Post and Church--and Kleene. Within that
tradition, Turing's work was of course well-known and the Turing
machine was considered the best model of computability available. Both
Davis and Kleene published a book in the 1950s on these topics (Kleene
1952; Davis 1958) which soon became standard references not just for
early computability theory but also for more theoretical reflections
in the late 1950s and 1960s on computing.
Secondly, one sees that in the 1950s there is a need for theoretical
models to reflect on the new computing machines, their abilities and
limitations and this in a more systematic manner. It is in that
context that the theoretical work already done was picked up. One
important development is automata theory in which one can situate,
amongst others, the development of other machine models like the
register machine model or the Wang *B* machine model which are,
ultimately, rooted in Turing's and Post's machines; there
are the minimal machine designs discussed in
Section 5.2;
and there is the use of Turing machines in the context of what would
become the origins of formal language theory, viz. the study of
different classes of machines with respect to the different
"languages" they can recognize and so also their
limitations and strengths. It is these more theoretical developments
that contributed to the establishment of
computational complexity theory
in the 1960s. Of course, besides Turing machines, other models also
played and play an important role in these developments. Still, within
theoretical computer science it is mostly the Turing machine which
remains the model, even today. Indeed, when one of the founding
papers of computational complexity theory (Hartmanis & Stearns 1965)
was published in 1965, it was the multitape Turing machine which was
introduced as the standard model for the computer.
### 5.2 Turing Machines and the Modern Computer
In several accounts, Turing has been identified not just as the father
of computer science but as the father of the modern computer. The
classical story for this more or less goes as follows: the blueprint
of the modern computer can be found in von Neumann's EDVAC
design and today classical computers are usually described as having a
so-called von Neumann architecture. One fundamental idea of the EDVAC
design is the so-called stored-program idea. Roughly speaking this
means the storage of instructions and data in the same memory allowing
the manipulation of programs as data. There are good reasons for
assuming that von Neumann knew the main results of Turing's
paper (Davis 1988). Thus, one could argue that the stored-program
concept originates in Turing's notion of the universal Turing
machine and, singling this out as the defining feature of the modern
computer, some might claim that Turing is the father of the modern
computer. Another related argument is that Turing was the first who
"captured" the idea of a general-purpose machine through
his notion of the universal machine and that in this sense he also
"invented" the modern computer (Copeland & Proudfoot
2011). This argument is then strengthened by the fact that Turing was
also involved with the construction of an important class of computing
devices (the Bombe) used for decrypting the German Enigma code and
later proposed the design of the ACE (Automatic Computing Engine)
which was explicitly identified as a kind of physical realization of
the universal machine by Turing himself:
>
>
> Some years ago I was researching on what might now be described as an
> investigation of the theoretical possibilities and limitations of
> digital computing machines. [...] Machines such as the ACE may be
> regarded as practical versions of this same type of machine. (Turing
> 1947)
>
>
>
Note however that Turing already knew the ENIAC and EDVAC designs and
proposed the ACE as a kind of improvement on that design (amongst
others, it had a simpler hardware architecture).
These claims about Turing as the inventor and/or father of the
computer have been scrutinized by some historians of computing
(Daylight 2014; Haigh 2013; Haigh 2014; Mounier-Kuhn 2012), mostly in the wake of the Turing centenary and
this from several perspectives. Based on that research it is clear
that claims about Turing being the inventor of the modern computer
give a distorted and biased picture of the development of the modern
computer. At best, he is one of the many who made a contribution to
one of the several historical developments (scientific, political,
technological, social and industrial) which resulted, ultimately, in
(our concept of) the modern computer. Indeed, the "first"
computers are the result of a wide number of innovations and so are
rooted in the work of not just one but several people with diverse
backgrounds and viewpoints.
In the 1950s, then, the (universal) Turing machine started to become
an accepted model in relation to actual computers and was used as a
tool to reflect on the limits and potential of general-purpose
computers by engineers, mathematicians, and logicians alike. More
particularly, with respect to machine designs, it was the insight that
only a small number of operations is required to build a
general-purpose machine which inspired the 1950s reflections on
minimal machine architectures. Frankel, who (partially) constructed
the MINAC, stated this as follows:
>
>
> One remarkable result of Turing's investigation is that he was
> able to describe a single computer which is able to compute
> *any* computable number. He called this machine a *universal
> computer*. It is thus the "best possible" computer
> mentioned.
>
>
>
> [...] This surprising result shows that in examining the question
> of what problems are, in principle, solvable by computing machines, we
> do not need to consider an infinite series of computers of greater and
> greater complexity but may think only of a single machine.
>
>
>
> Even more surprising than the theoretical possibility of such a
> "best possible" computer is the fact that it need not be
> very complex. The description given by Turing of a universal computer
> is not unique. Many computers, some of quite modest complexity,
> satisfy the requirements for a universal computer. (Frankel 1956:
> 635)
>
>
>
The result was a series of experimental machines such as the MINAC,
TX-0 (Lincoln Lab) or the ZERO machine (van der Poel) which in their
turn became predecessors of a number of commercial machines. It is
worth pointing out that Turing's ACE machine design also fits
into this philosophy. It was commercialized as the BENDIX G15
machine (De Mol, Bullynck, & Daylight 2018).
Of course, by minimizing the machine instructions, coding or
programming became a much more complicated task. To put it in the
words of Turing, who clearly realized this trade-off between code
and (hard-wired) instructions when designing the ACE: "[W]e have
often simplified the circuit at the expense of the code" (Turing
1947).
And indeed, one sees that with these early minimal designs, much
effort goes into developing more efficient coding strategies. It is
here that one can also situate one historical root of making the
connection between the universal Turing machine and the important
principle of the interchangeability between hardware and programs.
Today, the universal Turing machine is by many still considered as the
main theoretical model of the modern computer especially in relation
to the so-called von Neumann architecture. Of course, other models
have been introduced for other architectures such as the Bulk
synchronous parallel model for parallel machines or the persistent
Turing machine for modeling interactive problems.
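The programs-as-data idea that underlies the universal machine can be illustrated with a toy interpreter: the machine being run is just a transition table handed to a general-purpose loop. The table format and the example machine below are invented for illustration, not taken from Turing's paper.

```python
# Illustrative sketch: a Turing machine given as plain data (a transition
# table) and executed by a single general interpreter -- the
# programs-as-data idea in miniature.

def run_tm(table, tape, state="q0", blank="_", max_steps=10000):
    """table maps (state, symbol) -> (write, move, next_state); move is -1/+1.
    The machine halts when no rule applies; returns the tape as a string."""
    cells = {i: s for i, s in enumerate(tape)}
    head = 0
    for _ in range(max_steps):
        sym = cells.get(head, blank)
        if (state, sym) not in table:
            break  # no applicable rule: the machine halts
        write, move, state = table[(state, sym)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine-as-data: flip every bit of a binary word, then halt on
# reaching the first blank square.
flip = {
    ("q0", "0"): ("1", +1, "q0"),
    ("q0", "1"): ("0", +1, "q0"),
}
out = run_tm(flip, "1011")
# out == "0100"
```

The same `run_tm` loop executes any machine description whatsoever, which is the sense in which a single machine can "mimic" all the others.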
### 5.3 Theories of Programming
The idea that any general-purpose machine can, in principle, be
modeled as a universal Turing machine also became an important
principle in the context of automatic programming in the 1950s
(Daylight 2015). In the machine design context it was the minimizing
of the machine instructions that was the most important consequence of
that viewpoint. In the programming context, it was the idea
that one can build a machine that is able to
'mimic' the behavior of any other machine and so,
ultimately, the interchangeability between machine hardware and
language implementations. This is introduced in several forms in the
1950s by people like John W. Carr III and Saul Gorn--who were
also actively involved in the shaping of the *Association for
Computing Machinery (ACM)*--as the unifying theoretical idea
for automatic programming which indeed is about the (automatic)
"translation" of higher-order to lower-level, and,
ultimately, machine code. Thus, also in the context of programming,
the universal Turing machine starts to take on its foundational role
in the 1950s (Daylight 2015).
Whereas the Turing machine is and was a fundamental theoretical model
delimiting what is and is not possible at the general level, it did
not have a real impact on the syntax and semantics of programming
languages. In that context it was rather the \(\lambda\)-calculus and Post
production systems that had an effect (though also here one should be
careful in overstating the influence of a formal model on a
programming practice). In fact, Turing machines were often regarded as
machine models rather than as a model for programming:
>
>
> Turing machines are not conceptually different from the automatic
> computers in general use, but they are very poor in their control
> structure. [...] Of course, most of the theory of computability
> deals with questions which are not concerned with the particular ways
> computations are represented. It is sufficient that computable
> functions be represented somehow by symbolic expressions, e.g.,
> numbers, and that functions computable in terms of given functions be
> somehow represented by expressions computable in terms of the
> expressions representing the original functions. However, a practical
> theory of computation must be applicable to particular algorithms.
> (McCarthy 1963: 37)
>
>
>
Thus one sees that the role of the Turing machine in computer science
should be situated rather on the theoretical level: the universal
machine is still considered by many as the model for the modern
computer while its ability to mimic machines through its manipulation
of programs-as-data is one of the basic principles of modern
computing. Moreover, its robustness and naturalness as a model of
computability have made it the main model to challenge if one is
attacking versions of the so-called (physical) Church-Turing
thesis.
## 1. Turing (1950) and the Imitation Game
Turing (1950) describes the following kind of game. Suppose that we
have a person, a machine, and an interrogator. The interrogator is in
a room separated from the other person and the machine. The object of
the game is for the interrogator to determine which of the other two
is the person, and which is the machine. The interrogator knows the
other person and the machine by the labels '*X*'
and '*Y*'--but, at least at the beginning of
the game, does not know which of the other person and the machine is
'*X*'--and at the end of the game says either
'*X* is the person and *Y* is the machine'
or '*X* is the machine and *Y* is the
person'. The interrogator is allowed to put questions to the
person and the machine of the following kind: "Will *X*
please tell me whether *X* plays chess?" Whichever of the
machine and the other person is *X* must answer questions that
are addressed to *X*. The object of the machine is to try to
cause the interrogator to mistakenly conclude that the machine is the
other person; the object of the other person is to try to help the
interrogator to correctly identify the machine. About this game,
Turing (1950) says:
>
> I believe that in about fifty years' time it will be possible to
> programme computers, with a storage capacity of about \(10^9\),
> to make them play the imitation game so well that an average
> interrogator will not have more than 70 percent chance of making the
> right identification after five minutes of questioning. ... I
> believe that at the end of the century the use of words and general
> educated opinion will have altered so much that one will be able to
> speak of machines thinking without expecting to be contradicted.
>
There are at least two kinds of questions that can be raised about
Turing's predictions concerning his Imitation Game. First, there
are empirical questions, e.g., Is it true that we now--or will
soon--have made computers that can play the imitation game so
well that an average interrogator has no more than a 70 percent chance
of making the right identification after five minutes of questioning?
Second, there are conceptual questions, e.g., Is it true that, if an
average interrogator had no more than a 70 percent chance of making
the right identification after five minutes of questioning, we should
conclude that the machine exhibits some level of thought, or
intelligence, or mentality?
There is little doubt that Turing would have been disappointed by the
state of play at the end of the twentieth century. Participants in the
Loebner Prize Competition--an annual event in which computer
programmes are submitted to the Turing Test--had come nowhere
near the standard that Turing envisaged. A quick look at the
transcripts of the participants for the preceding decade reveals that
the entered programs were all easily detected by a range of
not-very-subtle lines of questioning. Moreover, major players in the
field regularly claimed that the Loebner Prize Competition was an
embarrassment precisely because we were still so far from having a
computer programme that could carry out a decent conversation for a
period of five minutes--see, for example, Shieber (1994). It was
widely conceded on all sides that the programs entered in the Loebner
Prize Competition were designed solely with the aim of winning the
minor prize of best competitor for the year, with no thought that the
embodied strategies would actually yield something capable of passing
the Turing Test.
At the end of the second decade of the twenty-first century, it is
unclear how much has changed. On the one hand, there have been
interesting developments in language generators. In particular, the
release of OpenAI's GPT-3 (Brown, et al. 2020, Other Internet
Resources) has prompted a flurry of excitement. GPT-3 is quite good at
generating fiction, poetry, press releases, code, music, jokes,
technical manuals, and news articles. Perhaps, as Chalmers speculates
(2020, Other Internet Resources), GPT-3 "suggests a potential
mindless path to artificial general intelligence". But, of
course, GPT-3 is not close to passing the Turing Test: GPT-3 neither
perceives nor acts, and it is, at best, highly contentious whether it
is a site of understanding. What remains to be seen is whether, within
the next couple of generations of language generators - GPT-4 or
GPT-5 - we have something that can be linked to perceptual
inputs and behavioural outputs in a way that does produce something
capable of passing the Turing Test. (For further discussion, see
Floridi and Chiriatti (2020).)
On the other hand, as, for example, Floridi (2008) complains, there
are other ways in which progress has been frustratingly slow. In 2014,
claims emerged that, because the computer program *Eugene
Goostman* had fooled 33% of judges in the Turing Test 2014
competition, it had "passed the Turing Test". But there
have been other one-off competitions in which similar results have
been achieved. Back in 1991, *PC Therapist* had 50% of judges
fooled. And, in a 2011 demonstration, *Cleverbot* had an even
higher success rate. In all three of these cases, the size of the
trial was very small, and the result was not reliably projectible: in
no case were there strong grounds for holding that an average
interrogator had no more than a 70% chance of making the right
determination about the relevant program after five minutes of
questioning. Moreover--and much more importantly--we must
distinguish between the test that Turing proposed and the particular
prediction that he made about how things would be by the end of the
twentieth century. The percentage chance of making the correct
identification, the time interval over which the test takes place, and
the number of conversational exchanges required are all adjustable
parameters in the Test, despite the fact that they are fixed in the
particular prediction that Turing made. Even if Turing was very far
out in the prediction that he made about how things would be by the
end of the twentieth century, it remains possible that the test that
he proposes is a good one. However, before one can endorse the
suggestion that the Turing Test is good, there are various objections
that ought to be addressed.
Some people have suggested that the Turing Test is chauvinistic: it
only recognizes intelligence in things that are able to sustain a
conversation with us. Why couldn't it be the case that there are
intelligent things that are unable to carry on a conversation, or, at
any rate, unable to carry on a conversation with creatures like us?
(See, for example, French (1990).) Perhaps the intuition behind this
question can be granted; perhaps it is unduly chauvinistic to insist
that anything that is intelligent has to be capable of sustaining a
conversation with us. (On the other hand, one might think that, given
the availability of suitably qualified translators, it ought to be
possible for any two intelligent agents that speak different languages
to carry on some kind of conversation.) But, in any case, the charge
of chauvinism is completely beside the point. What Turing claims is
only that, if something can carry out a conversation with us, then we
have good grounds to suppose that that thing has intelligence of the
kind that we possess; he does not claim that only something that can
carry out a conversation with us can possess the kind of intelligence
that we have.
Other people have thought that the Turing Test is not sufficiently
demanding: we already have anecdotal evidence that quite unintelligent
programs (e.g., ELIZA--for details of which, see Weizenbaum
(1966)) can seem to ordinary observers to be loci of intelligence for
quite extended periods of time. Moreover, over a short period of
time--such as the five minutes that Turing mentions in his
prediction about how things will be in the year 2000--it might
well be the case that almost all human observers could be taken in by
cunningly designed but quite unintelligent programs. However, it is
important to recall that, in order to pass Turing's Test, it is
not enough for the computer program to fool "ordinary
observers" in circumstances other than those in which the test
is supposed to take place. What the computer program has to be able to
do is to survive interrogation by someone who knows that one of the
other two participants in the conversation is a machine. Moreover, the
computer program has to be able to survive such interrogation with a
high degree of success over a repeated number of trials. (Turing says
nothing about how many trials he would require. However, we can safely
assume that, in order to get decent evidence that there is no more
than a 70% chance that a machine will be correctly identified as a
machine after five minutes of conversation, there will have to be a
reasonably large number of trials.) If a computer program could do
this quite demanding thing, then it does seem plausible to claim that
we would have at least *prima facie* reason for thinking that
we are in the presence of intelligence. (Perhaps it is worth
emphasizing again that there might be all kinds of intelligent
things--including intelligent machines--that would not pass
this test. It is conceivable, for example, that there might be
machines that, as a result of moral considerations, refused to lie or
to engage in pretence. Since the human participant is supposed to do
everything that he or she can to help the interrogator, the question
"Are you a machine?" would quickly allow the interrogator
to sort such (pathological?) truth-telling machines from humans.)
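To see why a reasonably large number of trials is needed, consider a rough back-of-envelope calculation (not from Turing): under a simple normal approximation to the binomial distribution, the uncertainty in an observed identification rate shrinks only with the square root of the number of trials. The 95% confidence level is an arbitrary illustrative choice.

```python
# Illustrative calculation: how precisely can n five-minute trials pin down
# the interrogator's true chance (here 0.70) of a correct identification?
# Normal approximation to the binomial; the 95% level (z = 1.96) is an
# assumption made for illustration.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from n independent trials."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 50, 100, 500):
    m = margin_of_error(0.7, n)
    print(f"n={n:4d}: 0.70 +/- {m:.3f}")
```

With only 10 trials the interval is roughly 0.70 ± 0.28, far too wide to distinguish a 70% chance from, say, a 95% chance; hundreds of trials are needed before the margin narrows to a few percentage points.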
Another contentious aspect of Turing's paper (1950) concerns his
restriction of the discussion to the case of "digital
computers." On the one hand, it seems clear that this
restriction is really only significant for the prediction that Turing
makes about how things will be in the year 2000, and not for the
details of the test itself. (Indeed, it seems that if the test that
Turing proposes is a good one, then it will be a good test for any
kinds of entities, including, for example, animals, aliens, and analog
computers. That is: if animals, aliens, analog computers, or any other
kinds of things, pass the test that Turing proposes, then there will
be as much reason to think that these things exhibit intelligence as
there is reason to think that digital computers that pass the test
exhibit intelligence.) On the other hand, it is actually a highly
controversial question whether "thinking machines" would
have to be digital computers; and it is also a controversial question
whether Turing himself assumed that this would be the case. In
particular, it is worth noting that the seventh of the objections that
Turing (1950) considers addresses the possibility of continuous state
machines, which Turing explicitly acknowledges to be different from
discrete state machines. Turing appears to claim that, even if we are
continuous state machines, a discrete state machine would be able to
imitate us sufficiently well for the purposes of the Imitation Game.
However, it seems doubtful that the considerations that he gives are
sufficient to establish that, if there are continuous state machines
that pass the Turing Test, then it is possible to make discrete state
machines that pass the test as well. (Turing himself was keen to point
out that some limits had to be set on the notion of
"machine" in order to make the question about
"thinking machines" interesting:
>
> It is natural that we should wish to permit every kind of engineering
> technique to be used in our machine. We also wish to allow the
> possibility that an engineer or team of engineers may construct a
> machine which works, but whose manner of operation cannot be
> satisfactorily described by its constructors because they have applied
> a method which is largely experimental. Finally, we wish to exclude
> from the machines men born in the usual manner. It is difficult to
> frame the definitions so as to satisfy these three conditions. One
> might for instance insist that the team of engineers should all be of
> one sex, but this would not really be satisfactory, for it is probably
> possible to rear a complete individual from a single cell of the skin
> (say) of a man. To do so would be a feat of biological technique
> deserving of the very highest praise, but we would not be inclined to
> regard it as a case of 'constructing a thinking machine'.
> (435/6)
>
But, of course, as Turing himself recognized, there is a large class
of possible "machines" that are neither digital nor
biotechnological.) More generally, the crucial point seems to be that,
while Turing recognized that the class of machines is potentially much
larger than the class of discrete state machines, he was himself
*very* confident that properly engineered discrete state
machines could succeed in the Imitation Game (and, moreover, at the
time that he was writing, there were certain discrete state
machines--"electronic computers"--that loomed
very large in the public imagination).
## 2. Turing (1950) and Responses to Objections
Although Turing (1950) is pretty informal and, in some ways, rather
idiosyncratic, there is much to be gained by considering the
discussion that Turing gives of potential objections to his claim that
machines--and, in particular, digital computers--can
"think". Turing gives the following labels to the
objections that he considers: (1) The Theological Objection; (2) The
"Heads in the Sand" Objection; (3) The Mathematical
Objection; (4) The Argument from Consciousness; (5) Arguments from
Various Disabilities; (6) Lady Lovelace's Objection; (7)
Argument from Continuity of the Nervous System; (8) The Argument from
Informality of Behavior; and (9) The Argument from Extra-Sensory
Perception. We shall consider these objections in the corresponding
subsections below. (In some--but not all--cases, the
counter-arguments to these objections that we discuss are also
provided by Turing.)
### 2.1 The Theological Objection
Substance dualists believe that thinking is a function of a
non-material, separately existing, substance that somehow
"combines" with the body to make a person. So--the
argument might go--making a body can never be sufficient to
guarantee the presence of thought: in themselves, digital computers
are no different from any other merely material bodies in being
utterly unable to think. Moreover--to introduce the
"theological" element--it might be further added
that, where a "soul" is suitably combined with a body,
this is always the work of the divine creator of the universe: it is
entirely up to God whether or not a particular kind of body is imbued
with a thinking soul. (There is well known scriptural support for the
proposition that human beings are "made in God's
image". Perhaps there is also theological support for the claim
that only God can make things in God's image.)
There are several different kinds of remarks to make here. First,
there are many serious objections to substance dualism. Second, there
are many serious objections to theism. Third, even if theism and
substance dualism are both allowed to pass, it remains quite unclear
why thinking machines are supposed to be ruled out by this combination
of views. Given that God can unite souls with human bodies, it is hard
to see what reason there is for thinking that God could not unite
souls with digital computers (or rocks, for that matter!). Perhaps, on
this combination of views, there is no especially good reason why,
amongst the things that we can make, certain kinds of digital
computers turn out to be the only ones to which God gives
souls--but it seems pretty clear that there is also no
particularly good reason for ruling out the possibility that God would
choose to give souls to certain kinds of digital computers. Evidence
that God is dead set against the idea of giving souls to certain kinds
of digital computers is not particularly thick on the ground.
### 2.2 The 'Heads in the Sand' Objection
If there were thinking machines, then various consequences would
follow. First, we would lose the best reasons that we have for
thinking that we are superior to everything else in the universe
(since our cherished "reason" would no longer be something
that we alone possess). Second, the possibility that we might be
"supplanted" by machines would become a genuine worry: if
there were thinking machines, then very likely there would be machines
that could think much better than we can. Third, the possibility that
we might be "dominated" by machines would also become a
genuine worry: if there were thinking machines, who's to say
that they would not take over the universe, and either enslave or
exterminate us?
As it stands, what we have here is not an argument against the claim
that machines can think; rather, we have the expression of various
fears about what might follow if there were thinking machines. Someone
who took these worries seriously--and who was persuaded that it
is indeed possible for us to construct thinking machines--might
well think that we have here reasons for giving up on the project of
attempting to construct thinking machines. However, it would be a
major task--which we do not intend to pursue here--to
determine whether there really are any good reasons for taking these
worries seriously.
### 2.3 The Mathematical Objection
Some people have supposed that certain fundamental results in
mathematical logic that were discovered during the 1930s--by
Gödel (first incompleteness theorem) and Turing (the halting
problem)--have important consequences for questions about digital
computation and intelligent thought. (See, for example, Lucas (1961)
and Penrose (1989); see, too, Hodges (1983:414) who mentions
Polanyi's discussions with Turing on this matter.) Essentially,
these results show that within a formal system that is strong enough,
there is a class of true statements that can be expressed but not
proven within the system (see the entry on
Gödel's incompleteness theorems).
Let us say that such a system is "subject to the Lucas-Penrose
constraint" because it is constrained from being able to prove a
class of true statements expressible within the system.
Turing (1950:444) himself observes that these results from
mathematical logic might have implications for the Turing test:
>
> There are certain things that [any digital computer] cannot do. If it
> is rigged up to give answers to questions as in the imitation game,
> there will be some questions to which it will either give a wrong
> answer, or fail to give an answer at all however much time is allowed
> for a reply. (444)
>
So, in the context of the Turing test, "being subject to the
Lucas-Penrose constraint" implies the existence of a class of
"unanswerable" questions. However, Turing noted that in the
context of the Turing test, these "unanswerable" questions
are only a concern if humans can answer them. His "short"
reply was that it is not clear that humans are free from such a
constraint themselves. Turing then goes on to add that he does not
think that the argument can be dismissed "quite so
lightly."
To make the argument more precise, we can write it as follows:
1. Let C be a digital computer.
2. Since C is subject to the Lucas-Penrose constraint, there is an
"unanswerable" question q for C.
3. If an entity, E, is not subject to the Lucas-Penrose constraint,
then there are no "unanswerable" questions for E.
4. The human intellect is not subject to the Lucas-Penrose
constraint.
5. Thus, there are no "unanswerable" questions for the
human intellect.
6. The question q is therefore "answerable" to the human
intellect.
7. By asking question q, a human could determine if the responder is
a computer or a human.
8. Thus C may fail the Turing test.
Once the argument is laid out as above, it becomes clear that premise
(3) should be challenged. Putting that aside, we note that one
interpretation of Turing's "short" reply is that
claim (4) is merely asserted--without any kind of proof. The
"short" reply then leads us to examine whether humans are
free from the Lucas-Penrose constraint.
If humans are subject to the Lucas-Penrose constraint then the
constraint does not provide any basis for distinguishing humans from
digital computers. If humans are free from the Lucas-Penrose
constraint, then (granting premise 3) it follows that digital
computers may fail the Turing test and thus, it seems, cannot
think.
However, there remains a question as to whether being free from the
constraint is necessary for the capacity to think. It may be that the
Turing test is too strict. Since, by hypothesis, we are free from the
Lucas-Penrose constraint, we are, in some sense, too good at asking
and answering questions. Suppose there is a thinking entity that is
subject to the Lucas-Penrose constraint. By an argument analogous to
the one above, it can fail the Turing test. Thus, an entity which can
think would fail the Turing test.
We can respond to this concern by noting that the construction of
questions suggested by the results from mathematical
logic--Gödel, Turing, etc.--is extremely complicated,
and requires extremely detailed information about the language and
internal programming of the digital computer (which, of course, is not
available to the interrogators in the Imitation Game). At the very
least, much more argument is required to overthrow the view that the
Turing Test could remain a very high quality statistical test for the
presence of mind and intelligence even if digital computers differ
from human beings in being subject to the Lucas-Penrose constraint.
(See Bowie 1982, Dietrich 1994, Feferman 1996, Abramson 2008, and
Section 6.3 of the entry on
Gödel's incompleteness theorems,
for further discussion.)
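The flavor of these limitative results can be conveyed in a short, hypothetical sketch (the function names here are illustrative, not Turing's). Turing's diagonal construction shows that no program can correctly decide, for every program, whether that program halts: from any candidate decider we can build a program on which the decider is wrong.

```python
def diagonal(halts):
    """Given any candidate halting-decider, build a program it misjudges."""
    def paradox():
        # Do the opposite of whatever the decider predicts about us.
        if halts(paradox):
            while True:      # the decider said we halt, so loop forever
                pass
        return None          # the decider said we loop, so halt at once
    return paradox

# A concrete (and inevitably wrong) candidate: claim that nothing halts.
always_no = lambda f: False
p = diagonal(always_no)
p()  # p halts immediately, refuting always_no's verdict about p
```

The same trap catches every candidate decider, however sophisticated, which is the sense in which a digital computer faces "unanswerable" questions.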
### 2.4 The Argument from Consciousness
Turing cites Professor Jefferson's *Lister Oration* for
1949 as a source for the kind of objection that he takes to fall under
this label:
>
> Not until a machine can write a sonnet or compose a concerto because
> of thoughts and emotions felt, and not by the chance fall of symbols,
> could we agree that machine equals brain--that is, not only write
> it but know that it had written it. No mechanism could feel (and not
> merely artificially signal, an easy contrivance) pleasure at its
> successes, grief when its valves fuse, be warmed by flattery, be made
> miserable by its mistakes, be charmed by sex, be angry or depressed
> when it cannot get what it wants. (445/6)
>
There are several different ideas that are being run together here,
and that it is profitable to disentangle. One idea--the one upon
which Turing first focuses--is the idea that the only way in
which one could be certain that a machine thinks is to be the machine,
and to feel oneself thinking. A second idea, perhaps, is that the
presence of mind requires the presence of a certain kind of
self-consciousness ("not only write it but know that it had
written it"). A third idea is that it is a mistake to take a
narrow view of the mind, i.e. to suppose that there could be a
believing intellect divorced from the kinds of desires and emotions
that play such a central role in the generation of human behavior
("no mechanism could feel ...").
Against the solipsistic line of thought, Turing makes the effective
reply that he would be satisfied if he could secure agreement on the
claim that we might each have just as much reason to suppose that
machines think as we have reason to suppose that *other* people
think. (The point isn't that Turing thinks that solipsism is a
serious option; rather, the point is that following this line of
argument isn't going to lead to the conclusion that there are
respects in which digital computers could not be our intellectual
equals or superiors.)
Against the other lines of thought, Turing provides a little
"*viva voce*" that is intended to illustrate the
kind of evidence that he supposes one might have that a machine is
intelligent. Given the right kinds of responses from the machine, we
*would* naturally interpret its utterances as evidence of
pleasure, grief, warmth, misery, anger, depression, etc.
Perhaps--though Turing doesn't say this--the only way
to make a machine of this kind would be to equip it with sensors,
affective states, etc., i.e., in effect, to make an artificial
*person*. However, the important point is that if the claims
about self-consciousness, desires, emotions, etc. are right, then
Turing can accept these claims with equanimity: *his* claim is
then that a machine with a digital computing "brain" can
have the full range of mental states that can be enjoyed by adult
human beings.
### 2.5 Arguments from Various Disabilities
Turing considers a list of things that some people have claimed
machines will never be able to do: (1) be kind; (2) be resourceful;
(3) be beautiful; (4) be friendly; (5) have initiative; (6) have a
sense of humor; (7) tell right from wrong; (8) make mistakes; (9) fall
in love; (10) enjoy strawberries and cream; (11) make someone fall in
love with one; (12) learn from experience; (13) use words properly;
(14) be the subject of one's own thoughts; (15) have as much
diversity of behavior as a man; (16) do something really new.
An interesting question to ask, before we address these claims
directly, is whether we should suppose that intelligent creatures from
some other part of the universe would necessarily be able to do these
things. Why, for example, should we suppose that there must be
something deficient about a creature that does not enjoy--or that
is not able to enjoy--strawberries and cream? True enough, we
might suppose that an intelligent creature ought to have the capacity
to enjoy some kinds of things--but it seems unduly chauvinistic
to insist that intelligent creatures must be able to enjoy just the
kinds of things that we do. (No doubt, similar considerations apply to
the claim that an intelligent creature must be the kind of thing that
can make a human being fall in love with it. Yes, perhaps, an
intelligent creature should be the kind of thing that can love and be
loved; but what is so special about us?)
Setting aside those tasks that we deem to be unduly chauvinistic, we
should then ask what grounds there are for supposing that no digital
computing machine *could* do the other things on the list.
Turing suggests that the most likely ground lies in our prior
acquaintance with machines of all kinds: none of the machines that any
of us has hitherto encountered has been able to do these things. In
particular, the digital computers with which we are now familiar
cannot do these things. (Except, perhaps, for making mistakes: after all,
even digital computers are subject to "errors of
functioning." But this might be set aside as an irrelevant
case.) However, given the limitations of storage capacity and
processing speed of even the most recent digital computers, there are
obvious reasons for being cautious in assessing the merits of this
inductive argument.
(A different question worth asking concerns the progress that has been
made until now in constructing machines that can do the kinds of
things that appear on Turing's list. There is at least room for
debate about the extent to which current computers can: make mistakes,
use words properly, learn from experience, be beautiful, etc.
Moreover, there is also room for debate about the extent to which
recent advances in other areas may be expected to lead to further
advancements in overcoming these alleged disabilities. Perhaps, for
example, recent advances in work on artificial sensors may one day
contribute to the production of machines that can enjoy strawberries
and cream. Of course, if the intended objection is to the notion that
machines can experience any kind of feeling of enjoyment, then it is
not clear that work on particular kinds of artificial sensors is to
the point.)
### 2.6 Lady Lovelace's Objection
One of the most popular objections to the claim that there can be
thinking machines is suggested by a remark made by Lady Lovelace in
her memoir on Babbage's Analytical Engine:
>
> The Analytical Engine has no pretensions to originate anything. It can
> do whatever we know how to order it to perform (cited by Hartree,
> p. 70)
>
The key idea is that machines can *only* do what we know how to
order them to do (or that machines can never do anything really new,
or anything that would take us by surprise). As Turing says, one way
to respond to these challenges is to ask whether we can ever do
anything "really new." Suppose, for instance, that the
world is deterministic, so that everything that we do is fully
determined by the laws of nature and the boundary conditions of the
universe. There is a sense in which nothing "really new"
happens in a deterministic universe--though, of course, the
universe's being deterministic would be entirely compatible with
our being surprised by events that occur within it. Moreover--as
Turing goes on to point out--there are many ways in which even
digital computers do things that take us by surprise; more needs to be
said to make clear exactly what the nature of this suggestion is.
(Yes, we might suppose, digital computers are
"constrained" by their programs: they can't do
anything that is not permitted by the programs that they have. But
human beings are "constrained" by their biology and their
genetic inheritance in what might be argued to be just the same kind
of way: they can't do anything that is not permitted by the
biology and genetic inheritance that they have. If a program were
sufficiently complex--and if the processor(s) on which it ran
were sufficiently fast--then it is not easy to say whether the
kinds of "constraints" that would remain would necessarily
differ in kind from the kinds of constraints that are imposed by
biology and genetic inheritance.)
Bringsjord et al. (2001) claim that Turing's response to the
Lovelace Objection is "mysterious" at best, and
"incompetent" at worst (p.4). In their view,
Turing's claim that "computers do take us by
surprise" is only true when "surprise" is given a
very superficial interpretation. For, while it is true that computers
do things that we don't intend them to do--because
we're not smart enough, or because we're not careful
enough, or because there are rare hardware errors, or
whatever--it isn't true that there are any cases in which
we should want to say that a computer has *originated*
something. Whatever merit might be found in this objection, it seems
worth pointing out that, in the relevant sense of
*origination*, human beings "originate something"
on more or less every occasion in which they engage in conversation:
they produce new sentences of natural language that it is appropriate
for them to produce in the circumstances in which they find
themselves. Thus, on the one hand--for all that Bringsjord et al.
have argued--The Turing Test is a perfectly good test for the
presence of "origination" (or "creativity," or
whatever). Moreover, on the other hand, for all that Bringsjord et al.
have argued, it remains an open question whether a digital computing
device is capable of "origination" in this sense (i.e.
capable of producing new sentences that are appropriate to the
circumstances in which the computer finds itself). So we are not
overly inclined to think that Turing's response to the Lovelace
Objection is poor; and we are even less inclined to think that Turing
lacked the resources to provide a satisfactory response on this
point.
### 2.7 Argument from Continuity of the Nervous System
The human brain and nervous system is not much like a digital
computer. In particular, there are reasons for being skeptical of the
claim that the brain is a discrete-state machine. Turing observes that
a small error in the information about the size of a nervous impulse
impinging on a neuron may make a large difference to the size of the
outgoing impulse. From this, Turing infers that the brain is likely to
be a continuous-state machine; and he then notes that, since
discrete-state machines are not continuous-state machines, there might
be reason here for thinking that no discrete-state machine can be
intelligent.
Turing's response to this kind of argument seems to be that a
continuous-state machine can be imitated by discrete-state machines
with very small levels of error. Just as differential analyzers can be
imitated by digital computers to within quite small margins of error,
so too, the conversation of human beings can be imitated by digital
computers to margins of error that would not be detected by ordinary
interrogators playing the imitation game. It is not clear that this is
the right kind of response for Turing to make. If someone thinks that
real thought (or intelligence, or mind, or whatever) can only be
located in a continuous-state machine, then the fact--if, indeed,
it is a fact--that it is possible for discrete-state machines to
pass the Turing Test shows only that the Turing Test is no good. A
better reply is to ask why one should be so confident that real
thought, etc. can only be located in continuous-state machines (if,
indeed, it is right to suppose that we are not discrete-state
machines). And, before we ask this question, we would do well to
consider whether we really do have such good reason to suppose that,
from the standpoint of our ability to think, we are not essentially
discrete-state machines. (As Block (1981) points out, it seems that
there is nothing in our concept of intelligence that rules out
intelligent beings with quantised sensory devices; and nor is there
anything in our concept of intelligence that rules out intelligent
beings with digital working parts.)
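Turing's point that a discrete-state machine can imitate a continuous one to within small error can be given a minimal numerical sketch (the system and step counts are our illustrative choices, not Turing's): a finite sequence of discrete updates tracks the continuous solution of dx/dt = -x as closely as we please.

```python
import math

def continuous(x0, t):
    """Exact solution of dx/dt = -x: the continuous-state behavior."""
    return x0 * math.exp(-t)

def discrete(x0, t, steps):
    """Euler updates: finitely many discrete states imitating the flow."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x = x + (-x) * dt
    return x

# Increasing the number of discrete states shrinks the imitation error:
coarse_error = abs(continuous(1.0, 1.0) - discrete(1.0, 1.0, 10))
fine_error = abs(continuous(1.0, 1.0) - discrete(1.0, 1.0, 100_000))
```

With enough states, the discrepancy falls below anything an observer of the machine's conversational behavior could detect, which is just the shape of Turing's reply about differential analyzers.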
### 2.8 Argument from Informality of Behavior
This argument relies on the assumption that there is no set of rules
that describes what a person ought to do in every possible set of
circumstances, and on the further assumption that there is a set of
rules that describes what a machine will do in every possible set of
circumstances. From these two assumptions, it is supposed to
follow--somehow!--that people are not machines. As Turing
notes, there is some slippage between "ought" and
"will" in this formulation of the argument. However, once
we make the appropriate adjustments, it is not clear that an obvious
difference between people and digital computers emerges.
Suppose, first, that we focus on the question of whether there are
sets of rules that describe what a person and a machine
"will" do in every possible set of circumstances. If the
world is deterministic, then there are such rules for both persons and
machines (though perhaps it is not possible to write down the rules).
If the world is not deterministic, then there are no such rules for
either persons or machines (since both persons and machines can be
subject to non-deterministic processes in the production of their
behavior). Either way, it is hard to see any reason for supposing that
there is a relevant difference between people and machines that bears
on the description of what they will do in all possible sets of
circumstances. (Perhaps it might be said that what the objection
invites us to suppose is that, even though the world is not
deterministic, humans differ from digital machines precisely because
the operations of the latter are indeed deterministic. But, if the
world is non-deterministic, then there is no reason why digital
machines cannot be programmed to behave non-deterministically, by
allowing them to access input from non-deterministic features of the
world.)
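The parenthetical point admits of a minimal sketch (an illustration of ours, not anything Turing describes): a digital machine behaves non-deterministically simply by drawing on an entropy source outside its program.

```python
import os

def nondeterministic_choice(options):
    """Select an option using operating-system entropy (ultimately
    hardware noise), so the program text alone does not fix the
    outcome the machine will produce."""
    index = os.urandom(1)[0] % len(options)
    return options[index]
```

Two runs of the very same program may answer differently; the "rule" written down in the program no longer determines the machine's behavior in every possible set of circumstances.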
Suppose, instead, that we focus on the question of whether there are
sets of rules that describe what a person and a machine
"ought" to do in every possible set of circumstances.
Whether or not we suppose that norms can be codified--and quite
apart from the question of which kinds of norms are in
question--it is hard to see what grounds there could be for this
judgment, other than the question-begging claim that machines are not
the kinds of things whose behavior could be subject to norms. (And, in
that case, the initial argument is badly mis-stated: the claim ought
to be that, whereas there are sets of rules that describe what a
person ought to do in every possible set of circumstances, there are
no sets of rules that describe what machines *ought* to do in
all possible sets of circumstances!)
### 2.9 Argument from Extra-Sensory Perception
The strangest part of Turing's paper is the few paragraphs on
ESP. Perhaps it is intended to be tongue-in-cheek, though, if it is,
this fact is poorly signposted by Turing. Perhaps, instead, Turing was
influenced by the apparently scientifically respectable results of J.
B. Rhine. At any rate, taking the text at face value, Turing seems to
have thought that there was overwhelming empirical evidence for
telepathy (and he was also prepared to take clairvoyance, precognition
and psychokinesis seriously). Moreover, he also seems to have thought
that if the human participant in the game was telepathic, then the
interrogator could exploit this fact in order to determine the
identity of the machine--and, in order to circumvent this
difficulty, Turing proposes that the competitors should be housed in a
"telepathy-proof room." Leaving aside the point that, as a
matter of fact, there is no current statistical support for
telepathy--or clairvoyance, or precognition, or
psychokinesis--it is worth asking what kind of theory of the nature
of telepathy would have appealed to Turing. After all, if humans can
be telepathic, why shouldn't digital computers be so as well? If
the capacity for telepathy were a standard feature of any sufficiently
advanced system that is able to carry out human conversation, then
there is no in-principle reason why digital computers could not be the
equals of human beings in this respect as well. (Perhaps this response
assumes that a successful machine participant in the imitation game
will need to be equipped with sensors, etc. However, as we noted
above, this assumption is not terribly controversial. A plausible
conversationalist has to keep up to date with goings-on in the
world.)
After discussing the nine objections mentioned above, Turing goes on
to say that he has "no very convincing arguments of a positive
nature to support my views. If I had I should not have taken such
pains to point out the fallacies in contrary views." (454)
Perhaps Turing sells himself a little short in this self-assessment.
First of all--as his brief discussion of solipsism makes
clear--it is worth asking what grounds we have for attributing
intelligence (thought, mind) to other people. If it is plausible to
suppose that we base our attributions on behavioral tests or
behavioral criteria, then his claim about the appropriate test to
apply in the case of machines seems apt, and his conjecture that
digital computing machines might pass the test seems like a
reasonable--though controversial--empirical conjecture.
Second, subsequent developments in the philosophy of mind--and,
in particular, the fashioning of functionalist theories of the
mind--have provided a more secure theoretical environment in
which to place speculations about the possibility of thinking
machines. If mental states are functional states--and if mental
states are capable of realisation in vastly different kinds of
materials--then there is some reason to think that it is an
empirical question whether minds can be realised in digital computing
machines. Of course, this kind of suggestion is open to challenge; we
shall consider some important philosophical objections in the later
parts of this review.
## 3. Some Minor Issues Arising
There are a number of much-debated issues that arise in connection
with the interpretation of various parts of Turing (1950), and that we
have hitherto neglected to discuss. What has been said in the first
two sections of this document amounts to our interpretation of what
Turing has to say (perhaps bolstered with what we take to be further
relevant considerations in those cases where Turing's remarks
can be fairly readily improved upon). But since some of this
interpretation has been contested, it is probably worth noting where
the major points of controversy have been.
### 3.1 Interpreting the Imitation Game
Turing (1950) introduces the imitation game by describing a game in
which the participants are a man, a woman, and a human interrogator.
The interrogator is in a room apart from the other two, and is set the
task of determining which of the other two is a man and which is a
woman. Both the man and the woman are set the task of trying to
convince the interrogator that they are the woman. Turing recommends
that the best strategy for the woman is to answer all questions
truthfully; of course, the best strategy for the man will require some
lying. The participants in this game also use a teletypewriter to
communicate with one another--to avoid clues that might be
offered by tone of voice, etc. Turing then says: "We now ask the
question, 'What will happen when a machine takes the part of A
in this game?' Will the interrogator decide wrongly as often
when the game is played like this as he does when the game is played
between a man and a woman?" (434).
Now, of course, it is *possible* to interpret Turing as here
intending to say what he seems literally to say, namely, that the new
game is one in which the computer must pretend to be a woman, and the
other participant in the game is a woman. (For discussion, see, for
example, Genova (1994) and Traiger (2000).) And it is also
*possible* to interpret Turing as intending to say that the new
game is one in which the computer must pretend to be a woman, and the
other participant in the game is a man who must also pretend to be a
woman. However, as Copeland (2000), Piccinini (2000), and Moor (2001)
convincingly argue, the rest of Turing's article, and material
in other articles that Turing wrote at around the same time, very
strongly support the claim that Turing actually intended the standard
interpretation that we gave above, viz. that the computer is to
pretend to be a human being, and the other participant in the game is
a human being of unspecified gender. Moreover, as Moor (2001) argues,
there is no reason to think that one would get a better test if the
computer must pretend to be a woman and the other participant in the
game is a man pretending to be a woman; and, indeed, there is some
reason to think that one would get a worse test. Perhaps it would make
no difference to the effectiveness of the test if the computer must
pretend to be a woman, and the other participant is a woman (any more
than it would make a difference if the computer must pretend to be an
accountant and the other participant is an accountant); however, this
consideration is simply insufficient to outweigh the strong textual
evidence that supports the standard interpretation of the imitation
game that we gave at the beginning of our discussion of Turing (1950).
(For a dissenting view about many of the matters discussed in this
paragraph, see Sterrett (2000; 2020).)
### 3.2 Turing's Predictions
As we noted earlier, Turing (1950) makes the claim that:
>
> I believe that in about fifty years' time it will be possible to
> programme computers, with a storage capacity of about 10⁹,
> to make them play the imitation game so well that an average
> interrogator will not have more than 70 percent chance of making the
> right identification after five minutes of questioning. ... I
> believe that at the end of the century the use of words and general
> educated opinion will have altered so much that one will be able to
> speak of machines thinking without expecting to be contradicted.
>
Most commentators contend that this claim has been shown to be
mistaken: in the year 2000, *no-one* was able to program
computers to make them play the imitation game so well that an average
interrogator had no more than a 70% chance of making the correct
identification after five minutes of questioning. Copeland (2000)
argues that this contention is seriously mistaken: "about fifty
years" is by no means "exactly fifty years," and it
remains open that we may soon be able to do the required programming.
Against this, it should be noted that Turing (1950) goes on
immediately to refer to how things will be "at the end of the
century," which suggests that not too much can be read into the
qualifying "about." However, as Copeland (2000) points
out, there are other more cautious predictions that Turing makes
elsewhere (e.g., that it would be "at least 100 years"
before a machine was able to pass an unrestricted version of his
test); and there are other predictions that are made in Turing (1950)
that seem to have been vindicated. In particular, it is plausible to
claim that, in the year 2000, educated opinion had altered to the
extent that, in many quarters, one could speak of the possibility of
machines' thinking--and of machines'
learning--without expecting to be contradicted. As Moor (2001)
points out, "machine intelligence" is not the oxymoron
that it might have been taken to be when Turing first started thinking
about these matters.
### 3.3 A Useful Distinction
There are two different theoretical claims that are run together in
many discussions of The Turing Test that can profitably be separated.
One claim holds that the general scheme that is described in
Turing's Imitation Game provides a good test for the presence of
intelligence. (If something can pass itself off as a person under
sufficiently demanding test conditions, then we have very good reason
to suppose that that thing is intelligent.) Another claim holds that
an appropriately programmed computer could pass the kind of test that
is described in the first claim. We might call the first claim
"The Turing Test Claim" and the second claim "The
Thinking Machine Claim". Some objections to the claims made in
Turing (1950) are objections to the Thinking Machine Claim, but not
objections to the Turing Test Claim. (Consider, for example, the
argument of Searle (1982), which we discuss further in Section 6.)
However, other objections are objections to the Turing Test Claim.
Until we get to Section 6, we shall be confining our attention to
discussions of the Turing Test Claim.
### 3.4 A Further Note
In this article, we follow the standard philosophical convention
according to which "a mind" means "at least one
mind". If "passing the Turing Test" implies
intelligence, then "passing the Turing Test" implies the
presence of at least one mind. We cannot here explore recent
discussions of "swarm intelligence", "collective
intelligence", and the like. However, it is surely clear that
two people taking turns could "pass the Turing Test" in
circumstances in which we should be very reluctant to say that there
is a "collective mind" that has the minds of the two as
components.
## 4. Assessment of the Current Standing of The Turing Test
Given the initial distinction that we made between different ways in
which the expression The Turing Test gets interpreted in the
literature, it is probably best to approach the question of the
assessment of the current standing of The Turing Test by dividing
cases. True enough, we think that there is a correct interpretation of
exactly what test it is that is proposed by Turing (1950); but a
complete discussion of the current standing of The Turing Test should
pay at least some attention to the current standing of other tests
that have been mistakenly supposed to be proposed by Turing
(1950).
There are a number of main ideas to be investigated. First, there is
the suggestion that The Turing Test provides logically necessary and
sufficient conditions for the attribution of intelligence. Second,
there is the suggestion that The Turing Test provides logically
sufficient--but not logically necessary--conditions for the
attribution of intelligence. Third, there is the suggestion that The
Turing Test provides "criteria"--defeasible
sufficient conditions--for the attribution of intelligence.
Fourth--and perhaps not importantly distinct from the previous
claim--there is the suggestion that The Turing Test provides
(more or less strong) probabilistic support for the attribution of
intelligence. We shall consider each of these suggestions in turn.
### 4.1 (Logically) Necessary and Sufficient Conditions
It is doubtful whether there are very many examples of people who have
explicitly claimed that The Turing Test is meant to provide conditions
that are both logically necessary and logically sufficient for the
attribution of intelligence. (Perhaps Block (1981) is one such case.)
However, some of the objections that have been proposed against The
Turing Test only make sense under the assumption that The Turing Test
does indeed provide logically necessary and logically sufficient
conditions for the attribution of intelligence; and many more of the
objections that have been proposed against The Turing Test only make
sense under the assumption that The Turing Test provides necessary and
sufficient conditions for the attribution of intelligence, where the
modality in question is weaker than the strictly logical, e.g., nomic
or causal.
Consider, for example, those people who have claimed that The Turing
Test is chauvinistic; and, in particular, those people who have
claimed that it is surely logically possible for there to be something
that possesses considerable intelligence, and yet that is not able to
pass The Turing Test. (Examples: Intelligent creatures might fail to
pass The Turing Test because they do not share our way of life;
intelligent creatures might fail to pass The Turing Test because they
refuse to engage in games of pretence; intelligent creatures might
fail to pass The Turing Test because the pragmatic conventions that
govern the languages that they speak are so very different from the
pragmatic conventions that govern human languages. Etc.) None of this
can constitute objections to The Turing Test unless The Turing Test
delivers *necessary* conditions for the attribution of
intelligence.
French (1990) offers ingenious arguments that are intended to show
that "the Turing Test provides a guarantee not of intelligence,
but of culturally-oriented intelligence." But, of course,
anything that has culturally-oriented intelligence *has*
intelligence; so French's objections cannot be taken to be
directed towards the idea that The Turing Test provides sufficient
conditions for the attribution of intelligence. Rather--as we
shall see later--French supposes that The Turing Test establishes
sufficient conditions that no machine will ever satisfy. That is, in
French's view, what is wrong with The Turing Test is that it
establishes utterly uninteresting sufficient conditions for the
attribution of intelligence.
Floridi and Chiriatti (2020: 683) say that The Turing Test provides
necessary but insufficient conditions for intelligence: not passing
The Turing Test disqualifies an AI from being intelligent, but passing
The Turing Test is not sufficient to qualify an AI as intelligent.
However, they also say that "any reader ... will be well
acquainted with the nature of the test, so we shall not describe
it." The account that they would give of The Turing Test
must be quite different from the account of The Turing Test that we
have been presenting.
### 4.2 Logically Sufficient Conditions
There are many philosophers who have supposed that The Turing Test is
intended to provide logically sufficient conditions for the
attribution of intelligence. That is, there are many philosophers who
have supposed that The Turing Test claims that it is logically
impossible for something that lacks intelligence to pass The Turing
Test. (Often, this supposition goes with an interpretation according
to which passing The Turing Test requires rather a lot, e.g.,
producing behavior that is indistinguishable from human behavior over
an entire lifetime.)
There are well-known arguments against the claim that passing The
Turing Test--or any other purely behavioral test--provides
logically sufficient conditions for the attribution of intelligence.
*The* standard objection to this kind of analysis of
intelligence (mind, thought) is that a being whose behavior was
produced by "brute force" methods ought not to count as
intelligent (as possessing a mind, as having thoughts).
Consider, for example, Ned Block's *Blockhead*. Blockhead
is a creature that looks just like a human being, but that is
controlled by a "game-of-life look-up tree," i.e. by a
tree that contains a programmed response for every discriminable input
at each stage in the creature's life. If we agree that Blockhead
is logically possible, and if we agree that Blockhead is not
intelligent (does not have a mind, does not think), then Blockhead is
a counterexample to the claim that the Turing Test provides a
logically sufficient condition for the ascription of intelligence.
After all, Blockhead could be programmed with a look-up tree that
produces responses identical with the ones that *you* would
give over the entire course of *your* life (given the same
inputs).
There are perhaps only two ways in which someone who claims that The
Turing Test offers logically sufficient conditions for the attribution
of intelligence can respond to Block's argument. First, it could
be denied that Blockhead is a logical possibility; second, it could be
claimed that Blockhead would be intelligent (have a mind, think).
In order to deny that Blockhead is a logical possibility, it seems
that what needs to be denied is the commonly accepted link between
conceivability and logical possibility: it certainly seems that
Blockhead is *conceivable*, and so, if (properly circumscribed)
conceivability is sufficient for logical possibility, then it seems
that we have good reason to accept that Blockhead is a logical
possibility. Since it would take us too far away from our present
concerns to explore this issue properly, we merely note that it
remains a controversial question whether (properly circumscribed)
conceivability is sufficient for logical possibility. (For further
discussion of this issue, see Crooke (2002).)
The question of whether Blockhead is intelligent (has a mind, thinks)
may seem straightforward, but--despite Block's confident
assertion that Blockhead "has all of the intelligence of a
toaster"--it is not obvious that we should deny that
Blockhead is intelligent. Blockhead may not be a particularly
efficient processor of information; but it is at least a processor of
information, and that--in combination with the behavior that is
produced as a result of the processing of information--might well
be taken to be sufficient grounds for the attribution of *some*
level of intelligence to Blockhead. For further critical discussion of
the argument of Block (1981), see McDermott (2014), and Pautz and
Stoljar (2019).
### 4.3 Criteria
In his *Philosophical Investigations*, Wittgenstein famously
writes: "An 'inner process' stands in need of
outward criteria" (580). Exactly what Wittgenstein meant by this
remark is unclear, but one way in which it might be interpreted is as
follows: in order to be justified in ascribing a "mental
state" to some entity, there must be some true claims about the
observable behavior of that entity that, (perhaps) together with other
true claims about that entity (not themselves couched in
"mentalistic" vocabulary), entail that the entity has the
mental state in question. If no true claims about the observable
behavior of the entity can play any role in the justification of the
ascription of the mental state in question to the entity, then there
are no grounds for attributing that kind of mental state to the
entity.
The claim that, in order to be justified in ascribing a mental state
to an entity, there must be some true claims about the observable
behavior of that entity that alone--i.e. without the addition of
any other true claims about that entity--entail that the entity
has the mental state in question, is a piece of philosophical
behaviorism. It may be--for all that we are able to
argue--that Wittgenstein was a philosophical behaviorist; it may
be--for all that we are able to argue--that Turing was one,
too. However, if we go by the letter of the account given in the
previous paragraph, then all that need follow from the claim that the
Turing Test is criterial for the ascription of intelligence (thought,
mind) is that, when other true claims (not themselves couched in terms
of mentalistic vocabulary) are conjoined with the claim that an entity
has passed the Turing Test, it then follows that the entity in
question has intelligence (thought, mind).
(Note that the parenthetical qualification that the additional true
claims not be couched in terms of mentalistic vocabulary is only one
way in which one might try to avoid the threat of trivialization. The
difficulty is that the addition of the true claim that an entity has a
mind will always produce a set of claims that entails that that entity
has a mind, no matter what other claims belong to the set!)
To see how the claim that the Turing Test is merely criterial for the
ascription of intelligence differs from the logical behaviorist claim
that the Turing Test provides logically sufficient conditions for the
ascription of intelligence, it suffices to consider the question of
whether it is *nomically* possible for there to be a
"hand simulation" of a Turing Test program. Many people
have supposed that there is good reason to deny that Blockhead is a
nomic (or physical) possibility. For example, in *The Physics of
Immortality*, Frank Tipler provides the following argument in
defence of the claim that it is physically impossible to "hand
simulate" a Turing-Test-passing program:
>
> If my earlier estimate that the human brain can code as much as
> 10^15 bits is correct, then since an average book codes
> about 10^6 bits ... it would require more than 100
> million books to code the human brain. It would take at least thirty
> five-story main university libraries to hold this many books. We know
> from experience that we can access any memory in our brain in about
> 100 seconds, so a hand simulation of a Turing Test-passing program
> would require a human being to be able to take off the shelf, glance
> through, and return to the shelf all of these 100 million books in 100
> seconds. If each book weighs about a pound (0.5 kilograms), and on the
> average the book moves one yard (one meter) in the process of taking
> it off the shelf and returning it, then in 100 seconds the energy
> consumed in just moving the books is 3 x 10^19 joules; the
> rate of energy consumption is 3 x 10^11 megawatts. Since a
> human uses energy at a normal rate of 100 watts, the power required is
> the bodily power of 3 x 10^15 human beings, about a million
> times the current population of the entire earth. A typical large
> nuclear power plant has a power output of 1,000 megawatts, so a hand
> simulation of the human program requires a power output equal to that
> of 300 million large nuclear power plants. As I said, a man can no
> more hand-simulate a Turing Test-passing program than he can jump to
> the Moon. In fact, it is far more difficult. (40)
>
While there might be ways in which the details of Tipler's
argument could be improved, the general point seems clearly right: the
kind of combinatorial explosion that is required for a look-up tree
for a human being is ruled out by the laws and boundary conditions
that govern the operations of the physical world. But, if this is
right, then, while it may be true that Blockhead is a *logical*
possibility, it follows that Blockhead is not a *nomic* or
*physical* possibility. And then it seems natural to hold that
The Turing Test does indeed provide *nomically* sufficient
conditions for the attribution of intelligence: given everything else
that we already know--or, at any rate, take ourselves to
know--about the universe in which we live, we would be fully
justified in concluding that anything that succeeds in passing The
Turing Test is, indeed, intelligent (possessed of a mind, and so
forth).
There are ways in which the argument in the previous paragraph might
be resisted. At the very least, it is worth noting that there is a
serious gap in the argument that we have just rehearsed. Even if we
can rule out "hand simulation" of intelligence, it does
not follow that we have ruled out all other kinds of mere simulation
of intelligence. Perhaps--for all that has been argued so
far--there are nomically possible ways of producing mere
simulations of intelligence. But, if that's right, then passing
The Turing Test need not be so much as criterial for the possession of
intelligence: it need not be that given everything else that we
already know--or, at any rate, take ourselves to know--about
the universe in which we live, we would be fully justified in
concluding that anything that succeeds in passing The Turing Test is,
indeed, intelligent (possessed of a mind, and so forth).
(McDermott (2014) calculates that a look-up table for a participant
who makes 50 conversational exchanges would have about
10^22278 nodes. It is tempting to take this calculation to
establish that it is neither nomically nor physically possible for
there to be a "hand simulation" of a Turing Test program,
on the grounds that the required number of nodes could not be fitted
into a space much, much larger than the entire observable
universe.)
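For a sense of where a number of that size comes from, here is our own reconstruction of the combinatorics (not McDermott's derivation): a look-up tree covering n exchanges, with b distinguishable judge utterances at each exchange, needs on the order of b^n branches, so log10(nodes) is roughly n times log10(b).

```python
import math

# Our reconstruction of the combinatorics behind a conversational look-up
# tree: with b distinguishable judge utterances per exchange, covering n
# exchanges requires on the order of b**n branches.
def log10_nodes(b, n):
    """log10 of the number of leaves in a depth-n tree with branching b."""
    return n * math.log10(b)

# Working backwards from McDermott's ~10^22278 nodes over 50 exchanges
# gives roughly 10^445 distinguishable utterances per exchange:
b_log10 = 22278 / 50
print(b_log10)               # ≈ 445.6

# Even a wildly conservative branching factor of 100 utterances per
# exchange yields 10^100 nodes over 50 exchanges -- far more than the
# roughly 10^80 atoms in the observable universe:
print(log10_nodes(100, 50))  # 100.0
```

The precise branching factor is unimportant; any remotely realistic estimate of the number of things a judge might say at each turn produces the combinatorial explosion the text describes.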
### 4.4 Probabilistic Support
When we look at the initial formulation that Turing provides of his
test, it is clear that he thought that the passing of the test would
provide probabilistic support for the hypothesis of intelligence.
There are at least two different points to make here. First, the
*prediction* that Turing makes is itself probabilistic: Turing
predicts that, in about fifty years from the time of his writing, it
will be possible to programme digital computers to make them play the
imitation game so well that an average interrogator will have no more
than a seventy per cent chance of making the right identification
after five minutes of questioning. Second, the probabilistic nature of
Turing's prediction provides good reason to think that the
*test* that Turing proposes is itself of a probabilistic
nature: a given level of success in the imitation game
produces--or, at any rate, should produce--a specifiable
level of increase in confidence that the participant in question is
intelligent (has thoughts, is possessed of a mind). Since Turing
doesn't tell us how he supposes that levels of success in the
imitation game correlate with increases in confidence that the
participant in question is intelligent, there is a sense in which The
Turing Test is greatly underspecified. Relevant variables clearly
include: the length of the period of time over which the questioning
in the game takes place (or, at any rate, the "amount" of
questioning that takes place); the skills and expertise of the
interrogator (this bears, for example, on the "depth" and
"difficulty" of the questioning that takes place); the
skills and expertise of the third player in the game; and the number
of independent sessions of the game that are run (particularly when
the other participants in the game differ from one run to the next).
Clearly, a machine that is very successful in many different runs of
the game that last for quite extended periods of time and that involve
highly skilled participants in the other roles has a much stronger
claim to intelligence than a machine that has been successful in a
single, short run of the game with highly inexpert participants. That
a machine has succeeded in one short run of the game against inexpert
opponents might provide some reason for increase in confidence that
the machine in question is intelligent: but it is clear that results
on subsequent runs of the game could quickly overturn this initial
increase in confidence. That a machine has done much better than
chance over many long runs of the imitation game against a variety of
skilled participants surely provides much stronger evidence that the
machine is intelligent. (Given enough evidence of this kind, it seems
that one could be quite confident indeed that the machine is
intelligent, while still--of course--recognizing that
one's judgment could be overturned by further evidence, such as
a series of short runs in which it does much worse than chance against
participants who use the same strategy over and over to expose the
machine as a machine.)
The probabilistic nature of The Turing Test is often overlooked. True
enough, Moor (1976, 2001)--along with various other
commentators--has noted that The Turing Test is
"inductive," i.e. that "The Turing Test"
provides no more than defeasible evidence of intelligence. However, it
is one thing to say that success in "a rigorous Turing
test" provides no more than defeasible evidence of intelligence;
it is quite another to note the probabilistic features to which we
have drawn attention in the preceding paragraph. Consider, for
example, Moor's observation (Moor 2001:83) that "...
inductive evidence gathered in a Turing test can be outweighed by new
evidence. ... If new evidence shows that a machine passed the
Turing Test by remote control run by a human behind the scenes, then
reassessment is called for." This--and other similar
passages--seems to us to suggest that Moor supposes that a
"rigorous Turing test" is a one-off event in which the
machine either succeeds or fails. But this interpretation of The
Turing Test is vulnerable to the kind of objection lodged by
Bringsjord (1994): even on a moderately long single run with
relatively expert participants, it may not be all that unlikely that
an unintelligent machine serendipitously succeeds in the imitation
game. In our view, given enough sufficiently long runs with different
sufficiently expert participants, the likelihood of serendipitous
success can be made as small as one wishes. Thus, while
Bringsjord's "argument from serendipity" has force
against some versions of The Turing Test, it has no force against the
most plausible interpretation of the test that Turing actually
proposed.
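The point about repeated runs can be made precise with a toy calculation (our own illustration, not Bringsjord's): if an unintelligent machine has some fixed chance p of serendipitously fooling the judges on any single independent run, the chance that it fools them on every one of n runs falls geometrically with n.

```python
import math

# Toy model (our illustration): an unintelligent machine has a fixed
# probability p of serendipitously fooling the judges on any one
# independent run of the Imitation Game.
def p_all_runs(p, n):
    """Probability of fooling the judges on every one of n runs."""
    return p ** n

def runs_needed(p, target):
    """Smallest n for which the serendipity probability drops below target."""
    return math.ceil(math.log(target) / math.log(p))

# Even a machine that fools the judges half the time by sheer luck is
# exposed quickly over repeated independent runs:
print(p_all_runs(0.5, 20))     # ≈ 9.5e-07
print(runs_needed(0.5, 1e-9))  # 30
```

The model assumes independence across runs, which is what varying the participants from run to run is meant to approximate; but on any reasonable weakening of that assumption the qualitative conclusion stands.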
It is worth noting that it is quite easy to construct more
sophisticated versions of "The Imitation Game" that yield
more fine-grained statistical data. For example, rather than getting
the judges to issue Yes/No verdicts about both of the participants in
the game, one could get the judges to provide probabilistic answers.
("I give a 75% probability to the claim that A is the machine,
and only 25% probability to the claim that B is the machine.")
This point is important when one comes to consider criticisms of the
"methodology" implicit in "The Turing Test".
(For further discussion of the probabilistic nature of "The
Turing Test", see Shieber (2007).)
## 5. Alternative Tests
Some of the literature about The Turing Test is concerned with
questions about the framing of a test that can provide a suitable
guide to future research in the area of Artificial Intelligence. The
idea here is very simple. Suppose that we have the ambition to produce
an artificially intelligent entity. What tests should we take as
setting the goals that putatively intelligent artificial systems
should achieve? Should we suppose that The Turing Test provides an
appropriate goal for research in this field? In assessing these
proposals, there are two different questions that need to be borne in
mind. First, there is the question whether it is a useful goal for AI
research to aim to make a machine that can pass the given test
(administered over the specified length of time, at the specified
degree of success). Second, there is the question of the appropriate
conclusion to draw about the mental capacities of a machine that does
manage to pass the test (administered over the specified length of
time, at the specified degree of success).
Opinion on these questions is deeply divided. Some people suppose that
The Turing Test does not provide a useful goal for research in AI
because it is far too difficult to produce a system that can pass the
test. Other people suppose that The Turing Test does not provide a
useful goal for research in AI because it sets a very narrow target
(and thus sets unnecessary restrictions on the kind of research that
gets done). Some people think that The Turing Test provides an
entirely appropriate goal for research in AI; while other people think
that there is a sense in which The Turing Test is not really demanding
enough, and who suppose that The Turing Test needs to be extended in
various ways in order to provide an appropriate goal for AI. We shall
consider some representatives of each of these positions in turn.
There are some people who continue to endorse The Turing Test. For
example, Neufeld and Finnestad (2020a, 2020b) argue that The Turing
Test is no barrier to progress in AI, requires no significant
redefinition, and does not shut down other avenues of investigation.
Maybe we do better just to take The Turing Test to define a watershed
rather than a threshold towards which we might hope to make
incremental progress.
### 5.1 The Turing Test is Too Hard
Some people have claimed that The Turing Test doesn't set an
appropriate goal for current research in AI because we are plainly so
far away from attaining this goal. Amongst these people there are some
who have gone on to offer reasons for thinking that it is doubtful
that we shall ever be able to create a machine that can pass The
Turing Test--or, at any rate, that it is doubtful that we shall
be able to do this at any time in the foreseeable future. Perhaps the
most interesting arguments of this kind are due to French (1990); at
any rate, these are the arguments that we shall go on to consider.
(Cullen (2009) sets out similar considerations.)
According to French, The Turing Test is "virtually
useless" as a real test of intelligence, because nothing without
a "human subcognitive substrate" could pass the test, and
yet the development of an artificial "human subcognitive
substrate" is almost impossibly difficult. At the very least,
there are straightforward sets of questions that reveal
"low-level cognitive structure" and that--in
French's view--are almost certain to be successful in
separating human beings from machines.
First, if interrogators are allowed to draw on the results of research
into, say, *associative priming*, then there is data that will
very plausibly separate human beings from machines. For example, there
is research that shows that, if humans are presented with series of
strings of letters, they require less time to recognize that a string
is a word (in a language that they speak) if it is preceded by a
related word (in the language that they speak), rather than by an
unrelated word (in the language that they speak) or a string of
letters that is not a word (in the language that they speak). Provided
that the interrogator has accurate data about average recognition
times for subjects who speak the language in question, the
interrogator can distinguish between the machine and the human simply
by looking at recognition times for appropriate series of strings of
letters. Or so says French. It isn't clear to us that this is
right. After all, the design of The Turing Test makes it hard to see
how the interrogator will get reliable information about response
times to series of strings of symbols. The point of putting the
computer in a separate room and requiring communication by teletype
was precisely to rule out certain irrelevant ways of identifying the
computer. If these requirements don't already rule out
identification of the computer by the application of tests of
associative priming, then the requirements can surely be altered to
bring it about that this is the case. (Perhaps it is also worth noting
that administration of the kind of test that French imagines is not
ordinary conversation; nor is it something that one would expect that
any but a few expert interrogators would happen upon. So, even if the
circumstances of The Turing Test do not rule out the kind of procedure
that French here envisages, it is not clear that The Turing Test will
be impossibly hard for machines to pass.)
Second, at a slightly higher cognitive level, there are certain kinds
of "ratings games" that French supposes will be very
reliable discriminators between humans and machines. For instance, the
"Neologism Ratings Game"--which asks participants to
rank made-up words on their appropriateness as names for given kinds
of entities--and the "Category Rating
Game"--which asks participants to rate things of one
category as things of another category--are both, according to
French, likely to prove highly reliable in discriminating between
humans and machines. For, in the first case, the ratings that humans
make depend upon large numbers of culturally acquired associations
(which it would be well-nigh impossible to identify and describe, and
hence which it would (arguably) be well-nigh impossible to program
into a computer). And, in the second case, the ratings that people
actually make are highly dependent upon particular social and cultural
settings (and upon the particular ways in which human life is
experienced). To take French's examples, there would be
widespread agreement amongst competent English speakers in the
technologically developed Western world that "Flugblogs"
is not an appropriate name for a breakfast cereal, while
"Flugly" is an appropriate name for a child's teddy
bear. And there would also be widespread agreement amongst competent
speakers of English in the developed world that pens rate higher as
weapons than grand pianos rate as wheelbarrows. Again, there are
questions that can be raised about French's argument here. It is
not clear to us that the data upon which the ratings games rely is as
reliable as French would have us suppose. (At least one of us thinks
that "Flugly" would be an entirely inappropriate name for
a child's teddy bear, a response that is due to the similarity
between the made-up word "Flugly" and the word
"Fugly," that had some currency in the primarily
undergraduate University college that we both attended. At least one
of us also thinks that young children would very likely be delighted
to eat a cereal called "Flugblogs," and that a good answer
to the question about ratings pens and grand pianos is that it all
depends upon the pens and grand pianos in question. What if the grand
piano has wheels? What if the opponent has a sword or a sub-machine
gun? It isn't obvious that a refusal to play this kind of
ratings game would necessarily be a give-away that one is a machine.)
Moreover, even if the data is reliable, it is not obvious that any but
a select group of interrogators will hit upon this kind of strategy
for trying to unmask the machine; nor is it obvious that it is
impossibly hard to build a machine that is able to perform in the way
in which typical humans do on these kinds of tests. In particular,
if--as Turing assumes--it is possible to make learning
machines that can be "trained up" to learn how to do
various kinds of tasks, then it is quite unclear why these machines
couldn't acquire just the same kinds of "subcognitive
competencies" that human children acquire when they are
"trained up" in the use of language.
There are other reasons that have been given for thinking that The
Turing Test is too hard (and, for this reason, inappropriate in
setting goals for current research into artificial intelligence). In
general, the idea is that there may well be features of human
cognition that are particularly hard to simulate, but that are not in
any sense essential for intelligence (or thought, or possession of a
mind). The problem here is not merely that The Turing Test really does
test for *human* intelligence; rather, the problem here is the
fact--if indeed it is a fact--that there are quite
inessential features of human intelligence that are extraordinarily
difficult to replicate in a machine. If this complaint is
justified--if, indeed, there are features of human intelligence
that are extraordinarily difficult to replicate in machines,
*and* that could and would be reliably used to unmask machines
in runs of The Turing Test--then there is reason to worry about
the idea that The Turing Test sets an appropriate direction for
research in artificial intelligence. However, as our discussion of
French shows, there may be reason for caution in supposing that the
kinds of considerations discussed in the present section show that we
are already in a position to say that The Turing Test does indeed set
inappropriate goals for research in artificial intelligence.
### 5.2 The Turing Test is Too Narrow
There are authors who have suggested that The Turing Test does not set
a sufficiently broad goal for research in the area of artificial
intelligence. Amongst these authors, there are many who suppose that
The Turing Test is too easy. (We go on to consider some of these
authors in the next sub-section.) But there are also some authors who
have supposed that, even if the goal that is set by The Turing Test is
very demanding indeed, it is nonetheless too restrictive.
Objections to the notion that the Turing Test provides a logically
sufficient condition for intelligence can be adapted to show that the
Turing Test is too restrictive. Consider, for
example, Gunderson (1964). Gunderson has two major complaints to make
against The Turing Test. First, he thinks that success in
Turing's Imitation Game might come for reasons other than the
possession of intelligence. But, second, he thinks that success in the
Imitation Game would be but one example of the kinds of things that
intelligent beings can do and--hence--in itself could not be
taken as a reliable indicator of intelligence. By way of analogy,
Gunderson offers the case of a vacuum cleaner salesman who claims that
his product is "all-purpose" when, in fact, all it does is
to suck up dust. According to Gunderson, Turing is in the same
position as the vacuum cleaner salesman *if* he is prepared to
say that a machine is intelligent merely on the basis of its success
in the Imitation Game. Just as "all purpose" entails the
ability to do a range of things, so, too, "thinking"
entails the possession of a range of abilities (beyond the mere
ability to succeed in the Imitation Game).
There is an obvious reply to the argument that we have here attributed
to Gunderson, viz. that a machine that is capable of success in the
Imitation Game is capable of doing a large range of different kinds of
things. In order to carry out a conversation, one needs to have many
different kinds of cognitive skills, each of which is capable of
application in other areas. Apart from the obvious general cognitive
competencies--memory, perception, etc.--there are many
particular competencies--rudimentary arithmetic abilities,
understanding of the rules of games, rudimentary understanding of
national politics, etc.--which are tested in the course of
repeated runs of the Imitation Game. It is inconceivable that
there be a machine that is startlingly good at playing the Imitation
Game, and yet unable to do well at *any* other tasks that might
be assigned to it; and it is equally inconceivable that there is a
machine that is startlingly good at the Imitation Game and yet that
does not have a wide range of competencies that can be displayed in a
range of quite disparate areas. To the extent that Gunderson considers
this line of reply, all that he says is that there is no reason to
think that a machine that can succeed in the Imitation Game
*must* have more than a narrow range of abilities; we see no
reason to take this rejoinder seriously.
More recently, Erion (2001) has defended a position that has some
affinity to that of Gunderson. According to Erion, machines might be
"capable of outperforming human beings in limited tasks in
specific environments, [and yet] still be unable to act skillfully in
the diverse range of situations that a person with common sense
can" (36). On one way of understanding the claim that Erion
makes, he too believes that The Turing Test only identifies one
amongst a range of independent competencies that are possessed by
intelligent human beings, and it is for this reason that he proposes a
more comprehensive "Cartesian Test" that "involves a
more careful examination of a creature's language, [and] also
tests the creature's ability to solve problems in a wide variety
of everyday circumstances" (37). In our view, at least when The
Turing Test is properly understood, it is clear that anything that
passes The Turing Test must have the ability to solve problems in a
wide variety of everyday circumstances (because the interrogators will
use their questions to probe these--and other--kinds of
abilities in those who play the Imitation Game).
### 5.3 The Turing Test is Too Easy
There are authors who have suggested that The Turing Test should be
replaced with a more demanding test of one kind or another. It is not
at all clear that any of these tests actually proposes a better goal
for research in AI than is set by The Turing Test. However, in this
section, we shall not attempt to defend that claim; rather, we shall
simply describe some of the further tests that have been proposed, and
make occasional comments upon them. (One preliminary point upon which
we wish to insist is that Turing's Imitation Game was devised
against the background of the limitations imposed by then current
technology. It is, of course, not essential to the game that tele-text
devices be used to prevent direct access to information about the sex
or genus of participants in the game. We shall not advert to these
relatively mundane kinds of considerations in what follows.)
#### 5.3.1 The Total Turing Test
Harnad (1989, 1991) claims that a better test than The Turing Test
will be one that requires responses to all of our inputs, and not
merely to text-formatted linguistic inputs. That is, according to
Harnad, the appropriate goal for research in AI has to be to construct
a robot with something like human sensorimotor capabilities. Harnad
also considers the suggestion that it might be an appropriate goal for
AI to aim for "neuromolecular indistinguishability," but
rejects this suggestion on the grounds that once we know how to make a
robot that can pass his Total Turing Test, there will be no problems
about mind-modeling that remain unsolved. It is an interesting
question whether the test that Harnad proposes sets a more appropriate
goal for AI research. In particular, it seems worth noting that it is
not clear that there could be a system that was able to pass The
Turing Test and yet that was not able to pass The Total Turing Test.
Since Harnad himself seems to think that it is quite likely that
"full robotic capacities [are] ... necessary to generate
... successful linguistic performance," it is unclear why
there is reason to replace The Turing Test with his extended test.
(This point against Harnad can be found in Hauser (1993:227), and
elsewhere.)
#### 5.3.2 The Lovelace Test
Bringsjord et al. (2001) propose that a more satisfactory aim for AI
is provided by a certain kind of meta-test that they call the Lovelace
Test. They say that an artificial agent *A*, designed by human
H, passes the Lovelace Test just in case three conditions are jointly
satisfied: (1) the artificial agent *A* produces output
*O*; (2) *A*'s outputting *O* is not the
result of a fluke hardware error, but rather the result of processes
that *A* can repeat; and (3) *H*--or someone who
knows what *H* knows and who has *H*'s
resources--cannot explain how *A* produced *O* by
appeal to *A*'s architecture, knowledge-base and core
functions. Against this proposal, it seems worth noting that there are
questions to be raised about the interpretation of the third
condition. If a computer program is long and complex, then no human
agent can explain in *complete* detail how the output was
produced. (Why did the computer output 3.16 rather than 3.17?) But if
we are allowed to give a highly schematic explanation--the
computer took the input, did some internal processing and then
produced an answer--then it seems that it will turn out to be
very hard to support the claim that human agents ever do anything
genuinely creative. (After all, we too take external input, perform
internal processing, and produce outputs.) What is missing from the
account that we are considering is any suggestion about the
appropriate *level* of explanation that is to be provided. It
is quite unclear why we should suppose that there is a relevant
difference between people and machines at any level of explanation;
but, if that's right, then the test in question is trivial. (One
might also worry that the proposed test rules out *by fiat* the
possibility that creativity can be best achieved by using genuine
*randomising* devices.)
#### 5.3.3 The Truly Total Turing Test
Schweizer (1998) claims that a better test than The Turing Test will
advert to the evolutionary history of the subjects of the test. When
we attribute intelligence to human beings, we rely on an extensive
historical record of the intellectual achievements of human beings. On
the basis of this historical record, we are able to claim that human
beings are intelligent; and we can rely upon this claim when we
attribute intelligence to individual human beings on the basis of
their behavior. According to Schweizer, if we are to attribute
intelligence to machines, we need to be able to advert to a comparable
historical record of cognitive achievements. So, it will only be when
machines have developed languages, written scientific treatises,
composed symphonies, invented games, and the like, that we shall be in
a position to attribute intelligence to individual machines on the
basis of their behavior. Of course, we can still use The Turing Test
to determine whether an individual machine is intelligent: but our
answer to the question won't depend merely upon whether or not
the machine is successful in The Turing Test; there is the further
"evolutionary" condition that also must be satisfied.
Against Schweizer, it seems worth noting that it is not at all clear
that our reason for granting intelligence to other humans on the basis
of their behavior is that we have prior knowledge of the collective
cognitive achievements of human beings.
#### 5.3.4 Further Proposals
Damassino (2020) suggests that it would be better to require test
subjects to produce an enquiry in which performance is assessed along
three dimensions: (a) comparison with human performance; (b) success
in completing the enquiry; and (c) efficiency in completing the
enquiry (minimisation of the number of questions asked in completing
the enquiry). The motivation given for this proposal is that, because
The Turing Test attracts projects whose primary ambition is to fool
judges, it is not concerned with whether or how well test subjects perform
on their allocated tasks. It seems to us that there is nothing here
that impugns The Turing Test. It does not count against The
Turing Test that public competitions based on it with prizes
attached lead to gaming, given that everyone knows that those prizes
are being awarded to entries that clearly do not pass The Turing Test.
If anything is impugned here, it is the public competitions, rather
than The Turing Test.
Kulikov (2020) suggests that there is value in considering
Preferential Engagement Tests or Meaningful Engagement Tests. Even
though computers can now beat the best humans at chess, many people
prefer to play chess with humans rather than with expert chess-playing
computers. Perhaps, even if computers could pass The Turing Test,
people would prefer to carry on conversations with humans rather than
with expert conversational computers. We think that this kind of
speculation relies upon assumptions about what could make for expert
conversational partners. If our conversational partners need to be
able to update information about their surroundings in real
time--for example, while watching a game of football--then
we will not think that there is a direct path from GPT-3 to expert
conversational partners. If only androids can be expert conversational
partners, then it is less clear that Preferential Engagement Tests or
Meaningful Engagement Tests will track anything other than
anthropocentric bias.
### 5.4 Should the Turing Test be Considered Harmful?
Perhaps the best known attack on the suggestion that The Turing Test
provides an appropriate research goal for AI is due to Hayes and Ford
(1995). Among the controversial claims that Hayes and Ford make, there
are at least the following:
1. Turing suggested the imitation game as a definite goal for a program
of research.
2. Turing intended The Turing Test to be a gender test rather than a
species test.
3. The task of trying to make a machine that is successful in The
Turing Test is so extremely difficult that no one could seriously
adopt the creation of such a machine as a research goal.
4. The Turing Test suffers from the basic design flaw that it sets
out to confirm a "null hypothesis", viz. that there is no
difference in behavior between certain machines and humans.
5. No null effect experiment can provide an adequate criterion for
intelligence, since the question can always arise that the judges did
not look hard enough (and did not raise the right kinds of questions).
But, if this question is left open, then there is no stable endpoint
of enquiry.
6. Null effect experiments cannot measure anything: The Turing Test
can only test for complete success. ("A man who failed to seem
feminine in 10% of what he said would almost always fail the Imitation
game.")
7. The Turing Test is really a test of the ability of the human
species to discriminate its members from human imposters. ("The
gender test ... is a test of making a mechanical
transvestite.")
8. The Turing Test is circular: what it fails to detect cannot be
"intelligence" or "humanity", since many humans
would fail The Turing Test. Indeed, "since one of the players
must be judged to be a machine, half the human population would fail
the species test".
9. The perspective of The Turing Test is arrogant and parochial: it
mistakenly assumes that we can understand human cognition without
first obtaining a firm grasp of the basic principles of
cognition.
10. The Turing Test does not admit of weaker, different, or even
stronger forms of intelligence than those deemed human.
Some of these claims seem straightforwardly incorrect. Consider (8),
for example. In what sense can it be claimed that 50% of the human
population would fail "the species test"? If "the
species test" requires the interrogator to decide which of two
people is a machine, why should it be thought that the verdict of the
interrogator has any consequences for the assessment of the
intelligence of the person who is judged to be a machine? (Remember,
too, that one of the conditions for "the species
test"--as it is originally described by Hayes and
Ford--is that one of the contestants *is* a machine. While
the machine can "demonstrate" its intelligence by winning
the imitation game, a person cannot "demonstrate" their
lack of intelligence by failing to win.)
It seems wrong to say that The Turing Test is defective because it is
a "null effect experiment". True enough, there is a sense
in which The Turing Test does look for a "null result": if
ordinary judges in the specified circumstances fail to identify the
machine (at a given level of success), then there is a given
likelihood that the machine is intelligent. But the point of insisting
on "ordinary judges" in the specified circumstances is
precisely to rule out irrelevant ways of identifying the machine (i.e.
ways of identifying the machine that are not relevant to the question
whether it is intelligent). There might be all kinds of irrelevant
differences between a given kind of machine and a human
being--not all of them rendered undetectable by the experimental
set-up that Turing describes--but The Turing Test will remain a
good test provided that it is able to ignore these irrelevant
differences.
It also seems doubtful that it is a serious failing of The Turing Test
that it can only test for "complete success". On the one
hand, if a man has a one in ten chance of producing a claim that is
plainly not feminine, then we can compute the chance that he will be
discovered in a game in which he answers *N*
questions--and, if *N* is sufficiently small, then it
won't turn out that "he would almost always fail to
win". On the other hand, as we noted at the end of Section 4.4
above, if one were worried about the "YES/NO" nature of
"The Turing Test", then one could always get the judges to
produce probabilistic verdicts instead. This change preserves the
character of The Turing Test, but gives it scope for greater
statistical sophistication.
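The arithmetic behind this reply can be made explicit. On the (hypothetical) assumption that each answer carries an independent one-in-ten chance of being plainly not feminine, the chance that the man is discovered at least once in *N* answers is 1 − 0.9^*N*; the function name and parameter values below are illustrative, not drawn from the source:

```python
# A minimal sketch, assuming independent "slips" with a fixed per-answer rate.
# With a 1-in-10 slip rate, discovery is far from certain for small N, which
# is the point made against the "complete success" objection above.

def discovery_probability(n_questions: int, slip_rate: float = 0.1) -> float:
    """Chance of at least one telltale answer in n_questions, assuming
    each answer independently slips with probability slip_rate."""
    return 1 - (1 - slip_rate) ** n_questions

for n in (1, 5, 10, 20):
    print(n, round(discovery_probability(n), 3))
```

So for a short game the man "would almost always fail to win" does not follow: with only a few questions the probability of discovery remains well below 1, and it approaches certainty only as *N* grows large.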
While there are (many) other criticisms that can be made of the claims
defended by Hayes and Ford (1995), it should be acknowledged that they
are right to worry about the suggestion that The Turing Test provides
the defining goal for research in AI. There are various reasons why
one should be loath to accept the proposition that the one central
ambition of AI research is to produce artificial people. However, it is
worth pointing out that there is no reason to think that Turing
supposed that The Turing Test defined the field of AI research (and
there is not much evidence that any other serious thinkers have
thought so either). Turing himself was well aware that there might be
non-human forms of intelligence--cf. (10) above. However, all of
this remains consistent with the suggestion that it is quite
appropriate to suppose that The Turing Test sets *one* long
term goal for AI research: one thing that we might well aim to do
*eventually* is to produce artificial people. If--as Hayes
and Ford claim--that task is almost impossibly difficult, then
there is no harm in supposing that the goal is merely an
*ambit* goal to which few resources should be committed; but we
might still have good reason to allow that it is *a* goal.
Others who have argued that we need to "move beyond" The
Turing Test include Hernandez-Orallo (2000, 2020) and
Marcus (2020).
## 6. The Chinese Room
There are many different objections to The Turing Test which have
surfaced in the literature during the past fifty years, but which we
have not yet discussed. We cannot hope to canvass all of these
objections here. However, there is one argument--Searle's
"Chinese Room" argument--that is mentioned so often
in connection with the Turing Test that we feel obliged to end with
some discussion of it.
In *Minds, Brains and Programs* and elsewhere, John Searle
argues against the claim that "appropriately programmed
computers literally have cognitive states" (64). Clearly enough,
Searle is here disagreeing with Turing's claim that an
appropriately programmed computer could think. There is much that is
controversial about Searle's argument; we shall just consider
*one* way of understanding what it is that he is arguing
for.
The basic structure of Searle's argument is very well known. We
can imagine a "hand simulation" of an intelligent
agent--in the case described, a speaker of a Chinese
language--in circumstances in which we might well be very
reluctant to allow that there is any appropriate intelligence lying
behind the simulated behavior. (Thus, what we are invited to suppose
is a logical possibility is not so very different from what Block
invites us to suppose is a logical possibility. However, the argument
that Searle goes on to develop is rather different from the argument
that Block defends.) *Moreover*--and this is really the
key point for Searle's argument--the "hand
simulation" in question is, in all relevant respects, simply a
special kind of digital computation. So, there is a possible
world--doubtless one quite remote from the actual world--in
which a digital computer simulates intelligence but in which the
digital computer does not itself possess intelligence. But, if we
consider any digital computer in the actual world, it will not differ
from the computer in that remote possible world in any way which could
make it the case that the computer in the actual world is more
intelligent than the computer in that remote possible world. Given
that we agree that the "hand simulating" computer in the
Chinese Room is not intelligent, we have no option but to conclude
that digital computers are simply not the kinds of things that
*can* be intelligent.
So far, the argument that we have described arrives at the conclusion
that no appropriately programmed computer can think. While this
conclusion is not one that Turing accepted, it is important to note
that it is compatible with the claim that The Turing Test is a good
test for intelligence. This is because, for all that has been argued,
it may be that it is not *nomically* possible to provide any
"hand simulation" of intelligence (and, in particular,
that it is not possible to simulate intelligence using any kind of
computer). In order to turn Searle's argument--at least in
the way in which we have developed it--into an objection to The
Turing Test, we need to have some reason for thinking that it is at
least *nomically* possible to simulate intelligence using
computers. (If it is nomically impossible to simulate intelligence
using computers, then the alleged fact that digital computers cannot
genuinely possess intelligence casts no doubt at all on the usefulness
of the Turing Test, since digital computers are nomically disqualified
from the range of cases in which there is mere simulation of
intelligence.) In the absence of reason to believe this, the most that
Searle's argument yields is an objection to Turing's
confidently held belief that digital computing machines will one day
pass The Turing Test. (Here, as elsewhere, we are supposing that, for
any kind of creature C, there is a version of The Turing Test in which
C takes the role of the machine in the specific test that Turing
describes. This general format for testing for the presence of
intelligence would not necessarily be undermined by the success of
Searle's Chinese Room argument.)
There are various responses that might be made to the argument that we
have attributed to Searle. One kind of response is to dispute the
claim that there is no intelligence present in the case of the Chinese
Room. (Suppose that the "hand simulation" is embedded in a
robot that is equipped with appropriate sensors, etc. Suppose,
further, that the "hand simulation" involves updating the
process of "hand simulation," etc. If enough details of
this kind are added, then it becomes quite unclear whether we do want
to say that we still haven't described an intelligent system.)
Another kind of response is to dispute the claim that digital
computers in the actual world could not be relevantly different from
the system that operates in the Chinese Room in that remote possible
world. (If we suppose that the core of the Chinese Room is a kind of
giant look-up table, then it may well be important to note that
digital computers in the actual world do not work with look-up tables
in that kind of way.) Doubtless there are other possible lines of
response as well. However, it would take us out of our way to try to
take this discussion further. (One good place to look for further
discussion of these matters is Braddon-Mitchell and Jackson
(1996).)
## 7. Brief Notes on Intelligence
There are radically different views about the measurement of
intelligence that have not been canvassed in this article. Our concern
has been to discuss Turing (1950) and its legacy. But, of course, a
more wide-ranging discussion would also consider, for example,
research on the measurement of intelligence using the mathematical and
computational resources of Algorithmic Information Theory, Kolmogorov
Complexity Theory, Minimum Message Length (MML) Theory, and so forth.
(For an introduction to this literature, see Hernandez-Orallo and Dowe
(2010), and the list of references contained therein. For a more
general introduction to research into AI, see Marquis et al.
(2020).)
More broadly, there are radically different views about our
concept--or concepts--of intelligence that have not been canvassed in
this article. There is a dispute, for example, about whether Turing is
best interpreted as working with a response-dependent concept of
intelligence. (Pro: Proudfoot (2013, 2020); contra: Wheeler (2020).)
Relatedly, there is a dispute about whether intelligence bears some
kind of necessary relationship to symmetrical relations of recognition
between agents, as suggested in Mallory (2020). There is also a
broader dispute about whether we should think that useful notions of
intelligence are always domain specific, or whether we should rather
suppose that there is something important in the idea of general,
domain independent intelligence.
And there are radically different views about the most likely paths to
building general intelligence (assuming that there is such a thing as
general intelligence). For example, Crosby (2020) suggests that the
best way forwards may be to try to make machines that can pass animal
cognition tests, i.e. that can create predictive models of their
environment from sensory input. (There are clear precursors to this
line of thought in, for example, Brooks (1990).)
## 1. Major figures
The major figures in Scottish eighteenth century philosophy were
Francis Hutcheson, David Hume, Adam Smith, Thomas Reid and Adam
Ferguson. Others who produced notable works included Gershom
Carmichael, Archibald Campbell, George Turnbull, George Campbell,
James Beattie, Hugh Blair, Alexander Gerard, Henry Home (Lord
Kames), John Millar and Dugald Stewart.
## 2. Carmichael on Natural Law
Gershom Carmichael (1672-1729) studied at Edinburgh University
(1687-1691), taught at St Andrews University (1693-1694),
and spent the rest of his life at Glasgow, first as a regent in arts
and then as professor of moral philosophy. He was a main conduit into
Scotland of the European natural law tradition, a tradition of
scientific investigation of human nature with a view to constructing
an account of the principles that are morally binding on us. Among the
great figures of that tradition were Hugo Grotius (1583-1645)
and Samuel Pufendorf (1632-1694), thinkers whose writings played
a major role in moral philosophical activity in Scotland during the
Age of Enlightenment.
In 1718, during the first stirrings of the Scottish Enlightenment,
Carmichael published *Supplements and Observations upon the two
books of the distinguished Samuel Pufendorf's On the Duty of Man
and Citizen*. In 1724 he published a second edition containing
extensive additional material. Carmichael affirms: "when God
prescribes something to us, He is simply signifying that he requires
us to do such and such an action, and regards it, when offered with
that intention, as a sign of love and veneration towards him, while
failure to perform such actions, and, still worse, commission of the
contrary acts, he interprets as an indication of contempt or
hatred" (Carmichael, [NR], p. 46). Hence we owe God love and
veneration, and on this basis Carmichael distinguishes between
immediate and mediate duties. Our immediate duty is formulated in the
first precept of natural law, that God is to be worshipped. He seeks a
sign of our love and veneration for him, and worship is the clearest
manifestation of these feelings.
The second precept, which identifies our mediate duties, is:
"Each man should promote, so far as it is in his power, the
common good of the whole human race, and, so far as this allows, the
private good of individuals" ([NR], p. 48). This relates to our
'mediate' duties since we indirectly signify our love and
veneration of God by treating his creatures well. On this basis,
Carmichael deploys the distinction between self and others in two
subordinate precepts: "Each man should take care to promote his
own interest without harming others" and "Sociability is
to be cultivated and preserved by every man so far as in him
lies." These precepts, concerning duties to God, to self and to
others, are the fundamental precepts of natural law, and though the
precept that God is to be worshipped is prior to and more evident than
the precept that one should live sociably with men, the requirement
that we cultivate sociability is a foundation of the well-lived
life.
Carmichael therefore rejects an important aspect of Pufendorf's
doctrine on the cultivation of sociability, for the latter argues that
the demand "that every man must cultivate and preserve
sociability so far as he can" is that to which all our duties
are subordinate. Yet for Carmichael the precept that we worship God is
not traceable back to the duty to cultivate sociability, and therefore
the requirement that we cultivate and preserve sociability cannot
precede the laws binding us to behave appropriately towards God.
For instance, God is central to the narrative concerning the duty to
cultivate our mind, for performance of this duty requires that we
cultivate in ourselves the conviction that God is creator and governor
of the universe and of us. Carmichael criticises Pufendorf for paying
too little attention to the subject of cultivation of the mind, and
indicates some features that might profitably have been considered by
Pufendorf, for example the following.
Due cultivation of the mind involves filling it with sound opinion
regarding our duty, learning to judge well the objects which commonly
stimulate our desires, and acquiring rational control of our passions.
It also involves our learning to act on the knowledge that, as regards
our humanity, we are neither superior nor inferior to other people.
Finally, a person with a well cultivated mind is aware of how little
he knows of what the future holds, and consequently is neither
arrogant at his present happy circumstances nor excessively anxious
about ills that might yet assail him.
The Stoic character of this text is evident, as is Carmichael's
injunction that we not be disturbed on account of evils which have
befallen us, or which might befall us, due to no fault of ours. The
deliberate infringement of the moral law is said however to be another
matter; it prompts a discomfort peculiarly hard to bear. In full
concord with the Stoic tendency here observed, we find him supporting,
under the heading 'duty to oneself', a Stoic view of
anger. Though not expressing unconditional disapproval of anger, he
does point out that it is difficult to keep an outburst of anger
within just limits, and that such an outburst is problematic in
relation to natural law, for: "it must be regarded as one of the
things which most of all makes human life unsocial, and has pernicious
effects for the human race. Thus we can scarcely be too diligent in
restraining our anger" ([NR], p. 65). Anger conflicts with
sociability and it is only by due cultivation of the mind that our
sociability can be fortified and enhanced.
## 3. Hutcheson and Archibald Campbell
The first of the major philosophers was Francis Hutcheson
(1694-1746). His reputation rests chiefly on his earlier
writings, especially *An Inquiry into the Original of our Ideas of
Beauty and Virtue* (London 1725), *Reflections upon Laughter
and Remarks on the Fable of the Bees* (both in the Dublin Journal
1725-1726), and *Essay on the Nature and Conduct of the
Passions with Illustrations on the Moral Sense* (London 1728). His
magnum opus, *A System of Moral Philosophy*, was published
posthumously in Glasgow in 1755. During his period as a student at
Glasgow University (c. 1711-1717) Gershom Carmichael taught
moral philosophy and jurisprudence there and there are clear signs in
Hutcheson's writings of Carmichael's influence though it
is not known whether he studied under Carmichael during his student
days. In 1730 he took up the moral philosophy chair left vacant on
Carmichael's death. Hutcheson is known principally for his ideas
on moral philosophy and aesthetics. First, moral philosophy.
Hutcheson reacted against both the psychological egoism of Thomas
Hobbes and the rationalism of Samuel Clarke and William Wollaston. As
regards Hobbes, Hutcheson thought his doctrine was both wrong and
dangerous; wrong because by the frame of our nature we have
compassionate, generous and benevolent affections which owe nothing at
all to calculations of self-interest, and dangerous because people may
be discouraged from the morally worthy exercise of cultivating
generous affections in themselves on the grounds that the exercise of
such affections is really an exercise in dissimulation or pretence. As
against Hobbes Hutcheson held that a morally good act is one motivated
by benevolence, a desire for the happiness of others. Indeed the wider
the scope of the act the better, morally speaking, the act is;
Hutcheson was the first to speak of "the greatest happiness for
the greatest numbers".
He believed that moral knowledge is gained via our moral sense. A
sense, as the term is deployed by Hutcheson, is every determination of
our minds to receive ideas independently of our will, and to have
perceptions of pleasure and pain. In accordance with this definition,
the five external senses determine us to receive ideas which please or
pain us, and the will does not intervene -- we open our eyes and
by natural necessity see whatever it is that we see. But Hutcheson
thought that there were far more senses than the five external ones.
Three in particular play a role in our moral life. The public sense is
that by which we are pleased with the happiness of others, and are
uneasy at their misery. The moral sense is that by which we perceive
virtue or vice in ourselves or others, and derive pleasure, or pain,
from the perception. And the sense of honour is that which makes the
approbation, or gratitude of others, for any good actions we have
done, the necessary occasion of pleasure. In each of these cases the
will is not involved. We see a person acting with the intention of
bringing happiness to someone else, and by the frame of our nature
pleasure wells up in us.
Hutcheson emphasises both the complexity of the relations between our
natural affections and also the need, in the name of morality, to
exercise careful management of the relations between the affections.
We must especially be careful not to let any of our affections get too
'passionate', for a passionate affection might become an
effective obstacle to other affections that should be given priority.
Above all the selfish affections must not be allowed to over-rule
'calm universal benevolence'.
Hutcheson's opposition to Hobbesian egoism is matched by his
opposition to ethical rationalism, an opposition which emerges in the
*Illustrations upon the Moral Sense*, where he demonstrates
that his account of the affections and the moral sense makes sense of
the moral facts whereas the doctrines of Clarke and Wollaston totally
fail to do so. Hutcheson's main thesis against ethical
rationalism is that all exciting reasons presuppose instincts and
affections, while justifying reasons presuppose a moral sense. An
exciting reason is a motive which actually prompts a person to act; a
justifying reason is one which grounds moral approval of the act.
Hutcheson demonstrates that reason, unlike affection, cannot furnish
an exciting motive, and that there can be no exciting reason previous
to affection. Reason does of course play a role in our moral life, but
only as helping to guide us to an end antecedently determined by
affection, in particular the affection of universal benevolence. On
this basis, an act can be called 'reasonable', but this is
not a point on the side of the rationalists, since they hold that
reason by itself can motivate, and in this case it is affection, not
reason that motivates, that is, that gets us doing something rather
than nothing.
If we add to this the fact, as Hutcheson sees it, that it has never
been demonstrated that reason is a fit faculty to determine what the
ends are that we are obliged to seek, we shall see that
Hutcheson's criticism of rationalism is that it can account for
neither moral motivation nor moral judgment. On the other hand our
natural affections, in particular benevolence, account fully for our
moral motivation and our faculty of moral sense accounts fully for our
ability to make an assessment of actions whether our own or
others'.
Certain features of Hutcheson's moral philosophy appear in his
aesthetic theory also. Indeed the two fields are inextricably related,
as witness Hutcheson's reference to the 'moral sense of
beauty'. Two features especially work hard. He contends that we
sense the beauty, sublimity or grandeur of a sight or of a sound. The
sense of the thing's beauty, so to say, wells up unbidden. And
associated with that sense, and perhaps even part of it --
Hutcheson does not give us a clear account of the matter -- is a
pleasure that we take in the thing. We *enjoy* beautiful things
and that enjoyment is not merely incidental to our sensing their
beauty.
A question arises here regarding the features of a thing that cause us
to see it as beautiful and to take pleasure in it. Hutcheson suggests
that a beautiful thing displays unity (or uniformity) amidst variety.
If a work has too much uniformity it is simply boring. If it has too
much variety it is a jumble. An object, whether visual or audible,
requires therefore to occupy the intermediate position if it is to
give rise to a sense of beauty in the observer. But if Hutcheson is
right about the basis of aesthetic judgment how does disagreement
arise? Hutcheson's reply is that our aesthetic response is
affected in part by the associations that the thing arouses in our
mind. If an object that we had found beautiful comes to be associated
in our mind with something disagreeable this will affect our aesthetic
response; we might even find the thing ugly. Hutcheson gives an
example of wines to which men acquire an aversion after they have
taken them in an emetic preparation. On this matter his position may
seem extreme, for he holds that if two people have the same experience
and if the thing experienced carries the same identical associations
for the two people, then they will have the same aesthetic response to
the object. The position is however difficult to disprove, since if
two people do in fact disagree about the aesthetic merit of an object,
Hutcheson can say that the object produces different associations in
the two spectators.
Nevertheless, Hutcheson does believe aesthetic misjudgments are
possible, and in the course of explaining their occurrence he deploys
Locke's doctrine of association of ideas, a doctrine according
to which ideas linked solely by chance or custom come to be associated
in our minds and become almost inseparable from each other though they
are 'not at all of kin'. Hutcheson holds that an art
connoisseur's judgment can be distorted through his tendency to
associate ideas, and notes in particular that a connoisseur's
aesthetic response to a work of art is likely to be affected by the
fact that he owns it, for the pleasure of ownership will tend to
intermix with and distort the affective response he would otherwise
have to the object. Hutcheson, it should be added, is equally
sensitive to the danger to our moral judgments that is posed by our
associative tendency. And in both types of case the best defence
against the threat is reflection, understood as a mental probing, an
examination and then cross-examination, whether of a work of art or of
an action, and of the elements in and aspects of our situation that
motivate our judgments, all this with a view to factoring out
irrelevant considerations. Without such mental exercises we cannot, in
his view, obtain what he terms 'true liberty and
self-command'. This position, which he presents several times,
points to a doctrine of free will not otherwise readily discernible in
his writings. Our free will, on this account, is a habit of reflection
through which we form a judgment which we are in a position to defend.
We stand back from the object of reflection, do not allow ourselves to
be overwhelmed by it, but instead adjudicate it in the light of
whatever considerations we judge it appropriate to bring to bear
(Broadie 2016).
A philosopher of the early period of the Scottish Enlightenment with
whom Hutcheson may helpfully be compared and contrasted is his close
contemporary Archibald Campbell (1691-1756), professor of
divinity and church history at St Andrews University. Both men studied
at Glasgow University under John Simson, a professorial divine much
harassed by conservative Reformists in the Kirk on account of his
rejection of the Kirk's doctrine that, since the Fall, human
nature through the generations has suffered from total depravity. In
due course Hutcheson and Campbell were both harassed by the Kirk on
account of their claim that human beings are by nature inclined to
virtue, for the Kirk took that claim to imply that the two men did not
fully embrace the doctrine of total depravity. However, beyond this
agreement Campbell opposes Hutcheson on certain essential points. Most
especially, in his *Enquiry into the Original of Moral Virtue*
(1733) Campbell rejects Hutcheson's claim that for an action to
be virtuous it must be motivated by a disinterested benevolence, and
argues to the contrary that all human acts are and can only be
motivated by self-love. From this Campbell concludes that all
virtuous human acts, too, are motivated by self-love. This claim,
though at first sight Hobbesian, is however not at all of a Hobbesian
stripe, for Campbell holds that the self-love that motivates us to
perform a virtuous act takes the form of a desire for esteem, where
the desired esteem derives from our gratification of another
person's self-love. As well as writing against Hobbes and
Hutcheson, Campbell also directs his fire at Bernard Mandeville, who
held that a virtuous act must involve an exercise of self-denial, in
the sense that to act virtuously we have to cut across, or frustrate,
our natural principles, whereas for Campbell, as also for Hutcheson,
virtuous acts are performed in realisation of, and not at all in
conflict with, our nature. The Kirk set up a committee of purity of
doctrine to investigate the teachings of Campbell, a Kirk minister who
painted such a distressingly agreeable picture of human nature. The
conservatively orthodox Reformists on the Committee wished to move
against him, but the Kirk's General Assembly, which was already
beginning to display an Enlightenment spirit, prevented such a move,
and Campbell retained his professorship of divinity and his position
as minister of the Kirk. Campbell's ideas, till now neglected,
are at last beginning to receive serious consideration (Maurer 2012,
2016).
## 4. Hutcheson, Hume and Turnbull
Hutcheson influenced most of the Scottish philosophers who succeeded
him, perhaps all of them, whether because he helped to set their
agenda or because they appropriated, in a form suitable to their
needs, certain of his doctrines. In the field of aesthetics for
example, where Hutcheson led, many, including Hume, Reid, and
Archibald Alison, followed. But influences can be hard to pin down and
there is much dispute in particular concerning his influence on David
Hume (1711-1776). It is widely held that Hume's moral
philosophy is essentially Hutchesonian, and that Hume took a stage
further Hutcheson's projects of internalisation and of grounding
our experience of the world on sentiment or feeling. For Hume agreed
with Hutcheson that moral and aesthetic qualities are really
sentiments existing in our minds, but he also argued that the
necessary connection between any pair of events E1 and E2 which are
related as cause to effect is also in our minds, for it is nothing
more than a determination of the mind, due to custom or habit, to have
a belief (a kind of feeling) that an event of kind E2 will occur next
when we experience an event of kind E1. Furthermore Hume argues that
what we think of as the 'external' world is almost
entirely a product of our own imaginative activity. As against these
reasons for thinking Hume indebted to Hutcheson there are the awkward
facts that Hutcheson greatly disapproved of the draft of Book III of
the *Treatise* that he saw in 1739 and that Hutcheson did
his best to prevent Hume being appointed to the moral philosophy chair
at Edinburgh University in 1744-1745. In addition many of their
contemporaries, such as Adam Smith and Thomas Reid, held that
Hume's moral philosophy was significantly different from
Hutcheson's (Moore 1990).
One close contemporary of Hutcheson, who also stands in interesting
relations to Hume, is George Turnbull (1698-1748), regent at
Marischal College, Aberdeen (1721-1727), and teacher of Thomas
Reid at Marischal. He describes Hutcheson as "one whom I think
not inferior to any modern writer on morals in accuracy and
perspicuity, but rather superior to almost all" (*Principles
of Moral Philosophy*, p. 14), and no doubt Hutcheson was an
influence on Turnbull in several ways. But it has to be borne in mind
that the earliest of Turnbull's writings, *Theses
philosophicae de scientiae naturalis cum philosophia morali
conjunctione* (Philosophical theses on the unity of natural
science and moral philosophy), a graduation oration delivered in 1723
(*Education for Life*, pp. 45-57), shows Turnbull already
working on a grand project that might be thought of as roughly
Hutchesonian, but doing so several years before Hutcheson's
earliest published work (Turnbull [EL]). As regards Turnbull's
relationship with Hume, we should recall that the subtitle of
Hume's *Treatise of Human Nature* is "An attempt to
introduce the experimental method of reasoning into moral
subjects". As with Hume's *Treatise*, so also
Turnbull's *Principles of Moral Philosophy*, published in
1740 (the year of publication of Bk. III of the *Treatise*) but
based on lectures given in Aberdeen in the mid-1720s, contains a
defence of the claim that natural and moral philosophy are very
similar types of enquiry. When Turnbull tells us that all enquiries
into fact, reality, or any part of nature must be set about, and
carried on in the same way, he is bearing in mind the fact, as he sees
it to be one, that there are moral facts and a moral reality, and that
our moral nature is part of nature and therefore to be investigated by
the methods appropriate to the investigation of the natural world. As
the natural philosopher relies on experience of the external world, so
the moral philosopher relies on his experience of the internal world.
Likewise, writing in Humean terms, but uninfluenced by Hume, Turnbull
affirms: "every Enquiry about the Constitution of the human
Mind, is as much a question of Fact or natural History, as Enquiries
about Objects of Sense are: It must therefore be managed and carried
on in the same way of Experiment, and in the one case as well as in
the other, nothing ought to be admitted as Fact, till it is clearly
found to be such from unexceptionable Experience and
Observation" (*Education for Life*, pp. 342-3). It
is, in Turnbull's judgment, the failure to respect this
experimental method that led to the moral scepticism (as Turnbull
thought it to be) of Hobbes and Mandeville, whose reduction of
morality to self-love flies in the face of experience and shocks
common sense.
The experience in question is of the reality of the public affection
in our nature, the immediate object of which is the good of others,
and the reality of the moral sense by which we are determined to
approve such affections. This moral sense, of whose workings we are
all aware, is the faculty by which, without the mediation of rational
activity, we approve of virtuous acts and disapprove of vicious ones;
and the approval and disapproval rise up in us without any regard for
self-love or self-interest. In a very Hutchesonian way Turnbull
invites us to consider the difference we feel when faced with two acts
which are the same except for the fact that one of them is performed
from love of another person and the other act is performed from
self-interest. These facts about our nature have to be accommodated
within moral philosophy just as the fact that heavy bodies tend to
fall has to be accommodated within natural philosophy.
Turnbull is committed to a form of reliabilism according to which the
faculties that we have by the frame or constitution of our nature are
trustworthy. It is not simply that we are so constructed that we
cannot but accept their deliverances; it is that we are also entitled
to accept them. Turnbull, a deeply committed Christian, believed that
the author of our nature would not have so constituted us as to accept
the deliverances of our nature if our nature could not be relied upon
to deliver up truth. We are in the hands of providence, and live
directed towards the truth for that reason. This doctrine has been
termed 'providential naturalism', and bears a marked
resemblance to the language and also to the substance of the position
held by Turnbull's pupil Thomas Reid.
## 5. Kames on aesthetics and religion
Henry Home, Lord Kames, likewise held a version of providential
naturalism. In his *Essays on the Principles of Morality and
Natural Religion* (1779) he has a good deal to say about the
senses external and internal, treating them as enabling us, by the
original frame of our nature, to gain access, without the use of
reasoning processes, to the realities in the corresponding domains,
including the moral domain. Kames's moral sense has as much to
do with aesthetics as with morality; or rather, for Kames, no less
than for Hutcheson, virtue is a kind of beauty, moral beauty, as vice
is moral deformity. Beauty itself is ascribed to anything that gives
pleasure. And as there are degrees of pleasure and pain, so also there
are degrees of beauty and ugliness. In the lowest rank are things
considered without regard to an end or a designing agent. The
possibility of greater pleasure, and of the ascription of greater
beauty, arises when an object is considered with respect to the
object's end. A house, considered in itself, might be beautiful,
but how much more beautiful is it judged to be if it is seen to be
well designed for human occupancy.
Approbation, as applied to works of art, is our pleasure at them when
we consider them to be well fitted or suited to an end. The
approbation is greater if the end for which the object is well suited
also gives pleasure. A ship may give pleasure because it is so
shapely, and also give pleasure because it is well suited to trade,
and also give pleasure because trade also is a fine thing. If these
further things are taken into account the beauty of the ship is
enhanced. Kames argues that these kinds of pleasure can also be taken
in human actions, and that human acts can cause pleasure additionally
by the special fact about them that they proceed from intention,
deliberation and choice. In the case of, for example, an act of
generosity towards a worthy person, the act is intentionally well
suited, or fitted, to an end whose beauty is recognised by the agent.
The fact that observation of acts displaying generosity, and other
virtues, gives us pleasure is due to the original constitution of our
nature. The pleasure arises unbidden, and no exercise of will or
reason is required, any more than we require to use our reason to see
the beauty of a landscape or a work of art.
Kames wrote extensively on revealed and natural theology. As regards
the latter, he often has Hume in his sights, particularly Hume's
*Dialogues Concerning Natural Religion* (1779), with whose
contents Kames was familiar decades before the work's
publication. Hume held that in an inference from effect to cause no
more should be assigned to the cause than is sufficient to explain the
effect. In particular, if we argue from the existence of the natural
world to the existence of God we should ascribe to God only such
attributes as are requisite for the explanation of the world. And
since the world is imperfect, why not say that we are not constrained
by the facts in the natural order to ascribe perfection to God? Kames,
on the other hand, holds that there are principles implanted in our
nature that permit us to draw conclusions that reason alone does not
sanction. If something is a tendency of our nature then we have to
rely on it as a source of truth. Invoking just such a tendency Kames
affirms that though we see both good and evil around us we do not
conclude that the cause of the world must also be a mixture of good
and evil: "it is a tendency of our nature to reject a mixed
character of benevolence and malevolence, unless where it is
necessarily pressed home upon us by an equality of opposite effects;
and in every subject that cannot be reached by the reasoning faculty,
we justly rely on the tendency of our nature" (*Essays*,
p. 353). In any case Kames sees a world which is predominantly good
even though it has 'a few cross instances'. But the few
cross instances might not look so cross, or even at all cross, if we
had a fuller perspective, and Kames anticipates the time when that
perspective will be granted us.
This latter position did not raise the hackles of the zealots among
the Presbyterian clergy in Scotland, but Kames's position on
free will caused a furore and he had to defend himself from attempts
to expel him from the Kirk. Kames, accepting the concept of history,
natural and human, as the gradual realisation of a divine plan,
believed in universal necessity. The laws ordained by God
"produce a regular train of causes and effects in the moral as
well as material world, bringing about those events which are
comprehended in the original plan, and admitting the possibility of
none other" (*Essays*, p. 192). On the other hand, if we
are to fulfill our role in the grand scheme we must see ourselves as
able to initiate things, that is, to be the free cause of their
occurrence. God has, according to Kames, concealed from us
the necessity of our acts, and is to that extent a deceitful God. Kames
sought to explain how this divine deceit enables us to live as morally
accountable beings, but this latter part of his philosophy did nothing
to placate those in the Kirk for whom the affirmation of a deceitful
God was a sacrilege. Kames, however, could not see any difference
between the deception by which we believe ourselves to be free when in
fact we are necessitated and the deception by which we believe
secondary qualities, such as colours and sounds, to be in the external
world and able to get along without us, when in fact they depend for
their existence upon the exercise of our own sensory powers.
## 6. George Campbell on miracles
Kames did not dedicate an entire book to an attack on Hume on
religion, but George Campbell (1719-1796) did. This interesting
man, a student at Marischal College, Aberdeen, of which in 1759 he
became Principal, was a founder-member of the Aberdeen Philosophical
Society, the 'Wise Club', which also included Thomas Reid,
John Gregory, David Skene, Alexander Gerard, James Beattie and James
Dunbar. It is probable that many of Campbell's writings began
life as papers to the Club. In 1763 he published *A Dissertation on
Miracles*, which was intended as a demolition of Hume's essay
'Of Miracles', Section Ten of *An Enquiry Concerning
Human Understanding*. Miracles were commonly discussed in
eighteenth century Scotland. On the one hand the Kirk required people
to accept miracle claims on the basis either of eyewitness reports or
of reports of such reports, and on the other hand the spirit of
Enlightenment required that claims based on the authority of others be
put before the tribunal of reason. Hume focuses especially on the
credibility of testimony, and argues that the credence we place in
testimony is based entirely on experience, the experience of the
occasions when testimony has turned out to be true as against those
experiences where it has not. Likewise it is on the basis of
experience that we judge whether a reported event occurred. If the
reported event is improbable we ask how probable it is that the
eyewitness is speaking truly. We have to balance the probability that
the eyewitness is speaking truly against the improbability of the
occurrence of the event. Hume held that the improbability of a miracle
is always so great that no testimony could tell effectively in its
favour. The wise man, proportioning his belief to the evidence, would
believe that the testimony in favour of the miracle is false.
Campbell's opening move against this argument is to reject
Hume's premiss that we believe testimony solely on the basis of
experience. For, according to Campbell, there is in all of us a
natural tendency to believe other people. This is not a learned
response based on repeated experience but an innate disposition. In
practice this principle of credulity is gradually refined in the
light of experience. Once testimony is placed before us it becomes the
default position, something that is true unless or until proved false,
not false unless or until proved true. The credence we give to
testimony is much like the credence we give to memory. It is the
default position as regards beliefs about the past, even though in the
light of experience we might withhold belief from some of its
deliverances.
Because our tendency to accept testimony is innate, it is harder to
overturn than Hume believes it to be. Campbell considers the case of a
ferry that has safely made a crossing two thousand times. I, who have
seen these safe crossings, meet a stranger who tells me solemnly that
he has just seen the boat sink taking with it all on board. The
likelihood of my believing this testimony is greater than would be
implied by Hume's formula for determining the balance of
probabilities. Reid, a close friend of Campbell's, likewise gave
massive emphasis to the role of testimony, stressing both the innate
nature of the credence we give to testimony and also the very great
proportion of our knowledge of the world that we gain, not through
perception or reason, but through the testimony of others.
Reid's comparison of the credence we naturally give to the
testimony of others and the credence we naturally give to the
deliverances of our senses, is one of the central features of his
*Inquiry into the Human Mind* (1764).
## 7. George Campbell and the rhetorical tradition
A number of eighteenth century Scots, including James Burnett (Lord
Monboddo), Adam Smith, Thomas Reid, Hugh Blair and James Dunbar, made
significant contributions in the field of language and rhetoric.
George Campbell's *The Philosophy of Rhetoric* (London
1776) is a large-scale essay in which he takes a roughly Aristotelian
position on the relation between logic and rhetoric, since he holds
that convincing an audience, which is the province of rhetoric or
eloquence, is a particular application of the logician's art.
The central insight from which Campbell is working is that the orator
seeks to persuade people, and in general the best way to persuade is
to produce perspicuous arguments. Good orators have to be good
logicians. Their grammar also must be sound. This double requirement
of orators leads Campbell to make a sharp distinction between logic
and grammar, on the grounds that though both have rules, the rules of
logic are universal and those of grammar particular. Though there are
many natural languages there is but one set of rules of logic, and on
the other hand different languages have different rules of grammar. It
is against a background of discussion by prominent writers on language
such as Locke and James ('Hermes') Harris that Campbell
takes his stand with the claim that there cannot be such a thing as a
universal grammar. His argument is that there cannot be a universal
grammar unless there is a universal language, and there is no such
thing as a universal language, just many particular languages. There
are, he grants, collections of rules that some have presented under
the heading 'universal grammar'. But, protests Campbell,
"such collections convey the knowledge of no tongue
whatever". His position stands in interesting relation to
Reid's frequent appeals to universals of language in support of
the claim that given beliefs are held by all humankind.
## 8. Common sense
Campbell was a leading member of the school of common sense
philosophy. For him common sense is an original source of knowledge
common to humankind, by which we are assured of a number of truths
that cannot be evinced by reason and "it is equally impossible,
without a full conviction of them, to advance a single step in the
acquisition of knowledge" (*Philosophy of Rhetoric*, vol.
1, p. 114). His account is much in line with that of his colleague
James Beattie: "that power of the mind which perceives truth, or
commands belief, not by progressive argumentation, but by an
instantaneous, instinctive, and irresistible impulse; derived neither
from education nor from habit, but from nature; acting independently
on our will, whenever its object is presented, according to an
established law, and therefore properly called Sense; and acting in a
similar manner upon all, or at least upon a great majority of mankind,
and therefore properly called *Common Sense*" (*An
Essay on the Nature and Immutability of Truth*, p. 40). We are
plainly in the same territory as Reid's account: "there
are principles common to [philosophers and the vulgar], which need no
proof, and which do not admit of direct proof", and these common
principles "are the foundation of all reasoning and
science" (Reid [EIP]).
These philosophers do however disagree about substantive matters. In
particular, Reid lists as the first principle of common sense:
"The operations of our minds are attended with consciousness;
and this consciousness is the evidence, the only evidence which we
have or can have of their existence" (Reid [EIP], p. 41).
Campbell on the other hand lists three sorts of intuitive evidence.
The first concerns our unmediated insight into the truth of
mathematical axioms and the third concerns common sense principles.
The second concerns the deliverances of consciousness, consciousness
being the faculty through which we learn directly of the occurrence of
mental acts -- thinking, remembering, being in pain, and so on.
What is listed as a principle of common sense by Reid is, therefore,
according to Campbell, to be contrasted with such principles. Aside
from this, however, it is clear that Campbell is philosophically very
close to Reid, even if Reid is unquestionably the greater
philosopher.
## 9. Smith on moral sentiments
Reid and Hume both owed an immense debt to Hutcheson. So also did Adam
Smith (1723-1790) who, unlike the others, studied under
Hutcheson at Glasgow University. In 1751 Smith was appointed to the
chair of logic and rhetoric at Glasgow and the following year
transferred to the chair of moral philosophy that Hutcheson had
occupied. Smith's *An Inquiry into the Nature and Causes of
the Wealth of Nations* appeared in 1776. *Essays on
Philosophical Subjects* appeared posthumously in 1795. He also
published an essay on the first formation of languages, and student
notes of his lectures on rhetoric and belles lettres, and on
jurisprudence have survived. But much his most important work in
philosophy is the *Theory of Moral Sentiments* which appeared
in 1759 and of which six authorised editions appeared during
Smith's lifetime. The second and sixth editions contain
significant revisions and additions. Smith sets out to provide a
theory that will address the weaknesses of existing systems of moral
philosophy (criticised in Part VII of the *Moral Sentiments*).
He does so by rejecting attempts to reduce morality to a single
principle and instead seeks to provide an account of the operation of
ordinary moral judgment that recognises the central role of
socialisation.
The concepts of sympathy and spectatorship, central to the doctrine of
*TMS*, had already been put to work by Hutcheson and Hume, but
Smith's account is distinct. As spectator of an agent's
suffering we form in our imagination a copy of such 'impression
of our own senses' as we have experienced when we have been in a
situation of the kind the agent is in: "we enter as it were into
his body, and become in some measure the same person with the
agent" (Smith 1790, p. 9). Smith gives two spectacular examples
of cases where the spectator has a sympathetic feeling that does not
correspond to the agent's. The first concerns the agent who has
lost his reason. He is happy, unaware of his tragic situation. The
spectator imagines how he himself would feel if reduced to this same
situation. In this imaginative experiment, in which the spectator is
operating on the edge of a contradiction, the spectator's idea
of the agent's situation plays a large role while his idea of
the agent's actual feelings has a role only in that the
agent's happiness is itself evidence of his tragedy. The second
of Smith's examples is the spectator's sympathy for the
dead, deprived of sunshine, conversation and company. Again Smith
emphasises the agent's situation, and asks how the spectator
would feel if in the agent's situation, deprived of everything
that matters to people.
Smith relates sympathy to approval. For a spectator to approve of an
agent's feelings is for him to observe that he sympathises with
the agent. This account is used as the basis of the analysis of
propriety. For a spectator to judge that an agent's act is
proper or appropriate is for him to approve of the agent's act.
The agent's act lacks propriety, in the judgment of the
spectator, if the spectator does not sympathise with the agent's
performance.
Propriety and impropriety are based on a bilateral relation, between
spectator and agent. Smith attends also to a trilateral relation,
between a spectator, an agent who acts on someone, and the person who
is acted on, the 'recipient' of the act. There are several
kinds of response that the recipient may make to the agent's
act, and Smith focuses on two, gratitude and resentment. If the
spectator judges the recipient's gratitude proper or appropriate
then he approves of the agent's act and judges it meritorious,
or worthy of reward. If he judges the recipient's resentment
proper or appropriate then he disapproves of the agent's act and
judges it demeritorious, or worthy of punishment. Judgments of merit
or demerit concerning a person's act are therefore made on the
basis of an antecedent judgment concerning the propriety or
impropriety of another person's reaction to that act. Sympathy
underlies all these judgments, for in the cases just mentioned the
spectator sympathises with the recipient's gratitude and with
his resentment. He has direct sympathy with the affections and motives
of the agent and indirect sympathy with the recipient's
gratitude; or, in judging the agent's behaviour improper, the
spectator has indirect sympathy with the recipient's resentment
(Smith 1790, p. 74).
We have supposed, in each of these cases, that the recipient really
does have the feeling in question, whether of gratitude or resentment.
However, in Smith's account the spectator's belief about
what the recipient actually feels about the agent is not important for
the spectator's judgment concerning the merit and demerit of the
agent. The recipient may, for whatever reason, resent an act that was
kindly intentioned and in all other ways admirable, and the spectator,
knowing the situation better than the recipient does, puts himself
imaginatively in the shoes of the recipient while taking with him into
this spectatorial role information about the agent's behaviour
that the recipient lacks. The spectator judges that were he himself in
the recipient's situation he would be grateful for the
agent's act; and on that basis, and independently of the
recipient's actual reaction, he approves of the agent's
act and judges it meritorious. Here the spectator considers himself as
a better (because better informed) spectator of the agent's act
than the recipient is.
As regards judgments of merit and demerit, Smith sets up a model of
three people, but the three differ in respect of the weight that has
to be given to their work, for the recipient does almost nothing. He
is acted on by the agent, but apart from that he is no more than a
place holder for the spectator who will imaginatively occupy his shoes
and make a judgment concerning merit or demerit on the basis solely of
his conception of how he would respond to the agent if he were in the
place of the recipient. He does not judge on the basis of the actual
reaction of the recipient, who might approve of the agent's act
or disapprove or have no feelings about it one way or the other.
Up to this point we have attended to the spectator's moral
judgment of the acts of others. What of his judgment of his own acts?
In judging the other the spectator has the advantage of disinterest,
but he may lack requisite information and much of the work of creative
imagination goes into his rectifying the lack. In judging himself he
has, or may be presumed to have, the requisite information but he has
the problem of overcoming the tendency to a distorted judgment caused
by self-love or self-interest. He must therefore factor out of his
judgment those features that are due to self-love. He does this by
setting up, by an act of creative imagination, a spectator, an
*other* who, *qua* spectator, is at a distance from
him. The point about the distance is that it creates the possibility of
disinterest or impartiality, but it is still necessary to ask how
disinterest or impartiality is achieved if it is the agent himself who
imagines the spectator into existence.
Let us move towards an answer by asking who or what it is that is
imagined into existence. Is it the voice of society, representing
established social attitudes? At times in the first edition of *The
Theory of Moral Sentiments* Smith comes close to saying that it
is. In the second edition Smith is clear that this is not the role of
the impartial spectator for the latter can, and occasionally does,
speak against established social attitudes. Nor can the judgment of
the impartial spectator be reduced to the judgment of society, even
where those two judgments coincide. Nevertheless the impartial
spectator exists because of real live spectators. Were it not for our
discovery that while we are judging other people, those same people
are judging us, we would not form the idea of a spectator judging us
impartially.
Smith's account of justice is built upon his account of the
spectator's sympathetic response to the recipient of an
agent's act. If a spectator sympathises with a recipient's
resentment at the agent's act then he judges the act
demeritorious and the agent worthy of punishment. In the latter case
the moral quality attributed to the act is injustice. An act of
injustice "does a real and positive hurt to some particular
persons, from motives which are naturally disapproved of" (Smith
1790, p. 79). Since a failure to act justly tends to result in
injury, while a failure to act charitably or generously does not,
Smith draws a distinction, in line with Humean thinking, between
justice and the other social virtues: it is so much more important to
prevent injury than to promote positive good that the proper response
to injustice is punishment, whereas we do not feel it appropriate to
punish someone who fails to act charitably or
gratefully. In a word, we have a stricter obligation to justice than
to the other virtues.
Though there are important points of contact between Smith's
account of justice and Hume's, the differences are considerable,
chief of them being the fact that Hume grounds our approval of justice
on our recognition of its utility, and Smith does not. We do sometimes
take utility into account in coming to a judgment, but more often than
not it is something of a quite different nature that wells up in us:
"All men, even the most stupid and unthinking, abhor fraud,
perfidy, and injustice, and delight to see them punished. But few men
have reflected upon the necessity of justice to the existence of
society, how obvious soever that necessity may appear to be"
(Smith 1790, p. 89). There are a few cases where utility is plainly
involved in our judgment, but they are few, and they are in a distinct
psychological class. Smith instances the sentinel who fell asleep
while on watch and was executed because such carelessness might
endanger the whole army. Smith's comment is: "When the
preservation of an individual is inconsistent with the safety of a
multitude, nothing can be more just than that the many should be
preferred to the one. Yet this punishment, how necessary soever,
always appears to be excessively severe. The natural atrocity of the
crime seems to be so little, and the punishment so great, that it is
with great difficulty that our heart can reconcile itself to it"
(Smith 1790, p. 90). And our reaction in this kind of case is to be
contrasted with our reaction to the punishment of 'an ungrateful
murderer or parricide', where we applaud the punishment with
ardour and would be enraged and disappointed if the murderer escaped
punishment. These very different reactions demonstrate that our
approval of punishment in the one case and in the other is founded on
very different principles.
Smith extends the discussion of merit and demerit into a naturalised
account of the development of religious belief. In his *Essays on
Philosophical Subjects* Smith provides an account of early
religious belief as a precursor to science and philosophy. In the
case of both, the desire to explain the world is driven by anxiety
created through novel experience. Here the appeal to heaven is a
psychological coping mechanism that we develop to address injustices
that go unrecognised in this world. Smith then argues that our views
on the rules of justice more generally come to be associated with
religion and this lends a further social sanction to morality.
## 10. Blair's Christian stoicism
Smith devotes considerable space to the Stoic virtue of self-command.
Another eighteenth century Scottish thinker who devotes considerable
space to it is Hugh Blair (1718-1800), minister of the High Kirk
of St Giles in Edinburgh and first professor of rhetoric and belles
lettres at Edinburgh University. Blair's sermons bear ample
witness to his interest in Stoic virtue. For example, in the sermon
'On our imperfect knowledge of a future state' he wonders
why we have been left in the dark about our future state. Blair
replies that to see clearly into our future would have disastrous
consequences. We would be so spellbound by the sight that we would
neglect the arts and labours which support social order and promote
the happiness of society. We are, believes Blair, in 'the
childhood of existence', being educated for immortality. The
education is of such a nature as to enable us to develop virtues such
as self-control and self-denial. These are Stoic virtues, and
Blair's sermons are full of the need to be Stoical. In his
sermon 'Of the proper estimate of human life' he says:
"if we cannot control fortune, [let us] study at least to
control ourselves." Only through exercise of self-control is a
virtuous life possible, and only through virtue can we attain
happiness. He adds that the search for worldly pleasure is bound to
end in disappointment and that that is just as well. For it is through
the failure of the search that we come to a realisation both of the
essential vanity of the life we have been living and also of the need
to turn to God and to virtue. For many, the fact of suffering is the
strongest argument there is against the existence of God. Blair on the
contrary holds that our suffering provides us with a context within
which we can discover that our true nature is best realised by the
adoption of a life-plan whose overarching principle is religious.
## 11. Ferguson and the social state
One of Blair's colleagues at Edinburgh University was Adam
Ferguson (1723-1816). He succeeded David Hume as librarian of
the Advocates' Library in Edinburgh and then held in succession
two chairs at Edinburgh University, that of natural philosophy
(1759-1764) and of pneumatics and moral philosophy
(1764-1785). His lectures at Edinburgh were published as
*The Institutes of Moral Philosophy* (1769) and
*Principles of Moral and Political Science* (1792). Ferguson
advocates a form of moral science based on the study of human
beings as they are and then grounds a normative account of what they
ought to be upon this empirical basis. His most influential work
is *An Essay on the History of Civil Society* (1767). There
Ferguson attended to one of the main concepts of the Enlightenment,
that of human progress, and expressed doubts about whether over the
centuries the proportion of human happiness to unhappiness had
increased. He believed that each person accommodates himself to the
conditions in his own society and the fact that we cannot imagine that
we would be contented if we lived in an earlier society does not imply
that people in earlier societies were not, more or less, as happy in
their own society as we are in ours. As against our unscientific
conjectures about how we would have felt in a society profoundly
unlike the only one we have ever lived in, Ferguson commends the use
of historical records. He talks disparagingly about boundless regions
of ignorance in our conjectures about other societies, and among those
he has in mind who speak ignorantly about earlier conditions of
humanity are Hobbes, Rousseau, and Hume in their discussions of the
state of nature and the origins of society.
Hobbes and Rousseau in particular had a good deal to say about the
pre-social condition of humankind. Ferguson argues, against their
theories, that there are no records whatever of a pre-social human
condition; and since on the available evidence humankind has always
lived in society he concludes that living in society comes naturally
to us. Hence the state of nature is itself a social state, not a
condition antecedent to society. Instead of state of nature and contract theory,
Ferguson advocates the use of history, of the accounts of travellers,
and the literature and myths of societies to build an account of types
of human society. Ferguson's discussion of types of society, ideas
of civilisation and civil society has seen him credited with an
influential place in the development of sociology. This method is
ubiquitous across the thinkers of the Scottish Enlightenment, with a
particularly influential discussion of the method to be found in
John Millar's *The Origin of the Distinction of Ranks*
(1779).
## 12. Dugald Stewart on history and philosophy
One colleague of Blair and Ferguson at Edinburgh University was Dugald
Stewart (1753-1828), who was a student first at Edinburgh, and
then at Glasgow where his moral philosophy professor was Thomas Reid.
Stewart succeeded his father in the chair of mathematics at Edinburgh,
and then in 1785 became professor of pneumatic and moral philosophy at
Edinburgh when Ferguson resigned the chair. Stewart shared with
Ferguson an interest in the kind of historical inquiry that explored
how societies operate. In his *Account of the Life and Writings of
Adam Smith LL.D.* Dugald Stewart says of one of Smith's
works, the *Dissertation on the Origin of Languages* (Smith
[LRB], pp. 201-26), that "it deserves our attention less,
on account of the opinions it contains, than as a specimen of a
particular sort of inquiry, which, so far as I know, is entirely of
modern origin" (Smith 1795, p. 292). Stewart then spells out the
'particular sort of inquiry' that he has in mind. He notes
the lack of direct evidence for the origin of language, of the arts
and the sciences, of political union, and so on, and affirms:
"In this want of direct evidence, we are under a necessity of
supplying the place of fact by conjecture; and when we are unable to
ascertain how men have actually conducted themselves upon particular
occasions, of considering in what manner they are likely to have
proceeded, from the principles of their nature, and the circumstances
of their external situation" (Smith 1795, p. 293).
For Stewart such enquiries are of practical importance, for by them
"a check is given to that indolent philosophy, which refers to a
miracle, whatever appearances, both in the natural and moral worlds,
it is unable to explain" (Smith 1795, p. 293). Stewart uses the
term 'conjectural history' for the sort of history
exemplified by Smith's account of the origin of language.
Conjectural history works against the illegitimate encroachment of
religion into the lives of people who are too quick to reach for God
as the solution to a problem when extrapolation from scientifically
established principles of human nature would provide a solution
satisfying to the intellect. Knowing what we do about human nature,
about our intellect and will, our emotions and fundamental beliefs, we
ask how people would have behaved in given circumstances. Love and
hate, anger and jealousy, joy and fear, do not change much through the
generations. Much the same things, speaking generally, have much the
same effect first on the emotions and then on behaviour. Dugald
Stewart formulates the principle underlying conjectural history: it
has "long been received as an incontrovertible logical maxim
that the capacities of the human mind have been in all ages the same,
and that the diversity of phenomena exhibited by our species is the
result merely of the different circumstances in which men are
placed" (Stewart 1854-58, vol. 1, p. 69).
As regards the credentials of Stewart's 'incontrovertible
logical maxim', if the claim that human nature is invariant is
an empirical claim, it must be based on observation of our
contemporaries and on evidence of people's lives in other places
and at other times. Such evidence needs however to be handled with
care. The further back we go the more meagre it is, and so the more we
need to conjecture to supplement the few general facts available to
us. Indeed we can go back so far that we have no facts beyond the
generalities that we have worked out in the light of our experience.
But to rely on conjecture in order to support the very principle that
forms the first premiss in any exercise in conjectural history is to
come suspiciously close to arguing in a circle. The incontrovertible
logical maxim of Dugald Stewart should probably be accorded at most
the status of a well-supported empirical generalisation.
Conjectural history is certainly not pure guesswork. We argue on the
basis of observed uniformities, and the more experience we have of
given uniformities the greater credence we will give to reports that
speak of the occurrence of the uniformities, whether they concern dead
matter or living people and their institutions. In a famous passage
Hume writes: "Whether we consider mankind according to the
difference of sexes, ages, governments, conditions, or methods of
education; the same uniformity and regular operation of natural
principles are discernible. Like causes still produce like effects; in
the same manner as in the mutual action of the elements and powers of
nature" (Hume [T], p. 401).
For Hume the chief point about the similarity between ourselves and
our ancestors is that histories greatly contribute to the scientific
account of human nature by massively extending our otherwise very
limited observational data base. Hume writes: "Mankind are so
much the same, in all times and places, that history informs us of
nothing new or strange in this particular. Its chief use is only to
discover the constant and universal principles of human nature, by
showing men in all varieties of circumstances and situations, and
furnishing us with materials from which we may form our observations
and become acquainted with the regular springs of human action and
behaviour. These records of wars, intrigues, factions, and
revolutions, are so many collections of experiments, by which the
politician or moral philosopher fixes the principles of his science,
in the same manner as the physician or natural philosopher becomes
acquainted with the nature of plants, minerals, and other external
objects, by the experiments which he forms concerning them"
(Hume [E], pp. 83-84). On this account of history, it is perhaps
the single most important resource for the philosopher seeking to
construct a scientific account of human nature. Among the historians
produced by eighteenth century Scotland were Turnbull, Hume,
Smith, Ferguson and William Robertson. In light of Hume's
observation it is not surprising that so much history was written by
men prominent for their philosophical writings on human nature.
## 1. Life
Kazimierz (or Kasimir) Jerzy Skrzypna-Twardowski, Ritter von
Ogończyk was born to Polish parents on October 20, 1866 in
Vienna, which was then the capital of the Habsburg Empire. From 1877
to 1885 he attended the Theresian Academy (Theresianum), the secondary
school of Vienna's bourgeois elite. Like many other high-school
students at the time, his philosophy textbook was the
*Philosophische Propädeutik* by Robert Zimmermann, Bolzano's
'favorite pupil' (*Herzensjunge*). The book covered
empirical psychology, logic, and the introduction to philosophy.
In 1885 Twardowski enrolled at the University of Vienna. The year
after he became a student of Franz Brentano, for whom he felt
"the most sincere awe and veneration" and whom he
remembered as "uncompromising and relentless in his quest for
rigor in formulation, consistency in expression, and precision in the
working out of proofs" (Twardowski 1926, 20). In addition to
philosophy, Twardowski also studied history, mathematics, physics, and
physiology (with Sigmund Exner, the son of Bolzano's correspondent
Franz Exner; ibid., 21). While a student, Twardowski was a close
friend of Alois Höfler, Christian von Ehrenfels, Josef Klemens
Kreibig, and Hans Schmidkunz--the latter initiated regular
meetings between younger and older students of Brentano and founded
the Vienna Philosophical Society in 1888. Twardowski defended his
doctoral dissertation, *Idea and Perception--An Epistemological
Investigation of Descartes* (published in 1892), in the Fall of
1891. Since Brentano had resigned from his chair in 1880, Twardowski's
official PhD supervisor was Robert Zimmermann. In 1891-1892
Twardowski spent time as a researcher both in Leipzig, where he
followed courses by Wilhelm Wundt and Oswald Külpe, and in
Munich, where he attended the lectures of Carl Stumpf, another pupil
of Brentano. Between 1892 and 1895 Twardowski earned a living working
for an insurance company and writing for German-language and
Polish-language newspapers about philosophy, literature, and music
while working on his Habilitation thesis, *On the Content and Object
of Presentations--A Psychological Investigation* (1894), and
teaching as Privatdozent at the University of Vienna, where he gave
courses in logic, on the immortality of the soul, and a
*practicum* on Hume's *Enquiry concerning Human
Understanding*.
In 1895 Twardowski, then 29, was appointed as *extraordinarius*
at the University of Lvov (then Lemberg, now Lviv, in Polish
Lwów), one of the two Polish-speaking Universities of the
Empire. He saw it as his duty to export Brentano's style of
philosophizing to Poland and spent most of his time organizing Lvov's
philosophical life. He reanimated the Lvov Philosophical Circle and
gave lectures aimed at a broader public. He organized his teaching in
general 'core courses' in logic, psychology, ethics, and
the history of philosophy (given and updated every four years), and
placed less emphasis on specialized courses. Further, he inaugurated a
philosophical seminar and reading room, where he made his private
library available to students, with whom he always maintained close
and frequent personal contacts. Relatively many of Twardowski's
students were women (among others, Izydora Dąmbska, Maria
Lutman-Kokoszyńska, and Seweryna Łuszczewska-Romahnowa, all
of whom held philosophy chairs later). At a certain point, Twardowski
had managed to create three concentric circles of philosophical
influence: there was the Philosophical Circle, open to all university
departments, the Polish Philosophical Society (1904), open to
professional philosophers, and the journal *Ruch Filozoficzny*,
conceived as an organ of promotion of philosophy at large and open to
everyone (1911). Twardowski also established a laboratory of
psychology in 1907. As he writes, he was waging "a most
aggressive propaganda campaign on behalf of philosophy"
(Twardowski 1926, 20). The campaign soon resulted in his lectures
being moved to the Great Concert Hall of the Lvov Musical Society when
the number of students reached two thousand (in the mid-Twenties he
lectured in the Apollo movie theater, at 7 a.m. in the summer and 8
a.m. in the winter, without academic quarter). All this amounted to
the establishment of a philosophical movement that soon became known
as a proper school: first, until the First World War, it was known as
the Lvov School; then, when the Russian-speaking University of Warsaw
again became Polish-speaking, and Twardowski's students started
getting positions and having their own students there, it was known as
the Lvov-Warsaw School (see
Lvov-Warsaw School).
The name is somewhat inaccurate, for Twardowski's students occupied
philosophy chairs in all post-war Polish universities, not only in
Lvov and Warsaw. As has often been pointed out, what all students of
Twardowski had in common was not a particular set of views, but a
rather distinctive attitude to philosophical problems informed by
precision and clarity that they inherited from Twardowski's general
conception of methodology, and which he valued most highly. According
to Jordan, Twardowski led his students
>
> to undertake painstaking analysis of specific problems which were rich
> in conceptual and terminological distinctions, and directed rather to
> the clarification than to the solution of the problems involved.
> (Jordan 1963,
> 7f)[2]
>
In this, Twardowski's students learned what he had learned from
Brentano, namely
>
> how to strive relentlessly after matter-of-factness, and how to pursue
> a method of analysis and investigation that, insofar as that is
> possible, guarantees that matter-of-factness. He proved to me by
> example that the most difficult problems can be clearly formulated,
> and the attempts at their solution no less clearly presented, provided
> one is clear within oneself. The emphasis he placed on sharp
> conceptual distinctions that did not lapse into fruitless nit-picking
> was an important guideline for my own writings. (Twardowski 1926, 20)
>
Twardowski was convinced that the philosophical way of thinking he
advocated, namely precision in thought and writing and rigor in
argumentation, was directly beneficial to practical life. It was with
the idea of proving exactly this point--by way of his personal
example--that he accepted to head various kinds of committees
(among others, the University Lectures Series, the Society for Women
in Higher Education, the Society for Teachers in High Education, the
Federation of Austrian Middle-School Teacher Candidates) and to be
Dean twice and Rector three times in a row (though he repeatedly
refused to be Minister of Education). All these activities cost him a
lot of time: in fact, Twardowski's choice to be most of all an
educator and an organizer left him very little time for academic
writing. Besides, as he reports, Twardowski wasn't much interested in
the publication of his ideas. Thus, since he placed high demands on
the clarity and the logical cogency of philosophical work, he set out
to publish only when it was required by external circumstances
(Twardowski 1926, 30). As a consequence, in his Lvov years, Twardowski
published little. He officially retired in 1930.
Twardowski died on February 11, 1938.
On Twardowski's life and education in Vienna up to 1895 (as well as
Twardowski's early activity in Lvov and Vienna's 'war
parenthesis' of 1914-1915), see Brożek 2012; on Twardowski's
heritage and his achievements as an educator and organizer in Lvov,
see Czeżowski 1948, Ingarden 1948, Ajdukiewicz 1959, and
Kotarbiński 1959 (in Polish; excerpts in English are to be found
in Brożek 2014), as well as Dąmbska 1978 and Woleński 1989, 1997, and
1998. With special reference to psychology, see Rzepa and Stachowski
1993.
## 2. The Vienna Years: 1891-1895
Twardowski's main publication in the Vienna period is, next to his
doctoral dissertation *Idea and Perception* (1891), his
*Habilitationsschrift*, *On the Content and Object of
Presentations* (1894), Twardowski's most influential work.
According to J. N. Findlay, *Content and Object* is
"unquestionably one of the most interesting treatises in the
whole range of modern philosophy; it is clear, concentrated, and
amazingly rich in ideas" (1963, 8).
### 2.1 On the Content and Object of Presentations (1894): Context, Influence and Historical Background
In *Content and Object*, Twardowski shares a number of Brentano's
fundamental theses. Five of them are particularly relevant here.
According to the first and most important thesis, the essential
characteristic of mental phenomena, and what demarcates them from
physical phenomena, is intentionality. We can sum it up as
follows:
>
> **Brentano's Thesis**:
>
>
> Every mental phenomenon has an object towards which it is directed.
>
Mental phenomena, also called *mental acts*, fall into three
separate classes (second thesis): presentations
(*Vorstellungen*), judgments, and phenomena of love and hate. In
the mental act of presentation, an object is presented; in a judgment,
judged; in love and hate, loved or
hated.[3]
Next (third thesis), mental phenomena are either presentations or are
based on presentations. We need to present an object in order to judge
it or appreciate it (though we do not need to judge or appreciate an
object that we might just present). Importantly, however (fourth
thesis), a judgment is not a combination of presentations, but a
mental act *sui generis* that accepts or rejects the object given
by the presentation at its basis (see
Brentano's Theory of Judgment).
In keeping with this idea, all judgments (fifth thesis) can be aptly
expressed in the existential form '*A* is' (positive
judgment) or '*A* is not' (negative judgment)
(alternatively, '*A* exists' or '*A* does
not exist'). In both cases, the judgment has a so-called
'immanent' object, given by the presentation, which is
simply *A*.
Both the notion of an object 'immanent' in a mental act,
as well as, in all generality, the term 'object' in
Brentano's Thesis, have an ambiguous character. Are the objects of
mental acts fully inside us or not? To correctly judge that the aether
does not exist means, in Brentanian terms, rejecting the aether which
is given to one's consciousness by a presentation. But if the aether
is an object immanent in me, fully inside my consciousness, what does
it mean, then, that I reject *it*? Is it some mental entity
inside my consciousness that I reject? But how could that possibly be
correct? In this case I most certainly have something in my head, and
that something exists. So it cannot be correct that I reject
*it*--for, if *that* is the aether I reject, the
judgment that the aether does not exist cannot be true. The aether
should be a physical space-filling substance, *outside* my
consciousness: that is what I am rejecting. But if that is what I
reject, it seems something must be there for me to be able to affirm
that it is not there. This is very puzzling. It was fundamentally due
to difficulties of this kind that Brentano's theory of judgment was
subjected, from 1889 onwards, to continuous objections by philosophers
such as Sigwart and Windelband. Brentano engaged in the debate in his
defense, as did his pupils Marty and Hillebrand. It is in this context
that Twardowski's *Habilitationsschrift* was conceived.
Twardowski saw that notions such as 'the object of a
presentation' and 'immanent object' were ambiguous
because in Brentano's writings the object of a presentation was
identified with the content of a presentation (Twardowski 1926,
10). In *Content and Object*, Twardowski set out to clarify
exactly the relationship between the two, with far-reaching
implications for Brentano's original position.
The distinction between content and object of a presentation was not
new in Twardowski's time. It was the most fundamental element of
Bolzano 1837; it was later present in works of Bolzanian inspiration
(Zimmermann 1853, Kerry 1885-1891; on Zimmermann see Winter
1975, Morscher 1997, Raspa 1999); and it was also mentioned in works
of Brentanian inspiration (Höfler and Meinong 1890, Hillebrand
1891, Marty 1884-1895: article 5, 1894). Nevertheless, the
distinction was by no means common lore. In particular, before
Twardowski, no Brentanian had endorsed the distinction between content
and object in a way that offered a basis for solving the problems of
Brentano's theory, nor had anyone among the Brentanians devoted any
in-depth study to the issue, although the differences between
Bolzano's and Brentano's theories are such that reworking the
distinction in a Brentanian framework raises philosophical problems
that are by no means trivial. A major difference concerns the role of
the content of a presentation and its ontological status, since for a
Brentanian the content of a presentation can only be something
actually existing in one's mind. No Brentanian had thus realised in
full the implications that reinterpreting the Bolzanian distinction in
a Brentanian key would have. Carrying out this task, and thus helping
Brentano's theory out of its most pressing troubles, was Twardowski's
original contribution. He drew inspiration from arguments in favor of
the content-object distinction present in Bolzano, but he
reinterpreted them in a Brentanian framework to sustain conclusions
that were opposite to Bolzano's and that were new for the Brentanians.
One such conclusion, and a major implication of Twardowski's theory of
intentionality, is that there are no objectless presentations,
presentations without an object, no matter how strange and improbable
that object
is.[4]
Even presentations of contradictory objects have both content and
object. It is this thesis and the conclusions that Twardowski drew
from it that have had a major impact on the development of Brentanian
theories of intentionality and that opened the way to ontologies as
rich as that of Alexius Meinong. On the other hand, this position,
together with Twardowski's identification of meaning with
psychological content, prompted Husserl's critical reactions and led
Husserl to the theory of intentionality set out in the *Logical
Investigations* (1900/01) where Twardowski's distinction between
content and object is taken up (on this, see Schuhmann 1993; on
Twardowski's influence and 'triggering effect' on Husserl,
see Cavallin 1990, especially 28f). Twardowski's ideas were not only
influential on the continent. Via G. F. Stout, who published an
anonymous review of *Content and Object,* in *Mind*,
Twardowski's ideas had an influence on Moore and Russell's transition
from idealism to analytic philosophy (van der Schaar 1996).
### 2.2 'The Presented'
The aim of *Content and Object* is to distinguish "the
presented, in one sense, where it means the content, from the
presented in the other sense, where it is used to designate the
object" (Twardowski 1894, 4). Its main thesis is that in every
mental act a content (*Inhalt*) and an object (*Gegenstand*)
must be distinguished. This distinction enables Twardowski to clarify
that
>
> **Twardowski's Thesis**:
>
>
> Every mental phenomenon has a content and an object, and it is
> directed towards its object, not towards its content.
>
On the basis of the distinction between content and object, Twardowski
is in turn able to clarify Brentano's notion of 'object immanent
(*immanentes Objekt*) in a presentation' by identifying it
with the notion of content, and to clarify Brentano's notion of
'object of a presentation' by identifying it with the
notion of object.
The distinction between the content and the object of a presentation
rests on a psychological or epistemic difference which is, roughly
speaking, the mental counterpart of Frege's distinction between sense
and reference. The object of a presentation, says Twardowski using
Zimmermann's terminology, is *that which* is presented in a
presentation; the content is *that through which* the object is
presented. An important argument Twardowski gives in favor of this
distinction--and which strengthens the analogy with Frege's
distinction--is that we can present the same object in two
different ways by having two presentations with the same object but
with different content. Twardowski calls such presentations
*interchangeable presentations* (*Wechselvorstellungen*).
The presentation of the Roman Juvavum and the presentation of the
birthplace of Mozart have the same object, Salzburg; however, their
content differs. To offer a rough analogy, think of an arrow pointing
at an object: the object is what the arrow is pointing at, the content
is what in the arrow makes it the arrow it is, that is, its being
directed to that object and not to another. The act is just the
'being directed towards' of the arrow. Interchangeable
presentations are like two arrows pointing at the same object.
Twardowski thinks of the difference between act, content, and object
in the following way. An act of presentation is a mental event which
takes place in our mind at a certain time. The content is literally
inside the mind, and exists dependently on the act, as long as the act
does. The object is instead independent of the mental act (1894, 1, 4;
§ 7, 36), and, generally speaking, not inside one's mind, although
in some special cases the object of a presentation might be a mental
item. This special case is the case in which the content of some
presentation plays the role of the object of another presentation.
This is not infrequent, because any time we discuss the content of a
presentation, describing its characteristics and its relations to
other things, we are presenting it. And for this to be possible, the
content must play the role of object in the presentation(s) we are
having (what we would call second-order presentations). To use the
arrow metaphor once again, second-order presentations, then, are like
an arrow directed towards another arrow.
The way in which Twardowski relates mind and language makes the
distinction between content and object fairly easy to understand.
Names are the linguistic counterpart of presentations. By
'names' Twardowski means the categorematic terms of
traditional logic ('Barack Obama,' 'The President of
the United States,' 'black,'
'man,' 'he'). A name has three functions:
first, a name makes known that in the mind of the person using the
name an act of presentation is taking place; secondly, a name means
something; thirdly, it names an object. Twardowski takes the meaning
of a name to be the content of the presentation that, as the name
makes known, is taking place in the speaker (§3), that is,
Twardowski takes meaning to be something mental and individual. This
holds, *mutatis mutandis*, also for judgments. In this sense, in
*Content and Object* the semantic sphere is dependent on the
mental sphere. This trait qualifies Twardowski's position as
psychologistic (see the entry on
psychologism, section 3). Although his position on meaning underwent a
development,
Twardowski never adhered to a platonistic conception of meaning like
Bolzano or Frege did.
If all this is intuitive, why are the object and the content of a
presentation conflated? Twardowski maintains that the reason why
content and object are often identified comes, among others, from a
linguistic ambiguity: both the content and the object are said to be
'presented' in a presentation (§4). Twardowski offers
an analysis of the ambiguity of the term 'presented' by
appealing to the linguistic distinction between *modifying* and
*attributive* (or determining) adjectives, and he illustrates it
with an analogy between the act of presenting an object and the act of
painting a landscape. When a painter paints a landscape, she also
paints a painting: so we can say that the painting and the landscape
are both painted. But in this situation 'painted
landscape' can have two very different meanings. In the first
meaning of 'painted', a painted landscape is a landscape;
in the second meaning of 'painted' a painted landscape is
not a landscape, but a painting (as in Magritte's *La trahison des
images* (1928-9): it is a painted pipe we are looking at, not
a pipe). In the first case, 'painted' is used in an
attributive sense (the landscape is a portion of nature that happens
to be painted by a painter in a painting); in the second case
'painted' is used in a modifying sense (that in which,
looking at the painting in a museum, someone may say: this is a
landscape!). The painted landscape in the modifying sense is a
painting, and thus identical with the painting painted in the
attributive sense. Analogously, in an act of presentation, the object
can be, like the painted landscape, said to be 'presented'
in two senses. The object presented in the modifying sense is
identical with the content presented in the attributive sense: it is
dependent on the act of presentation, and it is what we mean by
'the object immanent in the act'; the object presented in
the attributive sense is the object of the presentation, what happens
to be presented in a presentation, and what is independent of the act
of presentation.
### 2.3 Nonexistent Objects
According to a Brentanian conception of judgment, when we judge, the
object is given to us by an act of presentation. Given this fact,
Twardowski's analysis of 'presented' in terms of modifying
vs. attributive adjectives is fundamental to understanding what
exactly is 'judged in a
judgment.'[5]
For it is not only in presentations that we can distinguish act,
content and object, but also in judgments. When we pass a judgment, we
either accept or reject an object through a content. All judgments
have a form which can be linguistically expressed as 'The object
A exists' or 'The object A does not exist.' The
object of judgment (A) is what is judged in a judgment. This object is
the object of the presentation at the basis of the judgment, it is not
the content of the presentation; it is the object presented in the
attributive sense, not in the modifying sense. The content of a
judgment is the existence or non-existence of the object presented. It
might seem strange at first to hear Twardowski say that the content of
a judgment is the (non-)existence of the object, but Twardowski has
something like the following in mind: we judge about the object A
*that it exists* (or *that it does not exist*). Twardowski's
analysis clarifies what is going on when we judge, in a Brentanian
theory of judgment, that the aether does not exist. Judging that the
aether does not exist means rejecting a physical space-filling
substance outside my consciousness. The object of this judgment is not
mental. However, this does not mean that there is nothing inside my
consciousness: there is a mental content, through which the aether is
presented and judged *as non-existing*. That content, present in
me, however, exists; it is the aether itself that does not exist.
For a theory of judgment like the one sketched above to work, the
object we judge must be the very object we present. Therefore,
Twardowski's thesis must be understood in the strong sense that there
are no presentations which do not have an object: if a presentation
did not have an object, we would not have an object to judge as
existing or non-existing. It is for this reason that a crucial part of
*Content and Object* is devoted to defending this claim, and to
showing that, despite appearances, every presentation has an object
(§5). This is like saying, coming back to our arrow metaphor,
that every arrow points at something. Twardowski's strategy is to show
that presentations which are normally deemed by others to be
objectless--i.e., arrows that do not point to anything--are
not such. There exist no presentations without an object: there are
presentations whose object does not exist. Twardowski gives a number
of arguments for his position. Of all these arguments, the key one is
an argument based on the three functions of names in language.
>
> If someone uses the expression: 'oblique square,' then he
> makes known that there occurs in him an act of presentation. The
> content, which belongs to this act, constitutes the meaning of this
> name. But this name does not only mean something, it also designates
> something, namely, something which combines in itself contradictory
> properties and whose existence one denies as soon as one feels
> inclined to make a judgment about it. Something is undoubtedly
> designated by the name, even though this something does not exist.
> (§5, 23)
>
On the basis of this reasoning and a linguistic analysis of how
'nothing' functions in language, Twardowski rejects
Bolzano's claim that 'nothing' is an objectless
presentation by showing that 'nothing' is a
syncategorematic expression (like 'and,' 'or,'
and 'the') not a categorematic one. If an expression is
not a categorematic expression, it is not a name; but if an expression
is not a name, it does not need to have three functions. To every name
there corresponds a presentation and vice versa; if an expression is
not a name, then there is no presentation corresponding to it. If
there is no presentation corresponding to 'nothing'
because 'nothing' is not a name, the question of its
object does not arise. This argument has been said to anticipate by 37
years Carnap's analysis of Heidegger's 'The nothing itself
nothings' (Wolenski 1998, 11). The comparison should,
however, not make one think that Twardowski shared Carnap's attitude
towards metaphysics (see 2.4 below).
Another argument given by Twardowski for the thesis that every
presentation has an object is based on the different ontological
status of act, content, and object of a presentation. The act and the
content always exist; and, in fact, they always exist *together*,
forming a whole in the mind, a unified mental reality, though the
content has a dependent existence on the
act.[6]
The object may or may not exist. Suppose you have the following
presentations: the presentation of Barack Obama, that of a possible
object such as the aether, and that of an impossible object, such as a
dodecahedron with thirteen sides or a round square. The objects of
these presentations (namely Barack Obama, the aether, and the round
square) differ greatly, but they are still all objects. Barack Obama
is an existing object; the aether and the dodecahedron with thirteen
sides are non-existing objects. When we present an object such as a
dodecahedron with thirteen sides or a round square, the object, that
which is named, is different from the content because the content
exists and the object does not. It is the object, not the content,
that is rejected in the negative existential judgment 'the round
square does not exist,' for it is the object that does not
exist; the content 'exists in the truest sense of the
word' (§5, 24).
A third argument rests on the difference between being presented and
existing (§5, 24). Those who claim that there are objectless
presentations, says Twardowski, confuse 'being presented'
with 'existing.' To reinforce his point, Twardowski offers
several observations, among others, one resting on the claim that it
is the object which is the bearer of contradictory properties, not the
content. This claim will be described later by Meinong and Mally as
the principle of the independence of Being (existence) from Being-So
(bearing properties). When we present an object which does not exist
because it is contradictory, i.e. impossible, we need not
*immediately* notice that that object has contradictory
qualities: it is possible that we discover this successively by
further reasoning. Suppose now that only presentations with possible
objects are accepted. Then, Twardowski continues, the presentation of
something contradictory would have an object for as long as we did not
notice the contradiction; the moment we discovered it, the
presentation would cease to have an object. What would then bear the
contradictory qualities? Since the content can't be what bears the
contradictory qualities, it is the object itself that bears them; but
then, this object has to be what is presented.
[7]
The arguments above ultimately rest on the idea that every name has
three functions. This assumption can also be seen, in turn, as resting
on an even more basic idea, namely that in all generality
'object' equals 'being capable of taking up the role
of object in a presentation'; thus an object is anything that
can be presented, and anything that can be named by a name (§3,
12; §5, 23; §7, 37). Yet, saying that an object is anything
that can be named by a name and that a name always has the function of
naming are two sides of the same coin, as are the claim that an object
is anything that can be presented by a presentation and the claim that
a presentation always has an object. This cluster of correlative
assumptions is the core of the theory. Only when this core is accepted
can the other claims put forward by Twardowski (namely (1) that
'being presented' is not the same as
'existing' and (2) that objects can possess properties
even though they do not exist) become convincing parts of a cogent
theory of intentionality. One might note, however, that it is a theory
for which there are no non-circular arguments, for the theory is
acceptable only to those who are already convinced that there are no
empty names, that all categorematic terms, including 'round
square,' 'aether,' 'Pegasus,' etc. have
the function of naming something, not only the function of being
meaningful.
As is known, the theory, in Meinong's version, will be the critical
target of Russell's 'On Denoting' (1905).
### 2.4 Mereology: Parts of the Content and Parts of the Object
Metaphysics is importantly present in *Content and Object*, and
particularly important are the mereological considerations Twardowski
offers. According to Ingarden, *Content and Object* offered
"the first consistently constructed theory of objects
manifesting a certain theoretical unity since the times of
scholasticism and of the 'ontology' of Christian
Wolff" (1948, 258). One cannot literally agree with Ingarden,
considering the Leibnizian metaphysics in Bolzano's *Athanasia*
(1827)--hidden in the two-thousand pages of the
*Wissenschaftslehre*--and Brentano's work in mereology.
Nevertheless, if the impact of *Content and Object* is compared
with that of Bolzano's or Brentano's metaphysics, things are
different. *Content and Object* was a fundamental contribution to
the *renaissance* of Aristotelian metaphysics--metaphysics
in the sense of a general theory of objects--which led to both
Meinong's theory of objects and to Husserl's formal ontology of parts
and wholes in the Third Logical Investigation. The story continues,
later, with Lesniewski's mereology and Ingarden's ontology. The
heritage of *Content and Object* also includes the fact that
Twardowski's pupils had a relation to metaphysics which was vastly
different from the approach that was typical, for instance, of the
Vienna circle (see Lukasiewicz 1936). In Poland, metaphysics was
not rejected as nonsense, but accepted as a respectable area of
investigation to be explored using rigorous methods, including
axiomatics (see Smith 1988, 315-6; on Twardowski and
metaphysics, see Kleszcz 2016).
Like anyone writing in pre-set-theoretic times, Twardowski has a very
broad notion of (proper) part, covering much more than just the pieces
of an object. Twardowski distinguishes material and formal parts of an
object. The material parts of an object comprise not only the pieces
of an object, but also anything that can be said to be a component of
it: for instance, the series 1, 3, 5 is composed of three numbers
(namely 1, 3, and 5). Among the material parts of an object are also its
qualities (*Beschaffenheiten*) such as extension, weight, color,
etc. (1894, §10, 58). The formal parts of an object are the
relations (*Beziehungen*) obtaining between the object and its
material parts (primary formal parts) as well as the relations
obtaining among the material parts themselves (secondary formal parts)
(1894, §9, 48 ff.; §10, 51 ff.).
In keeping with the tradition, Twardowski calls the *matter* of
an object the sum of its material parts and the *form* of an
object the sum of its formal parts. A special kind of primary formal
parts are (relations of) properties (*Eigenschaften*): these are
relations between an object as a whole and one of its material parts,
consisting in the whole's having the part at issue (1894, §10,
56). Since Twardowski accepts that among the parts of an object there
are the relations in which that object stands, his mereology
is, strictly speaking, an atomless mereology: there are no simples
(1894, §12, 74). However, we can speak of atoms if we restrict
ourselves to material parts only (for instance, in the case of the
number one, we can say it is a simple object only if we consider the
(proper) material parts it has, namely none).
The distinction between content and object enables Twardowski to
clearly distinguish conceptualizations regarding parts of the object
from conceptualizations regarding the parts of the content (as special
cases of objects). This, in turn, enables Twardowski to offer
sophisticated considerations, leading him among others to fix clearly
the notion of the characteristic mark (*Merkmal*, *nota*) of
an object. Twardowski calls *elements* the parts of the content
of a presentation; he calls *characteristic marks* the parts of
the object of a presentation (§8, 46-7); the characteristic
marks of the object are presented through the elements of the content.
The content of a presentation is the collection of the presentations
of the characteristic marks of the object of the presentation. The
notion of characteristic mark is a relative one because only the parts
of an object that happen to be actually presented in the content of a
presentation in someone's mind qualify as its characteristics.
>
>
> For example, one can be presented with a table without thinking of the
> shape of its legs; in this case, the shape of the table legs is a
> (material, metaphysical) constituent (of second order), but not a
> characteristic of the table. But if one thinks, while being presented
> with the table, of the shape of its legs, then the shape had to be
> considered a characteristic of the table. (§13, 86; Eng. transl.
> 81).
>
>
>
According to Twardowski, it is not possible to present all the parts
of an object in a presentation. Given that the number of parts of an
object is boundless, and that we can only present a finite number of
characteristics, the number of the elements of the content is
therefore smaller than that of the parts of the object (§12,
78-9; on this point Twardowski is again indebted to Bolzano,
*Wissenschaftslehre*, §64). It follows that no adequate
presentation of any object is possible (§13, 83).
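The cardinality argument can be put schematically. Writing Elem(*c*) for the elements of a content *c* and Parts(*O*) for the parts of its object *O* (the notation is ours, introduced only for illustration, not Twardowski's):

```latex
% Notation assumed for illustration, not Twardowski's own.
|\mathrm{Elem}(c)| < \aleph_0
\quad\text{while}\quad
\mathrm{Parts}(O)\ \text{is boundless},
\qquad\text{hence}\qquad
|\mathrm{Elem}(c)| < |\mathrm{Parts}(O)|.
```

Since each characteristic mark requires its own element in the content, no content can present every part of its object; hence no presentation is adequate.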
As Brentano had done, Twardowski distinguishes, with respect to
material parts, metaphysical from physical and logical parts on the
basis of a notion of dependence and
separability.[8]
However, unlike Brentano, Twardowski does not construe the
notion of dependence in terms of the existence of the objects involved.
His notion of dependence also needs to be applicable to non-existing
objects. Consequently, Twardowski construes the notion of dependence
of the parts of an object in terms of modes of *presentability*
of the parts of an object, i.e. as existence of the parts of the
content of a presentation (which always exists). The inseparability of
a part *p*1 from a part *p*2 of an
object *O* is not construed in the sense that it is not possible
for *p*1 to exist without *p*2
existing, but in the sense that in the content of the presentation of
an object *O* the part that represents *p*1
cannot exist without the part that represents *p*2
existing, that is, both must be elements of the content of the
presentation of the object *O*.
Two material parts of the content of a presentation of *A* and
of *B* are mutually separable iff *A* can be presented
without presenting *B* and vice versa. Mutually separable parts
are physical parts. For instance, the parts of the content of the
presentations of the pages and the cover of a book are mutually
separable.
Two material parts of the content of a presentation of *A* and
of *B* are one-sidedly separable iff *A* can be
presented without *B*, but not vice versa. Logical parts are
one-sidedly separable: for instance, we can have a presentation of
color without the presentation of red, but not vice versa.
Two material parts of the content of a presentation of *A* and
of *B* are mutually inseparable (but yet distinguishable) iff
they are neither mutually nor one-sidedly separable. Metaphysical
parts are mutually inseparable, e.g. you cannot present separately the
being colored and the being extended of something colored and
extended, although those parts are *in abstracto* distinguishable
in the object.
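The three relations can be tabulated. Writing P(*A*∖*B*) for '*A* can be presented without presenting *B*' (an abbreviation we introduce here, not Twardowski's notation):

```latex
% P(A \setminus B): A can be presented without B (our abbreviation).
\begin{aligned}
\text{mutually separable (physical parts):} &\quad P(A\setminus B)\wedge P(B\setminus A)\\
\text{one-sidedly separable (logical parts):} &\quad P(A\setminus B)\wedge\neg P(B\setminus A)\\
\text{mutually inseparable (metaphysical parts):} &\quad \neg P(A\setminus B)\wedge\neg P(B\setminus A)
\end{aligned}
```

The three cases are exclusive and exhaustive for distinguishable parts, which is why each kind of material part falls under exactly one of them.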
On Twardowski's mereology see Cavallin (1990), Rosiak (1998) and
Schaar (2015, 68 ff.).
### 2.5 The Position of *Content and Object* in Twardowski's *Oeuvre*
*Content and Object* should be seen as part of a bigger research
aim that Twardowski had set for himself. This included developing a
theory of concepts and a theory of judgment.
In *Content and Object*, Twardowski had given a coherent account
of how judgments such as 'The aether does not exist' work
on the basis of the systematization of the notion of 'object of
a presentation.' That systematization however had implications
for the theory of judgment which went far beyond the account of
judgments such as 'The aether does not exist.' Although
*Content and Object* contains little on the theory of judgment,
Twardowski was well aware of those implications. A letter to Meinong
is evidence that Twardowski was developing a theory of judgment which
continued the work initiated by *Content and Object*, whose first
main outline can be found in Twardowski's manuscripts *Logik
1894/5* and *Logika 1895/6* (Betti & van der Schaar 2004,
Betti 2005).
In his 'Self-Portrait' Twardowski also remarks that it was
the question of the nature of concepts that brought him to *Content
and Object*. The issue ensued from Twardowski's research on a
particular concept in his doctoral dissertation, Descartes' concept of
clear and distinct perception. Since concepts are a species of
presentations, Twardowski saw that he had first to investigate
presentations in general, and thus to dispel the ambiguities that the
notion of 'the presented' carried with itself (Twardowski
1926, 10). Work on concepts is to be found in the manuscript *Logik
1894/5*, where concepts are defined as presentations with
well-defined content, in *Images and Concepts* (1898) and in
*The Essence of Concepts* (1924), where presentations with
well-defined content (i.e. fixed by a definition) are called
'logical concepts.' An important role in Twardowski's
theory of concepts and definitions is played by the notion of
presented judgment, i.e. judgments not really passed, but merely
presented (in the modified sense). Twardowski's investigation of the
notion of (logical) concept is important for understanding, on the one
hand, his attitude towards the objectivity and unicity of meaning,
i.e. his relationship to psychologism, and, on the other hand, the
role of two notions theorized in *Content and Object*: that of
the object of a general presentation (which disappears, for instance,
in *Logik 1894/5*) and that of indirect presentations.
## 3. The Lvov Years: 1895-1938
Among the issues that Twardowski only sketched or left open in his
Vienna years and investigated later are, next to theories of truth and
knowledge (*Theory of Knowledge*, 1925; see Schaar 2015, Chapter
5.3), the relationship between time and truth ('On the So-called
Relative Truths,' 1900) and that between linguistic meaning and
the content of mental acts ('Actions and Products,' 1912).
Particularly important themes to mention are the relationship between
*a priori* or deductive sciences, *a posteriori* or
inductive sciences and the notion of grounding ('A Priori and A
Posteriori Sciences,' 1923), and the relation between
philosophy, psychology and physiology ('Psychology vs.
Physiology and Philosophy', 1897b). Well worth mentioning are
Twardowski's writings on issues of philosophical methodology
('On Clear and Obscure Philosophical Style,' 1919/20;
'Symbolomania and Pragmatophobia,' 1921). He also left a
relatively substantial corpus of writings in ethics ('On Ethical
Skepticism', 1905-24).
Twardowski's publications during his Lvov years are in Polish, but of
those writings that Twardowski considered important academic
publications we normally also have German versions. Especially
important and influential among these are 'On the So-Called
Relative Truths' (1900) and 'Actions and Products'
(1912).
### 3.1 On the so-called relative truths (1900)
'On the So-Called Relative Truths' (henceforth:
*Relative Truths*) is a work of fundamental importance for the
development of the idea of absolute truth in Poland. Scholars have
deemed its impact to have reached as far as Tarski's work on truth
(Wolenski & Simons 1989). It certainly influenced the
1910-1913 discussion on future contingents which involved
Kotarbinski, Lesniewski, and Lukasiewicz, which served
later as metaphysical foundation for Lukasiewicz's three-valued
logic (Wolenski 1990). Twardowski's position is distinctive and
interesting because it is a non-platonistic theory of absolute truth,
though it is difficult to say exactly what place it occupies with
respect to Lesniewski's nominalistic or to Tarski's platonistic
approach (for these labels see Simons 2003, section 2). One of the
reasons why it was important for Twardowski to defend an
anti-relativistic notion of truth was that relativism jeopardizes the
possibility of constructing ethics as a science based on principles
(on Twardowski's ethics, see Paczkowska-Lagowska 1977).
This aspect is important because Twardowski is usually characterized
not as a builder of systems, but as championing 'small
philosophy.' This is correct, in some sense, but it should not
make one mistakenly think that there is no unity sought in
Twardowski's thought, or that his works are separate small occasional
contributions without overarching ideas in the background governing
their development.
In *Relative Truths*, Twardowski opposes Brentano's ideas on the
relationship between time and truth, and sides instead with Bolzano's
views.[9]
By 'truth,' Twardowski means a true judgment and by
'absolute truth' he means a judgment true independently of
any circumstance, time, and place.
The main claim of *Relative Truths* is that every truth (falsity)
is absolute, i.e. no judgment changes its truth-value relative to
circumstance, time, or place. Although Twardowski's treatment is
general, and is not confined to the relation between truth and time,
it is his ideas on the latter that were particularly important for
later discussions. We can characterize Twardowski's view in this
respect as the view that a judgment is true *for* ever and
*since* ever, that is:
>
> **Omnitemporal truth**:
>
>
> For any judgment *g*, if *g* is true at a time
> *t*, then it is true also at an arbitrary time
> *t*' past or future with respect to *t* (the same
> applies, *mutatis mutandis*, to falsity).
>
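Rendered schematically (the notation is ours, not Twardowski's), with T(*g*, *t*) for 'judgment *g* is true at time *t*' and F(*g*, *t*) for its falsity:

```latex
% Our schematic rendering of Twardowski's omnitemporality thesis.
\forall g\,\forall t\,\forall t'\;\bigl(T(g,t)\rightarrow T(g,t')\bigr)
\qquad\text{and likewise}\qquad
\forall g\,\forall t\,\forall t'\;\bigl(F(g,t)\rightarrow F(g,t')\bigr).
```

In effect, the time parameter is idle: a judgment's truth-value is time-independent.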
Twardowski's aim is to defend this claim against those who argue that
truth is relative on the basis of examples of elliptical sentences
(such as 'I don't'), sentences with indexicals ('my
father is called Vincent'), sentences of general form
('radioactivity is good for you'), and sentences about
ethical principles ('it is wrong to speak against one's own
convictions'). Those who argue that truth is relative on this
basis, says Twardowski, make a mistake. They confuse judgments, which
are the real truth-bearers, with the (type) sentences expressing them.
But sentences are merely the external expression of judgments, and
often they do not express everything one has in mind when judging. For
human speech has a purely practical task; it can perfectly serve its
communicative purposes and yet be strictly speaking ambiguous or
elliptic.
Twardowski's main point is that we can disambiguate ambiguous
sentences or integrate elliptic ones in such a way that they become
the appropriate means of expression of the judgment that they actually
and strictly speaking express (Twardowski 1900, 156)--we can make
sentences eternal, as we would say in our Post-Quinean age. Once we
show that eternalization is possible, the confusion on which
relativists rely is dispelled. For instance, if standing in Lvov on
the High Castle Hill, I assert that it is raining,
>
> I do not have in mind just any rain, falling at any place and time,
> but I voice a judgment about the rain falling *here and now*
> (Twardowski 1900, 151).
>
One could object that the true sentence 'it is raining here and
now' is relative because it may become false, i.e. be true when
uttered in Amsterdam and become false when uttered in the Dry Valley
in Antarctica. Twardowski observes that this impression is, again,
only due to the ambiguity of the indexicals 'here and now'
in the sentence. Take thus the sentence
>
> (i) It is raining here and now,
>
when uttered in Lvov, on 1 March 1900, in accordance with the
Gregorian calendar, at noon, Central European Time, on the High Castle
Hill. That sentence expresses the same judgment as
>
> (i\*) On 1 March 1900, in accordance with the Gregorian calendar, at
> noon, Central European Time, it is raining in Lvov, on the High Castle
> Hill and in its vicinity.
>
According to Twardowski, the difference between the two sentences is
only in their brevity and their practical use. Take now the
sentence
>
> (ii) It is raining here and now,
>
when uttered in Krakow on 2 March 1900, in accordance with the
Gregorian calendar, at noon, Central European Time, on the Castle
Hill. That sentence expresses the same judgment as
>
> (ii\*) On 2 March 1900, in accordance with the Gregorian calendar, at
> noon, Central European Time, it is raining in Krakow, on the Castle
> Hill and in its vicinity.
>
It's not that we have in (i) and (ii) one and the same judgment; we
have *the same sentence*, which expresses two *different
judgments*, (i\*) and (ii\*). Therefore, one cannot argue that the
*same judgment* can turn from true to false.
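Twardowski's point can be sketched as a toy model (our illustration, not Twardowski's own formalism; the names `Judgment` and `eternalize` and the context strings are assumptions made for the sketch): one indexical sentence *type*, uttered in two contexts, yields two distinct complete judgments, so no single judgment ever changes truth-value.

```python
from dataclasses import dataclass

# Toy model: a sentence type plus a context of utterance determines a
# complete, "eternalized" judgment with place and date filled in.

@dataclass(frozen=True)
class Judgment:
    """A fully determinate content: nothing indexical remains."""
    predicate: str
    place: str
    date: str

def eternalize(sentence_type: str, place: str, date: str) -> Judgment:
    """Map an indexical sentence type, in a context, to the complete
    judgment it expresses (the disambiguation step Twardowski describes)."""
    assert sentence_type == "It is raining here and now"
    return Judgment(predicate="it is raining", place=place, date=date)

# The same sentence type, uttered in two contexts...
j1 = eternalize("It is raining here and now",
                "Lvov, High Castle Hill", "1 March 1900, noon CET")
j2 = eternalize("It is raining here and now",
                "Krakow, Castle Hill", "2 March 1900, noon CET")

# ...expresses two different judgments: there is no one judgment
# that turns from true to false across contexts.
print(j1 == j2)  # False: distinct judgments, not one judgment with two values
```

The design choice mirrors the text: truth is predicated of the eternalized `Judgment` objects, never of the ambiguous sentence type itself.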
>
>
> It is apparent that [the judgment expressed by (i) and (i\*)], which
> [...] asserts that it is raining, is true not only at a particular
> time and place, but is always true. (Twardowski 1900, 153)
>
>
>
> [...] Only sentences can be said to be relatively true; yet the truth
> of a sentence depends on the truth of the judgment expressed by that
> sentence; usually a given sentence can express various judgments, some
> true, some false, and it is thus relatively true because it expresses
> a true judgment only under certain conditions, i.e. if we consider it
> as an expression of a judgment which is true (Twardowski 1900, 169).
>
>
>
>
Notice that Twardowski is not saying that judgments can be integrated
or *completed*: the judgments we formulate in our head, and which
are true or false, are *complete* and fully unambiguous. It is
for this reason that he can think that the procedure of completing
sentences as expressions of judgments can be carried out.
An important thing to notice in *Relative Truths* is that
Twardowski, differently from what Meinong and Lukasiewicz were to
do, did not question the principle of contradiction or the principle
of the excluded middle.
Twardowski's idea that one can have absolute truth in an all-changing
world has been recently taken up by Simons (2003).
### 3.2 Actions and Products (1912)
For quite some time Twardowski thought that logic was dependent on
psychology
[10],
and continued to hold the psychologistic idea of meaning as mental
content from *Content and Object*. In his
'Self-Portrait,' Twardowski says that he changed his mind
in an anti-psychologistic sense because of Husserl's *Logical
Investigations*
(1900/01).[11]
'Actions and Products' (1912) is the first printed work
where Twardowski favors an 'Aristotelian' view of ideal
meaning, i.e. of meaning *in specie*, which he associates with
Husserl's in the *Logical Investigations*.
In marking the line of demarcation between logic and psychology on the
basis of the distinction act/product which underlies his theory of
meaning, Twardowski writes:
>
> Indeed, a rigorous demarcation of products from actions has already
> contributed enormously to liberating logic from psychological
> accretions. (Twardowski 1912, §45, 132)
>
Twardowski's mature theory of meaning is connected with the rigorous
definition of the distinction between *actions* and
*products* of actions (while in *Relative Truths* he speaks
of judgments as actions or products). On the basis of grammatical
analyses, purporting however to show logically salient differences,
Twardowski establishes a basic distinction between *physical*,
psychical (i.e. *mental*), and *psychophysical* actions and
their products. The analysis is reminiscent in philosophical style of
that of 'the presented' in *Content and Object*,
though more general, and a nice example of the method which has now
become strongly associated with analytic philosophy.
The relationship between an action and what results from it, its
product, is exemplified linguistically in the relationship between a
verb and the corresponding noun as internal complement (Twardowski
1912, §1, 10; §8, 107):
>
>
>
> | | Act | Product |
> | --- | --- | --- |
> | Physical | running | run |
> | Mental | judging | judgment |
> | Psychophysical | speaking | speech |
>
>
>
A psychophysical product (e.g., a speech) differs from a mental
product (e.g., a judgment) because it is perceptible to senses; it
differs from a physical product (e.g., a run) because in the
psychophysical action which produces it (speaking) a mental action is
also involved, which has bearing on the physical action, and thus on
its product (§10). In some cases, a psychophysical product
expresses a mental product; for instance, a sentence is a
psychophysical product which expresses a mental product, a
judgment.
Twardowski points out that the meaning of the noun
'judgment,' like other nouns ('mistake'), is
ambiguous between action and product. A judgment in the sense of the
action (a judging) is a *judgment in the psychological sense*,
while a judgment in the sense of the product is a *judgment in the
logical sense* (§14) (a third meaning of
'judgment' is that of 'disposition, aptitude to make
correct judgments,' §15).
Twardowski points out that he uses 'judgment' in the sense
of 'judgment in the logical sense,' i.e. the product of
the action of judging, and he clarifies that what he means now by
'judgment-product' is the content of judgment in
*Content and Object* (§24, n. 37, 117). Exactly as was the
case there, the judgment exists as long as someone performs the
corresponding action of judging; for this reason, it is called a
*non-enduring* product (§23, 116).
Non-enduring products do not exist in actuality separately from the
corresponding actions, but only in conjunction with them; we can only
analyze them abstractly apart from these actions. On the other hand,
enduring products can and do exist in actuality apart from the actions
owing to which they arise (Twardowski 1912, n. 41, 116).
*Enduring* products last longer than the action which produces
them; they originate from a transformation or a rearrangement of
pre-existing physical material in the course of the action: footprints
in the sand are enduring products arising from the change of
configuration of grains of sand (the material) as a product of the
action of walking applied to that material (Twardowski points out that
the product is not the grains of sand arranged in some way, but the
arrangement itself, §26). If actions are processes, non-enduring
products are *events* whereas enduring products present
themselves as *things* (§27). Among enduring products, we
find physical products (such as footprints in the sand) and
psychophysical products (such as paintings). A mental product, such as
a judgment, is never enduring (§29), but it can have its
*expression* in an enduring psychophysical product, such as a
written sentence.
The process of preserving non-enduring products such as judgments in
enduring products such as written sentences is a complex one
(§37), and proceeds in two steps.
In step one a spoken sentence is produced which expresses a judgment,
in such a way that the judgment is the meaning of the sentence and the
sentence is the sign of the judgment. The process goes as follows. A
non-enduring mental product, a judgment (together with the action of
judging), which is non-perceptible, gives rise--by being its
(partial) cause--to a non-enduring psychophysical product, a
spoken sentence, which is perceptible. In this case, the spoken
sentence is the expression of the judgment (§30). Now, *if*
the spoken sentence becomes itself a partial cause of *another*
judgment, which (we would say) is a token of the same type of the
initial judgment (by partially causing an action of judging which
produces that other judgment in another person or at a different time
in the same person), *then* the spoken sentence can also count as
the *sign* of the judgment, and the judgment as the
*meaning* of the expression (§32, §34). The condition
just sketched in the antecedent is fundamental: without it, a sentence
*p* might very well have been an expression of a judgment
*j* once (*j* was a partial cause of *p*), but no
meaning is linked with *p*: for, if *p* is incomprehensible,
namely it is not a partial cause of another judgment *j*',
then nothing can be said to be its meaning (§31).
Step two is to *preserve* the spoken sentence (a non-enduring
psychophysical product) in an enduring psychophysical product
*s*, a written
sentence.[12]
When a judgment is preserved in this way, it has in the sign an
existence called *potential*. This is because the sign may at any
moment cause the formation of an identical or similar judgment
(Twardowski 1912, §34), and it will be able to (partially) cause
judgments as long as it lasts. In consequence of this,
'meaning' can therefore also mean
>
> the capacity to evoke a mental product in the individual on whom a
> psychophysical product acts as a sign of that mental product, or, more
> briefly, the capacity to bring the corresponding mental product to
> awareness. (Twardowski 1912, n. 51, 125)
>
Once they are preserved in this two-step way, non-enduring products
assume not only the illusory appearance of enduring products, but also
that of products which are somehow *independent* of the actions
which produce them. The appearance of independence is strengthened by
the fact that we act as if one and the same judgment-product existed in
all individuals, although *many* judgment-products are elicited
by the written sentence-product. All these many judgments are
different from each other, but, insofar as we consider a judgment to
be *the* meaning of a sentence which is its sign
>
> there must be a group of common attributes in these individual mental
> products. And it is precisely these common attributes (in which these
> individual products accord) that we ordinarily regard as the meaning
> of the psychophysical product, as the content inherent in it, provided
> of course that these common attributes correspond to the intent with
> which that psychophysical product was utilized as a sign. [...] Thus,
> we speak of only a single meaning of a sign--barring cases of
> ambiguity--and not of as many meanings as there are mental
> products that are aroused or capable of being aroused, by that sign in
> the persons on whom it acts. Now, a meaning conceived in this manner
> is no longer a concrete mental product, but something at which we
> arrive by way of an abstraction performed on concrete products.
> (Twardowski 1912, §39, 128)
>
To this passage, Twardowski appends a note referring to Husserl's
notion of ideal meaning. The relationship between Husserl's and
Twardowski's notion of meaning as well as the status of Twardowski's
unique meaning has been and remains an object of discussion (see
Paczkowska-Lagowska 1979, Buczynska-Garewicz 1980, Placek
1996, Brandl 1998, Schaar 2015, 108). These matters should be
considered as yet unsettled.
The introduction of the notion of the unique meaning of a sentence
leads Twardowski to make a further distinction between substitutive
(artefacta) and non-substitutive judgments. Substitutive judgments
are not real judgments, but fictitious ones: they are in fact the
presented judgments of *Content and Object*; the difference is
that in *Actions and Products*, Twardowski applies the concept of
substitutive judgments explicitly to logic: the sentences uttered or
written by logicians-at-work are not sentences which express or have
as meanings judgments which are really passed by them, but only
presented judgments, produced thus by actions of
presenting--which are different actions from actual judging acts.
This happens for instance when a logician constructs a valid syllogism
made up of materially false sentences to give examples of formally
valid inferences (§44, 130). In this case, the logician does not
actually judge that all triangles are square, that all squares are
round, and that all triangles are round, but she merely
*presents* the corresponding judgments. The sentences 'all
triangles are square,' 'all squares are round,' and
'all triangles are round' are not real sentences, but
artificial ones, because they are "expressions of artificial
products that substitute for actual judgments, namely merely
represented judgments" (Ibid.). The meanings of these artificial
sentences are artificial judgments, because they are merely presented,
not passed. These artificial judgments are the subject-matter of logic
(on this point, Twardowski is again indebted to Bolzano). Operating
with surrogate sentences such as 'all triangles are
square' constitutes the most extreme example of "making
mental products independent of the actions owing to which alone they
can truly (actually) exist" (§44, 131).
Twardowski's distinction between acts and products is presently being
re-discovered in act-based theories of semantic content, within which
Twardowski's notion of product is considered an interesting
alternative to the notion of proposition (as the mind-independent and
language-independent content of assertions, meaning of sentences,
primary truthbearer, and object of propositional attitudes). See
Moltmann 2014.
## 1. Abhidharmikas / Sarvastivada (Vaibhasika)
In the fourth century, Vasubandhu undertook a comprehensive survey of
the Sarvastivada School's thought, and wrote a compendium,
*Treasury of Knowledge*,
(*Abhidharmakosakarika* [AK]; *Mngon pa*
ku 1b-25a) with his own *Commentary on the Treasury of
Knowledge* (*Abhidharmakosabhasya* [AKB],
*Mngon pa* ku 26b-258a). This commentary not only offers
an excellent account of the Sarvastivadin views, including
the theory of the two truths, but also offers a sharp critique of many
views held by the Sarvastivadins. Vasubandhu based his
commentary on the *Mahavibhasa* (*The
Great Commentary*), as the Sarvastivadins held their
philosophical positions according to the teachings of the
*Mahavibhasa*. Consequently,
Sarvastivadins are often known as
Vaibhasikas.
The Sarvastivadin's
ontology[2]
or theory of the two truths makes two fundamental claims:
1. the claim that the ultimate reality consists of irreducible
spatial units (e.g., atoms of the material category) and irreducible
temporal units (e.g., point-instant consciousnesses) of the five basic
categories, and
2. the claim that the conventional reality consists of reducible
spatial wholes or temporal continua.
To put the matter straightforwardly, for the Sarvastivadins,
wholes and continua are only conventionally real, whereas the atoms
and point-instant consciousness are only ultimately real.
### 1.1 Conventional truth
To see how the Sarvastivadins defend these two claims, we
shall have a close look at their definitions of the two truths. We
will examine conventional truth first. This will provide the argument
in support of the second claim. In the *Abhidharmakosa*
Vasubandhu defines conventional truth/reality as follows: "An
entity, the cognition of which does not arise when it is destroyed
and, mentally divided, is conventionally existent like a pot and
water. Ultimate existence is otherwise." ([AK] 6.4, *Mngon
pa* khu 7ab) On this definition, whatever is designated as
"conventionally existent" is taken to be
"conventionally real," or a conventional truth, when the idea
or concept of it ceases to arise once it is physically destroyed,
by means of a hammer for instance, or once its properties, such as
its shape, are stripped away from it by subjecting it to analysis,
thereby conceptually excluding them. A pot and water are designated as
conventionally existent, and therefore conventionally real, since the
concept "pot" ceases to arise when the pot is physically
destroyed, and the concept "water" no longer arises when we
conceptually exclude from it its shape, colour, etc.
On the Sarvastivadin definition, an entity need not be
ultimately real in order to be real. For a thing to be
ultimately real is for that thing to be "foundationally
existent" (dravya-sat / rdzas
yod)[3]
in contrast with being "compositely existent"
(avayavidravya / rdzas grub). By "foundationally existent"
the Sarvastivadin refers to an entity which is fundamentally
real, the concept or cognition of which does not depend on
conceptual construction; it is thus neither conceptually existent
(prajnaptisat) nor a composition of aggregative phenomena.
In the case of a foundational existent there always remains something
irreducible to which the concept of the thing applies; hence it is
ultimately real. A simple entity is not reducible to conceptual forms
or conventional designations, nor is it a compositely existent entity.
We will have a lot more to say on this point shortly.
A pot and water are not foundational entities. They are rather
composite entities (avayavi-dravya / rdzas grub). By a composite
entity we mean an entity or existent which is not fundamental, primary
or simple, but is rather a conceptually constructed
(prajnaptisat) composition of various properties, and is thus
reducible both physically and logically.
Hence for the Sarvastivadin, conventional reality
(samvrtisatya), composite-existence (avayavi-dravya / rdzas
grub), and the lack of intrinsic reality (nihsvabhava) are
all equivalents. A conventional reality is therefore characterised as
a reducible conventional entity on three grounds: (i) conventional
reality is both physically and logically reducible, as it
disintegrates when it is subjected to physical destruction and
disappears from our minds when its parts are separated from it by
logical analysis; (ii) conventional reality borrows its identity from
other things including its parts, concepts etc., it does not exist
independently in virtue of its intrinsic reality
(nihsvabhava), the exclusion of its parts and concepts thus
affects and reduces its inherent nature; and (iii) conventional
reality is a product of mental constructions, like that of
conventionally real wholes, causation, continua, etc., and it does not
exist intrinsically.
### 1.2 Ultimate truth
The definition of ultimate reality, as we shall see, provides the
Sarvastivadin defense of the claim that ultimate reality
consists of irreducible atoms and point-instant moments. In glossing
the [AK] 6.4 verse his commentary explains that ultimate reality is
regarded as ultimately existent, one that is both physically and
logically irreducible. Vasubandhu supplies three arguments to support
this: (i) ultimate reality is both physically and logically
irreducible, as it does not disintegrate when it is subjected to
physical destruction and that its identity does not disappear when its
parts are separated from it under logical analysis; and (ii) ultimate
reality does not borrow its nature from other things including its
parts. Rather it exists independently in virtue of its intrinsic
reality (*svabhava*), the exclusion of its parts thus does
not affect its inherent nature; and (iii) it is not a product of
mental constructions, like that of conventionally real wholes,
causation, continuum etc. It exists intrinsically ([AKB] 6.4,
*Mngon pa* khu 214a).
Ultimate reality is of two types: the compounded
(samskrta) ultimate, and the uncompounded
(asamskrta) ultimate. The uncompounded ultimate consists
of (a) space (akasa), and (b)
nirvana--analytical cessation
(pratisamkhya-nirodha) and non-analytical cessation
(apratisamkhya-nirodha). These three ultimates are
uncompounded as each is seen as being causally unconditioned. They are
nonspatial concepts, which do not have any physical referent
whatsoever. Space is a mere absence of entity. Analytical and
nonanalytical cessations are the two forms of nirvana,
which is simply freedom from afflictive suffering, or its
elimination. These concepts do not posit anything that can be
described as even remotely physical. They are thus concepts that are
irreducible both physically and logically.
The compounded ultimate consists of the five aggregates--material
aggregate (rupa), feeling aggregate (vedana),
perception-aggregate (samjna), dispositional
aggregate (samskara), and consciousness-aggregate
(vijnana)--since they are causally produced, and the
ideas of each aggregate are conceived individually rather than
collectively. If the ideas of the aggregates were conceived
collectively, as wholes or continua, the aggregates could not be
ultimately real. The collective concepts of the aggregates as
"wholes" or "continua" are subject to
cessation: they cease to appear to the mind and are excluded from the
conceptual framework of the reality of the five aggregates when they
are logically analysed.
## 2. Sautrantika
The
philosophers[4]
who championed this view are some of the best-known Indian logicians
and epistemologists. Dignaga (480-540) and Dharmakirti
(600-660) are credited with founding this school; other great
names who propagated the tradition include Devendrabuddhi (?),
Sakhyabuddhi (?), Vinitadeva (630-700), Dharmottara
(750-810), and Moksakaragupta (ca. 1100-1200). For the
theory of the two truths in the Sautrantika we will need to rely
on the following texts:
1. Dignaga's *Compendium of Epistemology*
(*Pramanasamuccaya*, *Tshad ma* ce
1b-13a),
2. Dignaga's *Auto-commentary on the Compendium of
Epistemology* (*Pramanasamuccayavrtti, Tshad
ma* ce 14b-85b),
3. Dharmakirti's *Verses on Epistemology*
(*Pramanavarttikakarika* [PVK];
*tshad ma* ce 94b-151a),
4. Dharmakirti's *Commentary on the Verses of
Epistemology* (*Pramanavarttikavrtti*
[PVT]; *tshad ma* ce 261b-365a),
5. Dharmakirti's *Ascertainment of Epistemology*
(*Pramanaviniscaya, tshad ma* ce
152b-230a),
6. Dharmakirti's *Drop of Logical Reasoning*
(*Nyayabindu*, *tshad ma* ce 231b-238a).
Broadly, all objects of knowledge are classified into two realities
based on the ways in which right-cognition (pramana)
engages with its object. They are either directly accessible
(*pratyaksa*), which constitutes objects that are obvious
to cognition, or they are directly inaccessible
(*paroksa*), which constitutes objects that are occulted,
or obscured from cognition. A directly accessible object is
principally known by a direct perceptual right-cognition
(pratyaksa-pramana), whereas a directly inaccessible
object is principally known by an inferential right-cognition
(anumana-pramana).
### 2.1 Ultimate truth
Of the two types of objects, some are ultimately real while others are
only conventionally real, and some are not even conventionally real,
they are just unreal, or fictions. In defining ultimate truth in the
Sautrantika tradition, we read in Dharmakirti's *Verses on
Epistemology*: "That which is ultimately causally efficient is
here an ultimately existent (paramarthasat). Conventionally
existent (samvrtisat) is otherwise. They are declared as
the definitions of the unique particular (svalaksana) and
the universal (samanyalaksana)"
(Dharmakirti [PVK] *Tshad ma* ce 118b).
Ultimate truth is, on this definition, a phenomenon (dharma) that is
ultimately existent, and the ultimately existent is ultimately
causally efficient. A phenomenon that is ultimately causally efficient
is intrinsically or objectively real, existing in and of itself as a
"unique particular"
(svalaksana).[5]
By "unique particular" Dharmakirti means an ultimately
real phenomenon--a dharma that is self-defined, uniquely
individual, objectively real, existing independently of any conceptual
fabrication, ultimately causally efficient (artha), a dharma which
serves as an object of direct perception and presents itself to
cognition as a distinctive, unique individual.
In the *Commentary on the Verses of Epistemology*, Dharmakirti
characterises (*Tshad ma* ce 274b-279b) all ultimately
real unique particulars as existing as distinct individuals with their
own intrinsic natures. They satisfy three criteria:
1. They have determinate spatial locations (desaniyata / yul
nges pa) of their own, as real things do not have a shared property
amongst themselves. The real fire we see is either near or far, or at
the left or the right, or at the back or in the front. By contrast,
the universal
fireness[6],
that is, the concept of being a fire, does not occupy a determinate
position.
2. Unique particulars are temporally determinate (kalaniyata /
dus nges pa or dus ma 'dres pa). They are only momentary
instants. They spontaneously go out of existence the moment they have
come into existence. This is not the case with the universals. Being
purely conceptually constructed, they remain uninfluenced by the
dynamism of causal conditions and hence are not affected by time.
3. Unique particulars are ontologically determinate
(akaraniyata / ngo bo nges pa / ngo bo ma 'dres pa) as
they are causally conditioned; the effects of the aggregation of the
causal conditions that have the ability to produce them. When those
causal conditions come together at certain points in time, unique
particulars come into existence. When those conditions disintegrate
and are not replaced by new conditions, unique particulars go out of
existence. When the conditions have not yet come together, unique
particulars are yet to obtain their ontological status.
So "determinate intrinsic natures of the unique
particulars," Dharmakirti argues in the *Commentary on the
Verses of Epistemology*, "are not accidental or fortuitous since
what is not determinate cannot be spatially, temporally and
ontologically determinate" (*Tshad ma* ce 179a).
The unique particulars are, the Sautrantika claims, ultimately real,
and the school supplies four arguments to support the claim:
(1) Unique particulars are causally efficient phenomena
(arthakriyasamartha) ([PVT] *Tshad ma* ce 179a) because:
(a) they have the ability to serve the pragmatic purposes of
life--to fulfil our life's objectives--and (b) the ability to
produce a variety of cognitive images according to their remoteness or
proximity (Dharmottara's *Nyayabindutika*, *Tshad ma* we
36b-92a). Both of these abilities must be associated exclusively
with objects of direct perception (*Tshad ma* we 45a).
(2) Unique particulars present themselves only to a direct perceptual
cognition as distinct and uniquely defined individuals because unique
particulars are, as Dharmakirti's *Nyayabindu* points
out "the objects whose nearness or remoteness presents the
difference of cognitive image" (*Tshad ma* ce 231a) and
that object alone which produces the impression of vividness according
to its remoteness or proximity, exists ultimately
(Dharmottara, *Tshad ma* we 44b-45a).
(3) Unique particulars are not denotable by language since they are
beyond the full grasp of any conceptual mind
(sabdasyavisaya). Although unique particulars are
objective references of language and thought, and we may have firm
beliefs about them, conceptual mind does not fully grasp their real
nature. They are ultimately real, directly cognisable by means of
certain perceptions without reliance on other factors (nimitta) such
as language and thought. Therefore they must exist. They are the sorts
of phenomena whose cognition would not occur if they were not
objectively real.
In the Sautrantika ontology ultimately real/existent (synonymous)
unique particulars are classified into three kinds:
1. momentary instants of matter (rupa),
2. momentary instants of consciousness (vijnana) and
3. momentary instants of the non-associated composite phenomena,
which are neither matter nor minds or mental factors
(*citta-caitta-viprayukta-samskara*).
The Sautrantika's theory of ultimate truth mirrors its ontology
of flux in which unique particulars are viewed as spatially
infinitesimal atoms constituting temporally momentary events
(ksanika) or as successive flashes of consciousnesses,
cognitive events, all devoid of any real continuity as substratum.
Unique particulars are ultimately real, although they are not enduring
substances (dravyas) in which qualities (gunas) and
actions (karmas) inhere, as the Naiyayika-Vaisesika claims.
They are rather bundles of events arising and disappearing instantly.
Even continuity of things and motion are only successive events
closely resembling each other. On this theory ultimate realities are
momentary point instants, and Vasubandhu and Dharmakirti both
argue that no conditioned phenomenon, therefore, no ultimately real
unique particulars, endure more than a single moment--hence they
are momentary instants (ksanika).
Four closely interrelated arguments provide the defence of the
Sautrantika's claim that ultimately real unique particulars are
momentary instants. Vasubandhu and Dharmakirti both employ the
first two arguments. The third argument is one Dharmakirti
develops especially in his works.
(1) Ultimately real unique particulars are momentary instants because
their perishing or destruction is spontaneous to their becoming. This
follows because (i) unique particulars are inherently self-destructive
(Vasubandhu [AKB], *Mngon pa* khu 166b-167a), and (ii)
their perishing or cessation is *intrinsic* and does not depend
on any other *extrinsic* causal factors (Dharmakirti,
[PVK] *Tshad ma* ce 102a; [PVT], *Tshad ma* ce
178ab).
(2) The ultimately real unique particulars are momentary instants
because they are motionless, and do not move from one temporal or
spatial location to another. They perish just where they were born
since nothing exists later than its acquisition of existence
(Vasubandhu [AKB] *Mngon pa* khu 166b).
(3) The third argument proves the momentariness of unique particulars
from the inference of existence (sattvanumana). This is a
case of the argument from the identity of existence and production
(svabhavahetu). All unique particulars which are ultimately
existent are necessarily produced, since only those that are
ultimately existent, insofar as Dharmakirti is concerned, are
able to perform a causal function--i.e., to produce effects. And
causally efficient unique particulars imply constant change through
the renewal and perishing of their antecedent identities; therefore,
they are momentary.
Finally (4), unique particulars are ultimately real not only on the
ground that they constitute the final ontological status, but also
because they form the basis of the Sautrantika soteriology. The
attainment of nirvana -- the ultimate freedom from the
afflictions of life--for the Sautrantika, according to
Dharmakirti's *Vadanyaya*, has an immediate
perception of the unique particulars as its necessary condition
(*Tshad ma* che 108b-109a).
### 2.2 Conventional truth
Dharmakirti defines conventional truth, in his [PVK], as dharma
which is "conventionally existent" and he identifies
conventional truth with the "universal"
(samanya-laksana)[7]
just as he identifies ultimate truth with unique particular
(*Tshad ma* ce 118b). When a Sautrantika philosopher
describes a certain entity as a universal, he means a conceptual
entity not apprehended by virtue of its own being. He means a general
property that is conceptually constructed, appearing to be something
common amongst all items in a certain class of objects. Unlike the
Nyaya-Vaisesika view, on which universals are regarded as
objectively real and eternal entities inhering in substances,
qualities and particulars, universals for the Sautrantika are
pure conceptual constructs. The Sautrantika holds the view known as
nominalism or conceptualism--the view that denies universals any
independent extramental objective reality existing on their own apart
from being mentally constructed.
While unique particulars exist independently of linguistic convention,
universals have no reality in isolation from linguistic and conceptual
conventions. Thus, universals and ultimate reality are mutually
exclusive. Universals are therefore only conventionally real, lacking
any intrinsic nature, whereas unique particulars are ultimately real,
and exist intrinsically.
The Sautrantika defends the claim that universals
(samanya-laksana) are only conventional reality
for the following reasons (*tshad ma* ce 118b):
1. Universals are domains of inferential cognition since they are
exhaustively grasped by conceptual mind by means of language and
thought (Dharmakirti, *Nyayabindu*, *tshad ma*
ce 231a).
2. Universals are objects of the apprehending cognition which arises
simply out of having beliefs about the objects without the need to see
any real object.
3. Universals are causally inefficient. By "causal
inefficiency," the Sautrantika, according to Dharmottara's
*Nyayabindutika* (*Tshad ma we*
45ab), means three things: (a) universals are purely conceptually
constructed, hence unreal; (b) universals are unable to serve a
pragmatic purpose, as they do not fulfil life's objectives; and (c)
the cognitive images produced by universals are independent of the
proximity between objects and their cognitions, since the production
of an image does not require seeing the object, as it does in the
case of the perception of unique particulars.
4. Universals are products of the unifying or mixing of language and
its referential objects (unique particulars), and thus appear
to conceptual minds as generalities, unified wholes, unities, and
continuities--as phenomena that appear to the conceptual mind to
have properties shared with all items in the same class of
objects.
5. Universals, consequently, obscure the individualities of unique
particulars from being directly apprehended. This is because, as we
have already seen, universals, according to Dharmakirti's [PVK]
(*Tshad ma* ce 97ab) and [PVT] (*Tshad ma* ce 282ab),
are only conventionally real since they are conceptual constructs
founded on unifying and putting together the distinct individualities
of unique particulars as having one common property being shared by
all items in the same class.
According to the Sautrantika philosophy, language does not
describe reality or unique particulars positively through real
universals as suggested by the Naiyayikas. The Sautrantika
developed an alternative nominalist theory of universal called the
apoha-theory in which language is seen to engage with reality
negatively by means of elimination or exclusion of the other
(anyapoha / gzhan sel). On this theory, the function of language,
specifically *naming*, is to exclude an object from the class of
those objects to which the term does not apply.
In brief, the Sautrantika's theory of the two truths rests on
dichotomising objects into unique particulars, which are understood
as ultimately real, dynamic, momentary, causally effective, and the
objective domain of direct perception; and universals, which are
understood as only conventionally real, conceptually constructed,
static, causally ineffective, and the objective domain of
inferential cognition.
## 3. Yogacara
The Vaibhasika's realistic theory of the two truths and the
Sautrantika's representationalist theory of the two truths both
affirm the ultimate reality of physical objects constituted by atoms.
The Yogacara rejects physical realism of both the
Vaibhasika and the Sautrantika, although it agrees
with the Sautrantika's representationalist theory as far as they
both affirm representation as the intentional objects in perception
and deny in perception a direct access to any external object. Where
they part company is in their response to the questions: what
causes representations? Is the contact of the senses with physical
objects necessary to give rise to representations in perception? The
Sautrantika's reply is that external objects cause
representations; given that these representations are intentional
objects, there is indeed contact between the senses and external objects.
This affirmative response allows the Sautrantika to affirm
reality of external objects. The Yogacarin however replies
that "subliminal impressions" (vasanas) from
foundational consciousness (alayavijnana) are the
causes of the mental representations, and given that these impressions
are only internal phenomena acting as intentional objects, the contact
between senses and external objects is therefore rejected even
conventionally. This allows the Yogacarin to deny even
conventional reality of all physical objects, and argue that all
conventional realities are our mental representations, mental
creations, cognitions etc.
The central thesis of Yogacara philosophy, which the theory of
the two truths echoes, is the assertion that all that is
conventionally real consists only of ideas, representations, images,
creations of the mind, and that there is no conventionally real object
existing outside the mind to which it corresponds. These ideas are
the only objects of any cognition. The whole universe is a mental
universe. All physical objects are only fictions; they are unreal even
by conventional standards, similar to a dream, a mirage, or a magical
illusion, where what we perceive are only products of our mind,
without a real external existence.
Inspired by the idealistic tendencies of various sutras
containing important elements of the idealistic doctrines, in the
third and the fourth centuries many Indian philosophers developed and
systematised a coherent Idealist School. At the beginning of the
*Vimsatika* Vasubandhu treats *citta*,
*manas*, *vijnana*, and *vijnapti* as
synonymous and uses these terms as names of the idealistic school.
The chief founders were Maitreyanath (*ca*. 300) and
Asanga (315-390), and the school was propagated by Vasubandhu
(320-380), Dignaga (480-540), Sthiramati (*ca*.
500), Dharmapala (530-561), Hiuan-tsang (602-664),
Dharmakirti (600-660), Santaraksita
(*ca*. 725-788) and Kamalasila (*ca*. 740-795). The
last two are Yogacara-Madhyamikas in contrast with the
earlier figures who are identified as Yogacarins.
As in other Buddhist schools, the theory of the two truths captures the
central Yogacara doctrines. Maitreyanath asserts in his
*Verses on the Distinction Between Phenomena and Reality*
(*Dharmadharmatavibhanga-karika*,
*DDVK; Sems tsam* phi 50b-53b)--"All this is
understood to be condensed in two categories: phenomena (dharma) and
reality (dharmata) because they encompass all."
(*DDVK* 2, *Sems tsam* phi) By "all" the
Yogacarin means every possible object of knowledge, and these objects
are said to be contained in the two truths since objects are either
conventional truth or ultimate truth. Things are either objects of
conventional knowledge or objects of ultimate knowledge, and a third
category is not admitted.
### 3.1 Conventional Truth
Etymologically the term conventional truth covers the sense of what we
ordinarily take as commonsensical truths. However, in contrast with
naive realism associated with common sense notions of truths, for
the Yogacara the term "conventional truth" has
a somewhat negative connotation. It refers exclusively to objects of
knowledge like forms, sounds, etc., whose mode of existence or mode of
being radically contradicts their mode of appearance, and
which are thus false, unreal, and deceptive. Forms, sounds, etc., are
defined as conventional entities in that they are realities from the
perspective of, or by the force of, three forms of convention:
1. fabrication (asatkalpita),
2. consciousness (vijnana), or
3. language, signifier, a convenient designator (sabda).
A conventional truth is therefore a truth by virtue of being
fabricated by the conceptual mind; or a truth erroneously
apprehended by means of the dualistic consciousness; or a true
concept or meaning, signified and designated by a convenient
designator/signifier.
Because the Yogacara admits three conventions, it also
admits three categories of conventional truths:
1. fabricated phenomena;
2. mind/consciousness; and
3. language, since conventional truths exist due to the force of these
three conventions.
The first and the last are categories of imaginary phenomena
(parikalpita) and the second is dependent phenomena (paratantra).
The Yogacara's claim that external objects are not even
conventionally real, and that what is conventionally real are only our
impressions and mental representations, is one Vasubandhu
defends by means of the Yogacara's theory of the three
natures (*trisvabhava*). In his *Discernment of the
Three Natures* (*Trisvabhavakarika*, or
*Trisvabhavanirdesa* [TSN]; Sems tsam shi
10a-11b), Vasubandhu explains the Yogacara
ontology and phenomenology as consisting of the unity of three natures
(*svabhava*):
1. the dependent or other (*paratantra*);
2. the imaginary / conceptual (*parikalpita*); and
3. the perfect / ultimate (*parinispanna*) ([TSN] 1,
*Sems tsam* shi 10a)
The first two account for conventional truth and the third for ultimate
truth. We shall consider the import of the three in turn.
First, Vasubandhu defines the dependent nature as: (a) that which exists
due to being causally conditioned
(pratyayadhinavrttitvat), and (b) that which is the basis
of "what appears" (yat khyati) mistakenly in our
cognition as conventionally real, i.e., the basis for the
"unreal conceptual fabrication" (asatkalpa) which is the
phenomenological ground of the appearance of reified subjects and
objects ([TSN] 2, *Sems tsam* shi 10a). The implications of the
Yogacarin expression "what appears" to describe
the dependent nature are therefore twofold: (a) that the things that
appear in our cognitions are exclusively the representations, which
are the manifest forms of the subliminal impressions, and (b) that the
entire web of conventional reality, which presents itself to our
cognitions phenomenologically in various ways, is exclusively the
appearance of those representations. Apart from those representations,
consciousnesses, which appear to be external objects, there is no
conventionally real external content which corresponds to what
appears.
Second, in contrast with the dependent nature which is the basis of
"what appears" (yat khyati), the imaginary nature
(*parikalpita*), as Vasubandhu defines it, is the mode of
appearance "as it appears" (sau yatha khyati) on
the ground that its existence is only an "unreal conceptual
fabrication" (asatkalpo) ([TSN] 2, *Sems tsam* chi 10a).
The imaginary nature is only an "unreal conceptual
construction" for two reasons: (i) it is the dependent
nature--the representations--merely dually reified by the mind
as an ultimately real subject, or self, or ego; and (ii) the imaginary
nature is a dualistic reification of beings and objects as existing
really and externally, although there is no such reality.
Third, given the fact that the dependent nature is devoid, or free
from this duality, the imaginary nature is a mere superimposition on
it. Hence nonduality, the perfect nature (parinispanna), is
the ultimate reality of the dependent nature.
### 3.2 Ultimate Truth
In the *Commentary on the Sutra of Intent*
(*Arya-samdhinirmocana-sutra*, *Mdo sde*
ca 1b-55b) it is stated that "Reality as it is, which is
the intentional object of a pure consciousness, is the definition of
the perfect nature. This must be so because it is with respect to this
that the Victorious Buddha attributed all phenomena as natureless,
ultimately." (*Mdo sde* ca 35b) Vasubandhu's [TSN]
defines "the perfect nature (*parinispanna*) as the
eternal nonexistence of 'as it appears' of 'what
appears' because it is unalterable." ([TSN] 3, *Sems
tsam* chi 10a) "What appears" is the dependent
nature--a series of cognitive events, the representations.
"As it appears" is the imaginary nature--the unreal
conceptual fabrication of the subject-object duality. The
representations, (i.e., the dependent nature) appear in the cognition
as if they have in them the subject-object duality, even though the
dependent nature is wholly devoid of such subject-object duality. The
perfect nature is therefore this eternal nonexistence of the imaginary
nature--the duality--in the dependent nature.
Vasubandhu defines the perfect nature as the ultimate truth and
identifies it with mere-consciousness. "This is the ultimate
(paramartha) of the dharmas, and so it is the reality
(tathata) too. Because its reality is like this all the time, it
is mere consciousness." (*Trim* 25, *Sems
tsam* shi 3b) Accordingly, Sthiramati's *Commentary on the
Thirty Verses* (*Trimsikabhasya*
[TrimB], *Sems tsam* shi 146b-171b), explains
"ultimate" (paramartha) here as referring to
"'world-transcending knowledge'
(*lokottara-nirvikalpa-jnana*) in that there is
nothing that surpasses it. Since it is the object of [the transcendent
knowledge], it is the ultimate. It is even like the space in having
the same taste everywhere. It is the perfect nature, which is
stainless and unchangeable. Therefore, it is known as the
'ultimate.'" ([TrimB], *Sems tsam* shi
169ab)
So, as we can see, the dependent and the imaginary natures together
explain the Yogacara's conventional reality and the perfect
nature explains its conception of the ultimate reality. The first two
natures provide an argument for the Yogacara's empirical and
practical standpoints (vyavahara), i.e., conventional truth, and the
third nature an argument for its ultimate truth. Even so, the
dependent nature alone is conventionally real and the perfect nature
alone is ultimately real. By contrast, the imaginary nature is unreal
and false even by the empirical and practical standards. This is true
in spite of the fact that the imaginary nature is constitutive of the
conventional truth.
So, the perfect nature--nondual mind, i.e., emptiness
(sunyata) of the subject-object duality--is the
ultimate reality of the Yogacara conception. Ultimate truth
takes various forms as it is understood within the Yogacarin
tradition. As Maitreyanath states, ultimate truth takes three
primary forms--as emptiness it is the ultimate object, as
nirvana it is the ultimate attainment, and as nonconceptual
knowledge it is the ultimate realization ([MVK], *Sems tsam*
phi 42B).
*Yogacara Arguments*
The core argument in support of the mind-only thesis is the
impossibility of the existence of external objects. Vasubandhu
develops this argument in his *Vim* 1-27 as does
Dignaga in his *Examination of the Intentional Object*
(*Alambanapariksavrtti*,
*APV* 1-8, *Tshad ma* ce 86a-87b)
against the atomists (Naiyayika-Vaisesikas and
Abhidharmikas[8]).
Against the Yogacara idealist thesis the realist opponents,
as Vasubandhu observes, raise three objections: "If
consciousness (does) not (arise) out from an object (i) neither the
determination or certainty (niyama) in regard to place (desa) and
time (kala), (ii) nor the indetermination or unexclusiveness
(aniyama) in regard to the series of (consciousness) (iii) nor the
performance of the (specific) function (krtyakriya) are
logically possible (yukta)" (*Vim* 2, *Sems
tsam* shi 3a).
Vasubandhu offers his Yogacara reply in the
*Vim* to these realist objections and insists that the
idealist position does not face these three problems. The first
problem is not an issue for the Yogacara since dreams
(svapna) account for the spatio-temporal determination. In dreams, in
the absence of an external object, one still has the cognition of a
woman or a man only at a determinate, specific time and place, not
arbitrarily, not everywhere, and not at any moment. Neither is the
second problem of the lack of an intersubjective agreement an issue
for the Yogacara. The case of the pretas (hungry ghosts)
accounts for intersubjective agreement: as they look at water, they
alone, not other beings, see rivers of pus, urine, and excrement, and
collectively hallucinate demons as the hell guardians. Although pus,
urine, excrement and hell guardians are nonexistent externally, due to
the maturation of their collective karma the pretas exclusively
experience this series of cognitions (vijnapti); other beings do
not encounter such experience (*Vim* 3, *Sems
tsam* shi). Nor is the lack of causal efficacy of the impressions
or representations in the consciousness a problem for the
Yogacara. As in the case of wet dreams, even without a union
of the couple, the emission of semen can occur, and so the
representations in the consciousness are causally efficacious even
without the externally real object.
Yogacara's defensive arguments against the realist
challenges are quite strong. However, unless the Yogacara is able
to undermine the core realist thesis--i.e., the reality of the
external objects--and its key supporting argument--the
existence of atoms--the debate could go either way.
Therefore Yogacara shows the nonexistence, or unreality of
the atoms as the basis of the external object to reject the realist
position.
Yogacara's impressions-only theory and the
Sautrantika's representationalist theory both explain our sensory
experience (the spatio-temporal determinacy, intersubjective
agreement, and causal efficacy). They agree on what the observables
are: mental entities, including mental images but also emotions such
as desires. They also agree that karma plays a vital role in explaining
our experience. The realist theory, though, according to
Dignaga's *Investigation About the Support of the
Cognition* (*Alambanapariksavr
tti* [AlamPV]) has to posit the reality of additional
physical objects, things that are in principle unobservable, given
that all we experience in the cognition are our impressions
([AlamPV], *Tshad ma* ce 86a).
If the realist thesis were correct, then there would be three
alternatives for the atoms to act as the intentional objects of
cognition.
1. Atoms of the things would be either *one* in the way the
Nyaya-Vaisesika conceives the *whole* as something constituted by
parts but being single, *one*, different from the parts that
compose it, and having a real existence apart from the existence of
the parts. Or
2. things would have to be constituted by *multiple* atoms
i.e., a number or a group of atoms coexisting one besides the other
*without forming a composite whole* as a result of mutual
cohesion between the atoms. Or
3. things would have to be atoms *grouped* *together*,
massed together as a unity among themselves with a tight cohesion.
Yogacara contends that these are the only three alternatives
in which real external objects could function as the
objects of cognition.
Of the three alternatives, (1) points to the *unity* of a thing
conceived as a whole, while (2) and (3) point to the
*multiplicity* of things viewed as loose or
composite atoms. None of the three possibilities is, on the
Yogacara's account, admissible as the object of cognition,
however. The first is inadmissible because an external object
is nowhere grasped as a *unity*, *whole*, or *one* apart
from its parts ([AlamPV] *Tshad ma ce* 86b). The second is
inadmissible because when we see things we find that the atoms are not
perceived individually, one by one ([Vim] 13, *Sems
tsam* shi 3b). The third alternative is also rejected because the
atoms in this case cannot be proved to exist as indivisible
([Vim] 14, *Sems tsam* shi 3b).
Further, if there were a simultaneous conjunction of an atom with
six other atoms coming from the six directions, the atom would have
six parts because that which is the locus of one atom cannot be the
locus of another atom. If the locus of one atom were the locus of the
six atoms, which were in conjunction with it, then since all the seven
atoms would have the same common locus, the whole mass constituted by
the seven atoms would be of the size of a single atom, because of the
mutual exclusion of the occupants of a locus. Consequently, there
would be no visible mass (*Vim*12, *Sems tsam* shi
3b).
Since unity is an essential characteristic of being a whole or
composite, whereas indivisibility is an essential characteristic of being
an atom, when both the unity and the indivisibility of the atoms are
rejected, the whole and the indivisible atoms are no longer
admissible. Therefore, Vasubandhu sums up the Yogacara
objections against the reality of the external sense-objects
(ayatanas) as: "An external sense-object is unreal because
it cannot be the intentional object of cognition either as (1) a
single thing or (2) as a multiple of [isolated] atoms; or (3) as an
aggregate, because the atom is not proven to exist"
(*Vim*11, *Sems tsam* shi 3b).
The realist insists that the external sense objects are real and that
their reality is ascertained by the various means of knowledge
(pramanas) of which perception (pratyaksa) is the
most important. If the external sense objects were nonexistent, there
could be no intentional objects, and cognition would not arise. Since
cognitions do arise, there must be external objects as their
intentional objects.
To this objection, Yogacara employs two arguments to refute
the realist claim and to establish the mechanism of cognition, which
takes place without the atoms of an external object. The first
argument shows that atoms do not satisfy the criterion of being the
intentional object and therefore do not cause the perception.
"Perceptual cognition [takes place] just like in dreams and the
like [even without an external object]; moreover, when that
[cognition] occurs, the external object is not seen; how can it be
thought that this is a case of perception?"
(*Vim*16, *Sems tsam* shi 3b).
The second is the time-lag argument, according to which, there is a
time-gap between the perceptual judgement we make and the actual
perceptual process. When we make a perceptual judgement, at that time
we do not perceive the external object, since it is our mental
consciousness (manovijnana) that carries out the judgement,
and the visual consciousness that perceived the object has
already ceased. Hence at the time when the mental consciousness
delivers its judgement, the perceptual cognition no longer exists, since
all things are momentary. Therefore the atoms of an external object are
not the intentional object of perceptual cognition: having
already ceased, they do not now exist and so are not responsible
for the cognition's having the content it has, like the unseen events
occurring on the other side of the wall (Vasubandhu's
*Vimsatika-karikavrtti*
[VimKV] 16-17).
Yogacara therefore concludes that we cannot postulate the
reality of an external object through direct perception. However, since
in perceptual cognition we are directly aware of something, there must
be an intentional object of the perceptual cognition. That intentional
object of perceptual cognition is, according to the
Yogacara, none other than the subliminal impressions
(vasanas) passing from their latent state contained in the
storehouse consciousness (alayavijnana) to their
conscious level. Therefore the impressions are the only things that
are conventionally real.
Vasubandhu's [TSN] 35-36, inspired by traditional Buddhist
religious beliefs, also offers other arguments to defend the idealism
of the Yogacara:
1. one and the same thing appears differently to beings that are in
different states of existence (*pretas*, men, and gods);
2. the ability of the *bodhisattvas* and
*dhyayins* (practitioners of meditation) who have attained
the power of thinking (*cetovasita*) to visualise
objects at will;
3. the capacity of the *yogins* who have attained serenity of
mind (*samatha*) and a direct vision of dharmas as they
really are (*dharma-vipasyana*) to perceive things at
the very moment of the concentration of mind
(*manasikara*) with their essential characteristics of
flux, suffering, nonself, empty; and
4. the power of those who have attained intuitive knowledge
(*nirvikalpakajnana*) which enables them not to
block the perception of things.
All these arguments based on the facts of experience show that objects
do not exist really outside the mind, that they are products of mental
creation and that their appearance is entirely mind dependent.
Therefore the Yogacara's theory of the two truths concludes
that the whole world is a product of mind--it is the collective
mental actions (karma) of all beings. All living beings see the same
world because of the identical maturation of their karmic
consequences. Since the karmic histories of beings are the same, there is
homogeneity in the way in which the world is experienced and
perceived. This is the reason there is an orderly world instead of
chaos and arbitrariness. This is also the reason behind the
impression of the objectivity of the world.
## 4. Madhyamaka
After the Buddha the philosopher who broke new ground on the theory of
the two truths in the Madhyamaka system is a South Indian monk,
Nagarjuna (ca. 100 BCE-100 CE). Amongst his seminal
philosophical works delineating the theory are
Nagarjuna's
* *Fundamental Verses on the Middle Way*
(*Mulamadhyamakakarika* [MMK]),
* *Seventy Verses on Emptiness*
(*Sunyatasaptati*),
* *Rebutting the Disputes*
(*Vigrahavyavartani*[9]),
and
* *Sixty Verses on Logical Reasoning*
(*Yuktisastika*).
Aryadeva's work
*Catuhsatakasastrakarika*
(*Four Hundred Verses*) is also considered as one of the
foundational texts delineating Madhyamaka's theory of the two
truths.
Nagarjuna saw himself as propagating the dharma taught by
the Buddha, which he says is precisely based on the theory of the two
truths: a truth of mundane conventions and a truth of the ultimate.
([MMK] 24.8, *Dbu ma* tsa 14b-15a) He saw the theory of
the two truths as constituting the Buddha's core teaching and his
philosophy. Nagarjuna maintains therefore that those who do
not understand the distinction between these two truths would fail to
understand the Buddha's teaching ([MMK] 24.9, *Dbu ma* tsa
15a). This is so, for Nagarjuna, because (1) without relying
on the conventional truth, the meaning of the ultimate cannot be
explained, and (2) without understanding the meaning of the ultimate,
nirvana is not achieved ([MMK] 24.10, *Dbu ma* tsa
15a).
Nagarjuna's theory of the two truths is fundamentally
different from all theories of truth in other Indian philosophies.
Hindu philosophers of the Nyaya-Vaisesika,
Samkhya-Yoga, and
Mimamsa-Vedanta--all advocate a
foundationalism of some kind according to which ultimate reality is
taken to be "substantive reality" (dravya) or foundation
upon which stands the entire edifice of the conventional ontological
structures where the ultimate reality is posited as immutable, fixed,
irreducible and independent of any interpretative conventions. That is
so, even though the conventional structure that stands upon it
constantly changes and transforms.
As we saw the Buddhist realism of the Vaibhasika and the
representationalism of the Sautrantika both advocate ultimate
truth as ultimately real, logically irreducible. The idealism of
Yogacara holds nondual mind as the only ultimate reality and
the external world as merely conventional truths. On
Nagarjuna's Madhyamaka all things including ultimate truth
are ultimately unreal, empty (sunya) of any intrinsic nature
(svabhava) including the emptiness (sunyata)
itself; therefore all are groundless. In this sense a Madhyamika
(a proponent of Madhyamaka thought) is an advocate of
emptiness (sunyavadin), an advocate of intrinsic
unreality (nihsvabhavavadin), of groundlessness,
essencelessness, or corelessness. Nevertheless, to assert that all
things are empty of any intrinsic reality, for Nagarjuna, is
not to undermine the existential status of things as simply
*nothing*. On the contrary, Nagarjuna argues, to
assert that the things are empty of any intrinsic reality is to
explain the way things really are as causally conditioned phenomena
(*pratityasamutpada*).
Nagarjuna's central argument to support his radical
non-foundationalist theory of the two truths draws upon an
understanding of conventional truth as tied to dependently arisen
phenomena, and ultimate truth as tied to emptiness of the intrinsic
nature. Since the former and the latter are co-constitutive of each
other, in that each entails the other, ultimate reality is tied to
being that which is conventionally real. Nagarjuna advances
important arguments justifying the correlation between conventional
truth vis-a-vis dependent arising, and emptiness
vis-a-vis ultimate truth. These arguments bring home their
epistemological and ontological correlations ([MMK] 24.14; *Dbu
ma* tsa 15a). He argues that wherever emptiness applies as the
ultimate reality, the causal efficacy of conventional
reality applies; and wherever emptiness does not apply as the ultimate
reality, the causal efficacy of conventional reality does not apply
(*Vig*. 71, *Dbu ma* tsa 29a). According to
Nagarjuna, ultimate reality's being empty of any intrinsic
reality affords conventional reality its causal efficacy since being
ultimately empty is identical to being causally produced,
conventionally. This must be so since, for Nagarjuna,
"there is no thing that is not dependently arisen; therefore,
there is no such thing that is not empty" ([MMK] 24.19, *Dbu
ma* tsa 15a).
*Svatantrika / Prasangika and the two
truths*
The theory of the two truths in the Madhyamaka in India underwent a great
resurgence from the fifth century onwards, soon after Buddhapalita
(ca. 470-540) wrote *A Commentary on [Nagarjuna's]
Fundamental Verses of the Middle Way*
(*Buddhapalitamulamadhyamakavrtti*,[10]*Dbu
ma* tsa 158b-281a). Set forth in this text is a
thoroughgoing non-foundationalist philosophic reasoning and
method--*prasanga* arguments or *reductio ad
absurdum* style without relying upon the *svatantra*,
formal probative argument--to elucidate the Madhyamaka
metaphysics and epistemology ingrained in theory of the two truths.
For this reason, Buddhapalita is often identified later as the
founder of the Prasangika Madhyamaka, although elucidation
of the theory itself is set out in the works of Candrakirti.
Three decades later
Bhavaviveka[11]
(ca. 500-570) challenged Buddhapalita's interpretation of
the two truths and developed a Madhyamaka account of the two truths
that reflects a significant ontological and epistemological shift from
Buddhapalita's position, a position later defended in the works of
Candrakirti, to whom we shall return shortly.
Many later commentators ignore the philosophical contents of the
debate between the Prasangika and the Svatantrika and
claim that their controversy is confined only to pedagogical or
methodological issues. There is another view according to which the
contents of the debate between the Prasangika and the
Svatantrika--Buddhapalita versus Bhavaviveka
followed by Bhavaviveka versus Candrakirti--is
essentially philosophic in nature. Underpinning the dialectical or
methodological controversy between the two Madhyamaka camps lies a
deeper ontological and epistemological divide implied within their
theories of the two truths, and this in turn is reflected in the
different methodological considerations they each deploy. So on this
second view, the variations in the methods used by the two schools of
the Madhyamaka are not simple differences in their rhetorical devices
or pedagogical tools; they are underlain by more serious philosophical
disagreements between the two.
According to this view, the philosophical differentiation between the
Svatantrika and the Prasangika is best captured
within the discourse of the two truths. The Prasangika's theory
of the two truths we leave aside for the time being. First we shall
take up the Svatantrika's account.
### 4.1 Svatantrika Madhyamaka
Bhavaviveka wrote some of the major Madhyamaka treatises
including
* *Lamp of Wisdom*
(*Prajnapradipamulamadhyamakavrtti*
[PPMV], *Dbu ma* tsha 45b-259b),
* *Verses on the Heart of the Middle Way*
(*Madhyamakahrdayakarika* [MadhHK]), and
* *Blazes of Reasoning: A Commentary on Verses of the Heart of
the Middle Way* (*Tarkajvala*).
In these texts Bhavaviveka rejects the Brahmanical systems of
Nyaya-Vaisesika, Samkhya-Yoga,
Mimamsa-Vedanta on both metaphysical and
epistemological grounds. All the theories of truth and knowledge
advanced in these systems are, from his Madhyamaka point of view, too
rigid to be of any significant use. Bhavaviveka's critiques of
Abhidharmikas--Vaibhasika and
Sautrantika--and Yogacara predominantly target the
ontological foundationalism which underpins their theories of the
ultimate truth. He rejects them on the ground that from an analytic
cognitive perspective, which scrutinises the nature of reality,
nothing--subject and object--is found to be ultimately real
since all things are rationally reduced to spatial parts or temporal
moments. Thus he proposes the view that both the subject and the
object are *conventionally* intrinsically real as both are
conventional truths, whereas both are *ultimately*
intrinsically unreal as both are empty of ultimate reality; hence
emptiness alone is the ultimate reality.
Although Bhavaviveka is said to have founded the Svatantrika
Madhyamaka, it is important to note that there exist two different
subschools of the Svatantrika tradition:
1. Sautrantika Svatantrika Madhyamaka
2. Yogacara Svatantrika Madhyamaka
The two schools of the Svatantrika Madhyamaka take
Bhavaviveka's interpretation of the two truths, on the whole, to
be more cogent than the Prasangika's. Particularly both
schools reject *ultimate* intrinsic reality while positing
*conventional* intrinsic reality. As far as the presentation of
ultimate truth is concerned both schools are in
agreement--nonself or emptiness alone is the ultimate reality,
and the rest--the entire range of dharmas--are ultimately
empty of any intrinsic reality. Nevertheless, the two schools differ
slightly on matters related to the theory of conventional truth,
specifically concerning the reality or unreality of the external or
physical objects on the conventional level. The Sautrantika
Svatantrika Madhyamaka view, held by Bhavaviveka himself and
his student Jnanagarbha, affirms the intrinsic reality of the
conventional truths of both the outer domains and the inner domains
and mental faculty, along with their six respective
consciousnesses.
The arguments that affirm the conventional reality of the external
objects are the same arguments that Bhavaviveka employed to
criticise the Yogacara arguments rejecting the reality of
external objects. Bhavaviveka's [MadhHK] 5.15 points out that the
Yogacara's arguments are contradicted by perception
(pratyaksa), tradition (agama) and common sense
(lokaprasiddha), since these all prove the correctness of the cognition
of material form ([MadhHK] 5.15, *Dbu ma* dza 20b and
*Tarkajvala* 5.15, *Dbu ma* dza 204ab).
The Yogacara's dream argument is also rejected because,
according to Bhavaviveka, dream-consciousness and so forth have
*dharmas* as their objects (alambana / dmigs pa) ([MadhHK]
5.19, *Dbu ma* dza 20b). And dharmas are based on intrinsically real
conventional objects, because they are the repeated impressions of
objects that have been experienced previously, like memory
(*Tarkajvala* 5.19, *Dbu ma* dza 205a).
The Yogacara Svatantrika Madhyamaka view held by
Santaraksita and his student Kamalasila
affirms the conventional intrinsic reality of the inner domains,
rejects the intrinsic reality of the outer domains, and claims that
external objects are mere conceptual fictions.
#### 4.1.1 Sautrantika Svatantrika Madhyamaka
Of the two schools of Svatantrika we shall first take up the
theory of the two truths presented in the Sautrantika
Svatantrika Madhyamaka. Bhavaviveka's position represents
the theory of the two truths held by this school which was later
promoted by his disciple Jnanagarbha. The cornerstones of
the Sautrantika Svatantrika Madhyamaka theory of the two
truths are the following theses:
1. That at the level of conventional truth all phenomena are
intrinsically real (svabhavatah), because they are
established as intrinsic realities from the perspective of the
non-analytical cognitions of the naive ordinary beings. Allied to
this thesis is Bhavaviveka's claim that the Madhyamaka accepts,
conventionally, the intrinsic reality of things, for it is the
intrinsic reality that defines what is to be conventionally real.
2. That at the level of ultimate truth, all phenomena, with the
exception of the emptiness which is ultimately real, are intrinsically
unreal (nihsvabhavatah), or ultimately unreal,
because they are established as empty of any intrinsic reality from
the perspective of analytical exalted cognition (i.e., ultimate
cognition of arya-beings). Allied to this thesis is
Bhavaviveka's claim that Madhyamaka rejects intrinsic reality at
the ultimate level since ultimate reality constitutes that which is
intrinsically unreal, therefore empty.
##### Conventional Truth
We shall consider Sautrantika Svatantrika Madhyamaka's
defence of the two claims in turn. We will begin with
Bhavaviveka's definition of the conventional truth. In the
*Tarkajvala*, he defines conventional truths as an
"incontrovertible (phyin ci ma log pa) linguistic
convention." (*Dbu ma* dza 56a) Such conventional truth
on Bhavaviveka's Svatantrika account takes two forms--unique
particulars (svalaksana / rang mtshan) and universals
(samanyalaksana / spyi mtshan) as the
*Tarkajvala* III.8, explains:
Phenomena have dual characteristics (laksana / mtshan nyid),
differentiated as being universal or unique. Unique particular
(svalaksana / rang mtshan) is a thing's intrinsic reality
(svabhavata / rang gi ngo bo), the domain of engagement
which is definitively ascertained by a non-conceptual cognition
(nirvikalpena jnanena / rnam par rtog pa med pa'i shes
pa). Universal (samanyalaksana) is a
cognitive domain to be apprehended by an inferential cognition
(anumanavijnanena / rjes su dpog pa'i shes pa) which is
conceptual (*Dbu ma* dza 55b).
In this passage Bhavaviveka summarises some of the
key features of conventional truths which are also critical to
understanding the differences between the Svatantrika and the
Prasangika.
1. Bhavaviveka accepts, on the basis of this passage, the unique
particular (svalaksana) as an integral part of the ontological
structure of conventional truth. The Prasangika, by
contrast, rejects the concept of the unique particular even at the level
of conventional truth.
2. Bhavaviveka ascribes to the unique particular the ontological
status of being "intrinsically real"
(svabhavata), or "inherently real,"
explicitly affirming a form of foundationalism about unique
particulars at the level of conventional truth, which the
Prasangika totally rejects.
3. Bhavaviveka endorses the Pramanika epistemology
in which the unique particular, which is conventionally
intrinsically real, is taken as the domain of engagement ascertained
by non-conceptual cognition, while the universal
(samanyalaksana) is taken as the
domain apprehended by conceptual, inferential cognition.
Bhavaviveka's defence of the thesis that things are conventionally
intrinsically real can be summed up in two arguments. First, insofar
as things are conventional realities they are also intrinsic or
inherent realities, for being intrinsically real is the reason
things are designated "conventional reality": from the
conventional standpoint--i.e., the non-analytical cognitive
engagements of ordinary beings--things do appear to be
inherently real. This means that being "intrinsic" is, for
Bhavaviveka, what makes *conventional reality* a
reality.
Bhavaviveka's second argument states that as long as a
Madhyamika accepts conventional reality, it must also accept
things as intrinsic in order to avoid the charge of nihilism, for
conventional reality is defined in terms of its being intrinsically
and uniquely real (svalaksana). Denying the intrinsic reality
of things at the conventional level would therefore entail the denial
of their conventional existence, since it would entail the denial of
the defining characteristic of conventional reality. This follows,
says Bhavaviveka, because "The Lord (bhagavan / bcom
ldan ldas) has taught the two truths. Based on this [explanatory
schema], conventionally things are posited in terms of their intrinsic
natures and [unique] particulars ([sva]laksana / mtshan
nyid). It is only ultimately that [things are] posited as having no
intrinsic reality" (*Dbu ma* dza 60a).
##### Ultimate truth
Bhavaviveka's second thesis is that at the level of ultimate
truth all phenomena are intrinsically unreal
(nihsvabhavatah); therefore the Madhyamaka rejects
intrinsic reality ultimately. Bhavaviveka's
*Tarkajvala* 3.26 offers three ways to interpret the
compound paramartha, literally "ultimate domain" or
"ultimate object." In the first, both the terms
"ultimate" (param) and "domain" (artha) stand
for the object--emptiness (sunyata)--because
it is both the "ultimate" or "final" as well
as an "object to be known," "analysed"
(pariksaniya / brtag par bya ba) and
"understood" (pratipadya / go bar bya ba). In the
second reading, paramartha stands for "object of the
ultimate" (paramasya artha / dam pa'i don), where
"object" refers to emptiness and "ultimate"
refers to the non-conceptual exalted cognition
(nirvikalpajnana / rnam par mi rtog pa'i ye shes) of
meditative equipoise (samahita / mnyam bzhag). In the third
reading, paramartha stands for a concordant ultimate (rjes su
mthun pa'i shes rab), a cognition that accords with knowledge of
the ultimate since it has the ultimate as its domain.
The first two etymological senses of the term paramartha treat
emptiness as an objective domain, since it is both
the final ontic status of things and the ultimate object of
non-conceptual exalted cognition. Bhavaviveka's third
etymological sense identifies paramartha with a
cognition that accords with knowledge of the ultimate truth, on the
ground that such cognition has emptiness as its object and such
knowledge is a means through which one develops non-conceptual
knowledge of ultimate reality. This third sense of paramartha
allows Bhavaviveka to argue that a cognition of the subsequent
attainment (prsthalabdhajnana / rjes
thob ye shes), which directly follows from non-conceptual
meditative equipoise, is nevertheless a concordant ultimate even
though it is conceptual in character ([PPMV] *Dbu ma* tsa
228a).
So for Bhavaviveka, then, "the ultimate is of two kinds: one
is transcendent (lokottara / 'jig rten las 'das pa),
undefiled (zag pa med pa), free from elaboration (aprapanca /
spros med), which needs to be engaged without deliberative effort.
The second one," as he explains in *Tarkajvala*
3.26, "is elaborative (prapanca / spros pa), hence it
can be engaged with deliberative effort by means of a correct
mundane cognitive process (dag pa'i 'jig rten pa'i ye shes) that
is sustained by the collective force of virtues (bsod nams) and
insights (ye shes)" (Dbu ma dza 60b).
Bhavaviveka advances several arguments to defend the thesis that
ultimately all phenomena are intrinsically unreal. The first is the
conditionality argument pertaining to the four elements, according to
which all four elements are ultimately empty of any intrinsic
reality, for they are all conditioned by the causal factors
appropriate for their becoming and their existence.
The second is Bhavaviveka's non-foundationalist ontological
argument, which demonstrates that all phenomena, including atoms, are
ultimately non-foundational: ultimately there is nothing that can
be taken as a foundational entity (dravya) or as intrinsically real,
since ultimate analysis reveals that all phenomena are composed of
atomic particles that are themselves composites.
Bhavaviveka's third, non-foundationalist epistemological argument
shows that ultimately objects are not the domain of cognitions.
The argument says that "A visible form is not ultimately
apprehended by the visual faculty because it is a composite (bsags),
like a sound, and because it is a product of the elements, like a
sound" ([MadhHK] 3.33 *Dbu ma* dza 5a). The reasons
provided to justify the thesis are twofold: (i) being a composite and
(ii) being a product of the elements. Since being a
*composite* and being a *product* apply equally to visible
form and sound, these two reasons could not warrant the thesis that a
visible form is ultimately apprehended by the visual faculty: if they
did, they would equally warrant the absurd thesis that a sound is
ultimately apprehended by the visual faculty (*Dbu ma*
dza 65ab).
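One natural way to schematize this parity reasoning (the predicate letters below are our illustrative labels, not Bhavaviveka's own notation) is as a brief reductio:

```latex
% Illustrative labels (not in the source):
% C(x): x is a composite;  P(x): x is a product of the elements;
% A(x): x is ultimately apprehended by the visual faculty.
\begin{align*}
&\text{Suppose, for reductio: }
  \forall x\,\bigl[(C(x)\wedge P(x))\rightarrow A(x)\bigr]\\
&C(\mathit{sound})\wedge P(\mathit{sound})
  \;\Rightarrow\; A(\mathit{sound})
  \quad\text{(absurd: a sound is not an object of vision)}\\
&\therefore\ C(\mathit{form})\wedge P(\mathit{form})
  \ \text{cannot warrant}\ A(\mathit{form}).
\end{align*}
```

Since the two properties are shared by form and sound alike, any inference from them to ultimate visual apprehension would overgenerate; this is the parity Bhavaviveka exploits.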
In conclusion: mirroring these two critically important philosophical
positions, Bhavaviveka introduces a method, or pedagogical
device, that is unique to the Svatantrika in contrast to his
Prasangika counterpart.
#### 4.1.2 Yogacara Svatantrika Madhyamaka
In the early eighth century Bhavaviveka and
Jnanagarbha's theory of the two truths was adopted and
elucidated in the works of the latter's
student[12]
Santaraksita (705-762) and his student
Kamalasila (ca. 725-788). Santaraksita
wrote:
* *Verses on the Ornament of the Middle Way*
(*Madhyamakalamkarakarika* [MAK], *Dbu
ma* sa 53a-56b),
* *Commentary on the Verses of the Ornament of the Middle Way*
(*Madhyamakalamkaravrtti* [MAV], *Dbu
ma* sa 56b-84a).
In these works he argues for the synthesis of Madhyamaka and
Yogacara's theories of the two truths which resulted in the
formation of the Yogacara-Svatantrika
Madhyamaka--a subschool within the Svatantrika Madhyamaka.
Kamalasila, a student of
Santaraksita, held the same conception of the two
truths in his works:
* *Subcommentary on the Ornament of the Middle Way*
(*Madhyamakalamkarapanjika*, *Dbu ma*
sa 84a-133b),
* *Light of the Middle Way*
(*Madhyamakalokah* [MA], *Dbu ma* sa
133b-244a),
* *Discussion on the Light of Reality*
(*Tattvaloka-nama-prakarana*, *Dbu ma* sa
244b-273a),
* *Establishing the Non-intrinsic Reality of All Phenomena*
(*Sarvadharmasvabhavasiddhi*, *Dbu ma* sa
273a-291a),
* *Stages of Meditation* (*Bhavanakrama*,
*Dbu ma* ki 22a-68b).
Unlike Jnanagarbha and Bhavaviveka's realistic
Sautrantika Svatantrika Madhyamaka, which presents
conventional truth in agreement with the epistemological realism of
the Sautrantika and ultimate truth in agreement with the
non-foundationalist ontology of the Madhyamaka,
Santaraksita and Kamalasila advocate the
Yogacara Svatantrika Madhyamaka. They each maintain a
presentation of conventional truth in agreement with the
epistemological idealism of Yogacara and a presentation of
ultimate truth in agreement with the ontological non-foundationalism
of the Madhyamaka. Therefore Santaraksita concludes:
"By relying on the Mind Only (cittamatra / sems tsam) one
understands that external phenomena do not exist. And by relying on
this [i.e., Madhyamaka] one understands that even [the mind] is
thoroughly nonself." ([MAK] 23 Dbu ma sa 56a) "Therefore,
one attains a Mahayana [path] when one is able to ride the
chariots of the two systems by holding the reins of logic."
([MAK] 24 Dbu ma sa 56a) By "the two systems"
Santaraksita meant Yogacara's account of
conventional truth together with Madhyamaka's account of ultimate
truth.[13]
Underlying this syncretistic view of the two truths lie two
fundamental theses to which the Yogacara Svatantrika Madhyamaka
is committed--namely:
1. that there is no real external object, mind is the only thing that
is conventionally real, hence Yogacara is right about its
presentation of the conventional truth; and
2. that even mind is unreal and empty of any intrinsic nature under
ultimate analysis, therefore nothing is ultimately real, hence
Madhyamaka is right about its presentation of the ultimate truth.
And it is precisely the synthesis of these two theses that defines the
characteristic of the theory of the two truths in the
Yogacara Svatantrika Madhyamaka.
Santaraksita and Kamalasila attempt to
realise this distinctive syncretistic character in their works, to
which we now turn.
##### Ultimate truth
We will follow Santaraksita and Kamalasila
and first take up the second major thesis: ultimately everything,
including the mind, is unreal and empty; ultimate truth is
therefore the emptiness of any intrinsic reality. According to
Kamalasila, ultimate truth has three senses ([MA], *Dbu
ma* sa 233b):
1. reality, the nature of which is characterised as "nonself of
person" and "nonself of phenomena," is etymologically
both ultimate (param / dam pa) and object (artha / don).
2. transcendent knowledge (lokottaram
jnanam), reliable cognition of the ultimate since it
is directly engaged with ultimate reality.
3. mundane knowledge, which, while not in itself ultimate knowledge,
is also identified as the ultimate because it is a complementary
means to transcendent knowledge.
Both Santaraksita and Kamalasila maintain
that the emptiness of intrinsic reality is the ultimate truth, and in
order to demonstrate that this is so, Kamalasila deploys
five forms of argument--namely:
1. The diamond-sliver argument (vajrakanadiyukti / rdo
rje gzegs ma'i gtan tshigs), which shows that all things are empty of
intrinsic reality because things are analytically not found to
arise--neither from themselves, nor from another, nor from both, nor
causelessly ([MA], *Dbu ma* sa 190a-202b).
2. The argument refuting the arising of existent and non-existent
entities (*sadasadutpadapratisedhahetu / yod med skye
'gog gi gtan tshigs*) which shows that all things are empty
of intrinsic reality because their intrinsic reality is not found to
arise either from things that exist or from things that do not exist
([MA], *Dbu ma* sa 202b).
3. The argument refuting the four modes of arising
(*catuskotiyutpadapratisedhahetu / mu bzhi
skye 'gog gi gtan tshigs*), which shows that things are empty
of intrinsic reality because such reality is found
analytically neither in existence (or being), nor in nonexistence (or
nonbeing), nor in both existence and nonexistence, nor in neither
existence nor nonexistence ([MA], *Dbu ma*
sa 210b).
4. The neither-one-nor many argument (*ekanekaviyogahetu /
gcig du bral gyi gtan tshigs*) which shows all phenomena are empty
of intrinsic reality because they all lack the characteristics of
being intrinsically one or intrinsically many ([MA], *Dbu ma*
sa 218b).
5. The argument from dependent arising
(*pratityasamutpadahetu / rten 'brel gyi gtan
tshigs*), which shows that things are produced from the association
of multiple causes and conditions, and therefore that things do
not have any intrinsic reality of their own ([MA], *Dbu ma* sa
205b).
The most well-known of Santaraksita's arguments is the
neither-one-nor-many
argument,[14]
which seeks to examine the final (or ultimate) identity (*ngo bo
la dpyod pa*) of all phenomena. In his [MAK] and [MAV]
Santaraksita develops this argument in great detail.
The essential feature of the argument is as follows: "The
entities that are proclaimed by our own [Buddhist] schools and the
other [non-Buddhist] schools, lack intrinsic identity because in
reality their identity is neither a (1) singular nor a (2)
plural--like a reflection." ([MAK] 1, *Dbu ma* sa
53a) In this argument:
* The subject (*rtsod gzhi chos can*): the entities that are
proclaimed by our own Buddhist schools and the other non-Buddhist
schools (from here on, 'all entities').
* The property to be proved (*sadhyadharma* / *sgrub
bya'i chos*): the lack of intrinsic identity.
* What is being proved (*sadhya* / *sgrub bya*):
all entities lack intrinsic identity.
* The reason (hetu / rtags) or means of proof (*sadhana*
/ *sgrub byed*): in reality their identity is neither
singular nor plural.
* Example: like a reflection.
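Read as a formal probative argument, the components listed above can be arranged, on one natural reconstruction (the symbols are ours, not Santaraksita's), as:

```latex
% Illustrative notation (not in the source):
% E(x): x is an entity posited by the Buddhist or non-Buddhist schools;
% O(x): x is in reality singular;  M(x): x is in reality plural;
% I(x): x has intrinsic identity.
\begin{align*}
\text{What is proved (\textit{sadhya}):}\quad
  & \forall x\,[E(x)\rightarrow\neg I(x)]\\
\text{Reason (\textit{hetu}):}\quad
  & \forall x\,[E(x)\rightarrow\neg O(x)\wedge\neg M(x)]\\
\text{Entailment:}\quad
  & \forall x\,[\neg O(x)\wedge\neg M(x)\rightarrow\neg I(x)]\\
\text{Example:}\quad
  & \text{a reflection, which is neither one nor many}\\
  & \text{and lacks intrinsic identity}
\end{align*}
```

The conclusion follows once the reason is shown to hold of the subject and the entailment from the reason to the property is secured, which is exactly what the triple criteria discussed next are meant to certify.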
According to Santaraksita the validity of the
neither-one-nor many argument depends on whether or not it satisfies
the triple criteria (*trirupalinga / tshul gsum*) of
formal probative reasoning. The first criterion
(*paksadharmata / phyogs chos*) shows that the
reason qualifies the subject (*phyogs / paksa or chos can /
dharmin*), i.e., that the subject must be proven to have the
property of the reason. That is, all entities must be shown, in
reality, as neither singular nor plural. And therefore this argument
consists of two premises:
1. the premise that shows the lack of singularity
("unity") and
2. the premise that shows the lack of plurality (or
"many-ness")
In [MAK] and [MAV] Santaraksita develops the argument
showing the lack of singularity in two parts: (i) the argument that
shows the lack of the *non-pervasive singularity* in (a)
permanent phenomena, and (b) in impermanent entities; and (ii) the
argument that shows the lack of *pervasive singularity* by (a)
refuting the indivisible atomic particles which are said to have
composed the objects, and (b) by refuting the unified consciousness
posited by Yogacara.
The premise that shows the lack of plurality is based upon the
conclusion of the argument that shows the lack of singularity. This is
so, argues Santaraksita, because "When we
analyse any entity, none is [found] to be a singular. For that for
which nothing is singular, there must also be nothing which is
plural" ([MAK] 61, *Dbu ma* sa 55a). This argument of
Santaraksita's proceeds from the understanding that being
singular and being plural (manifold) are essentially
interdependent: if there is no singular entity, one must accept
that there is no plural entity either, since the latter is just the
former assembled.
According to Santaraksita the argument from
neither-one-nor many satisfies the triple criteria of valid reasoning.
It satisfies the first criterion because all instances of the subject,
namely the intrinsic identities of those entities asserted by the
Buddhist and non-Buddhist opponents, are instances of entities which
are neither singular nor plural. The second criterion of a valid
reason--the proof of the forward entailment
(*anvayavyapti / rjes khyab*), i.e., the proof that the
reason occurs in only "similar instances"
(*sapaksa / mthun phyogs*) where all instances of the
reason are the instances of the predicates--is also satisfied
because the reason--all phenomena are neither singular nor
plural--is an instance of the predicate--phenomena which
lack identity. The third criterion of a valid reason--the proof
of the counter entailment (*vyatirekavyapti / ldog
khyab*), or the proof that the reason is completely absent
from the dissimilar instances of the predicate (*vipaksa / mi
mthun phyogs*)--holds because there are no instances of the
predicate which are not instances of the reason.
There are no instances of phenomena lacking intrinsic identity
which are not also instances of phenomena that are neither singular
nor plural. Therefore "the counter entailment holds since there
are no entities that could be posited as existing in other possible
alternatives" ([MAV] 62 *Dbu ma* sa 69b).
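On a standard logical reconstruction (the letters are ours, not Santaraksita's), the triple criteria amount to:

```latex
% p: the subject (all entities);
% H: the reason (being in reality neither singular nor plural);
% S: the property to be proved (lacking intrinsic identity).
\begin{align*}
\textit{paksadharmata}:\quad
  & H(p)\\
\textit{anvayavyapti}:\quad
  & \forall x\,[H(x)\rightarrow S(x)]\\
\textit{vyatirekavyapti}:\quad
  & \forall x\,[\neg S(x)\rightarrow\neg H(x)]
\end{align*}
```

So rendered, the second and third conditions are contrapositives of one another, which is why establishing either suffices for both.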
Thus the second and the third criteria, which are mutually entailing,
prove the entailment of the neither-one-nor-many argument. Once the
entailment[15]
of the argument is established, in Santaraksita's
view, all potential loopholes in the argument are addressed. The
argument therefore successfully proves that all phenomena are
empty of any intrinsic reality, since they all lack any singular or
plural identity.
It is clear, then, that the arguments of Santaraksita
and Kamalasila have the final thrust of demonstrating
why all phenomena *ultimately* lack any intrinsic
reality, and are therefore geared towards proving the emptiness of
intrinsic reality as the ultimate truth. In each argument a specific
domain of analysis is used to prove this final ontological
position.
##### Conventional Truth
Let us now turn to the first thesis and look at the
Yogacara-Svatantrika Madhyamaka defence of the theory
of conventional truth. As we have seen from arguments such as the
neither-one-nor-many argument, Santaraksita and
Kamalasila draw the conclusion that entities are ultimately
empty of reality, for ultimately they do not exist either as a
singular or as a plural. They then move on to explain the way things
do exist, and Santaraksita does this with an explicit
admission that things do exist conventionally--"Therefore
these entities are only conventionally defined. If [someone] accepts
them as ultimate, what can I do for that?" ([MAK] 63 *Dbu
ma* sa 55a). Why is it that all entities are defined as only
conventionally real? To understand the answer to this question, we
need to look at Santaraksita's definition of
conventional reality, which consists of three criteria:
1. it strictly appeals to its face-value and cannot be subjected to
reasoned analysis (*avicaramaniya*);
2. it is causally efficient; and
3. it is subjected to arising and disintegration ([MAK] 64, *Dbu
ma* sa 55a).
Elsewhere in [MAV] Santaraksita also provides a
broader definition of conventionality (samvrti / kun
rdzob), arguing that conventionality is not simply linguistic
convention (sabda-vyavahara / sgra'i tha snyad): it also
includes what is conventionally real, namely those things that are
seen and accepted by the Madhyamikas as dependently coarisen
(pratityasamutpanna / rten cing 'brel 'byung), and
those that cannot be rationally analysed ([MAV] 65, *Dbu ma*
sa 70a).
Kamalasila's definition of conventional truth stresses
epistemic error (bhranta jnana / rnam shes khrul ba),
just as his definition of ultimate truth stresses the non-erroneous
epistemic perspective (abhranta jnana / rnam shes ma khrul
ba). So on his definition the nature of
all entities, illusory persons and the like, is classified into the
two truths by virtue of the erroneous (bhranta / khrul ba) and
non-erroneous cognitions through which they are each represented.
Kamalasila's definition of conventional truth makes
explicitly clear the synthesis between Madhyamaka and
Yogacara. The Yogacara Svatantrika Madhyamaka
claims that the Yogacara is right about the presentation of
conventional truth as mere mental impressions, and this can
be seen from the way in which Santaraksita and
Kamalasila defend this synthesis, textually and rationally,
by relying heavily on the *sutras* traditionally supporting
Yogacara idealism--the *Samdhinirmocanasutra*,
*Lankavatarasutra*,
*Saddharmapundarikasutra* and so
forth. The claim is made here, and Santaraksita makes it
explicitly, that the Madhyamaka and the idealistic doctrines taught in
these scriptures are all "consistent" ([MAV] 91 *Dbu
ma* sa 79a). The works of these two philosophers produce abundant
citations from these *sutra* literatures, stressing the
significance of passages like this from the
*Lankavatarasutra*: "Material forms
do not exist, one's mind appears to be external."
In terms of the argument, Santaraksita argues that
"that which is cause and effect is mere consciousness only. And
whatever is causally established abides in consciousness"
([MAK] 91, *Dbu ma* sa 56a). Therefore phenomena that appear to
be real external objects, like a blue patch, are not
distinct from the nature of the phenomenological experience of the
blue patch, just as the material form experienced in dreams is not
distinct from the dreaming experience ([MAV] 91 *Dbu ma* sa
79a).
Briefly, then, it is clear that the two Svatantrika
schools--the Sautrantika-Svatantrika-Madhyamaka and the
Yogacara-Svatantrika Madhyamaka--both uphold the
standard Madhyamaka position in treating emptiness as the
ultimate truth, although they vary somewhat in their presentation of
conventional truth, since the former is realistic and the latter
idealistic about its nature.
### 4.2 Prasangika Madhyamaka
Bhavaviveka's Svatantrika Madhyamaka theory of the two
truths soon came under attack from Candrakirti (ca.
600-650) who, among other works, wrote the following:
* *Clear-word Commentary on the Fundamental Verses of the Middle
Way* (*Mulamadhyamakavrttiprasannapada*
[PP], known simply as *Prasannapada*, *Dbu ma*
'a 1b-200a),
* *An Introduction on Madhyamaka*
(*Madhyamakavatara* [M], *Dbu ma* 'a
201b-219a),
* *Commentary on Introduction on Madhyamaka*
(*Madhyamakavatarabhasya* [MBh], *Dbu
ma* 'a 220b-348a),
* *Commentary on Four Hundred Verses*
(*Catuhsatakatika*, *Dbu ma* ya
30b-239a),
* *Commentary on Seventy Verses on Emptiness*
(*Sunyatasaptativrtti*, *Dbu ma*
tsa 110a-121a),
* *Commentary on Sixty Verses on Reasoning*
(*Yuktisastikavrtti*, *Dbu ma*
ya 1b-30b), and
* *Discussion on the Five Aggregates*
(*Pancaskandhaprakarana*, *Dbu ma* ya
239b-266b)
In these philosophical works, Candrakirti rejects the theories of
the two truths in both Brahmanical and the Buddhist schools of
Vaibhasika, Sautrantika, Yogacara and
Svatantrika Madhyamaka. His chief reason for doing this is his
deep mistrust of the varying degrees of metaphysical and
epistemological foundationalism that these theories are committed to.
For example, Candrakirti rejects Bhavaviveka's reading of
Nagarjuna's theory of the two truths and explicitly
vindicates Buddhapalita's Prasangika approach to
understanding Nagarjuna's Madhyamaka. He does this by
demonstrating the groundlessness of the fallacies Bhavaviveka
attributed to Buddhapalita, and thereby rejecting the kind of
independent formal svatantra argument that Bhavaviveka imposed
upon Nagarjuna's non-foundationalist ontology--an argument
Candrakirti saw as encumbered by foundationalist metaphysics and
incompatible with
Nagarjuna's ontology. Instead of relying on the two truths
theories of other schools, Candrakirti proposes a distinctive
non-foundationalist theory of the two truths in the Madhyamaka, and he
defends it against his foundationalist opponents. This school later
came to be known as the Prasangika Madhyamaka.
At the core of the Prasangika's theory of the two truths
are these two fundamental theses:
1. Only what is *conventionally* non-intrinsic is causally
effective, for only those phenomena, the conventional nature of which
is non-intrinsically real, are subject to conditioned or dependent
arising. Conventional reality (or dependently arisen phenomenon),
given it is causally effective, is therefore *always*
intrinsically unreal. Hence that which is conventionally (or
dependently) coarisen is always conventionally (or dependently)
arisen.
2. Only what is *ultimately* non-intrinsic is causally
effective, for only those phenomena, the ultimate nature of which is
non-intrinsically real, are subject to conditioned arising. Ultimate
truth (or emptiness), given it is causally effective, is therefore
intrinsically unreal. Hence ultimate truth is ultimately unreal (or
emptiness is always empty).
Although these two theses are advanced separately, they are mutually
coextensive. There is a compatible relationship between conventional
truth and ultimate truth, hence between dependent arising
(pratityasamutpada / rten 'brel) and emptiness
(sunyata / stong pa nyid), and there is no tension
between the two. We shall examine the Prasangika arguments
in defence of these theses.
#### 4.2.1 Conventional Truth
Let us turn to the first thesis. Earlier we saw that both the
Vaibhasika and Sautrantika argue that only
*ultimately* intrinsic reality (svabhava) enables things
to perform a causal function (arthakriya). The Svatantrika
Madhyamaka rejects this, and it instead argues that things are
causally efficient because of their *conventionally* intrinsic
reality (svabhava) or unique particularity
(svalaksana). The Prasangika Madhyamaka,
however, rejects both these positions, and argues that only what is
*conventionally* non-intrinsic (nihsvabhava)
is causally effective, for only those phenomena, the conventional
nature of which is non-intrinsic, are subject to conditioned or
dependent arising. Conventional reality (here treated as dependently
arisen phenomenon), given it is causally effective, is therefore
*always* intrinsically unreal, and hence lacks any intrinsic
reality even conventionally. Hence that which is conventionally (or
dependently) coarisen is always conventionally (or dependently) arisen
and strictly does not arise ultimately.
In his etymological analysis of the term "convention"
(samvrti / kun rdzob) in [PP], Candrakirti attributes
three senses to the term:
1. confusion (avidya / mi shes pa), for it obstructs the mundane
cognitive processes of ordinary beings from seeing reality as
it is, by way of concealing (*samvrti* / *kun
rdzob*) its nature;
2. codependence (*paraparasambhavana / phan tshun brtan
pa*), for it is an *interdependent* phenomenon; and
3. signifier (samketa / brda) or *worldly convention*
(*lokavyavahara / 'jig rten tha snyad*), in the sense of being
dependently designated by means of expression and expressed,
consciousness and object of consciousness, etc. ([PP] 24.8 *Dbu
ma* 'a 163a).
Candrakirti claims that in the case of the mundane cognitive
processes of ordinary beings, the first sense of convention
eclipses the force of the second and the third senses. As a result,
far from understanding things as dependently arisen (the second sense)
and dependently designated (the third sense), ordinary beings reify
them as non-dependently arisen and non-designated. Due to the
force of this cognitive confusion, from the perspective of the mundane
cognitive processes of ordinary beings things appear real, and
each phenomenon appears intrinsically real in spite of its
non-intrinsic nature. This confusion, according to Candrakirti,
consequently defines the epistemic practices of ordinary beings.
For this reason the epistemic norms, or the standard set by the
*mundane convention* of ordinary beings, are poles apart
from the norms set by the mundane convention of the noble beings
(aryas / 'phags pa), whose mundane cognitive processes are
not under the sway of afflictive confusion.
In order to illustrate this distinction, in his commentary [MBh] 6.28
Candrakirti introduces the concept: *mere convention* (kun
rdzob tsam) of the noble beings to contrast with *conventionally
real* or *conventional true* (kun rdzob bden pa) of the
ordinary beings. Ordinary beings erroneously grasp all conditioned
phenomena as intrinsically real, therefore things are
*conventionally real* and thus are categories of conventional
truth. For noble beings, however, because they no longer reify them as
real, things are perceived as having the nature of being
*created* (bcos ma) and *unreal* (bden pa ma yin pa)
like the reflected image ([MBh] 6.28 *Dbu ma* 'a
255a).
According to Candrakirti's definition of the two truths, all
things embody a dual nature, for each is constituted by its conventional
nature and its ultimate nature. Consequently all entities, on
this definition, satisfy the criterion of the two truths, for the
definition of the two truths is based on the two natures. The two
truths are therefore not merely *one* specific nature
mirrored in two different perspectives ([MBh] 6.23 *Dbu ma*
'a 253a). Conventional nature entails conventional truth and
ultimate nature entails ultimate truth, and given that each entity is
constituted by the two natures, the two truths define what each entity
is in both ontological and epistemological terms.
Conventional nature is defined as conventional truth because it is the
domain of mundane cognitive process, and is readily accessible for
ordinary beings, including mundane cognitive process of noble beings.
It is a sort of *truth*, while *unreal* and illusory in
reality, it is yet erroneously and non-analytically taken for granted
by mundane cognitive processes of the ordinary beings. Ultimate
nature, on the other hand, is defined as ultimate truth because it is
domain of the exalted cognitive process which engages with its object
analytically, one that is directly accessible for noble beings
(aryas) and only inferentially accessible for ordinary beings. It
is this sort of truth, both *real* and non-illusory, that is
correctly and directly found out by exalted cognitive processes of
noble beings, and by analytic cognitive processes of ordinary
beings.
The mundane cognitive process that is associated with the definition
of conventional truth, and therefore allied to the perception of
*unreal* entities, is of two kinds ([M] 6.24 *Dbu ma*
'a 205a):
1. correct cognitive process, and
2. fallacious cognitive process.
Correct cognitive process is associated with an acute sense faculty,
which is not impaired by any occasional extraneous causes of
misperception (see below). Fallacious cognitive process is associated
with a defective sense faculty impaired by occasional extraneous
causes of misperception. In [PP] Candrakirti introduces us to a
similar epistemic distinction:
1. mundane convention (*lokasamvrti* /
'*jig rten gyi kun rdzob*) and
2. non-mundane convention (*alokasamvrti / 'jig
rten ma yin pa'i kun rdzob*).
Candrakirti's key argument in support of the distinction
between these two mundane epistemic practices--one *mundane*
convention and the other *non*-mundane convention--is that
the former is, by mundane standards, epistemically reliable, whereas
the latter is epistemically unreliable. Consequently the correct cognitive
processes of both ordinary and awakened beings satisfy the epistemic
standard of mundane convention, being non-*deceptive* by
mundane standards, and thus they set the standard of mundane convention;
whereas the fallacious cognitive processes of defective sense faculties,
in both ordinary and awakened beings, do not satisfy the epistemic
standard of mundane convention, and are therefore
*deceptive* even by mundane standards. They are thus
"non-mundane convention," as Candrakirti characterises
them in [PP] 24.8 (*Dbu ma* 'a 163ab).
Just as there are two kinds of sensory faculty--nonerroneous and
erroneous--their objects are, according to Candrakirti, of
two corresponding kinds:
1. conventionally real and
2. conventionally unreal.
First, cognitive processes associated with sense faculties that are
unimpaired by extraneous causes of misperception grasp the former kind
of object, and because this kind of object fulfils the realistic mundane
ontological standard, it is *real* for ordinary beings
(though not for noble beings, an issue to which we turn later). Hence
it is only *conventionally real*.
Second, cognitive processes associated with sense faculties that are
impaired by extraneous causes of misperception grasp conventionally
unreal objects, and because this kind of object does not meet the
realistic mundane ontological standard, it is
*conventionally unreal*.
Although Candrakirti recognises that conventional reality
satisfies the ontological standard of ordinary beings, he,
importantly, argues that it does not satisfy the ontological criterion
of the Madhyamaka (here synonymous with noble beings). Just as an illusion
is *partly real* although *partly unreal*, all objects
that are commonsensically considered conventionally real are in fact
*unreal*. The major difference between the two is that
knowledge of an illusion's being unreal is available to the mundane
cognitive processes of ordinary beings, whereas knowledge of the
unreality of conventionally real entities is not available to the mundane
cognitive processes of ordinary beings.
For Candrakirti, however, *illusory objects* are
conventionally real and causally efficient, and the
*conventionally real* entities themselves are illusion-like.
Candrakirti develops this argument using a two-tier theory of
illusion.
1. The argument from the first tier of the theory of illusion
demonstrates the causal efficiency of conventionally illusory objects.
This Candrakirti does by identifying *conventionally
illusory* entities with the so-called *conventionally real*
entities.
2. The argument from the second tier of the theory of illusion
demonstrates the *illusory* nature of *conventionally
real* entities. This Candrakirti does by showing
that entities that are *conventionally real* are conventionally
empty of *intrinsic reality*, and by denying conventional
reality the ontological status of the so-called *intrinsic
reality*.
Candrakirti supports his argument from the theory of illusion by
applying four categories of conventional entities:
1. Partly unreal entities--illusory objects (e.g., mirage,
reflection, echo, etc.);
2. Partly real entities--non-illusory objects (e.g., water, face,
sound, etc.);
3. Intrinsic reality (e.g., substance, essence, intrinsic nature,
etc.); and
4. Conventionally unreal entities (e.g., the rabbit's horn, the
sky-flower, etc.) ([MBh] 6.28 *Dbu ma* 'a
254b-255a).
Candrakirti accepts the conventional reality of the first two kinds of
entities--partly unreal (1) and partly real (2)--on the
ground that they both appear to mundane cognitive processes, which are
confused about their ultimate ontological status. Candrakirti
rejects the conventional-reality status of those entities that are
supposedly intrinsically real (3) and of those that are conventionally
unreal (4), although for different reasons. The conventional reality of an
intrinsically real entity is rejected in [MBh] 6.28 on the ground that
its existence never appears to mundane cognitive processes.
Candrakirti argues that there is a critical epistemic
differentiation to be made between the two
entities--conventionally real and conventionally illusory.
The deceptive, unreal, and dependently arisen nature of conventionally
real entities is beyond the grasp of mundane cognitive processes. By
contrast, the deceptive, unreal and dependently arisen nature of
conventionally illusory, partly unreal, entities is grasped by mundane
cognitive processes. Therefore illusory objects are regarded as
deceptive and unreal by the standard of mundane knowledge, whereas
conventionally real things are regarded as nondeceptive and real by
that same standard.
So the argument from the first tier of the theory of illusion shows that
the mundane cognitive processes of ordinary beings fail to *know*
the illusory and unreal nature of conventionally real entities; they
instead grasp them as intrinsically real. The argument from the
second tier of the theory offers Candrakirti's reason
why this is so. Here, he argues, the underlying
confusion operating beneath the mundane cognitive processes of
ordinary beings is the force by which ordinary beings intuitively and
erroneously reify the nature of conventional entities. Hence they grasp
conventional entities as intrinsically real, although on closer
analysis these entities are in fact only non-intrinsic, unreal or
illusory.
Therefore intrinsic reality is not a conventional reality, and grasping
things as intrinsically real is only a confused belief, not
knowledge. Hence Candrakirti clearly excludes it from the
ontological categories of conventional reality, on the ground that
intrinsic reality is a conceptual fiction fabricated by the confused minds
of ordinary beings, a conceptual construct thrust
upon non-intrinsic entities ([MBh] 6.28 *Dbu ma* 'a
255a). In addition, Candrakirti argues that if things were
intrinsically real even conventionally, as the Svatantrikas take
them to be, then things would exist by virtue
of their intrinsic reality, at least *conventionally*. If this
were the case, intrinsic reality would become the ultimate nature
of things. That would be absurd.
For Madhyamikas, things cannot have more than one final mode of
existence--if they exist, they must exist only
conventionally. Consequently, given the mutually exclusive relation
between intrinsic reality and emptiness, if intrinsic reality is granted even
conventional-reality status, Candrakirti contends, one has to
reject emptiness as the ultimate reality ([M] 6.34, *Dbu ma*
'a 205b). But since the Madhyamika asserts emptiness to be
the ultimate reality, and given that emptiness and intrinsic reality
are mutually exclusive, the Madhyamika must reject intrinsic reality
in all its forms--conventional as well as ultimate ([MBh]
6.36, 1994: 118).
#### 4.2.2 Ultimate truth
We now turn to Candrakirti's second thesis--namely, that only
what is *ultimately* non-intrinsic is causally effective, for
only those phenomena whose ultimate nature is non-intrinsic
are subject to conditioned arising. Ultimate reality (or emptiness),
given that it is causally effective, is therefore intrinsically unreal.
Therefore ultimate reality is ultimately unreal (or, put
differently, emptiness is ultimately empty).
Candrakirti defines ultimate reality as the nature of things
found by the perception of reality ([M] 6.23 *Dbu ma* 'a
205a). In the commentary Candrakirti expands this definition.
"Ultimate is the object, the nature of which is found by
*particular* exalted cognitive processes (yeshes) of those who
perceive reality. But it does not exist by virtue of its *intrinsic
objective reality* (svarupata / bdag gi ngo bo
nyid)." ([MBh] 6.23 *Dbu ma* 'a 253a) Of the two
natures, the object of the perception of reality is the way things
really are, and this is, Candrakirti explains, what it means to
be ultimate reality ([MBh] 6.23 *Dbu ma* 'a 253ab).
Candrakirti's definition raises a few important points.
First, by defining ultimate reality as "the nature of
things found by particular exalted cognitive processes (yeshes) of
those who perceive reality," he means that ultimate reality is not
found by just *any* exalted cognitive process; it must be found by
a particular exalted cognitive process--an analytic one--that
knows things just as they are. Second, by stating that "it does
not exist by virtue of its *intrinsic objective reality"*
(svarupata / bdag gi ngo bo nyid), he means that ultimate reality
is no more intrinsically real than conventional reality is.
Third, he says that ultimate reality is the first
of the two natures of things found by the perception of reality.
This means that ultimate reality is the ultimate nature of all conventionally
real things. For they all have an ultimate nature representing their
ultimate status, just as they have a conventional nature representing their
conventional status.
Ultimate truth, according to Candrakirti, is differentiated by the
Buddha into two aspects for the purpose of liberating all sentient
beings ([M] 6.179 *Dbu ma* 'a 213a):
1. nonself of person (pudgalanairatmya) and
2. nonself of phenomena (dharmanairatmya)
As we shall see, Candrakirti's arguments come from the two domains of
analysis he employs to account for his theory of ultimate truth: the
analysis of the personal self (pudgala-atmya / gang zag gi bdag) and the
analysis of the phenomenal self (dharma-atmya / chos kyi bdag). The
analysis will demonstrate whether or not conventionally real phenomena
and persons are more than what they are conventionally. So we shall
consider Candrakirti's arguments under these rubrics:
1. The not-self argument: unreality of the conventionally real
person;
2. The emptiness argument: unreality of conventionally real
phenomena;
3. The emptiness is emptiness argument: unreality of ultimate
reality.
Let us begin with the first. The not-self argument demonstrates, as
we shall see, the unreality of the conventionally real person. We shall
develop this argument of Candrakirti progressively in three
steps:
* correlation between personal self and the five aggregates;
* arguments against personal self; and
* reasons for positing personal self as merely dependently
designated.
In order to appreciate Candrakirti's not-self argument, or the
argument that shows the unreality of the conventionally real person, we need
to understand Candrakirti's purpose in refuting the personal self.
According to Candrakirti, before we go any further it is important
that we look into the question of where the assumption of a self
arises. He argues, as all Buddhist philosophers do, that the
notion or assumption of a self arises in relation to
personhood. After all, it is an issue of personal identity. To analyse the
self therefore requires us to look closely among the parts that
constitute a person; and given that the conception of self is embedded
in personhood, which is constituted by the five aggregates, we need to
examine the aggregates' relation to our conception of self.
Suffice it here to highlight the important relation that exists
between the aggregates and the notion of self. According to
Candrakirti, the five aggregates are the primary categories upon
which we draw to examine the phenomenal structure of sentient
beings, for the following reasons: (1) they serve as a
phenomenological scheme for examining the nature of human experience;
(2) the way in which human beings construct conceptions of the self is
based upon different existential experiences within the framework of
the five aggregates; (3) they are the basis upon which we develop the
sense of self through the view that objectifies the five aggregates,
which are transient and composite
(satkayadrsti / 'jig tshogs la lta
ba), either by appropriating the aggregates or by falsely identifying
them with the self ([MBh] 6.120, *Dbu ma* 'a 292ab);
(4) they are the objective domain of the arising of the defilements that
cause us suffering, since the false view objectifying the aggregates
(satkayadrsti / 'jig tshogs la lta
ba) as a real self leads to the arising of all "afflictive
defilements"; and finally (5) the five aggregates are the
objective domain of the knowledge of the unreality of the personal self,
for the simple reason that, since confusion and misconceptions of self arise
in relation to the aggregates, it is within the same framework that
the realisation of nonself must emerge.
For Candrakirti, then, it is clear that the five aggregates exhaust
"all" or "everything" that could be considered
the basis for developing the misconception of a reified *self*, and
also the basis for correct knowledge of non-self--i.e., of the
reality of the person or self.
Candrakirti develops his arguments against the personal self in
great detail in chapter six of [M]/[MBh]. Suffice it here to consider one
such argument, called "the argument from the sevenfold analysis
of a chariot" (shing rta rnams bdun gyi rigs pa). This argument
is, however, more or less a way of putting all his arguments together.
In it, Candrakirti concludes: "the basis of the conception
of self, when analysed, is not plausible as an entity. It is not
plausible as different from the aggregates, nor as the identity of the
aggregates, nor as dependent (skandhadhara) on the aggregates. The
'basis' (skandhadhara) at issue is a compound that
exposes the implausibility of self as both the container
(adhara / rten) and the contained (adheya / brten pa). The
self does not own the aggregates. This is established in codependence
on the aggregates" ([M] 6.150, *Dbu ma* 'a 221b).
Putting this argument analogically, Candrakirti asserts that the
self is, in terms of not being intrinsically real, analogous to a
chariot: "We do not accept a chariot to be different from its
own parts, nor to be identical [to its parts], nor to be in possession
of them, nor is it *in* the parts, nor are the parts in it, nor
is it the mere composite [of its parts], nor is it the shape [or size
of those parts]" ([M] 6.151 *Dbu ma* 'a
221b).
So the structure of the nonself "chariot" argument that
demonstrates unreality of a conventionally real person can be briefly
stated as follows:
* P1 Self is not distinct from the aggregates.
* P2 Self is not identical to the aggregates.
* P3 Self and the aggregates are not codependent upon each
other.
* P4 Self and the aggregates do not own each other.
* P5 Self is not the shape of the aggregates.
* P6 Self is not the collection of the aggregates.
* P7 There is nothing more to a person than the five
aggregates.
* C Therefore a real personal self does not exist.
There is no ultimately real self that can be logically proven to
exist. If it did exist, it would have to be found when subjected to
this sevenfold analysis. But the self is not found to be
different from the aggregates, nor identical to the aggregates, nor
in possession of them; nor is the self *in* the aggregates,
nor are the aggregates *in* the self; nor is the self the mere
composite of the aggregates, nor the shape or size of the
aggregates. Since there is no more to a person than a self dependently
designated upon the basis of the five aggregates, which
exhaust all that there is to a person, Candrakirti
concludes that there is no real self anywhere whatsoever to be found
in the five aggregates.
From this argument Candrakirti concludes that, both ultimately
and conventionally, any intrinsic reality of a personal self must
remain unproven. Therefore, according to each step of the sevenfold
analysis, a personal self is unreal and empty of any intrinsic reality. What
Candrakirti does not conclude from this argument, though, is the
implausibility of a conventionally real self for everyday purposes. As
a matter of fact, Candrakirti argues, the argument from the
sevenfold analysis reinforces the idea that only a conventionally real
self makes sense, for such a self is dependently
designated on the aggregates just as a chariot is dependently
designated on its parts (*M / MBh* 6.158). Just as the chariot
is conveniently designated in dependence on its parts and is
referred to in the world as an "agent," so a
conventional self established through dependent designation
serves as a conventionally real moral agent. Therefore, although the
argument from the sevenfold analysis denies the existence of an
intrinsically real or ultimately real self, it does not
entail a denial of a conventionally real self (*M / MBh*
6.159).
It turns out, therefore, that a self is merely a convenient designation,
a meaningful label given to the five aggregates and taken for
granted for everyday purposes as an agent. It is this nominal
self that serves the purpose of a moral agent. This nominal or
conventional self is comparable to anything that exists
conventionally; Candrakirti compares it to a chariot.
Second, Candrakirti's argument from emptiness demonstrates the
unreality of conventionally real phenomena by appealing
to the causal processes at work in producing these entities. The
understanding here is that the way in which things arise and come into
existence definitively informs us about how things actually are. This
argument is an extension of Nagarjuna's famous tetralemma
argument, which states: "neither from itself nor from another nor
from both, nor without cause does anything anywhere, ever
arise" ([MMK] 1.1). Candrakirti develops this argument in
great detail in chapter 6 of [M]/[MBh]; we shall only take a quick look
at its overall strategy without delving into its details.
If entities are intrinsically real, there are only two alternative ways
for them to arise: either they arise (a) causally or (b) causelessly.
If one holds that intrinsically real things arise causally, then there
are only three possible ways for them to arise: from (1)
themselves, (2) another, or (3) both.
>
>
> P1 If an intrinsic entity arose from itself, it would absurdly follow that
> cause and effect would be identical and would exist
> simultaneously. In that case, the production of an effect would be
> pointless, for the effect would already be in existence. It is thus
> unreasonable to assume that something already arisen might arise all
> over again ([M] 6.8). If an entity in existence still required arising,
> an infinite regress would follow ([M] 6.9-13).
>
>
>
> P2 If an intrinsically real entity arose from another, then, since
> cause and effect would be distinct, all causes would be equally
> 'other', and anything could arise from anything (like fire from
> pitch-darkness) ([M] 6.14-21).
>
>
>
> P3 If an intrinsic entity arose from both itself and another, then
> both the *reductio ad absurdum* arguments in P1 and P2 would apply
> ([M] 6.98).
>
>
>
> P4 If an intrinsic entity arose causelessly, then anything could
> arise from anything ([M] 6.99-103).
>
>
>
> Thus it is not possible for any intrinsic entity to exist, because no
> such entity could be causally produced either from itself, or from
> another, or from both, or causelessly. It follows therefore that any
> entity that arises causally is not intrinsically real; hence it is a
> dependently originated entity ([M] 6.104).
>
>
>
Candrakirti therefore insists that entities that are
intrinsically unreal are not nonexistent, unlike intrinsically real
entities, which are nonexistent in the manner of the rabbit's horn. Even
though they are not intrinsically real, and hence remain unproduced
according to the four analyses, intrinsically unreal entities, unlike
the rabbit's horn, are objects of mundane cognitive
processes, and they arise codependently ([M] 110-114). Hence,
summarising the argument, Candrakirti states: "Entities do
not arise causelessly, and they do not arise through causes like God,
for example. Nor do they arise out of themselves, nor from another,
nor from both. They arise codependently." ([M] 6.114, Dbu ma
'a 209b) So because entities arise codependently, according to
Candrakirti, reified concepts such as intrinsic entities cannot
stand up under logical analysis. The argument from dependent arising
therefore eliminates the web of erroneous views presupposing intrinsic
reality. If entities were intrinsically real, then they would have to
arise from themselves, from another, from both, or causelessly. But
upon critical analysis any such entity proves to be nonexistent
([M]/[MBh] 6.115-116).
Third, Candrakirti's argument from the emptiness of emptiness
demonstrates the unreality of ultimate reality. The
"emptiness argument" and the
"nonself argument" show that all conventionally real
phenomena and persons are empty of intrinsic reality. As we have seen,
from a critical rational point of view it is demonstrated that there
are no ultimately real or ultimately existent entities, and no real self
of the person, to be found. The two arguments do not, however, show that
emptiness *per se* is empty, nor do they show that
selflessness is itself selfless. This raises an important question:
does this mean that *emptiness* is the intrinsic reality of
conventionally real entities, and selflessness the intrinsic reality
of conventionally real persons? Putting the question differently:
could a Madhyamika claim that emptiness and selflessness are
ultimately real or intrinsically real? This question is
reasonable. We tend to posit ultimate reality as something
timeless, independent, transcendent, nondual, and so on. So if emptiness is
the ultimate reality of all phenomena, and if selflessness is the
ultimate reality of all persons, could we say that emptiness and
selflessness are *nonempty*, *ultimately real* or
*intrinsically real*?
In order to answer this question, Candrakirti maintains that the
Buddha reclassifies emptiness and nonself under various
aspects: "He elaborated sixteen emptinesses and
subsequently he condensed these into four explanations which are
adopted by the Mahayana." ([M] 6.180 *Dbu ma*
'a 213a) We shall consider one example here. Candrakirti
argues: "The eye is empty of eye[ness] because that is its
reality. The ear, nose, tongue, body and mind are also to be explained
in this way." ([M] 6.181 *Dbu ma* 'a 213ab)
"Because each of them is *non-eternal* (*ther zugs tu
gnas pa ma yin*), not *subject to disintegration*
(*'jig pa ma yin pa*), and its reality is
*non-intrinsic*, [the emptiness of] the eye and the other
faculties is accepted as 'internal emptiness'
(*adhyatmasunyata*)" ([M] 6.181
*Dbu ma* 'a 213b).
On Candrakirti's view, just as the chariot, the self and other
conventionally real phenomena are intrinsically unreal, so too are
emptiness and nonself. Emptiness is also empty of intrinsic
reality. Candrakirti explains the emptiness of emptiness as follows:
>
> The emptiness of intrinsic reality of things is itself called by the
> wise as 'emptiness,' and this emptiness also is considered
> to be empty of any intrinsic reality. ([M] 6.185 *Dbu
> ma* 'a 213b)
>
>
> The emptiness of that which is called
> 'emptiness' is accepted as 'the emptiness of
> emptiness' (sunyatasunyata). It is
> explained in this way for the purpose of controverting objectification
> of the emptiness as intrinsically real (bhava). ([M] 6.186
> *Dbu ma* 'a 213b)
>
>
>
Consider also Candrakirti's argument against positing emptiness
as ultimately real, which he develops in his commentary on [MMK] 13.7.
If emptiness were ultimately real, emptiness would not be empty of
intrinsic reality--the supposed essence of conventionally real objects. In
that case we would have to grant that emptiness exists independently of
conventionally real entities, as their underlying substratum. If this
is granted, the emptiness of a chariot and the empty entity, the
chariot itself, would be quite distinct and unrelated. Moreover, if the
emptiness of the chariot is nonempty, i.e., ultimately real,
whereas the chariot itself is empty, i.e., ultimately unreal, then one
has to posit two distinct and contradictory verifiable realities even
for one conventionally real chariot. But since there is not even the
slightest nonempty phenomenon verified in the chariot--not even
the slightest bit of entity withstanding critical
analysis--the emptiness of the chariot, like the chariot itself,
cannot plausibly be ultimately real. Like the chariot itself, the
emptiness of the chariot is ultimately empty.
Therefore, while emptiness is the ultimate truth of conventionally
real entities, it is not plausible to posit emptiness as ultimately
real. Similarly, while nonself is the ultimate truth of the personal
self, nonself cannot plausibly be ultimately real. Therefore nonself
is not considered the essence of persons or selves.
So, to wind up the discussion of the Prasangika's theory of
the two truths: just as conventional truth is empty of intrinsic
reality, and hence ultimately unreal, so too is ultimate truth empty of
intrinsic reality, and hence ultimately unreal. It is therefore
demonstrated that nothing is ultimately real for Candrakirti;
everything is empty of intrinsic reality. The heart of
Candrakirti's theory of the two truths is the argument that only because
they are empty of intrinsic reality can conventionally real entities
be causally efficient. Only in the context of a categorical rejection
of any foundationalism of intrinsic reality--both conventional
and ultimate--can there be, Candrakirti insists, mundane practices
rooted in the mutually interdependent character of *cognitive
processes* and *objects cognised*. For this reason
Candrakirti concludes that for whomever emptiness makes sense,
everything makes sense; for whomever emptiness makes no sense,
dependent arising makes no sense, and hence nothing makes sense
([PP] 24.14, *Dbu ma* 'a 166b).
## 5. Conclusion
To sum up: although this entry provides just an overview of the theories
of the two truths in Indian Buddhism, it
nevertheless offers us enough reason to believe that there is no
*single* theory of the two truths in Indian Buddhism. As we
have seen, there are many competing theories, some of which are
highly complex and sophisticated. The essay clearly shows, however,
that except for the Prasangika's theory of the two truths,
which unconditionally rejects all forms of *foundationalism*
both conventionally and ultimately, all other theories of the two
truths, while rejecting some forms of foundationalism, embrace another
form of foundationalism. The Sarvastivadin (or
Vaibhasika) theory rejects the substance-metaphysics of the
Brahmanical schools, yet it claims the irreducible spatial units
(e.g., atoms of the material category) and irreducible temporal units
(e.g., point-instant consciousnesses) of the five basic categories as
ultimate truths, which ground conventional truth, which is composed
only of reducible spatial wholes or temporal continua. Based on the
same metaphysical assumption, although with modified definitions,
the Sautrantika argues that the unique particulars
(svalaksana), which, they say, are ultimately causally
efficient, are ultimately real, whereas the universals
(samanyalaksana), which are only
conceptually constructed, are only conventionally real. Rejecting the
Abhidharmika realism, the Yogacara proposes a form of
idealism in which it is argued that only mental impressions are
conventionally real and the nondual perfect nature is the ultimately real.
The Svatantrika Madhyamaka, however, rejects both the
Abhidharmika realism and the Yogacara idealism as
philosophically incoherent. It argues that things are
intrinsically real only *conventionally*, for this ensures their
causal efficiency; things do not need to be intrinsically real
ultimately. It therefore proposes the theory that
*conventionally* all phenomena are intrinsically real
(svabhavatah) whereas *ultimately* all phenomena are
intrinsically unreal (nihsvabhavatah). Finally, the
Prasangika Madhyamaka rejects all these theories of the two
truths, including the one advanced by its Madhyamaka counterpart,
the Svatantrika, on the ground that they are all
metaphysically too stringent and do not provide the ontological
malleability necessary for the ontological identity of conventional
truth (dependent arising) and ultimate truth (emptiness). It therefore
proposes a theory of the two truths in which the notion of intrinsic
reality is categorically denied. It argues that only when conventional
truth and ultimate truth are both *conventionally* and
*ultimately* non-intrinsic can they be causally effective.
## 1. Nyingma
Longchen Rabjam sets out the course of the Nyingma theory of the two
truths, and the later philosophers of the school took a similar stance
without varying much in essence. In the *Treasure* he begins the
section on the Madhyamaka theory of the two truths as follows:
"The Madhyamaka tradition is the secret and profound teachings
of the [Sakya]muni. Although it constitutes five
ontological categories, the two truths subsume them" (Longchen,
1983: 204f).
The Nyingma defines conventional truth as consisting of unreal phenomena
that appear to be real to the erroneous cognitive processes of
ordinary beings, while ultimate truth is the reality that transcends any
mode of thinking and speech, one that unmistakably appears to the
nonerroneous cognitive processes of exalted and awakened beings.
In other words, (1) ultimate truth represents the perspective of the
epistemically correct and warranted cognitive processes of exalted
beings ('phags pa), whereas (2) conventional truth represents
the perspective of the epistemically deceptive and unwarranted cognitive
processes of ordinary beings (Mipham Rinpoche 1993d:
543-544).
Let us consider these two definitions in turn. The Nyingma supports
definition (1) with two premises. The first says that the cognitive
processes of exalted ('phags pa) and awakened beings (sangs
rgyas) are epistemically correct and non-deceptive, because "It
is in relation to this cognitive content that the realisation of the
ultimate truth is so designated. The object of an exalted cognitive
process consists of the way things really are (gshis kyi gnas lugs),
phenomena as they really are (chos kyi dbyings) which is undefiled by
its very nature (rang bzin dag pa)" (Longchen 1983: 202f).
However, ultimate truth for the Nyingma is not an object per se in the
usual sense of the word. It is an object only in a metaphorical sense.
"From the [ultimate] perspective of the meditative equipoise of the
realised (sa thob) and awakened beings (sangs rgyas), there exists
neither object of knowledge (shes bya) nor knowing cognitive process
(shes byed) and so forth, for there is neither object to apprehend nor
the subject that does the apprehending. Even the exalted cognitive
process (yeshes) as a subject ceases (zhi ba) to operate"
(Longchen 1983: 201f). Therefore "At this stage, [the Nyingma]
accepts the total termination (chad) of *all* the continua
(rgyun) of the cognitive processes ('jug pa) of the mind (sems)
and mental factors (sems las byung ba). This exalted cognitive process
is inexpressible, beyond words and thoughts (smra bsam brjod du
med pa'i yeshes), and is thus designated (btags pa) a
*correct* and *unmistaken* cognitive process (yang dag
pa'i blo ma khrul ba), as it knows reality as it is"
(Longchen 1983: 201f).
The second premise comes from the Nyingma's transcendence theory. According
to this theory, ultimate truth constitutes reality, and reality
constitutes the transcendence of all elaborations. Reality is that which
cannot be comprehended by means of linguistic and conceptual
elaboration, as it is utterly beyond the grasp of words and thoughts,
which merely defile cognitive states. Given that exalted beings
are free of defiled mental states, all forms of thought and
conception are terminated in their realization of the ultimate truth.
Ultimate truth transcends all elaborations and thus remains untouched by
philosophical speculation (Longchen, 1983: 203f). "In short
the characteristic of [ultimate truth] is that of nirvana,
which is profound and peaceful. It is intrinsically unadulterated
domain (dbyings). The cognitive process by means of which ultimate
truth is realized must therefore be free from all cognitive
limitations (sgribs pa), for it is disclosed to the awakened beings
(sangs rgyas) in whose exalted cognitive processes appear the objects
as they really are without being altered" (Longchen, 1983:
204f).
Nyingma's defence of the definition (2) also has two premises. The
first one comes from its theory of error, the most commonly used by
the Nyingma philosophers. According to this theory, conventional truth
is, as a matter of fact, an error confined to the ordinary beings (so so
skye bo) who are blinded by the dispositions (bag chags) of confusion
(ma rig pa). It is argued that under the sway of confusion the
ordinary beings falsely and erroneously believe in the reality of
entirely unreal entities and the truth of wholly false epistemic
instruments, just like people who mistakenly grasp cataracts and
falling hairs to be real objects. Conventional truths are mere errors
that appear real to the ordinary beings, but they are in fact no more
real than the falling hairs that are reducible to the "modes
of apprehension" (*snang tshul*) (Mipham Rinpoche
1993c: 3, 1977: 80-81ff, 1993d: 543-544).
The second premise comes from its representationalist or
elaboration-theory which says that conventional truths constitute
merely mental elaborations (spros pa) represented to appear (rnam par
snang ba) in the minds of the ordinary beings as if they were realities
having the subject-object relation. They are thus deceptive, since they
are produced through the power of the underlying cognitive confusion
or ignorance, and they do not cohere with any corresponding
external reality. Confusion is *samvrti* because
it conceals (*sgrib*) the nature; it fabricates all conditioned
phenomena to appear as if they are real. Even though conventional
truths are to be eventually eradicated (spang bya), the
representational images of the conventional reality will nevertheless
continue to appear in the minds of even those who are highly realized
beings, that is, until they have achieved a complete cessation of mind and
mental states (Longchen, 1983: 203f). It is argued that when the eye
that is affected by cataracts mistakenly sees hairs falling in
containers, the healthy eye that is not affected by cataracts does not
even perceive the appearances of falling hairs. Likewise, those who
are affected by the cataracts of afflictive confusion see things as
intrinsically real, hence for them things are conventionally real.
Those noble beings who are free from the afflictive confusion and the
awakened beings who are free from even the non-afflictive confusion
see things as they are (ultimate truth). Just as the person without
cataracts does not see the falling hairs, the noble beings and
awakened beings do not see any reality of things at all.
Mipham Rinpoche also proposes another definition of the two truths
which diverges significantly from the one we have seen. Longchen's
definition is based on two radically opposed epistemic criteria,
whereas Mipham's definition, as we shall see, rests on two radically
opposed ontological criteria--one that withstands reasoned
analysis and one that does not withstand reasoned analysis. In
his *Clearing Dhamchoe's Doubts* (*Dam chos dogs sel*),
Mipham says that: "Reality as it is (de bzhin nyid) is
established as ultimately real (bden par grub pa). Conventional
entities are actually established as unreal; they are subject to
deception. Being devoid of such characteristics, the ultimate is
characterized as real, not unreal, and non-deceptive. If this
[reality] does not exist," in Mipham's view, "then it
would be impossible even for the noble beings (arya / phags pa)
to perceive reality. They would, instead, perceive objects that are
unreal and deceptive. If that is the case, everyone would be ordinary
beings. There would be no one who will attain liberation"
(1993a: 602).
Mipham anticipates objections to his definition from his Gelug
opponents when he writes, "Someone may object: although ultimate
truth is real (bden pa), its reality is not established ultimately
(bden grub), because to be established ultimately is for it to
withstand reasoned analysis (rigs pas dpyad bzod)" (Mipham
1993a: 603). The response from Mipham makes it even clearer.
"Conventional truth is not ultimately real (bden par grub pa)
because it is not able to withstand reasoned analysis, in spite of the
fact that those that are conventionally real (yang dag kun rdzob) are
nominally real (tha snyad du bden pa) and are the objects of the
dualistic cognitive processes" (Mipham 1993a: 603). In contrast:
"Reality (*dharmata / chos nyid)*, ultimate truth,
is really established (bden par grub pa) on the ground that it is
established as the object of nondual exalted cognitive
process. Besides, it withstands reasoned analysis, for no
logical reasoning whatsoever can undermine it or destroy it.
For that reason, so long as it does not withstand reasoned analysis, it
is not ultimate; it would be a conventional entity" (Mipham
1993a: 603). According to Mipham's argument, an object that cannot
withstand reasoned analysis is the kind of object that mundane
cognitive processes find dually, and it is not the kind of object
found by the exalted cognitive processes that perceive ultimate truth
nondually (Mipham 1993a: 603).
As we have noticed, Mipham's definition seems to have departed from
Longchen's somewhat significantly. In the final analysis, however,
there does not seem to be much variation. Mipham Rinpoche also is
committed to the representationalist argument to ascend to nonduality:
"In the end," he says, "there are no external
objects. It is evident that they appear due to the force of mental
impressions ... All texts that supposedly demonstrate the
existence of external objects are provisional [descriptions of] their
appearances"(Mipham 1977: 159-60ff). Consequently whatever
appears to exist externally, according to Nyingma, "is like a
horse or an elephant appearing in a dream. When it is subjected to
logical analysis, it finally boils down to the interdependent inner
predispositions. And this is at the heart of Buddhist
philosophy" (Mipham 1977: 159-60ff).
Following on from its definition, Nyingma argues that there is no one
entity that can be taken as the basis from which the two truths are
divided, since there is no one entity which is both real (true) and
unreal (false). Hence it proposes the two truths division based on the
two types of epistemic practices "because it confirms a direct
opposition between that which is free of the elaborations and that
which is not free of the elaborations, and between what is to be
affirmed and what is to be negated excluding the possibility of the
third alternative. Thus it ascertains the two" (Longchen, 1983:
205-206ff). Nyingma has two key arguments to support the claim that
the two truths division is an epistemic one. The first argument states,
"It is definite that we posit the objects (yul rnams) in
dependence upon the subjects (yul can). The subjects are exclusively
two types--they are either ultimately (mthar thug pa) fallacious
cognitive processes (khrul ba'i blo) or ultimately (mthar thug pa)
non-fallacious cognitive processes (ma 'khrul ba'i blo). The
fallacious cognitive processes posit [conventional truth]--all
samsaric phenomena--whereas the non-fallacious
cognitive processes posit the [ultimate] reality. Therefore it is due
to the cognitive processes that we posit the objects in terms of two
[truths]" (Longchen, 1983: 206f) The second argument states that
ultimate truth is not an objective domain of the cognitive processes
with the representational images (dmigs bcas kyi blo'i spyod yul), for
the reason that it may be known by means of the exalted cognitive
processes (yeshes) with no representational image (dmigs pa med).
(Longchen, 1983: 206f). In contrast, conventional truth is an objective
domain of the cognitive processes with the representational images
(dmigs bcas kyi blo'i spyod yul), for the reason that it may be known
by means of the cognitive processes having the representational images
(dmigs bcas kyi blo'i spyod yul).
Nyingma categorically rejects commitment to any philosophical
position, for it would entail a commitment to, at least, some forms of
elaborations. This is so since "the Prasangika
rejects all philosophical systems, and does not accept any self-styled
philosophical elaboration (rang las spros pas grub mtha')"
(Longchen, 1983: 210ff). The Prasangika therefore presents
the theory of the two truths and so forth by merely designating them
in accordance with the mundane fabrications (sgro btags) (Longchen
1983: 211f). "Therefore ultimate truth which is transcendent of
all elaborations cannot be expressed either as identical to or
separate from the conventional truth. They are rather *merely
different in terms of negating the oneness*" (Longchen,
1983: 192-3ff, Mipham 1977: 84f). Interestingly, Mipham
Rinpoche also says that "the two truths constitute a single
entity but different conceptual identities *(ngo bo gcig la ldog pa
tha dad)*." "This is because," he says,
"appearance and emptiness are indistinguishable. This is
ascertained through reliable cognitions (tshad ma) by means of which
the two truths are investigated. What appears is empty. If emptiness
were to exist separately from what appears, the reality of that
[apparent] phenomenon would not be empty. Thus the two [truths] are
not distinct" (Mipham 1977: 81f). The identity of ultimate truth
at issue here, according to Mipham, is that of absolute ultimate (rnam
grangs min pa'i don dam ste), rather than provisional ultimate. This
is because the kind of ultimate truth under discussion when we are
discussing its relation with conventional truth is the ultimate truth
"which is beyond the bound of any expression, although it is an
object of direct perception" (Mipham 1977: 81f).
## 2. Kagyu
According to Karmapa Mikyo Dorje, Atisha and his followers
in Tibet are the authoritative former masters of the
Prasangika. It is this line of reading of the
Prasangika that Kagyu takes to be authoritative and
that the later Kagyu wholeheartedly embraces as the standard
position (Mikyo Dorje, 2006: 272-273). Karmapa Mikyo
Dorje claims that it is a characteristic of Kagyu masters and
their followers to recognise the ancient Prasangika
tradition to be philosophically impeccable ('khrul med) (2006:
272).
Unlike the other Tibetan schools, Kagyu rejects the more dominant
view that each philosophical school in Indian Buddhism has its own
distinctive theory of the two truths. "There are those who
advance various theories proclaiming that the schools from the
Vaibhasika on up to the Prasangika Madhyamaka propose
distinctive definitions of the two truths, with unique rational and unique
textual bases. This is truly," says Karmapa Mikyo Dorje,
an indication "of the failure to understand the logic, and the
differentiation between the definition (mtshan nyid) and the defined
example (mtshan gzhi)" (Mikyo Dorje 2006: 293). The Indian
Buddhist philosophical schools only disagree, according to Kagyu,
on the issue concerning the defined example (mtshan gzhi) of the two
truths. The instances of ultimate truth in each lower school are, it
argues, their philosophical reifications, hence, they are the ones the
higher schools logically refute. There is nevertheless no disagreement
between the Indian Buddhist schools as far as the definiendum (mtshon
bya / laksya) and the definition (mtshan nyid /
laksana) of the two truths are concerned. If they disagreed
on these two issues, there would be no commonly agreed definition, but
there is a definition of the ultimate which is common to all schools,
even though there is no commonly agreed defined example to illustrate
such definition (Mikyo Dorje 2006: 293).
Where the Indian Buddhist philosophical schools part company,
Kagyu argues, is in their conception of the defined example of the
ultimate. For the realists (dngos smra ba) ultimate truth is a
foundational entity (rdzas su yod pa) that withstands rational
analysis. The Madhyamaka rejects this and argues that if ultimate
truth is a foundational entity, it is not possible to attain an
awakening to it. If it is possible to attain awakening, then ultimate
truth cannot be a foundational entity, for the two mutually
exclude each other. A foundational entity, like the substance of the
Brahmanical metaphysicians, does not allow modification or change to
take place whereas the attainment of awakening is an ongoing process
of modification and change (Mikyo Dorje 2006: 294).
From the position Kagyu takes, it is clear then, that it does not
endorse Gelug's position that the Prasangika has its
own theory of the two truths which affords a unique framework within
which a presentation of the definition and the defined example of the
two truths can be set out (see Yakherds 2021 vol. 1 for the Gelugpa
critiques). Kagyu argues that the Prasangika's
project is purely for pedagogical and heuristic reasons, "for it
seeks to assuage the cognitive errors of its opponents--the
non-Buddhists, our own schools such as Vaibhasika on up to
the Svatantrika--who assert philosophical positions based on
certain texts" (Mikyo Dorje 2006: 294). The
non-Prasangika philosophical schools equate the definition
of the ultimate with the defined example, and this is the malady that
the Prasangika seeks to allay. Therefore the
Prasangika's role, as Kagyu sees it, is to point
out the absurdities in the non-Prasangika position and to
show that the definition of the ultimate and the defined example are
rather two separate issues, and that they cannot be identified with
each other (Mikyo Dorje 2006: 294-95). The
Prasangika does not, like Nyingma, entertain any position
of its own, except for therapeutic purposes, and for
which "the Prasangika relies on the convention as a
framework for things that need to be cultivated and those that are to
be abandoned. For this, our own position does not need to base the
linguistic conventions on the concepts of definition and the defined
example. Instead [our position]," says Kagyu,
"conforms to the non-analytical mundane practice in which what
needs to be abandoned and what needs to be cultivated are undertaken
in agreement with the real-unreal discourses of the conventional
[truth]" (Mikyo Dorje 2006: 295).
Therefore, according to Kagyu, the positions of all Indian Buddhist
philosophical schools--higher or lower--agree in so
far as the definitions of the two truths are concerned. (1)
"[Ultimate truth] is that which reasoned analysis does not
undermine and that which withstands reasoned analysis. Conventional
truth is the reverse. (2) Ultimate is the non-deceptive nature and it
exists as reality. Conventional [truth] is the reverse. (3) Ultimate
is the object of the non-erroneous subject. Conventional [truth] is
the object of the erroneous subject" (Mikyo Dorje 2006:
294). Of these three definitions, the first proposes that conventional
truth does not withstand reasoned analysis, whereas ultimate truth
does indeed withstand reasoned analysis.
"Conventional [truth] is a phenomenon that is found to exist
from the perspective of the cognitive processes that are either
non-analytical (ma dpyad pa) or slightly analytical (cung zad dpyad
pa), whereas ultimate [truth] is that which is found either from the
perspective of the cognitive process that is thoroughly analytical
(shin tu legs par dpyad pa) or meditative equipoise"
(Mikyo Dorje 2006: 274). Conventional truth is non-analytical in
the sense that ordinary beings assume its reality uncritically, and
become fixated on its reifications in virtue of how it appears to
their uncritical minds, rather than how it really is from the critical
cognitive perspective. When conventional truth is slightly analysed,
however, it is revealed to exist merely as collocations of
codependent causes and conditions (Mikyo Dorje 2006: 273).
Therefore conventional truth is "one that is devoid of an
intrinsic reality, rather it is *mere name* (ming tsam),
*mere sign* (rda tsam), *mere linguistic convention*
(tha snyad tsam), *mere conception* (rnam par rtog pa tsam) and
*mere fabrication* (sgro btags pa tsam)--one that merely
arises or ceases due to the force of the expressions or the conceptual
linguistic convention" (Mikyo Dorje 2006: 273). Ultimate
truth is defined, on the other hand, as one that is found by
analytical cognitive process or by a meditative equipoise of the
exalted beings, for the reason that, when subjected to analysis,
phenomena are fundamentally, by their very nature, transcendent of
elaborations (spros pa) and all symbolisms (mtshan ma) (Mikyo
Dorje 2006: 273).
On the second definition "Conventional truth is that which does
not deceive the perspective to which it appears (snang ngor), or that
which does not deceive an erroneous perspective ('khrul ngor),
or that which is non-deceptive according to the standard of the
mundane convention" (Kun Khyen Padkar, 2004: 66). Ultimate
truth, on the other hand, is defined as "that which does not
deceive a correct perception (yang dag pa'i mthong ba), or that which
does not deceive reality as it is (gnas lugs la mi bslu ba), or that
which does not deceive an awakened perspective (sang rgyas kyi gzigs
ngor)" (Kun Khyen Padkar, 2004: 74).
The third definition is based on the two conflicting epistemic
practices. Here ultimate is defined as an ultimate object from the
perspective of exalted beings (arya / 'phags pa).
Ultimate reality, however, is not regarded as cognitively
found to exist in and of itself inherently (rang gi bdag nyid).
Conventional truth is defined as the cognitive process that sees
unreal phenomena (brdzun pa mthong ba) from the perspective of the
ordinary beings whose cognitive processes are obscured by the
cataracts of confusion. Conventional truth is unreal, for it does not
exist in the manner in which it is perceived by the confused cognitive
processes of the ordinary beings (Mikyo Dorje 2006:
269-70). Karmapa Mikyo Dorje puts the point this way:
"Take a vase as one entity, for instance. It is a conventional
entity, for it is a reality for the ordinary beings, and it is found
to be the basis from which arise varieties of hypothetical
fabrications. The exalted beings," he argues, "however, do
not see any of these [fabrications], anywhere, whatsoever, and this
mode of *seeing by way of not seeing anything at all* is termed
as *the seeing of the ultimate*. Therefore the two truths, in
this sense, are not distinct. They are differentiated from the
perspective of the cognitive processes that are either erroneous or
non-erroneous" (Mikyo Dorje, 2006: 274).
Like Nyingma and Sakya, Kagyu maintains
that erroneous cognitive processes represent the nature of the
ordinary beings, whereas non-erroneous cognitive processes represent
the exalted beings. Unlike Nyingma and Sakya, however, Kagyu
insists that the distinction between the two truths has nothing to do
with the perspective of the exalted beings. Everything, it claims, has
to do with how ordinary beings fabricate things erroneously--one
more real than the other--and nothing to do with how
exalted beings experience things. "Even this distinction is made
from the point of view of the cognitive process of the childish
beings. Since all things the childish beings perceive are
characteristically unreal (brdzun pa) and deceptive (bslu ba), they
constitute the conventionality. The exalted beings, however, do not
perceive at all anything in the way in which the childish beings
perceive and fabricate." So ultimate truth is, according to Kagyu,
that which is unseen and unfound by the exalted beings since it is the
way things really are. Therefore both the truths are taught for the
pedagogical reasons in accord with the perspective of the childish
beings, but not because the exalted beings experienced the two truths
or that there really exist the two truths (Mikyo Dorje 2006:
274). As for conventional truth, Kagyu divides it into two
aspects: the conventional in the form of imputed concrete phenomena
that are capable of appearing concretely as such, and the conventional in
terms of merely nominal imputations of abstract entities that are
incapable of concrete appearance (Mikyo Dorje, see Yakherds
2021: 153-55).
What follows from the above discussion is that Kagyu is a monist
about truth (Karmapa Mikyo Dorje, 2006: 302) in that it claims
there is only one truth, which it equates with transcendent wisdom.
Despite minor differences, Nyingma, Kagyu and Sakya all emphasize
the synthesis between transcendent wisdom and ultimate truth, arguing
that "there is neither separate ultimate truth apart from the
transcendent wisdom, nor transcendent wisdom apart from the ultimate
truth" (2006: 279). For this reason Kagyu advances the view
similar to Nyingma and Sakya that the exceptional quality of awakened
knowledge consists in not experiencing anything conventional or
empirical from the enlightened perspective, but experiencing
everything from the other's--nonenlightened--perspective.
(Karmapa Mikyo Dorje, 2006: 141-42ff)
Finally, on the relationship of the two truths, Kagyu maintains a
position which is consciously ambiguous--that they are
expressible neither as identical nor distinct. Responding to an
interlocutory question, "Are the two truths identical or
distinct?" Karmapa Mikyo Dorje, says, "neither
is the case." And he advances three arguments to support this
position. First, "From a perspective of the childish beings,
[the two truths] are neither identical, for they do not see the
ultimate; nor are they distinct, for they do not see them
separately" (Karmapa Mikyo Dorje, 2006: 285).
Second, "From standpoint of the meditative equipoise of the
exalted beings, the two truths are neither identical, for the
appearance of the varieties of conventional entities do not appear to
them. Nor are they distinct, for they are not perceived as
distinct" (Karmapa Mikyo Dorje, 2006: 285). The
third is the relativity argument which says, "They are all
defined relative to one another--unreality is relative to reality
and reality is relative to unreality. Because one is defined relative
to the other, one to which it is related (ltos sa) could not be
identical to that which it relates (ltos chos). This is because it is
contradictory for one thing to be both that which relates (ltos chos)
and the related (ltos sa). Nor is it the case that they are distinct
because," according to Kagyu's view, "when the
related is not established, so is the other [i.e., one that relates]
not established. Hence there is no relation. If one insists that
relation is still possible, then such a relation would not be relative
to another" (Karmapa Mikyo Dorje, 2006:
285-86).
So from these three arguments, Kagyu concludes that the two
truths are neither expressible as identical nor distinct. It argues
that "just as the conceptual images of a golden vase and a
silver vase do not become distinct on the account of them not being
expressed as identical. Likewise these images do not become identical
on the account of them not being expressed as distinct"
(Mikyo Dorje, 2006: 286).
## 3. Sakya
Sakya's theory of two truths is defended in the works of the
succession of Sakya scholars--Sakya Pandita
(1182-1251), Rongton Shakya Gyaltsen (Rong ston
Sakya rgyal tshan, 1367-1449), the translator Taktsang
Lotsawa (Stag tsang Lo tsa ba, 1405-?), Gorampa Sonam Senge
(Go rams pa Bsod nams seng ge, 1429-89) and Shakya Chogden
(Sakya Mchog ldan, 1428-1509). In particular Gorampa
Sonam Senge's works are unanimously recognised as the authoritative
representation of Sakya's position. Sakya agrees with Nyingma and
Kagyu in maintaining that the distinction between the two truths
is a merely subjective matter, and that the two truths are reducible
to two conflicting perspectives (Sakya Pandita 1968a:
72d, Rongton Shakya Gyaltsen n.d. 7f, Taktsang Lotsawa n.d.: 27,
Shakya Chogden 1975a: 3-4ff, 15f). "Although there
are not two truths in terms of the object's ontological mode of being
*(gnas tshul)*, the truths are divided into two in terms of
[the contrasting perspectives of] the mind that sees the mode of
existence and the mind that does not see the mode of
existence...This makes perfect sense" (Gorampa 1969a:
374ab). Since it emphasizes the subjective nature of the distinction
between the two truths, it proposes "mere mind" *(blo
tsam)* to be the basis of the division (Gorampa 1969a: 374ab). It
argues that "Here in the Madhyamaka system, the object itself
cannot be divided into two truths. Conventional truth and ultimate
truth are divided in terms of the modes of apprehension *(mthong
tshul)*--in terms of the subject apprehending unreality and
the subject apprehending reality; or in terms of mistaken and
unmistaken apprehensions *('khrul ma 'khrul);* or
deluded or undeluded apprehensions *(rmongs ma rmongs);* or
erroneous or nonerroneous apprehensions *(phyin ci log ma
log);* or reliable cognition or unreliable cognitions *(tshad
ma yin min)*" (Gorampa 1969a: 375b). He also adds that:
"The position which maintains that the truths are divided in
terms of the subjective consciousness is one that all
Prasangikas and Svatantrikas of India unanimously
accepted because they are posited in terms of the subjective cognitive
processes depending on whether it is deluded *(rmongs)* or
nondeluded *(ma rmongs),* a perception of unreality *(brdzun
pa thong ba)* or a perception of reality *(yang dag mthong
ba),* and mistaken *(khrul)* or incontrovertible *(ma
khrul)*" (1969a: 384c).
Sakya advances two reasons to support its position. First, since the
minds of ordinary beings are always deluded, mistaken, and erroneous,
they falsely experience conventional truth. Conventional truth is thus
posited only in relation to the perspective of the ordinary
beings.[2]
The wisdom of exalted meditative equipoise is, however, never mistaken;
it is always nondeluded and nonerroneous, hence exalted beings
flawlessly experience ultimate truth. Ultimate truth is thus posited
strictly in relation to the cognitive perspective of exalted beings.
Second, Sakya argues for separate cognitive agents
corresponding to each of the two truths. Ordinary beings have direct
knowledge of conventional truth, but are utterly incapable of knowing
ultimate truth. The exalted beings in training have direct knowledge
of the ultimate in their meditative equipoise and direct knowledge of
the conventional truth in the post meditative states. Fully awakened
buddhas, on the other hand, only have access to ultimate truth. They
have no access to conventional truth whatsoever from the
*enlightened perspective*, although they may access
conventional truth from the perspective of ordinary deluded
beings.
Etymologically, Sakya characterizes ultimate (*parama*) as a
qualification of transcendent exalted cognitive process
*('jig rten las 'das pa'i ye shes,
lokottarajnana)* that belongs to exalted beings with
*artha* as its corresponding object. The sense of ultimate
truth (*paramarthasatya*) grants primacy to the exalted
transcendent cognitive process of the noble beings, which supersedes
the ontological status of conventional phenomena (Gorampa 1969a:
377d). "There is no realization and realized object, nor is there
object and subject" (Gorampa 1969b: 714f, 727-729). As
Taktsang Lotsawa puts it: "A wisdom without dual appearance is
without any object" (Taktsang n.d.: 305f). Strictly speaking,
transcendent wisdom itself becomes the ultimate truth. "Ultimate
truth is to be experienced under a total cessation of dualistic
appearance through exalted personal wisdom," and further,
"Anything that has dualistic appearance, even omniscience,must
not be treated as ultimate truth" (Gorampa 1969b:
612-13ff).
Therefore conventionality (samvrti, kun rdzob) means
primal ignorance (Gorampa, 1969b: 377b, Sakya Pandita
1968a: 72b, Shakya Chogden 1975a: 30f, Rongton Shakya Gyaltsen
1995: 288). In agreement with Nyingma and Kagyu, Sakya treats
primal ignorance as the villain responsible for projecting the entire
system of conventional truths. It takes ignorance to constitute the
defining characteristic of the conventional
truth.[3]
(Sakya Pandita 1968a: 72, Rendawa 1995: 121, Shakya
Chogden 1975b: 378f, 1975c: 220, Taktsang Lotsawa n.d.: 27f,
Rongton Shakya Gyaltsen 1995: 287, n.d.: 6-7ff). It
formulates the definitions of the two truths in terms of the
distinctions between the ignorant experiences of ordinary beings and
the enlightened experiences of noble beings during their meditative
equipoise. Sakya's definition is based on three reasons. First, each
conventionally real phenomenon satisfies only the definition of
conventional truth because each phenomenon has only a conventional
nature (in contrast to the two-nature theory of Gelug), and
ultimate truth has a transcendent ontological status. Second, each
cognitive agent is potentially capable of knowing only one truth
exhaustively, being equipped with either the requisite conventionally
reliable cognitive process or ultimately reliable cognitive process.
Each truth must be verified by a different individual, and
access to the two truths is mutually exclusive--a cognitive agent
who knows conventional truth cannot know ultimate truth and vice versa,
except in the case of exalted beings who are not fully enlightened.
Third, each conventional entity has only one nature, namely, its
conventional nature, and the so-called ultimate nature must not be
associated with any conventionally real phenomenon. If a sprout, for
example, actually did possess two natures as proposed in Gelug's
theory of the two truths, then, according to Sakya, each nature would
have to be ontologically distinct. Since the ontological structure of
the sprout cannot be separated into a so-called conventional and
ultimate nature, the sprout must possess only one phenomenal nature,
i.e., the conventional reality. As this nature is found only under the
spell of ignorance, it can be comprehended only by the conventional
cognitive processes of ordinary beings, and of unenlightened noble
beings in their post meditative equipoise. It is therefore not
possible, in Sakya's view, to confine the definition of ultimate truth
to the framework of conventional phenomena.
Ultimate truth, on the other hand, requires the metaphysical
transcendence of conventionality. Unlike conventional reality, it is
neither presupposed nor projected by ignorance. Ultimate truth, in
Gorampa's words: "is inexpressible through words and is beyond
the scope of cognition" (1969a: 370a). The cognition is always
conceptual and thus deluded. "Yet ultimate truth is experienced
by noble beings in their meditative equipoise, and is free from all
conceptual categories. It cannot be expressed through
definition, through any defined object, or through anything
else" (1969a: 370a). In fact, Sakya goes so far as to combine
the definition of ultimate truth with that of *intrinsic
reality* *(rang bzhin, svabhava).* When an
interlocutor asks this question: "What is the nature of the
reality of phenomena?" Gorampa replies that reality is
transcendent and has three defining characteristics, namely:
"It is not created by causes and conditions, it exists
independently of conventions and of other phenomena; and it does not
change" (Gorampa, 1969c: 326a). Like Nagarjuna's
*hypothetical*, not real, intrinsic reality, Sakya claims that
ultimate truth is *ontologically unconditioned*, and hence it
is not a dependently arisen phenomenon; it is *distinct* from
conventional phenomena in every sense of the word; it is
*independent* of conceptual-linguistic conventions; it is a
*timeless* and *unchanging* phenomenon.
It is clear then, according to the Sakya view, that any duality ascribed to
truth is untenable. Since there is only one truth, it cannot be
distinguished any further. Like Nyingma and Kagyu, Sakya holds
the view that truth itself is not divisible (Sakya
Pandita n.d.: 32ab, Rendawa 1995: 122, Rongton Shakya
Gyaltsen, 1995: 287, n.d.: 22f Taktsang Lotsawa n.d.: 263, Shakya
Chogden 1975a: 7-8ff, 1975c: 222f). It agrees with Nyingma and
Kagyu that the distinction between the two truths is essentially
between two conflicting perspectives, rather than any division within
truth as such. In Shakya Chogden's words: "Precise enumeration
*(grangs nges)* of the twofold truth all the earlier Tibetans
have explained rests on the precise enumeration of the mistaken
cognition *(blo 'khrul)* and unmistaken cognition
*(blo ma 'khrul).* With this underpinning reason, they
explained the precise enumeration through the elimination of the third
alternative. There is not even a single figure to be found who claims
the view comparable with the latter [i.e., Gelug], who asserts a
precise enumeration of the twofold truth based on the certification of
reliable cognitive processes" (1975a: 9-10ff).
Sakya rejects the reality of conventional truth by treating it as a
projection of conventional mind--it is the ignorance of ordinary
beings. When asked this question: "If this were true, even the
mere term *conventional truth* would be unacceptable, for
whatever is conventional is incompatible with truth," Gorampa
replies: "Since [conventional] truth is posited only in relation
to a conventional mind, there is no problem. Even so-called real
conventionalities *(yang dag kun rdzob ces pa yang)* are
posited as real with respect to a conventional mind" (1969b:
606b). By "conventional mind," Gorampa means the ignorant
mind of an ordinary being experiencing the phenomenal world. In other
words, conventional truth is described as "truth" only
from the perspective of ignorance. It is a truth projected *(sgro
brtag pa)* by it and taken for granted.
Sakya equates conventional truth with "the appearances of
nonexistent entities like illusions" (Gorampa 1969c: 287c). Here it
follows Sakya Pandita, who puts it thus:
"Conventional truths are like reflections of the moon in the
water--despite their nonexistence, they appear due to
thoughts" (Sakya Pandita 1968a: 72a). According to
Sakya Pandita "The defining characteristic of
conventional truth constitutes the appearances of the nonexistent
objects" (1968a: 72a). In this sense, conventional truths
"are things apprehended by the cognition perceiving conventional
entities. Those very things are found as nonexistent by the cognition
analyzing their mode of existence that is itself posited as the
ultimate" (Gorampa 1969a: 377a).
Since mere mind is the basis of the division of the two truths, wherein
ultimate truth--wisdom--alone is seen as satisfying the
criterion of truth, conventional truth--ignorance--cannot
properly be taken as truth. Wisdom and ignorance are invariably
contradictory, and thus the two truths cannot coexist. Sakya argues,
in fact, that conventional truth must be negated in the ascent to
ultimate truth. Given wisdom's primacy over ignorance, in the final
analysis it is ultimate truth alone that must prevail without its
merely conventional counterpart. Conventional truth is an expedient
means to achieve ultimate truth, and the Buddha described conventional
truth as truth to suit the mentality of ordinary beings (Gorampa
1969a: 370b). The two truths are thus categorized as a *means
(thabs)* and a *result* *(thabs byung)*.
Conventional truth is the means to attain the one and only ultimate
truth.
According to this view, then, the relationship between the two truths
is equivalent to the relationship between the two conflicting
perspectives--namely, ignorance and wisdom. The question now
arises: How is ignorance related to wisdom? Or conversely, how does
wisdom relate to ignorance? Sakya says that the two truths are distinct
in the sense that they are incompatible with unity, like
*entity* and *without entity*. In the ultimate sense, it argues,
the two truths transcend identity and difference (Gorampa, 1969a:
376d). The transcendence of identity and difference from the ultimate
standpoint is synonymous with the transcendence of identity and
difference from the purview of the meditative equipoise of noble
beings. However, from the conventional standpoint, it claims that the
two truths are distinct in the sense that they are incompatible with
their unity. It likens this relationship to the one between
*entity* and *without entity* (Gorampa, 1969a:
377a).
Sakya's claim that the two truths are distinct and incompatible
encompasses both ontological and epistemological distinctions. Since
what is divided into the two truths is mere mind, it is
obvious that there is no single phenomenon that could serve as the
objective referent for both. This also means that the two truths must
be construed as corresponding to distinct spheres belonging to
distinct modes of consciousness: conventional truth corresponds to
ignorance and ultimate truth to wisdom. It is thus inappropriate to
describe the relationship between the two truths, and their
corresponding modes of consciousness, in terms of two ways of
perceiving the *same* entity. Although the two truths can be
thought of as two ways of *perceiving*, one based on ignorance
and the other on wisdom, there is no *same* entity perceived by
both. There is nothing common between the two truths, and if they are
both ways of perceiving, then they do not perceive the same thing.
According to this view, the relationship between conventional truth
and ultimate truth is analogous to the relationship between the
appearance of falling hairs when vision is impaired by cataracts and
the absence of such hairs when vision is unimpaired. Although this is
a metaphor, it has a direct application to determining the
relationship between the two truths. Conventional truth is like seeing
falling hairs as a result of cataracts: both conventional truth and
such false seeing are illusory, in the ontological sense that there is
nothing to which each corresponds, and in the epistemological sense
that there is no true knowledge in either case. Ultimate truth is
therefore analogous, ontologically and epistemologically, to the true
seeing unimpaired by cataracts and free of the appearance of falling
hairs. Just as cataracts give rise to illusory appearances, so
ignorance, according to Sakya, gives rise to all conventional truths;
wisdom, on the other hand, gives rise to ultimate truth. As each is
the result of a different state, there is no common link between them
in terms of either an ontological identity or an epistemological or
conceptual identity.
## 4. Gelug
Gelug's (Dge lugs) theory of the two truths is championed by Tsongkhapa
Lobsang Dragpa (Tsong khapa Blo bzang grags pa, 1357-1419).
Tsongkhapa's theory is adopted, expanded and defended in the works of
his immediate disciples--Gyaltsab Je (Rgyal tshab Rje,
1364-1432), Khedrub Je (Mkhas grub Rje, 1385-1438),
Gendun Drub (Dge 'dun grub, 1391-1474)--and other
great Gelug thinkers such as Sera Jetsun Chokyi Gyaltsen (Se
ra Rje tsun Chos kyi rgyal tshan, 1469-1544), Panchen Sonam
Dragpa (Pan chen Bsod nams grags pa, 1478-1554), Panchen
Lobsang Chokyi Gyaltsen (Pan chen Blo bzang chos kyi rgyal
tshan, 1567-1662), Jamyang Shepai Dorje ('Jam dbyangs Bzhad
ba'i Rdo rje, 1648-1722), Changkya Rolpai Dorje (Lcang skya
Rol pa'i rdo rje, 1717-86), Konchog Jigme Wangpo
(Kon mchog 'jigs med dbang po, 1728-91), and many other
scholars.
Gelug maintains that "objects of knowledge" *(shes
bya)* are the basis for dividing the two truths (Tsongkhapa
1984b:176).[4]
This means that the two truths relate to "two objects of
knowledge," an idea it takes from a statement of the Buddha
in the *Discourse on the Meeting of the Father and the Son
(Pitaputrasamagama
Sutra)*.[5]
By *object of knowledge* Gelug means an object that is
cognizable *(blo'i yul du bya rung ba).* It must be an object
of cognitive processes in general ranging from those of ordinary
sentient beings through to those of enlightened beings. This
definition attempts to capture anything knowable in the broadest
possible sense. Since the Buddha maintains knowledge of the two truths
to be necessary for awakening, the understanding of the two truths
must constitute an exhaustive understanding of all objects of
knowledge.
Gelug's key argument to support this claim comes from its two-nature
theory in which it has been argued that every nominally *(tha
snyad)* or conventionally *(kun rdzob)* given phenomenon
possesses dual natures: namely, the *nominal* (or conventional
nature) and the *ultimate nature*. The conventional nature is
unreal and deceptive while the ultimate nature is real and
nondeceptive. Since two natures pertain to every phenomenon, the
division of the two truths means the division of each entity into two
natures. Thus the division of the two truths, "reveals that it
makes sense to divide even the nature of a single entity, like a
sprout, into dual natures--its conventional and its ultimate
natures. It does not however show," as non-Gelug schools have
it, "that the one nature of the sprout is itself divided into
two truths in relation to ordinary beings *(so skye)* and to
noble beings (*aryas)*" (Tsongkhapa 1984b: 173,
1992: 406).
The relation of the two truths comes down to the way in which the
single entity appears to cognitive processes--deceptively and
nondeceptively. The two natures correspond to these deceptive or
nondeceptive modes of appearance. While they both belong to the same
ontological entity, they are epistemically or conceptually mutually
exclusive. Take a sprout for instance. If it exists, it necessarily
exhibits a dual nature, and yet those two natures cannot be
ontologically distinct. The ultimate nature of the sprout cannot be
separate from its conventional nature--its color, texture, shape,
extension, and so on. As an object of knowledge, the sprout retains
its single ontological basis, but it is known through its two natures.
These two natures exclude one another so far as knowledge is
concerned. The cognitive process that knows the deceptive conventional
nature of the sprout does not have direct access to its nondeceptive
ultimate nature. Similarly, the cognitive process that apprehends the
nondeceptive ultimate nature of the sprout does not have direct access
to its deceptive conventional nature. In Newland's words: "A
table and its emptiness are a single entity. When an ordinary
conventional mind takes a table as its object of observation, it sees
a table. When a mind of ultimate analysis searches for the table, it
finds the emptiness of the table. Hence, the two truths are posited in
relation to a single entity by way of the perspectives of the
observing consciousness. This is as close as Ge-luk-bas will come to
defining the two truths as perspectives" (1992: 49).
Gelug's two-nature theory not only serves as the basic reference point
for its exposition of the basis of the division of the two truths,
their meanings and definitions, but also serves as the basic
ontological reference for its account of the relationship between the
two truths. Therefore Gelug proposes the view that the two truths are
of a single entity with distinct conceptual identities. This view is
also founded on the theory of the two natures. But how are the two
natures related? Are they identical or distinct? Gelug argues that
there are only two possibilities: either the two natures are identical
*(ngo bo gcig)* or distinct *(ngo bo tha dad);* there
cannot be a third (Tsongkhapa 1984b:176). They are related in terms of
being a single entity with distinct conceptual identities--thus
they are both the same and different. Since the two natures are the
basis of the relationship between the two truths, the relationship
between the two truths will reflect the relationship between the two
natures. Just as the two natures are of the same entity, ultimate truth
and conventional truth are of the same ontological status.
Gelug argues that the relationship between the two truths, and
therefore between the two natures, is akin to the relationship
between being conditioned
and being impermanent (Tsongkhapa 1984b:176). It appropriates this
point from Nagarjuna's *Bodhicittavivarana,*
which states: "Reality is not perceived as separate from
conventionality. The conventionality is explained to be empty. Empty
alone is the conventionality, if one of them does not exist, neither
will the other, like being conditioned and being impermanent"
(v.67-68) (Nagarjuna 1991:45-45, cited
Tsongkhapa, 1984b: 176; Khedrub Je 1992: 364). Commenting on
this passage from the *Bodhicittavivarana*, Tsongkhapa
argues that the first four lines (in the original Tibetan verses) show
that things as they really are, are not ontologically distinct from
that of the conventionality. The latter two lines (in the original
Tibetan verse) show their relationship such that if one did not exist,
neither could the other *(med na mi 'byung ba'i 'brel
ba).* This, in fact, he says, is equivalent to their being
constituted by *a single-property relationship* *(bdag cig
pa'i 'brel ba).* Therefore, like the case of being
conditioned and being impermanent, the relation between the two truths
is demonstrated as one of a single ontological identity (Tsongkhapa
1984b: 176-77).
The way in which the two truths are related is thus analogous to the
relationship between being conditioned and being impermanent. They are
ontologically identical and mutually entailing. Just as a conditioned
state is not a result of impermanence, so emptiness is not a result of
the conventional truth (the five aggregates) or the destruction of the
five aggregates. Hence in the *Vimalakirtinirdesa
Sutra* it is stated: "Matter itself is void. Voidness
does not result from the destruction of matter, but the nature of
matter is itself voidness" (Vimalakirti, 1991:74). The same
principle applies in the case of consciousness and the emptiness of
consciousness, as well as to the rest of the five psychophysical
aggregates--the aggregate and its emptiness are not causally
related. For the causal relationship would imply either the aggregate
is the cause, therefore its emptiness is the result, or the aggregate
is the result, and its emptiness the cause. This would imply,
according to Gelug's reading, either the aggregate or the emptiness is
temporally prior to its counterpart, thus leading to the conclusion
that the conventional truth and ultimate truth exist independently of
each other. Such a view is, for Gelug, completely unacceptable.
The ontological identity between being conditioned and being
impermanent does not imply identity in *all* and *every*
respect. Insofar as their epistemic mode is concerned, conditioned and
impermanent phenomena are distinct and contrasting. The concept
*impermanence* always presents itself to the cognizing mind as
momentary instants, but not as conditioned. Similarly, the concept
*being-conditioned* always presents itself to its cognizing
mind as constituted by manifold momentary instants, but not as
moments. Thus it does not necessarily follow that the two truths are
identical in every respect just because they share a common
ontological identity. Where the modes of conceptual appearance are
concerned, ultimate nature and conventional nature are distinct. The
conceptual mode of appearance of ultimate truth is nondeceptive and
consistent with its mode of existence, while that of conceptual mode
of conventional truth is deceptive and inconsistent with its mode of
existence.
Conventional truth is uncritically confirmed by a conventionally
reliable cognitive process, whereas ultimate truth is critically
confirmed by an ultimately reliable cognitive process. Hence, just as
ultimate truth is inaccessible to the conventionally reliable
cognitive process owing to its uncritical mode of engagement, so, too,
is conventional truth inaccessible to the ultimately reliable cognitive
process owing to its critical mode of engagement. This is how, in Gelug's
view, the truths differ conceptually despite sharing a common
ontological entity. In summarizing Gelug's argument, Khedrub Je
writes: "The two truths are therefore of the same nature, but
different conceptual identities. They have a single-nature
relationship such that, if one did not exist, neither could the other,
just like being conditioned and impermanent" (1992: 364).
The two-nature theory also supports Gelug's view that the truth is
twofold. Since the two natures of every conventional phenomenon
provide the ontological and epistemological foundation for each of the
truths, the division of truth into two is entirely appropriate. Both
the conventional and the ultimate are actual truths, and since the two
natures are mutually interlocking, neither of the two truths has
primacy over the other--both have equal status, ontologically,
epistemologically, and even soteriologically.
Given Gelug's stance on conventional truth as actual truth and its
argument for the equal status of the two truths, Gelug must now
address the question: How can conventional truth, which is unreal
(false) and deceptive, be truth (real) at all? In other words, how can
the two truths be of equal status if conventional truth is
unreal (false)? The success of Gelug's reply depends on its ability to
maintain harmony between the two truths, or their equal footing. There
are several arguments by means of which Gelug defends this. The first,
and the most obvious one, is the argument from the two-nature theory.
It argues that, since the two truths are grounded in the dual nature of a single
phenomenon, then "just as the ultimate reality of the sprout
[for instance] is taken as characteristic of the sprout, hence it is
described as the sprout's nature, so, too," explains Tsongkhapa,
"are the sprout's color, shape, etc., the sprout's
characteristics. Therefore they too are its nature"
(1992: 406). Since the two natures are ontologically mutually
entailing, the sprout's ultimate truth cannot exist ontologically
separate from its conventional truth, and vice versa. Neither truth
could exist without the other.
The most important argument Gelug advances for the unity of the two
truths draws upon an understanding of the compatible relation between
conventional truth and dependent arising on the one hand, and between
ultimate truth and emptiness on the other. For Gelug emptiness and
dependent arising are synonymous. The concept of emptiness is
incoherent unless it means dependent arising, and equally the concept
of dependent arising is incoherent unless it means emptiness of
intrinsic reality. Gelug argues, as Tsongkhapa does in the *Rten
'brel stod pa (In Praise of Dependent Arising,* 1994a),
since emptiness means dependent arising, the emptiness of intrinsic
reality and the efficacy of action and its agent are not
contradictory. If emptiness, however, is misinterpreted as
contradictory with dependent arising, then Gelug contends, there would
be neither action in empty phenomena, nor empty phenomena in action.
But that cannot be the case, for that would entail a rejection of both
phenomena and action, a nihilistic view (v.11-12). Since there
is no phenomenon other than what is dependently arisen, there is no
phenomenon other than what is empty of intrinsic reality (v.15).
When emptiness is understood in this way, there is indeed no need to say that
they are noncontradictory--the *utter nonexistence of
intrinsic reality* and making sense of everything in the light of
the principle *this arises depending on this* (v.18). This is
so because despite the fact that whatever is dependently arisen lacks
intrinsic reality, and therefore empty, nonetheless its existence is
illusion-like (v.27).
Sometimes Gelug varies its ontological argument slightly, as does
Tsongkhapa in the *Lam gtso rnam gsum* *(The Three Principal
Pathways)*, stressing the unity of the two truths in terms of
their causal efficiency. It argues that the causal efficiency of empty
phenomena is crucial in understanding the inextricable relationship
between ultimate truth and conventional truth. The argument takes two
forms: (1) *appearance* avoids realism--the *extreme of
existence*, and (2) *emptiness* avoids nihilism--the
*extreme of nonexistence*. The former makes sense for Gelug
because appearance arises from causal conditions, and whatever
arises from causal conditions is neither eternal nor permanent.
When the causal conditions change, anything that dependently arises
from them also changes. Thus appearance avoids realism. The
latter makes sense because empty phenomena arise from causal
conditions, and whatever arises from causal conditions is not
nonexistent, even though it lacks intrinsic reality. Only when the
causal conditions are satisfied do we see the arising of empty
phenomena. Hence by understanding that the empty phenomenon itself is
causally efficient, the bearer of cause and effect, one is not carried
away by the extreme view of nihilism (1985: 252).
The argument for the unity of the two truths also takes an
epistemological form, which rests on the idea that emptiness and
dependent arising are unified. The knowledge of empty phenomena is
conceptually interlinked with that of dependently arisen
phenomena--the latter is, in fact, founded on the former. To the
extent that empty phenomena are understood in terms of relational and
dependently arisen phenomena, to that extent empty phenomena are
always functional and causally effective. The phrase "empty
phenomena," although expressed negatively, is not negative in a
metaphysical sense--it is not equivalent to no-thingness.
Although the empty phenomenon appears to its cognizing consciousness
negatively and without any positive affirmation, it is nonetheless
equivalent to a relational and dependently arisen phenomenon seen
deconstructively. Since seeing phenomena as empty does not violate the
inevitable epistemic link with the understanding of phenomena as
dependently arisen, and the converse also applies, so the unity
between the two truths--understanding things both as empty and as
dependently arisen--is still sustained.
The unity between the two truths, according to Gelug, does not apply
merely to ontological and epistemological issues; it applies equally
to soteriology--the practical means to the freedom from
suffering. Jamyang Shepai Dorje argues that undermining either of
the two truths would result in a similar downfall--a similar
eventual ruin. If, however, they are not undermined, the two are alike
insofar as the accomplishment of the two accumulations and the
attainment of the two awakened bodies (kayas), and so forth, are
concerned. If one undermines conventional truth or denies its
reality, one would succumb to the extreme of nihilism, which would
also undermine the fruit and the means by which an awakened physical
body (rupakaya) is accomplished. It is therefore not
sensible to approach the two truths with bias. Since this relation
continues as a means to avoid falling into extremes, and also to
accomplish the two accumulations and attain the two awakened bodies
(kayas), it is imperative, says Jamyang Shepai Dorje, that the
two truths be understood as mutually interrelated (1992:
898-99).
One could object to Gelug's position as follows: If the two natures are
ontologically identical, why is conventional truth unreal and
deceptive, while ultimate truth is real and nondeceptive? To this Gelug
replies that "nondeceptive is the mode of reality *(bden
tshul)* of the ultimate. That is, ultimate truth does not deceive
the world by posing one mode of appearance while existing in another
mode" (1992: 411). Ultimate truth is described as
*ultimate*, not because it is absolute or higher than
conventional truth, but simply because of its consistent character,
therefore non-deceptive--its mode of appearance and its mode of
being are the same--in contrast with the inconsistent (therefore
deceptive) character of conventional truth. Conventional truth is
deceptive for the converse reason: to the
cognizing consciousness, conventional truth presents itself as
inherently or intrinsically real. It appears to be substance, or
essence, and therefore it deceives ordinary beings. Insofar
as conventional truths present themselves as more than
conventional--as inherently real--they deceive ordinary
beings. We take them to be what they are not--to be intrinsically
and objectively real. In that sense, they are unreal. "But to
the extent that we understand them as dependently arisen, empty,
interdependent phenomena," as Garfield explains, "they
constitute a conventional truth" (1995: 208).
Another objection may be advanced as follows: Gelug's position
contradicts the Buddha's teachings since it is not possible to
reconcile Gelug's view that there are two actual truths with the
Buddha's declaration that nirvana is the only
truth.[6]
For its answer to this objection Gelug appropriates
Candrakirti's
*Yuktisastikavrtti* which states
that nirvana is not like conditioned phenomena, which
deceives the childish by presenting false appearances. For the
existence of nirvana is always consistent with its
characteristic of the nonarising nature. Unlike conditioned phenomena,
it never appears, even to the childish, as having a nature of arising
*(skye ba'i ngo bo).* Since nirvana is always
consistent with the mode of existence of nirvana, it is
explained as the noble truth. Yet this explanation is afforded
*strictly in terms of worldly conventions* (1983:
14-15ff, cited in Tsongkhapa 1992: 312, Khedrub Je 1992:
360).
For Gelug the crucial point here, as Candrakirti emphasizes, is
that nirvana is the truth *strictly in terms of worldly
conventions.* Gelug recognizes this linguistic convention as
highly significant to the Prasangika system, and it insists
on conformity with worldly conventions for the following
reasons. First, just as an illusion, a mirror image, etc., are real in
an ordinary sense, despite the fact that they are deceptive and
unreal, so, too, conventional phenomena in the Prasangika
Madhyamaka sense are conventionally real, and can even be said to
constitute truths, despite being recognized by the Madhyamikas
themselves as unreal and deceptive. Second, because the concept
*ultimate truth* is also taken from its ordinary convention,
nirvana is spoken of as *ultimate* on the ground of
its nondeceptive nature, in the sense that its mode of existence is
consistent with its mode of appearance. The nondeceptive nature of the
empty phenomenon itself constitutes its reality, and so it is
conventionally described as ultimate in the Prasangika
system.
Thus Gelug asserts that neither of the two truths is more or less
significant than the other. Indeed, while the illusion only makes
sense as illusion in relation to that which is not illusion, the
reflection only makes sense as reflection in relation to that which is
reflected. So, too, does the real only make sense as real in relation
to the illusion, the thing reflected in relation to its reflection.
This also holds in the case of discussions about the ultimate nature
of things, such as the being of the sprout--it only makes sense
inasmuch as it holds in discussions of ordinary phenomena. The only
criterion that determines a thing's truth in the Prasangika
Madhyamaka system, according to Gelug, is the causal effectiveness of
the thing as opposed to mere *heuristic* significance. The
sprout's empty mode of being and its being as appearance are both
truths, insofar as both are causally effective, and thus both
functional.
The two truths, understood as, respectively, the empty and the
dependently arisen characters of phenomena, are on equal footing
according to Gelug. Nevertheless these truths have different
designations--the sprout's empty mode is always described as
"ultimate truth," while the conventional properties, such
as color and shape, are described as "conventional
truths." The former is accepted as *nondeceptive truth*
while the sprout's conventional properties are accepted as
*deceptive* or *false truth,* despite common sense
dictating that they are true and real.
In conclusion, Gelug's theory of the two truths is based on one
fundamental thesis: each conventionally real phenomenon satisfies
the definitions of both truths, for each phenomenon, as Gelug sees it,
possesses two natures that serve as the basis of the definitions of
the two truths. The two truths are conceptual distinctions applied to
a particular conventionally real phenomenon, and every conventionally
real phenomenon fulfills the criterion of both truths because each
phenomenon constitutes these two natures; they are not merely
*one* specific nature of a phenomenon mirrored in two different
perspectives. As each phenomenon possesses two natures, so each
verifying cognitive process has a different nature as its referent,
even though there is only one ontological entity and one cognitive
agent involved.
## 5. Implications
Gelug considers the two natures of each phenomenon as the defining
factor of the two truths. It argues that the conventional nature of an
entity, as verified by a conventionally reliable cognitive process,
determines the defining criterion of conventional truth; the ultimate
nature of the same entity, as verified by an ultimately reliable
cognitive process, determines the defining criterion of ultimate
truth. Since both truths are ontologically as well as
epistemologically interdependent, knowledge of a conventionally real
entity as dependently arisen suffices for knowledge of both truths.
In contrast, the non-Gelug schools--Nyingma, Kagyu and
Sakya--as we have seen, reject Gelug's dual-nature theory,
treating each conventional entity as satisfying only the definition of
conventional truth and taking the definition of ultimate truth to be
ontologically and epistemologically transcendent from conventional
truth. They argue, instead, it is through the perspectives of either
an ordinary being or an unenlightened exalted being
*(aryas)* that the definition of conventional truth is
verified--fully enlightened being *(buddhas)* do not
experience the conventional truth in any respect. Similarly, for
non-Gelug, no ordinary being can experience the ultimate truth.
Ultimate truth transcends conventional truth, and the knowledge of
empirically given phenomena as dependently arisen could not satisfy
the criterion of knowing ultimate truth.
For Gelug, there is an essential compatibility between the two
truths, for the reason that there is a necessary harmony between
dependent arising and emptiness of intrinsic reality. As dependently
arisen, empty phenomena are not constructions of ignorant
consciousness, so neither is conventional truth such a construction.
Both truths are actual truths that stand on an equal footing.
Moreover, according to this view, whosoever knows conventional truth,
either directly or inferentially, also knows ultimate truth; whosoever
knows ultimate truth, also knows phenomena as dependently arisen, and
hence knows them as empty of intrinsic reality. Where there is no
knowledge of conventional truth, the converse applies. For non-Gelug,
the incommensurability between dependent arising and emptiness of
intrinsic reality also applies to the two truths. Accordingly,
whosoever knows conventional truth does not know ultimate truth, and
one who knows ultimate truth does not know conventional truth;
whosoever knows phenomena as dependently arisen does not know them as
empty, whereas whosoever knows phenomena as empty does not know them
as dependently arisen.
While Gelug thus distances itself from the subjective division of the
two truths, Nyingma, Kagyu and Sakya attempt to demonstrate the
validity of their view by arguing that perspective provides the
primary basis for the division of the two truths. Unlike Gelug,
non-Gelug schools hold that the two truths do not have any objective
basis. Instead they are entirely reducible to the experiences of the
deluded minds of ordinary beings and the experiences of the wisdom of
exalted beings.
According to Gelug, the agent who cognizes the two truths may be one
and the same individual. Each agent may have all the requisite
cognitive resources that are potentially capable of knowing both
truths. Ordinary beings have only conceptual access to ultimate truth,
while exalted beings, who are in the process of learning, have direct,
but intermittent, access. Awakened beings, however, invariably have
simultaneous access to both truths. Non-Gelug, by contrast, argues
for separate cognitive agents corresponding to each of the two truths.
Ordinary beings have direct knowledge of conventional truth, but are
utterly incapable of knowing ultimate truth. The exalted beings in
training directly know ultimate truth while in meditative equipoise
and conventional truth in post-meditative states. Fully awakened
buddhas, on the other hand, only have access to ultimate truth.
Awakened beings have no access to conventional truth whatsoever from
the enlightened perspective, although they may access conventional
truth from unenlightened ordinary perspectives.
## 1. The Distinction Between Types and Tokens
### 1.1 What the Distinction Is
The distinction between a *type* and its *tokens* is
an ontological one between a general sort of thing and its particular
concrete instances (to put it in an intuitive and preliminary way). So
for example consider the number of words in the Gertrude Stein line
from her poem *Sacred Emily* on the page in front of the
reader's eyes:
>
>
> Rose is a rose is a rose is a rose.
In one sense of 'word' we may count three different
words; in another sense we may count ten different words. C. S. Peirce
(1931-58, sec. 4.537) called words in the first sense
"types" and words in the second sense "tokens".
Types are generally said to be abstract and unique; tokens are concrete
particulars, composed of ink, pixels of light (or the suitably
circumscribed lack thereof) on a computer screen, electronic strings of
dots and dashes, smoke signals, hand signals, sound waves, etc. A study
of the ratio of written types to spoken types found that there are
twice as many word types in written Swedish as in spoken Swedish
(Allwood, 1998). If a pediatrician asks how many words the toddler has
uttered and is told "three hundred", she might well enquire
"word types or word tokens?" because the former answer
indicates a prodigy. A headline that reads "From the Andes to
Epcot, the Adventures of an 8,000 year old Bean" might elicit
"Is that a bean type or a bean token?".
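The two ways of counting can be made concrete with a short script. This is a minimal sketch, assuming (simplistically, as §4.2 will show) that a word type can be identified by its spelling, ignoring capitalization:

```python
# Counting word types vs. word tokens in the Stein line.
# A token is each concrete occurrence; a type is counted once.
# Identifying types by lowercase spelling is a simplifying
# assumption for this example only.
line = "Rose is a rose is a rose is a rose."
tokens = line.rstrip(".").lower().split()
types = set(tokens)

print(len(tokens))  # 10 word tokens
print(len(types))   # 3 word types: 'rose', 'is', 'a'
```

On this reckoning the toddler's pediatrician is asking whether `len(tokens)` or `len(types)` is three hundred.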
### 1.2 What It Is Not
Although the matter is discussed more fully in §8 below, it should
be mentioned here at the outset that the type-token distinction is not
the same distinction as that between a type and (what logicians call)
its *occurrences*. Unfortunately, tokens are often explained as
the "occurrences" of types, but not all occurrences of
types are tokens. To see why, consider this time how many words there
are in the Gertrude Stein line itself, *the line type*, not a
token copy of it. Again, the correct answer is either three or ten, but
this time it cannot be ten word *tokens*. The line is an
abstract type with no unique spatio-temporal location and therefore
cannot consist of particulars, of tokens. But as there are only three
word types of which it might consist, what then are we counting ten of?
The most apt answer is that (following logicians' usage) it is
composed of ten *occurrences* of word types. See §8 below,
*Occurrences*, for more details.
## 2. Importance and Applicability of the Distinction
### 2.1 Linguistics
It is generally accepted that linguists are interested in types.
Some, e.g. Lyons (1977, p. 28), claim the linguist is interested only
in types. Whether this is so or not, linguists certainly appear to be
heavily committed to types; they "talk as though" there
are types. That is, they often quantify over types in their theories
and refer to them by means of singular terms. As Quine has emphasized,
a theory is committed to the existence of entities of a given sort if
and only if they must be counted among the values of the variables in
order for the statements in the theory to be true. Linguistics is rife
with such quantifications. For example, we are told that an educated
person's vocabulary is about 15,000 *words*, but Shakespeare's
was more nearly 30,000. These are types, as are the twenty-six
*letters* of the English alphabet, and its eighteen cardinal
*vowels*. (Obviously the numbers would be much larger if we
were counting tokens). Linguists also frequently refer to types by
means of singular terms. According to the *O.E.D*., for
example, *the noun 'color'* is from early
modern English and in addition to its pronunciation
[ˈkʌlər] has two "modern current or most usual spellings" [colour,
color], eighteen earlier spellings [collor, collour, coloure, colowr,
colowre, colur, colure, cooler, couler, coullor, coullour, coolore,
coulor, coulore, coulour, culler, cullor, cullour] and eighteen senses
(vol. 2, p. 636). According to *Webster's*, *the word
'schedule'* has four current pronunciations:
[ˈske-(ˌ)ju(ə)l], [ˈske-jəl] *(US)*,
[ˈshe-jəl] (*Can*) and [ˈshe-(ˌ)dyu(ə)l]
(*Brit*) (p. 1044). Thus, linguistics is apparently committed
to the existence of these words, which are types.
Reference to types is not limited to letters, vowels and words, but
occurs extensively in all branches of linguistics. Lexicography
discusses, in terms that make it clear that it is types being referred
to, nouns, verbs, words, their stems, definitions, forms,
pronunciations and origins. Phonetics is committed to consonants,
syllables, words and sound segments, the human vocal tract and its
parts (the tongue has five). Phonology also organizes sounds but in
terms of phonemes, allophones, alternations, utterances, phonological
representations, underlying forms, syllables, words, stress-groups,
feet and tone groups. Morphology is apparently committed to morphemes,
roots, affixes, and so forth, and syntax to sentences, semantic
representations, LF representations, among other things. Clearly, just
as words and letters and vowels have tokens, so do all of the other
items mentioned (nouns, pronunciations, syllables, tone groups and so
forth). It is more controversial whether the items studied in
semantics (the meanings of signs, their sense relations, etc.) also
come in types and tokens, and similarly for pragmatics (including
speaker meanings, sentence meanings, implicatures, presuppositions,
etc.) It seems to hinge on whether a mental event (token) or part of
it could be a meaning, a matter that cannot be gone into here. See
Davis (2003) for a view according to which concepts and
thoughts--varieties of meaning--come in types and
tokens.
It is notable that when one of the above types is defined, it is
defined in terms of other types. So for example, sentences might be
(partly) defined in terms of words, and words in terms of phonemes.
The universal and largely unscrutinized reliance of linguistics on the
type-token relationship and related distinctions like that of
*langue* to *parole*, and *competence* to
*performance*, is the subject of Hutton's cautionary book
(1990).
### 2.2 Philosophy
Obviously then, types play an important role in philosophy of
language, linguistics and, with its emphasis on expressions, logic.
Especially noteworthy is the debate concerning the relation between the
meaning of a sentence type and the speaker's meaning in using a
token (a relation that figures prominently in Grice 1969). But the
type-token distinction also functions significantly in other branches
of philosophy as well. In *philosophy of mind*, it yields two
versions of the identity theory of mind (each of which is criticized in
Kripke 1972). The type version of the identity theory (defended by
Smart (1959) and Place (1956) among others) identifies *types*
of mental events/states/processes with *types* of physical
events/states/processes. It says that just as lightning turned out to
be electrical discharge, so pain might turn out to be c-fiber
stimulation, and consciousness might turn out to be brain waves of 40
cycles per second. On this type view, thinking and feeling are certain
types of neurological processes, so absent those processes, there can
be no thinking. The token identity theory (defended by Kim (1966) and
Davidson (1980) among others) maintains that every token mental event
is some token physical event or other, but it denies that a type
match-up must be expected. So for example, even if pain in humans turns
out to be c-fiber stimulation, there may be other life forms that lack
c-fibers but have pains too. And even if consciousness in humans turns
out to be brain waves that occur 40 times per second, perhaps
androids have consciousness even if they lack such brain waves.
In *aesthetics*, it is generally necessary to distinguish
works of art themselves (types) from their physical incarnations
(tokens). (See, for example, Wollheim 1968, Wolterstorff 1980 and
Davies 2001.) This is not the case with respect to oil paintings like
the *Mona Lisa* where there is and perhaps can be only one
token, but seems to be the case for many other works of art. There can
be more than one token of a sculpture made from a mold, more than one
elegant building made from a blueprint, more than one copy of a film,
and more than one performance of a musical work. Beethoven wrote nine
symphonies, but although he conducted the first performance of
*Symphony No. 9*, he never heard the Ninth, whereas the rest of
us have all heard it, that is, we have all heard tokens of it.
In *ethics*, actions are said to be right/wrong--but is it
action types or only action tokens? There is a dispute about this. Most
ethicists from Mill (1979) to Ross (1988) hold that the hallmark of
ethical conduct is universalizability, so that a particular action is
right/wrong only if it is right/wrong for anyone else in similar
circumstances--in other words, only if it is the right/wrong
*type* of action. If certain types of actions are right and
others wrong, then there may be general indefeasible ethical principles
(however complicated they may be to state, and whether they can be
stated at all). But some ethicists hold that there are no general
ethical principles that hold come what may--that there is always some
circumstance in which such principles would prescribe the wrong
outcome--and such ethicists may go on to add that only particular
(token) actions are right/wrong, not types of actions. See, for
example, Murdoch 1970 and Dancy 2004.
### 2.3 Science and Everyday Discourse
Outside of philosophy and linguistics, scientists often quantify
over types in their theories and refer to them by means of singular
terms. When, for example, we read that the "Spirit Bear" is
a rare white bear that lives in rain forests along the British Columbia
coast, we know that no particular bear is *rare*, but rather a
type of bear. When we are told that these Kermode bears "have a
mutation in the gene for the melanocortin 1 receptor" (*The
Washington Post* 9/24/01 A16) we know that it is not a token
mutation, token gene and token receptor being referred to, but a type.
It is even more evident that a type is being referred to when it is
claimed that "all men carry the same Y chromosome.... This
one and only Y has the same sequence of DNA units in every man alive
except for the occasional mutation that has cropped up every thousand
years" (*The New York Times*, Nicholas Wade 5/27/03).
Similarly, to say the ivory-billed woodpecker is not extinct is to be
referring to a type, a species. (The status of species will be
discussed in greater detail in §4 below.)
The preceding paragraph contains singular terms that (apparently)
refer to types. An even more telling commitment to types are the
frequent quantifications over them. Mayr (1970, p. 233), for example,
tells us that "there are about 28,500 subspecies of birds in a
total of 8,600 species, an average of 3.3 subspecies per
species...79 species of swallows have an average of 2.6
subspecies, while 70 species of cuckoo shrikes average 4.6
subspecies". Although these examples come from biology, physics
(or any other science) would provide many examples too. It was claimed
(in the sixties), for example, that "there are thirty particles,
yet all but the electron, neutrino, photon, graviton, and proton are
unstable." Artifactual types (the Volvo 850 GLT, the Dell
Latitude D610 laptop) easily lend themselves to reference also. In
chess we are told that accepting the Queen's Gambit with 2...dc
has been known since 1512, but Black must be careful in this
opening--the pawn snatch is too risky. Type-talk is ubiquitous.
## 3. Types and Universals
Are types universals? They have usually been so conceived, and with
good reason. But the matter is controversial. It depends in part on
what a universal is. (See the entry on
properties.)
Universals, in contrast to particulars, have been characterized as
*having instances, being repeatable, being abstract, being acausal,
lacking a spatio-temporal location* and *being predicable of
things*. Whether universals have all these characteristics cannot
be resolved here. The point is that types seem to have some, but not
all, of these characteristics. As should be clear from the preceding
discussion, types have or are capable of having instances, of being
exemplified; they are repeatable. To many, this is enough to count as
universals. With respect to being abstract and lacking a
spatio-temporal location, types are also akin to universals--that
is, they are if universals are. On certain views of types and
universals, types, unlike their instances, are abstract and lack a
spatio-temporal location. On other views, types and universals are
*in* their instances and hence are neither abstract nor
acausal; far from lacking a spatio-temporal location, they usually
have many. (For more details, see §5 below, *The Relation
between Types and Tokens*.) So far, then, types appear to be a
species of universal, and most metaphysicians would so classify
them. (Although a few would not. Zemach (1992), for example, holds
that there are no universals, but there are types, which are
repeatable particulars--*the cat* may be in many different
places at the same time.)
When it comes to *being predicable*, however, most types
diverge from such classic examples of universals as the property of
*being white* or the relation of *being east of.* They
seem not to be predicable, or at least not as obviously so as the
classic examples of universals. That is, if the hallmark of a
universal is to answer to a predicate or open sentence such as
*being white* answers to 'is white', then most
types do not resemble universals, as they more readily answer to
singular terms. This is amply illustrated by the type talk exhibited
in §2 above. It is also underscored by the observation that it is
more natural to say of a token of a word--'infinity',
say--that it is a token of the word 'infinity' than
that it is an 'infinity'. That is to say, types seem to
be *objects*, like numbers and sets, rather than properties or
relations; it's just that they are not concrete particulars but
are general objects--abstract objects, on some views. If, then,
we follow Gottlob Frege (1977) in classifying all objects as being the
sort of thing referred to by singular terms, and all properties as the
sort of thing referred to by predicates, then types would be
objects. Hence they would not fall into the same category as the
classic examples of universals such as *being white* and
*being east of*, and thus perhaps should not be considered
universals at all. (Although perhaps all this shows is that they are
not akin to properties, but are their own kind of universal.) A
general exception to what has just been claimed about how we refer to
types (with singular terms) might be inferred from the fact that we do
more often say of an animal that it is a tiger, rather than that it is
a member of the species *Felis Tigris*. This raises the
question as to whether the species *Felis Tigris* is just the
property of *being a tiger*, and if it isn't, then what
the relation between these two items is.
Wollheim (1968, p. 76) insightfully puts the point that types seem
to be objects as that the relationship between a type and its tokens is
"more intimate" than that between (a classic example of) a
property and its instances because "for much of the time we think
and talk of the type as though it were itself a kind of token, though a
peculiarly important or pre-eminent one". He (1968, p. 77)
mentions two other differences worth noting between types and the
classic examples of universals. One is that although types and the
classic examples of properties often satisfy the same predicates, there
are many more predicates shared between a type and its tokens than
between a classic example of a property and its instances.
(Beethoven's *Symphony No. 9* is in the same key, has the
same number of measures, same number of notes, etc. as a great many of
its tokens.) Second, he argues that predicates true of tokens in virtue
of being tokens of the type are therefore true of the type (*Old
Glory* is rectangular) but this is never the case with classic
properties (*being white* is not white.)
These considerations may not suffice to show that types aren't
universals, but they do point to a difference between types and the
classic examples of properties.
## 4. What is a Type?
The question permits answers at varying levels of generality. At its
most general, it is a request for a theory of types, the way set
theory answers the question "what is a set?" A general
answer would tell us what sort of thing a type--any
type--is. For example, is it *sui generis*, or a
universal, or perhaps the set of its tokens, or a function from worlds
to sets, or a kind, or, as Peirce maintained, a law? These options are
discussed in §4.1. At a more specific level, "what is a
type?" is a request for a theory that would shed some light on
the identity conditions for specific types of types, not necessarily
all of them. It would yield an account of what a word (or a symphony,
a species, a disease, etc.) is. This is in many ways a more difficult
thing to do. To see just how difficult it is to give the identity
conditions for an individual type, §4.2 considers what a word is,
both because *words* are our paradigm of types, since the
type-token distinction is generally illustrated by means of words, and
because doing so will show that some of the most common assumptions
made about types and their tokens are not correct. It will also
illuminate some of the things we want from a theory of types.
### 4.1 Some General Answers
As mentioned in the previous paragraph, one way a theory of types
might answer the question "what is a type?" is the way set
theory answers the question "what is a set?" If types
*are* universals, as most thinkers assume, then there are as
many theories of types as there are theories of universals. Some
axiomatic theories include Zalta 1983 and Jubien 1988. Since theories
of universals are discussed at length elsewhere in this Encyclopedia,
they will not be repeated here. (See
properties.)
However it might be said that types are *not* universals for
the reasons mentioned in §3 above, where it was urged that types
might be neither properties nor relations but objects, and there is an
absolute difference between objects and properties. Identifying types
as universals would appear to fly in the face of that
consideration.
#### 4.1.1 A Set
It might appear that types are better construed as sets (assuming sets
themselves are not universals). The natural thought is that a type is
the set of its tokens, which is how Quine sometimes (1987, p. 218)
construes a type. After all, a species is often said to be "the
class of its members". There are two serious problems with this
construal. One is that many types have no tokens and yet they are
different types. For example, there are a lot of very long sentences
that have no tokens. So if a type were just the set of its tokens,
these distinct sentences would be wrongly classified as identical,
because each would be identical to the null set. Another closely
related problem also stems from the fact that sets, or classes, are
defined extensionally, in terms of their members. The set of natural
numbers without the number 17 is a distinct set from the set of
natural numbers. One way to put this is that classes have their
members *essentially*. Not so the species *homo
sapiens*, the word 'the', nor Beethoven's *Symphony
No. 9.* The set of specimens of *homo sapiens* without
George W. Bush is a different set from the set of specimens of
*homo sapiens* with him, but the species would be the same even
if George W. Bush did not exist. That is, it is false that had George
W. Bush never existed, the species *homo sapiens* would not
have existed. The same species might have had different members; it
does not depend for its existence on the existence of all its members
as sets do.
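The first objection, that tokenless types collapse into the null set, can be sketched in a few lines. The representation of types and tokens here is hypothetical, purely for illustration:

```python
# A sketch of the first objection to "a type is the set of its
# tokens". Two distinct, very long sentence types, neither of
# which (we may suppose) has ever been uttered or inscribed:
sentence_a = "colourless green ideas sleep furiously " * 1000
sentence_b = "quadruplicity drinks procrastination " * 1000

# Under the set construal, each type is identified with the set
# of its concrete tokens; untokened types get the empty set.
tokens_of = {sentence_a: frozenset(), sentence_b: frozenset()}

# The two sentence types are distinct...
assert sentence_a != sentence_b
# ...yet the construal identifies both with the null set:
assert tokens_of[sentence_a] == tokens_of[sentence_b]
```

The second objection is the modal one in the text: membership in a set is essential to it, whereas a species or a word could have had different tokens.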
Better, then, but still in keeping with an extensional set-theoretic
approach, would be to identify a type as a function from worlds to sets
of objects in that world. It is difficult to see any motivation for
this move that would not also motivate identifying properties as such
functions and then we are left with the question of whether types are
universals, discussed in §3.
#### 4.1.2 A Kind
The example of *homo sapiens* suggests that perhaps a type is
a *kind*, where a kind is not a set (for the reasons mentioned
two paragraphs above). Of course, this raises the question of what a
kind is; Wolterstorff (1970) adopts the *kind* view of types and
identifies kinds as universals. In Wolterstorff 1980, he takes
*being an example of* as undefined and uses it to define
kinds--so that, for example, a possible kind is one such that it is
possible there is an example of it. *Norm kinds* he then defines
as kinds "such that it is possible for them to have properly
formed and also possible for them to have improperly formed
examples" (p. 56). He identifies both species and artworks as
norm-kinds. Bromberger (1992a) also views the tokens of a type as a
quasi-natural kind relative to appropriate projectible ("What is
its freezing point?" e.g.) and individuating questions
("Where was it on June 13th, 2005?"). However,
he doesn't identify the type as the kind itself, since to do so
does not do justice to the semantic facts mentioned in §2 above, that
types are largely referred to by singular terms. Instead he views the
type as what he calls the *archetype* of the kind, defined as
something that models all the tokens of a kind with respect to
projectible questions but not something that admits of answers to
individuating questions. Thus for Bromberger the type is not the kind
itself, but models all the tokens of the kind. We shall see some
difficulties for this view in §5 below.
#### 4.1.3 A Law
It wouldn't do to ignore what the coiner of the type-token
distinction had to say about types. Unfortunately it cannot be
adequately unpacked without an in-depth explication of Peirce's
semiotics, which cannot be embarked upon here. (See the entries on
Charles Sanders Peirce and
Peirce's theory of signs.)
Peirce said types "do not exist", yet they are
"definitely Significant Forms" that "determine
things that do exist" (4.423). A type, or "legisign"
as he also calls it, "has a definite identity, though usually
admitting a great variety of appearances. Thus, & *and* and
the sound are all one word" (8.334). Elsewhere he tells us that
a type is "a general law that is a sign. This law is usually
established by men. Every conventional sign is a legisign. It is not a
single object, but a general type which...shall be
significant. ...[E]very Legisign requires Sinsigns"
(2.246). Sinsigns are tokens. (It should be mentioned that for Peirce
there is actually a trichotomy among types, tokens and *tones*,
or qualisigns, which are "the mere quality of appearance"
(8.334).) Thus types have a definite identity as signs, are general
laws established by men, but they do not exist. Perhaps all Peirce
meant by saying they do not exist was that they are "not
individual things" (6.334), that they are, instead what he calls
"generals"--not to be confused with universals. What
he might have meant by a "general law" is
uncertain. Stebbing (1935, p.15) suggests "a rule in accordance
with which tokens ... could be so used as to have meaning
attached to them". Greenlee (1973, p. 137) suggests that for
Peirce a type is "a habit controlling a specific way of responding
interpretatively." Perhaps, then, types are of a psychological nature.
Obviously two people can have the same habit, so *habits* also
come in types and tokens. Presumably, types are then habit types. This
account may be plausible for words, but it is not plausible for
sentences, because there are sentences that have no tokens: if
Φ and Ψ are sentences, then so is (Φ & Ψ), and it is
clear that for Peirce "every [type] requires [tokens]" (2.246). And it
is much less plausible for non-linguistic types, like types of
beetles, some of which have yet to be discovered.
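The conjunction rule just cited generates ever-longer sentence types without bound, so most of them can never have tokens. A minimal sketch, using strings as stand-ins for sentence types:

```python
# If P and Q are sentences, so is (P & Q). Iterating the rule
# doubles (plus one) the number of conjunctions each time, so
# sentence types quickly outrun any possible stock of tokens.
def conjoin(p, q):
    return f"({p} & {q})"

s = "snow is white"
for _ in range(3):
    s = conjoin(s, s)

print(s.count("&"))  # 7 conjunctions after three iterations
```

Since the iteration can continue indefinitely, a view on which every type requires tokens cannot accommodate all the sentence types the rule licenses.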
### 4.2 What is a Word?
No general theory of types can tell us what we often want to know
when we ask: what is a species, a symphony, a word, a poem, or a
disease? Such questions are just as difficult to answer as what a type
is in general. Even if types were sets, for example, set theory by
itself will not answer the burning question of which set a species is.
One would then have to go to biology and philosophy of biology to find
out whether a species is (i) "a set of individuals closely
resembling each other" as Darwin (1859, p. 52) would have it, (ii)
a set of "individuals sharing the same genetic code" as
Putnam (1975, p. 239) would have it, (iii) a set of "interbreeding
natural populations that are reproductively isolated from other such
groups", as Mayr (1970, p. 12) would have it, or (iv) a set
comprising "a lineage evolving separately from others and with
its own unitary evolutionary role and tendencies", as Simpson
(1961, p. 153) would have it. Similarly, if there is a question of
copyright infringement, one had best look to industry standards and
aesthetics for what a film or a song is, and not set theory. In
general, questions such as "what is a poem, a phoneme, a disease,
a flag,....?" are to be pursued in conjunction with a
specific discipline, and not within the confines of a general theory of
types. It is largely up to linguistics and the philosophy of it, e.g.,
to determine the identity conditions for phonemes, allophones, cardinal
vowels, LF representations, tone groups and all the other linguistic
types mentioned in §2 above.
#### 4.2.1 Identity Conditions for Words
It's instructive to consider what our paradigm of a type is--a
word. It will reveal how complicated the identity conditions are for
even one specific type, and help to dispel the idea that tokens are to
types as cookies are to cookie cutters. It will also show what we
desire from a theory of types, by exhibiting the facts that a theory
of types has to accommodate. We illustrated the type-token distinction
by appealing to *words*, so presumably we think we know at
least roughly what a word type is. Unfortunately, everyone seems to
think they know, but there is massive disagreement on the matter in
philosophy. However, as urged in the preceding paragraph, it is
crucial to rely on linguistics when we consider what a word is. When
we do we find that there are different linguistic criteria for what a
word is, and a good deal of the disagreement can be chalked up to this
fact. McArthur 1992's *The Oxford Companion to the English
Language* (pp. 1120-1) lists eight: orthographic, phonological,
morphological, lexical, grammatical, onomastic, lexicographical and
statistical-- but adds that more can be demarcated. *There are
different types of types of words.* However deserving of attention
each of these is, it will be useful to focus on just one, and I will
do so in what follows. There is an important and very common use of
the word 'word' that lexicographers and the rest of us use
frequently. It is, roughly, *the sort of thing that merits a
dictionary entry*. (Roughly, because some entries in the
dictionary, e.g., 'il-,' '-ile,' and
'metric system,' are not words, and some words, e.g. many
proper names, do not get a dictionary entry.) This notion was at play
in our opening remarks in §2 about Shakespeare's vocabulary
containing 30,000 words, and the twenty spellings and eighteen senses
of the noun 'color'/'colour', the verb
'color'/'colour', and four current
pronunciations of the noun 'schedule'. These examples show
(in this ordinary sense of 'word') that the same word can
be written or spoken, can have more than one correct spelling, can
have more than one correct spelling at the same time, can have more
than one sense at the same time and can have more than one correct
pronunciation at the same time. It also shows that different words can
have the same correct spelling and pronunciation; further obvious
examples would show that different words can have the same
sense--e.g. English 'red' and French
'rouge'. A theory of types, or of word types, that cannot
accommodate this notion of a word is worthless. In what follows, I
shall use 'word' in this sense.
#### 4.2.2 What a Theory of Words Might Tell Us
Ideally, a theory of words and their tokens should tell us not only
(i) what a word is (in the sense indicated), but (ii) how a word is to
be individuated, (iii) whether there is anything all and only tokens of a
particular word have in common (other than being tokens of that word);
(iv) how we know about words; (v) what the relation is between words
and their tokens; (vi) what makes a token a token of one word rather
than another; (vii) how word tokens are to be individuated; and (viii)
what makes us think a particular is a token of one word rather than
another. These questions are distinct, although they are apt to run
together because the answer to one may give rise to answers to others.
For example, if we say in answer to (iii), that all tokens of a
certain word (say, 'cat') have something in common besides
being tokens of that word--*they are all spelled
'c'-'a'-'t'*, for
example--then we may be inclined to say to (vi) that
*spelling* makes a word token a token of 'cat'
rather than some other type; and to (vii) that word tokens of
'cat' are to be individuated on the basis of their being
spelled 'c'-'a'-'t'; and to (viii)
that we think something is a token of 'cat' when we see
that it is spelled 'c'-'a'-'t';
and to (ii) that the word 'cat' itself is to be
individuated by its spelling; and to (i) that a word type is a
sequence of letters--e.g. the word 'cat' just
*is* the sequence of letters <'c',
'a', 't'>; and to (iv) that we know about a
particular word, about what properties it has, by perceiving its
tokens: it has all the properties that every one of its tokens has
(except for properties types cannot have, e.g., *being
concrete*).
#### 4.2.3 Orthography
The advantage of starting with (iii) is that if there is some
nontrivial property that all tokens of a word (in the sense indicated)
have in common, then perhaps we can use it to individuate the tokens,
and also to get a handle on what the type is like and on how we know
what the type is like. Unfortunately, it is not spelling, contrary to
what many philosophers seem to think. Stebbing, e.g., considers the
inscribed word a *shape.* But not even the linguist's much
narrower notion of an *orthographic word* ('a visual sign
with a space around it') requires a canonical spelling. We have
seen that not all tokens of 'color' have the same
spelling, even when they are spelled correctly, which they sometimes
are not. Not all tokens are spelled at all--spoken tokens are
not. Moreover, two words can have the same spelling, as the noun
'color' and verb 'color' prove, or to take a
different example, the noun 'down' from German meaning
"the fine soft covering of fowls" and the different noun
'down' from Celtic meaning "open expanse of elevated
land". (They are not the same word with two senses, but different
words with different etymologies.) Notice that even if, contrary to
fact, all tokens of a word had the same spelling and we concluded that
the word type itself just is the sequence of letter types that compose
it, we would have analyzed word types in terms of letter types, but
since we are wondering what types are in the first place, we would
still need an account of what letters are since they are types
too. Providing one is surprisingly difficult. Letters of the alphabet
such as the letter 'A' are not just *shapes*, for
example, contrary to what is implicit in Stebbing 1935 and more
explicit in Goodman and Quine 1947, because Braille and Morse code
tokens of the letter 'A' cannot be said to have "the
same shape", and even standard inscriptions of the letter
'A' do not have the same shape--in either a Euclidean
or a topological sense--as these examples obtained from a few
minutes in the library illustrate:
>
> ![letter A forms](fig-a.jpg)
>
Moreover, the letter 'A' has a long history and many of
its earlier "forms" would not be recognizable to the modern
reader.
#### 4.2.4 Phonology
If we switch instead to a *phonemic* analysis of words, as
being more fundamental, similar problems arise. Not all tokens of a
word are composed of the same *phonemes*, because some tokens
are inscriptions. But even ignoring inscriptions, the example of the
two 'down''s shows that neither can we identify a
word with a sequence of phonemes. This particular difficulty might be
avoided if we identify a word with a phonemic analysis paired with a
sense. But this is too strong; we saw earlier that the noun
'color' has eighteen senses. Moreover,
'schedule' has more than one phonemic analysis. A phoneme
itself is a type with tokens, and so we'd also need an account of what
a phoneme is, and what its tokens have in common (if anything). Saying
what a phoneme is promises to be at least as hard as saying what a
letter is. Phonology, the study of phonemes, is distinct from
phonetics, the scientific study of speech production. Phonetics is
concerned with the physical properties of sounds produced and is not
language relative. Phonemes, on the other hand, are language relative:
two phonetically distinct speech tokens may be classified as tokens of
the same phoneme relative to one language, or as tokens of different
phonemes relative to another language. Phonemes are theoretical
entities, and abstract ones at that: they are sometimes said to be
*sets of features*. The bottom line is that the
*phonological* word is not the lexicographical one either.
It might be thought that we started at too abstract a level--that
if we think there is a hierarchy of types of words, we started "too
high" on the hierarchy and we should start lower on the
hierarchy. That is, that we should first gather together those tokens
that *are* phonetically (and perhaps semantically) identical on
the grounds that this is a perfectly good notion of a
*word*. But notice: this would mean that different dialects of
the same language would have far fewer "words" in common than one
would have supposed, and it would misclassify many words because, for
example, according to Fudge (1990, p. 39) a Cockney 'know'
is like the Queen's 'now'; the Queen's 'know'
is like Scottish 'now'; and a Yorkshire 'know'
is like the Queen's 'gnaw'. Worse, even within the very
same idiolect it would distinguish as different "words" what one would
have thought were the same word. For example, the word
'extraordinary' is variously pronounced with six, five,
four, three or even two syllables by speakers of British English.
According to Fudge (1990, p. 40) it ranges "for most British English
speakers from the hyper-careful
[ˌekstrəˈʔɔːdɪnərɪ] through the fairly
careful [ɪkˈstrɔːdnrɪ] to the very colloquial
[ˈstrɔːnrɪ]." That is, the very same person may use any of
five pronunciations for what should be considered the same word. Only
an absolute diehard of this "bottom-up" approach would insist on
distinguishing as different words representations for the same
idiolectal word. Not only would a phonologist take this as excessively
complicated, but the representation types themselves can receive
realizations that are acoustically very different (for the small child
and the man may speak the same idiolect). Fudge (1990, p. 31) assures
us that "It is very rare for two repetitions of an utterance to be
exactly identical, even when spoken by the same person." Pretty soon,
each word token would have to count as tokens of different
"words".
The example of [ˈstrɔːnrɪ] demonstrates that there may be
*no* phonetic signal in a token for every phoneme that is
supposed to compose the word: it is "missing" several syllables! This
is also demonstrated by reflection on casual speech: [jeet?] for
'did you eat?'. No wonder, then, that many phoneticians
have given up on the attempt to reduce phonological types to
acoustic/articulatory types. (See Bromberger and Halle 1986). Even the
physicalist Björn Lindblom (1986, p. 495) concedes that "for a given
language there seems to be no unique set of acoustic properties that
will always be present in the production of a given unit (feature,
phoneme, syllable) and that will reliably be found in all conceivable
contexts."
However, the final nail in the coffin for the suggestion according to
which all tokens of the same word have the "same sound" is that words
can be mispronounced. As Kaplan (1990) has argued, a word can suffer
extreme mispronunciation and still be (a token of) the same word. He
asks us to imagine a test subject, who faithfully repeats each word
she is told. After a time, we put filters on her that radically alter
the results of her efforts. Nonetheless, we would say she is saying
the word she hears. Kaplan concludes that differences in sound between
tokens of the same word can be just about as great as we would
like. Notice that in such circumstances *intention*--what
word the test subject intended to produce--is key. This suggests
that perhaps what all tokens of a word, say, 'color', have
in common is that they were produced as the result of an intention to
produce a token of the word 'color'. Unfortunately,
counterexamples are not hard to manufacture. (A clear phonemic example
of 'supercalifragilisticexpealadocious' in English would
probably not count as a token of 'color', for example.)
Counterexamples aside, it would be putting the cart before the horse
to try to explain what the word 'color' is by appealing to
the intention to produce a token of the word 'color'. It
would be like trying to explain what a fork is by appealing to the
intention to produce a fork. Intentions are important in helping to
identify which type a token is a token of--question
(viii)--but will not help us with what the type is--question
(i)--and so I shall ignore them in what follows.
#### 4.2.5 Conclusion
The upshot of all this is that there is no nontrivial, interesting,
"natural", projectible property that all tokens of a word have in
common, other than being tokens of that word (in the sense of
'word' indicated). Tokens are all the same word, but they
are not all the same. That is, the answer to (iii) is no. What then,
of the other questions, (i)-(viii)? They become more difficult to
answer. Wetzel (2002) attempts to answer them. The primary conclusion
of Wetzel 2002 is that words are *theoretical* entities,
postulated by and justified by linguistic theory. Words, in the sense
indicated, are individuated by a number of variables, including
orthography, phonology, etymology, grammatical function and
sense(s). As for their tokens, they are apt to have some but not all
of the properties of the type. And, as the story from Kaplan shows,
tokens may even be quite deformed. These considerations impact
significantly on the relation between types and their tokens,
discussed in the next section, §5.
## 5. The Relation between Types and their Tokens
The relation between types and their tokens is obviously dependent on
what a type is. If it is a set, as Quine (1987, p. 217) maintains, the
relation is *class membership*. If it is a kind, as
Wolterstorff maintains, the relation is *kind-membership*. If
it is a law, as Peirce maintains, it is something else again, perhaps
*being used in accordance with*. (Although Peirce also says a
token is an "instance" of a type and that the token
signifies the type.) Nonetheless, it has often been taken to be the
relation of *instantiation, or exemplification*; a token is an
instance of a type; it exemplifies the type. (Not that every instance
of a type is a token--e.g. capital 'A', small
'A', and all the other types of 'A's tokened
in the display in §4.2 above may be said to be instances of the
letter 'A'.) As with other universals, there are two
versions of this relation, Platonic and Aristotelian. Although the two
versions of property realism are discussed at length under this
encyclopedia's entry for properties, a few remarks about the type
versions are in order.
According to Platonic versions of type realism, e.g., Bromberger 1989,
Hale 1990, Katz 1981 and Wetzel 2002, the type is an abstract object,
not located anywhere in space-time, although its tokens are. This
version appears to give rise to serious epistemological
problems--we don't see or hear the type, it isn't located
anywhere in space-time, so how do spatiotemporal creatures such as
ourselves know it exists, or what properties it has? Admittedly, we
see and hear tokens, but how are they a guide to what the type is
like? One answer, given for example, by Wolterstorff, for whom we saw
in §4 that a type is a norm-kind, is that ordinary induction from
tokens would give us knowledge of types, at least in the case of
instantiated types. Bromberger (1992a, p. 176) claims that linguists
"often impute properties to types after observing and judging
some of their tokens and seem to do this in a principled way"
and calls the principle that licenses this inference the *Platonic
Relationship Principle.* More specifically, he proposes (1992a)
that the type, as the archetype of the quasi-natural kind which
comprises the tokens, has just those projectible properties that all
the tokens have. He has in mind properties such as the same underlying
phonological structure, for words, and the same boiling point, for
elements.
However, as we saw in §4, generally there are no such properties
had by all and only tokens of a type, at least in the case of
words--not same phonological structure, nor same sense nor same
spelling. Not all tokens have any such natural projectible property
(*except* for the property of being tokens of the same
type). It should be clear from §4 that the cookie cutter
model--the idea of the type as just a perceptible pattern for
what all the tokens look like--does not work. Goodman (1972,
pp. 437-8) follows Peirce in using the word 'replica' to
apply to all tokens of the same type (although Peirce seemed to think
they were replicas of the type, whereas Goodman, being a nominalist,
cannot think this), but not all tokens resemble each other in any
ordinary sense beyond being tokens of the same type (although of
course some do). Goodman himself is clear about this, for he notes
there that "Similarity, ever ready to solve philosophical
problems and overcome obstacles, is a pretender, an impostor, a
quack.... Similarity does not pick out inscriptions that are
'tokens of a common type', or replicas of each other. Only
our addiction to similarity deludes us into accepting similarity as
the basis for grouping inscriptions into the several letters, words,
and so forth." But others, e.g. Stebbing (1935, p. 6) and Hardie
(1936), claim that all spoken tokens are more or less similar to each
other. Because they are not, Wetzel (2002, 2008) proposes that
since the only property all the tokens of a type generally share is
being tokens of the type, one of the primary justifications for
positing word types is that being a token of the word
'color,' say, is the glue that binds the considerable
variety of space-time particulars together. The type is thus a very
important theoretical object, whose function is to unify all the
tokens as being "of the same type"; in accordance with the
Platonic Relationship Principle, the type has properties based on the
properties of *some of* its tokens, but in a complex
way--in addition, the tokens have some of *their*
properties in virtue of what properties the type has.
In Aristotelian versions of exemplification, such as Wollheim 1968 and
Armstrong 1978, the type has no independent existence apart from its
tokens. It is "in" each and every one of its tokens, and
so can be seen or heard just as the tokens can be. This avoids the
epistemological problem mentioned in the preceding paragraph, but
makes it hard to explain how some types--such as very long
sentences--can have no tokens.
As against instantiation of any sort, Stebbing (1935, p. 9) argues
that a token is not an instance of a type, because "the type is
a logical construction out of tokens having similarity or conventional
association [as the inscribed word with the spoken]. It follows that
the type is logically dependent upon the tokens, in the sense that it
is logically impossible to mention the type-word without using a
token, and further, the meaning of the type has to be defined by
reference to the tokens." These claims are quite
controversial. For example, it is clear one can refer to a type word
without using a token *of it*--one can say "the word
a token of which is the first word on the page".
Another alternative to *exemplification* is
*representation*. According to Szabo (1999, p. 150), types are
abstract particulars, as with Platonic realism, but tokens
*represent* their types, just as "paintings, photographs,
maps, numerals, hand gestures, traffic signs and horn signals"
represent, or "stand for" their representata. A word token
of 'horse' represents the word 'horse', which
in turn represents horses. Just as a correct map of the planet can
provide us with knowledge of the planet, so too a token can provide us
with knowledge of properties of the type, thus addressing the
epistemological problem. The representation view gives rise to a
problem however, for it turns out that what we have been calling word
tokens aren't words at all on this view, anymore than a map of a
planet is a planet, and this runs contrary to our usual thinking.
## 6. What is a Token?
It might seem that tokens are less problematic than types, being
spatio-temporal particulars. But certain complications should be
noted. (We continue to use linguistic examples, but the remarks hold
true for tokens generally.)
* As Kaplan (1990, p. 97) pointed out, a token of a word could be
suitably circumscribed empty space (e.g., in a piece of cardboard
after letters have been cut out of it).
* It would seem that a token might also be a mental
particular--because a poem might be composed prior to its being
read or written down--although there is disagreement over whether
such a mental particular is to count as a token.
* It will generally be the case that some tokens of the type
resemble in their outward physical appearance some other tokens of the
type, but as we saw in §§4-5, it is generally not the case that
*all* the tokens of the same type resemble one another in
appearance. (Recall the example of the letter 'A' in
§4.)
* Even being similar in appearance (say by sound or spelling) to a
canonical exemplar token of the type is not enough to make a physical
object/event a token of that type. The phonetic sequence [Ah
'key ess 'oon ah 'may sah] is the same phonetic
sequence (type) whether spoken in Spanish or in Yiddish. Yet if a Spanish speaker uses it
to say *a table goes here*, she has not, in addition, said
*a cow eats without a knife* in Yiddish. She has not said
anything in Yiddish, however phonetically similar what she said might
be to a sentence token of Yiddish. So her token is a token in Spanish,
not Yiddish. Meaningful tokens are tokens in a language.
* Along the same lines, being a physical object/event that is
similar in appearance (say by sound or spelling) to a canonical
exemplar token of the type is not even enough to make the physical
object/event *meaningful*, because, as Putnam (1981, pp. 1-2)
among others has argued, the token must have been appropriately
intentionally produced.
* If it is not already clear from the foregoing, it should be noted
that a physical object that is a token of a type is not one
*intrinsically*--merely by being a certain sequence of
shaped ink marks, say. Just as being a brother is not an intrinsic
property of brothers because one is a brother only relative to another
person, so a token is a token only relative to a type, and to a
language, perhaps to an orientation, and perhaps, as Hugly and Sayward
(1981 p. 186) have argued, to a tokening system (e.g., Morse Code),
which they define as "a set of instructions that tell how, for
any given expression, a speaker of the language could construct a
perceptible particular that tokens that expression given that the
speaker had enough physical and mental resources...".
## 7. Do Types Exist?
### 7.1 Universals
Since types are usually thought to be universals, the debate over
whether they exist is as longstanding as the debate over universals,
and debaters fall into the same camps. Realists say they do, as we saw
in §5 which surveyed several varieties of realism. Realism's
traditional opponents were nominalists and
conceptualists. Nominalists, who renounce universals and abstract
objects, say they don't. (See, e.g., Goodman and Quine 1947, Quine
1953, Goodman 1977 and Bromberger 1992b). Conceptualists argued that
there are no general things such as the species *homo sapiens*;
there are only general *ideas*--that is, ideas that apply
to more than one thing. Applied to words, the thesis would be that
words are not abstract objects "out there", but objects in
the mind. Their existence then, would be contingent on having been
thought of. While this contingency may have a good deal of
plausibility in the case of linguistic items, by itself conceptualism
is just a stopgap measure on the issue of types and tokens
generally. For ideas themselves are also either types or tokens (as
evidenced by the fact that two people sometimes have the same
idea). So either the conceptualist is proposing that types are idea
types--which would be a species of realism--or she is
proposing that there are no types, only mental idea particulars in
particular persons, which is a version of nominalism. Conceptualism,
therefore, will be ignored in what follows.
### 7.2 Realism
Realism is the most natural view in the debate with nominalism,
because as we saw in some detail in §2 type talk is
ubiquitous. That is, we talk as though there are types in philosophy,
science and everyday life. To say that we talk as though there are
types is not to invoke the traditional argument for universals, which
is that a sunset and a rose are both red, so they have something in
common; and this something can only be the property of *being
red*; so properties exist. Quine (1953) convinced many at
mid-century that this traditional argument for universals, which
relies on predicates referring to something, fails. He objected that
"the rose is red because the rose partakes of redness" is
uninformative--we are no better off in terms of explanatory power
with such extra objects as *redness* than we are without them;
perhaps a rose's being red and a sunset's being red are just brute
facts. Rather, to say we talk as though there are types is, as we saw
in §2, to appeal to the fact that in our theories we frequently
use singular terms for types, and we quantify over them. As we saw,
Frege emphasized that singular term usage is an indicator of
*objecthood*, and Quine stressed that we are ontologically
committed to that over which we quantify. Such considerations led
Quine himself (1987, p. 217) to hold that expression types such as
'red' exist, even while he denied that *redness*
does. Since at least on the face of it we are committed to types in
many fields of inquiry, therefore, it is incumbent upon the nominalist
to "analyze them away". (Or to maintain that all theories
which appear to refer to types are false--but this is a pretty
radical approach, which will be ignored below.)
### 7.3 Nominalism
Realism is not without its problems, as was noted in §5
above. Also favoring nominalism is Occam's principle which would have
us prefer theories with fewer sorts of entities, other things being
equal. The main problem for nominalists is to account for our
apparent theoretical commitment to types, which, whatever types are,
are not spatio-temporal particulars (according to nearly
everyone). Traditional nominalists argued (as their name implies) that
there are no general things, there are only general *words*,
and such words simply apply to more than one thing. But this is not a
solution to the current problem, presupposing as it does that there
are word types--types are the problem. (Attempts to avoid this
are given by Goodman and by Sellars; see below.) So-called class
nominalists hold that a word type is just the class, or set, of its
tokens. But this is unsatisfactory because, first, as we saw in
§5, classes are ill-suited for the job, since classes have their
membership and their cardinality necessarily, but how many tokens a
type has is usually a contingent matter. (For the same reason,
mereological sums of tokens are unsuited for the job of types, as they
have their *parts* essentially.) And second, classes are
abstract objects too, so it is hard to see how this is really a form
of nominalism about abstract objects at all.
Initially more promising is the nominalistic claim that the surface
grammar of type talk is misleading, that talk of types is just
shorthand for talk of tokens and is thus harmless. To say 'The
horse is a mammal' is just to say 'All horses are
mammals'; to say 'The horse is a four-legged animal'
is to say, as Frege himself (1977) suggested, 'All properly
constituted horses are four-legged animals'. The idea is to
"analyze away" apparent references to types by offering
translations that are apparently type-free, and otherwise
nominalistically acceptable. The problem is how to do this for each
and every reference to or quantification over a type/types. Given the
ubiquity of such references/quantifications, only the procurement of
some sort of systematic procedure could assure us it can be done.
However, chances of formulating a systematic procedure appear slight,
in view of the following obstacles that would have to be overcome.
* If the "translation" is not to turn out to be
trivially true because there are no tokens, it must be the case that
all types have tokens. We saw in §5 there is reason to deny this,
because it would falsify the law of syntax that says that if
φ, ψ are sentences, then so is (φ & ψ), since there
are only finitely many sentence tokens.
* The examples of 'The horse is a mammal' and 'The
horse is four-legged' suggest that 'The Type is P'
is to be analyzed as 'All tokens, or all normal tokens, are
P'. But this won't work. The noun 'colour' is
spelled 'color' (among other ways), but not all properly
formed tokens of it are. 'Colour' also means *of the
face or skin* (among other things), but not only do not all tokens
of it mean *of the face or skin*, probably most do not, and the
average one does not.
* These semantic facts show that 'The Type is P' cannot
be analyzed as 'Either all tokens are P, all properly formed
tokens are P, most tokens are P or the average token is P'. Nor
is there any point in searching for some other magic statistical
formula that *would* express exactly how many tokens have to be
P for the Type to be P. This is because some of the properties of the
Type are collective properties--e.g., when it is said that
"The grizzly bear, *Ursus Horribilis*, had at one time a
U.S. range of most of the West, and numbered 10,000 in California
alone, but today its range is Montana, Wyoming and Idaho and it
numbers less than 1000." Having a range of most of the West is
true of no individual bear.
* It is very difficult even to find a nominalistic paraphrase for
"Old Glory had twenty-eight stars in 1846 but now has
fifty", as Wetzel (2008) argues.
* So far we have only been considering analyzing away singular
terms. Quantifications are indeed a nightmare. Here is Quine and
Goodman's (1947 p. 180) nominalized version of "There are more
cats than dogs": "Every individual that contains a bit of
each cat is bigger than some individual that contains a bit of each
dog" where a bit "is an object that is just as big as the
smallest animal among all cats and dogs". Ingenious though their
strategy is, it offers no help for nominalizing "Of 20,481
species examined, two-thirds were secure, seven percent were
critically imperiled, and fifteen percent were vulnerable".
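The worry in the first bullet point, that the recursive law of syntax outruns any finite stock of tokens, can be illustrated with a small sketch (Python, added for illustration only; not part of the entry's argument): the conjunction rule keeps generating new, longer sentence types, so no finite collection of tokens can cover them all.

```python
# Sketch: the rule "if phi and psi are sentences, so is (phi & psi)"
# generates a new, longer sentence type at every application.
def conjoin(phi, psi):
    return "(" + phi + " & " + psi + ")"

s = "snow is white"
for _ in range(3):
    s = conjoin(s, "grass is green")

print(s)  # three nested conjunctions, each a distinct sentence type
```

Nothing stops the recursion, so the rule determines infinitely many sentence types, even though only finitely many of them will ever be tokened.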
#### 7.3.1 Goodman's Nominalism
I have been writing as though it were easy to pick out and quantify
over tokens of a type without referring to the type. Sometimes it is:
'The horse...'-- 'All
horses...'. But this will not generally be the
case. Consider the noun 'color'. The natural way to pick
out its tokens is "all tokens of the noun
'color'", but obviously this will not do as a
nominalistic paraphrase, for it contains a reference to a
type. Goodman (1977) suggests a systematic way of paraphrasing and a
general procedure for substituting nominalistic paraphrases for
linguistic type-ridden sentences. In 1977 (p. 262) he claims that
"'Paris' consists of five letters" is short for
"Every 'Paris'-inscription consists of five
letter-inscriptions". The idea seems to be to
replace singular terms by predicates, which nominalists such as
Goodman think carry no ontological commitment. So instead of "an
inscription of 'color'" write "a
'color'-inscription." It seems pretty clear,
however, that, based on the rules of quote-names for words, this
predicate still retains a singular term for a type, the word
'color' just as clearly as "a George W. Bush
appearance" still contains a reference to Bush. That this is so
is even more obvious in the case of "an 'a cat is on the
mat'-inscription". So the question is how we might
identify these tokens grammatically but without referring to the noun
'color' itself and still say something true and (in some
appropriate sense) equivalent. Is there perhaps some other way of
quantifying over tokens of a word, without referring to the word or to
any other type? The fact that tokens are all "the same
type" suggests they are all "the same" somehow,
which begets the idea that the type must embody certain
*similar* features that all and only its tokens have, such as
spelling, or sense, or phonological structure or some combination of
them. This is a beguiling idea, until one tries to find such a
feature, or features, amid the large variety of its tokens--even
the well-formed tokens. As we saw in §4, there is no such feature
(consider again 'color' and 'schedule'). They
demonstrate that e.g. neither same spelling, same sense, nor same
pronunciation prevail. In any event, such possible defining features
involve reference to types: letter types in the spellings, phoneme
types in the pronunciation. These too would have to be "analyzed
away" in terms of quantifications over particulars. It might be
thought that Sellars solved this problem by appealing to the notion of
a *linguistic role*, which Loux (1998, p. 79) defines as:
two word (tokens) have the same linguistic role if they
"function in the same way as responses to perceptual input; they
enter into the same inference patterns; and they play the same role in
guiding behavior". It is dubious whether this notion can be
unpacked without referring to abstract objects (same inference
patterns?), but in any event it cannot be used to pick out all tokens
of a word, as we have been using the word 'word'. The
reason is that 'red' and 'rouge' are different
words in our sense, but their tokens play the same linguistic role for
Sellars.
But even if expressions like "a
'color'-inscription" did not contain singular terms,
Goodman's suggestion that whatever is true of the type is true of all
the tokens has two fatal defects. First, it would turn truths into
falsehoods. 'Paris' consists of five letters, as does
'color', but not every 'color'-inscription
consists of just five letter-inscriptions since some are spelled
'c'-'o'-'l'-'o'-'u'-'r'.
(As for denying that these are inscriptions of the word
'color', see "Orthography" in §4 above.)
It doesn't even work for the word 'Paris', as it has the
forms 'Pareiss' and 'Parrys' also. Second,
Goodman's technique of replacing singular terms by predicates only
works if we are employing a realist semantics. That is, the key to his
program is that for every word in the dictionary and every sentence in
the language there corresponds a unique predicate that is true of just
the tokens of that word/that sentence. (Without such predicates, true
statements apparently referring to types would be short for nothing at
all.) But this is only plausible on a realist semantics. If
'predicate' meant 'predicate token' there
would not even be enough predicates for every word in the dictionary,
much less every sentence in the language. That is, it is certainly
false that for every word α in the dictionary, there is an
actual predicate token of the form 'is an
α-inscription'. For more details on the problems with
Goodman's strategy, see Wetzel (2000).
#### 7.3.2 Sellars's nominalism
Sellars's suggestion is rather similar to Goodman's, and faces similar
difficulties, but is sufficiently different to be worth
mentioning. Loux (1998, pp. 75-83), for one, argues that Sellars
(1963) achieves the best nominalist account available by overcoming
critical objections to Carnap's "metalinguistic" account
as follows. Carnap (1959, pp. 284-314) had suggested that all claims
involving apparent reference to abstract objects, such as "Red
is a color", are systematically to be understood as
metalinguistic claims about the word involved ("The word
'red' is a color predicate"). There are two obvious
objections to Carnap's suggestion. The first is that we still have
word types being referred to; the second is that translation of
"Red is a color" and "The word 'red' is
a color predicate" into French, say, will not result in
sentences that are equivalent, since the latter ("Le mot
'red' est un prédicat de couleur") will still
refer to an English word, but the former will not. To counter the
first objection, Sellars (1963, pp. 632-33) claims "the word
'red'" is a *distributive singular term*,
which typically consists of 'the' followed by a common
noun that purports to name a kind--e.g., 'the lion'
in "The lion is tawny"--and that use of it results in
a generic claim that is about token lions, not the
type--"All lions are tawny"--since "The K
is f" is logically equivalent to "All Ks are
f". Note that this is similar to Goodman's suggestion and as
such is subject to the same criticism as is Goodman's: the two
sentence forms said to be logically equivalent are not. See the second
bullet point three paragraphs above. Nor is there some other simple
and straightforward logical equivalence that does the job; see the
third and fourth bullet points above. At best this is a suggestion
that some of the sentences in question--the ones that do not
contain "collective" predicates like
'extinct'--are logically equivalent to generic
claims--e.g. "Lions are tawny"--which are
capable of being true while admitting of exceptions. However, that
there is a uniform semantics (even a very complex one) for generic
claims, one moreover that "analyzes away" kind talk, is a
strong empirical claim about language, one that does not receive
strong support from current efforts to analyze generic claims. (See
Carlson and Pelletier 1995.)
To counter the second objection (about non-equivalent translations)
Sellars suggests we introduce a new sort of quotation device, dot
quotation, that would permit translation of the quoted word. Then,
e.g. "The word *red* is a color predicate"
translates as "Le mot *rouge* est un prédicat
de couleur". The justification is that 'red' and
'rouge' are functionally equivalent--they play the
same linguistic role. As mentioned above, this means that they have
the same causes (of perceptual stimuli, e.g.) and effects (conduct,
e.g.) and function similarly in inferential transitions. Again,
whether Sellars can unpack the notion of "same linguistic
role" without appealing to any universals remains an open
question. Less open is the question of how successful such an approach
is to dealing with talk of words, e.g., "The word
'red' consists of three letters". If we apply
Sellars' analysis systematically, this becomes ""The word
'red'" consists of a three-lettered predicate"
which is just false; if instead we reckon that "the word
'red'" is already a distributive singular term and
apply his analysis to it, getting "The word *red* is a
three-lettered predicate", this then translates incorrectly as
"Le mot *rouge* est un prédicat de trois
lettres". If we stick instead with regular quotation, which
resists translation, then we cannot appeal to "same linguistic
role" to say what class of things "the word
'red'" distributes over. In any event, Sellars's
account also faces the difficulty noted above with Goodman's account:
it is only plausible on a realist semantics, because if
'predicate' meant 'predicate token' there
would not even be enough predicates for every word in the dictionary,
much less every sentence in the language. (For other problems with
Sellars's account, see Loux 1978: 81-85.)
## 8. Occurrences
As mentioned in §1 above, there is a related distinction that
needs to be mentioned in connection with the type-token
distinction. It is that between a thing, or type of thing, and (what
is best called) an *occurrence* of it--where an occurrence
is not necessarily a token. The reason the reader was asked above to
count the words in Stein's line *in front of the reader's
eyes*, was to ensure that tokens would be counted. Tokens are
concrete particulars; whether objects or events they have a unique
spatio-temporal location. Had the reader been asked to count the
words in Stein's line itself, the reader might still have correctly
answered either 'three' or 'ten'. There are
exactly three word types, but although there are ten word tokens in a
token copy of the line, there aren't any tokens at all in the line
itself. The line itself is an abstract type, as is the poem in which
it first appeared. Nor are there ten word types in the line, because
as we just said it contains only the three word types,
'a,' 'is' and 'rose,' each of
which is unique. So what are there ten of? *Occurrences* of
words, as logicians say: three occurrences of the word (type)
'a,' three of 'is' and four of
'rose'. Or, to put it in a more ontologically neutral
fashion: the word 'a' occurs three times in the line,
'is' three times and 'rose' four times.
Similarly, the variable 'x' occurs three times in the
formula '∃x (Ax & Bx)'.
Now this may seem impossible; how can one and the same thing occur
more than once without there being two tokens of it? Simons (1982)
concludes that it can't. Wetzel (1993) argues that it is useful to
distinguish objects from occurrences of them. For example, in the
sequence of numbers <0,1,0,1> the very same number, the number
one, occurs twice, yet its first occurrence is distinct from its
second. The notion of *an occurrence of x in y* involves not
only x and y, but also how x is situated in y. Even a
*concrete* object can occur more than once in a
sequence--the same person occurs twice in the sequence of New
Jersey million dollar lottery winners, remarkably enough. If we think
of an *expression* as a sequence, then the air of mystery over
how the same identical thing can occur twice vanishes. Does this mean
that, in addition to word types and word tokens, word occurrences must
also be recognized? Not necessarily; we can unpack the notion of an
occurrence using "occurs in" if we have the notion of a
sequence available; see Wetzel 1993 for details.
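The type/occurrence distinction is easy to make computationally concrete. The following sketch (our own illustration; the variable names are not from the entry) treats Stein's line as a sequence and identifies an occurrence of a word with a (position, word) pair, so the same type, and even the same number, can occur more than once without there being two tokens of it:

```python
# Word types vs. occurrences in a line treated as a sequence.
# Illustrative sketch only; lowercasing keeps exactly three types.
line = "rose is a rose is a rose is a rose".split()

# Word types: the distinct words, regardless of how often they occur.
types = set(line)
assert types == {"a", "is", "rose"}        # three word types

# Occurrences: pairings of a word type with a position in the line.
occurrences = list(enumerate(line))
assert len(occurrences) == 10              # ten occurrences, no tokens needed

# The same number occurs twice in the sequence <0,1,0,1>:
seq = (0, 1, 0, 1)
positions_of_one = [i for i, n in enumerate(seq) if n == 1]
assert positions_of_one == [1, 3]          # one number, two occurrences
```

The point of the sketch is that an occurrence is individuated by how the object is situated in the sequence, not by a distinct concrete token.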
The need to distinguish tokens of types from occurrences of types
arises, not just in linguistics, but whenever types of things have
other types of things occurring in them. There are 10,000 (or so)
notes in Beethoven's *Sonate Pathétique*, but there are
only 88 notes (types) the piano can produce. There are supposed to be
fifty stars (types) in the current *Old Glory* (type), but the
five-pointed star (type) is unique. And what could it mean to say that
the very same atom (type), hydrogen, "occurs four times"
in the methane molecule? Again, the perplexing thing is how the very
same thing can "occur" more than once, without there being
more than one token of it. Armstrong (1986), Lewis (1986a,b) and
Forrest (1986) called such types "structural universals",
which were the subject of a debate among the three. Armstrong and
Forrest defended Armstrong's (1978) view of universals against Lewis,
who delineated seven different views of structural universals
compatible with Armstrong's theory, and found all of them wanting.
Basically, Lewis (1986a) assumed that there are two sorts of ways
structural universals might be constructed of simpler universals:
set-theoretically and mereologically. He argued that set-theoretical
constructions resulted in ersatz universals, not universals worthy of
the name, and that the various mereological constructions just
resulted in a heap of simpler universals, where there could not be
*two* of the simpler universals. Wetzel (2008) argues that
there is a conception of a structural universal, the "occurrence
conception"--which is an extension of the occurrence
conception of expressions mentioned above--that escapes Lewis's
objections.
## 1. Paradoxes and Russell's Type Theories
The theory of types was introduced by Russell in order to cope with
a contradiction he found in his account of set theory; it was
first presented in "Appendix B: The Doctrine of Types"
of Russell 1903. The contradiction was obtained by analysing a
theorem of Cantor that no mapping
\[
F:X \rightarrow \Pow(X)
\]
(where \(\Pow(X)\) is the class of subclasses of a class
\(X\)) can be surjective; that is, \(F\) cannot be such that
every member \(b\) of \(\Pow(X)\) is equal to
\(F(a)\) for some element \(a\) of \(X\). This
can be phrased "intuitively" as the fact that there are
more subsets of \(X\) than elements of \(X\). The proof of
this fact is so simple and basic that it is worthwhile giving it
here. Consider the following subset of \(X\):
\[
A = \{ x \in X \mid x \not\in F(x)\}.
\]
This subset cannot be in the range of \(F\). For if \(A = F(a)\), for
some \(a\), then
\[\begin{align}
a \in F(a) &\text{ iff } a \in A \\
&\text{ iff } a \not\in F(a)
\end{align}\]
which is a contradiction.
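The diagonal construction can be checked mechanically on a finite example. The sketch below (our own illustration, using frozensets for subsets) builds the diagonal set \(A\) for an arbitrarily chosen map \(F\) and verifies that it lies outside the range of \(F\):

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def diagonal(X, F):
    """Cantor's diagonal set A = {x in X : x not in F(x)}."""
    return frozenset(x for x in X if x not in F(x))

X = {0, 1, 2}
# An arbitrary map F : X -> Pow(X); any choice would do.
F = {0: frozenset(), 1: frozenset({0, 1}), 2: frozenset({2})}.__getitem__

A = diagonal(X, F)
# A differs from F(x) at x for every x, so A is not in the range of F.
assert all(A != F(x) for x in X)
# Indeed |Pow(X)| = 2^|X| > |X|, so no F can be surjective.
assert len(powerset(X)) == 2 ** len(X)
```

Note that the check `A != F(x)` succeeds for every `x` precisely because `A` was built to disagree with `F(x)` about the membership of `x` itself, which is the diagonal argument in miniature.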
Some remarks are in order. First, the proof
does not use the law of excluded middle and is thus valid
intuitionistically. Second, the method that is used, called
*diagonalisation* was already present in the work of du
Bois-Reymond for building real functions growing faster than any
function in a given sequence of functions.
Russell analysed what happens if we apply this theorem to the case
where \(X\) is the class of all classes, admitting that there is such a
class. He was then led to consider the special class of classes that
do not belong to themselves
\[\tag{\*}
R = \{ w \mid w \not\in w \}.
\]
We then have
\[
R \in R \text{ iff } R \not\in R.
\]
It seems indeed that Cantor was already aware of the fact that the
class of all sets cannot be considered itself to be a set.
Russell communicated this problem to Frege, and his letter, together
with Frege's answer appear in van Heijenoort 1967. It is important
to realise that the formulation (\*) does not apply as it is to Frege's
system. As Frege himself wrote in his reply to Russell, the expression
"a predicate is predicated of itself" is not exact. Frege
had a distinction between *predicates* (concepts) and
*objects*. A (first-order) predicate applies to an object but
it cannot have a predicate as argument. The exact formulation of the
paradox in Frege's system uses the notion of the *extension* of
a predicate \(P\), which we designate as \(\varepsilon P\).
The extension of a predicate is itself an object. The important axiom
V is:
\[\tag{Axiom V}
\varepsilon P = \varepsilon Q \equiv \forall x [P(x) \equiv Q(x)]
\]
This axiom asserts that the extension of \(P\) is identical to the
extension of \(Q\) if and only if \(P\) and \(Q\) are materially
equivalent. We can then translate Russell's paradox (\*) in
Frege's system by defining the predicate
\[
R(x) \text{ iff } \exists P[x = \varepsilon P \wedge \neg P(x)]
\]
It can then be checked, using Axiom V in a crucial way, that
\[
R(\varepsilon R) \equiv \neg R(\varepsilon R)
\]
and we have a contradiction as well. (Notice that for defining the
predicate \(R\), we have used an *impredicative*
existential quantification on predicates.) It can be shown that the
*predicative* version of Frege's system is consistent (see
Heck 1996, and for further refinements, Ferreira 2002).
It is clear from this account that an idea of types was already
present in Frege's work: there we find a distinction between objects,
predicates (or concepts), predicates of predicates, etc. (This point
is stressed in Quine 1940.) This hierarchy is called the
"extensional hierarchy" by Russell (1959), and its
necessity was recognised by Russell as a consequence of his
paradox.
## 2. Simple Type Theory and the \(\lambda\)-Calculus
As we saw above, the distinction: objects, predicates, predicate of
predicates, etc., seems enough to block Russell's paradox (and this
was recognised by Chwistek and Ramsey). We first describe the type
structure as it is in *Principia* and later in this section we
present the elegant formulation due to Church 1940 based on
\(\lambda\)-calculus. The types can be defined as
1. \(i\) is the type of individuals
2. \((\,)\) is the type of propositions
3. if \(A\_1 ,\ldots ,A\_n\) are types then \((A\_1 ,\ldots ,A\_n)\) is
the type of \(n\)-ary relations over objects of respective types \(A\_1
,\ldots ,A\_n\)
For instance, the type of binary relations over individuals is
\((i, i)\), the type of binary connectives is
\(((\,),(\,))\), the type of quantifiers over individuals is
\(((i))\).
For forming propositions we use this type structure: thus \(R(a\_1
,\ldots ,a\_n)\) is a proposition if \(R\) is of type \((A\_1 ,\ldots
,A\_n)\) and \(a\_i\) is of type \(A\_i\) for \(i = 1,\ldots ,n\). This
restriction makes it impossible to form a proposition of the form
\(P(P)\): the type of \(P\) should be of the form \((A)\), and \(P\)
can only be applied to arguments of type \(A\), and thus cannot be
applied to itself since \(A\) is not the same as \((A)\).
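This typing restriction can be mimicked with a tiny checker. In the sketch below (our own encoding, not the *Principia* notation), the type of individuals is the string `"i"` and a relation type is a tuple of argument types; an application is well-formed only when the argument types match the relation type exactly, so self-application is rejected:

```python
I = "i"                       # type of individuals

def can_apply(rel_type, arg_types):
    """R(a1,...,an) is well-formed iff R has type (A1,...,An)
    and each ai has type Ai."""
    return isinstance(rel_type, tuple) and rel_type == tuple(arg_types)

# A binary relation over individuals has type (i, i):
assert can_apply((I, I), [I, I])

# Self-application P(P) is blocked: P has type (A,), but then its
# argument would need type A, and (A,) is never equal to A itself.
P = (I,)                      # a predicate of individuals
assert not can_apply(P, [P])  # since (i,) != ((i,),)
```

The last assertion is the formal point of the paragraph: the type \((A)\) of \(P\) can never coincide with the type \(A\) of its admissible arguments.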
However simple type theory is not predicative: we can define an object
\(Q(x, y)\) of type \((i, i)\)
by
\[
\forall P[P(x) \supset P(y)]
\]
Assume that we have two individuals \(a\) and \(b\) such that \(Q(a,
b)\) holds. We can define \(P(x)\) to be \(Q(x, a)\). It is then clear
that \(P(a)\) holds, since it is \(Q(a, a)\). Hence \(P(b)\) holds as
well. We have proved, in an impredicative way, that \(Q(a, b)\)
implies \(Q(b, a)\).
Alternative simpler formulations, which retain only the notion of
classes, classes of classes, etc., were formulated by Godel and
Tarski. Actually this simpler version was used by Godel in his
1931 paper on formally undecidable propositions. The discovery of the
undecidable propositions may have been motivated by a heuristic
argument that it is unlikely that one can extend the completeness
theorem of first-order logic to type theory (see the end of his
Lecture at Königsberg 1930 in Gödel's Collected Works, Volume
III, Awodey and Carus 2001 and Goldfarb 2005). Tarski had a version
of the definability theorem expressed in type theory (see Hodges
2008). See Schiemer and Reck 2013.
We have objects of type 0, for individuals, objects of type 1, for
classes of individuals, objects of type 2, for classes of classes of
individuals, and so on. Functions of two or more arguments, like
relations, need not be included among primitive objects since one can
define relations to be classes of ordered pairs, and ordered pairs to
be classes of classes. For example, the ordered pair of individuals
*a, b* can be defined to be \(\{\{a\},\{a,b\}\}\) where
\(\{x,y\}\) denotes the class whose sole elements are \(x\) and
\(y\). (Wiener 1914 had suggested a similar reduction of relations to
classes.) In this system, all propositions have the form \(a(b)\),
where \(a\) is a sign of type \(n+1\) and \(b\) a sign of type
\(n\). Thus this system is built on the concept of an arbitrary class
or subset of objects of a given domain and on the fact that the
collection of *all subsets* of the given domain can form a new
domain of the next type. Starting from a given domain of individuals,
this process is then iterated. As emphasised for instance in Scott
1993, in set theory this process of forming subsets is iterated into
the *transfinite*.
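The reduction of ordered pairs to classes can be tested directly. A minimal sketch (ours), using Python frozensets as stand-ins for classes:

```python
def pair(a, b):
    """The Kuratowski pair {{a},{a,b}}, reducing ordered pairs
    to classes of classes."""
    return frozenset({frozenset({a}), frozenset({a, b})})

# The characteristic property of ordered pairs:
# pair(a, b) == pair(c, d) iff a == c and b == d.
assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)          # order matters
# Degenerate case: {{a},{a,a}} collapses to {{a}}.
assert pair(1, 1) == frozenset({frozenset({1})})
```

The design point is that nothing beyond class formation is needed: order is recovered from the asymmetry between the singleton \(\{a\}\) and the doubleton \(\{a,b\}\).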
In these versions of type theory, as in set theory, functions are not
primitive objects, but are represented as functional relations. The
addition function for instance is represented as a ternary relation by
an object of type \((i,i,i)\). An elegant
formulation of the simple type theory which extends it by introducing
functions as primitive objects was given by Church in 1940. It uses
the \(\lambda\)-calculus notation (Barendregt 1997). Since such a
formulation is important in computer science, for the connection with
category theory, and for Martin-Löf type theory, we describe it
in some detail. In this formulation, predicates are seen as a special
kind of functions (propositional functions), an idea that goes back to
Frege (see for instance Quine 1940). Furthermore, the notion of
function is seen as more primitive than the notion of predicates and
relations, and a function is not defined anymore as a special kind of
relation. (Oppenheimer and Zalta 2011 presents some arguments against
such a primitive representation of functions.) The types of this
system are defined inductively as follows
1. there are two basic types \(i\) (the type of individuals)
and \(o\) (the type of propositions)
2. if \(A, B\) are types then \(A \rightarrow B\), the type of
functions from \(A\) to \(B\), is a type
We can form in this way the types:
| | |
| --- | --- |
| \(i\rightarrow o\) | (type of predicates) |
| \((i\rightarrow o) \rightarrow o\) | (type of predicates of predicates) |
which correspond to the types \((i)\) and \(((i))\) but also
the new types
| | |
| --- | --- |
| \(i\rightarrow i\) | (type of functions) |
| \((i\rightarrow i) \rightarrow i\) | (type of functionals) |
It is convenient to write
\[
A\_1 ,\ldots ,A\_n \rightarrow B
\]
for
\[
A\_1 \rightarrow(A\_2 \rightarrow \ldots \rightarrow(A\_n \rightarrow B))
\]
In this way
\[
A\_1 ,\ldots ,A\_n \rightarrow o
\]
corresponds to the type \((A\_1 ,\ldots ,A\_n)\).
First-order logic considers only types of the form
| | | |
| --- | --- | --- |
| \(i,\ldots ,i \rightarrow i\) | (type of function symbols), | and |
| \(i,\ldots ,i \rightarrow o\) | (type of predicate, relation symbols) | |
Notice that
\[
A \rightarrow B \rightarrow C
\]
stands for
\[
A \rightarrow(B\rightarrow C)
\]
(association to the right).
For the terms of this logic, we shall not follow Church's account but
a slight variation of it, due to Curry (who had similar ideas before
Church's paper appeared) and which is presented in detail in
R. Hindley's book on type theory. Like Church, we use
\(\lambda\)-calculus, which provides a general notation for functions
\[
M ::= x \mid M M \mid \lambda x.M
\]
Here we have used the so-called BNF notation, very convenient in
computing science. This gives a syntactic specification of the
\(\lambda\)-terms which, when expanded, means:
* every variable is a function symbol;
* every juxtaposition of two function symbols is a function symbol;
* every \(\lambda x.M\) is a function symbol;
* there are no other function symbols.
The notation for function application \(M N\) is
slightly different from the usual mathematical notation, which would be
\(M(N)\). In general,
\[
M\_1 M\_2 M\_3
\]
stands for
\[
(M\_1 M\_2) M\_3
\]
(association to the left). The term \(\lambda x.M\) represents the
function which maps \(N\) to \(M[x:=N]\). This notation is so
convenient that one wonders why it is not widely used in
mathematics. The main equation of \(\lambda\)-calculus is then
\((\beta\)-conversion)
\[
(\lambda x.M) N = M[x:=N]
\]
which expresses the meaning of \(\lambda x.M\)
as a function. We have used \(M[x:=N]\) as a
notation for the value of the expression that results when \(N\)
is substituted for the variable \(x\) in the matrix \(M\).
One usually sees this equation as a rewrite rule
\((\beta\)-reduction)
\[
(\lambda x.M) N \rightarrow M[x:=N]
\]
In *untyped* lambda calculus, it may be that such rewriting
does not terminate. The canonical example is given by the term
\(\Delta = \lambda x.x x\) and the application
\[
\Delta \Delta \rightarrow \Delta \Delta
\]
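The grammar and the β-rule are easy to implement. The following minimal sketch (our own representation: variables as strings, applications and abstractions as tagged tuples; substitution is naive, without capture avoidance, which suffices for these closed examples) performs one head β-step, enough to exhibit both a terminating reduction and the looping term \(\Delta\Delta\):

```python
# Terms: 'x' | ('app', M, N) | ('lam', 'x', M).

def subst(term, x, n):
    """M[x := N], naively (no capture avoidance)."""
    if isinstance(term, str):
        return n if term == x else term
    if term[0] == "app":
        return ("app", subst(term[1], x, n), subst(term[2], x, n))
    if term[0] == "lam":
        if term[1] == x:            # x is rebound here; stop substituting
            return term
        return ("lam", term[1], subst(term[2], x, n))

def step(term):
    """One head beta-step: (lam x. M) N -> M[x := N], else None."""
    if (isinstance(term, tuple) and term[0] == "app"
            and term[1][0] == "lam"):
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return None

identity = ("lam", "x", "x")
delta = ("lam", "x", ("app", "x", "x"))

# (lam x. x) y reduces to y in one step:
assert step(("app", identity, "y")) == "y"

# Delta Delta reduces to itself, so this reduction never terminates.
omega = ("app", delta, delta)
assert step(omega) == omega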
(Notice the similarity with Russell's paradox.) The idea of
Curry is then to look at types as predicates over lambda terms,
writing \(M:A\) to express that \(M\) satisfies the predicate/type
\(A\). The meaning of
\[
N:A\rightarrow B
\]
is then
\[
\forall M, \text{ if } M:A \text{ then } N M:B
\]
which justifies the following rules
\[
\frac{N:A\rightarrow B \quad M:A}{N\,M:B}
\]
\[
\frac{M:B [x:A]}{\lambda x.M:A \rightarrow B}
\]
In general one works with judgements of the form
\[
x\_1 :A\_1,...,x\_n :A\_n \vdash M:A
\]
where \(x\_1,..., x\_n\) are distinct
variables, and \(M\) is a term having all free variables among
\(x\_1,..., x\_n\). In order to be
able to get Church's system, one adds some constants in order to form
propositions. Typically
| | |
| --- | --- |
| not: | \(o\rightarrow o\) |
| imply: | \(o\rightarrow o\rightarrow o\) |
| and: | \(o\rightarrow o\rightarrow o\) |
| forall: | \((A\rightarrow o) \rightarrow o\) |
The term
\[
\lambda x. \neg(x x)
\]
represents the predicate of predicates that do not apply to themselves.
This term does not have a type however, that is, it is not possible to
find \(A\) such that
\[
\lambda x. \neg(x x) : (A\rightarrow o) \rightarrow o
\]
which is the formal expression of the fact that Russell's paradox
cannot be expressed. Leibniz equality
\[
Q: i \rightarrow i \rightarrow o
\]
will be defined as
\[
Q = \lambda x . \lambda y. \forall(\lambda P.\imply(P x) (P y))
\]
One usually writes \(\forall x[M]\) instead of \(\forall(\lambda
x.M)\), and the definition of \(Q\) can then be rewritten as
\[
Q = \lambda x.\lambda y.\forall P[\imply (P x) (P y)]
\]
This example again illustrates that we can formulate impredicative definitions
in simple type theory.
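The untypeability of \(\lambda x.\neg(x\,x)\) can be sketched with a small Church-style checker, where each λ-binder carries the type of its variable (our own encoding: `"i"` and `"o"` are the base types and `("->", A, B)` the function type; none of this notation is from the entry):

```python
def typeof(term, env=()):
    """Type of an explicitly annotated term, or None if ill-typed.
    env is an iterable of (variable name, type) pairs."""
    env = dict(env)
    if term[0] == "var":                  # ('var', x)
        return env.get(term[1])
    if term[0] == "lam":                  # ('lam', x, A, body)
        _, x, a, body = term
        b = typeof(body, {**env, x: a}.items())
        return ("->", a, b) if b is not None else None
    if term[0] == "app":                  # ('app', M, N)
        f = typeof(term[1], env.items())
        n = typeof(term[2], env.items())
        if f is not None and f[0] == "->" and f[1] == n:
            return f[2]
        return None

# The identity on individuals is typable:
assert typeof(("lam", "x", "i", ("var", "x"))) == ("->", "i", "i")

NOT = ("var", "not")
ENV = (("not", ("->", "o", "o")),)

def self_app(a):
    """lam x:A. not (x x), for a candidate argument type A."""
    xx = ("app", ("var", "x"), ("var", "x"))
    return ("lam", "x", a, ("app", NOT, xx))

# No annotation A makes the term typable: x : A can never be applied
# to x : A, since A is never of the form A -> B.
for a in ("i", "o", ("->", "i", "o"), ("->", ("->", "i", "o"), "o")):
    assert typeof(self_app(a), ENV) is None
```

The loop only samples a few candidate types, but the comment states the general reason: the application rule demands that the operator's type be strictly larger than its argument's, which self-application can never satisfy.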
The use of \(\lambda\)-terms and \(\beta\)-reduction is most
convenient for representing the complex substitution rules that are
needed in simple type theory. For instance, if we want to substitute
the predicate \(\lambda x.Q a x\) for \(P\) in the proposition
\[
\imply (P a) (P b)
\]
we get
\[
\imply ((\lambda x.Q a x) a) ((\lambda x.Q a x) b)
\]
and, using \(\beta\)-reduction,
\[
\imply (Q a a) (Q a b)
\]
In summary, simple type theory forbids self-application but not the
circularity present in impredicative definitions.
The \(\lambda\)-calculus formalism also allows for a clearer analysis
of Russell's paradox. We can see it as the definition of the
predicate
\[
R x = \neg(x x)
\]
If we think of \(\beta\)-reduction as the process of unfolding a
definition, we see that there is a problem already with understanding
the definition of R R
\[
R R \rightarrow \neg(R R) \rightarrow \neg(\neg(R R)) \rightarrow \ldots
\]
In some sense, we have a non-wellfounded definition, which is as
problematic as a contradiction (a proposition equivalent to its
negation). One important theorem, the normalisation theorem, says that
this cannot happen with simple types: if we have
\(M:A\) then \(M\) is normalisable in a
strong way (any sequence of reductions starting from \(M\)
terminates).
For more information on this topic, we refer to the entry on
Church's simple type theory.
## 3. Ramified Hierarchy and Impredicative Principles
Russell introduced another hierarchy, that was not motivated by any
formal paradoxes expressed in a formal system, but rather by the fear
of "circularity" and by informal paradoxes similar to the
paradox of the liar. If a man says "I am lying", then we
have a situation reminiscent of Russell's paradox: a proposition which
is equivalent to its own negation. Another informal such paradoxical
situation is obtained if we define an integer to be the "least
integer not definable in less than 100 words". In order to
avoid such informal paradoxes, Russell thought it necessary to
introduce another kind of hierarchy, the so-called "ramified
hierarchy". The need for such a hierarchy is hinted at in Appendix
B of Russell 1903. Interestingly this is connected there to the
question of the identity of equivalent propositions and of the logical
product of a class of propositions. A thorough discussion can be found
in Chapter 10 of Russell 1959. Since this notion of ramified
hierarchy has been extremely influential in logic and especially proof
theory, we describe it in some detail.
In order to further motivate this hierarchy, here is one example due
to Russell. If we say
Napoleon was Corsican.
we do not refer in this sentence to any assemblage of properties. The
property "to be Corsican" is said to be
*predicative*. If we say on the other hand
Napoleon had all the qualities of a great general
we are referring to a totality of qualities. The property "to
have all qualities of a great general" is said to be
*impredicative*.
Another example, also coming from Russell, shows how impredicative
properties can potentially lead to problems reminiscent
of the liar paradox. Suppose that we suggest the definition
A typical Englishman is one who possesses all the properties possessed
by a majority of Englishmen.
It is clear that most Englishmen do not possess *all* the
properties that most Englishmen possess. Therefore, a typical
Englishman, according to this definition, should be untypical. The
problem, according to Russell, is that the word "typical"
has been defined by a reference to all properties and has been treated
as itself a property. (It is remarkable that similar problems arise
when defining the notion of *random* numbers, cf.
Martin-Löf "Notes on constructive mathematics"
(1970).) Russell introduced the *ramified hierarchy* in order
to deal with the apparent circularity of such impredicative
definitions. One should make a distinction between the
*first-order* properties, like being Corsican, that do not
refer to the totality of properties, and consider that the
*second-order* properties refer only to the totality of
*first-order properties*. One can then introduce third-order
properties, that can refer to the totality of second-order property,
and so on. This clearly eliminates all circularities connected to
impredicative definitions.
At about the same time, Poincaré carried out a similar
analysis. He stressed the importance of "predicative"
classifications, and of not defining elements of a class using a
quantification over this class (Poincaré
1909). Poincaré used the following example. Assume that we have
a collection with elements 0, 1 and an operation + satisfying
\[\begin{align}
x+0 &= x \\
x+(y+1) &= (x+y)+1
\end{align}\]
Let us say that a property is *inductive* if it holds of
0 and holds for \(x+1\) if it holds for \(x\).
An impredicative, and potentially "dangerous", definition
would be to define an element to be a *number* if it satisfies
*all* inductive properties. It is then easy to show that this
property "to be a number" is itself inductive. Indeed, 0
is a number since it satisfies all inductive properties, and if \(x\)
satisfies all inductive properties then so does \(x+1\). Similarly it
is easy to show that \(x+y\) is a number if \(x,y\) are
numbers. Indeed the property \(Q(z)\) that \(x+z\) is a number is
inductive: \(Q\)(0) holds since \(x+0=x\) and if \(x+z\) is a number
then so is \(x+(z+1) = (x+z)+1\). This whole argument however is
circular since the property "to be a number" is not
predicative and should be treated with suspicion.
Instead, one should introduce a ramified hierarchy of properties and
numbers. At the beginning, one has only *first-order* inductive
properties, which do not refer in their definitions to a totality of
properties, and one defines the numbers of order 1 to be the elements
satisfying all first-order inductive properties. One can next consider
the second-order inductive properties, that can refer to the
collection of first-order properties, and the numbers of order 2, that
are the elements satisfying the inductive properties of order 2. One
can then similarly consider numbers of order 3, and so
on. Poincaré emphasizes the fact that a number of order 2 is
*a fortiori* a number of order 1, and more generally, a number
of order \(n+1\) is *a fortiori* a number of order \(n\). We
have thus a sequence of more and more restricted properties: inductive
properties of order 1, 2, ... and a sequence of more and more
restricted collections of objects: numbers of order 1, 2, ...
Also, the property "to be a number of order \(n\)" is
itself an inductive property of order \(n+1\).
It does not seem possible to prove that \(x+y\) is a number of order
\(n\) if \(x,y\) are numbers of order \(n\). On the other hand, it is
possible to show that if \(x\) is a number of order \(n+1\), and \(y\)
a number of order \(n+1\) then \(x+y\) is a number of order
\(n\). Indeed, the property \(P(z)\) that "\(x+z\) is a number
of order \(n\)" is an inductive property of order \(n+1\): \(P(0)\)
holds since \(x+0 = x\) is a number of order \(n+1\), and hence of
order \(n\), and if \(P(z)\) holds, that is if \(x+z\) is a number of
order \(n\), then so is \(x+(z+1) = (x+z)+1\), and so \(P(z+1)\)
holds. Since \(y\) is a number of order \(n+1\), and \(P(z)\) is an
inductive property of order \(n+1\), \(P(y)\) holds and so \(x+y\) is
a number of order \(n\). This example illustrates well the
complexities introduced by the ramified hierarchy.
The complexities are further amplified if one, like Russell following
Frege, defines even basic objects like natural numbers as classes of
classes. For instance the number 2 is defined as the class of all
classes of individuals having exactly two elements. We again obtain
natural numbers of different orders in the ramified hierarchy.
Besides Russell himself, and despite all these complications, Chwistek
tried to develop arithmetic in a ramified way, and the interest of
such an analysis was stressed by Skolem. See Burgess and Hazen 1998
for a recent development.
Another mathematical example, often given, of an impredicative
definition is the definition of least upper bound of a bounded class
of real numbers. If we identify a real with the set of rationals that
are less than this real, we see that this least upper bound can be
defined as the union of all elements in this class. Let us identify
subsets of the rationals as predicates. For example, for a rational
number \(q\), \(P(q)\) holds iff \(q\) is a
member of the subset identified with \(P\). Now, we define the
predicate \(L\_C\) (a subset of the rationals)
to be the least upper bound of class \(C\) as:
\[
\forall q[L\_C (q) \leftrightarrow \exists P[C(P) \wedge P(q)]]
\]
which is impredicative: we have defined the predicate \(L\_C\) by an
existential quantification over all predicates. In the ramified
hierarchy, if \(C\) is a class of first-order classes of
rationals, then \(L\_C\) will be a second-order class of
rationals. One obtains then not *one* notion of real number,
but real numbers of different orders 1, 2, ... The least upper
bound of a collection of reals of order 1 will then be at least of
order 2 in general.
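The shape of this definition can be sketched in a set-based toy model (ours), where a "real" is represented by a finite frozenset of rationals standing in for the cut of rationals below it; the least upper bound of a class \(C\) is then literally the union given by \(\exists P[C(P) \wedge P(q)]\):

```python
from fractions import Fraction as Q

# Toy "reals": finite sets of rationals standing in for cuts.
r1 = frozenset({Q(0), Q(1, 2)})           # a cut reaching up to 1/2
r2 = frozenset({Q(0), Q(1, 2), Q(3, 4)})  # a cut reaching up to 3/4
C = {r1, r2}                              # a bounded class of "reals"

# L_C(q) iff there exists P with C(P) and P(q) -- i.e. the union of C.
L_C = frozenset(q for P in C for q in P)

assert L_C == r1 | r2                     # the lub is the union
assert all(P <= L_C for P in C)           # it is an upper bound of C
# It is least: any upper bound must contain every P in C, hence their union.
```

The impredicativity is visible in the comprehension: `L_C` is itself a set of rationals, yet its definition quantifies over all the sets of rationals in `C`.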
As we saw earlier, yet another example of an impredicative definition
is given by Leibniz' definition of equality. For Leibniz, the
predicate "is equal to \(a\)" is true for \(b\)
iff \(b\) satisfies all the predicates satisfied by \(a\).
How should one deal with the complications introduced by the ramified
hierarchy? Russell showed, in the introduction to the second edition
of
*Principia Mathematica*,
that these complications can be avoided in some
cases. He even thought, in Appendix B of the second edition of
*Principia Mathematica*, that the ramified hierarchy of natural
numbers of order 1,2,... collapses at order 5 for defining the reflexive transitive closure of a relation. However,
Gödel later found a problem in his argument, and indeed, it was
shown by Myhill 1974 that this hierarchy actually *does not*
collapse at any finite level. A similar problem, discussed by Russell
in the introduction to the second edition of
*Principia Mathematica*
arises in the proof of Cantor's theorem that
there cannot be any injective functions from the collection of all
predicates to the collection of all objects (the version of Russell's
paradox in Frege's system that we presented in the introduction). Can
this be done in a ramified hierarchy? Russell doubted that this could
be done within a ramified hierarchy of predicates and this was indeed
confirmed later (see Chwistek 1926, Fitch 1939 and Heck 1996).
Because of these problems, Russell and Whitehead introduced in the
first edition of *Principia Mathematica* the following
*reducibility axiom*: the hierarchy of predicates, first-order,
second-order, etc., collapses at level 1. This means that for any
predicate of any order, there is a predicate of the first-order level
which is equivalent to it. In the case of equality for instance, we
postulate a first-order relation "\(a=b\)"
which is equivalent to "\(a\) satisfies all properties that
\(b\) satisfies". The motivation for this axiom was purely
pragmatic. Without it, all basic mathematical notions, like the real
or natural numbers, are stratified into different orders. Also, despite
the apparent circularity of impredicative definitions, the axiom of
reducibility does not seem to lead to inconsistencies.
As noticed first by Chwistek, however, and later by Ramsey, in the
presence of the axiom of reducibility, there is actually no point in
introducing the ramified hierarchy at all! It is much simpler to
accept impredicative definitions from the start. The simple
"extensional" hierarchy of individuals, classes, classes
of classes, ... is then enough. We get in this way the simpler
systems formalised in Gödel 1931 or Church 1940 that were
presented above.
The axiom of reducibility draws attention to the problematic status of
impredicative definitions. To quote Weyl 1946, the axiom of
reducibility "is a bold, an almost fantastic axiom; there is
little justification for it in the real world in which we live, and
none at all in the evidence on which our mind bases its
constructions"! So far, no contradictions have been found using
the reducibility axiom. However, as we shall see below,
proof-theoretic investigations confirm the extreme strength of such a
principle.
The idea of the ramified hierarchy has been extremely important in
mathematical logic. Russell considered only the finite iteration of
the hierarchy: first-order, second-order, etc., but from the
beginning, the possibility of extending the ramification transfinitely
was considered. Poincaré (1909) mentions the work of König in
this direction. For the example above of numbers of different order,
he also defines a number to be inductive of order \(\omega\) if it is
inductive of all finite orders. He then points out that *x+y*
is inductive of order \(\omega\) if both \(x\) and \(y\)
are. This shows that the introduction of transfinite orders can in
some case play the role of the axiom of reducibility. Such transfinite
extension of the ramified hierarchy was analysed further by Godel
who noticed the key fact that the following form of the reducibility
axiom is actually *provable*: when one extends the ramified
hierarchy of properties over the natural numbers into the transfinite,
this hierarchy collapses at level \(\omega\_1\), the least
uncountable ordinal (Gödel 1995, and Prawitz
1970). Furthermore, while at all levels \(\lt \omega\_1\), the
collection of predicates is countable, the collection of predicates at
level \(\omega\_1\) is of cardinality \(\omega\_1\). This
fact was a strong motivation behind Gödel's model of
constructible sets. In this model the collection of all subsets of the
set of natural numbers (represented by predicates) is of cardinality
\(\omega\_1\) and is similar to the ramified hierarchy. This
model satisfies in this way the Continuum Hypothesis, and gives a
relative consistency proof of this axiom. (The motivation of
Gödel was originally only to build a model where the collection
of all subsets of natural numbers is well-ordered.)
The ramified hierarchy has been also the source of much work in proof
theory. After the discovery by Gentzen that the consistency of
Arithmetic could be proved by transfinite induction (over decidable
predicates) along the ordinal \(\varepsilon\_0\), the natural question
was to find the corresponding ordinal for the different levels of the
ramified hierarchy. Schütte (1960) found that for the first level
of the ramified hierarchy, that is if we extend arithmetic by
quantifying only over first-order properties, we get a system of
ordinal strength \(\varepsilon\_{\varepsilon\_0}\). For the second level
we get the ordinal strength \(\varepsilon\_{\varepsilon\_{
\varepsilon\_0}}\), etc. We recall that \(\varepsilon\_{\alpha}\)
denotes the \(\alpha\)-th \(\varepsilon\)-ordinal number, an
\(\varepsilon\)-ordinal number being an ordinal \(\beta\) such that
\(\omega^{\beta} = \beta\), see Schütte (1960).
Gödel stressed the fact that his approach to the problem of the
continuum hypothesis was not constructive, since it needs the
uncountable ordinal \(\omega\_1\), and it was natural to study
the ramified hierarchy along constructive ordinals. After preliminary
works of Lorenzen and Wang, Schütte analysed what happens if we
proceed in the following more constructive way. Since arithmetic has
ordinal strength \(\varepsilon\_0\), we consider first the
iteration of the ramified hierarchy up to \(\varepsilon\_0\).
Schütte computed the ordinal strength of the resulting system and
found an ordinal strength \(u(1)\gt \varepsilon\_0\). We
then iterate the ramified hierarchy up to this ordinal \(u(1)\) and
get a system of ordinal strength \(u(2)\gt u(1)\),
etc. Each \(u(k)\) can be computed in terms of the
so-called Veblen hierarchy: \(u(k+1)\) is
\(\phi\_{u(k)}(0)\). The limit of this process
gives an ordinal called \(\Gamma\_0\): if we iterate the
ramified hierarchy up to the ordinal \(\Gamma\_0\) we get a
system of ordinal strength \(\Gamma\_0\). Such an ordinal was
obtained independently about the same time by S. Feferman. It has been
claimed that \(\Gamma\_0\) plays for predicative systems a role
similar to \(\varepsilon\_0\) for Arithmetic. Recent
proof-theoretical works however are concerned with systems having
bigger proof-theoretical ordinals that can be considered predicative
(see for instance Palmgren 1995).
Besides these proof theoretic investigations related to the ramified
hierarchy, much work has been devoted in proof theory to analysing the
consistency of the axiom of reducibility, or, equivalently, the
consistency of impredicative definitions. Following Gentzen's
analysis of the cut-elimination property in the sequent calculus,
Takeuti found an elegant sequent formulation of simple type theory
(without ramification) and made the bold conjecture that
cut-elimination should hold for this system. This conjecture seemed at
first extremely dubious given the circularity of impredicative
quantification, which is well reflected in this formalism. The rule
for quantifications is indeed
\[
\frac{\Gamma \vdash \forall X[A(X)]}{\Gamma \vdash A[X:=T]}
\]
where \(T\) is *any* term predicate, which may itself
involve a quantification over all predicates. Thus the formula
\(A[X:=T]\) may itself be much more complex
than the formula \(A(X)\).
One early result is that cut-elimination for Takeuti's
impredicative system implies in a finitary way the consistency of
second-order Arithmetic. (One shows that this implies the consistency
of a suitable form of infinity axiom, see Andrews 2002.) Following
work by Schütte (1960b), it was later shown by W. Tait and
D. Prawitz that indeed the cut-elimination property holds (the proof
of this has to use a stronger proof theoretic principle, as it should
be according to the incompleteness theorem.)
What is important here is that these studies have revealed the extreme
power of impredicative quantification or, equivalently, the extreme
power of the axiom of reducibility. This confirms in some way the
intuitions of Poincaré and Russell. The proof-theoretic
strength of second-order Arithmetic is way above all ramified
extensions of Arithmetic considered by Schütte. On the other
hand, despite the circularity of impredicative definitions, which is
made so explicit in Takeuti's calculus, no paradoxes have been found
yet in second-order Arithmetic.
Another research direction in proof theory has been to understand how
much of impredicative quantification can be explained from principles
that are available in intuitionistic mathematics. The strongest such
principles are strong forms of inductive definitions. With such
principles, one can explain a limited form of impredicative
quantification, called \(\Pi\_{1}^1\)-comprehension,
where one uses only one level of impredicative quantification over
predicates. Interestingly, almost all known uses of impredicative
quantification (Leibniz equality, least upper bound, etc.) can be
done with \(\Pi\_{1}^1\)-comprehension. This reduction
of \(\Pi\_{1}^1\)-comprehension was first achieved by
Takeuti in a quite indirect way, and was later simplified by Buchholz
and Schütte using the so-called \(\Omega\)-rule. It can be seen as
a constructive explanation of some restricted, but nontrivial, uses of
impredicative definitions.
## 4. Type Theory/Set Theory
Type theory can be used as a foundation for mathematics, and indeed,
it was presented as such by Russell in his 1908 paper, which appeared
the same year as Zermelo's paper, presenting set theory as a foundation
for mathematics.
It is clear intuitively how we can explain type theory in set theory:
a type is simply interpreted as a set, and function types \(A
\rightarrow B\) can be explained using the set theoretic notion of
function (as a functional relation, i.e. a set of pairs of
elements). The type \(A \rightarrow o\) corresponds to the powerset
operation.
The other direction is more interesting. How can we explain the notion
of sets in terms of types? There is an elegant solution, due to
A. Miquel, which complements previous works by P. Aczel (1978) and
which also has the advantage of explaining not necessarily
well-founded sets à la Finsler. One simply interprets a *set*
as a *pointed graph* (where the arrow in the graph represents
the membership relation). This is very conveniently represented in
type theory, a pointed graph being simply given by a type \(A\) and a pair
of elements
\[
a:A, R:A \rightarrow A \rightarrow o
\]
We can then define in type theory when two such sets \(A, a, R\) and
\(B, b, S\) are equal: this is the case iff there is a bisimulation
\(T\) between \(A\) and \(B\) such that \(Tab\) holds. A bisimulation
is a relation
\[
T:A\rightarrow B\rightarrow o
\]
such that whenever \(Txy\) and \(Rxu\) hold, there exists \(v\) such
that \(Tuv\) and \(Syv\) hold, and whenever \(Txy\) and \(Syv\) hold,
there exists \(u\) such that \(Tuv\) and \(Rxu\) hold. We can then
define the membership relation: the set represented by \(B, b, S\) is a
member of the set represented by \(A, a, R\) iff there exists \(a\_1\)
such that \(Ra\_1a\) and \(A, a\_1, R\) and \(B, b, S\) are
bisimilar.
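For finite graphs, equality of two such sets can be decided by computing the largest bisimulation by successive refinement and then testing whether it relates the two points. Here is a minimal sketch in Python (the encoding and the function names are ours, using the convention that an edge \(R\,u\,x\) means \(u\) is a member of \(x\)):

```python
from itertools import product

def members(R, x):
    # The nodes u with an edge R(u, x), i.e. the members of x.
    return [u for (u, x2) in R if x2 == x]

def equal_sets(A, a, R, B, b, S):
    """Compute the largest bisimulation T between the pointed graphs
    (A, a, R) and (B, b, S) by refinement, then test T(a, b)."""
    T = set(product(A, B))
    changed = True
    while changed:
        changed = False
        for (x, y) in list(T):
            ok = (all(any((u, v) in T for v in members(S, y))
                      for u in members(R, x)) and
                  all(any((u, v) in T for u in members(R, x))
                      for v in members(S, y)))
            if not ok:
                T.remove((x, y))
                changed = True
    return (a, b) in T

# Node 1 of the first graph represents {Ø}; node 2 of the second
# represents {Ø, Ø} = {Ø}, so the two sets are equal:
print(equal_sets({0, 1}, 1, {(0, 1)},
                 {0, 1, 2}, 2, {(0, 2), (1, 2)}))
# But {Ø} is not equal to Ø:
print(equal_sets({0, 1}, 1, {(0, 1)}, {0}, 0, set()))
```

The refinement removes any pair whose members cannot be matched on the other side, so the surviving relation is exactly the largest bisimulation.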
It can then be checked that all the usual axioms of set theory
(extensionality, power set, union, comprehension over bounded formulae,
and even antifoundation, so that the membership relation does not
need to be well-founded) hold in this simple model. (A bounded formula
is a formula where all quantifications are of the form \(\forall x \in
a\ldots\) or \(\exists x \in a\ldots\).) In this way it can be shown
that Church's simple type theory is equiconsistent with the
bounded version of Zermelo's set theory.
## 5. Type Theory/Category Theory
There are deep connections between type theory and category theory.
We limit ourselves to presenting two applications of type theory to
category theory: the constructions of the free cartesian closed
category and of the free topos (see the entry on category theory for an
explanation of "cartesian closed" and "topos").
For building the free cartesian closed category, we extend simple type
theory with the type 1 (unit type) and the product type \(A \times
B\), for \(A, B\) types. The terms are extended by adding pairing
operations and projections and a special element of type 1. As in
Lambek and Scott 1986, one can then define a notion of *typed
conversions* between terms, and show that this relation is
decidable. One can then show (Lambek and Scott 1986) that the category
with types as objects and as morphisms from \(A\) to \(B\) the set of
closed terms of type \(A \rightarrow B\) (with conversion as equality)
is the free cartesian closed category. This can be used to show that
equality between arrows in this category is decidable.
The theory of types of Church can also be used to build the free
topos. For this we take as objects pairs \(A,E\) with \(A\) a type and
\(E\) a partial equivalence relation, that is, a closed term \(E:A
\rightarrow A \rightarrow o\) which is symmetric and transitive. We
take as morphisms between \(A, E\) and \(B, F\) the relations
\(R:A\rightarrow B\rightarrow o\) that are
*functional*, that is, such that for any \(a:A\) satisfying \(E a
a\) there exists one and only one (modulo \(F\)) element \(b\) of
\(B\) such that \(F b b\) and \(R a b\). For the subobject classifier
we take the pair \(o, E\) with \(E:o\rightarrow o\rightarrow o\)
defined as
\[
E\,M\,N = \text{and}\ (\text{imply}\ M\,N)\ (\text{imply}\ N\,M)
\]
One can then show that this category forms a topos, indeed the free
topos.
It should be noted that the type theory in Lambek and Scott 1986
uses a variation of type theory, introduced by Henkin and refined by
P. Andrews (2002), which is to have an extensional equality as the
only logical connective, i.e., a polymorphic constant
\[
\text{eq} : A \rightarrow A \rightarrow o
\]
and to define all logical connectives from this connective and
constants \(T, F : o\). For instance, one defines
\[
\forall P =\_{df} \text{eq} (\lambda x.T) P
\]
The equality at type \(o\) is logical equivalence.
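Over a finite domain the definition of the connectives from equality can be made concrete. In the Python sketch below (the helper names are ours), logical equivalence is equality at type \(o\), equality at a predicate type is tested extensionally, and \(\forall P\) is defined, as above, as the equality of \(P\) with the constantly-true predicate:

```python
truth_values = [True, False]   # the type o has exactly two elements

def eq_o(M, N):
    # Equality at type o is logical equivalence.
    return M == N

def eq_pred(P, Q, domain):
    # Extensional equality at a predicate type: pointwise agreement.
    return all(P(x) == Q(x) for x in domain)

def forall(P, domain):
    # forall P  =df  eq (lambda x. T) P
    return eq_pred(lambda x: True, P, domain)

assert eq_o(True, True) and not eq_o(True, False)
assert forall(lambda b: b == b, truth_values)   # reflexivity of equality
assert not forall(lambda b: b, truth_values)    # not every truth value is true
```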
One advantage of the intensional formulation is that it allows for a
direct notation of proofs based on \(\lambda\)-calculus (Martin-Löf
1971 and Coquand 1986).
## 6. Extensions of Type System, Polymorphism, Paradoxes
We have seen the analogy between the operation \(A \mapsto (A \rightarrow o)\) on
types and the powerset operation on sets. In set theory, the powerset
operation can be iterated transfinitely along the cumulative
hierarchy. It is then natural to look for analogous transfinite
versions of type theory.
One such extension of Church's simple type theory is obtained by
adding universes (Martin-Löf 1970). Adding a universe is a
*reflection* process: we add a type \(U\) whose objects
are the types considered so far. For Church's simple type theory we
will have
\[
o:U, i:U \text{ and } A\rightarrow B:U \text{ if } A:U, B:U
\]
and, furthermore, \(A\) is a type if \(A:U\). We can then consider
types such as
\[
(A:U) \rightarrow A \rightarrow A
\]
and functions such as
\[
\text{id} = \lambda A.\lambda x.x : (A:U) \rightarrow A \rightarrow A
\]
The function id takes as argument a "small" type \(A:U\)
and an element \(x\) of type \(A\), and outputs an element of type
\(A\). More generally if \(T(A)\) is a type under the assumption
\(A:U\), one can form the dependent type
\[
(A:U) \rightarrow T(A)
\]
That \(M\) is of this type means that \(M A:T(A)\) whenever
\(A:U\). We get in this way extensions of type theory whose strength
is similar to the one of Zermelo's set theory (Miquel
2001). More powerful forms of universes are considered in (Palmgren
1998). Miquel (2003) presents a version of type theory of strength
equivalent to the one of Zermelo-Fraenkel.
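The polymorphic identity can be written in any language with parametric polymorphism. A sketch in Python using `typing.TypeVar`, where the type argument \(A\) is left implicit rather than passed explicitly as in \((A:U) \rightarrow A \rightarrow A\):

```python
from typing import TypeVar

A = TypeVar("A")

def identity(x: A) -> A:
    # One definition, usable at every type: id = λA. λx. x,
    # with the "small type" argument A inferred rather than passed.
    return x

assert identity(3) == 3
assert identity("type theory") == "type theory"
assert identity([1, 2]) == [1, 2]
```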
One very strong form of universe is obtained by adding the axiom
\(U:U\). This was suggested by P. Martin-Löf in
1970. J.Y. Girard showed that the resulting type theory is
inconsistent as a logical system (Girard 1972). Though it seems at first
that one could directly reproduce Russell's paradox using a set of all
sets, such a direct paradox is actually not possible due to the
difference between sets and types. Indeed the derivation of a
contradiction in such a system is subtle and has been rather indirect
(though, as noticed in Miquel 2001, it can now be reduced to
Russell's paradox by representing sets as pointed graphs). J.Y. Girard
first obtained his paradox for a weaker system. This paradox was
refined later (Coquand 1994 and Hurkens 1995). (The notion of pure
type system, introduced in Barendregt 1992, is convenient for
getting a sharp formulation of these paradoxes.) Instead of the axiom
\(U:U\) one assumes only
\[
(A:U) \rightarrow T(A) : U
\]
if \(T(A) : U [A:U]\). Notice the circularity, indeed of the same kind
as the one that is rejected by the ramified hierarchy: we define an
element of type \(U\) by quantifying over all elements of \(U\). For
instance the type
\[
(A:U) \rightarrow A \rightarrow A:U
\]
will be the type of the polymorphic identity function. Despite this
circularity, J.Y. Girard was able to show normalisation for type
systems with this form of polymorphism. However, the extension of
Church's simple type theory with polymorphism is inconsistent as a
logical system, i.e. all propositions (terms of type o) are
provable.
J.Y. Girard's motivation for considering a type system with
polymorphism was to extend Gödel's Dialectica (Gödel 1958)
interpretation to second-order arithmetic. He proved normalisation
using the reducibility method, that had been introduced by Tait (1967)
while analysing Gödel 1958. It is quite remarkable that the
circularity inherent in impredicativity does not result in
non-normalisable terms. (Girard's argument was then used to show that
cut-elimination terminates in Takeuti's sequent calculus presented
above.) A similar system was introduced independently by J. Reynolds
(1974) while analysing the notion of polymorphism in computer
science.
Martin-Löf's introduction of a type of all types comes from the
identification of the concept of propositions and types, suggested by
the work of Curry and Howard. It is worth recalling here his three
motivating points:
1. Russell's definition of types as ranges of significance of
propositional functions
2. the fact that one needs to quantify over all propositions
(impredicativity of simple type theory)
3. identification of proposition and types
Given (1) and (2) we should have a type of propositions (as in simple
type theory), and given (3) this should also be the type of all
types. Girard's paradox shows that one cannot have (1), (2) and (3)
simultaneously. Martin-Löf's choice was to take away (2),
restricting type theory to be predicative (and, indeed, the notion of
universe appeared first in type theory as a predicative version of the
type of all types). The alternative choice of taking away (3) is
discussed in Coquand 1986.
## 7. Univalent Foundations
The connections between type theory, set theory and category theory
get a new light through the work on Univalent Foundations (Voevodsky
2015) and the *Axiom of
Univalence*. This involves in an essential way the extension of
type theory described in the previous section, in particular dependent
types, the view of propositions as types, and the notion of universe
of types. These development are also relevant for discussing the
notion of structure, the importance of which was for instance
emphasized in Russell 1959.
Martin-Löf 1975 [1973] introduced a new basic type
\(\mathbf{Id}\_A (a,b)\), for \(a\) and \(b\) in the type \(A\),
which can be thought of as the type of equality proofs of the elements
\(a\) and \(b\). An important feature of this new type is that it can
be iterated, so that we can consider the type
\(\mathbf{Id}\_{\mathbf{Id}\_A (a,b)}(p,q)\) if \(p\) and \(q\) are of
type \(\mathbf{Id}\_A (a,b)\). If we think of a type as a special kind
of set, it is natural to conjecture that such a type of equality
proofs is always inhabited for any two equality proofs \(p\) and
\(q\). Indeed, intuitively, there seems to be at most one equality
proof between two elements \(a\) and \(b\). Surprisingly, Hofmann and
Streicher 1996 designed a model of dependent type theory where this is
not valid, that is, a model where there can be different proofs that two
elements are equal. In this model, a type is interpreted by
a *groupoid* and the type \(\mathbf{Id}\_A (a,b)\) by the set of
isomorphisms between \(a\) and \(b\), a set which may have more than one
element. The existence of this model has the consequence that it
cannot be proved in general in type theory that an equality type has
at most one element. This groupoid interpretation has been generalized
in the following way, which gives an intuitive interpretation of the
identity type. A *type* is interpreted by a *topological
space*, up to homotopy, and a type \(\mathbf{Id}\_A (a,b)\) is
interpreted by the type of *paths* connecting \(a\) and
\(b\). (See Awodey et al. 2013 and [HoTT 2013, Other Internet
Resources].)
Voevodsky 2015 introduced the following stratification of types. (This
stratification was motivated in part by this interpretation of a type
as a topological space, but can be understood directly without
reference to this interpretation.) We say that a type \(A\) is
a *proposition* if we have \(\mathbf{Id}\_A (a,b)\) for any
elements \(a\) and \(b\) of \(A\) (this means that the type \(A\) has
at most one element). We say that a type \(A\) is a *set* if
the type \(\mathbf{Id}\_A (a,b)\) is a proposition for any element
\(a\) and \(b\) of \(A\). We say that a type \(A\) is
a *groupoid* if the type \(\mathbf{Id}\_A (a,b)\) is a set for
any element \(a\) and \(b\) of \(A\). The justification of this
terminology is that it can be shown, only using the rules of type
theory, that any such type can indeed be seen as a groupoid in the
usual categorical sense, where the objects are the elements of this
type, the set of morphisms between \(a\) and \(b\) being represented
by the *set* \(\mathbf{Id}\_A (a,b)\). The composition is the
proof of transitivity of equality, and the identity morphism is the
proof of reflexivity of equality. The fact that each morphism has an
inverse corresponds to the fact that identity is a symmetric
relation. This stratification can then be extended and we can define
when a type is a 2-groupoid, 3-groupoid and so on. In this view,
*type theory* appears as a vast generalization of *set
theory*, since a set is a particular kind of type.
Voevodsky 2015 also introduces a notion of
*equivalence* between types, a notion which generalizes in a
uniform way the notions of *logical equivalence* between
propositions, *bijection* between sets, *categorical
equivalence* between groupoids, and so on. We say that a map
\(f:A\rightarrow B\) is an equivalence if, for any element \(b\) in
\(B\) the type of pairs \(a,p\) where \(p\) is of type \(\mathbf{Id}\_B
(f a,b)\), is a proposition and is inhabited. This expresses in a
strong way that an element in \(B\) is the image of exactly one
element in \(A\), and if \(A\) and \(B\) are sets, we recover the
usual notion of bijection between sets. (In general if
\(f:A\rightarrow B\) is an equivalence, then we have a map
\(B\rightarrow A\), which can be thought of as the inverse of \(f\).)
It can be shown for instance that the identity map is always an
equivalence. Let \(\text{Equiv}(A,B)\) be the type of pairs \(f,p\) where
\(f:A\rightarrow B\) and \(p\) is a proof that \(f\) is an
equivalence. Using the fact that the identity map is an equivalence we
have an element of \(\text{Equiv}(A,A)\) for any type \(A\). This implies
that we have a map
\[
\mathbf{Id}\_U (A,B)\rightarrow \text{Equiv}(A,B)
\]
and the *Axiom of Univalence* states that this map is an
equivalence. In particular, we have the implication
\[
\text{Equiv}(A,B)\rightarrow \mathbf{Id}\_U (A,B)
\]
and so if there is an equivalence between two small types then
these types are equal.
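For finite types represented as lists, the condition that \(f\) is an equivalence reduces to each fiber \(\{a \mid f(a) = b\}\) having exactly one element, which recovers the usual notion of bijection. A small sketch (function and variable names are ours):

```python
def is_equivalence(f, dom, cod):
    # f : dom -> cod is an equivalence iff every fiber
    # {a in dom | f(a) == b} has exactly one element.
    return all(sum(1 for a in dom if f(a) == b) == 1 for b in cod)

assert is_equivalence(lambda n: "abc"[n], [0, 1, 2], ["a", "b", "c"])  # a bijection
assert not is_equivalence(lambda n: "a", [0, 1, 2], ["a", "b", "c"])   # not surjective
assert is_equivalence(lambda x: x, [0, 1, 2], [0, 1, 2])               # identity map
```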
This Axiom can be seen as a strong form of the extensionality
principle. It indeed generalizes the Axiom of *propositional
extensionality* mentioned by Church 1940, which states that two
logically equivalent propositions are equal. Surprisingly, it also
implies the Axiom of *function extensionality*, Axiom 10 in
Church 1940, which states that two pointwise equal functions are equal
(Voevodsky 2015). It also directly implies
that two isomorphic sets are equal, that two categorically equivalent
groupoids are equal, and so on.
This can be used to give a formulation of the notion
of *transport of structures* (Bourbaki 1957) along
equivalences. For instance, let \(M A\) be the type of monoid
structures on the set \(A\): this is the type of tuples \(m, e, p\)
where \(m\) is a binary operation on \(A\) and \(e\) an element of
\(A\) and \(p\) a proof that these elements satisfy the usual monoid
laws. The rule of substitution of equal by equal takes the form
\[
\mathbf{Id}\_U (A,B)\rightarrow M A\rightarrow M B
\]
If there is a bijection between \(A\) and \(B\) they are
equal by the Axiom of Univalence, and we can use this implication to
transport any monoid structure of \(A\) in a monoid structure
of \(B\).
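Transport of a monoid structure along a bijection can be spelled out concretely: the transported operation conjugates by the bijection, and the transported unit is the image of the unit. A sketch in Python (the names `transport_monoid` and `f_inv` are ours), transporting addition modulo 4 from the numbers 0..3 to their string names:

```python
def transport_monoid(m, e, f, f_inv):
    """Given a monoid (m, e) on A and a bijection f : A -> B with
    inverse f_inv, return the transported monoid structure on B."""
    m_B = lambda x, y: f(m(f_inv(x), f_inv(y)))
    e_B = f(e)
    return m_B, e_B

add4 = lambda x, y: (x + y) % 4          # monoid operation on {0, 1, 2, 3}
m_B, e_B = transport_monoid(add4, 0, str, int)

assert e_B == "0"
assert m_B("3", "2") == "1"              # (3 + 2) mod 4 = 1, transported
```

Any equation holding of `(add4, 0)` holds of the transported structure, which is exactly the point of transporting along an equivalence.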
Both Russell 1919 and Russell 1959 stress the importance of the
notion of structure. For instance, in Chapter VI of Russell 1919, it
is noticed that two similar relations essentially have the same
properties, and hence have the same "structure". (The
notion of "similarity" of relations was introduced in
Russell 1901.) We can also use this framework to refine
Russell's discussions on the notion of structure. For instance,
let **Monoid** be the type of pairs \(A,p\) where \(p\)
is an element of \(M A\). Two such pairs \(A,p\) and \(B,q\) are
isomorphic if there exists a bijection \(f\) from \(A\) to \(B\) such
that \(q\) is equal to the transport of structure of \(p\) along
\(f\). A consequence of the Axiom of Univalence is that two isomorphic
elements of the type **Monoid** are equal, and hence
share the same properties. Notice that such a general transport of
properties is *not* possible when structures are formulated in
a set theoretic framework. Indeed, in a set theoretic framework, it is
possible to formulate properties using the membership relation, for
instance the property that the carrier set of the structure contains
the natural number \(0\), a property that is not preserved in general by
isomorphisms. Intuitively, the set theoretical description of a
structure is not abstract enough since we can talk about the way this
structure is built up. This difference between set theory and type
theory is yet another illustration of the characterization by
J. Reynolds 1983 of a type structure as a "syntactical discipline
for enforcing level of abstraction".
## 1. Syntax
### 1.1 Fundamental Ideas
We start with an informal description of the fundamental ideas
underlying the syntax of Church's formulation of type
theory.
All entities have types, and if \(\alpha\) and \(\beta\) are types, the type
of functions from elements of type \(\beta\) to elements of type \(\alpha\)
is written as \((\alpha \beta)\). (This notation was introduced by
Church, but some authors write \((\beta \rightarrow \alpha)\) instead
of \((\alpha \beta)\). See, for example, Section 2 of the entry on
type theory.)
As noted by Schönfinkel (1924), functions of more than one
argument can be represented in terms of functions of one argument when
the values of these functions can themselves be functions. For
example, if *f* is a function of two arguments, for each
element *x* of the left domain of *f* there is a
function *g* (depending on *x*) such that \(gy = fxy\)
for each element *y* of the right domain of *f*. We may
now write \(g = fx\), and regard *f* as a function of a single
argument, whose value for any argument *x* in its domain is a
function \(fx\), whose value for any argument *y* in its domain
is *fxy*.
For a more explicit example, consider the function + which carries any
pair of natural numbers to their sum. We may denote this function by
\(+\_{((\sigma \sigma)\sigma)}\), where \(\sigma\) is the type of
natural numbers. Given any number *x*, \([+\_{((\sigma
\sigma)\sigma)}x]\) is the function which, when applied to any number
*y*, gives the value \([[+\_{((\sigma \sigma)\sigma)}x]y]\),
which is ordinarily abbreviated as \(x + y\). Thus \([+\_{((\sigma
\sigma)\sigma)}x]\) is the function of one argument which adds
*x* to any number. When we think of \(+\_{((\sigma
\sigma)\sigma)}\) as a function of one argument, we see that it maps
any number *x* to the function \([+\_{((\sigma
\sigma)\sigma)}x]\).
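Schönfinkel's reduction is exactly what functional programmers call *currying*. A sketch in Python:

```python
def curry(f):
    # Turn a two-argument function into a function of one argument
    # whose values are themselves functions of one argument.
    return lambda x: lambda y: f(x, y)

plus = curry(lambda x, y: x + y)   # plus : sigma -> (sigma -> sigma)
add2 = plus(2)                     # the function which adds 2 to any number
assert add2(5) == 7
assert plus(3)(4) == 7             # [[+ 3] 4], i.e. 3 + 4
```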
More generally, if *f* is a function which maps
*n*-tuples \(\langle w\_{\beta},x\_{\gamma},\ldots
,y\_{\delta},z\_{\tau}\rangle\) of elements of types \(\beta\),
\(\gamma\),..., \(\delta\), \(\tau\), respectively, to elements
of type \(\alpha\), we may assign to *f* the type
\(((\ldots((\alpha \tau)\delta)\ldots \gamma)\beta)\). It is customary
to use the convention of association to the left to omit parentheses,
and write this type symbol simply as \((\alpha \tau \delta \ldots
\gamma \beta)\).
A set or property can be represented by a function (often called
*characteristic function*) which maps elements to truth values,
so that an element is in the set, or has the property, in question iff
the function representing the set or property maps that element to
truth. When a statement is asserted, the speaker means that it is
true, so that \(s x\) means that \(s x\) is true, which also expresses
the assertions that *s* maps *x* to truth and that \(x
\in s\). In other words, \(x \in s\) iff \(s x\). We take \({o}\) as
the type symbol denoting the type of truth values, so we may speak of
any function of type \(({o}\alpha)\) as a *set* of elements of
type \(\alpha\). A function of type \((({o}\alpha)\beta)\) is a binary
relation between elements of type \(\beta\) and elements of type \(\alpha\).
For example, if \(\sigma\) is the type of the natural numbers, and
\(<\) is the order relation between natural numbers, \(<\) has
type \(({o}\sigma \sigma)\), and for all natural numbers *x*
and \(y, {<}x y\) (which we ordinarily write as \(x < y)\) has
the value truth iff *x* is less than *y*. Of course,
\(<\) can also be regarded as the function which maps each natural
number *x* to the set \(<x\) of all natural numbers
*y* such that *x* is less than *y*. Thus sets,
properties, and relations may be regarded as particular kinds of
functions. Church's type theory is thus a logic of
functions, and, in this sense, it is in the tradition of
Frege's *Begriffsschrift*. The opposite approach would be
to reduce functions to relations, which was the approach taken by
Whitehead and Russell (1927a) in the *Principia
Mathematica*.
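These representations are easy to make concrete: a set is its characteristic function into the truth values, and a curried binary relation applied to one argument yields a set. A sketch in Python (names ours):

```python
# A set of naturals as its characteristic function, of type (o sigma):
evens = lambda x: x % 2 == 0
assert evens(4) and not evens(7)    # x is in the set iff evens(x) is true

# The order relation <, of type (o sigma sigma), in curried form:
less = lambda x: lambda y: x < y
assert less(3)(5)                   # 3 < 5

# less(3) is itself the set of all y such that 3 < y:
above_three = less(3)
assert above_three(10) and not above_three(1)
```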
Expressions which denote elements of type \(\alpha\) are called
*well-formed formulas (wffs) of type* \(\alpha\). Thus, statements
of type theory are wffs of type \({o}\).
If \(\bA\_{\alpha}\) is a wff of type \(\alpha\) in which \(\bu\_{\alpha
\beta}\) is not free, the function (associated with) \(\bu\_{\alpha
\beta}\) such that \(\forall \bv\_{\beta}[\bu\_{\alpha \beta}\bv\_{\beta}
= \bA\_{\alpha}]\) is denoted by \([\lambda \bv\_{\beta}\bA\_{\alpha}]\).
Thus \(\lambda \bv\_{\beta}\) is a variable-binder, like \(\forall
\bv\_{\beta}\) or \(\exists \bv\_{\beta}\) (but with a quite different
meaning, of course); \(\lambda\) is known as an *abstraction
operator*. \([\lambda \bv\_{\beta}\bA\_{\alpha}]\) denotes the
function whose value on any argument \(\bv\_{\beta}\) is
\(\bA\_{\alpha}\), where \(\bv\_{\beta}\) may occur free in
\(\bA\_{\alpha}\). For example, \([\lambda n\_{\sigma}[4\cdot
n\_{\sigma}+3]]\) denotes the function whose value on any natural
number *n* is \(4\cdot n+3\). Hence, when we apply this
function to the number 5 we obtain \([\lambda n\_{\sigma}[4\cdot
n\_{\sigma}+3]]5 = 4\cdot 5+3 = 23\).
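In a language with anonymous functions the abstraction operator is built in, and applying an abstraction to an argument is exactly \(\beta\)-conversion. The running example, in Python:

```python
# [λ n_σ [4·n_σ + 3]] as an anonymous function:
f = lambda n: 4 * n + 3
# Applying it to 5 β-reduces to 4·5 + 3:
assert f(5) == 23
# The abstraction can also be applied inline, as in the text:
assert (lambda n: 4 * n + 3)(5) == 23
```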
We use \(\textsf{Sub}(\bB,\bv,\bA)\) as a notation for the result of
substituting \(\bB\) for \(\bv\) in \(\bA\), and
\(\textsf{SubFree}(\bB,\bv,\bA)\) as a notation for the result of
substituting \(\bB\) for all free occurrences of \(\bv\) in \(\bA\).
The process of replacing \([\lambda
\bv\_{\beta}\bA\_{\alpha}]\bB\_{\beta}\) by
\(\textsf{SubFree}(\bB\_{\beta},\bv\_{\beta},\bA\_{\alpha})\) (or
vice-versa) is known as \(\beta\)-*conversion*, which is one form
of \(\lambda\)-*conversion*. Of course, when \(\bA\_{{o}}\) is a
wff of type \({o}\), \([\lambda \bv\_{\beta}\bA\_{{o}}]\) denotes the
set of all elements \(\bv\_{\beta}\) (of type \(\beta)\) of which
\(\bA\_{{o}}\) is true; this set may also be denoted by
\(\{\bv\_{\beta}|\bA\_{{o}}\}\). For example, \([\lambda x\ x<y]\)
denotes the set of *x* such that *x* is less than
*y* (as well as that property which a number *x* has if
it is less than *y*). In familiar set-theoretic notation,
\[[\lambda \bv\_{\beta} \bA\_{{o}}]\bB\_{\beta} = \textsf{SubFree}(\bB\_{\beta},\bv\_{\beta},\bA\_{{o}})\]
would be written
\[\bB\_{\beta} \in \{\bv\_{\beta}|\bA\_{{o}}\} \equiv \textsf{SubFree}(\bB\_{\beta},\bv\_{\beta},\bA\_{{o}}).\]
(By the Axiom of Extensionality for truth values, when \(\bC\_{{o}}\)
and \(\bD\_{{o}}\) are of type \({o}, \bC\_{{o}} \equiv \bD\_{{o}}\) is
equivalent to \(\bC\_{{o}} = \bD\_{{o}}\).)
Propositional connectives and quantifiers can be assigned types and
can be denoted by constants of these types. The negation function maps
truth values to truth values, so it has type \(({o}{o})\). Similarly,
disjunction and conjunction (etc.) are binary functions from truth
values to truth values, so they have type \(({o}{o}{o})\).
The statement \(\forall \bx\_{\alpha}\bA\_{{o}}\) is true iff the set
\([\lambda \bx\_{\alpha}\bA\_{{o}}]\) contains all elements of type
\(\alpha\). A constant \(\Pi\_{{o}({o}\alpha)}\) can be introduced (for
each type symbol \(\alpha\)) to denote a property of sets: a set
\(s\_{{o}\alpha}\) has the property \(\Pi\_{{o}({o}\alpha)}\) iff
\(s\_{{o}\alpha}\) contains all elements of type \(\alpha\). With this
interpretation
\[ \forall s\_{{o}\alpha}\left[\Pi\_{{o}({o}\alpha)}s\_{{o}\alpha} \equiv \forall x\_{\alpha}\left[s\_{{o}\alpha}x\_{\alpha}\right]\right] \]
should be true, as well as
\[ \Pi\_{{o}({o}\alpha)}[\lambda \bx\_{\alpha}\bA\_{{o}}] \equiv \forall \bx\_{\alpha}[[\lambda \bx\_{\alpha}\bA\_{{o}}]\bx\_{\alpha}] \label{eqPi} \]
for any wff \(\bA\_{{o}}\) and variable \(\bx\_{\alpha}\). Since by
\(\lambda\)-conversion we have
\[ [\lambda \bx\_{\alpha}\bA\_{{o}}]\bx\_{\alpha} \equiv \bA\_{{o}} \]
this equation can be written more simply as
\[ \Pi\_{{o}({o}\alpha)}[\lambda \bx\_{\alpha}\bA\_{{o}}] \equiv \forall \bx\_{\alpha}\bA\_{{o}} \]
Thus, \(\forall \bx\_{\alpha}\) can be defined in terms of
\(\Pi\_{{o}({o}\alpha)}\), and \(\lambda\) is the only variable-binder
that is needed.
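The role of \(\Pi\_{{o}({o}\alpha)}\) can be illustrated over a small finite model. The following Python sketch (an illustration only, not part of Church's formalism; all names are ours) represents type \({o}\) by Python booleans, individuals by a finite list, and a set by a predicate; `Pi` is then the property of containing every individual, and \(\forall \bx\, \bA\) is encoded by applying `Pi` to a lambda-abstraction:

```python
# Illustrative finite model: a "set" of individuals is a Python predicate.
individuals = [0, 1, 2]

def Pi(s):
    # Pi holds of a set s exactly when s contains all individuals.
    return all(s(x) for x in individuals)

# The quantified statement  forall x. A  is encoded as  Pi(lambda x: A).
assert Pi(lambda x: x >= 0)          # true: every individual is >= 0
assert not Pi(lambda x: x >= 1)      # false: 0 is a counterexample
```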
### 1.2 Formulas (as certain terms)
Before we state the definition of a "formula", a word of
caution is in order. The reader may be accustomed to thinking of a
formula as an expression which plays the role of an assertion in a
formal language, and of a term as an expression which designates an
object. Church's terminology is somewhat different, and provides
a uniform way of discussing expressions of many different types. What
we call a well-formed formula of type \(\alpha\) (\(\text{wff}\_{\alpha}\))
below would in more standard terminology be called a *term* of
type \(\alpha\), and then only certain terms, namely those with type
\({o}\), would be called formulas. Nevertheless, in this entry we have
decided to stay with Church's original terminology. Another
remark concerns the use of some specific mathematical notation. In
what follows, the entry distinguishes between the symbols \(\imath\),
\(\iota\_{(\alpha({o}\alpha))}\), and \(\atoi\). The first is the
symbol used for the type of individuals; the second is the symbol used
for a logical constant (see
Section 1.2.1
below); the third is the symbol used as a variable-binding operator
that represents the definite description "the" (see
Section 1.3.4).
The reader should take care not to confuse them, and should check that
the browser displays these symbols correctly.
#### 1.2.1 Definitions
*Type symbols* are defined inductively as follows:
* \(\imath\) is a type symbol (denoting the type of individuals).
There may also be additional primitive type symbols which are used in
formalizing disciplines where it is natural to have several sorts of
individuals.
* \({o}\) is a type symbol (denoting the type of truth values).
* If \(\alpha\) and \(\beta\) are type symbols, then \((\alpha \beta)\) is
a type symbol (denoting the type of functions from elements of type
\(\beta\) to elements of type \(\alpha\)).
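This inductive definition is easy to mirror in code. In the following Python sketch (the representation and names are ours), base types are strings and the function type \((\alpha \beta)\) is a pair:

```python
# Base type symbols: "i" (individuals) and "o" (truth values).
I, O = "i", "o"

def fn(alpha, beta):
    # The type (alpha beta) of functions from elements of type beta
    # to elements of type alpha, represented as a pair.
    return (alpha, beta)

def is_type(t):
    # Decide whether t is a well-formed type symbol.
    if t in (I, O):
        return True
    return isinstance(t, tuple) and len(t) == 2 and is_type(t[0]) and is_type(t[1])

assert is_type(fn(O, I))             # (oi): sets of individuals
assert is_type(fn(O, fn(O, I)))      # (o(oi)): properties of such sets
assert not is_type(("i",))           # malformed
```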
The *primitive symbols* are the following:
* Improper symbols: [, ], \(\lambda\)
* For each type symbol \(\alpha\), a denumerable list of
*variables* of type \(\alpha\): \(a\_{\alpha}, b\_{\alpha},
c\_{\alpha},\ldots\)
* Logical constants: \(\nsim\_{({o}{o})}\), \(\lor\_{(({o}{o}){o})}\),
\(\Pi\_{({o}({o}\alpha))}\) and \(\iota\_{(\alpha({o}\alpha))}\) (for
each type symbol \(\alpha\)). The types of these constants are indicated
by their subscripts. The symbol \(\nsim\_{({o}{o})}\) denotes negation,
\(\lor\_{(({o}{o}){o})}\) denotes disjunction, and
\(\Pi\_{({o}({o}\alpha))}\) is used in defining the universal
quantifier as discussed above. \(\iota\_{(\alpha({o}\alpha))}\) serves
either as a description or selection operator as discussed in
Section 1.3.4
and
Section 1.3.5
below.
* In addition, there may be other constants of various types, which
will be called *nonlogical constants* or *parameters*.
Each choice of parameters determines a particular formulation of the
system of type theory. Parameters are typically used as names for
particular entities in the discipline being formalized.
A *formula* is a finite sequence of primitive symbols. Certain
formulas are called *well-formed formulas* (*wff*s). We
write \(\textrm{wff}\_{\alpha}\) as an abbreviation for *wff of
type* \(\alpha\), and define this concept inductively as follows:
* A primitive variable or constant of type \(\alpha\) is a
wff\(\_{\alpha}\).
* If \(\bA\_{\alpha \beta}\) and \(\bB\_{\beta}\) are wffs of the
indicated types, then \([\bA\_{\alpha \beta}\bB\_{\beta}]\) is a
wff\(\_{\alpha}\).
* If \(\bx\_{\beta}\) is a variable of type \(\beta\) and
\(\bA\_{\alpha}\) is a wff, then \([\lambda \bx\_{\beta}\bA\_{\alpha}]\)
is a wff\(\_{(\alpha \beta)}\).
Note, for example, that by (a) \(\nsim\_{({o}{o})}\) is a
wff\(\_{({o}{o})}\), so by (b) if \(\bA\_{{o}}\) is a wff\(\_{{o}}\),
then \([\nsim\_{({o}{o})}\bA\_{{o}}]\) is a wff\(\_{{o}}\). Usually, the
latter wff will simply be written as \(\nsim \bA\). It is often
convenient to avoid parentheses, brackets and type symbols, and use
conventions for omitting them. For formulas we use the convention of
association to the left, and we may write \(\lor\_{((oo)o)}\bA\_{{o}}
\bB\_{{o}}\) instead of \([[\lor\_{((oo)o)}\bA\_{{o}}] \bB\_{{o}}]\). For
types the corresponding convention is analogous.
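The three formation rules translate directly into a recursive type-checker. In this Python sketch (the tagged-tuple term representation is our own), rule (a) reads a type off a variable or constant, rule (b) checks an application, and rule (c) forms a function type for an abstraction:

```python
def typeof(term):
    kind = term[0]
    # Rule (a): a variable or constant carries its type.
    if kind == "var":                         # ("var", name, type)
        return term[2]
    # Rule (b): [A_{ab} B_b] is a wff of type a.
    if kind == "app":                         # ("app", A, B)
        ta, tb = typeof(term[1]), typeof(term[2])
        if not (isinstance(ta, tuple) and ta[1] == tb):
            raise TypeError("ill-typed application")
        return ta[0]
    # Rule (c): [lambda x_b A_a] is a wff of type (a b).
    if kind == "lam":                         # ("lam", ("var", n, b), A)
        return (typeof(term[2]), term[1][2])

neg = ("var", "~", ("o", "o"))                # negation constant, type (oo)
p = ("var", "p", "o")
assert typeof(("app", neg, p)) == "o"         # [~ p] is a wff of type o
assert typeof(("lam", ("var", "x", "i"), p)) == ("o", "i")
```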
##### Abbreviations:
* \(\bA\_{{o}} \lor \bB\_{{o}}\) stands for \(\lor\_{ooo}\bA\_{{o}}
\bB\_{{o}}\).
* \(\bA\_{{o}} \supset \bB\_{{o}}\) stands for
\([\nsim\_{{o}{o}}\bA\_{{o}}] \lor \bB\_{{o}}\).
* \(\forall \bx\_{\alpha}\bA\_{{o}}\) stands for
\(\Pi\_{{o}({o}\alpha)} [\lambda \bx\_{\alpha}\bA\_{{o}}]\).
* Other propositional connectives, and the existential quantifier,
are defined in familiar ways. In particular,
\[ {\bA\_{{o}} \equiv \bB\_{{o}} \quad \text{stands for} \quad [\bA\_{{o}} \supset \bB\_{{o}}] \land [\bB\_{{o}} \supset \bA\_{{o}}]} \]
* \(\sfQ\_{{o}\alpha \alpha}\) stands for \(\lambda
x\_{\alpha}\lambda y\_{\alpha}\forall
f\_{{o}\alpha}[f\_{{o}\alpha} x\_{\alpha} \supset
f\_{{o}\alpha}y\_{\alpha}]\).
* \(\bA\_{\alpha} = \bB\_{\alpha}\) stands for \(\sfQ\_{{o}\alpha
\alpha}\bA\_{\alpha}\bB\_{\alpha}\).
The last definition is known as the Leibnizian definition of equality.
It asserts that *x* and *y* are the same if *y*
has every property that *x* has. Actually, Leibniz called his
definition "the identity of indiscernibles" and gave it in
the form of a biconditional: *x* and *y* are the same if
*x* and *y* have exactly the same properties. It is not
difficult to show that these two forms of the definition are logically
equivalent.
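Over a finite domain the Leibnizian definition can be checked exhaustively: enumerating every property shows that \(\sfQ\) agrees with ordinary identity. A Python sketch of this check (an illustration with our own names):

```python
from itertools import product

domain = [0, 1, 2]
# Every property of domain elements, given as a truth-value table.
properties = [dict(zip(domain, bits))
              for bits in product([False, True], repeat=len(domain))]

def leibniz_eq(x, y):
    # Q x y: every property f satisfied by x is also satisfied by y.
    return all((not f[x]) or f[y] for f in properties)

# The Leibnizian definition coincides with identity on the whole domain.
for x in domain:
    for y in domain:
        assert leibniz_eq(x, y) == (x == y)
```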
#### 1.2.2 Examples
We now provide a few examples to illustrate how various assertions and
concepts can be expressed in Church's type theory.
**Example 1** To express the assertion that
"Napoleon is charismatic" we introduce constants
\(\const{Charismatic}\_{{o}\imath}\) and \(\const{Napoleon}\_{\imath}\),
with the types indicated by their subscripts and the obvious meanings,
and assert the wff
\[\const{Charismatic}\_{{o}\imath} \const{Napoleon}\_{\imath}\]
If we wish to express the assertion that "Napoleon has all the
properties of a great general", we might consider interpreting
this to mean that "Napoleon has all the properties of some great
general", but it seems more appropriate to interpret this
statement as meaning that "Napoleon has all the properties which
all great generals have". If the constant
\(\const{GreatGeneral}\_{{o}\imath}\) is added to the formal language,
this can be expressed by the wff
\[\forall p\_{{o}\imath}[\forall g\_{\imath}{[}\const{GreatGeneral}\_{{o}\imath}g\_{\imath} \supset p\_{{o}\imath}g\_{\imath}] \supset p\_{{o}\imath} \const{Napoleon}\_{\imath}]\]
As an example of such a property, we note that the sentence
"Napoleon's soldiers admire him" can be expressed in
a similar way by the wff
\[\forall x\_{\imath}{[}\const{Soldier}\_{{o}\imath}x\_{\imath} \land \const{CommanderOf}\_{\imath\imath}x\_{\imath} = \const{Napoleon}\_{\imath} \supset \const{Admires}\_{{o}\imath\imath}x\_{\imath}\const{Napoleon}\_{\imath}]\]
By \(\lambda\)-conversion, this is equivalent to
\[[\lambda n\_{\imath}\forall x\_{\imath}{[}\const{Soldier}\_{{o}\imath}x\_{\imath} \land \const{CommanderOf}\_{\imath\imath}x\_{\imath} = n\_{\imath} \supset \const{Admires}\_{{o}\imath\imath}x\_{\imath}n\_{\imath}]]\, \const{Napoleon}\_{\imath}\]
This statement asserts that one of the properties which Napoleon has
is that of being admired by his soldiers. The property itself is
expressed by the wff
\[\lambda n\_{\imath}\forall x\_{\imath}{[}\const{Soldier}\_{{o}\imath}x\_{\imath} \land \const{CommanderOf}\_{\imath\imath}x\_{\imath} = n\_{\imath} \supset \const{Admires}\_{{o}\imath\imath}x\_{\imath}n\_{\imath}]\]
**Example 2** We illustrate some potential applications
of type theory with the following fable.
>
>
> A rich and somewhat eccentric lady named Sheila has an ostrich and a
> cheetah as pets, and she wishes to take them from her hotel to her
> remote and almost inaccessible farm. Various portions of the trip may
> involve using elevators, boxcars, airplanes, trucks, very small boats,
> donkey carts, suspension bridges, etc., and she and the pets will not
> always be together. She knows that she must not permit the ostrich and
> the cheetah to be together when she is not with them.
>
>
>
We consider how certain aspects of this problem can be formalized so
that Sheila can use an automated reasoning system to help analyze the
possibilities.
There will be a set *Moments* of instants or intervals of time
during the trip. She will start the trip at the location
\(\const{Hotel}\) and moment \(\const{Start}\), and end it at the
location \(\const{Farm}\) and moment \(\const{Finish}\). Moments will
have type \(\tau\), and locations will have type \(\varrho\). A
*state* will have type \(\sigma\) and will specify the location
of Sheila, the ostrich, and the cheetah at a given moment. A
*plan* will specify where the entities will be at each
moment. It will be a function from moments to states,
and will have type \((\sigma \tau)\). The exact representation of
states need not concern us, but there will be functions from states to
locations called \(\const{LocationOfSheila}\),
\(\const{LocationOfOstrich}\), and \(\const{LocationOfCheetah}\) which
provide the indicated information. Thus,
\(\const{LocationOfSheila}\_{\varrho \sigma}[p\_{\sigma \tau}t\_{\tau}]\)
will be the location of Sheila according to plan \(p\_{\sigma \tau}\)
at moment \(t\_{\tau}\). The set \(\const{Proposals}\_{{o}(\sigma
\tau)}\) is the set of plans Sheila is considering.
We define a plan *p* to be acceptable if, according to that
plan, the group starts at the hotel, finishes at the farm, and
whenever the ostrich and the cheetah are together, Sheila is there
too. Formally, we define \(\const{Acceptable}\_{{o}(\sigma \tau)}\)
as
\[\begin{multline}
\lambda p\_{\sigma \tau} \left[ \begin{aligned}
& \const{LocationOfSheila}\_{\varrho \sigma} [p\_{\sigma \tau}\const{Start}\_{\tau}] = \const{Hotel}\_{\varrho} \,\land\\
& \const{LocationOfOstrich}\_{\varrho \sigma}[p\_{\sigma \tau} \const{Start}\_{\tau}] = \const{Hotel}\_{\varrho} \,\land\\
& \const{LocationOfCheetah}\_{\varrho \sigma}[p\_{\sigma \tau} \const{Start}\_{\tau}] = \const{Hotel}\_{\varrho} \,\land\\
& \const{LocationOfSheila}\_{\varrho \sigma}[p\_{\sigma \tau} \const{Finish}\_{\tau}] = \const{Farm}\_{\varrho} \,\land\\
& \const{LocationOfOstrich}\_{\varrho \sigma}[p\_{\sigma \tau} \const{Finish}\_{\tau}] = \const{Farm}\_{\varrho} \,\land\\
& \const{LocationOfCheetah}\_{\varrho \sigma}[p\_{\sigma \tau} \const{Finish}\_{\tau}] = \const{Farm}\_{\varrho} \,\land\\
& \forall t\_{\tau} \left [ \begin{split}
& \const{Moments}\_{{o}\tau}t\_{\tau} \\
& \supset \left [ \begin{split}
& \left[ \begin{split}
& \const{LocationOfOstrich}\_{\varrho \sigma}[p\_{\sigma \tau}t\_{\tau}] \\
& = \const{LocationOfCheetah}\_{\varrho \sigma}[p\_{\sigma \tau}t\_{\tau}]
\end{split} \right] \\
& \supset \left[ \begin{split}
& \const{LocationOfSheila}\_{\varrho \sigma}[p\_{\sigma \tau}t\_{\tau}] \\
& = \const{LocationOfCheetah}\_{\varrho\sigma}[p\_{\sigma\tau}t\_{\tau}]
\end{split} \right]
\end{split} \right]
\end{split} \right]
\end{aligned} \right]
\end{multline}\]
We can express the assertion that Sheila has a way to accomplish her
objective with the formula
\[\exists p\_{\sigma \tau}[\const{Proposals}\_{{o}(\sigma \tau)}p\_{\sigma \tau} \land \const{Acceptable}\_{{o}(\sigma \tau)}p\_{\sigma \tau}]\]
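The formalization can be mirrored concretely. The following Python sketch uses hypothetical data of our own choosing (moments \(0\)–\(2\), locations as strings, a plan as a map from moments to location triples) and checks the body of \(\const{Acceptable}\):

```python
# Hypothetical concrete data: a plan maps each moment to a
# (Sheila, ostrich, cheetah) location triple.
MOMENTS = [0, 1, 2]
START, FINISH = 0, 2

def acceptable(plan):
    # Start at the hotel and finish at the farm...
    if plan[START] != ("Hotel", "Hotel", "Hotel"):
        return False
    if plan[FINISH] != ("Farm", "Farm", "Farm"):
        return False
    # ...and whenever ostrich and cheetah are together, Sheila is there too.
    return all(sheila == cheetah
               for (sheila, ostrich, cheetah) in (plan[t] for t in MOMENTS)
               if ostrich == cheetah)

good = {0: ("Hotel",) * 3, 1: ("Boat", "Boat", "Boat"), 2: ("Farm",) * 3}
bad = {0: ("Hotel",) * 3, 1: ("Boat", "Truck", "Truck"), 2: ("Farm",) * 3}
assert acceptable(good)
assert not acceptable(bad)    # pets together at moment 1 without Sheila
```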
**Example 3** We now provide a mathematical example.
Mathematical ideas can be expressed in type theory without introducing
any new constants. An *iterate* of a function *f* from a
set to itself is a function which applies *f* one or more
times. For example, if \(g(x) = f(f(f(x)))\), then *g* is an
iterate of *f*.
\([\text{ITERATE+}\_{{o}(\imath\imath)(\imath\imath)}f\_{\imath\imath}g\_{\imath\imath}]\)
means that \(g\_{\imath\imath}\) is an iterate of \(f\_{\imath\imath}\).
\(\text{ITERATE+}\_{{o}(\imath\imath)(\imath\imath)}\) is defined
(inductively) as
\[ \lambda f\_{\imath\imath}\lambda g\_{\imath\imath}\forall p\_{{o}(\imath\imath)} \left [p\_{{o}(\imath\imath)}f\_{\imath\imath} \land \forall j\_{\imath\imath} \left [p\_{{o}(\imath\imath)}j\_{\imath\imath} \supset p\_{{o}(\imath\imath)} \left [\lambda x\_{\imath}f\_{\imath\imath} \left [j\_{\imath\imath}x\_{\imath} \right] \right] \right] \supset p\_{{o}(\imath\imath)}g\_{\imath\imath} \right] \]
Thus, *g* is an iterate of *f* if *g* is in every
set *p* of functions which contains *f* and which
contains the function \(\lambda
x\_{\imath}f\_{\imath\imath}[j\_{\imath\imath}x\_{\imath}]\) (i.e.,
*f* composed with *j*) whenever it contains
*j*.
A *fixed point* of *f* is an element *y* such
that \(f(y) = y\).
It can be proved that if some iterate of a function *f* has a
unique fixed point, then *f* itself has a fixed point. This
theorem can be expressed by the wff
\[\begin{aligned}
\forall f\_{\imath\imath} \left [ \begin{split}
\exists g\_{\imath\imath} & \left [ \begin{split}
&\text{ITERATE+}\_{{o}(\imath\imath)(\imath\imath)} f\_{\imath\imath} g\_{\imath\imath} \\
&{} \land \exists x\_{\imath} \left [ \begin{split}
& g\_{\imath\imath}x\_{\imath} = x\_{\imath} \\
&{} \land \forall z\_{\imath} \left [g\_{\imath\imath} z\_{\imath} = z\_{\imath} \supset z\_{\imath} = x\_{\imath} \right]
\end{split} \right]
\end{split} \right]\\
& \supset { } \exists y\_{\imath} [f\_{\imath\imath}y\_{\imath} = y\_{\imath}]
\end{split} \right].
\end{aligned}\]
See Andrews et al. 1996 for a discussion of how this theorem, which
is called THM15B, can be proved automatically.
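For a small finite domain the theorem can also be tested exhaustively. In this Python sketch (our own encoding), a function on a three-element set is a tuple, and the distinct iterates of \(f\) are enumerated until they first repeat; on a finite domain every later iterate equals one of these:

```python
from itertools import product

D = range(3)

def compose(f, g):
    # (f . g)(x) = f(g(x)), with functions represented as tuples over D
    return tuple(f[g[x]] for x in D)

def iterates(f):
    # f, f^2, f^3, ... up to the first repetition.
    seen, g = [], f
    while g not in seen:
        seen.append(g)
        g = compose(f, g)
    return seen

def fixed_points(g):
    return [x for x in D if g[x] == x]

for f in product(D, repeat=3):           # all 27 functions on D
    if any(len(fixed_points(g)) == 1 for g in iterates(f)):
        assert fixed_points(f)           # THM15B: f itself has a fixed point
```

The underlying argument is visible in the finite case: if \(g = f^n\) has the unique fixed point \(y\), then \(g(f(y)) = f(g(y)) = f(y)\), so \(f(y)\) is also a fixed point of \(g\), whence \(f(y) = y\).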
**Example 4** An example from philosophy is
Gödel's variant of the ontological argument for the
existence of God. This example illustrates two interesting
aspects:
* Church's type theory can be employed as a meta-logic to
concisely embed other expressive logics, such as the higher-order modal
logic assumed by Gödel. By exploiting the possible world
semantics of this target logic, its syntactic elements are defined in
such a way that the infrastructure of the meta-logic is reused as
much as possible. In this technique, called *shallow semantical
embedding*, the modal operator \(\Box\), for example, is simply
identified with (taken as syntactic sugar for) the
\(\lambda\)-formula
\[\lambda \varphi\_{{o}\imath} \lambda w\_{\imath} \forall v\_{\imath} [R\_{{o}\imath\imath} w\_{\imath} v\_{\imath} \supset \varphi\_{{o}\imath} v\_{\imath}]\]
where \(R\_{{o}\imath\imath}\) denotes the accessibility relation
associated with \(\Box\) and type \({\imath}\) is identified with
possible worlds. Moreover, since \(\forall x\_{\alpha} [\bA\_{{o}\alpha}
x\_{\alpha}]\) is shorthand in Church's type theory for
\(\Pi\_{{o}({o}\alpha)} [\lambda x\_{\alpha} [\bA\_{{o}\alpha}
x\_{\alpha}]]\), the modal formula
\[\Box \forall x \bP x\]
is represented as
\[\Box \Pi' [\lambda x\_{\alpha} \lambda w\_{\imath} [\bP\_{{o}\imath\alpha} x\_{\alpha} w\_{\imath}]]\]
where \(\Pi'\) stands for the \(\lambda\)-term
\[\lambda \Phi\_{{o}\imath\alpha} \lambda w\_{\imath} \forall x\_{\alpha} [\Phi\_{{o}\imath\alpha} x\_{\alpha} w\_{\imath}]\]
and the \(\Box\) gets resolved as described above. The above choice of
\(\Pi'\) realizes a possibilist notion of quantification. By
introducing a binary "existence" predicate in the
metalogic and by utilizing this predicate as an additional guard in
the definition of \(\Pi'\) an actualist notion of quantification can
be obtained. Expressing that an embedded modal formula
\(\varphi\_{{o}\imath}\) is globally valid is captured by the formula
\(\forall x\_{\imath} [\varphi\_{{o}\imath} x\_{\imath}]\). Local
validity (and also actuality) can be modeled as \(\varphi\_{{o}\imath}
n\_{\imath}\), where \(n\_{\imath}\) is a nominal (constant symbol in
the meta-logic) denoting a particular possible world.
* The above technique can be exploited for a natural encoding and
automated assessment of Gödel's ontological argument in
higher-order modal logic, which unfolds into formulas in
Church's type theory such that higher-order theorem provers can
be applied. Further details are presented in Section 6 (Logic and
Philosophy) of the SEP entry on
automated reasoning
and also in
Section 5.2;
moreover, see Benzmüller & Woltzenlogel-Paleo 2014 and
Benzmüller 2019.
**Example 5** Suppose we omit the use of type symbols in
the definitions of wffs. Then we can write the formula \(\lambda
x\nsim[xx]\), which we shall call \(\textrm{R}\). It can be regarded
as denoting the set of all sets *x* such that *x* is not
in *x*. We may then consider the formula \([\textrm{R R}]\),
which expresses the assertion that \(\textrm{R}\) is in itself. We can
clearly prove \([\textrm{R R}] \equiv [[\lambda x\nsim [xx]]
\textrm{R}]\), so by \(\lambda\)-conversion we can derive \([\textrm{R
R}] \equiv\, \nsim[\textrm{R R}]\), which is a contradiction. This is
Russell's paradox. Russell's discovery of this paradox
(Russell 1903, 101-107) played a crucial role in the development of
type theory. Of course, when type symbols are present, \(\textrm{R}\)
is not well-formed, and the contradiction cannot be derived.
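The same phenomenon can be observed in a dynamically typed programming language, where nothing prevents self-application. A Python sketch: evaluating the analogue of \([\textrm{R R}]\) never yields a truth value, and the non-terminating recursion surfaces as a `RecursionError`.

```python
# R = lambda x: not x(x) is the untyped paradoxical term: R(R) would have
# to equal not R(R), so its evaluation recurses without ever returning.
R = lambda x: not x(x)

try:
    R(R)
    outcome = "value"        # unreachable
except RecursionError:
    outcome = "no value"

assert outcome == "no value"
```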
### 1.3 Axioms and Rules of Inference
#### 1.3.1 Rules of Inference
1. *Alphabetic Change of Bound Variables*
\((\alpha\)-*conversion*). To replace any well-formed part
\(\lambda \bx\_{\beta}\bA\_{\alpha}\) of a wff by \(\lambda \by\_{\beta}
\textsf{Sub}(\by\_{\beta},\bx\_{\beta},\bA\_{\alpha})\), provided that
\(\by\_{\beta}\) does not occur in \(\bA\_{\alpha}\) and \(\bx\_{\beta}\)
is not bound in \(\bA\_{\alpha}\).
2. \(\beta\)-*contraction*. To replace any well-formed part
\([\lambda \bx\_{\alpha}\bB\_{\beta}] \bA\_{\alpha}\) of a wff by
\(\textsf{Sub}(\bA\_{\alpha},\bx\_{\alpha},\bB\_{\beta})\), provided that
the bound variables of \(\bB\_{\beta}\) are distinct both from
\(\bx\_{\alpha}\) and from the free variables of \(\bA\_{\alpha}\).
3. \(\beta\)-*expansion*. To infer \(\bC\) from \(\bD\) if
\(\bD\) can be inferred from \(\bC\) by a single application of
\(\beta\)-contraction.
4. *Substitution*. From \(\bF\_{({o}\alpha)}\bx\_{\alpha}\), to
infer \(\bF\_{({o}\alpha)}\bA\_{\alpha}\), provided that
\(\bx\_{\alpha}\) is not a free variable of \(\bF\_{({o}\alpha)}\).
5. *Modus Ponens*. From \([\bA\_{{o}} \supset \bB\_{{o}}]\) and
\(\bA\_{{o}}\), to infer \(\bB\_{{o}}\).
6. *Generalization*. From \(\bF\_{({o}\alpha)}\bx\_{\alpha}\) to
infer \(\Pi\_{{o}({o}\alpha)}\bF\_{({o}\alpha)}\), provided that
\(\bx\_{\alpha}\) is not a free variable of \(\bF\_{({o}\alpha)}\).
#### 1.3.2 Elementary Type Theory
We start by listing the axioms for what we shall call *elementary
type theory*.
\[\begin{align}
[p\_{{o}} \lor p\_{{o}}] & \supset p\_{{o}} \tag{E1}\\
p\_{{o}} & \supset [p\_{{o}} \lor q\_{{o}}] \tag{E2}\\
[p\_{{o}} \lor q\_{{o}}] & \supset [q\_{{o}} \lor p\_{{o}}] \tag{E3}\\
[p\_{{o}} \supset q\_{{o}}] & \supset [[r\_{{o}} \lor p\_{{o}}] \supset [r\_{{o}} \lor q\_{{o}}]] \tag{E4}\\
\Pi\_{{o}({o}\alpha)}f\_{({o}\alpha)} & \supset f\_{({o}\alpha)}x\_{\alpha} \tag{E\(5^{\alpha}\)} \\
\forall x\_{\alpha}[p\_{{o}} \lor f\_{({o}\alpha)}x\_{\alpha}] & \supset \left[p\_{{o}} \lor \Pi\_{{o}({o}\alpha)}f\_{({o}\alpha)}\right] \tag{E\(6^{\alpha}\)}
\end{align}\]
The theorems of elementary type theory are those theorems which can be
derived, using the rules of inference, from Axioms
\((\rE1){-}(\rE6^{\alpha})\) (for all type symbols \(\alpha)\). We shall
sometimes refer to elementary type theory as \(\cT\). It embodies the
logic of propositional connectives, quantifiers, and
\(\lambda\)-conversion in the context of type theory.
To illustrate the rules and axioms introduced above, we give a short
and trivial proof in \(\cT\). Following each wff of the proof, we
indicate how it was inferred. (The proof is actually quite
inefficient, since line 3 is not used later, and line 7 can be derived
directly from line 5 without using line 6. The additional proof lines
have been inserted to illustrate some relevant aspects. For the sake
of readability, many brackets have been deleted from the formulas in
this proof. The diligent reader should be able to restore them.)
\[\begin{alignat}{2}
\forall x\_{\imath}\left[p\_{{o}} \lor f\_{{o}\imath}x\_{\imath}\right] \supset\left[p\_{{o}} \lor \Pi\_{{o}({o}\imath)} f\_{{o}\imath}\right] && \text{Axiom E\(6^{\imath}\)} \tag\*{1.}\\
\bigg[\lambda f\_{{o}\imath}\bigg[\forall x\_{\imath}[p\_{{o}} \lor f\_{{o}\imath}x\_{\imath}] \supset \bigg[p\_{{o}} \lor \Pi\_{{o}({o}\imath)}f\_{{o}\imath}\bigg]\bigg]\bigg] f\_{{o}\imath} && \text{\(\beta\)-expansion: 1} \tag\*{2.}\\
\Pi\_{{o}({o}({o}\imath))}\bigg[\lambda f\_{{o}\imath}\bigg[\forall x\_{\imath}[p\_{{o}} \lor f\_{{o}\imath}x\_{\imath}] \supset \bigg[p\_{{o}} \lor \Pi\_{{o}({o}\imath)}f\_{{o}\imath}\bigg]\bigg]\bigg] && \text{Generalization: 2} \tag\*{3.}\\
\bigg[\lambda f\_{{o}\imath}\bigg[\forall x\_{\imath}[p\_{{o}}\lor f\_{{o}\imath}x\_{\imath}] \supset \bigg[p\_{{o}} \lor \Pi\_{{o}({o}\imath)}f\_{{o}\imath}\bigg]\bigg]\bigg] [\lambda x\_{\imath}r\_{{o}\imath}x\_{\imath}] && \text{Substitution: 2} \tag\*{4.}\\
\forall x\_{\imath}[p\_{{o}} \lor [\lambda x\_{\imath}r\_{{o}\imath}x\_{\imath}]x\_{\imath}] \supset \left[p\_{{o}} \lor \Pi\_{{o}({o}\imath)}\left[\lambda x\_{\imath}r\_{{o}\imath}x\_{\imath}\right]\right] && \text{\(\beta\)-contraction: 4} \tag\*{5.}\\
\forall x\_{\imath}[p\_{{o}} \lor [\lambda y\_{\imath} r\_{{o}\imath} y\_{\imath}] x\_{\imath}] \supset \left[p\_{{o}} \lor \Pi\_{{o}({o}\imath)}\left[\lambda x\_{\imath}r\_{{o}\imath}x\_{\imath}\right]\right] && \text{\(\alpha\)-conversion: 5} \tag\*{6.}\\
\forall x\_{\imath}\left[p\_{{o}} \lor r\_{{o}\imath}x\_{\imath}\right] \supset \left[p\_{{o}} \lor \Pi\_{{o}({o}\imath)}\left[\lambda x\_{\imath}r\_{{o}\imath}x\_{\imath}\right]\right] && \text{\(\beta\)-contraction: 6} \tag\*{7.}
\end{alignat}\]
Note that line 3 can be written as
\[ \forall f\_{{o}\imath} [\forall x\_{\imath} [p\_{{o}} \lor f\_{{o}\imath}x\_{\imath}] \supset [p\_{{o}} \lor [\forall x\_{\imath} f\_{{o}\imath} x\_{\imath}] ]] \tag\*{\(3'.\)} \]
and line 7 can be written as
\[ \forall x\_{\imath}[p\_{{o}} \lor r\_{{o}\imath}x\_{\imath}] \supset [p\_{{o}} \lor \forall x\_{\imath}r\_{{o}\imath}x\_{\imath}] \tag\*{\(7'.\)} \]
We have thus derived a well known law of quantification theory. We
illustrate one possible interpretation of the line \(7'\) wff (which is
closely related to Axiom E6) by considering a situation in which a
rancher puts some horses in a corral and leaves for the night. Later,
he cannot remember whether he closed the gate to the corral. While
reflecting on the situation, he comes to a conclusion which can be
expressed on line \(7'\) if we take the horses to be the elements of type
\(\imath\), interpret \(p\_{{o}}\) to mean "the gate was
closed", and interpret \(r\_{{o}\imath}\) so that
\(r\_{{o}\imath}x\_{\imath}\) asserts "\(x\_{\imath}\) left the
corral". With this interpretation, line \(7'\) says
>
>
> If it is true of every horse that the gate was closed or that the
> horse left the corral, then the gate was closed or every horse left
> the corral.
>
>
>
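The line \(7'\) law can also be verified by brute force over a finite interpretation. A Python sketch (our own encoding) checks every combination of a truth value for \(p\_{{o}}\) and a predicate \(r\_{{o}\imath}\) on three horses:

```python
from itertools import product

horses = ["a", "b", "c"]

def instance_holds(p, r):
    # forall x (p or r x)  implies  (p or forall x (r x))
    antecedent = all(p or r[x] for x in horses)
    consequent = p or all(r[x] for x in horses)
    return (not antecedent) or consequent

# Check every interpretation: p a truth value, r any predicate on horses.
for p in (False, True):
    for bits in product((False, True), repeat=len(horses)):
        assert instance_holds(p, dict(zip(horses, bits)))
```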
To the axioms listed above we add the axioms below to obtain
Church's type theory.
#### 1.3.3 Axioms of Extensionality
The axioms of boolean and functional extensionality are the
following:
\[\begin{align}
[x\_{{o}} \equiv y\_{{o}}] & \supset x\_{{o}} = y\_{{o}} \tag{\(7^{o}\)} \\
\forall x\_{\beta}[f\_{\alpha \beta}x\_{\beta} = g\_{\alpha \beta}x\_{\beta}] & \supset f\_{\alpha \beta} = g\_{\alpha \beta}\tag{\(7^{\alpha \beta}\)}
\end{align}\]
Church did not include Axiom \(7^{{o}}\) in his list of axioms in
Church 1940, but he mentioned the possibility of including it. Henkin
did include it in Henkin 1950.
#### 1.3.4 Descriptions
The expression
\[\exists\_1\bx\_{\alpha}\bA\_{{o}}\]
stands for
\[[\lambda p\_{{o}\alpha}\exists y\_{\alpha}[p\_{{o}\alpha}y\_{\alpha} \land \forall z\_{\alpha}[p\_{{o}\alpha}z\_{\alpha} \supset z\_{\alpha} = y\_{\alpha}]]]\, [\lambda \bx\_{\alpha}\bA\_{{o}}]\]
For example,
\[\exists\_1 x\_{\alpha}P\_{{o}\alpha}x\_{\alpha}\]
stands for
\[[\lambda p\_{{o}\alpha}\exists y\_{\alpha}[p\_{{o}\alpha}y\_{\alpha} \land \forall z\_{\alpha}[p\_{{o}\alpha}z\_{\alpha} \supset z\_{\alpha} = y\_{\alpha}]]]\, [\lambda x\_{\alpha}P\_{{o}\alpha}x\_{\alpha}]\]
By \(\lambda\)-conversion, this is equivalent to
\[\exists y\_{\alpha}[[\lambda x\_{\alpha}P\_{{o}\alpha}x\_{\alpha}]y\_{\alpha} \land \forall z\_{\alpha}[[\lambda x\_{\alpha}P\_{{o}\alpha}x\_{\alpha}] z\_{\alpha} \supset z\_{\alpha} = y\_{\alpha}]]\]
which reduces by \(\lambda\)-conversion to
\[\exists y\_{\alpha}[P\_{{o}\alpha}y\_{\alpha} \land \forall z\_{\alpha}[P\_{{o}\alpha}z\_{\alpha} \supset z\_{\alpha} = y\_{\alpha}]]\]
This asserts that there is a unique element which has the property
\(P\_{{o}\alpha}\). From this example we can see that in general,
\(\exists\_1\bx\_{\alpha}\bA\_{{o}}\) expresses the assertion that
"there is a unique \(\bx\_{\alpha}\) such that
\(\bA\_{{o}}\)".
When there is a unique such element \(\bx\_{\alpha}\), it is convenient
to have the notation \(\atoi\bx\_{\alpha}\bA\_{{o}}\) to represent the
expression "the \(\bx\_{\alpha}\) such that \(\bA\_{{o}}\)".
Russell showed in Whitehead & Russell 1927b how to provide
contextual definitions for such notations in his formulation of type
theory. In Church's type theory \(\atoi\bx\_{\alpha}\bA\_{{o}}\)
is defined as \(\iota\_{\alpha({o}\alpha)}[\lambda
\bx\_{\alpha}\bA\_{{o}}]\). Thus, \(\atoi\) behaves like a
variable-binding operator, but it is defined in terms of \(\lambda\) with
the aid of the logical constant \(\iota\_{\alpha({o}\alpha)}\). Hence,
\(\lambda\) is still the only variable-binding operator that is
needed.
Since \(\bA\_{{o}}\) describes \(\bx\_{\alpha}\), the constant
\(\iota\_{\alpha({o}\alpha)}\) is called a *description operator*.
Associated with this notation is the following:
##### Axiom of Descriptions:
\[ \exists\_1 x\_{\alpha}[p\_{{o}\alpha}x\_{\alpha}] \supset p\_{{o}\alpha} [\iota\_{\alpha({o}\alpha)}p\_{{o}\alpha}] \tag{\(8^{\alpha}\)} \]
This says that when the set \(p\_{{o}\alpha}\) has a unique member,
then \(\iota\_{\alpha({o}\alpha)}p\_{{o}\alpha}\) is in
\(p\_{{o}\alpha}\), and therefore is that unique member. Thus, this
axiom asserts that \(\iota\_{\alpha({o}\alpha)}\) maps one-element sets
to their unique members.
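Over a finite domain, Axiom \(8^{\alpha}\) constrains \(\iota\) only on one-element sets; its value elsewhere is left open. A Python sketch (an illustration with an arbitrarily chosen default):

```python
domain = [0, 1, 2, 3]

def iota(p):
    # Return the unique member of p if p is a singleton; the value on
    # other sets is arbitrary (the axiom says nothing about them).
    members = [x for x in domain if p(x)]
    return members[0] if len(members) == 1 else domain[0]

# "the x such that x + 1 = 3"
assert iota(lambda x: x + 1 == 3) == 2

# Axiom 8: whenever p has exactly one member, p holds of iota(p).
for target in domain:
    p = lambda x, t=target: x == t
    assert p(iota(p))
```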
If from certain hypotheses one can prove
\[\exists\_1\bx\_{\alpha}\bA\_{{o}}\]
then by using Axiom \(8^{\alpha}\) one can derive
\[[\lambda \bx\_{\alpha}\bA\_{{o}}] [\iota\_{\alpha({o}\alpha)}[\lambda \bx\_{\alpha}\bA\_{{o}}]]\]
which can also be written as
\[[\lambda \bx\_{\alpha}\bA\_{{o}}] {[\atoi\bx\_{\alpha}\bA\_{{o}}]}\]
We illustrate the usefulness of the description operator with a small
example. Suppose we have formalized the theory of real numbers, and
our theory has constants \(1\_{\varrho}\) and \(\times\_{\varrho \varrho
\varrho}\) to represent the number 1 and the multiplication function,
respectively. (Here \(\varrho\) is the type of real numbers.) To
represent the multiplicative inverse function, we can define the wff
\(\textrm{INV}\_{\varrho \varrho}\) as
\[{\lambda z\_{\varrho} \atoi x\_{\varrho} [\times\_{\varrho \varrho \varrho}z\_{\varrho}x\_{\varrho} = 1\_{\varrho}]}\]
Of course, in traditional mathematical notation we would not write the
type symbols, and we would write \(\times\_{\varrho \varrho
\varrho}z\_{\varrho}x\_{\varrho}\) as \(z \times x\) and write
\(\textrm{INV}\_{\varrho \varrho}z\) as \(z^{-1}\). Thus \(z^{-1}\) is
defined to be that *x* such that \(z \times x = 1\). When
\(Z\) is provably not 0, we will be able to prove \(\exists\_1
x\_{\varrho}[\times\_{\varrho \varrho \varrho} Z\_{\varrho} x\_{\varrho} =
1\_{\varrho}]\) and \(Z \times Z^{-1} = 1\), but if we cannot establish
that \(Z\) is not 0, nothing significant about \(Z^{-1}\) will be
provable.
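The behaviour of \(\textrm{INV}\) can be imitated with exact rationals over a small finite carrier, searching for "the \(x\) such that \(z \times x = 1\)" (the carrier and all names in this Python sketch are ours):

```python
from fractions import Fraction

# A small finite set of rationals standing in for the reals.
carrier = {Fraction(n, d) for n in range(-4, 5) for d in range(1, 5)}

def inv(z):
    # the x in the carrier such that z * x = 1
    matches = [x for x in carrier if z * x == 1]
    assert len(matches) == 1, "no unique x with z * x = 1"
    return matches[0]

assert inv(Fraction(2)) == Fraction(1, 2)
assert inv(Fraction(-3, 4)) == Fraction(-4, 3)
```

As in the formal setting, asking for `inv(Fraction(0))` fails: no \(x\) satisfies \(0 \times x = 1\), so the description is undefined.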
#### 1.3.5 Axiom of Choice
The Axiom of Choice can be expressed as follows in Church's type
theory:
\[ \exists x\_{\alpha}p\_{{o}\alpha}x\_{\alpha} \supset p\_{{o}\alpha}[\iota\_{\alpha({o}\alpha)}p\_{{o}\alpha}] \tag{\(9^{\alpha}\)} \]
\((9^{\alpha})\) says that the choice function
\(\iota\_{\alpha({o}\alpha)}\) chooses from every nonempty set
\(p\_{{o}\alpha}\) an element, designated as
\(\iota\_{\alpha({o}\alpha)}p\_{{o}\alpha}\), of that set. When this
form of the Axiom of Choice is included in the list of axioms,
\(\iota\_{\alpha({o}\alpha)}\) is called a selection operator instead
of a description operator, and \(\atoi\bx\_{\alpha} \bA\_{{o}}\) means
"an \(\bx\_{\alpha}\) such that \(\bA\_{{o}}\)" when there
is some such element \(\bx\_{\alpha}\). These selection operators have
the same meaning as Hilbert's \(\epsilon\)-operator (Hilbert
1928). However, we here provide one such operator for each type
\(\alpha\).
It is natural to call \(\atoi\) a definite description operator in
contexts where \(\atoi\bx\_{\alpha}\bA\_{{o}}\) means "the
\(\bx\_{\alpha}\) such that \(\bA\_{{o}}\)", and to call it an
indefinite description operator in contexts where
\(\atoi\bx\_{\alpha}\bA\_{{o}}\) means "an \(\bx\_{\alpha}\) such
that \(\bA\_{{o}}\)".
Clearly the Axiom of Choice implies the Axiom of Descriptions, but
sometimes formulations of type theory are used which include the Axiom
of Descriptions, but not the Axiom of Choice.
Another formulation of the Axiom of Choice simply asserts the
existence of a choice function without explicitly naming it:
\[ \exists j\_{\alpha ({o}\alpha)}\forall p\_{{o}\alpha}[\exists x\_{\alpha}p\_{{o}\alpha}x\_{\alpha} \supset p\_{{o}\alpha}[j\_{\alpha({o}\alpha)}p\_{{o}\alpha}]] \tag{\(\text{AC}^{\alpha}\)} \]
Normally when one assumes the Axiom of Choice in type theory, one
assumes it as an axiom schema, and asserts AC\(^{\alpha}\) for each
type symbol \(\alpha\). A similar remark applies to the axioms for
extensionality and description. However, modern proof systems for
Church's type theory, which are, e.g., based on resolution, do
in fact avoid the addition of such axiom schemata for reasons as
further explained in
Sections 3.4
and
4
below. They work with more constrained, goal-directed proof rules
instead.
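For a finite domain, AC\(^{\alpha}\) can be witnessed concretely: a single function picks a member from every nonempty set. A Python sketch (our own encoding, with `min` as the choice function) enumerates all predicates on a three-element domain:

```python
from itertools import product

domain = [0, 1, 2]

def j(p):
    # A concrete choice function: pick the least member of p.
    return min(x for x in domain if p[x])

# For every predicate p: if p is nonempty, then p holds of j(p).
for bits in product((False, True), repeat=len(domain)):
    p = dict(zip(domain, bits))
    if any(p[x] for x in domain):
        assert p[j(p)]
```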
Before proceeding, we need to introduce some terminology. \(\cQ\_0\) is
an alternative formulation of Church's type theory which will be
described in
Section 1.4
and is equivalent to the system described above using Axioms
(1)-(8). A type symbol is *propositional* if the only
symbols which occur in it are \({o}\) and parentheses.
Yasuhara (1975) defined the relation "\(\ge\)" between
types as the reflexive transitive closure of the minimal relation such
that \((\alpha \beta) \ge \alpha\) and \((\alpha \beta) \ge \beta\).
He established that:
* If \(\alpha \ge \beta\), then \(\cQ\_0 \vdash\) AC\(^{\alpha}
\supset\) AC\(^{\beta}\).
* Given a set *S* of types, none of which is propositional,
there is a model of \(\cQ\_0\) in which AC\(^{\alpha}\) fails if and
only if \(\alpha \ge \beta\) for some \(\beta\) in *S*.
The existence of a choice function for a "higher" type
thus entails the existence of choice functions for "lower"
types, though the converse does not hold in general.
Büchi (1953) has shown that while the schemas expressing the
Axiom of Choice and Zorn's Lemma can be derived from each other,
the relationships between the particular types involved are
complex.
#### 1.3.6 Axioms of Infinity
One can define the natural numbers (and therefore other basic
mathematical structures such as the real and complex numbers) in type
theory, but to prove that they have the required properties (such as
Peano's Postulates), one needs an Axiom of Infinity. There are
many viable possibilities for such an axiom, such as those discussed
in Church 1940, section 57 of Church 1956, and section 60 of Andrews
2002.
### 1.4 A Formulation Based on Equality
In
Section 1.2.1,
\(\nsim\_{({o}{o})}, \lor\_{(({o}{o}){o})}\), and the
\(\Pi\_{({o}({o}\alpha))}\)'s were taken as primitive constants,
and the wffs \(\sfQ\_{{o}\alpha \alpha}\) which denote equality
relations at type \(\alpha\) were defined in terms of these. We now
present an alternative formulation \(\cQ\_0\) of Church's type
theory in which there are primitive constants \(\sfQ\_{{o}\alpha
\alpha}\) denoting equality, and \(\nsim\_{({o}{o})},
\lor\_{(({o}{o}){o})}\), and the \(\Pi\_{({o}({o}\alpha))}\)'s are
defined in terms of the \(\sfQ\_{{o}\alpha \alpha}\)'s.
Tarski (1923) noted that in the context of higher-order logic, one can
define propositional connectives in terms of logical equivalence and
quantifiers. Quine (1956) showed how both quantifiers and connectives
can be defined in terms of equality and the abstraction operator
\(\lambda\) in the context of Church's type theory. Henkin (1963)
rediscovered these definitions, and developed a formulation of
Church's type theory based on equality in which he restricted
attention to propositional types. Andrews (1963) simplified the axioms
for this system.
\(\cQ\_0\) is based on these ideas, and can be shown to be equivalent
to a formulation of Church's type theory using Axioms
(1)-(8) of the preceding sections. This section thus provides an
alternative to the material in the preceding Sections
1.2.1-1.3.4. More details about \(\cQ\_0\) can be found in
Andrews 2002.
#### 1.4.1 Definitions
* Type symbols, improper symbols, and variables of \(\cQ\_0\) are
defined as in
Section 1.2.1.
* The logical constants of \(\cQ\_0\) are
\(\sfQ\_{(({o}\alpha)\alpha)}\) and \(\iota\_{(\imath({o}\imath))}\)
(for each type symbol \(\alpha\)).
* Wffs of \(\cQ\_0\) are defined as in
Section 1.2.1.
##### Abbreviations:
* \(\bA\_{\alpha} = \bB\_{\alpha}\) stands for \(\sfQ\_{{o}\alpha
\alpha}\bA\_{\alpha}\bB\_{\alpha}\)
* \(\bA\_{{o}} \equiv \bB\_{{o}}\) stands for
\(\sfQ\_{{o}{o}{o}}\bA\_{{o}}\bB\_{{o}}\)
* \(T\_{{o}}\) stands for \(\sfQ\_{{o}{o}{o}} =
\sfQ\_{{o}{o}{o}}\)
* \(F\_{{o}}\) stands for \([\lambda x\_{{o}}T\_{{o}}] = [\lambda
x\_{{o}}x\_{{o}}]\)
* \(\Pi\_{{o}({o}\alpha)}\) stands for
\(\sfQ\_{{o}({o}\alpha)({o}\alpha)}[\lambda x\_{\alpha}T\_{{o}}]\)
* \(\forall \bx\_{\alpha}\bA\) stands for
\(\Pi\_{{o}({o}\alpha)}[\lambda \bx\_{\alpha}\bA]\)
* \(\land\_{{o}{o}{o}}\) stands for \(\lambda x\_{{o}}\lambda
y\_{{o}}[[\lambda g\_{{o}{o}{o}}[g\_{{o}{o}{o}}T\_{{o}}T\_{{o}}]] =
[\lambda g\_{{o}{o}{o}}[g\_{{o}{o}{o}}x\_{{o}}y\_{{o}}]]]\)
* \(\bA\_{{o}} \land \bB\_{{o}}\) stands for \(\land\_{{o}{o}{o}}
\bA\_{{o}} \bB\_{{o}}\)
* \(\nsim\_{{o}{o}}\) stands for \(\sfQ\_{{o}{o}{o}}F\_{{o}}\)
\(T\_{{o}}\) denotes truth. The meaning of \(\Pi\_{{o}({o}\alpha)}\) was
discussed in
Section 1.1.
To see that this definition of \(\Pi\_{{o}({o}\alpha)}\) is
appropriate, note that \(\lambda x\_{\alpha}T\) denotes the set of all
elements of type \(\alpha\), and that
\(\Pi\_{{o}({o}\alpha)}s\_{{o}\alpha}\) stands for
\(\sfQ\_{{o}({o}\alpha)({o}\alpha)}[\lambda x\_{\alpha}T]
s\_{{o}\alpha}\), i.e., for \([\lambda x\_{\alpha}T] =
s\_{{o}\alpha}\). Therefore \(\Pi\_{{o}({o}\alpha)}s\_{{o}\alpha}\)
asserts that \(s\_{{o}\alpha}\) is the set of all elements of type
\(\alpha\), so \(s\_{{o}\alpha}\) contains all elements of type \(\alpha\).
It can be seen that \(F\_{{o}}\) can also be written as \(\forall
x\_{{o}}x\_{{o}}\), which asserts that everything is true. This is
false, so \(F\_{{o}}\) denotes falsehood. The expression \(\lambda
g\_{{o}{o}{o}}[g\_{{o}{o}{o}}x\_{{o}}y\_{{o}}]\) can be used to represent
the ordered pair \(\langle x\_{{o}},y\_{{o}}\rangle\), and the
conjunction \(x\_{{o}} \land y\_{{o}}\) is true iff \(x\_{{o}}\) and
\(y\_{{o}}\) are both true, i.e., iff \(\langle T\_{{o}},T\_{{o}}\rangle
= \langle x\_{{o}},y\_{{o}}\rangle\). Hence \(x\_{{o}} \land y\_{{o}}\)
can be expressed by the formula \([\lambda
g\_{{o}{o}{o}}[g\_{{o}{o}{o}}T\_{{o}}T\_{{o}}]] = [\lambda
g\_{{o}{o}{o}}[g\_{{o}{o}{o}}x\_{{o}}y\_{{o}}]]\).
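This truth-table argument can be checked mechanically. The following sketch (an illustration only, not part of the formal system) enumerates all sixteen binary Boolean functions \(g\) and verifies that the two pair encodings coincide exactly when both components are true:

```python
from itertools import product

T, F = True, False
# All 16 functions g : {T,F} x {T,F} -> {T,F}, each given by its table
# of values on the four possible argument pairs.
args = list(product([T, F], repeat=2))
all_g = [dict(zip(args, vals)) for vals in product([T, F], repeat=4)]

def pair(x, y):
    """The encoding  lambda g. g x y  of the ordered pair <x, y>,
    represented extensionally by its value on every g."""
    return tuple(g[(x, y)] for g in all_g)

for x, y in product([T, F], repeat=2):
    # <T,T> = <x,y> holds exactly when x and y are both true
    assert (pair(T, T) == pair(x, y)) == (x and y)
```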
Other propositional connectives, such as \(\lor, \supset\), and the
existential quantifier \(\exists\), are easily defined. By using
\(\iota\_{(\imath({o}\imath))}\), one can define description operators
\(\iota\_{\alpha({o}\alpha)}\) for all types \(\alpha\).
#### 1.4.2 Axioms and Rules of Inference
\(\cQ\_0\) has a single rule of inference.
>
> **Rule R:** From \(\bC\) and \(\bA\_{\alpha} =
> \bB\_{\alpha}\), to infer the result of replacing one occurrence of
> \(\bA\_{\alpha}\) in \(\bC\) by an occurrence of \(\bB\_{\alpha}\),
> provided that the occurrence of \(\bA\_{\alpha}\) in \(\bC\) is not (an
> occurrence of a variable) immediately preceded by \(\lambda\).
The *axioms for \(\cQ\_0\)* are the following:
\[\begin{align}
[g\_{{o}{o}}T\_{{o}} \land g\_{{o}{o}}F\_{{o}}] &= \forall x\_{{o}}[g\_{{o}{o}}x\_{{o}}] \tag{Q1}\\
[x\_{\alpha} = y\_{\alpha}] & \supset [h\_{{o}\alpha}x\_{\alpha} = h\_{{o}\alpha}y\_{\alpha}] \tag{Q\(2^{\alpha}\)}\\
[f\_{\alpha \beta} = g\_{\alpha \beta}] & = \forall x\_{\beta}[f\_{\alpha \beta}x\_{\beta} = g\_{\alpha \beta}x\_{\beta}] \tag{Q\(3^{\alpha \beta}\)}\\
[\lambda \bx\_{\alpha}\bB\_{\beta}]\bA\_{\alpha} & = \textsf{SubFree}(\bA\_{\alpha},\bx\_{\alpha},\bB\_{\beta}), \tag{Q4}\\
&\quad \text{ provided } \bA\_{\alpha} \text{ is free for } \bx \text{ in } \bB\_{\beta}\\
\iota\_{\imath({o}\imath)}[\sfQ\_{{o}\imath\imath}y\_{\imath}] &= y\_{\imath} \tag{Q5}
\end{align}\]
The additional condition in axiom \((\rQ4)\) ensures that the
substitution does not lead to free variables of \(\bA\_{\alpha}\)
becoming bound in the result of the substitution.
## 2. Semantics
It is natural to compare the semantics of type theory with the
semantics of first-order logic, where the theorems are precisely the
wffs which are valid in all interpretations. From an intuitive point
of view, the natural interpretations of type theory are *standard
models*, which are defined below. However, it is a consequence of
Gödel's Incompleteness Theorem (Gödel 1931) that
axioms (1)-(9) do not suffice to derive all wffs which are valid
in all standard models, and there is no consistent recursively
axiomatized extension of these axioms which suffices for this purpose.
Nevertheless, experience shows that these axioms are sufficient for
most purposes, and Leon Henkin considered the problem of clarifying in
what sense they are complete. The definitions and theorem below
constitute Henkin's (1950) solution to this problem, which is
often referred to as *general semantics* or *Henkin
semantics*.
A *frame* is a collection \(\{\cD\_{\alpha}\}\_{\alpha}\) of
nonempty domains (sets) \(\cD\_{\alpha}\), one for each type symbol
\(\alpha\), such that \(\cD\_{{o}} = \{\sfT,\sfF\}\) (where \(\sfT\)
represents truth and \(\sfF\) represents falsehood), and \(\cD\_{\alpha
\beta}\) is some collection of functions mapping \(\cD\_{\beta}\) into
\(\cD\_{\alpha}\). The members of \(\cD\_{\imath}\) are called
*individuals*.
An *interpretation* \(\langle \{\cD\_{\alpha}\}\_{\alpha},
\frI\rangle\) consists of a frame and a function \(\frI\) which maps
each constant *C* of type \(\alpha\) to an appropriate element of
\(\cD\_{\alpha}\), which is called the *denotation of*
*C*. The logical constants are given their standard
denotations.
An *assignment* of values in the frame
\(\{\cD\_{\alpha}\}\_{\alpha}\) to variables is a function \(\phi\) such
that \(\phi \bx\_{\alpha} \in \cD\_{\alpha}\) for each variable
\(\bx\_{\alpha}\). (Notation: The assignment \(\phi[a/x]\) maps
variable *x* to value *a* and agrees with \(\phi\) on all
variables other than *x*.)
An interpretation \(\cM = \langle \{\cD\_{\alpha}\}\_{\alpha},
\frI\rangle\) is a *general model* (aka *Henkin model*)
iff there is a binary function \(\cV\) such that
\(\cV\_{\phi}\bA\_{\alpha} \in \cD\_{\alpha}\) for each assignment
\(\phi\) and wff \(\bA\_{\alpha}\), and the following conditions are
satisfied for all assignments and all wffs:
* \(\cV\_{\phi}\bx\_{\alpha} = \phi \bx\_{\alpha}\) for each variable
\(\bx\_{\alpha}\).
* \(\cV\_{\phi}A\_{\alpha} = \frI A\_{\alpha}\) if \(A\_{\alpha}\) is a
primitive constant.
* \(\cV\_{\phi}[\bA\_{\alpha \beta}\bB\_{\beta}] =
(\cV\_{\phi}\bA\_{\alpha \beta})(\cV\_{\phi}\bB\_{\beta})\) (the value of
a function \(\cV\_{\phi}\bA\_{\alpha \beta}\) at the argument
\(\cV\_{\phi}\bB\_{\beta})\).
* \(\cV\_{\phi}[\lambda \bx\_{\alpha}\bB\_{\beta}] =\) that function
from \(\cD\_{\alpha}\) into \(\cD\_{\beta}\) whose value for each
argument \(z \in \cD\_{\alpha}\) is \(\cV\_{\psi}\bB\_{\beta}\), where
\(\psi\) is that assignment such that \(\psi \bx\_{\alpha} = z\) and
\(\psi \by\_{\beta} = \phi \by\_{\beta}\) if \(\by\_{\beta} \ne
\bx\_{\alpha}\).
If an interpretation \(\cM\) is a general model, the function \(\cV\)
is uniquely determined. \(\cV\_{\phi}\bA\_{\alpha}\) is called the
*value* of \(\bA\_{\alpha}\) in \(\cM\) with respect to
\(\phi\).
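The four clauses above amount to a recursive evaluator, and in a standard model with finitely many individuals the domains can even be enumerated. The following sketch (names and representations are ours; `fun` encodes a function extensionally as a sorted tuple of argument/value pairs) illustrates this:

```python
from itertools import product

def fun(mapping):
    """Encode a function extensionally as a hashable sorted tuple."""
    return tuple(sorted(mapping.items(), key=repr))

def apply_fn(f, x):
    return dict(f)[x]

def domain(ty, D_i):
    """Enumerate D_ty in the STANDARD model over individuals D_i.
    A type is 'o', 'i', or a pair (alpha, beta) for type (alpha beta)."""
    if ty == 'o':
        return [True, False]
    if ty == 'i':
        return list(D_i)
    alpha, beta = ty
    Da, Db = domain(alpha, D_i), domain(beta, D_i)
    # D_{alpha beta} contains *all* functions from D_beta into D_alpha
    return [fun(dict(zip(Db, vals))) for vals in product(Da, repeat=len(Db))]

def value(wff, phi, D_i):
    """V_phi(wff), following the four clauses for general models."""
    kind = wff[0]
    if kind == 'var':                 # V_phi(x) = phi(x)
        return phi[wff[1]]
    if kind == 'app':                 # V_phi(A B) = V_phi(A)(V_phi(B))
        return apply_fn(value(wff[1], phi, D_i), value(wff[2], phi, D_i))
    if kind == 'lam':                 # V_phi(lambda x_alpha. B)
        _, (x, alpha), body = wff
        return fun({z: value(body, {**phi, x: z}, D_i)
                    for z in domain(alpha, D_i)})

# V(lambda x_i. x) is the identity function on individuals:
D_i = [0, 1, 2]
ident = value(('lam', ('x', 'i'), ('var', 'x')), {}, D_i)
assert all(apply_fn(ident, z) == z for z in D_i)
```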
One can easily show that the following statements hold in all general
models \(\cM\) for all assignments \(\phi\) and all wffs \(\bA\) and
\(\bB\):
* \(\cV\_{\phi} T\_{{o}} = \sfT\) and \(\cV\_{\phi} F\_{{o}} =
\sfF\)
* \(\cV\_{\phi} [\nsim\_{{o}{o}} \bA\_{{o}}] = \sfT\) iff \(\cV\_{\phi}
\bA\_{{o}} = \sfF\)
* \(\cV\_{\phi} [ \bA\_{{o}} \lor \bB\_{{o}} ] = \sfT\) iff
\(\cV\_{\phi} \bA\_{{o}} = \sfT\) or \(\cV\_{\phi} \bB\_{{o}} =
\sfT\)
* \(\cV\_{\phi} [ \bA\_{{o}} \land \bB\_{{o}} ] = \sfT\) iff
\(\cV\_{\phi} \bA\_{{o}} = \sfT\) and \(\cV\_{\phi} \bB\_{{o}} =
\sfT\)
* \(\cV\_{\phi} [ \bA\_{{o}} \supset \bB\_{{o}} ] = \sfT\) iff
\(\cV\_{\phi} \bA\_{{o}} = \sfF\) or \(\cV\_{\phi} \bB\_{{o}} =
\sfT\)
* \(\cV\_{\phi} [ \bA\_{{o}} \equiv \bB\_{{o}} ] = \sfT\) iff
\(\cV\_{\phi} \bA\_{{o}} = \cV\_{\phi} \bB\_{{o}}\)
* \(\cV\_{\phi} [\forall \bx\_{\alpha}\bA] = \sfT\) iff
\(\cV\_{\phi[a/x]} \bA= \sfT\) for all \(a \in \cD\_{\alpha}\)
* \(\cV\_{\phi} [\exists \bx\_{\alpha}\bA] = \sfT\) iff there exists
an \(a \in \cD\_{\alpha}\) such that \(\cV\_{\phi[a/x]} \bA= \sfT\)
The semantics of general models is thus as expected. However, there is
a subtlety to note regarding the following condition for arbitrary
types \(\alpha\):
* [equality] \(\cV\_{\phi} [ \bA\_{\alpha} = \bB\_{\alpha} ] = \sfT\)
iff \(\cV\_{\phi} \bA\_{\alpha} = \cV\_{\phi} \bB\_{\alpha}\)
When the definitions of
Section 1.2.1
are employed, where equality has been defined in terms of
Leibniz' principle, then this statement is not implied for all
types \(\alpha\). It only holds if we additionally require that the
domains \(\cD\_{{o}\alpha}\) contain all the unit sets of objects of
type \(\alpha\), or, alternatively, that the domains
\(\cD\_{{o}\alpha\alpha}\) contain the respective identity relations on
objects of type \(\alpha\) (which entails the former). The need for this
additional requirement, which is not included in the original work of
Henkin (1950), has been demonstrated in Andrews 1972a.
When instead the alternative definitions of
Section 1.4
are employed, then this requirement is obviously met due to the
presence of the logical constants \(\sfQ\_{{o}\alpha \alpha}\) in the
signature, which by definition denote the respective identity
relations on the objects of type a and therefore trivially
ensure their existence in each general model \(\cM\). It is therefore
a natural option to always assume primitive equality constants (for
each type a) in a concrete choice of base system for
Church's type theory, just as realized in Andrews' system
\(\cQ\_0\).
An interpretation \(\langle \{\cD\_{\alpha}\}\_{\alpha}, \frI\rangle\)
is a *standard model* iff for all \(\alpha\) and \(\beta\),
\(\cD\_{\alpha \beta}\) is the set of all functions from \(\cD\_{\beta}\)
into \(\cD\_{\alpha}\). Clearly a standard model is a general
model.
We say that a wff \(\bA\) is *valid* in a model \(\cM\) iff
\(\cV\_{\phi}\bA = \sfT\) for every assignment \(\phi\) into \(\cM\). A
model for a set \(\cH\) of wffs is a model in which each wff of
\(\cH\) is valid.
A wff \(\bA\) is *valid in the general* [*standard*]
*sense* iff \(\bA\) is valid in every general [standard] model.
Clearly a wff which is valid in the general sense is valid in the
standard sense, but the converse of this statement is false.
**Completeness and Soundness Theorem (Henkin 1950):** A
wff is a theorem if and only if it is valid in the general sense.
Not all frames belong to interpretations, and not all interpretations
are general models. In order to be a general model, an interpretation
must have a frame satisfying certain closure conditions which are
discussed further in Andrews 1972b. Basically, in a general model
every wff must have a value with respect to each assignment.
A model is said to be *finite* iff its domain of individuals is
finite. Every finite model for \(\cQ\_0\) is standard (Andrews 2002,
Theorem 5404), but every set of sentences of \(\cQ\_0\) which has
infinite models also has nonstandard models (Andrews 2002, Theorem
5506).
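The finiteness of such models follows from the recursion \(|\cD\_{\alpha\beta}| = |\cD\_{\alpha}|^{|\cD\_{\beta}|}\): with finitely many individuals, every domain of the standard model is finite. A few lines of code (an illustration only; the representation of types is ours) make the recursion vivid:

```python
def size(ty, n):
    """|D_ty| in a standard model with n individuals; for the function
    type (alpha beta), |D_{alpha beta}| = |D_alpha| ** |D_beta|."""
    if ty == 'o':
        return 2
    if ty == 'i':
        return n
    alpha, beta = ty
    return size(alpha, n) ** size(beta, n)

oi = ('o', 'i')                      # sets of individuals
assert size(oi, 3) == 8              # 2**3 subsets of a 3-element D_i
assert size((oi, oi), 3) == 8 ** 8   # functions from sets to sets
```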
An understanding of the distinction between standard and nonstandard
models can clarify many phenomena. For example, it can be shown that
there is a model \(\cM = \langle \{\cD\_{\alpha}\}\_{\alpha},
\frI\rangle\) in which \(\cD\_{\imath}\) is infinite, and all the
domains \(\cD\_{\alpha}\) are countable. Thus \(\cD\_{\imath}\) and
\(\cD\_{{o}\imath}\) are both countably infinite, so there must be a
bijection *h* between them. However, Cantor's Theorem
(which is provable in type theory and therefore valid in all models)
says that \(\cD\_{\imath}\) has more subsets than members. This
seemingly paradoxical situation is called Skolem's Paradox. It
can be resolved by looking carefully at Cantor's Theorem, i.e.,
\(\nsim \exists g\_{{o}\imath\imath}\forall f\_{{o}\imath}\exists
j\_{\imath}[g\_{{o}\imath\imath}j\_{\imath} = f\_{{o}\imath}]\), and
considering what it means in a model. The theorem says that there is
no function \(g \in \cD\_{{o}\imath\imath}\) from \(\cD\_{\imath}\) into
\(\cD\_{{o}\imath}\) which has every set \(f\_{{o}\imath} \in
\cD\_{{o}\imath}\) in its range. The usual interpretation of the
statement is that \(\cD\_{{o}\imath}\) is bigger (in cardinality) than
\(\cD\_{\imath}\). However, what it actually means in this model is
that *h* cannot be in \(\cD\_{{o}\imath\imath}\). Of course,
\(\cM\) must be nonstandard.
While the Axiom of Choice is presumably true in all standard models,
there is a nonstandard model for \(\cQ\_0\) in which AC\(^{\imath}\) is
false (Andrews 1972b). Thus, AC\(^{\imath}\) is not provable in
\(\cQ\_0\).
Thus far, investigations of model theory for Church's type
theory have been far less extensive than for first-order logic.
Nevertheless, there has been some work on methods of constructing
nonstandard models of type theory and models in which various forms of
extensionality fail, models for theories with arbitrary (possibly
incomplete) sets of logical constants, and on developing general
methods of establishing completeness of various systems of axioms with
respect to various classes of models. Relevant papers include Andrews
1971, 1972a,b, and Henkin 1975. Further related work can be found in
Benzmüller et al. 2004, Brown 2004, 2007, and Muskens 2007.
## 3. Metatheory
### 3.1 Lambda-Conversion
The first three rules of inference in
Section 1.3.1
are called rules of \(\lambda\)-*conversion*. If \(\bD\) and
\(\bE\) are wffs, we write \(\bD \conv \bE\) to indicate that \(\bD\)
can be converted to \(\bE\) by applications of these rules. This is an
equivalence relation between wffs. A wff \(\bD\) is in
\(\beta\)-*normal form* iff it has no well-formed parts of the
form \([[\lambda \bx\_{\alpha}\bB\_{\beta}]\bA\_{\alpha}]\). Every wff is
convertible to one in \(\beta\)-normal form. Indeed, every sequence of
contractions (applications of rule 2, combined as necessary with
alphabetic changes of bound variables) of a wff is finite; obviously,
if such a sequence cannot be extended, it terminates with a wff in
\(\beta\)-normal form. (This is called the strong normalization theorem.)
By the Church-Rosser Theorem, this wff in \(\beta\)-normal form is unique
modulo alphabetic changes of bound variables. For each wff \(\bA\) we
denote by \({\downarrow}\bA\) the first wff (in some enumeration) in
\(\beta\)-normal form such that \(\bA \conv {\downarrow} \bA\). Then
\(\bD \conv \bE\) if and only if \({\downarrow} \bD = {\downarrow}
\bE\).
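Repeatedly contracting the leftmost-outermost redex is one concrete normalization strategy. The sketch below (our own illustration; it uses de Bruijn indices to sidestep alphabetic changes of bound variables, and terminates on simply-typable terms by the strong normalization theorem, though not on arbitrary untyped terms):

```python
# Terms: ('var', n) with de Bruijn index n, ('app', f, a), ('lam', body).

def shift(t, d, c=0):
    """Add d to every free index >= cutoff c."""
    k = t[0]
    if k == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if k == 'app':
        return ('app', shift(t[1], d, c), shift(t[2], d, c))
    return ('lam', shift(t[1], d, c + 1))

def subst(t, s, j=0):
    """Replace index j by term s in t."""
    k = t[0]
    if k == 'var':
        return s if t[1] == j else t
    if k == 'app':
        return ('app', subst(t[1], s, j), subst(t[2], s, j))
    return ('lam', subst(t[1], shift(s, 1), j + 1))

def beta_normalize(t):
    """Contract leftmost-outermost redexes [lambda.B]A until none remain."""
    k = t[0]
    if k == 'app':
        f = beta_normalize(t[1])
        if f[0] == 'lam':
            # one contraction: [lambda.B]A ~> B with index 0 replaced by A
            return beta_normalize(shift(subst(f[1], shift(t[2], 1)), -1))
        return ('app', f, beta_normalize(t[2]))
    if k == 'lam':
        return ('lam', beta_normalize(t[1]))
    return t

I = ('lam', ('var', 0))            # identity
K = ('lam', ('lam', ('var', 1)))   # first projection
assert beta_normalize(('app', ('app', K, I), K)) == I
```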
By using the Axiom of Extensionality one can obtain the following
derived rule of inference:
>
>
> \(\eta\)-*Contraction*. Replace a well-formed part \([\lambda
> \by\_{\beta}[\bB\_{\alpha \beta}\by\_{\beta}]]\) of a wff by
> \(\bB\_{\alpha \beta}\), provided \(\by\_{\beta}\) does not occur free
> in \(\bB\_{\alpha \beta}\).
>
>
>
This rule and its inverse (which is called
\(\eta\)-*Expansion*) are sometimes used as additional rules of
\(\lambda\)-conversion. See Church 1941, Stenlund 1972, Barendregt 1984,
and Barendregt et al. 2013 for more information about
\(\lambda\)-conversion.
It is worth mentioning (again) that \(\lambda\)-abstraction replaces the
need for comprehension axioms in Church's type theory.
### 3.2 Higher-Order Unification
The challenges in higher-order unification are outlined only very
briefly here. More details on the topic are given in Dowek 2001; its
utilization in higher-order theorem provers is also discussed in
Benzmüller & Miller 2014.
**Definition.** A *higher-order unifier* for a
pair \(\langle \bA,\bB\rangle\) of wffs is a substitution \(\theta\)
for free occurrences of variables such that \(\theta \bA\) and
\(\theta \bB\) have the same \(\beta\)-normal form. A higher-order
unifier for a set of pairs of wffs is a unifier for each of the pairs
in the set.
Higher-order unification differs from first-order unification (Baader
& Snyder 2001) in a number of important respects. In
particular:
1. Even when a unifier for a pair of wffs exists, there may be no
most general unifier (Gould 1966).
2. Higher-order unification is undecidable (Huet 1973b), even in the
"second-order" case (Goldfarb 1981).
However, an algorithm has been devised (Huet 1975, Jensen &
Pietrzykowski 1976), called *pre-unification*, which will find
a unifier for a set of pairs of wffs if one exists. The pre-unifiers
computed by Huet's procedure are substitutions that can reduce
the original unification problem to one involving only so-called
*flex-flex* unification pairs. Flex-flex pairs have variable
head symbols in both terms to be unified and they are known to always
have a solution. The concrete computation of these solutions can thus
be postponed or omitted. Pre-unification is utilized in all the
resolution based theorem provers mentioned in
Section 4.
*Pattern unification* refers to a small subset of unification
problems, first studied by Miller 1991, whose identification has been
important for the construction of practical systems. In a pattern
unification problem every occurrence of an existentially quantified
variable is applied to a list of arguments that are all distinct
variables bound by either a \(\lambda\)-binder or a universal quantifier
in the scope of the existential quantifier. Thus, existentially
quantified variables cannot be applied to general terms, but only to a
very restricted set of bound variables. Pattern unification, like
first-order unification, is decidable, and most general unifiers exist
for solvable problems. This is why pattern unification is preferably
employed (when applicable) in some state-of-the-art theorem provers
for Church's type theory.
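Whether a term satisfies the pattern condition is a simple syntactic check. The sketch below (our own illustration; it uses named variables, ignores types, and treats only \(\lambda\)-bound variables, not universal quantifiers) tests that every occurrence of a metavariable is applied only to distinct bound variables:

```python
# Terms: ('var', name), ('app', f, a), ('lam', name, body).

def spine(t):
    """Decompose nested applications into (head, [arg1, ..., argn])."""
    args = []
    while t[0] == 'app':
        args.insert(0, t[2])
        t = t[1]
    return t, args

def is_pattern(t, metas, bound=frozenset()):
    """Miller's pattern condition: every occurrence of a metavariable is
    applied only to a list of distinct lambda-bound variables."""
    if t[0] == 'lam':
        return is_pattern(t[2], metas, bound | {t[1]})
    head, args = spine(t)
    if head[0] == 'var' and head[1] in metas:
        names = [a[1] for a in args if a[0] == 'var' and a[1] in bound]
        return len(names) == len(args) == len(set(names))
    return (all(is_pattern(a, metas, bound) for a in args)
            and (head[0] != 'lam' or is_pattern(head, metas, bound)))

# lambda x. lambda y. F x y  is a pattern;  lambda x. F x x  is not.
good = ('lam', 'x', ('lam', 'y',
        ('app', ('app', ('var', 'F'), ('var', 'x')), ('var', 'y'))))
bad = ('lam', 'x',
       ('app', ('app', ('var', 'F'), ('var', 'x')), ('var', 'x')))
assert is_pattern(good, {'F'}) and not is_pattern(bad, {'F'})
```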
### 3.3 A Unifying Principle
The *Unifying Principle* was introduced in Smullyan 1963 (see
also Smullyan 1995) as a tool for deriving a number of basic
metatheorems about first-order logic in a uniform way. The principle
was extended to elementary type theory by Andrews (1971) and to
extensional type theory, that is, Henkin's general semantics
without description or choice, by Benzmüller, Brown, and Kohlhase
(2004). We outline these extensions in some more detail below.
#### 3.3.1 Elementary Type Theory
The Unifying Principle was extended to elementary type theory (the
system \(\cT\) of
Section 1.3.2)
in Andrews 1971 by applying ideas in Takahashi 1967. This Unifying
Principle for \(\cT\) has been used to establish cut-elimination for
\(\cT\) in Andrews 1971 and completeness proofs for various systems of
type theory in Huet 1973a, Kohlhase 1995, and Miller 1983. We first
give a definition and then state the principle.
**Definition.** A property \(\Gamma\) of finite sets of
wffs\(\_{{o}}\) is an *abstract consistency property* iff for
all finite sets \(\cS\) of wffs\(\_{{o}}\), the following properties
hold (for all wffs **A, B**):
1. If \(\Gamma(\cS)\), then there is no atom \(\bA\) such that \(\bA
\in \cS\) and \([\nsim \bA] \in \cS\).
2. If \(\Gamma(\cS \cup \{\bA\})\), then \(\Gamma(\cS \cup
\{{\downarrow} \bA\})\).
3. If \(\Gamma(\cS \cup \{\nsim \nsim \bA\})\), then \(\Gamma(\cS
\cup \{\bA\})\).
4. If \(\Gamma(\cS \cup \{\bA \lor \bB\})\), then \(\Gamma(\cS \cup
\{\bA\})\) or \(\Gamma(\cS \cup \{\bB\})\).
5. If \(\Gamma(\cS \cup \{\nsim[\bA \lor \bB]\})\), then \(\Gamma(\cS
\cup \{\nsim \bA,\nsim \bB\})\).
6. If \(\Gamma(\cS \cup \{\Pi\_{{o}({o}\alpha)}\bA\_{{o}\alpha}\})\),
then \(\Gamma(\cS \cup \{\Pi\_{{o}({o}\alpha)}\bA\_{{o}\alpha},
\bA\_{{o}\alpha}\bB\_{\alpha}\})\) for each wff \(\bB\_{\alpha}\).
7. If \(\Gamma(\cS \cup \{\nsim
\Pi\_{{o}({o}\alpha)}\bA\_{{o}\alpha}\})\), then \(\Gamma(\cS \cup
\{\nsim \bA\_{{o}\alpha}\bc\_{\alpha}\})\), for any variable or
parameter \(\bc\_{\alpha}\) which does not occur free in
\(\bA\_{{o}\alpha}\) or any wff in \(\cS\).
Note that *consistency* is an abstract consistency
property.
**Unifying Principle for \(\cT\).** If \(\Gamma\) is an
abstract consistency property and \(\Gamma(\cS)\), then \(\cS\) is
consistent in \(\cT\).
Here is a typical application of the Unifying Principle. Suppose there
is a procedure \(\cM\) which can be used to refute sets of sentences,
and we wish to show it is complete for \(\cT\). For any set of
sentences, let \(\Gamma(\cS)\) mean that \(\cS\) is not refutable by
\(\cM\), and show that \(\Gamma\) is an abstract consistency property.
Now suppose that \(\bA\) is a theorem of \(\cT\). Then \(\{\nsim
\bA\}\) is inconsistent in \(\cT\), so by the Unifying Principle not
\(\Gamma(\{\nsim \bA\})\), so \(\{\nsim \bA\}\) is refutable by
\(\cM\).
#### 3.3.2 Extensional Type Theory
Extensions of the above Unifying Principle towards Church's type
theory with general semantics have been studied since the mid-1990s. A
primary motivation was to support (refutational) completeness
investigations for the proof calculi underlying the emerging
higher-order automated theorem provers (see
Section 4
below). The initial interest was in a fragment of Church's type
theory, called *extensional type theory*, that includes the
extensionality axioms, but excludes \(\iota\_{(\alpha({o}\alpha))}\)
and the axioms for it (description and choice were largely neglected
in the automated theorem provers at the time). Analogous to before, a
distinction has been made between extensional type theory with
*defined equality* (as in
Section 1.2.1,
where equality is defined via Leibniz' principle) and
extensional type theory with *primitive equality* (e.g., system
\(\cQ\_0\) as in
Section 1.4,
or, alternatively, a system based on logical constants
\(\nsim\_{({o}{o})}, \lor\_{(({o}{o}){o})}\), and the
\(\Pi\_{({o}({o}\alpha))}\)'s as in
Section 1.2.1,
but with additional primitive logical constants
\(=\_{{o}\alpha\alpha}\) added).
A first attempt towards a Unifying Principle for extensional type
theory with primitive equality is presented in Kohlhase 1993. The
conditions given there, which are still
incomplete[1],
were subsequently modified and complemented as follows:
* To obtain a Unifying Principle for extensional type theory with
defined equality, Benzmüller & Kohlhase 1997 added the
following conditions for Boolean extensionality, functional
extensionality, and saturation to the above conditions 1.-7. for
\(\cT\) (their presentation has been adapted here; for technical
reasons, they also employ a slightly stronger variant of condition 2.
based on \(\beta\)-conversion rather than \(\beta\)-normalization):
8. If \(\Gamma(\cS \cup \{ \bA\_{{o}} = \bB\_{{o}} \})\), then
\(\Gamma(\cS \cup \{ \bA\_{{o}}, \bB\_{{o}} \})\) or \(\Gamma(\cS \cup
\{ \nsim \bA\_{{o}}, \nsim \bB\_{{o}} \})\)
9. If \(\Gamma(\cS \cup \{ \bA\_{\alpha\beta} = \bB\_{\alpha\beta}
\})\), then \(\Gamma(\cS \cup \{ \bA\_{\alpha\beta} \bc\_\beta =
\bB\_{\alpha\beta} \bc\_\beta \})\) for any parameter \(\bc\_{\beta}\)
which does not occur free in \(\cS\).
10. \(\Gamma(\cS \cup \{ \bA\_{{o}} \})\) or \(\Gamma(\cS \cup \{ \nsim
\bA\_{{o}} \})\)
The saturation condition 10. was required to properly establish the
principle. However, since this condition is related to the proof
theoretic notion of cut-elimination, it limits the utility of the
principle in completeness proofs for machine-oriented calculi. The
principle was nevertheless used in Benzmüller & Kohlhase
1998a and Benzmüller 1999a,b to obtain a completeness proof for a
system of extensional higher-order resolution. The principle was also
applied in Kohlhase 1998 to study completeness for a related
extensional higher-order tableau
calculus,[2]
in which the extensionality rules for Leibniz equality were adapted
from Benzmüller & Kohlhase 1998a and Benzmüller 1997,
respectively.
* Different options for achieving a Unifying Principle for extensional
type theory with primitive equality are presented in Benzmüller
1999a (in this work primitive logical constants
\(=\_{{o}\alpha\alpha}\) were used in addition to \(\nsim\_{({o}{o})},
\lor\_{(({o}{o}){o})}\), and the \(\Pi\_{({o}({o}\alpha))}\)'s;
such a redundant choice of logical constants is not rare in
higher-order theorem provers). One option is to introduce a
reflexivity and substitutivity condition. An alternative is to combine
a reflexivity condition with a condition connecting primitive with
defined equality, so that the substitutivity condition follows. Note
that introducing a defined notion of equality based on the Leibniz
principle is, of course, still possible in this context (defined
equality is denoted in the remainder of this section by \(\doteq\) to
properly distinguish it from primitive equality \(=\)):
8. Not \(\Gamma(\cS \cup \{ \nsim [\bA\_{\alpha} = \bA\_{\alpha}]
\})\)
9. If \(\Gamma(\cS \cup \{ \bA\_{\alpha} = \bA\_{\alpha} \})\), then
\(\Gamma(\cS \cup \{ \bA\_{\alpha} \doteq \bA\_{\alpha} \})\)
10. \(\Gamma(\cS \cup \{ \bA\_{{o}} \})\) or \(\Gamma(\cS \cup \{ \nsim
\bA\_{{o}} \})\)
The saturation condition 10. still has to be added, independently of
which option is chosen. The principle was applied in
Benzmüller 1999a,b to prove completeness for the extensional
higher-order
RUE-resolution[3]
calculus underlying the higher-order automated theorem prover LEO and
its successor LEO-II.
* In Benzmüller et al. 2004 the principle is presented in a very
general way which allows for various possibilities concerning the
treatment of extensionality and equality in the range between
elementary type theory and extensional type theory. The principle is
applied to obtain completeness proofs for an associated range of
natural deduction calculi. The saturation condition is still used in
this work.
* Based on insights from Brown's (2004, 2007) thesis, a solution
for replacing the undesirable saturation condition by two weaker
conditions is presented in Benzmüller, Brown, and Kohlhase 2009;
this work also further studies the relation between saturation and
cut-elimination. The two weaker conditions, termed mating and
decomposition, are easier to demonstrate than saturation in
completeness proofs for machine-oriented calculi. They are (omitting
some type information in the second one and abusing notation):
1. If \(\Gamma(\cS \cup \{ \nsim \bA\_{{o}}, \bB\_{{o}} \})\) for atoms
\(\bA\_{{o}}\) and \(\bB\_{{o}}\), then \(\Gamma(\cS \cup \{ \nsim
[\bA\_{{o}} \doteq \bB\_{{o}}] \})\)
2. If \(\Gamma(\cS \cup \{ \nsim [h \overline{\bA^n\_{\alpha^n}}
\doteq h \overline{\bB^n\_{\alpha^n}} ] \})\), where head symbol
\(h\_{\beta\overline{\alpha^n}}\) is a parameter, then there is an \(i\
(1 \leq i \leq n)\) such that \(\Gamma(\cS \cup \{ \nsim
[\bA^i\_{\alpha^i} \doteq \bB^i\_{\alpha^i}] \})\).
The modified principle is applied in Benzmüller et al. 2009 to
show completeness for a sequent calculus for extensional type theory
with defined equality.
* A further extended Unifying Principle for extensional type theory with
primitive equality is presented and used in Backes & Brown 2011 to
prove the completeness of a tableau calculus for type theory which
incorporates the axiom of choice.
* A closely related and further simplified principle has also been
presented and studied in Steen 2018, where it was applied to show
completeness of the paramodulation calculus underlying the
theorem prover Leo-III (Steen & Benzmüller 2018, 2021).
### 3.4 Cut-Elimination and Cut-Simulation
Cut-elimination proofs (see also the SEP entry on
proof theory)
for Church's type theory, which are often closely related to
such proofs (Takahashi 1967, 1970; Prawitz 1968; Mints 1999) for other
formulations of type theory, may be found in Andrews 1971, Dowek &
Werner 2003, and Brown 2004. In Benzmüller et al. 2009 it is
shown how certain wffs\(\_{{o}}\), such as axioms of extensionality,
descriptions, choice (see
Sections 1.3.3
to
1.3.5),
and induction, can be used to justify cuts in cut-free sequent
calculi for elementary type theory. Moreover, the notions of
*cut-simulation* and *cut-strong axioms* are introduced
in this work, and the need for omitting defined equality and for
eliminating *cut-strong axioms* such as extensionality,
description, choice, and induction in machine-oriented calculi (e.g.,
by replacing them with more constrained, goal-directed rules) in order
to reduce *cut-simulation* effects is discussed as a major
challenge for higher-order automated theorem proving. In other words,
including cut-strong axioms in a machine-oriented proof calculus for
Church's type theory is essentially as bad as including a cut
rule, since the cut rule can be mimicked by them.
### 3.5 Expansion Proofs
An *expansion proof* is a generalization of the notion of a
Herbrand expansion of a theorem of first-order logic; it provides a
very elegant, concise, and nonredundant representation of the
relationship between the theorem and a tautology which can be obtained
from it by appropriate instantiations of quantifiers and which
underlies various proofs of the theorem. Miller (1987) proved that a
wff \(\bA\) is a theorem of elementary type theory if and only if
\(\bA\) has an expansion proof.
In Brown 2004 and 2007, this concept is generalized to that of an
*extensional expansion proof* to obtain an analogous theorem
involving type theory with extensionality.
### 3.6 The Decision Problem
Since type theory includes first-order logic, it is no surprise that
most systems of type theory are undecidable. However, one may look for
solvable special cases of the decision problem. For example, the
system \(\cQ\_{0}^1\) obtained by adding to \(\cQ\_0\) the additional
axiom \(\forall x\_{\imath}\forall y\_{\imath}[x\_{\imath}=y\_{\imath}]\)
is decidable.
Although the system \(\cT\) of elementary type theory is analogous to
first-order logic in certain respects, it is a considerably more
complex language, and special cases of the decision problem for
provability in \(\cT\) seem rather intractable for the most part.
Information about some very special cases of this decision problem may
be found in Andrews 1974, and we now summarize this.
A wff of the form \(\exists \bx^1 \ldots \exists \bx^n [\bA=\bB]\) is
a theorem of \(\cT\) iff there is a substitution \(\theta\) such that
\(\theta \bA \conv \theta \bB\). In particular, \(\vdash \bA=\bB\) iff
\(\bA \conv \bB\), which solves the decision problem for wffs of the
form \([\bA=\bB]\). Naturally, the circumstance that only trivial
equality formulas are provable in \(\cT\) changes drastically when
axioms of extensionality are added to \(\cT\). \(\vdash \exists
\bx\_{\beta}[\bA=\bB]\) iff there is a wff \(\bE\_{\beta}\) such that
\(\vdash[\lambda \bx\_{\beta}[\bA=\bB]]\bE\_{\beta}\), but the decision
problem for the class of wffs of the form \(\exists
\bx\_{\beta}[\bA=\bB]\) is unsolvable.
A wff of the form \(\forall \bx^1 \ldots \forall \bx^n\bC\), where
\(\bC\) is quantifier-free, is provable in \(\cT\) iff \({\downarrow}
\bC\) is tautologous. On the other hand, the decision problem for wffs
of the form \(\exists \bz\bC\), where \(\bC\) is quantifier-free, is
unsolvable. (By contrast, the corresponding decision problem in
first-order logic with function symbols is known to be solvable
(Maslov 1967).) Since irrelevant or vacuous quantifiers can always be
introduced, this shows that the only solvable classes of wffs of
\(\cT\) in prenex normal form defined solely by the structure of the
prefix are those in which no existential quantifiers occur.
## 4. Automation
### 4.1 Machine-Oriented Proof Calculi
The development and improvement of machine-oriented proof
calculi for Church's type theory has made good progress in
recent years; see, e.g., the references on superposition-based calculi
given below. The challenges are obviously much bigger than in
first-order logic. The considerably more expressive term language of
Church's type theory leads to a proof search space that is larger,
bushier, and harder to traverse than in first-order logic. Moreover,
remember that unification, which constitutes a very important control
and filter mechanism in first-order theorem proving,
is undecidable (in general) in type theory; see
Section 3.2.
On the positive side, however, there is a chance to find
significantly shorter proofs than in first-order logic. This is well
illustrated with a small, concrete example in Boolos 1987, for which a
fully automated proof now seems within reach (Benzmuller et al.
2023).
Clearly, further progress is needed to increase the practical
relevance of existing calculi for Church's type theory and their
implementations (see
Section 4.3).
The challenges include
* an appropriate handling of the impredicative nature of
Church's type theory (some form of blind guessing cannot
generally be avoided in a complete proof procedure, but must be
intelligently guided),
* the elimination/reduction of cut-simulation effects (see
Section 3.4)
caused by defined equality or cut-strong axioms (e.g.,
extensionality, description, choice, induction) in the search
space,
* the general undecidability of unification, rendering it a rather
problematic filter mechanism for controlling proof search,
* the invention of suitable heuristics for traversing the search
space,
* the provision of suitable term-orderings and their effective
exploitation in term rewriting procedures,
* and efficient data structures in combination with strong technical
 support for essential operations such as \(\lambda\)-conversion,
 substitution and rewriting.
It is planned that future editions of this article further elaborate
on machine-oriented proof calculi for Church's type theory. For
the time being, however, we provide only a selection of historical and
more recent references for the interested reader (see also
Section 5
below):
* **Sequent calculi:** Schutte 1960; Takahashi
1970; Takeuti 1987; Mints 1999; Brown 2004, 2007; Benzmuller et
al. 2009.
* **Mating method:** Andrews 1981; Bibel 1981; Bishop
1999.
* **Resolution calculi:** Andrews 1971; Huet 1973a;
Jensen & Pietrzykowski 1976; Benzmuller 1997, 1999a;
Benzmuller & Kohlhase 1998a.
* **Tableau method:**
Kohlhase[4]
1995, 1998; Brown & Smolka 2010; Backes & Brown 2011.
* **Paramodulation calculi:** Benzmuller 1999a,b;
Steen 2018, Steen & Benzmuller 2018, 2021.
* **Superposition calculi:** Bentkamp et al. 2018,
Bentkamp et al. 2021, Bentkamp et al. 2023a,c.
* **Combinator-based superposition calculi:** Bhayat
& Reger 2020.
### 4.2 Proof Assistants
Early computer systems for proving theorems of Church's type
theory (or extensions of it) include HOL (Gordon 1988; Gordon &
Melham 1993), TPS (Andrews et al. 1996; Andrews & Brown 2006),
Isabelle (Paulson 1988, 1990), PVS (Owre et al. 1996; Shankar 2001),
IMPS (Farmer et al. 1993), HOL Light (Harrison 1996), OMEGA (Siekmann
et al. 2006), and lClam (Richardson et al. 1998). Prominent
proof assistants that support more powerful dependent type theory
include Coq (Bertot & Casteran 2004) and the recent Lean
system (de Moura & Ullrich 2021). See
Other Internet References
section below for links to further info on these and other provers
mentioned later.
Most of the early systems mentioned above focused (at least initially)
on interactive proof. However, the TPS project in particular had been
working since the mid-1980s on the integration of ND-style interactive
proof and automated theorem proving using the mating method, and had
investigated proof transformation techniques between the two. This
research was further intensified in the 1990s, when other projects
investigated various solutions such as proof planning and bridges to
external theorem provers using so-called hammer tools. The hammers
developed at that time were early precursors of today's very successful
hammer tools such as Sledgehammer, HolyHammer and related systems
(Blanchette et al. 2013, Kaliszyk & Urban 2015 and Czajka &
Kaliszyk 2018).
Although some initial progress on the automation of Church's type
theory was made in the late 1980s and during the 1990s, triggered by
the dynamics just mentioned, the resource investments and achievements
at the time lagged far behind those seen, for example, in first-order
theorem proving. Good progress was fostered
only later, in particular through the development of a commonly
supported syntax for Church's type theory, called TPTP THF
(Sutcliffe & Benzmuller 2010, Sutcliffe 2022), and the
inclusion, from 2009 onwards, of a TPTP THF division in the yearly
CASC competitions (a kind of world championship for automated theorem
proving; see Sutcliffe 2016 for further details). These competitions,
in combination with further factors, triggered an increasing interest
in the full automation of Church's type theory. Particularly
successful in recent years has been the exploration of equality-based
theorem proving using the superposition approach and the adaptation of
SMT techniques to HOL.
### 4.3 Automated Theorem Provers
A selection of theorem provers for Church's type theory is presented.
The focus is on systems that have successfully participated in TPTP
THF CASC competitions in the past, and the order of presentation is
motivated by their first-time CASC participation. The latest editions
of most mentioned systems can be accessed online via the SystemOnTPTP
infrastructure (Sutcliffe 2017). Nearly all mentioned systems produce
verifiable proof certificates in the TPTP TSTP syntax.
The TPS prover (Andrews et al. 1996, Andrews & Brown 2006) can be
used to prove theorems of elementary type theory or extensional type
theory automatically, interactively, or semi-automatically. When
searching for a proof automatically, TPS first searches for an
expansion proof (Miller 1987) or an extensional expansion proof (Brown
2004, 2007) of the theorem. Part of this process involves searching
for acceptable matings (Andrews 1981, Bishop 1999). The behavior of
TPS is controlled by sets of flags, also called modes. A simple
scheduling mechanism is employed in the latest versions of TPS to
sequentially run about fifty modes for a limited amount of time. TPS
was the winner of the first THF CASC competition in 2009.
The LEO-II prover (Benzmuller et al. 2015) is the successor of
LEO (Benzmuller & Kohlhase 1998b), which was hardwired with
the OMEGA proof assistant (LEO stands for Logical Engine of OMEGA).
The provers are based on the RUE-resolution calculi developed in
Benzmuller 1999a,b. LEO was the first prover to implement
calculus rules for extensionality to avoid cut-simulation effects.
LEO-II inherits and adapts them, and provides additional calculus
rules for description and choice. The prover, which internally
collaborates with first-order provers (preferably E) and SAT solvers,
has pioneered cooperative higher-order/first-order proof automation.
Since the prover is often too weak to find a refutation among the
steadily growing set of clauses on its own, some of the clauses in
LEO-II's search space attain a special status: they are
first-order clauses modulo the application of an appropriate
transformation function. Therefore, LEO-II progressively launches time
limited calls with these clauses to a first-order theorem prover, and
when the first-order prover reports a refutation, LEO-II also
terminates. Parts of these ideas were already implemented in the
predecessor LEO. Communication between LEO-II and the cooperating
first-order theorem provers uses the TPTP language and standards.
LEO-II was the winner of the second THF CASC competition in 2010.
The Satallax prover (Brown 2012) is based on a complete ground tableau
calculus for Church's type theory with choice (Backes &
Brown 2011). An initial tableau branch is formed from the assumptions
of a conjecture and negation of its conclusion. From that point on,
Satallax tries to determine unsatisfiability or satisfiability of this
branch. Satallax progressively generates higher-order formulas and
corresponding propositional clauses. Satallax uses the SAT solver
MiniSat as an engine to test the current set of propositional clauses
for unsatisfiability. If the clauses are unsatisfiable, the original
branch is unsatisfiable. Satallax provides calculus rules for
extensionality, description and choice. If there are no quantifiers at
function types, the generation of higher-order formulas and
corresponding clauses may terminate. In that case, if MiniSat reports
the final set of clauses as satisfiable, then the original set of
higher-order formulas is satisfiable (by a standard model in which all
types are interpreted as finite sets). Satallax was the winner of the
THF CASC competition in 2011 and from 2013 to 2019.
The Isabelle/HOL system (Nipkow, Wenzel, & Paulson 2002) was
originally designed as an interactive prover. However, to ease user
interaction, several automatic proof tactics have been added over the
years. By appropriately scheduling a subset of these proof tactics,
some of which are quite powerful, Isabelle/HOL has since about 2011
also been turned into an automatic theorem prover for TPTP THF (and
other TPTP syntax formats) that can be run from a command shell like
other provers. The most powerful proof tactics that
are scheduled by Isabelle/HOL include the Sledgehammer tool
(Blanchette et al. 2013), which invokes a sequence of external
first-order and higher-order theorem provers, the model finder
*Nitpick* (Blanchette & Nipkow 2010), the equational
reasoner *simp*, the untyped tableau prover *blast*, the
simplifier and classical reasoners *auto*, *force*, and
*fast*, and the best-first search procedure *best*. In
contrast to all other automated theorem provers mentioned above, the
TPTP incarnation of Isabelle/HOL does not output proof certificates.
Isabelle/HOL was the winner of the THF CASC competition in 2012.
The agsyHOL prover is based on a generic lazy narrowing proof search
algorithm. Backtracking is employed and a comparably small search
state is maintained. The prover outputs proof terms in sequent style
which can be verified in the Agda system.
coqATP implements (the non-inductive) part of the calculus of
constructions (Bertot & Casteran 2004). The system outputs
proof terms which are accepted as proofs (after the addition of a few
definitions) by the Coq proof assistant. The prover employs axioms for
functional extensionality, choice, and excluded middle. Boolean
extensionality is not supported. In addition to axioms, a small
library of basic lemmas is employed.
The Leo-III prover implements a paramodulation calculus for
Church's type theory (Steen 2018). The system, which is a
descendant of LEO and LEO-II, provides calculus rules for
extensionality, description and choice. The system has put an emphasis
on the implementation of an efficient set of underlying data
structures, on simplification routines and on heuristic rewriting. In
the tradition of its predecessors, Leo-III cooperates with first-order
reasoning tools using translations to many-sorted first-order logic.
The prover accepts every common TPTP syntax dialect and is thus very
widely applicable. Recently, the prover has also been extended to
natively support almost every normal higher-order modal logic (Steen
et al. 2023).
CVC4, CVC5 and Verit (Barbosa et al. 2019) are SMT-based automated
provers with built-in support for many theories, including linear
arithmetic, arrays, bit vectors, data types, finite sets and strings.
These provers have been extended to support (fragments of) Church's
type theory.
The Zipperposition prover (Bentkamp et al. 2018, Vukmirovic et
al. 2022) focuses on the effective implementation of superposition
calculi for Church's type theory. It comes with Logtk, a supporting
library for manipulating terms, formulas, clauses, etc. in Church's
type theory, and supports relevant extensions such as datatypes,
recursive functions, and arithmetic. Zipperposition was the winner of
the THF CASC competition in 2020, 2021 and 2022.
Vampire (Bhayat & Reger 2020), which has dominated the TPTP
competitions in first-order logic for more than two decades, now also
supports the automation of Church's type theory using a
combinator-based superposition calculus. Vampire was the winner of the
THF CASC competition in 2023.
The theorem prover E (Vukmirovic et al. 2023), another prominent
and leading first-order automated theorem prover, has been extended
for Church's type theory using a variant of the superposition
approach that is also implemented in Zipperposition.
Lash (Brown and Kaliszyk 2022) is a theorem prover for Church's type
theory created as a fork of Satallax.
Duper is a superposition-based theorem prover for dependent type
theory, designed to prove theorems in the proof assistant Lean.
In recent years, there has thus been a significant shift from
first-order to higher-order automated theorem proving.
### 4.4 (Counter-)Model Finding
Support for finding finite models or countermodels for formulas of
Church's type theory was already implemented in the
tableau-based prover HOT (Konrad 1998). Restricted (counter-)model
finding capabilities are also implemented in the provers Satallax,
LEO-II and LEO-III. The most advanced (finite) model finding support
is currently realized in the systems Nitpick, Nunchaku and Refute.
These tools have been integrated with the Isabelle proof assistant.
Nitpick is also available as a standalone tool that accepts TPTP THF
syntax. The systems are particularly valuable for exposing errors and
misconceptions in problem encodings, and for revealing bugs in the THF
theorem provers.
## 5. Applications
In addition to its elegance, Church's type theory also has many
practical advantages. This aspect is emphasised, for example, in the
recent textbook by Farmer (2023), which deals with various relevant
and important further aspects that could not be covered in this
article. Because of its good practical expressiveness, the range of
applications of Church's type theory is very wide, and only a few
examples can be given below (a good source for further information on
practical examples is the proceedings of the conferences on
Interactive Theorem Proving).
### 5.1 Semantics of Natural Language
Church's type theory plays an important role in the study of the
formal semantics of natural language. Pioneering work on this was done
by Richard Montague. See his papers "English as a formal
language", "Universal grammar", and "The
proper treatment of quantification in ordinary English", which
are reprinted in Montague 1974. A crucial component of
Montague's analysis of natural language is the definition of a
tensed intensional logic (Montague 1974: 256), which is an enhancement
of Church's type theory. Montague Grammar had a huge impact, and
has since been developed in many further directions, not least in
Typelogical/Categorical Grammar. Further related work on intensional
and higher-order modal logic is presented in Gallin 1975 and Muskens
2006.
### 5.2 Mathematics and Computer Science
Proof assistants based on Church's Type Theory, including
Isabelle/HOL, HOL Light, HOL4, and PVS, have been successfully
utilized in a broad range of application in computer science and
mathematics.
Applications in computer science include the verification of hardware,
software and security protocols. A prominent example is the
L4.verified project in which Isabelle/HOL was used to formally prove
that the seL4 operating system kernel implements an abstract,
mathematical model specifying what the kernel is supposed to do
(Klein et al. 2018).
In mathematics, proof assistants have been applied to the development
of libraries of mathematical theories and the verification of challenge
theorems. An early example is the mathematical library that has been
developed since the eighties in the TPS project. An exemplary list of
theorems that were proved automatically with TPS is given in Andrews
et al. 1996. A very prominent recent example is Hales' Flyspeck project, in
which HOL Light was employed to develop a formal proof for
Kepler's conjecture (Hales et al. 2017). An example that
strongly exploits automation support in Isabelle/HOL with Sledgehammer
and Nitpick is presented in Benzmuller & Scott 2019. In this
work different axiom systems for category theory were explored and
compared.
A solid overview of past and ongoing formalization projects can be
obtained by consulting appropriate sources such as Isabelle's Archive
of Formal Proofs, the Journal of Formalized Reasoning, or the THF
entries in Sutcliffe's TPTP problem library. Relevant further
information and a discussion of implications and further work can also
be found in Bentkamp et al. 2023b and in Bayer et al. 2024.
Further improving proof automation within these proof
assistants--based on proof hammering tools or on other forms of
prover integration--is relevant for minimizing interaction effort
in future applications.
### 5.3 Computational Metaphysics and Artificial Intelligence
Church's type theory is a classical logic, but topical
applications in philosophy and artificial intelligence often require
expressive non-classical logics. In order to support such applications
with reasoning tools for Church's type theory, the shallow
semantical embedding technique (see also
Section 1.2.2)
has been developed that generalizes and extends the ideas underlying
the well known standard translation of modal logics to first-order
logic. The technique was applied for the assessment of modern variants
of the ontological argument with a range of higher-order theorem
provers, including LEO-II, Satallax, Nitpick and Isabelle/HOL. In the
course of experiments, LEO-II detected an inconsistency in the
premises of Godel's argument, while the provers succeeded
in automatically proving Scott's emendation of it and in confirming
the consistency of the emended premises. More details on this work are
presented in a related SEP entry on
automated reasoning
(see Section 4.6 on Logic and Philosophy). The semantical embedding
approach has been adapted and further extended for a range of other
non-classical logics and related applications. In philosophy this
includes the encoding and formal assessment of ambitious ethical and
metaphysical theories, and in artificial intelligence this includes
the mechanization of deontic logics and normative reasoning to control
AI systems (Benzmuller et al. 2020) as well as an automatic proof
of the
muddy children puzzle (see Appendix B of dynamic epistemic logic),
which is a famous puzzle in epistemic reasoning and dynamic
epistemic reasoning.
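The core idea of the shallow semantical embedding technique can be sketched in a few lines: modal propositions are modeled as predicates over possible worlds, and the box operator quantifies over accessible worlds. A minimal illustration in Lean, with hypothetical names (`σ`, `mBox`, `valid`) not taken from any particular implementation:

```lean
-- lifted propositions: truth sets of possible worlds
def σ (World : Type) := World → Prop

-- □p holds at w iff p holds at every world accessible from w
def mBox {W : Type} (r : W → W → Prop) (p : σ W) : σ W :=
  fun w => ∀ v, r w v → p v

-- a modal formula is valid iff it holds at every world
def valid {W : Type} (p : σ W) : Prop := ∀ w, p w

-- the K axiom □(p → q) → (□p → □q) is valid in this embedding
example {W : Type} (r : W → W → Prop) (p q : σ W) :
    valid (fun w => mBox r (fun v => p v → q v) w → mBox r p w → mBox r q w) :=
  fun _ hpq hp v hrv => hpq v hrv (hp v hrv)
```

Non-classical reasoning is thereby reduced to ordinary reasoning in the host logic, which is what allows classical higher-order provers to be reused for such applications.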
### 5.4 Metalogical Studies
Church's type theory is also well suited to support metalogical
studies, including the encoding of other logical formalisms and
formalized soundness and completeness studies (see, e.g.,
Schlichtkrull et al. 2020, Halkjaer et al. 2023, and various
further pointers given in Benzmuller 2019). In particular,
Church's type theory can be studied formally in itself, as shown, for
example, by Kumar et al. (2016), Schlichtkrull (2023), and Diaz
(2023).
## 1. Overview
We begin with a bird's eye view of some important aspects of
intuitionistic type theory. Readers who are unfamiliar with the theory
may prefer to skip it on a first reading.
The origins of intuitionistic type theory are Brouwer's
intuitionism and Russell's type theory. Like
Church's classical simple theory of types
it is based on the lambda calculus with types,
but differs from it in that it is based on the propositions-as-types
principle, discovered by Curry (1958) for propositional logic and
extended to predicate logic by Howard (1980) and de Bruijn
(1970). This extension was made possible by the introduction of
indexed families of types (dependent types) for representing the
predicates of predicate logic. In this way all logical connectives and
quantifiers can be interpreted by type formers. In intuitionistic type
theory further types are added, such as a type of natural numbers, a
type of small types (a universe) and a type of well-founded trees. The
resulting theory contains intuitionistic number theory (Heyting
arithmetic) and much more.
The theory is formulated in natural deduction where the rules for
each type former are classified as formation, introduction,
elimination, and equality rules. These rules exhibit certain
symmetries between the introduction and elimination rules following
Gentzen's and Prawitz' treatment of natural deduction,
as explained in the entry on
proof-theoretic semantics.
The elements of propositions, when interpreted as types, are
called *proof-objects*. When proof-objects are added to the
natural deduction calculus it becomes a typed lambda calculus with
dependent types, which extends Church's original typed lambda
calculus. The equality rules are computation rules for the terms of
this calculus. Each function definable in the theory is total and
computable. Intuitionistic type theory is thus a typed functional
programming language with the unusual property that all programs
terminate.
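That every definable function is total is enforced, in descendants of the theory such as Lean or Agda, by accepting only definitions the system can see to terminate, e.g., by structural recursion. A small Lean sketch:

```lean
-- accepted: structural recursion on a Nat argument
def double : Nat → Nat
  | 0     => 0
  | n + 1 => double n + 2

example : double 3 = 6 := rfl

-- a definition like  def loop (n : Nat) : Nat := loop n
-- would be rejected, since no termination argument exists
```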
Intuitionistic type theory is not only a formal logical system but
also provides a comprehensive philosophical framework for
intuitionism. It is an *interpreted language*, where the
distinction between the *demonstration of a judgment* and
the *proof of a proposition* plays a fundamental role (Sundholm
2012). The framework clarifies the Brouwer-Heyting-Kolmogorov
interpretation of intuitionistic logic and extends it to the more
general setting of intuitionistic type theory. In doing so it provides
a general conception not only of what a constructive proof is, but
also of what a constructive mathematical object is. The meaning of the
judgments of intuitionistic type theory is explained in terms of
computations of the canonical forms of types and terms. These
informal, intuitive meaning explanations are
"pre-mathematical" and should be contrasted to formal
mathematical models developed inside a standard mathematical framework
such as set theory.
This meaning theory also justifies a variety of inductive,
recursive, and inductive-recursive definitions. Although
proof-theoretically strong notions can be justified, such as analogues
of certain large cardinals, the system is considered
predicative. Impredicative definitions of the kind found in
higher-order logic, intuitionistic set theory, and topos theory are
not part of the theory. Neither is Markov's principle, and thus the
theory is distinct from Russian constructivism.
An alternative formal logical system for predicative constructive
mathematics is Myhill and Aczel's
constructive Zermelo-Fraenkel set theory
(CZF). This theory, which is based on
intuitionistic first-order predicate logic and weakens some of the
axioms of classical Zermelo-Fraenkel Set Theory, has a natural
interpretation in intuitionistic type theory. Martin-Lof's meaning
explanations thus also indirectly form a basis for CZF.
Variants of intuitionistic type theory underlie several widely used
proof assistants, including NuPRL, Coq, and Agda. These proof
assistants are computer systems that have been used for formalizing
large and complex proofs of mathematical theorems, such as the Four
Colour Theorem in graph theory and the Feit-Thompson Theorem in finite
group theory. They have also been used to prove the correctness of a realistic C compiler (Leroy 2009) and other
computer software.
Philosophically and practically, intuitionistic type theory is a
foundational framework where constructive mathematics and computer
programming are, in a deep sense, the same. This point has been
emphasized by Gonthier (2008) in the paper in which he describes his
proof of the Four Colour Theorem:
>
> The approach that proved successful for this proof was to turn
> almost every mathematical concept into a data structure or a program
> in the Coq system, thereby converting the entire enterprise into one
> of program verification.
>
>
>
## 2. Propositions as Types
### 2.1 Intuitionistic Type Theory: a New Way of Looking at Logic?
Intuitionistic type theory offers a new way of analyzing logic,
mainly through its introduction of explicit proof objects. This
provides a direct computational interpretation of logic, since there
are computation rules for proof objects. As regards expressive power,
intuitionistic type theory may be considered as an extension of
first-order logic, much as higher order logic, but predicative.
#### 2.1.1 A Type Theory
Russell developed
type theory in response to his discovery
of a paradox in naive set theory. In his ramified type theory
mathematical objects are classified according to their *types*:
the type of propositions, the type of objects, the type of properties
of objects, etc. When Church developed his
simple theory of types on the
basis of a typed version of his lambda calculus he added
the rule that there is a type of functions between any two types of
the theory. Intuitionistic type theory further extends the simply
typed lambda calculus with dependent types, that is, indexed families
of types. An example is the family of types of \(n\)-tuples indexed by
\(n\).
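This family of \(n\)-tuples can be written down directly in a proof assistant based on dependent types. A sketch in Lean (the name `Vec` is ours; Lean's library provides a similar `Vector`):

```lean
-- Vec α n: the type of n-tuples of elements of α, indexed by n
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- the index tracks the length: this pair has type Vec Nat 2
example : Vec Nat 2 := .cons 1 (.cons 2 .nil)
```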
Types have been widely used in programming for a long time. Early
high-level programming languages introduced types of integers and
floating point numbers. Modern programming languages often have rich
type systems with many constructs for forming new
types. Intuitionistic type theory is a functional programming language
where the type system is so rich that practically any conceivable
property of a program can be expressed as a type. Types can thus be
used as specifications of the task of a program.
#### 2.1.2 An intuitionistic logic with proof-objects
Brouwer's analysis of logic led him to an intuitionistic logic
which rejects the law of excluded middle and the law of double
negation. These laws are not valid in intuitionistic type theory. Thus
it does not contain classical (Peano) arithmetic but only
intuitionistic (Heyting) arithmetic. (It is another matter that Peano
arithmetic can be interpreted in Heyting arithmetic by the double
negation interpretation, see the entry on
intuitionistic logic.)
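In proof assistants based on intuitionistic type theory this rejection is directly visible: the law of excluded middle is not derivable in the core calculus and must be imported as an axiom. In Lean, for instance:

```lean
-- not provable without axioms; Classical.em is backed by an axiom
example (p : Prop) : p ∨ ¬p := Classical.em p

-- double negation elimination likewise needs the classical axiom
example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```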
Consider a theorem of intuitionistic arithmetic, such as the
division theorem
\[\forall m, n. m > 0 \supset \exists q, r. mq + r = n
\wedge m > r \]
A formal proof (in the usual sense) of this theorem is a sequence
(or tree) of formulas, where the last (root) formula is the theorem
and each formula in the sequence is either an axiom (a leaf) or the
result of applying an inference rule to some earlier (higher)
formulas.
When the division theorem is proved in intuitionistic type theory,
we do not only build a formal proof in the usual sense but also
a *construction* (or *proof-object*)
"\(\divi\)" which witnesses the truth of the theorem. We
write
\[\divi : \forall m, n {:} \N.\, m > 0 \supset \exists q, r {:} \N.\, mq + r = n \wedge m > r \]
to express that \(\divi\) is a proof-object for the division
theorem, that is, an element of the type representing the division
theorem. When propositions are represented as types, the
\(\forall\)-quantifier is identified with the dependent function space
former (or general cartesian product) \(\Pi\), the
\(\exists\)-quantifier with the dependent pairs type former (or
general disjoint sum) \(\Sigma\), conjunction \(\wedge\) with cartesian product \( \times \), the identity relation = with the
type former \(\I\) of proof-objects of identities, and the greater
than relation \(>\) with the type former \(\GT\) of
proof-objects of greater-than statements. Using
"type-notation" we thus write
\[
\divi : \Pi m, n {:} \N.\, \GT(m,0)\rightarrow
\Sigma q, r {:} \N.\, \I(\N,mq + r,n) \times \GT(m,r)
\]
to express that the proof object "\(\divi\)" is a
function which maps two numbers \(m\) and \(n\) and a proof-object \(p\) witnessing that
\(m > 0\) to a quadruple \((q,(r,(s,t)))\), where \(q\) is the quotient
and \(r\) is the remainder obtained when dividing \(n\) by \(m\). The
third component \(s\) is a proof-object witnessing the fact that \(mq
+ r = n\) and the fourth component \(t\) is a proof object witnessing \(m > r \).
Crucially, \(\divi\) is not only a function in the classical sense;
it is also a function in the intuitionistic sense, that is, a program
which computes the output \((q,(r,(s,t)))\) when given \(m\), \(n\), \(p\)
as inputs. This program is a term in a lambda calculus with special
constants, that is, a program in a functional programming
language.
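The division theorem and its proof-object can be written out in a modern descendant of intuitionistic type theory. A sketch in Lean, where the witnesses are `n / m` and `n % m` and the two proof components come from the library lemmas `Nat.div_add_mod` and `Nat.mod_lt`:

```lean
-- the proof-object is literally a function taking m, n and a proof of
-- m > 0 to the tuple (q, (r, (s, t)))
theorem div' (m n : Nat) (h : m > 0) :
    ∃ q r : Nat, m * q + r = n ∧ m > r :=
  ⟨n / m, n % m, Nat.div_add_mod n m, Nat.mod_lt n h⟩
```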
#### 2.1.3 An extension of first-order predicate logic
Intuitionistic type theory can be considered as an extension of
first-order logic, much as higher order logic is an extension of first
order logic. In higher order logic we find some individual domains
which can be interpreted as any sets we like. If there are relational
constants in the signature these can be interpreted as any relations
between the sets interpreting the individual domains. On top of that
we can quantify over relations, and over relations of relations,
etc. We can think of higher order logic as first-order logic equipped
with a way of introducing new domains of quantification: if \(S\_1,
\ldots, S\_n\) are domains of quantification then \((S\_1,\ldots,S\_n)\)
is a new domain of quantification consisting of all the n-ary
relations between the domains \(S\_1,\ldots,S\_n\). Higher order logic
has a straightforward set-theoretic interpretation where
\((S\_1,\ldots,S\_n)\) is interpreted as the power set \(P(A\_1 \times
\cdots \times A\_n)\) where \(A\_i\) is the interpretation of \(S\_i\),
for \(i=1,\ldots,n\). This is the kind of higher order logic or simple
theory of types that Ramsey, Church and others introduced.
Intuitionistic type theory can be viewed in a similar way, only
here the possibilities for introducing domains of quantification are
richer, one can use \(\Sigma, \Pi, +, \I\) to construct new ones from
old. (Section 3.1; Martin-Lof 1998
[1972]). Intuitionistic type theory has a straightforward
set-theoretic interpretation as well, where \(\Sigma\), \(\Pi\) etc
are interpreted as the corresponding set-theoretic constructions; see
below. We can add to intuitionistic type theory unspecified individual
domains just as in HOL. These are interpreted as sets as for HOL. Now
we exhibit a difference from HOL: in intuitionistic type theory we can
introduce unspecified family symbols. We can introduce \(T\) as a
family of types over the individual domain \(S\):
\[T(x)\; {\rm type} \;(x{:}S).\]
If \(S\) is interpreted as \(A\), \(T\) can be interpreted as any
family of sets indexed by \(A\). As a non-mathematical example, we can
render the binary relation *loves* between members of an
individual domain of *people* as follows. Introduce the binary
family Loves over the domain People
\[{\rm Loves}(x,y)\; {\rm type}\; (x{:}{\rm People}, y{:}{\rm People}).\]
The interpretation can be any family of sets \(B\_{x,y}\) (\(x{:}A\),
\(y{:}A\)). How does this cover the standard notion of relation? Suppose
we have a binary relation \(R\) on \(A\) in the familiar set-theoretic
sense. We can make a binary family corresponding to this as
follows
\[
B\_{x,y} =
\begin{cases}
\{0\} &\text{if } R(x,y) \text{ holds} \\
\varnothing &\text{if } R(x,y) \text{ is false.}
\end{cases}\]
Now clearly \(B\_{x,y}\) is nonempty if and only if \(R(x,y)\)
holds. (We could have chosen any other element from our set theoretic
universe than 0 to indicate truth.) Thus from any relation \(R\) we can
construct a family \(B\) such that \(R(x,y)\) holds exactly when \(B\_{x,y}\)
is non-empty. Note that this interpretation does not care what the
proof for \(R(x,y)\) is, just that it holds. Recall that
intuitionistic type theory interprets propositions as types, so
\(p{:} {\rm Loves}({\rm John}, {\rm Mary})\) means that \({\rm Loves}({\rm
John}, {\rm Mary})\) is true.
The interpretation of relations as families allows for keeping
track of proofs or evidence that \(R(x,y)\) holds, but we may also
choose to ignore it.
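This passage can be mimicked with ordinary sets in a short Python sketch (an illustrative model only; the function names are ours, not part of the theory):

```python
# A crude set-theoretic sketch: turn a boolean relation R on a domain
# into a family of sets B(x, y), non-empty exactly when R(x, y) holds.
def family_of(R):
    """Return the family B with B(x, y) = {0} if R(x, y), else the empty set."""
    return lambda x, y: {0} if R(x, y) else set()

# Example relation: divisibility on natural numbers.
divides = lambda x, y: y % x == 0
B = family_of(divides)

assert B(3, 9) == {0}      # 3 divides 9: the set is inhabited
assert B(3, 10) == set()   # 3 does not divide 10: the set is empty
```

As in the text, the choice of 0 as the inhabitant is arbitrary; only inhabitedness matters.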
In Montague semantics,
higher order logic is used to give
the semantics of natural language (including examples like the one
above). Ranta (1994) introduced the idea of instead employing
intuitionistic type theory to better capture sentence structure with
the help of dependent types.
In contrast, how would the mathematical relation \(>\) between
natural numbers be handled in intuitionistic type theory? First of all
we need a type of numbers \(\N\). We could in principle introduce an
unspecified individual domain \(\N\), and then add axioms just as we
do in first-order logic when we set up the axiom system for Peano
arithmetic. However this would not give us the desirable computational
interpretation. So as explained below we lay down introduction rules
for constructing new natural numbers in \(\N\) and elimination and
computation rules for defining functions on \(\N\) (by recursion). The
standard order relation \(>\) should satisfy
\[\mbox{\(x > y\) iff there exists \(z{:} \N\) such that \(y+z+1 = x\)}.
\]
The right-hand side is rendered as \(\Sigma z{:}\N.\, \I(\N,y+z+1,x)\) in
intuitionistic type theory, and we take this as the definition of the
relation \(>\). (Here \(+\) is defined by recursive equations, and \(\I\) is
the identity type construction.) Now all the properties of \(>\) are
determined by the aforementioned introduction, elimination, and
computation rules for \(\N\).
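As a hedged illustration, proof-objects for this definition of \(>\) can be simulated in Python, modelling an identity proof simply as a token (the names are ours; this is not how the theory itself computes):

```python
# Illustrative sketch: a proof-object for x > y, following the definition
#   x > y  :=  Sigma z : N. I(N, y + z + 1, x).
# We model an identity proof I(N, a, b) as the token "refl" when a == b.
def identity_proof(a, b):
    return "refl" if a == b else None

def proof_of_gt(x, y):
    """Search for a pair (z, refl) proving x > y; None if there is none."""
    for z in range(x):                    # candidate witnesses z : N
        if identity_proof(y + z + 1, x):
            return (z, "refl")
    return None

assert proof_of_gt(5, 2) == (2, "refl")   # 2 + 2 + 1 = 5
assert proof_of_gt(2, 5) is None          # no z with 5 + z + 1 = 2
```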
#### 2.1.4 A logic with several forms of judgment
The type system of intuitionistic type theory is very
expressive. As a consequence the well-formedness of a type is no
longer a simple matter of parsing, it is something which needs to be
proved. Well-formedness of a type is one form of *judgment* of
intuitionistic type theory. Well-typedness of a term with respect to a
type is another. Furthermore, there are equality judgments for types
and terms. This is yet another way in which intuitionistic type theory
differs from ordinary first order logic with its focus on the sole
judgment expressing the truth of a proposition.
#### 2.1.5 Semantics
While a standard presentation of first-order logic would follow
Tarski in defining the notion of model, intuitionistic type theory
follows the tradition of Brouwerian meaning theory as further
developed by Heyting and Kolmogorov, the so called BHK-interpretation
of logic. The key point is that the proof of an implication \(A
\supset B \) is a *method* that transforms a proof of \(A\) to
a proof of \(B\). In intuitionistic type theory this method is
formally represented by the program \(f {:} A \supset B\) or \(f {:} A
\rightarrow B\): the type of proofs of an implication \(A \supset B\)
is the type of functions which map proofs of \(A\) to proofs of
\(B\).
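The BHK reading of implication can be illustrated by a small Python sketch, modelling proofs of conjunctions as pairs and proofs of implications as functions (all names here are ours, for illustration only):

```python
# BHK sketch: a proof of an implication is a method (here, a Python
# function) transforming proofs of the antecedent into proofs of the
# consequent. A proof of a conjunction A & B is modelled as a pair.
def swap(proof_of_a_and_b):
    """A proof of (A & B) -> (B & A): swap the components of the pair."""
    a, b = proof_of_a_and_b
    return (b, a)

# Whatever proof-objects "p" and "q" stand for, swap turns a proof of
# A & B into a proof of B & A.
assert swap(("p", "q")) == ("q", "p")
```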
Moreover, whereas Tarski semantics is usually presented
meta-mathematically, and assumes set theory, Martin-Lof's meaning
theory of intuitionistic type theory should be understood directly and
"pre-mathematically", that is, without assuming a
meta-language such as set theory.
#### 2.1.6 A functional programming language
Readers with a background in the lambda calculus and functional
programming can get an alternative first approximation of
intuitionistic type theory by thinking about it as a typed functional
programming language in the style of Haskell or one of the dialects of
ML. However, it differs from these in two crucial aspects: (i) it has
dependent types (see below) and (ii) all typable programs
terminate. (Note that intuitionistic type theory has influenced recent
extensions of Haskell with *generalized algebraic datatypes*
which sometimes can play a similar role as inductively defined
dependent types.)
### 2.2 The Curry-Howard Correspondence
As already mentioned, the principle that
>
> *a proposition is the type of its proofs.*
>
>
>
is fundamental to intuitionistic type theory. This principle is
also known as the Curry-Howard correspondence or even Curry-Howard
isomorphism. Curry discovered a correspondence between the
implicational fragment of intuitionistic logic and the simply typed
lambda-calculus. Howard extended this correspondence to first-order
predicate logic. In intuitionistic type theory this correspondence
becomes an *identification* of propositions and types, which has
been extended to include quantification over higher types and
more.
### 2.3 Sets of Proof-Objects
So what are these proof-objects like? They should not be thought of
as logical derivations, but rather as some (structured) symbolic
evidence that something is true. Another term for such evidence is
"truth-maker".
It is instructive, as a somewhat crude first approximation, to
replace types by ordinary sets in this correspondence. Define a set
\(\E\_{m,n}\), depending on \(m, n \in {{\mathbb N}}\), by:
\[\E\_{m,n} =
\left\{\begin{array}{ll}
\{0\} & \mbox{if \(m = n\)}\\
\varnothing & \mbox{if \(m \ne n\).}
\end{array}
\right.\]
Then \(\E\_{m,n}\) is nonempty exactly when \(m=n\). The set
\(\E\_{m,n}\) corresponds to the proposition \(m=n\), and the number
\(0\) is a proof-object (truth-maker) inhabiting the sets
\(\E\_{m,m}\).
Consider the proposition that *\(m\) is an even number*
expressed as the formula \(\exists n \in {{\mathbb N}}. m= 2n\). We
can build a set of proof-objects corresponding to this formula by
using the general set-theoretic sum operation. Suppose that \(A\_n\)
(\(n\in {{\mathbb N}}\)) is a family of sets. Then its disjoint sum is
given by the set of pairs
\[
(\Sigma n \in {{\mathbb N}})A\_n = \{ (n,a) : n \in {{\mathbb N}}, a \in A\_n\}.\]
If we apply this construction to the family \(A\_n = \E\_{m,2n}\) we
see that \((\Sigma n \in {{\mathbb N}})\E\_{m,2n}\) is nonempty exactly
when there is an \(n\in {{\mathbb N}}\) with \(m=2n\). Using the
general set-theoretic product operation \((\Pi n \in {{\mathbb
N}})A\_n\) we can similarly obtain a set corresponding to a universally
quantified proposition.
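The constructions above can be simulated with Python sets, in the crude spirit of this section (the names are ours, and the finite search bound is an artifact of the simulation, since Python cannot range over all of \({\mathbb N}\)):

```python
# E(m, n) is {0} when m == n and empty otherwise; the disjoint sum of
# the family A_n = E(m, 2n) is non-empty exactly when m is even.
def E(m, n):
    return {0} if m == n else set()

def sigma_even(m, bound=100):
    """The set of pairs (n, a) with a in E(m, 2n), for n below the bound."""
    return {(n, a) for n in range(bound) for a in E(m, 2 * n)}

assert sigma_even(6) == {(3, 0)}   # 6 = 2 * 3, witnessed by the pair (3, 0)
assert sigma_even(7) == set()      # 7 is odd: no proof-object exists
```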
### 2.4 Dependent Types
In intuitionistic type theory there are primitive type formers
\(\Sigma\) and \(\Pi\) for general sums and products, and \(\I\) for
identity types, analogous to the set-theoretic constructions described
above. The *identity type* \(\I(\N,m,n)\) corresponding to the
set \(\E\_{m,n}\) is an example of a *dependent type* since it
depends on \(m\) and \(n\). It is also called an *indexed family of
types* since it is a family of types indexed by \(m\) and
\(n\). Similarly, we can form the general disjoint sum \(\Sigma x {:}
A.\, B\) and the general cartesian product \(\Pi x {:} A.\, B\) of such a
family of types \(B\) indexed by \(x {:} A\), corresponding to the set
theoretic sum and product operations above.
Dependent types can also be defined by primitive recursion. An
example is the type of \(n\)-tuples \(A^n\) of elements of type \(A\)
and indexed by \(n {:} \N\) defined by the equations
\[\begin{align\*}
A^0 &= 1\\
A^{n+1} &= A \times A^n
\end{align\*}\]
where \(1\) is a
one element type and \(\times\) denotes the cartesian product of two
types. We note that dependent types introduce computation in types:
the defining rules above are computation rules. For example, the
result of computing \(A^3\) is \(A \times (A \times (A \times
1))\).
### 2.5 Propositions as Types in Intuitionistic Type Theory
With propositions as types, predicates become dependent types. For
example, the predicate \(\mathrm{Prime}(x)\) becomes the type of
proofs that \(x\) is prime. This type *depends* on
\(x\). Similarly, \(x < y\) is the type of proofs that \(x\) is
less than \(y\).
According to the Curry-Howard interpretation of propositions as
types, the logical constants are interpreted as type formers:
\[\begin{align\*}
\bot &= \varnothing\\
\top &= 1\\
A \vee B &= A + B\\
A \wedge B &= A \times B\\
A \supset B &= A \rightarrow B\\
\exists x {:} A.\, B &= \Sigma x {:} A.\, B\\
\forall x {:} A.\, B &= \Pi x {:} A.\, B
\end{align\*}\]
where \(\Sigma x {:} A.\, B\) is the
disjoint sum of the \(A\)-indexed family of types \(B\) and \(\Pi x {:}
A.\, B\) is its cartesian product. The canonical elements of \(\Sigma x {:}
A.\, B\) are pairs \((a,b)\) such that \(a {:} A\) and \(b {:} B[x:=a]\)
(the type obtained by substituting all free occurrences of \(x\) in
\(B\) by \(a\)). The elements of \(\Pi x {:} A.\, B\) are (computable)
functions \(f\) such that \(f\,a {:} B[x:=a]\), whenever \(a {:} A\).
For example, consider the proposition
\[\begin{equation}
\forall m {:} \N.\, \exists n {:} \N.\, m \lt n \wedge \mathrm{Prime}(n)
\tag{1}\label{prop1}
\end{equation}\]
expressing that there are
arbitrarily large primes. Under the Curry-Howard interpretation this
becomes the type \(\Pi m {:} \N.\, \Sigma n {:} \N.\, m \lt n \times
\mathrm{Prime}(n)\) of functions which map a number \(m\) to a triple
\((n,(p,q))\), where \(n\) is a number, \(p\) is a proof that \(m \lt
n\) and \(q\) is a proof that \(n\) is prime. This is the *proofs
as programs* principle: a constructive proof that there are
arbitrarily large primes becomes a program which given any number
produces a larger prime together with proofs that it indeed is larger
and indeed is prime.
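The proofs-as-programs reading can be sketched as a short Python program (an informal illustration: the "witnesses" \(p\) and \(q\) are modelled as checked boolean facts, not genuine proof-objects):

```python
# Proofs-as-programs sketch: a constructive proof of
# "for all m there is a prime n > m" is a program mapping m to a triple
# (n, (p, q)), where p witnesses m < n and q witnesses that n is prime.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

def larger_prime(m):
    n = m + 1
    while not is_prime(n):
        n += 1
    # The witnesses p and q are modelled simply as checked boolean facts.
    return (n, (m < n, is_prime(n)))

assert larger_prime(10) == (11, (True, True))
assert larger_prime(13) == (17, (True, True))
```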
Note that the proof which derives a contradiction from the
assumption that there is a largest prime is not constructive, since it
does not explicitly give a way to compute an even larger prime. To
turn this proof into a constructive one we have to show explicitly how
to construct the larger prime. (Since proposition (\ref{prop1}) above
is a \(\Pi^0\_2\)-formula we can for example use Friedman's
A-translation to turn such a proof in classical arithmetic into a
proof in intuitionistic arithmetic and thus into a proof in
intuitionistic type theory.)
## 3. Basic Intuitionistic Type Theory
We now present a core version of intuitionistic type theory,
closely related to the first version of the theory presented by
Martin-Lof in 1972 (Martin-Lof 1998 [1972]). In addition to
the type formers needed for the Curry-Howard interpretation of typed
intuitionistic predicate logic listed above, we have two types: the
type \(\N\) of natural numbers and the type \(\U\) of small types.
The resulting theory can be shown to contain intuitionistic number
theory \(\HA\) (Heyting arithmetic), Godel's System \(\T\) of
primitive recursive functions of higher type, and the theory
\(\HA^\omega\) of Heyting arithmetic of higher type.
This core intuitionistic type theory is not only the original one,
but perhaps the minimal version which exhibits the essential features
of the theory. Later extensions with primitive identity types,
well-founded tree types, universe hierarchies, and general notions of
inductive and inductive-recursive definitions have increased the
proof-theoretic strength of the theory and also made it more
convenient for programming and formalization of mathematics. For
example, with the addition of well-founded trees we can interpret the
Constructive Zermelo-Fraenkel Set Theory \(\CZF\) of Aczel
(1978 [1977]). However, we will
wait until the next section to describe those extensions.
### 3.1 Judgments
In Martin-Lof (1996) a general philosophy of logic is presented
where the traditional notion of judgment is expanded and given a
central position. A judgment is no longer just an affirmation or
denial of a proposition, but a general act of knowledge. When
reasoning mathematically we make judgments about mathematical
objects. One form of judgment is to state that some mathematical
statement is true. Another form of judgment is to state that something
is a mathematical object, for example a set. The logical rules give
methods for producing correct judgments from earlier judgments. The
judgments obtained by such rules can be presented in tree form
\[
\begin{prooftree}
\AxiomC{\(J\_1\)}
\AxiomC{\(J\_2\)}
\RightLabel{\(r\_1\)}
\BinaryInfC{\(J\_3\)}
\AxiomC{\(J\_4\)}
\RightLabel{\(r\_2\)}
\UnaryInfC{\(J\_5\)}
\AxiomC{\(J\_6\)}
\RightLabel{\(r\_3\)}
\BinaryInfC{\(J\_7\)}
\RightLabel{\(r\_4\)}
\BinaryInfC{\(J\_8\)}
\end{prooftree}\]
or in sequential form
* (1) \(J\_1 \quad\text{ axiom} \)
* (2) \(J\_2 \quad\text{ axiom} \)
* (3) \(J\_3 \quad\text{ by rule \(r\_1\) from (1) and (2)} \)
* (4) \(J\_4 \quad\text{ axiom} \)
* (5) \(J\_5 \quad\text{ by rule \(r\_2\) from (4)} \)
* (6) \(J\_6 \quad\text{ axiom} \)
* (7) \(J\_7 \quad\text{ by rule \(r\_3\) from (5) and (6)} \)
* (8) \(J\_8 \quad\text{ by rule \(r\_4\) from (3) and (7)} \)
The latter form is common in mathematical arguments. Such a
sequence or tree formed by logical rules from axioms is
a *derivation* or *demonstration* of a judgment.
First-order reasoning may be presented using a single kind of
judgment:
>
> the proposition \(B\) is true under the hypothesis that the
> propositions \(A\_1, \ldots, A\_n\) are all true.
>
>
>
We write this *hypothetical judgment* as a
so-called *Gentzen sequent*
\[A\_1, \ldots, A\_n {\vdash}B.\]
Note that this is a single judgment that should not be confused with
the derivation of the judgment \({\vdash}B\) from the judgments
\({\vdash}A\_1, \ldots, {\vdash}A\_n\). When \(n=0\), then
the *categorical judgment* \( {\vdash}B\) states that \(B\) is
true without any assumptions. With sequent notation the familiar rule
for conjunctive introduction becomes
\[\begin{prooftree}
\AxiomC{\(A\_1, \ldots,A\_n {\vdash}B\)}
\AxiomC{\(A\_1, \ldots, A\_n {\vdash}C\)}
\RightLabel{\((\land I)\).}
\BinaryInfC{\(A\_1, \ldots, A\_n {\vdash}B \land C\)}
\end{prooftree}\]
### 3.2 Judgment Forms
Martin-Lof type theory has four basic forms of judgments and is a
considerably more complicated system than first-order logic. One
reason is that more information is carried around in the derivations
due to the identification of propositions and types. Another reason is
that the syntax is more involved. For instance, the well-formed
formulas (types) have to be generated simultaneously with the provably
true formulas (inhabited types).
The four forms of *categorical* judgment are
* \(\vdash A \; {\rm type}\), meaning that \(A\) is a well-formed
type,
* \(\vdash a {:} A\), meaning that \(a\) has type \(A\),
* \(\vdash A = A'\), meaning that \(A\) and \(A'\) are equal
types,
* \(\vdash a = a' {:} A\), meaning that \(a\) and \(a'\) are
equal elements of type \(A\).
In general, a judgment is *hypothetical*, that is, it is
made in a context \(\Gamma\): a list \(x\_1 {:} A\_1, \ldots, x\_n
{:} A\_n\) of variables which may occur free in the judgment, together
with their respective types. Note that the types in a context can
depend on variables of earlier types. For example, \(A\_n\) can depend
on \(x\_1 {:} A\_1, \ldots, x\_{n-1} {:} A\_{n-1}\). The four forms of
hypothetical judgments are
* \(\Gamma \vdash A \; {\rm type}\), meaning that \(A\) is a
well-formed type in the context \(\Gamma\),
* \(\Gamma \vdash a {:} A\), meaning that \(a\) has type \(A\) in
context \(\Gamma\),
* \(\Gamma \vdash A = A'\), meaning that \(A\) and \(A'\) are
equal types in the context \(\Gamma\),
* \(\Gamma \vdash a = a' {:} A\), meaning that \(a\) and \(a'\)
are equal elements of type \(A\) in the context \(\Gamma\).
Under the propositions as types interpretation
\[\tag{2}\label{analytic} \vdash a {:} A
\]
can be understood as the judgment that \(a\) is a proof-object for the
proposition \(A\). When suppressing this object we get a judgment
corresponding to the one in ordinary first-order logic (see
above):
\[\tag{3}\label{synthetic} \vdash A\; {\rm true}.
\]
Remark 3.1. Martin-Lof
(1994) argues that
Kant's *analytic judgment a priori* and *synthetic judgment
a priori* can be exemplified, in the realm of logic, by
([analytic]) and ([synthetic]) respectively. In the analytic judgment
([analytic]) everything that is needed to make the judgment evident is
explicit. For its synthetic version ([synthetic]) a possibly
complicated proof construction \(a\) needs to be provided to make it
evident. This understanding of analyticity and syntheticity has the
surprising consequence that "the logical laws in their usual
formulation are all synthetic" (Martin-Lof 1994:
95). His analysis further
gives:
> " [...] the logic of analytic judgments,
> that is, the logic for deriving judgments of the two analytic forms,
> is complete and decidable, whereas the logic of synthetic judgments is
> incomplete and undecidable, as was shown by Godel."
> Martin-Lof (1994: 97).
>
>
>
The decidability of the two analytic judgments (\(\vdash a{:}A\) and
\(\vdash a=b{:}A\)) hinges on the metamathematical properties of type
theory: strong normalization and decidable type checking.
Sometimes also the following forms are explicitly considered to be
judgments of the theory:
* \(\Gamma \; {\rm context}\), meaning that \(\Gamma\) is a
well-formed context.
* \(\Gamma = \Gamma'\), meaning that \(\Gamma\) and
\(\Gamma'\) are equal contexts.
Below we shall abbreviate the judgment \(\Gamma \vdash A \; {\rm
type}\) as \(\Gamma \vdash A\) and \(\Gamma \; {\rm context}\) as
\(\Gamma \vdash.\)
### 3.3 Inference Rules
When stating the rules we will use the letter \(\Gamma\) as a
meta-variable ranging over contexts, \(A,B,\ldots\) as meta-variables
ranging over types, and \(a,b,c,d,e,f,\ldots\) as meta-variables
ranging over terms.
The first group of inference rules are general rules including
rules of assumption, substitution, and context formation. There are
also rules which express that equalities are equivalence
relations. There are numerous such rules, and we only show the
particularly important *rule of type equality* which is crucial
for computation in types:
\[\frac{\Gamma \vdash a {:} A\hspace{2em}\Gamma \vdash A = B}
{\Gamma \vdash a {:} B}\]
The remaining rules are specific to the type formers. These are
classified as formation, introduction, elimination, and equality
rules.
### 3.4 Intuitionistic Predicate Logic
We only give the rules for \(\Pi\). There are analogous rules for
the other type formers corresponding to the logical constants of typed
predicate logic.
In the following \(B[x := a]\) means the term obtained by
substituting the term \(a\) for each free occurrence of the variable
\(x\) in \(B\) (avoiding variable capture).
\(\Pi\)-formation.
\[\frac{\Gamma \vdash A\hspace{2em} \Gamma, x {:} A \vdash B}
{\Gamma \vdash \Pi x {:} A. B}\]
\(\Pi\)-introduction.
\[\frac{\Gamma, x {:} A \vdash b {:} B}
{\Gamma \vdash \lambda x. b {:} \Pi x {:} A. B}\]
\(\Pi\)-elimination.
\[\frac
{\Gamma \vdash f {:} \Pi x {:} A.B\hspace{2em}\Gamma \vdash a {:} A}
{\Gamma \vdash f\,a {:} B[x := a]}\]
\(\Pi\)-equality.
\[\frac
{\Gamma, x {:} A \vdash b {:} B\hspace{2em}\Gamma \vdash a {:} A}
{\Gamma \vdash (\lambda x.b)\,a = b[x := a] {:} B[x := a]}\]
This is the rule of \(\beta\)-conversion. We
may also add the rule of \(\eta\)-conversion:
\[\frac
{\Gamma \vdash f {:} \Pi x {:} A. B}
{\Gamma \vdash \lambda x. f\,x = f {:} \Pi x {:} A. B}.\]
Furthermore, there are congruence rules expressing that operations
introduced by the formation, introduction, and elimination rules
preserve equality. For example, the congruence rule for \(\Pi\) is
\[\frac{\Gamma \vdash A = A'\hspace{2em} \Gamma, x {:} A \vdash B=B'}
{\Gamma \vdash \Pi x {:} A. B = \Pi x {:} A'. B'}.\]
### 3.5 Natural Numbers
As in Peano arithmetic the natural numbers are generated by 0 and
the successor operation \(\s\). The elimination rule states that these
are the only possible ways to generate a natural number.
We write \(f(c) = \R(c,d,yz.e)\) for the function which is defined
by primitive recursion on the natural number \(c\) with base case
\(d\) and step function \(yz.e\) (or alternatively \(\lambda yz.e\))
which maps the value \(z\) for the previous number \(y {:} \N\) to the
value for \(\s(y)\). Note that \(\R\) is a new variable-binding
operator: the variables \(y\) and \(z\) become bound in \(e\).
\(\N\)-formation.
\[\Gamma \vdash \N\]
\(\N\)-introduction.
\[\Gamma \vdash 0 {:} \N
\hspace{2em}
\frac{\Gamma \vdash a {:} \N}
{\Gamma \vdash \s(a) {:} \N}\]
\(\N\)-elimination.
\[\frac{
\Gamma, x {:} \N \vdash C
\hspace{1em}
\Gamma \vdash c {:} \N
\hspace{1em}
\Gamma \vdash d {:} C[x := 0]
\hspace{1em}
\Gamma, y {:} \N, z {:} C[x := y] \vdash e {:} C[x := \s(y)]
}
{
\Gamma \vdash \R(c,d,yz.e) {:} C[x := c]
}\]
\(\N\)-equality (under appropriate premises).
\[\begin{align\*}
\R(0,d,yz.e) &= d {:} C[x := 0]\\
\R(\s(a),d,yz.e) &= e[y := a, z := \R(a,d,yz.e)] {:} C[x := \s(a)]
\end{align\*}\]
The rule of \(\N\)-elimination simultaneously expresses the type of
a function defined by primitive recursion and, under the Curry-Howard
interpretation, the rule of mathematical induction: we prove the
property \(C\) of a natural number \(x\) by induction on \(x\).
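The behaviour of \(\R\) described by these rules can be simulated in Python (an informal model; the real \(\R\) is a typed term former with a dependent result type, not a function on integers):

```python
# A sketch of the recursion operator R(c, d, e): primitive recursion on
# c with base case d and a step function e taking the previous number y
# and the previous value z to the value at s(y).
def R(c, d, e):
    value = d
    for y in range(c):
        value = e(y, value)
    return value

# Addition defined by recursion on the second argument:
#   m + 0 = m,  m + s(y) = s(m + y).  (The step ignores y.)
def add(m, n):
    return R(n, m, lambda y, z: z + 1)

assert add(3, 4) == 7
assert R(5, 0, lambda y, z: z + y) == 10   # 0 + 1 + 2 + 3 + 4
```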
Godel's System \(\T\) is essentially intuitionistic type theory with
only the type formers \(\N\) and \(A \rightarrow B\) (the type of
functions from \(A\) to \(B\), which is the special case of
\((\Pi x {:} A)B\) where \(B\) does not depend on \(x {:} A\)). Since there are no
dependent types in System \(\T\) the rules can be simplified.
### 3.6 The Universe of Small Types
Martin-Lof's first version of type theory (Martin-Lof 1971a) had an
axiom stating that there is a type of all types. This was proved
inconsistent by Girard who found that the Burali-Forti paradox could
be encoded in this theory.
To overcome this pathological impredicativity, but still retain
some of its expressivity, Martin-Lof introduced in 1972 a universe
\(\U\) of small types closed under all type formers of the theory,
except that it does not contain itself (Martin-Lof 1998 [1972]). The rules
are:
\(\U\)-formation.
\[\Gamma \vdash \U\]
\(\U\)-introduction.
\[\Gamma \vdash \varnothing {:} \U
\hspace{3em}
\Gamma \vdash 1 {:} \U\]
\[\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma \vdash B {:} \U}
{\Gamma \vdash A + B {:} \U}
\hspace{3em}
\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma \vdash B {:} \U}
{\Gamma \vdash A \times B {:} \U}\]
\[\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma \vdash B {:} \U}
{\Gamma \vdash A \rightarrow B {:} \U}\]
\[\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma, x {:} A \vdash B {:} \U}
{\Gamma \vdash \Sigma x {:} A.\, B {:} \U}
\hspace{3em}
\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma, x {:} A \vdash B {:} \U}
{\Gamma \vdash \Pi x {:} A.\, B {:} \U}\]
\[\Gamma \vdash \N {:} \U\]
\(\U\)-elimination.
\[\frac{\Gamma \vdash A {:} \U}
{\Gamma \vdash A}\]
Since \(\U\) is a type, we can use \(\N\)-elimination to define small
types by primitive recursion. For example, if \(A : \U\), we can define
the type of \(n\)-tuples of elements in \(A\) as follows:
\[A^n = \R(n,1,xy.A \times y) {:} \U\]
This type-theoretic universe \(\U\) is analogous to a Grothendieck
universe in set theory which is a set of sets closed under all the
ways sets can be constructed in Zermelo-Fraenkel set theory. The
existence of a Grothendieck universe cannot be proved from the usual
axioms of Zermelo-Fraenkel set theory but needs a new axiom.
In Martin-Lof (1975) the universe is extended to a countable
hierarchy of universes
\[\U\_0 : \U\_1 : \U\_2 : \cdots .\]
In this way each type has a type, not only each small type.
### 3.7 Propositional Identity
Above, we introduced the equality judgment
\[\tag{4}\label{defeq} \Gamma \vdash a = a' {:} A.\]
This is usually called a "definitional equality" because
it can be decided by normalizing the terms \(a\) and \(a'\) and
checking whether the normal forms are identical. However, this
equality is a judgment and not a proposition (type) and we thus cannot
prove such judgmental equalities by induction. For this reason we need
to introduce propositional identity types. For example, the identity
type for natural numbers \(\I(\N,m,n)\) can be defined by
\(\U\)-valued primitive recursion. We can then express and prove the
Peano axioms. Moreover, extensional equality of functions can be
defined by
\[\I(\N\rightarrow \N,f,f') = \Pi x {:} \N. \I(\N,f\,x,f'\,x).\]
### 3.8 The Axiom of Choice is a Theorem
The following form of the axiom of choice is an immediate
consequence of the BHK-interpretation of the intuitionistic
quantifiers, and is easily proved in intuitionistic type theory:
\[(\Pi x {:} A. \Sigma y {:} B. C) \rightarrow \Sigma f {:} (\Pi x {:} A. B).\, \Pi x {:} A.\, C[y := f\,x]\]
The reason is that \(\Pi x {:} A. \Sigma y {:} B. C\) is the type of
functions which map elements \(x {:} A\) to pairs \((y,z)\) with \(y {:}
B\) and \(z {:} C\). The choice function \(f\) is obtained by returning
the first component \(y {:} B\) of this pair.
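The proof sketched above is essentially a two-line program; here is an informal Python rendering (the example family and its "proof" strings are hypothetical placeholders):

```python
# Type-theoretic axiom of choice, as a program: from a function f
# mapping each x : A to a pair (y, z), with z a proof of C(x, y),
# extract the choice function g and the family of proofs h.
def choice(f):
    g = lambda x: f(x)[0]          # the choice function: first components
    h = lambda x: f(x)[1]          # the proofs: second components
    return (g, h)

# Hypothetical example: for each x, pick some y > x together with
# (a stand-in for) the evidence.
f = lambda x: (x + 1, "proof that {} < {}".format(x, x + 1))
g, h = choice(f)
assert g(3) == 4
assert h(3) == "proof that 3 < 4"
```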
It is perhaps surprising that intuitionistic type theory directly
validates an axiom of choice, since this axiom is often considered
problematic from a constructive point of view. A possible explanation
for this state of affairs is that the above is an axiom of choice
for *types*, and that types are not in general appropriate
constructive approximations of sets in the classical sense. For
example, we can represent a real number as a Cauchy sequence in
intuitionistic type theory, but the set of real numbers is not the
type of Cauchy sequences, but the type of Cauchy sequences up to
equiconvergence. More generally, a set in Bishop's constructive
mathematics is represented by a type (commonly called
"preset") together with an equivalence relation.
If \(A\) and \(B\) are equipped with equivalence relations, there
is of course no guarantee that the choice function, \(f\) above, is
extensional in the sense that it maps equivalent elements to equivalent
elements. This is the failure of the *extensional axiom of
choice*, see Martin-Lof (2009) for an analysis.
## 4. Extensions
### 4.1 The Logical Framework
The above completes the description of a core version of
intuitionistic type theory close to that of (Martin-Lof 1998 [1972]).
In 1986 Martin-Lof proposed a reformulation of intuitionistic type
theory; see Nordstrom, Peterson and Smith (1990) for an
exposition. The purpose was to give a more compact formulation, where
\(\lambda\) and \(\Pi\) are the only variable binding operations. It
is nowadays considered the main version of the theory. It is also the
basis for the Agda proof assistant. The 1986 theory has two parts:
* the theory of types (the logical framework);
* the theory of sets (small types).
Remark 4.1. Note that the word
"set" in the logical framework does not coincide with the
way it is used in Bishop's constructive mathematics. To avoid this
confusion, types together with equivalence relations are usually
called "setoids" or "extensional sets" in
intuitionistic type theory.
The logical framework has only two type formers: \(\Pi x {:} A. B\)
(usually written \((x {:} A)B\) or \((x {:} A) \rightarrow B\) in the
logical framework formulation) and \(\U\) (usually called
\(\Set\)). The rules for \(\Pi x{:} A. B\) (\((x {:} A) \rightarrow B\))
are the same as given above (including \(\eta\)-conversion). The rules
for \(\U\) (\(\Set\)) are also the same, except that the logical
framework only stipulates closure under \(\Pi\)-type formation.
The other small type formers ("set formers") are
introduced in the theory of sets. In the logical framework formulation
each formation, introduction, and elimination rule can be expressed as
the typing of a new constant. For example, the rules for natural
numbers become
\[\begin{align\*} \N &: \Set,\\
0 &: \N,\\
\s &: \N \rightarrow \N,\\
\R &: (C {:} \N \rightarrow \Set) \rightarrow C\,0
\rightarrow (( x {:} \N) \rightarrow C\,x \rightarrow C\,(\s\,x))
\rightarrow (c {:} \N) \rightarrow C\,c.
\end{align\*}\]
where we have omitted the common context \(\Gamma\), since the types
of these constants are closed. Note that the recursion operator \(\R\)
has a first argument \(C {:} \N \rightarrow \Set\) unlike in the
original formulation.
Moreover, the equality rules can be expressed as equations
\[\begin{align\*}
\R\, C\, d\, e\, 0 &= d {:} C\,0\\
\R\, C\, d\, e\, (\s\, a) &= e\, a\, (\R\, C\, d\, e\, a) {:} C\,(\s\,a)
\end{align\*}\]
under suitable assumptions.
In the sequel we will present several extensions of type theory. To
keep the presentation uniform we will however *not* use the
logical framework presentation of type theory, but will use the same
notation as in section 3.
### 4.2 A General Identity Type Former
As we mentioned above, identity on natural numbers can be defined
by primitive recursion. Identity relations on other types can also be
defined in the basic version of intuitionistic type theory presented
in section 3.
However, Martin-Lof (1975) extended intuitionistic type theory with
a uniform primitive identity type former \(\I\) for all types. The
rules for \(\I\) express that the identity relation is inductively
generated by the proof of reflexivity, a canonical constant called
\(\r\). (Note that \(\r\) was coded by the number 0 in the introductory
presentation of proof-objects
in section 2.3.) The elimination rule for the identity type is a
generalization of identity elimination in predicate logic and
introduces an elimination constant \(\J\). We here show the
formulation due to Paulin-Mohring (1993) rather than the
original formulation of Martin-Lof (1975). The inference rules are
the following.
\(\I\)-formation.
\[\frac{\Gamma \vdash A
\hspace{1em}
\Gamma \vdash a {:} A
\hspace{1em}
\Gamma \vdash a' {:} A}
{\Gamma \vdash \I(A,a,a')}\]
\(\I\)-introduction.
\[\frac{\Gamma \vdash A
\hspace{1em}
\Gamma \vdash a {:} A}
{\Gamma \vdash \r {:} \I(A,a,a)}\]
\(\I\)-elimination.
\[\frac{
\Gamma, x {:} A, y {:} \I(A,a,x) \vdash C
\hspace{1em}
\Gamma \vdash b {:} A
\hspace{1em}
\Gamma \vdash c {:} \I(A,a,b)
\hspace{1em}
\Gamma \vdash d {:} C[x := a, y := \r]}
{ \Gamma \vdash \J(c,d) {:} C[x := b, y:= c]}\]
\(\I\)-equality (under appropriate assumptions).
\[\begin{align\*}
\J(\r,d) &= d
\end{align\*}\]
Note that if, in the rule of \(\I\)-elimination, \(C\) depends only on \(x {:} A\) and not on the proof \(y {:} \I(A,a,x)\) (and we also suppress proof-objects), we recover the rule of identity elimination in predicate logic.
By constructing a model of type theory where types are interpreted
as *groupoids* (categories where all arrows are isomorphisms)
Hofmann and Streicher (1998) showed that it cannot be proved in
intuitionistic type theory that all proofs of \(\I(A,a,b)\) are
identical. This may seem like an incompleteness of the theory, and
Streicher suggested a new axiom \(\K\) from which it follows that all
proofs of \(\I(A,a,b)\) are identical to \(\r\).
The \(\I\)-type is often called the *intensional identity
type*, since it does not satisfy the principle of function
extensionality. Intuitionistic type theory with the intensional
identity type is also often called *intensional intuitionistic type
theory* to distinguish it from *extensional intuitionistic type
theory* which will be presented in
section 7.1.
### 4.3 Well-Founded Trees
A type of well-founded trees of the form \(\W x {:} A. B\) was
introduced in Martin-Lof 1982 (and in a more restricted form by Scott
1970). Elements of \(\W x {:} A. B\) are trees of varying and arbitrary
branching: varying, because the branching type \(B\) is indexed by \(x
{:} A\) and arbitrary because \(B\) can be arbitrary. The type is given
by a *generalized inductive definition* since the well-founded
trees may be infinitely branching. We can think of \(\W x{:}A. B\) as the
free term algebra, where each \(a {:} A\) represents a term constructor
\(\sup\,a\) with (possibly infinite) arity \(B[x := a]\).
\(\W\)-formation.
\[\frac{\Gamma \vdash A\hspace{2em} \Gamma, x {:} A \vdash B}
{\Gamma \vdash \W x {:} A. B}\]
\(\W\)-introduction.
\[\frac{\Gamma \vdash a {:} A \hspace{2em} \Gamma, y {:} B[x:=a] \vdash
b {:} \W x {:} A. B} {\Gamma \vdash \sup(a, y.b) {:} \W x {:} A. B}\]
We omit the rules of \(\W\)-elimination and \(\W\)-equality.
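The omitted elimination rule expresses structural recursion on well-founded trees: to compute on \(\sup(a, y.b)\) one may use the recursive results on all subtrees. A minimal Python sketch for finitely branching trees (the names `sup`, `wrec`, `leaf`, `node`, and `size` are illustrative, not part of the theory):

```python
# A well-founded tree sup(a, branch) has a label a : A and a subtree
# branch(y) for each y in the branching arity B[x := a].
def sup(a, branch):
    return (a, branch)

# Structural recursion (the content of W-elimination): the step function
# receives the label, the branches, and the recursive results on them.
def wrec(tree, step):
    a, branch = tree
    return step(a, branch, lambda y: wrec(branch(y), step))

# Example: natural numbers as a W-type with labels 0 and 1, where
# label 0 has empty arity and label 1 has a one-element arity.
def leaf():
    return sup(0, lambda y: None)      # no branches

def node(t):
    return sup(1, lambda y: t)         # a single branch

# Count the successors by recursion on the tree structure.
def size(t):
    return wrec(t, lambda a, branch, rec: 0 if a == 0 else 1 + rec(0))
```

This mirrors how \(\W\)-elimination is used in practice: `size` is defined once by a step function, and termination is guaranteed by the well-foundedness of the trees.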
Adding well-founded trees to intuitionistic type theory increases
its proof-theoretic strength significantly (Setzer 1998).
### 4.4 Iterative Sets and CZF
An important application of well-founded trees is Aczel's (1978)
construction of a type-theoretic model of Constructive Zermelo-Fraenkel
Set Theory (CZF). To this end he defines the type of iterative sets
as
\[\V = \W x {:} \U. x.\]
Let \(A {:} \U\) be a small type, and \(x {:} A\vdash M\) be an indexed
family of iterative sets. Then \(\sup(A,x.M)\), or with a more
suggestive notation \(\{ M\mid x {:} A\}\), is an iterative set. To
paraphrase: an iterative set is a family of iterative sets indexed by a small type.
Note that an iterative set is a data-structure in the sense of
functional programming: a possibly infinitely branching well-founded
tree. Different trees may represent the same set. We therefore need to
define a notion of extensional equality between iterative sets which
disregards repetition and order of elements. This definition is
formally similar to the definition of bisimulation of processes in
process algebra. The type \(\V\) up to extensional equality can be
viewed as a constructive type-theoretic model of the cumulative
hierarchy, see the entry on
set theory: constructive and intuitionistic ZF
for further information about CZF.
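The bisimulation-style extensional equality just described can be sketched in Python for the special case of finitely branching iterative sets, representing a set simply as a list of its elements (the index type is a finite set of positions; the name `eq` is illustrative). Two sets are equal when every element of one is equal to some element of the other, which disregards repetition and order:

```python
# An iterative set {M | x : A} with a finite index type is represented
# as a list of iterative sets; different lists may denote the same set.
def eq(s, t):
    # Extensional equality as mutual inclusion, defined recursively.
    return (all(any(eq(m, n) for n in t) for m in s) and
            all(any(eq(n, m) for m in s) for n in t))

empty = []                  # {}
a = [empty]                 # { {} }
b = [empty, empty]          # { {}, {} } -- a different tree, the same set
```

For genuinely infinitely branching trees the quantifiers in the definition are no longer computable, which is why in type theory this equality is a defined propositional relation rather than an algorithm.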
### 4.5 Inductive Definitions
The notion of an inductive definition is fundamental in
intuitionistic type theory. It is a primitive notion and not, as in
set theory, a derived notion where an inductively defined set is
defined impredicatively as the smallest set closed under some
rules. However, in intuitionistic type theory inductive definitions
are considered predicative: they are viewed as being built up from
below.
The inductive definability of types is inherent in the meaning
explanations of intuitionistic type theory which we shall discuss in
the next section. In fact, intuitionistic type theory can be described
briefly as a theory of inductive, recursive, and inductive-recursive
definitions based on a framework of lambda calculus with dependent
types.
We have already seen the type of natural numbers and the type of
well-founded trees as examples of types given by inductive
definitions; the natural numbers is an example of an ordinary finitary
inductive definition and the well-founded trees of a generalized
possibly infinitary inductive definition. The introduction rules
describe how elements of these types are inductively generated and the
elimination and equality rules describe how functions from these types
can be defined by structural recursion on the way these elements are
generated. According to the propositions as types principle, the
elimination rules are simultaneously rules for proof by structural
induction on the way the elements are generated.
The type formers \(0, 1, +, \times, \rightarrow, \Sigma,\) and
\(\Pi\) which interpret the logical constants for intuitionistic
predicate logic are examples of degenerate inductive definitions. Even
the identity type (in intensional intuitionistic type theory) is
inductively generated; it is the type of proofs generated by the
reflexivity axiom. Its elimination rule expresses proof by pattern
matching on the proof of reflexivity.
The common structure of the rules of the type formers can be
captured by a general schema for inductive definitions (Dybjer
1991). This general schema has many useful instances, for example, the
type \(\List(A)\) of lists with elements of type \(A\) has the
following introduction rules:
\[\Gamma \vdash \nil {:} \List(A)
\hspace{3em}
\frac{\Gamma \vdash a {:} A\hspace{2em}\Gamma \vdash as {:} \List(A)}
{\Gamma \vdash \cons(a,as) {:} \List(A)}\]
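The elimination rule generated by the schema for \(\List(A)\) is structural recursion (the familiar fold), with one case per introduction rule. A Python sketch, with illustrative names:

```python
# Lists built from the two constructors of the introduction rules.
NIL = ('nil',)

def cons(a, rest):
    return ('cons', a, rest)

# The eliminator: recursion on how the list was generated, with one
# case for nil and one for cons (which also receives the recursive result).
def list_rec(xs, nil_case, cons_case):
    if xs[0] == 'nil':
        return nil_case
    _, a, rest = xs
    return cons_case(a, rest, list_rec(rest, nil_case, cons_case))

# Example instance: the length function.
def length(xs):
    return list_rec(xs, 0, lambda a, rest, n: 1 + n)
```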
Other useful instances are types of binary trees and other trees
such as the infinitely branching trees of the Brouwer ordinals of the
second and higher number classes.
The general schema does not only cover inductively defined types,
but also inductively defined families of types, such as the identity
relation. The type \(A^n\) of \(n\)-tuples of type
\(A\) mentioned above was defined by primitive recursion on \(n\). It can
also be defined as an inductive family with the following introduction
rules:
\[\Gamma \vdash \nil {:} A^0
\hspace{3em}
\frac{\Gamma \vdash a {:} A\hspace{2em}\Gamma \vdash as {:} A^n}
{\Gamma \vdash \cons(a,as) {:} A^{\s(n)}}\]
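The introduction rules for the family \(A^n\) can be mimicked in Python by tagging every tuple with its length index; `vcons` then witnesses the second rule, producing an element of \(A^{\s(n)}\) from an element of \(A^n\). The names `NIL0` and `vcons` are illustrative:

```python
# An element of A^n is represented as a pair (n, elements).
NIL0 = (0, ())                         # nil : A^0

def vcons(a, v):
    # From a : A and v : A^n, build an element of A^(s(n)).
    n, elems = v
    return (n + 1, (a,) + elems)

v = vcons('x', vcons('y', NIL0))       # an element of A^2
```

In a dependently typed language the index is checked statically; here it is merely carried along at runtime.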
The schema for inductive types and families is a type-theoretic
generalization of a schema for iterated inductive definitions in
predicate logic (formulated in natural deduction) presented by
Martin-Lof (1971b). This paper immediately preceded
Martin-Lof's first version of intuitionistic type
theory. It is both conceptually and technically a forerunner to the
development of the theory.
It is an essential feature of proof assistants such as Agda and Coq
that they enable users to define their own inductive types and families
by listing their introduction rules (the types of their
constructors). This is much like in typed functional programming
languages such as Haskell and the different dialects of ML. However,
unlike in these programming languages the schema for inductive
definitions in intuitionistic type theory enforces a restriction
amounting to well-foundedness of the elements of the defined
types.
### 4.6 Inductive-Recursive Definitions
We already mentioned that there are two main definition principles
in intuitionistic type theory: the inductive definition of types
(sets) and the (primitive, structural) definition of functions by
recursion on the way the elements of such types are inductively
generated. Usually, the inductive definition of a set comes first: the
formation and introduction rules make no reference to the elimination
rule. However, there are definitions in intuitionistic type theory for
which this is not the case and we simultaneously inductively generate
a type and a function from that type defined by structural
recursion. Such definitions are
simultaneously *inductive-recursive*.
The first example of such an inductive-recursive definition is an
alternative formulation *a la Tarski* of the universe of small
types. Above we presented the universe formulated *a la
Russell*, where there is no notational distinction between the
element \(A {:} \U\) and the corresponding type \(A\). For a
universe *a la* Tarski there is such a distinction, for
example, between the element \(\hat{\N} {:} \U\) and the corresponding
type \(\N\). The element \(\hat{\N}\) is called the *code* for
\(\N\).
The elimination rule for the universe *a la* Tarski is:
\[\frac{\Gamma \vdash a {:} \U}
{\Gamma \vdash \T(a)}\]
This expresses that there is a function \(\T\) which maps a code
\(a\) to its corresponding type \(\T(a)\). The equality rules define
this correspondence. For example,
\[\T(\hat{\N}) = \N.\]
We see that \(\U\) is inductively generated with one introduction
rule for each small type former, and \(\T\) is defined by recursion on
these small type formers. The simultaneous inductive-recursive nature
of this definition becomes apparent in the rules for \(\Pi\) for
example. The introduction rule is
\[\frac{\Gamma \vdash a {:} \U\hspace{2em} \Gamma, x {:} \T(a) \vdash b {:} \U}
{\Gamma \vdash \hat{\Pi} x {:} a. b {:} \U}\]
and the corresponding equality rule is
\[\T(\hat{\Pi}x {:} a. b) = \Pi x {:} \T(a). \T(b)\]
Note that the introduction rule for \(\U\) refers to \(\T\), and hence
that \(\U\) and \(\T\) must be defined simultaneously.
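The simultaneous character of the definition can be illustrated in Python for a fragment with two base types and \(\Pi\): codes are data, and the decoder is defined by structural recursion on codes. The key point is that the second component of a \(\Pi\)-code is a function whose domain is the *decoding* of the first component, so codes and decoder cannot be defined one after the other. All names are illustrative, and types are decoded to finite Python sets, with dependent functions represented as tuples of argument-value pairs:

```python
from itertools import product

# Codes: ('empty',), ('bool',), and ('pi', a, b), where b maps each
# element of the decoding T(a) to a further code.
EMPTY = ('empty',)
BOOL = ('bool',)

def pi(a, b):
    return ('pi', a, b)

def T(code):
    # The decoder, by structural recursion on codes.  Forming a pi-code
    # already refers to T (the domain of b is T(a)): induction-recursion.
    if code[0] == 'empty':
        return []
    if code[0] == 'bool':
        return [False, True]
    _, a, b = code
    dom = T(a)
    # All dependent functions f with f(x) in T(b(x)) for x in dom.
    return [tuple(zip(dom, vals))
            for vals in product(*[T(b(x)) for x in dom])]
```

For instance, `pi(BOOL, lambda x: BOOL if x else EMPTY)` decodes to the empty type, since no function can produce a value in the empty fibre over `False`; this uses the dependency of the codomain code on the decoded argument.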
There are a number of other universe constructions which are
defined inductive-recursively: universe hierarchies, superuniverses
(Palmgren 1998; Rathjen, Griffor, and Palmgren 1998), and Mahlo
universes (Setzer 2000). These universes are analogues of certain
large cardinals in set theory: inaccessible, hyperinaccessible, and
Mahlo cardinals.
Other examples of inductive-recursive definitions include an
informal definition of computability predicates used by Martin-Lof in
an early normalization proof of intuitionistic type theory (Martin-Lof
1998 [1972]). There are also many natural examples of "small"
inductive-recursive definitions, where the recursively defined
(decoding) function returns an element of a type rather than a
type.
A large class of inductive-recursive definitions, including the
above, can be captured by a general schema (Dybjer 2000) which extends
the schema for inductive definitions mentioned above. As shown by
Setzer, intuitionistic type theory with this class of
inductive-recursive definitions is very strong proof-theoretically
(Dybjer and Setzer 2003). However, as proposed in recent unpublished
work by Setzer, it is possible to increase the strength of the theory
even further and define universes such as an *autonomous Mahlo
universe* which are analogues of even larger cardinals.
## 5. Meaning Explanations
The consistency of intuitionistic type theory relative to set
theory can be proved by model constructions. Perhaps the simplest
method is an interpretation whereby each type-theoretic concept is
given its corresponding set-theoretic meaning, as outlined
in
section 2.3. For example the type of functions \(A \rightarrow B\)
is interpreted as the set of all functions in the set-theoretic sense
between the set denoted by \(A\) and the set denoted by \(B\). To
interpret \(\U\) we need a set-theoretic universe which is closed under
all (set-theoretic analogues of) the type constructors. Such a
universe can be proved to exist if we assume the existence of an
inaccessible cardinal \(\kappa\) and interpret \(\U\) by \(V\_\kappa\)
in the cumulative hierarchy.
Alternatives are realizability models, and for intensional type
theory, a model of terms in normal forms. The latter can also be used
for proving decidability of the judgments of the theory.
Mathematical models only prove consistency relative to classical
set theory (or whatever other meta-theory we are using). Is it
possible to be convinced of the consistency of the theory in a more
direct way, achieving so-called *simple minded consistency*
(Martin-Lof 1984)? In fact, is there a way to explain what
it *means* for a judgment to be correct in a
direct *pre-mathematical* way? And given that we know what the
judgments mean can we then be convinced that the inference rules of
the theory are valid? An answer to this problem was proposed by
Martin-Lof in 1979 in the paper "Constructive Mathematics
and Computer Programming" (Martin-Lof 1982) and elaborated
later on in numerous lectures and notes, see for example,
Martin-Lof (1984, 1987). These meaning explanations for
intuitionistic type theory are also referred to as the *direct
semantics*, *intuitive semantics*, *informal
semantics*, *standard semantics*, or
the *syntactico-semantical* approach to meaning theory.
This meaning theory follows the Wittgensteinian meaning-as-use
tradition. The meaning is based on rules for building objects
(introduction rules) of types and computation rules (elimination
rules) for computing with these objects. A difference from much of the
Wittgensteinian tradition is that higher-order types like \(\N
\rightarrow \N\) are also given meaning by rules.
To explain the meaning of a judgment we must first know how the
terms in the judgment are computed to canonical form. Then the
formation rules explain how correct canonical types are built and the
introduction rules explain how correct canonical objects of such
canonical types are built. We quote (Martin-Lof 1982):
>
> A canonical type \(A\) is defined by prescribing how a canonical
> object of type \(A\) is formed as well as how two equal canonical
> objects of type \(A\) are formed. There is no limitation on this
> prescription except that the relation of equality which it defines
> between canonical objects of type \(A\) must be reflexive, symmetric
> and transitive.
In other words, a canonical type is equipped with an equivalence
relation on the canonical objects. Below we shall give a simplified
form of the meaning explanations, where this equivalence relation is
extensional identity of objects.
In spite of the *pre-mathematical* nature of this meaning
theory, its technical aspects can be captured as a mathematical model
construction similar to Kleene's *realizability* interpretation
of intuitionistic logic, see the next section. The realizers here are
the terms of type theory rather than the number realizers used by
Kleene.
### 5.1 Computation to Canonical Form
The meaning of a judgment is explained in terms of the computation
of the types and terms in the judgment. These computations stop when a
canonical form is reached. By canonical form we mean a term where the outermost form is a constructor (introduction form). These are the canonical forms used in lazy
functional programming (for example in the Haskell language).
For the purpose of illustration we consider meaning explanations
only for three type formers: \(\N, \Pi x {:} A.B\), and \(\U\). The
context free grammar for the terms of this fragment of Intuitionistic
Type Theory is as follows:
\[
a :: = 0 \mid \s(a) \mid \lambda
x.a \mid \N \mid \Pi x{:}a.a \mid \U \mid \R(a,a,xy.a) \mid a\,a .
\]
The canonical terms are generated by the following grammar:
\[v :: = 0 \mid \s(a) \mid \lambda x.a \mid \N \mid \Pi
x{:}a.a \mid \U ,\]
where \(a\) ranges over arbitrary, not necessarily canonical,
terms. Note that \(\s(a)\) is canonical even if \(a\) is not.
To explain how terms are computed to canonical form, we introduce the relation \(a \Rightarrow
v\) between *closed* terms \(a\) and canonical forms (values)
\(v\) given by the following computation rules:
\[
\frac{c \Rightarrow 0\hspace{1em}d \Rightarrow v}{\R(c,d,xy.e)\Rightarrow v}
\hspace{2em}
\frac{c \Rightarrow \s(a)\hspace{1em}e[x := d,y := \R(a,d,xy.e)]\Rightarrow v}{\R(c,d,xy.e)\Rightarrow v}
\]
\[
\frac{f\Rightarrow \lambda x.b\hspace{1em}b[x := a]\Rightarrow v}{f\,a \Rightarrow v}
\]
in addition to the rule
\[v \Rightarrow v\]
stating that a canonical term has itself as value.
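These computation rules can be transcribed directly into a small Python evaluator for closed terms of the fragment. Terms are tagged tuples; binders are represented by Python functions, so the substitution \(b[x := a]\) becomes function application (an implementation convenience, not part of the official presentation; all names are illustrative):

```python
# Closed terms: ('zero',), ('suc', t), ('lam', f), ('app', f, t),
# and ('rec', c, d, e) for R(c, d, xy.e), with f unary and e binary
# Python functions standing for the binders.

def whnf(t):
    """Compute a term to canonical form (its value), following the rules."""
    tag = t[0]
    if tag in ('zero', 'suc', 'lam'):
        return t                       # v => v: canonical terms are values
    if tag == 'app':                   # f => lam x.b and b[x := a] => v
        _, f, a = t
        return whnf(whnf(f)[1](a))
    _, c, d, e = t                     # the recursor R(c, d, xy.e)
    v = whnf(c)
    if v[0] == 'zero':                 # c => 0 and d => v
        return whnf(d)
    a = v[1]                           # c => s(a): unfold one step
    return whnf(e(a, ('rec', a, d, e)))

def to_int(t):
    """Fully evaluate a numeral; recall s(a) is canonical even if a is not."""
    v = whnf(t)
    return 0 if v[0] == 'zero' else 1 + to_int(v[1])

def num(n):
    return ('zero',) if n == 0 else ('suc', num(n - 1))

# Addition via the recursor: m + n = R(m, n, xy. s(y)).
def add(m, n):
    return ('rec', m, n, lambda x, y: ('suc', y))
```

Note that `whnf` computes only to canonical form, as the rules prescribe; `to_int` shows the repeated computation of canonical forms whose well-foundedness the meaning explanations tacitly assume.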
### 5.2 The Meaning of Categorical Judgments
A categorical judgment is a judgment where the context is empty and
there are no free variables.
The meaning of the categorical judgment \(\vdash A\) is that \(A\)
has a canonical type as value. In our fragment this means that one
of the following holds:
* \(A \Rightarrow \N\),
* \(A \Rightarrow \U\),
* \(A \Rightarrow \Pi x {:} B. C\) and furthermore that \(\vdash B\) and
\(x {:} B \vdash C\).
The meaning of the categorical judgment \(\vdash a {:} A\) is that
\(a\) has a canonical term of the canonical type of \(A\) as value. In
our fragment this means that one of the following holds:
* \(A \Rightarrow \N\) and either \(a \Rightarrow 0\) or \(a
\Rightarrow \s(b)\) and \(\vdash b {:} \N\),
* \(A \Rightarrow \U\) and either \(a \Rightarrow \N\) or \(a
\Rightarrow \Pi x {:} b. c\) where furthermore \(\vdash b {:} \U\) and
\(x {:} b \vdash c {:} \U\),
* \(A \Rightarrow \Pi x {:} B. C\) and \(a \Rightarrow \lambda x.c\) and
\(x {:} B \vdash c {:} C\).
The meaning of the categorical judgment \(\vdash A = A'\) is
that \(A\) and \(A'\) have the same canonical types as values. In
our fragment this means that one of the following holds:
* \(A \Rightarrow \N\) and \(A' \Rightarrow \N\),
* \(A \Rightarrow \U\) and \(A' \Rightarrow \U\),
* \(A \Rightarrow \Pi x {:} B. C\) and \(A' \Rightarrow \Pi x {:}
B'. C'\) and furthermore that \(\vdash B = B'\) and \(x {:} B
\vdash C = C'\).
The meaning of the categorical judgment \(\vdash a = a' {:} A\)
is explained in a similar way.
It is a tacit assumption of the meaning explanations that the
repeated computation of canonical forms is well-founded. For example,
a natural number is the result of finitely many computations of the
successor function \(\s\) ending with \(0\). A computation which produces
infinitely many occurrences of \(\s\) does not yield a natural number in
intuitionistic type theory. (However, there are extensions of type
theory, for example, partial type theory and non-standard type
theory, where such infinite computations can occur,
see section 7.3. To justify the rules of such
theories the present meaning explanations do not suffice.)
### 5.3 The Meaning of Hypothetical Judgments
According to Martin-Lof (1982) the meaning of a hypothetical
judgment is reduced to the meaning of the categorical judgments by
substituting the closed terms of appropriate types for the free
variables. For example, the meaning of
\[x\_1 {:} A\_1, \ldots, x\_n {:} A\_n \vdash a {:} A\]
is that the categorical judgment
\[\vdash a[x\_1 := a\_1, \ldots , x\_n := a\_n] : A[x\_1 := a\_1, \ldots ,
x\_n := a\_n]\]
is valid whenever the categorical judgments
\[\vdash a\_1 {:} A\_1, \ldots , \vdash a\_n[x\_1 := a\_1, \ldots , x\_{n-1} :=
a\_{n-1}] {:} A\_n[x\_1 := a\_1, \ldots , x\_{n-1} := a\_{n-1}]\]
are valid.
## 6. Mathematical Models
### 6.1 Categorical Models
#### 6.1.1 Hyperdoctrines
Curry's correspondence between propositions and types was extended
to predicate logic in the late 1960s by Howard (1980) and de Bruijn
(1970). At around the same time Lawvere developed related ideas in
categorical logic. In particular he proposed the notion of
a *hyperdoctrine* (Lawvere 1970) as a categorical model of
(typed) predicate logic. A hyperdoctrine is an indexed category \(P {:}
T^{op} \rightarrow \mathbf{Cat}\), where \(T\) is a category where the
objects represent types and the arrows represent terms. If \(A\) is a
type then the *fibre* \(P(A)\) is a category of propositions
depending on a variable \(x {:} A\). The arrows in this category are
proofs \(Q \vdash R\) and can be thought of as
proof-objects. Moreover, since we have an indexed category, for each
arrow \(t\) from \(A\) to \(B\), there is a reindexing functor \(P(B)
\rightarrow P(A)\) representing substitution of \(t\) for a variable
\(y {:} B\). The category \(P(A)\) is assumed to be cartesian closed and
conjunction and implications are modelled by products and exponentials
in this category. The quantifiers \(\exists\) and \(\forall\) are
modelled by the left and right adjoints of the reindexing
functor. Moreover, Lawvere added further structure to hyperdoctrines
to model identity propositions (as left adjoints to a diagonal
functor) and a comprehension schema.
#### 6.1.2 Contextual categories, categories with attributes, and categories with families
Lawvere's definition of hyperdoctrines preceded intuitionistic type
theory but did not go all the way to identifying propositions and types. Nevertheless Lawvere influenced
Scott's (1970) work on *constructive validity*, an early
precursor of intuitionistic type theory. After Martin-Lof
(1998 [1972]) had presented a
more definite formulation of the theory, the first work on categorical
models was presented by Cartmell in 1978 with his notions of category
with attributes and contextual category (Cartmell 1986). However, we
will not define these structures here but instead the closely
related *categories with families* (Dybjer 1996) which are
formulated so that they directly model a variable-free version of a
formulation of intuitionistic type theory with explicit substitutions
(Martin-Lof 1995).
A category with families is a functor \(T {:} C^{op} \rightarrow
\mathbf{Fam}\), where \(\mathbf{Fam}\) is the category of families of
sets. The category \(C\) is the category of contexts and
substitutions. If \(\Gamma\) is an object of \(C\) (a context), then
\(T(\Gamma)\) is a family of sets indexed by the types in context
\(\Gamma\): for each such type \(A\), the set of terms of type \(A\)
which depend on variables in \(\Gamma\). If \(\gamma\) is an arrow in \(C\)
representing a substitution, then the arrow part of the functor
represents substitution of \(\gamma\) in types and terms. A category
with families also has a terminal object and a notion of context
comprehension, reminiscent of Lawvere's comprehension in
hyperdoctrines. The terminal object captures the rules for empty
contexts and empty substitutions. Context comprehension captures the
rules for extending contexts and substitutions, and has projections
capturing weakening and assumption of the last variable.
Categories with families are algebraic structures which model the
general rules of dependent type theory, those which come before the
rules for specific type formers, such as \(\Pi\), \(\Sigma\), identity
types, universes, etc. In order to model a specific type former,
corresponding extra structure needs to be added.
#### 6.1.3 Locally cartesian closed categories
From a categorical perspective the above-mentioned structures may
appear somewhat special and ad hoc. A more regular structure which
gives rise to models of intuitionistic type theory are the locally
cartesian closed categories. These are categories with a terminal
object, where each slice category is cartesian closed. It can be shown
that the pullback functor has a left and a right adjoint, representing
\(\Sigma\)- and \(\Pi\)-types, respectively. Locally cartesian closed
categories correspond to intuitionistic type theory with extensional
identity types and \(\Sigma\) and \(\Pi\)-types (Seely 1984,
Clairambault and Dybjer 2014). It should be remarked that the
correspondence with intuitionistic type theory is somewhat indirect,
since a coherence problem, in the sense of category theory, needs to
be solved. The problem is that in locally cartesian closed categories
type substitution is represented by pullbacks, but these are only
defined up to isomorphism, see Curien 1993 and Hofmann 1994.
### 6.2 Set-Theoretic Model
Intuitionistic type theory is a possible framework for constructive
mathematics in Bishop's sense. Such constructive mathematics is
compatible with classical mathematics: a constructive proof in
Bishop's sense can directly be understood as a proof in classical
logic. A formal way to understand this is by constructing a
set-theoretic model of intuitionistic type theory, where each concept
of type theory is interpreted as the corresponding concept in
Zermelo-Fraenkel Set Theory. For example, a type is interpreted as a
set, and the type of functions in \(A \rightarrow B\) is interpreted
as the set of all functions in the set-theoretic sense from the set
representing \(A\) to the set representing \(B\). The type of natural
numbers is interpreted as the set of natural numbers. The
interpretations of identity types, and \(\Sigma\) and \(\Pi\)-types
were already discussed in the introduction. And as already mentioned,
to interpret the type-theoretic universe we need an inaccessible
cardinal.
#### 6.2.1 Model in CZF
It can be shown that the interpretation outlined above can be
carried out in Aczel's constructive set theory CZF. Hence it does not
depend on classical logic or impredicative features of set theory.
### 6.3 Realizability Models
The set-theoretic model can be criticized on the grounds that it
models the type of functions as the set of all set-theoretic
functions, in spite of the fact that a function in type theory is
always computable, whereas a set-theoretic function may not be.
To remedy this problem one can instead construct
a *realizability model* whereby one starts with a set
of *realizers*. One can here follow Kleene's numerical
realizability closely where functions are realized by codes for Turing
machines. Or alternatively, one can let realizers be terms in a lambda
calculus or combinatory logic possibly extended with appropriate
constants. Types are then represented by sets of realizers, or often
as partial equivalence relations on the set of realizers. A partial
equivalence relation is a convenient way to represent a type with a
notion of "equality" on it.
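A small illustration of why partial equivalence relations are convenient: taking pairs of integers as realizers, the type of rationals can be represented by the PER sketched below (the name `rat` is illustrative). Pairs with zero denominator lie outside the domain of the relation, and the relation identifies different realizers of the same rational:

```python
# Realizers are pairs of integers (p, q).  The PER relates (p, q) and
# (r, s) when both denominators are nonzero and p/q = r/s.  Its domain
# (the realizers related to themselves) carves out the well-formed
# fractions, and its equality ignores the choice of realizer.
def rat(x, y):
    (p, q), (r, s) = x, y
    return q != 0 and s != 0 and p * s == r * q
```

The relation is symmetric and transitive but not reflexive on all realizers, which is exactly the "partial" in partial equivalence relation.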
There are many variations on the theme of realizability model. Some
such models tacitly
assume set theory as the metatheory (Aczel 1980, Beeson 1985), whereas others explictly assume a
constructive metatheory (Smith 1984).
Realizability models are also models of the extensional version of
intuitionistic type theory (Martin-Lof 1982) which will be presented
in section 7.1 below.
### 6.4 Model of Normal Forms and Type-Checking
In intuitionistic type theory each type and each well-typed term
has a normal form. A consequence of this normal form property is that
all the judgments are decidable: for example, given a correct context
\(\Gamma\), a correct type \(A\) and a possibly ill-typed term \(a\),
there is an algorithm for deciding whether \(\Gamma \vdash a {:}
A\). This type-checking algorithm is the key component of
proof-assistants for Intensional Type Theory, such as Agda.
The correctness of the normal form property can be expressed as a
model of normal forms, where each context, type, and term are
interpreted as their respective normal forms.
## 7. Variants of the Theory
### 7.1 Extensional Type Theory
In extensional intuitionistic type theory (Martin-Lof 1982) the
rules of \(\I\)-elimination and \(\I\)-equality for the general identity
type are replaced by the following two rules:
\[\frac{\Gamma\vdash c {:} \I(A,a,a')} {\Gamma \vdash a=a' {:} A} \hspace{3em}
\frac{\Gamma\vdash c{:}\I(A,a,a')} {\Gamma\vdash c = \r {:} \I(A,a,a')}\]
The first causes the distinction between
propositional and judgmental equality to disappear. The second forces
identity proofs to be unique. Unlike the rules for the intensional
identity type former, the rules for extensional identity types do not
fit into the schema for inductively defined types mentioned above.
These rules are however justified by the meaning explanations in
Martin-Lof (1982). This is because the categorical judgment
\[\vdash c {:} \I(A,a,a')\]
is valid iff \(c \Rightarrow \r\) and the judgment \(\vdash a = a' {:}
A\) is valid.
However, these rules make it possible to define terms without
normal forms. Since the type-checking algorithm relies on the
computation of normal forms of types, it no longer works for
extensional type theory, see (Castellan, Clairambault, and Dybjer 2015).
On the other hand, certain constructions which are not available in
intensional type theory are possible in extensional type theory. For
example, function extensionality
\[(\Pi x {:} A. \I(B,f\,x,f'\,x)) \rightarrow \I(\Pi x{:}A.B,f,f')\]
is a theorem.
Another example is that \(\W\)-types can be used for encoding other
inductively defined types in Extensional Type Theory. For example, the
Brouwer ordinals of the second and higher number classes can be
defined as special instances of the \(\W\)-type (Martin-Lof 1984). More
generally, it can be shown that all inductively defined types which
are given by a *strictly positive type operator* can be
represented as instances of well-founded trees (Dybjer 1997).
### 7.2 Univalent Foundations and Homotopy Type Theory
Univalent foundations refer to Voevodsky's programme for a new
foundation of mathematics based on intuitionistic type theory and
employing ideas from homotopy theory. Here every type \(A\) is
considered as a space, and the identity type \(\I(A,a,b)\) is the space
of paths from point \(a\) to point \(b\) in \(A\). Iterated identity types represent higher homotopies, e.g.
\[\I(\I(A,a,b),f,g)\]
is the space of homotopies between \(f\) and \(g\).
The notion of
ordinary set can be thought of as a discrete space \(A\) where
all paths in \(\I(A,a,b)\) are trivial loops.
The origin of these ideas
was the remarkable discovery by Hofmann and Streicher (1998) that the axioms of
intensional type theory do not force all proofs of an identity to be equal, that is, not all paths need to be trivial. This was
shown by a model construction where each type is interpreted as a
groupoid.
Further connections between identity
types and notions from homotopy theory and higher categories were
subsequently discovered by Awodey and Warren (2009), Lumsdaine (2010), and
van den Berg and Garner (2011). Voevodsky realized that the whole intensional intuitionistic type
theory could be modelled by a well-known category studied in homotopy
theory, namely the Kan simplicial sets. Inspired by this model he
introduced the *univalence axiom*. For a universe
\(\U\) of small types, this axiom states that the substitution map associated with
the \(\J\)-operator
\[\I(\U,a,b) \longrightarrow \T(a) \cong \T(b)\]
is an equivalence. Equivalence (\(\cong\)) here refers to a general notion of
equivalence of higher dimensional objects, as in the
sequence *equal elements, isomorphic sets, equivalent groupoids,
biequivalent bigroupoids*, etc. The univalence axiom expresses
that "everything is preserved by equivalence", thereby
realizing the informal categorical slogan that all categorical
constructions are preserved by isomorphism, and its generalization,
that all constructions of categories are preserved by equivalence of
categories, etc.
The axiom of univalence was originally justified by
Voevodsky's simplicial set model. This model is, however, not
constructive, and Bezem, Coquand, and Huber (2014 [2013]) have more
recently proposed a model in Kan cubical sets.
Although univalent foundations concern preservation of mathematical
structure in general, strongly inspired by category theory,
applications within homotopy theory are particularly actively
investigated. Intensional type theory extended with the univalence
axiom and so called higher inductive types is therefore also called
"homotopy type theory". We refer to the entry on
type theory for further details.
### 7.3 Partial and Non-Standard Type Theory
Intuitionistic type theory is not intended to model Brouwer's
notion of *free choice sequence*, although lawlike choice
sequences can be modelled as functions from \(\N\). However, there are
extensions of the theory which incorporate such choice sequences:
namely *partial type theory* and *non-standard type
theory* (Martin-Lof 1990). The types in partial type theory
can be interpreted as Scott domains (Martin-Lof 1986, Palmgren
and Stoltenberg-Hansen 1990, Palmgren 1991). In this way a type \(\N\)
which contains an infinite number \(\infty\) can be
interpreted. However, in partial type theory all types are inhabited
by a least element \(\bot\), and thus the propositions as types
principle is not maintained. Non-standard type theory incorporates
non-standard elements, such as an infinite number \(\infty {:} \N\)
without inhabiting all types.
### 7.4 Impredicative Type Theory
The inconsistent version of intuitionistic type theory of
Martin-Lof (1971a) was based on the strongly impredicative axiom that
there is a type of all types. However, Coquand and Huet (1988) showed with their
calculus of constructions, that there is a powerful impredicative but
consistent version of type theory. In this theory the universe \(\U\)
(usually called \({\bf Prop}\) in this theory) is closed under the following formation rule
for cartesian product of families of types:
\[\frac{\Gamma \vdash A \hspace{2em} \Gamma, x {:} A \vdash B {:} \U}
{\Gamma \vdash \Pi x {:} A. B {:} \U}\]
This rule is more general than the rule for constructing small
cartesian products of families of small types in intuitionistic type
theory, since we can now quantify over arbitrary types \(A\),
including \(\U\), and not just small types. We say that \(\U\) is impredicative since a new element of it can be constructed by quantifying over all of its elements, including the very element being constructed.
The motivation for this theory was that inductively defined types
and families of types become definable in terms of impredicative
quantification. For example, the type of natural numbers can be
defined as the type of Church numerals:
\[\N = \Pi X {:} \U. X \rightarrow (X \rightarrow X) \rightarrow X {:} \U\]
This is an impredicative definition, since it is a small type which
is constructed by quantification over all small types. Similarly we
can define an identity type by impredicative quantification:
\[\I(A,a,a')= \Pi X {:} A \rightarrow \U. X\,a \rightarrow X\,a' {:} \U\]
This is Leibniz' definition of equality: \(a\) and \(a'\) are
equal iff they satisfy the same properties (ranged over by \(X\)).
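The Church numeral encoding can be imitated with untyped lambda terms in Python; the impredicative quantification \(\Pi X {:} \U\) has no runtime counterpart, so the abstraction over \(X\) is simply dropped (all names are illustrative):

```python
# Church numeral n: given a zero z and a successor s, iterate s n times.
zero = lambda z: lambda s: z

def suc(n):
    return lambda z: lambda s: s(n(z)(s))

# Addition: iterate suc m times starting from n.
def add(m, n):
    return m(n)(suc)

# Decode a Church numeral to a Python int by instantiating z and s.
def decode(n):
    return n(0)(lambda k: k + 1)
```

The argument order (first the zero case, then the step) matches the type \(\Pi X {:} \U.\, X \rightarrow (X \rightarrow X) \rightarrow X\) above.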
Unlike in intuitionistic type theory, the function type in
impredicative type theory cannot be interpreted set-theoretically in a
straightforward way, see (Reynolds 1984).
### 7.5 Proof Assistants
In 1979 Martin-Löf wrote the paper "Constructive Mathematics
and Computer Programming" where he explained that intuitionistic
type theory is a programming language which can also be used as a
formal foundation for constructive mathematics. Shortly after that,
interactive proof systems which help the user to derive valid
judgments in the theory, so called proof assistants, were
developed.
One of the first systems was the NuPrl system (PRL Group 1986),
which is based on an extensional type theory similar to that of
Martin-Löf (1982).
Systems based on versions of intensional type theory go back to the
type-checker for the impredicative calculus of constructions which was
written around 1984 by Coquand and Huet. This led to the Coq system,
which is based on the calculus of inductive constructions
(Paulin-Mohring 1993), a theory which extends the calculus of
constructions with primitive inductive types and families. The
encodings of the pure calculus of constructions were found to be
inconvenient, since the full elimination rules could not be derived
and instead had to be postulated. We also remark that the calculus of
inductive constructions has a subsystem, the predicative calculus of
inductive constructions, which follows the principles of
Martin-Löf's intuitionistic type theory.
Agda is another proof assistant which is based on the logical
framework formulation of intuitionistic type theory, but adds numerous
features inspired by practical programming languages (Norell 2008). It
is an intensional theory with decidable judgments and a type-checker
similar to Coq's. However, in contrast to Coq it is based on
Martin-Löf's predicative intuitionistic type theory.
There are several other systems based either on the calculus of
constructions (Lego, Matita, Lean) or on intuitionistic type theory
(Epigram, Idris); see (Pollack 1994; Asperti *et al.* 2011; de Moura *et al.* 2015;
McBride and McKinna 2004; Brady 2011).
## 1. Overview
We begin with a bird's eye view of some important aspects of
intuitionistic type theory. Readers who are unfamiliar with the theory
may prefer to skip it on a first reading.
The origins of intuitionistic type theory are Brouwer's
intuitionism and Russell's type theory. Like
Church's classical simple theory of types
it is based on the lambda calculus with types,
but differs from it in that it is based on the propositions-as-types
principle, discovered by Curry (1958) for propositional logic and
extended to predicate logic by Howard (1980) and de Bruijn
(1970). This extension was made possible by the introduction of
indexed families of types (dependent types) for representing the
predicates of predicate logic. In this way all logical connectives and
quantifiers can be interpreted by type formers. In intuitionistic type
theory further types are added, such as a type of natural numbers, a
type of small types (a universe) and a type of well-founded trees. The
resulting theory contains intuitionistic number theory (Heyting
arithmetic) and much more.
The theory is formulated in natural deduction where the rules for
each type former are classified as formation, introduction,
elimination, and equality rules. These rules exhibit certain
symmetries between the introduction and elimination rules following
Gentzen's and Prawitz' treatment of natural deduction,
as explained in the entry on
proof-theoretic semantics.
The elements of propositions, when interpreted as types, are
called *proof-objects*. When proof-objects are added to the
natural deduction calculus it becomes a typed lambda calculus with
dependent types, which extends Church's original typed lambda
calculus. The equality rules are computation rules for the terms of
this calculus. Each function definable in the theory is total and
computable. Intuitionistic type theory is thus a typed functional
programming language with the unusual property that all programs
terminate.
Intuitionistic type theory is not only a formal logical system but
also provides a comprehensive philosophical framework for
intuitionism. It is an *interpreted language*, where the
distinction between the *demonstration of a judgment* and
the *proof of a proposition* plays a fundamental role (Sundholm
2012). The framework clarifies the Brouwer-Heyting-Kolmogorov
interpretation of intuitionistic logic and extends it to the more
general setting of intuitionistic type theory. In doing so it provides
a general conception not only of what a constructive proof is, but
also of what a constructive mathematical object is. The meaning of the
judgments of intuitionistic type theory is explained in terms of
computations of the canonical forms of types and terms. These
informal, intuitive meaning explanations are
"pre-mathematical" and should be contrasted to formal
mathematical models developed inside a standard mathematical framework
such as set theory.
This meaning theory also justifies a variety of inductive,
recursive, and inductive-recursive definitions. Although
proof-theoretically strong notions can be justified, such as analogues
of certain large cardinals, the system is considered
predicative. Impredicative definitions of the kind found in
higher-order logic, intuitionistic set theory, and topos theory are
not part of the theory. Neither is Markov's principle, and thus the
theory is distinct from Russian constructivism.
An alternative formal logical system for predicative constructive
mathematics is Myhill and Aczel's
constructive Zermelo-Fraenkel set theory
(CZF). This theory, which is based on
intuitionistic first-order predicate logic and weakens some of the
axioms of classical Zermelo-Fraenkel Set Theory, has a natural
interpretation in intuitionistic type theory. Martin-Löf's meaning
explanations thus also indirectly form a basis for CZF.
Variants of intuitionistic type theory underlie several widely used
proof assistants, including NuPRL, Coq, and Agda. These proof
assistants are computer systems that have been used for formalizing
large and complex proofs of mathematical theorems, such as the Four
Colour Theorem in graph theory and the Feit-Thompson Theorem in finite
group theory. They have also been used to prove the correctness of a realistic C compiler (Leroy 2009) and other
computer software.
Philosophically and practically, intuitionistic type theory is a
foundational framework where constructive mathematics and computer
programming are, in a deep sense, the same. This point has been
emphasized by (Gonthier 2008) in the paper in which he describes his
proof of the Four Colour Theorem:
>
> The approach that proved successful for this proof was to turn
> almost every mathematical concept into a data structure or a program
> in the Coq system, thereby converting the entire enterprise into one
> of program verification.
>
>
>
## 2. Propositions as Types
### 2.1 Intuitionistic Type Theory: a New Way of Looking at Logic?
Intuitionistic type theory offers a new way of analyzing logic,
mainly through its introduction of explicit proof objects. This
provides a direct computational interpretation of logic, since there
are computation rules for proof objects. As regards expressive power,
intuitionistic type theory may be considered as an extension of
first-order logic, much as higher order logic is, but predicative.
#### 2.1.1 A Type Theory
Russell developed
type theory in response to his discovery
of a paradox in naive set theory. In his ramified type theory
mathematical objects are classified according to their *types*:
the type of propositions, the type of objects, the type of properties
of objects, etc. When Church developed his
simple theory of types on the
basis of a typed version of his lambda calculus he added
the rule that there is a type of functions between any two types of
the theory. Intuitionistic type theory further extends the simply
typed lambda calculus with dependent types, that is, indexed families
of types. An example is the family of types of \(n\)-tuples indexed by
\(n\).
Types have been widely used in programming for a long time. Early
high-level programming languages introduced types of integers and
floating point numbers. Modern programming languages often have rich
type systems with many constructs for forming new
types. Intuitionistic type theory is a functional programming language
where the type system is so rich that practically any conceivable
property of a program can be expressed as a type. Types can thus be
used as specifications of the task of a program.
#### 2.1.2 An intuitionistic logic with proof-objects
Brouwer's analysis of logic led him to an intuitionistic logic
which rejects the law of excluded middle and the law of double
negation. These laws are not valid in intuitionistic type theory. Thus
it does not contain classical (Peano) arithmetic but only
intuitionistic (Heyting) arithmetic. (It is another matter that Peano
arithmetic can be interpreted in Heyting arithmetic by the double
negation interpretation, see the entry on
intuitionistic logic.)
Consider a theorem of intuitionistic arithmetic, such as the
division theorem
\[\forall m, n. m > 0 \supset \exists q, r. mq + r = n
\wedge m > r \]
A formal proof (in the usual sense) of this theorem is a sequence
(or tree) of formulas, where the last (root) formula is the theorem
and each formula in the sequence is either an axiom (a leaf) or the
result of applying an inference rule to some earlier (higher)
formulas.
When the division theorem is proved in intuitionistic type theory,
we do not only build a formal proof in the usual sense but also
a *construction* (or *proof-object*)
"\(\divi\)" which witnesses the truth of the theorem. We
write
\[\divi : \forall m, n {:} \N.\, m > 0 \supset \exists q, r {:} \N.\, mq + r = n \wedge m > r \]
to express that \(\divi\) is a proof-object for the division
theorem, that is, an element of the type representing the division
theorem. When propositions are represented as types, the
\(\forall\)-quantifier is identified with the dependent function space
former (or general cartesian product) \(\Pi\), the
\(\exists\)-quantifier with the dependent pairs type former (or
general disjoint sum) \(\Sigma\), conjunction \(\wedge\) with cartesian product \( \times \), the identity relation = with the
type former \(\I\) of proof-objects of identities, and the greater
than relation \(>\) with the type former \(\GT\) of
proof-objects of greater-than statements. Using
"type-notation" we thus write
\[
\divi : \Pi m, n {:} \N.\, \GT(m,0)\rightarrow
\Sigma q, r {:} \N.\, \I(\N,mq + r,n) \times \GT(m,r)
\]
to express that the proof object "\(\divi\)" is a
function which maps two numbers \(m\) and \(n\) and a proof-object \(p\) witnessing that
\(m > 0\) to a quadruple \((q,(r,(s,t)))\), where \(q\) is the quotient
and \(r\) is the remainder obtained when dividing \(n\) by \(m\). The
third component \(s\) is a proof-object witnessing the fact that \(mq
+ r = n\) and the fourth component \(t\) is a proof object witnessing \(m > r \).
Crucially, \(\divi\) is not only a function in the classical sense;
it is also a function in the intuitionistic sense, that is, a program
which computes the output \((q,(r,(s,t)))\) when given \(m\), \(n\), \(p\)
as inputs. This program is a term in a lambda calculus with special
constants, that is, a program in a functional programming
language.
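To convey what such a proof-object looks like at runtime, here is a hedged Python sketch of the program \(\divi\). Python cannot track the proof components in its types, so the witnesses \(s\) and \(t\) are represented by placeholder strings and the facts they witness are checked by assertions instead; the argument `p`, standing in for the proof of \(m > 0\), is never inspected.

```python
def div(m, n, p=None):
    # Runtime sketch of the proof-object div: p stands in for the
    # (uninspected) proof of m > 0; the strings s and t stand in for the
    # proof-objects s : I(N, m*q + r, n) and t : GT(m, r), which are here
    # merely checked by assertions rather than enforced by a type system.
    assert m > 0
    q, r = n // m, n % m
    s = "witness: m*q + r = n"
    t = "witness: m > r"
    assert m * q + r == n and m > r
    return (q, (r, (s, t)))
```

For instance, `div(3, 10)` computes the quadruple with quotient `3` and remainder `1`.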
#### 2.1.3 An extension of first-order predicate logic
Intuitionistic type theory can be considered as an extension of
first-order logic, much as higher order logic is an extension of first
order logic. In higher order logic we find some individual domains
which can be interpreted as any sets we like. If there are relational
constants in the signature these can be interpreted as any relations
between the sets interpreting the individual domains. On top of that
we can quantify over relations, and over relations of relations,
etc. We can think of higher order logic as first-order logic equipped
with a way of introducing new domains of quantification: if \(S\_1,
\ldots, S\_n\) are domains of quantification then \((S\_1,\ldots,S\_n)\)
is a new domain of quantification consisting of all the n-ary
relations between the domains \(S\_1,\ldots,S\_n\). Higher order logic
has a straightforward set-theoretic interpretation where
\((S\_1,\ldots,S\_n)\) is interpreted as the power set \(P(A\_1 \times
\cdots \times A\_n)\) where \(A\_i\) is the interpretation of \(S\_i\),
for \(i=1,\ldots,n\). This is the kind of higher order logic or simple
theory of types that Ramsey, Church and others introduced.
Intuitionistic type theory can be viewed in a similar way, only
here the possibilities for introducing domains of quantification are
richer, one can use \(\Sigma, \Pi, +, \I\) to construct new ones from
old. (Section 3.1; Martin-Löf 1998
[1972]). Intuitionistic type theory has a straightforward
set-theoretic interpretation as well, where \(\Sigma\), \(\Pi\) etc
are interpreted as the corresponding set-theoretic constructions; see
below. We can add to intuitionistic type theory unspecified individual
domains just as in HOL. These are interpreted as sets as for HOL. Now
we exhibit a difference from HOL: in intuitionistic type theory we can
introduce unspecified family symbols. We can introduce \(T\) as a
family of types over the individual domain \(S\):
\[T(x)\; {\rm type} \;(x{:}S).\]
If \(S\) is interpreted as \(A\), \(T\) can be interpreted as any
family of sets indexed by \(A\). As a non-mathematical example, we can
render the binary relation *loves* between members of an
individual domain of *people* as follows. Introduce the binary
family Loves over the domain People
\[{\rm Loves}(x,y)\; {\rm type}\; (x{:}{\rm People}, y{:}{\rm People}).\]
The interpretation can be any family of sets \(B\_{x,y}\) (\(x{:}A\),
\(y{:}A\)). How does this cover the standard notion of relation? Suppose
we have a binary relation \(R\) on \(A\) in the familiar set-theoretic
sense. We can make a binary family corresponding to this as
follows
\[
B\_{x,y} =
\begin{cases}
\{0\} &\text{if } R(x,y) \text{ holds} \\
\varnothing &\text{if } R(x,y) \text{ is false.}
\end{cases}\]
Now clearly \(B\_{x,y}\) is nonempty if and only if \(R(x,y)\)
holds. (We could have chosen any other element from our set theoretic
universe than 0 to indicate truth.) Thus from any relation \(R\) we can
construct a family \(B\) such that \(R(x,y)\) holds exactly when
\(B\_{x,y}\) is non-empty. Note that this interpretation does not care what the
proof for \(R(x,y)\) is, just that it holds. Recall that
intuitionistic type theory interprets propositions as types, so
\(p{:} {\rm Loves}({\rm John}, {\rm Mary})\) means that \({\rm Loves}({\rm
John}, {\rm Mary})\) is true.
The interpretation of relations as families allows for keeping
track of proofs or evidence that \(R(x,y)\) holds, but we may also
choose to ignore it.
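The passage from a set-theoretic relation to a family of sets can be sketched directly in Python, with sets of values standing in for types. The relation used below, and the names `family_from_relation` and `loves`, are illustrative inventions, not part of the theory.

```python
def family_from_relation(R):
    # B_{x,y} = {0} if R(x,y) holds, and the empty set otherwise.
    return lambda x, y: {0} if R(x, y) else set()

# Hypothetical example: a Loves relation over a small domain of people.
loves = family_from_relation(lambda x, y: (x, y) in {("John", "Mary")})
```

Here `loves("John", "Mary")` is the non-empty set `{0}`, while `loves("Mary", "John")` is empty, mirroring the equivalence between \(R(x,y)\) holding and \(B\_{x,y}\) being inhabited.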
In Montague semantics,
higher order logic is used to give
semantics of natural language (and examples as above). Ranta (1994)
introduced the idea to instead employ intuitionistic type theory to
better capture sentence structure with the help of dependent
types.
In contrast, how would the mathematical relation \(>\) between
natural numbers be handled in intuitionistic type theory? First of all
we need a type of numbers \(\N\). We could in principle introduce an
unspecified individual domain \(\N\), and then add axioms just as we
do in first-order logic when we set up the axiom system for Peano
arithmetic. However this would not give us the desirable computational
interpretation. So as explained below we lay down introduction rules
for constructing new natural numbers in \(\N\) and elimination and
computation rules for defining functions on \(\N\) (by recursion). The
standard order relation \(>\) should satisfy
\[\mbox{\(x > y\) iff there exists \(z{:} \N\) such that \(y+z+1 = x\)}.
\]
The right hand side is rendered as \(\Sigma z{:}\N.\, \I(\N,y+z+1,x)\) in
intuitionistic type theory, and we take this as the definition of the
relation \(>\). (Here \(+\) is defined by recursive equations and \(\I\) is the
identity type construction). Now all the properties of \(>\) are
determined by the mentioned introduction and elimination and
computation rules for \(\N\).
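The definition of \(>\) as \(\Sigma z{:}\N.\, \I(\N,y+z+1,x)\) can be sketched in Python as a witness-producing function. The string `"refl"` stands in for the identity proof-object, and returning `None` when no witness exists is a convenience of this sketch, not a feature of the theory.

```python
def gt_witness(x, y):
    # A proof-object for x > y, rendered as Σ z:N. I(N, y+z+1, x): a pair
    # (z, refl) with y + z + 1 = x, or None when no witness exists.
    if x > y:
        z = x - y - 1
        assert y + z + 1 == x
        return (z, "refl")  # "refl" stands in for the identity proof-object
    return None
```

For example, `gt_witness(5, 2)` returns the pair `(2, "refl")`, since \(2 + 2 + 1 = 5\).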
#### 2.1.4 A logic with several forms of judgment
The type system of intuitionistic type theory is very
expressive. As a consequence the well-formedness of a type is no
longer a simple matter of parsing, it is something which needs to be
proved. Well-formedness of a type is one form of *judgment* of
intuitionistic type theory. Well-typedness of a term with respect to a
type is another. Furthermore, there are equality judgments for types
and terms. This is yet another way in which intuitionistic type theory
differs from ordinary first order logic with its focus on the sole
judgment expressing the truth of a proposition.
#### 2.1.5 Semantics
While a standard presentation of first-order logic would follow
Tarski in defining the notion of model, intuitionistic type theory
follows the tradition of Brouwerian meaning theory as further
developed by Heyting and Kolmogorov, the so called BHK-interpretation
of logic. The key point is that the proof of an implication \(A
\supset B \) is a *method* that transforms a proof of \(A\) to
a proof of \(B\). In intuitionistic type theory this method is
formally represented by the program \(f {:} A \supset B\) or \(f {:} A
\rightarrow B\): the type of proofs of an implication \(A \supset B\)
is the type of functions which map proofs of \(A\) to proofs of
\(B\).
Moreover, whereas Tarski semantics is usually presented
meta-mathematically, and assumes set theory, Martin-Lof's meaning
theory of intuitionistic type theory should be understood directly and
"pre-mathematically", that is, without assuming a
meta-language such as set theory.
#### 2.1.6 A functional programming language
Readers with a background in the lambda calculus and functional
programming can get an alternative first approximation of
intuitionistic type theory by thinking about it as a typed functional
programming language in the style of Haskell or one of the dialects of
ML. However, it differs from these in two crucial aspects: (i) it has
dependent types (see below) and (ii) all typable programs
terminate. (Note that intuitionistic type theory has influenced recent
extensions of Haskell with *generalized algebraic datatypes*
which sometimes can play a similar role as inductively defined
dependent types.)
### 2.2 The Curry-Howard Correspondence
As already mentioned, the principle that
>
> *a proposition is the type of its proofs.*
>
>
>
is fundamental to intuitionistic type theory. This principle is
also known as the Curry-Howard correspondence or even Curry-Howard
isomorphism. Curry discovered a correspondence between the
implicational fragment of intuitionistic logic and the simply typed
lambda-calculus. Howard extended this correspondence to first-order
predicate logic. In intuitionistic type theory this correspondence
becomes an *identification* of proposition and types, which has
been extended to include quantification over higher types and
more.
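A fragment of the correspondence can be sketched in untyped Python: proofs of a conjunction are pairs and proofs of an implication are functions, so the propositional laws become ordinary programs. The function names below are our own labels for the familiar natural deduction rules.

```python
def and_intro(a, b):
    # From proofs a of A and b of B, a proof of A ∧ B as a pair.
    return (a, b)

def and_elim_left(p):
    # From a proof of A ∧ B, a proof of A by first projection.
    return p[0]

def curry(f):
    # From a proof of (A ∧ B) ⊃ C, a proof of A ⊃ (B ⊃ C).
    return lambda a: lambda b: f((a, b))
```

Under the correspondence, `curry` is at once a proof of the propositional tautology \(((A \wedge B) \supset C) \supset (A \supset (B \supset C))\) and the usual currying combinator of functional programming.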
### 2.3 Sets of Proof-Objects
So what are these proof-objects like? They should not be thought of
as logical derivations, but rather as some (structured) symbolic
evidence that something is true. Another term for such evidence is
"truth-maker".
It is instructive, as a somewhat crude first approximation, to
replace types by ordinary sets in this correspondence. Define a set
\(\E\_{m,n}\), depending on \(m, n \in {{\mathbb N}}\), by:
\[\E\_{m,n} =
\left\{\begin{array}{ll}
\{0\} & \mbox{if \(m = n\)}\\
\varnothing & \mbox{if \(m \ne n\).}
\end{array}
\right.\]
Then \(\E\_{m,n}\) is nonempty exactly when \(m=n\). The set
\(\E\_{m,n}\) corresponds to the proposition \(m=n\), and the number
\(0\) is a proof-object (truth-maker) inhabiting the sets
\(\E\_{m,m}\).
Consider the proposition that *\(m\) is an even number*
expressed as the formula \(\exists n \in {{\mathbb N}}. m= 2n\). We
can build a set of proof-objects corresponding to this formula by
using the general set-theoretic sum operation. Suppose that \(A\_n\)
(\(n\in {{\mathbb N}}\)) is a family of sets. Then its disjoint sum is
given by the set of pairs
\[
(\Sigma n \in {{\mathbb N}})A\_n = \{ (n,a) : n \in {{\mathbb N}}, a \in A\_n\}.\]
If we apply this construction to the family \(A\_n = \E\_{m,2n}\) we
see that \((\Sigma n \in {{\mathbb N}})\E\_{m,2n}\) is nonempty exactly
when there is an \(n\in {{\mathbb N}}\) with \(m=2n\). Using the
general set-theoretic product operation \((\Pi n \in {{\mathbb
N}})A\_n\) we can similarly obtain a set corresponding to a universally
quantified proposition.
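This crude set-level approximation is concrete enough to execute. In the Python sketch below, sets of values stand in for the sets of proof-objects; since Python cannot range over all of \({\mathbb N}\), the index set is cut off at an arbitrary bound, which is a limitation of the sketch rather than of the construction.

```python
def E(m, n):
    # The set E_{m,n}: {0} if m = n, empty otherwise.
    return {0} if m == n else set()

def sigma(index_set, family):
    # The disjoint sum (Σ n ∈ I) A_n = {(n, a) : n ∈ I, a ∈ A_n}.
    return {(n, a) for n in index_set for a in family(n)}

# Proof-objects for "6 is even": pairs (n, e) with e ∈ E(6, 2n).
even_6 = sigma(range(10), lambda n: E(6, 2 * n))
```

Here `even_6` is the one-element set `{(3, 0)}`: the witness `3` together with the trivial proof-object `0` that \(6 = 2 \cdot 3\). The corresponding set for an odd number is empty.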
### 2.4 Dependent Types
In intuitionistic type theory there are primitive type formers
\(\Sigma\) and \(\Pi\) for general sums and products, and \(\I\) for
identity types, analogous to the set-theoretic constructions described
above. The *identity type* \(\I(\N,m,n)\) corresponding to the
set \(\E\_{m,n}\) is an example of a *dependent type* since it
depends on \(m\) and \(n\). It is also called an *indexed family of
types* since it is a family of types indexed by \(m\) and
\(n\). Similarly, we can form the general disjoint sum \(\Sigma x {:}
A.\, B\) and the general cartesian product \(\Pi x {:} A.\, B\) of such a
family of types \(B\) indexed by \(x {:} A\), corresponding to the set
theoretic sum and product operations above.
Dependent types can also be defined by primitive recursion. An
example is the type of \(n\)-tuples \(A^n\) of elements of type \(A\)
and indexed by \(n {:} N\) defined by the equations
\[\begin{align\*}
A^0 &= 1\\
A^{n+1} &= A \times A^n
\end{align\*}\]
where \(1\) is a
one element type and \(\times\) denotes the cartesian product of two
types. We note that dependent types introduce computation in types:
the defining rules above are computation rules. For example, the
result of computing \(A^3\) is \(A \times (A \times (A \times
1))\).
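The computation of \(A^n\) by primitive recursion can be mimicked in Python by modelling a type as the list of its elements; this is only a finite, hedged approximation, and the nested-pair representation below mirrors \(A \times (A \times \cdots \times 1)\).

```python
def power(A, n):
    # Elements of A^n as nested pairs, following the recursion
    # A^0 = 1 and A^(n+1) = A × A^n; a "type" is a list of its elements.
    if n == 0:
        return [()]  # the unique element of the one-element type 1
    return [(a, rest) for a in A for rest in power(A, n - 1)]
```

For a two-element type `A = [0, 1]`, the type `power(A, 3)` has \(2^3 = 8\) elements such as `(0, (1, (0, ())))`, just as \(A^3\) computes to \(A \times (A \times (A \times 1))\).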
### 2.5 Propositions as Types in Intuitionistic Type Theory
With propositions as types, predicates become dependent types. For
example, the predicate \(\mathrm{Prime}(x)\) becomes the type of
proofs that \(x\) is prime. This type *depends* on
\(x\). Similarly, \(x < y\) is the type of proofs that \(x\) is
less than \(y\).
According to the Curry-Howard interpretation of propositions as
types, the logical constants are interpreted as type formers:
\[\begin{align\*}
\bot &= \varnothing\\
\top &= 1\\
A \vee B &= A + B\\
A \wedge B &= A \times B\\
A \supset B &= A \rightarrow B\\
\exists x {:} A.\, B &= \Sigma x {:} A.\, B\\
\forall x {:} A.\, B &= \Pi x {:} A.\, B
\end{align\*}\]
where \(\Sigma x {:} A.\, B\) is the
disjoint sum of the \(A\)-indexed family of types \(B\) and \(\Pi x {:}
A.\, B\) is its cartesian product. The canonical elements of \(\Sigma x {:}
A.\, B\) are pairs \((a,b)\) such that \(a {:} A\) and \(b {:} B[x:=a]\)
(the type obtained by substituting all free occurrences of \(x\) in
\(B\) by \(a\)). The elements of \(\Pi x {:} A.\, B\) are (computable)
functions \(f\) such that \(f\,a {:} B[x:=a]\), whenever \(a {:} A\).
For example, consider the proposition
\[\begin{equation}
\forall m {:} \N.\, \exists n {:} \N.\, m \lt n \wedge \mathrm{Prime}(n)
\tag{1}\label{prop1}
\end{equation}\]
expressing that there are
arbitrarily large primes. Under the Curry-Howard interpretation this
becomes the type \(\Pi m {:} \N.\, \Sigma n {:} \N.\, m \lt n \times
\mathrm{Prime}(n)\) of functions which map a number \(m\) to a triple
\((n,(p,q))\), where \(n\) is a number, \(p\) is a proof that \(m \lt
n\) and \(q\) is a proof that \(n\) is prime. This is the *proofs
as programs* principle: a constructive proof that there are
arbitrarily large primes becomes a program which given any number
produces a larger prime together with proofs that it indeed is larger
and indeed is prime.
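The proofs-as-programs reading of proposition (1) can be sketched as a Python function that, given \(m\), actually produces the triple \((n,(p,q))\). The proof components \(p\) and \(q\) are represented by placeholder strings and checked by assertions, since Python cannot carry them in its types; the function name and the trial-division primality test are choices of this sketch.

```python
def next_prime_witness(m):
    # Given m, return (n, (p, q)) with n a prime greater than m; the
    # strings p and q stand in for the proof-objects of m < n and Prime(n).
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    n = m + 1
    while not is_prime(n):
        n += 1
    assert m < n and is_prime(n)
    return (n, ("witness: m < n", "witness: Prime(n)"))
```

For example, `next_prime_witness(10)` yields the triple headed by `11`. That the search terminates for every `m` is exactly the mathematical content of the constructive proof.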
Note that the proof which derives a contradiction from the
assumption that there is a largest prime is not constructive, since it
does not explicitly give a way to compute an even larger prime. To
turn this proof into a constructive one we have to show explicitly how
to construct the larger prime. (Since proposition (\ref{prop1}) above
is a \(\Pi^0\_2\)-formula we can for example use Friedman's
A-translation to turn such a proof in classical arithmetic into a
proof in intuitionistic arithmetic and thus into a proof in
intuitionistic type theory.)
## 3. Basic Intuitionistic Type Theory
We now present a core version of intuitionistic type theory,
closely related to the first version of the theory presented by
Martin-Löf in 1972 (Martin-Löf 1998 [1972]). In addition to
the type formers needed for the Curry-Howard interpretation of typed
intuitionistic predicate logic listed above, we have two types: the
type \(\N\) of natural numbers and the type \(\U\) of small types.
The resulting theory can be shown to contain intuitionistic number
theory \(\HA\) (Heyting arithmetic), Gödel's System \(\T\) of
primitive recursive functions of higher type, and the theory
\(\HA^\omega\) of Heyting arithmetic of higher type.
This core intuitionistic type theory is not only the original one,
but perhaps the minimal version which exhibits the essential features
of the theory. Later extensions with primitive identity types,
well-founded tree types, universe hierarchies, and general notions of
inductive and inductive-recursive definitions have increased the
proof-theoretic strength of the theory and also made it more
convenient for programming and formalization of mathematics. For
example, with the addition of well-founded trees we can interpret the
Constructive Zermelo-Fraenkel Set Theory \(\CZF\) of Aczel
(1978 [1977]). However, we will
wait until the next section to describe those extensions.
### 3.1 Judgments
In Martin-Löf (1996) a general philosophy of logic is presented
where the traditional notion of judgment is expanded and given a
central position. A judgment is no longer just an affirmation or
denial of a proposition, but a general act of knowledge. When
reasoning mathematically we make judgments about mathematical
objects. One form of judgment is to state that some mathematical
statement is true. Another form of judgment is to state that something
is a mathematical object, for example a set. The logical rules give
methods for producing correct judgments from earlier judgments. The
judgments obtained by such rules can be presented in tree form
\[
\begin{prooftree}
\AxiomC{\(J\_1\)}
\AxiomC{\(J\_2\)}
\RightLabel{\(r\_1\)}
\BinaryInfC{\(J\_3\)}
\AxiomC{\(J\_4\)}
\RightLabel{\(r\_2\)}
\UnaryInfC{\(J\_5\)}
\AxiomC{\(J\_6\)}
\RightLabel{\(r\_3\)}
\BinaryInfC{\(J\_7\)}
\RightLabel{\(r\_4\)}
\BinaryInfC{\(J\_8\)}
\end{prooftree}\]
or in sequential form
* (1) \(J\_1 \quad\text{ axiom} \)
* (2) \(J\_2 \quad\text{ axiom} \)
* (3) \(J\_3 \quad\text{ by rule \(r\_1\) from (1) and (2)} \)
* (4) \(J\_4 \quad\text{ axiom} \)
* (5) \(J\_5 \quad\text{ by rule \(r\_2\) from (4)} \)
* (6) \(J\_6 \quad\text{ axiom} \)
* (7) \(J\_7 \quad\text{ by rule \(r\_3\) from (5) and (6)} \)
* (8) \(J\_8 \quad\text{ by rule \(r\_4\) from (3) and (7)} \)
The latter form is common in mathematical arguments. Such a
sequence or tree formed by logical rules from axioms is
a *derivation* or *demonstration* of a judgment.
First-order reasoning may be presented using a single kind of
judgment:
>
> the proposition \(B\) is true under the hypothesis that the
> propositions \(A\_1, \ldots, A\_n\) are all true.
>
>
>
We write this *hypothetical judgment* as a
so-called *Gentzen sequent*
\[A\_1, \ldots, A\_n {\vdash}B.\]
Note that this is a single judgment that should not be confused with
the derivation of the judgment \({\vdash}B\) from the judgments
\({\vdash}A\_1, \ldots, {\vdash}A\_n\). When \(n=0\), then
the *categorical judgment* \( {\vdash}B\) states that \(B\) is
true without any assumptions. With sequent notation the familiar rule
for conjunction introduction becomes
\[\begin{prooftree}
\AxiomC{\(A\_1, \ldots,A\_n {\vdash}B\)}
\AxiomC{\(A\_1, \ldots, A\_n {\vdash}C\)}
\RightLabel{\((\land I)\).}
\BinaryInfC{\(A\_1, \ldots, A\_n {\vdash}B \land C\)}
\end{prooftree}\]
### 3.2 Judgment Forms
Martin-Lof type theory has four basic forms of judgments and is a
considerably more complicated system than first-order logic. One
reason is that more information is carried around in the derivations
due to the identification of propositions and types. Another reason is
that the syntax is more involved. For instance, the well-formed
formulas (types) have to be generated simultaneously with the provably
true formulas (inhabited types).
The four forms of *categorical* judgment are
* \(\vdash A \; {\rm type}\), meaning that \(A\) is a well-formed
type,
* \(\vdash a {:} A\), meaning that \(a\) has type \(A\),
* \(\vdash A = A'\), meaning that \(A\) and \(A'\) are equal
types,
* \(\vdash a = a' {:} A\), meaning that \(a\) and \(a'\) are
equal elements of type \(A\).
In general, a judgment is *hypothetical*, that is, it is
made in a context \(\Gamma\), that is, a list \(x\_1 {:} A\_1, \ldots, x\_n
{:} A\_n\) of variables which may occur free in the judgment together
with their respective types. Note that the types in a context can
depend on variables of earlier types. For example, \(A\_n\) can depend
on \(x\_1 {:} A\_1, \ldots, x\_{n-1} {:} A\_{n-1}\). The four forms of
hypothetical judgments are
* \(\Gamma \vdash A \; {\rm type}\), meaning that \(A\) is a
well-formed type in the context \(\Gamma\),
* \(\Gamma \vdash a {:} A\), meaning that \(a\) has type \(A\) in
context \(\Gamma\),
* \(\Gamma \vdash A = A'\), meaning that \(A\) and \(A'\) are
equal types in the context \(\Gamma\),
* \(\Gamma \vdash a = a' {:} A\), meaning that \(a\) and \(a'\)
are equal elements of type \(A\) in the context \(\Gamma\).
Under the propositions as types interpretation
\[\tag{2}\label{analytic} \vdash a {:} A
\]
can be understood as the judgment that \(a\) is a proof-object for the
proposition \(A\). When suppressing this object we get a judgment
corresponding to the one in ordinary first-order logic (see
above):
\[\tag{3}\label{synthetic} \vdash A\; {\rm true}.
\]
Remark 3.1. Martin-Löf
(1994) argues that
Kant's *analytic judgment a priori* and *synthetic judgment
a priori* can be exemplified, in the realm of logic, by
([analytic]) and ([synthetic]) respectively. In the analytic judgment
([analytic]) everything that is needed to make the judgment evident is
explicit. For its synthetic version ([synthetic]) a possibly
complicated proof construction \(a\) needs to be provided to make it
evident. This understanding of analyticity and syntheticity has the
surprising consequence that "the logical laws in their usual
formulation are all synthetic" (Martin-Löf 1994: 95). His analysis
further gives:
> " [...] the logic of analytic judgments,
> that is, the logic for deriving judgments of the two analytic forms,
> is complete and decidable, whereas the logic of synthetic judgments is
> incomplete and undecidable, as was shown by Gödel."
> Martin-Löf (1994: 97).
>
>
>
The decidability of the two analytic judgments (\(\vdash a{:}A\) and
\(\vdash a=b{:}A\)) hinges on the metamathematical properties of type
theory: strong normalization and decidable type checking.
Sometimes also the following forms are explicitly considered to be
judgments of the theory:
* \(\Gamma \; {\rm context}\), meaning that \(\Gamma\) is a
well-formed context.
* \(\Gamma = \Gamma'\), meaning that \(\Gamma\) and
\(\Gamma'\) are equal contexts.
Below we shall abbreviate the judgment \(\Gamma \vdash A \; {\rm
type}\) as \(\Gamma \vdash A\) and \(\Gamma \; {\rm context}\) as
\(\Gamma \vdash.\)
### 3.3 Inference Rules
When stating the rules we will use the letter \(\Gamma\) as a
meta-variable ranging over contexts, \(A,B,\ldots\) as meta-variables
ranging over types, and \(a,b,c,d,e,f,\ldots\) as meta-variables
ranging over terms.
The first group of inference rules are general rules including
rules of assumption, substitution, and context formation. There are
also rules which express that equalities are equivalence
relations. There are numerous such rules, and we only show the
particularly important *rule of type equality* which is crucial
for computation in types:
\[\frac{\Gamma \vdash a {:} A\hspace{2em}\Gamma \vdash A = B}
{\Gamma \vdash a {:} B}\]
The remaining rules are specific to the type formers. These are
classified as formation, introduction, elimination, and equality
rules.
### 3.4 Intuitionistic Predicate Logic
We only give the rules for \(\Pi\). There are analogous rules for
the other type formers corresponding to the logical constants of typed
predicate logic.
In the following \(B[x := a]\) means the term obtained by
substituting the term \(a\) for each free occurrence of the variable
\(x\) in \(B\) (avoiding variable capture).
\(\Pi\)-formation.
\[\frac{\Gamma \vdash A\hspace{2em} \Gamma, x {:} A \vdash B}
{\Gamma \vdash \Pi x {:} A. B}\]
\(\Pi\)-introduction.
\[\frac{\Gamma, x {:} A \vdash b {:} B}
{\Gamma \vdash \lambda x. b {:} \Pi x {:} A. B}\]
\(\Pi\)-elimination.
\[\frac
{\Gamma \vdash f {:} \Pi x {:} A.B\hspace{2em}\Gamma \vdash a {:} A}
{\Gamma \vdash f\,a {:} B[x := a]}\]
\(\Pi\)-equality.
\[\frac
{\Gamma, x {:} A \vdash b {:} B\hspace{2em}\Gamma \vdash a {:} A}
{\Gamma \vdash (\lambda x.b)\,a = b[x := a] {:} B[x := a]}\]
This is the rule of \(\beta\)-conversion. We
may also add the rule of \(\eta\)-conversion:
\[\frac
{\Gamma \vdash f {:} \Pi x {:} A. B}
{\Gamma \vdash \lambda x. f\,x = f {:} \Pi x {:} A. B}.\]
Furthermore, there are congruence rules expressing that operations
introduced by the formation, introduction, and elimination rules
preserve equality. For example, the congruence rule for \(\Pi\) is
\[\frac{\Gamma \vdash A = A'\hspace{2em} \Gamma, x {:} A \vdash B=B'}
{\Gamma \vdash \Pi x {:} A. B = \Pi x {:} A'. B'}.\]
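In proof assistants whose core calculus is a close relative of these rules, both \(\beta\)- and \(\eta\)-conversion hold definitionally. A minimal illustrative sketch in Lean 4 (the examples are ours, not part of the entry; \(\Pi x {:} A. B\) is written `(x : A) → B x` in Lean):

```lean
-- β-conversion: applying a λ-abstraction substitutes the argument
example (b : Nat → Nat) (a : Nat) : (fun x => b x) a = b a := rfl

-- η-conversion: a function equals its own λ-expansion
example (f : Nat → Nat) : (fun x => f x) = f := rfl
```

Both equations are proved by `rfl`, i.e., they hold by computation alone, mirroring the fact that \(\beta\) and \(\eta\) are equality rules rather than propositions to be proved.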
### 3.5 Natural Numbers
As in Peano arithmetic the natural numbers are generated by 0 and
the successor operation \(\s\). The elimination rule states that these
are the only possible ways to generate a natural number.
We write \(f(c) = \R(c,d,xy.e)\) for the function which is defined
by primitive recursion on the natural number \(c\) with base case
\(d\) and step function \(xy.e\) (or alternatively \(\lambda xy.e\))
which maps the value \(y\) for the previous number \(x {:} \N\) to the
value for \(\s(x)\). Note that \(\R\) is a new variable-binding
operator: the variables \(x\) and \(y\) become bound in \(e\).
\(\N\)-formation.
\[\Gamma \vdash \N\]
\(\N\)-introduction.
\[\Gamma \vdash 0 {:} \N
\hspace{2em}
\frac{\Gamma \vdash a {:} \N}
{\Gamma \vdash \s(a) {:} \N}\]
\(\N\)-elimination.
\[\frac{
\Gamma, x {:} \N \vdash C
\hspace{1em}
\Gamma \vdash c {:} \N
\hspace{1em}
\Gamma \vdash d {:} C[x := 0]
\hspace{1em}
\Gamma, y {:} \N, z {:} C[x := y] \vdash e {:} C[x := \s(y)]
}
{
\Gamma \vdash \R(c,d,yz.e) {:} C[x := c]
}\]
\(\N\)-equality (under appropriate premises).
\[\begin{align\*}
\R(0,d,yz.e) &= d {:} C[x := 0]\\
\R(\s(a),d,yz.e) &= e[y := a, z := \R(a,d,yz.e)] {:} C[x := \s(a)]
\end{align\*}\]
The rule of \(\N\)-elimination simultaneously expresses the type of
a function defined by primitive recursion and, under the Curry-Howard
interpretation, the rule of mathematical induction: we prove the
property \(C\) of a natural number \(x\) by induction on \(x\).
Godel's System \(\T\) is essentially intuitionistic type theory with
only the type formers \(\N\) and \(A \rightarrow B\) (the type of
functions from \(A\) to \(B\), which is the special case of
\((\Pi x {:} A)B\) where \(B\) does not depend on \(x {:} A\)). Since there are no
dependent types in System \(\T\) the rules can be simplified.
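As an illustration, addition can be defined with the recursor: \(\mathrm{add}(m,n) = \R(n,m,xy.\s(y))\). A sketch in Lean 4, whose recursor for \(\N\) is called `Nat.rec` (the name `add` is ours):

```lean
-- add m n computes by recursion on n: base case m, step fun _ y => Nat.succ y
def add (m n : Nat) : Nat :=
  Nat.rec (motive := fun _ => Nat) m (fun _ y => Nat.succ y) n

-- the equality rules let this compute definitionally
example : add 2 3 = 5 := rfl
```

The `motive` argument corresponds to the family \(C\) in the rule of \(\N\)-elimination; here it is the constant family \(\N\), so the eliminator is used as plain primitive recursion rather than induction.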
### 3.6 The Universe of Small Types
Martin-Lof's first version of type theory (Martin-Lof 1971a) had an
axiom stating that there is a type of all types. This was proved
inconsistent by Girard who found that the Burali-Forti paradox could
be encoded in this theory.
To overcome this pathological impredicativity, but still retain
some of its expressivity, Martin-Lof introduced in 1972 a universe
\(\U\) of small types closed under all type formers of the theory,
except that it does not contain itself (Martin-Lof 1998 [1972]). The rules
are:
\(\U\)-formation.
\[\Gamma \vdash \U\]
\(\U\)-introduction.
\[\Gamma \vdash \varnothing {:} \U
\hspace{3em}
\Gamma \vdash 1 {:} \U\]
\[\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma \vdash B {:} \U}
{\Gamma \vdash A + B {:} \U}
\hspace{3em}
\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma \vdash B {:} \U}
{\Gamma \vdash A \times B {:} \U}\]
\[\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma \vdash B {:} \U}
{\Gamma \vdash A \rightarrow B {:} \U}\]
\[\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma, x {:} A \vdash B {:} \U}
{\Gamma \vdash \Sigma x {:} A.\, B {:} \U}
\hspace{3em}
\frac{\Gamma \vdash A {:} \U\hspace{2em} \Gamma, x {:} A \vdash B {:} \U}
{\Gamma \vdash \Pi x {:} A.\, B {:} \U}\]
\[\Gamma \vdash \N {:} \U\]
\(\U\)-elimination.
\[\frac{\Gamma \vdash A {:} \U}
{\Gamma \vdash A}\]
Since \(\U\) is a type, we can use \(\N\)-elimination to define small
types by primitive recursion. For example, if \(A : \U\), we can define
the type of \(n\)-tuples of elements in \(A\) as follows:
\[A^n = \R(n,1,xy.A \times y) {:} \U\]
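A sketch of this definition in Lean 4, with Lean's `Type` playing the role of \(\U\) (the name `Tup` is illustrative):

```lean
-- A^n = R(n, 1, xy. A × y): the type of n-tuples, by recursion into the universe
def Tup (A : Type) : Nat → Type
  | 0     => Unit            -- A^0 = 1
  | n + 1 => A × Tup A n     -- A^(s n) = A × A^n

-- the defining equations compute definitionally
example : Tup Nat 2 = (Nat × (Nat × Unit)) := rfl
```

The point is that `Tup A n` is itself a *type* computed from the number `n`, which is only possible because the universe is a type whose elements are (codes for) types.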
This type-theoretic universe \(\U\) is analogous to a Grothendieck
universe in set theory which is a set of sets closed under all the
ways sets can be constructed in Zermelo-Fraenkel set theory. The
existence of a Grothendieck universe cannot be proved from the usual
axioms of Zermelo-Fraenkel set theory but needs a new axiom.
In Martin-Lof (1975) the universe is extended to a countable
hierarchy of universes
\[\U\_0 : \U\_1 : \U\_2 : \cdots .\]
In this way each type has a type, not only each small type.
### 3.7 Propositional Identity
Above, we introduced the equality judgment
\[\tag{4}\label{defeq} \Gamma \vdash a = a' {:} A.\]
This is usually called a "definitional equality" because
it can be decided by normalizing the terms \(a\) and \(a'\) and
checking whether the normal forms are identical. However, this
equality is a judgment and not a proposition (type) and we thus cannot
prove such judgmental equalities by induction. For this reason we need
to introduce propositional identity types. For example, the identity
type for natural numbers \(\I(\N,m,n)\) can be defined by
\(\U\)-valued primitive recursion. We can then express and prove the
Peano axioms. Moreover, extensional equality of functions can be
defined by
\[\I(\N\rightarrow \N,f,f') = \Pi x {:} \N. \I(\N,f\,x,f'\,x).\]
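The \(\U\)-valued recursive definition of \(\I(\N,m,n)\) can be sketched in Lean 4 (recursion into `Type`; the name `IdN` is illustrative):

```lean
-- identity on natural numbers defined by recursion into the universe:
-- equal numbers decode to the trivially true type, unequal ones to the empty type
def IdN : Nat → Nat → Type
  | 0,     0     => Unit
  | 0,     _ + 1 => Empty
  | _ + 1, 0     => Empty
  | m + 1, n + 1 => IdN m n

-- a proof-object for the identity of 2 and 2
example : IdN 2 2 := ⟨⟩
```

From this definition the Peano axioms, e.g., that \(0\) is not a successor, follow by computation: `IdN 0 (n+1)` reduces to the empty type.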
### 3.8 The Axiom of Choice is a Theorem
The following form of the axiom of choice is an immediate
consequence of the BHK-interpretation of the intuitionistic
quantifiers, and is easily proved in intuitionistic type theory:
\[(\Pi x {:} A. \Sigma y {:} B. C) \rightarrow \Sigma f {:} (\Pi x {:} A. B).\, \Pi x {:} A.\, C[y := f\,x]\]
The reason is that \(\Pi x {:} A. \Sigma y {:} B. C\) is the type of
functions which map elements \(x {:} A\) to pairs \((y,z)\) with \(y {:}
B\) and \(z {:} C\). The choice function \(f\) is obtained by returning
the first component \(y {:} B\) of this pair.
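The proof is short enough to write out. A sketch in Lean 4, where the \(\Sigma\)-type is written `(y : B) × C` (the name `ac` is illustrative):

```lean
-- the type-theoretic axiom of choice: the choice function and the proof that it
-- chooses correctly are obtained by projecting the two components of g x
def ac {A B : Type} {C : A → B → Type}
    (g : (x : A) → (y : B) × C x y) :
    (f : A → B) × ((x : A) → C x (f x)) :=
  ⟨fun x => (g x).1, fun x => (g x).2⟩
```

Note that the proof involves no search or selection principle: it merely rearranges the data already contained in the hypothesis, which is why the axiom is a theorem here.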
It is perhaps surprising that intuitionistic type theory directly
validates an axiom of choice, since this axiom is often considered
problematic from a constructive point of view. A possible explanation
for this state of affairs is that the above is an axiom of choice
for *types*, and that types are not in general appropriate
constructive approximations of sets in the classical sense. For
example, we can represent a real number as a Cauchy sequence in
intuitionistic type theory, but the set of real numbers is not the
type of Cauchy sequences, but the type of Cauchy sequences up to
equiconvergence. More generally, a set in Bishop's constructive
mathematics is represented by a type (commonly called
"preset") together with an equivalence relation.
If \(A\) and \(B\) are equipped with equivalence relations, there
is of course no guarantee that the choice function, \(f\) above, is
extensional in the sense that it maps equivalent elements to equivalent
elements. This is the failure of the *extensional axiom of
choice*, see Martin-Lof (2009) for an analysis.
## 4. Extensions
### 4.1 The Logical Framework
The above completes the description of a core version of
intuitionistic type theory close to that of (Martin-Lof 1998 [1972]).
In 1986 Martin-Lof proposed a reformulation of intuitionistic type
theory; see Nordstrom, Peterson and Smith (1990) for an
exposition. The purpose was to give a more compact formulation, where
\(\lambda\) and \(\Pi\) are the only variable binding operations. It
is nowadays considered the main version of the theory. It is also the
basis for the Agda proof assistant. The 1986 theory has two parts:
* the theory of types (the logical framework);
* the theory of sets (small types).
Remark 4.1. Note that the word
"set" in the logical framework does not coincide with the
way it is used in Bishop's constructive mathematics. To avoid this
confusion, types together with equivalence relations are usually
called "setoids" or "extensional sets" in
intuitionistic type theory.
The logical framework has only two type formers: \(\Pi x {:} A. B\)
(usually written \((x {:} A)B\) or \((x {:} A) \rightarrow B\) in the
logical framework formulation) and \(\U\) (usually called
\(\Set\)). The rules for \(\Pi x{:} A. B\) (\((x {:} A) \rightarrow B\))
are the same as given above (including \(\eta\)-conversion). The rules
for \(\U\) (\(\Set\)) are also the same, except that the logical
framework only stipulates closure under \(\Pi\)-type formation.
The other small type formers ("set formers") are
introduced in the theory of sets. In the logical framework formulation
each formation, introduction, and elimination rule can be expressed as
the typing of a new constant. For example, the rules for natural
numbers become
\[\begin{align\*} \N &: \Set,\\
0 &: \N,\\
\s &: \N \rightarrow \N,\\
\R &: (C {:} \N \rightarrow \Set) \rightarrow C\,0
\rightarrow (( x {:} \N) \rightarrow C\,x \rightarrow C\,(\s\,x))
\rightarrow (c {:} \N) \rightarrow C\,c.
\end{align\*}\]
where we have omitted the common context \(\Gamma\), since the types
of these constants are closed. Note that the recursion operator \(\R\)
has a first argument \(C {:} \N \rightarrow \Set\) unlike in the
original formulation.
Moreover, the equality rules can be expressed as equations
\[\begin{align\*}
\R\, C\, d\, e\, 0 &= d {:} C\,0\\
\R\, C\, d\, e\, (\s\, a) &= e\, a\, (\R\, C\, d\, e\, a) {:} C\,(\s\,a)
\end{align\*}\]
under suitable assumptions.
In the sequel we will present several extensions of type theory. To
keep the presentation uniform we will however *not* use the
logical framework presentation of type theory, but will use the same
notation as in section 2.
### 4.2 A General Identity Type Former
As we mentioned above, identity on natural numbers can be defined
by primitive recursion. Identity relations on other types can also be
defined in the basic version of intuitionistic type theory presented
in section 2.
However, Martin-Lof (1975) extended intuitionistic type theory with
a uniform primitive identity type former \(\I\) for all types. The
rules for \(\I\) express that the identity relation is inductively
generated by the proof of reflexivity, a canonical constant called
\(\r\). (Note that \(\r\) was coded by the number 0 in the introductory
presentation of proof-objects
in 2.3.) The elimination rule for the identity type is a
generalization of identity elimination in predicate logic and
introduces an elimination constant \(\J\). We here show the
formulation due to Paulin-Mohring (1993) rather than the
original formulation of Martin-Lof (1975). The inference rules are
the following.
\(\I\)-formation.
\[\frac{\Gamma \vdash A
\hspace{1em}
\Gamma \vdash a {:} A
\hspace{1em}
\Gamma \vdash a' {:} A}
{\Gamma \vdash \I(A,a,a')}\]
\(\I\)-introduction.
\[\frac{\Gamma \vdash A
\hspace{1em}
\Gamma \vdash a {:} A}
{\Gamma \vdash \r {:} \I(A,a,a)}\]
\(\I\)-elimination.
\[\frac{
\Gamma, x {:} A, y {:} \I(A,a,x) \vdash C
\hspace{1em}
\Gamma \vdash b {:} A
\hspace{1em}
\Gamma \vdash c {:} \I(A,a,b)
\hspace{1em}
\Gamma \vdash d {:} C[x := a, y := \r]}
{ \Gamma \vdash \J(c,d) {:} C[x := b, y:= c]}\]
\(\I\)-equality (under appropriate assumptions).
\[\J(\r,d) = d\]
Note that if, in the rule of \(\I\)-elimination, \(C\) depends only on \(x {:} A\) and not on the proof \(y {:} \I(A,a,x)\) (and we also suppress proof-objects), we recover the rule of identity elimination in predicate logic.
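Lean 4's built-in identity type is inductively generated by reflexivity in exactly this way, so the Paulin-Mohring eliminator \(\J\) and the \(\I\)-equality rule can be reproduced directly (a sketch; the name `J` mirrors the entry's notation):

```lean
universe u

-- Paulin-Mohring J, obtained from Lean's built-in identity eliminator Eq.rec
def J {A : Type} {a : A} (C : (x : A) → a = x → Sort u)
    (d : C a rfl) {b : A} (c : a = b) : C b c :=
  @Eq.rec A a C d b c

-- the I-equality rule J(r, d) = d holds by definition
example {A : Type} {a : A} (C : (x : A) → a = x → Sort u) (d : C a rfl) :
    J C d rfl = d := rfl
```

Here `rfl` plays the role of the canonical constant \(\r\), and the motive `C` may depend both on the endpoint `x` and on the identity proof itself, as in the rule of \(\I\)-elimination above.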
By constructing a model of type theory where types are interpreted
as *groupoids* (categories where all arrows are isomorphisms)
Hofmann and Streicher (1998) showed that it cannot be proved in
intuitionistic type theory that all proofs of \(\I(A,a,b)\) are
identical. This may seem to be an incompleteness of the theory, and
Streicher suggested a new axiom \(\K\) from which it follows that all
proofs of \(\I(A,a,b)\) are identical to \(\r\).
The \(\I\)-type is often called the *intensional identity
type*, since it does not satisfy the principle of function
extensionality. Intuitionistic type theory with the intensional
identity type is also often called *intensional intuitionistic type
theory* to distinguish it from *extensional intuitionistic type
theory* which will be presented in
section 7.1.
### 4.3 Well-Founded Trees
A type of well-founded trees of the form \(\W x {:} A. B\) was
introduced in Martin-Lof 1982 (and in a more restricted form by Scott
1970). Elements of \(\W x {:} A. B\) are trees of varying and arbitrary
branching: varying, because the branching type \(B\) is indexed by \(x
{:} A\) and arbitrary because \(B\) can be arbitrary. The type is given
by a *generalized inductive definition* since the well-founded
trees may be infinitely branching. We can think of \(\W x{:}A. B\) as the
free term algebra, where each \(a {:} A\) represents a term constructor
\(\sup\,a\) with (possibly infinite) arity \(B[x := a]\).
\(\W\)-formation.
\[\frac{\Gamma \vdash A\hspace{2em} \Gamma, x {:} A \vdash B}
{\Gamma \vdash \W x {:} A. B}\]
\(\W\)-introduction.
\[\frac{\Gamma \vdash a {:} A \hspace{2em} \Gamma, y {:} B[x:=a] \vdash
b {:} \W x {:} A. B} {\Gamma \vdash \sup(a, y.b) {:} \W x {:} A. B}\]
We omit the rules of \(\W\)-elimination and \(\W\)-equality.
Adding well-founded trees to intuitionistic type theory increases
its proof-theoretic strength significantly (Setzer
(1998)).
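In a proof assistant, \(\W x {:} A. B\) can be declared as an inductive type whose single constructor is \(\sup\). A sketch in Lean 4, together with the standard encoding of the natural numbers as a W-type (the names `W`, `Br`, `NatW` are ours):

```lean
-- a node sup a f has label a : A and subtrees indexed by the branching type B a
inductive W (A : Type) (B : A → Type) : Type where
  | sup : (a : A) → (B a → W A B) → W A B

-- branching types for a W-type encoding of N: zero-nodes have no subtrees,
-- successor-nodes have exactly one
def Br : Bool → Type
  | false => Empty
  | true  => Unit

def NatW : Type := W Bool Br
def zeroW : NatW := W.sup false Empty.elim
def succW (n : NatW) : NatW := W.sup true (fun _ => n)
```

Since `B a` may be infinite, a node may have infinitely many subtrees, which is what makes the definition a *generalized* inductive definition.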
### 4.4 Iterative Sets and CZF
An important application of well-founded trees is Aczel's (1978)
construction of a type-theoretic model of Constructive Zermelo
Fraenkel Set Theory. To this end he defines the type of iterative sets
as
\[\V = \W x {:} \U. x.\]
Let \(A {:} \U\) be a small type, and \(x {:} A\vdash M\) be an indexed
family of iterative sets. Then \(\sup(A,x.M)\), or with a more
suggestive notation \(\{ M\mid x {:} A\}\), is an iterative set. To
paraphrase: an iterative set is a family of iterative sets indexed by a small type.
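A sketch of this construction in Lean 4, with Lean's `Type` playing the role of the small universe \(\U\) (the sample sets `emptySet` and `pair` are illustrative):

```lean
-- Aczel's iterative sets: sup A f represents the set { f x | x : A }
inductive V : Type 1 where
  | sup : (A : Type) → (A → V) → V

-- the empty set: indexed by the empty type, so it has no elements
def emptySet : V := V.sup Empty Empty.elim

-- the unordered pair {a, b}: indexed by Bool
def pair (a b : V) : V := V.sup Bool (fun x => cond x a b)
```

Note that `V` must live one universe above the small types it is built from, matching the fact that \(\V = \W x {:} \U. x\) quantifies over the universe \(\U\).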
Note that an iterative set is a data-structure in the sense of
functional programming: a possibly infinitely branching well-founded
tree. Different trees may represent the same set. We therefore need to
define a notion of extensional equality between iterative sets which
disregards repetition and order of elements. This definition is
formally similar to the definition of bisimulation of processes in
process algebra. The type \(\V\) up to extensional equality can be
viewed as a constructive type-theoretic model of the cumulative
hierarchy, see the entry on
set theory: constructive and intuitionistic ZF
for further information about CZF.
### 4.5 Inductive Definitions
The notion of an inductive definition is fundamental in
intuitionistic type theory. It is a primitive notion and not, as in
set theory, a derived notion where an inductively defined set is
defined impredicatively as the smallest set closed under some
rules. However, in intuitionistic type theory inductive definitions
are considered predicative: they are viewed as being built up from
below.
The inductive definability of types is inherent in the meaning
explanations of intuitionistic type theory which we shall discuss in
the next section. In fact, intuitionistic type theory can be described
briefly as a theory of inductive, recursive, and inductive-recursive
definitions based on a framework of lambda calculus with dependent
types.
We have already seen the type of natural numbers and the type of
well-founded trees as examples of types given by inductive
definitions; the natural numbers is an example of an ordinary finitary
inductive definition and the well-founded trees of a generalized
possibly infinitary inductive definition. The introduction rules
describe how elements of these types are inductively generated and the
elimination and equality rules describe how functions from these types
can be defined by structural recursion on the way these elements are
generated. According to the propositions as types principle, the
elimination rules are simultaneously rules for proof by structural
induction on the way the elements are generated.
The type formers \(0, 1, +, \times, \rightarrow, \Sigma,\) and
\(\Pi\) which interpret the logical constants for intuitionistic
predicate logic are examples of degenerate inductive definitions. Even
the identity type (in intensional intuitionistic type theory) is
inductively generated; it is the type of proofs generated by the
reflexivity axiom. Its elimination rule expresses proof by pattern
matching on the proof of reflexivity.
The common structure of the rules of the type formers can be
captured by a general schema for inductive definitions (Dybjer
1991). This general schema has many useful instances, for example, the
type \(\List(A)\) of lists with elements of type \(A\) has the
following introduction rules:
\[\Gamma \vdash \nil {:} \List(A)
\hspace{3em}
\frac{\Gamma \vdash a {:} A\hspace{2em}\Gamma \vdash as {:} \List(A)}
{\Gamma \vdash \cons(a,as) {:} \List(A)}\]
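In Lean 4 this instance of the schema is declared by listing exactly these two introduction rules as constructors (the name `List'` avoids clashing with Lean's built-in lists):

```lean
-- the type of lists over A, inductively generated by nil and cons
inductive List' (A : Type) where
  | nil  : List' A
  | cons : A → List' A → List' A

-- a canonical element: the list [1, 2]
example : List' Nat := List'.cons 1 (List'.cons 2 List'.nil)
```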
Other useful instances are types of binary trees and other trees
such as the infinitely branching trees of the Brouwer ordinals of the
second and higher number classes.
The general schema does not only cover inductively defined types,
but also inductively defined families of types, such as the identity
relation. The above mentioned type \(A^n\) of \(n\)-tuples of type
\(A\) was defined above by primitive recursion on \(n\). It can also
be defined as an inductive family with the following introduction
rules
\[\Gamma \vdash \nil {:} A^0
\hspace{3em}
\frac{\Gamma \vdash a {:} A\hspace{2em}\Gamma \vdash as {:} A^n}
{\Gamma \vdash \cons(a,as) {:} A^{\s(n)}}\]
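A sketch of this inductive family in Lean 4, where such families are commonly called vectors (the name `Vec` is illustrative):

```lean
-- A^n as an inductive family indexed by n : Nat: the index in the type of
-- each constructor records the length of the tuple it builds
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- a 2-tuple; its length is part of its type
example : Vec Nat 2 := Vec.cons 1 (Vec.cons 2 Vec.nil)
```

In contrast to the earlier definition of \(A^n\) by primitive recursion, here the index \(n\) appears in the conclusions of the introduction rules rather than being computed by a recursion over \(\N\).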
The schema for inductive types and families is a type-theoretic
generalization of a schema for iterated inductive definitions in
predicate logic (formulated in natural deduction) presented by
Martin-Lof (1971b). This paper immediately preceded
Martin-Lof's first version of intuitionistic type
theory. It is both conceptually and technically a forerunner to the
development of the theory.
It is an essential feature of proof assistants such as Agda and Coq
that they enable users to define their own inductive types and families
by listing their introduction rules (the types of their
constructors). This is much like in typed functional programming
languages such as Haskell and the different dialects of ML. However,
unlike in these programming languages the schema for inductive
definitions in intuitionistic type theory enforces a restriction
amounting to well-foundedness of the elements of the defined
types.
### 4.6 Inductive-Recursive Definitions
We already mentioned that there are two main definition principles
in intuitionistic type theory: the inductive definition of types
(sets) and the (primitive, structural) definition of functions by
recursion on the way the elements of such types are inductively
generated. Usually, the inductive definition of a set comes first: the
formation and introduction rules make no reference to the elimination
rule. However, there are definitions in intuitionistic type theory for
which this is not the case and we simultaneously inductively generate
a type and a function from that type defined by structural
recursion. Such definitions are
simultaneously *inductive-recursive*.
The first example of such an inductive-recursive definition is an
alternative formulation *a la Tarski* of the universe of small
types. Above we presented the universe formulated *a la
Russell*, where there is no notational distinction between the
element \(A {:} \U\) and the corresponding type \(A\). For a
universe *a la* Tarski there is such a distinction, for
example, between the element \(\hat{\N} {:} \U\) and the corresponding
type \(\N\). The element \(\hat{\N}\) is called the *code* for
\(\N\).
The elimination rule for the universe *a la* Tarski is:
\[\frac{\Gamma \vdash a {:} \U}
{\Gamma \vdash \T(a)}\]
This expresses that there is a function \(\T\) which maps a code
\(a\) to its corresponding type \(\T(a)\). The equality rules define
this correspondence. For example,
\[\T(\hat{\N}) = \N.\]
We see that \(\U\) is inductively generated with one introduction
rule for each small type former, and \(\T\) is defined by recursion on
these small type formers. The simultaneous inductive-recursive nature
of this definition becomes apparent in the rules for \(\Pi\) for
example. The introduction rule is
\[\frac{\Gamma \vdash a {:} \U\hspace{2em} \Gamma, x {:} \T(a) \vdash b {:} \U}
{\Gamma \vdash \hat{\Pi} x {:} a. b {:} \U}\]
and the corresponding equality rule is
\[\T(\hat{\Pi}x {:} a. b) = \Pi x {:} \T(a). \T(b)\]
Note that the introduction rule for \(\U\) refers to \(\T\), and hence
that \(\U\) and \(\T\) must be defined simultaneously.
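Lean 4 has no primitive induction-recursion, so the full definition (whose \(\hat{\Pi}\)-code refers to \(\T\)) cannot be phrased there. The fragment whose codes do not mention \(\T\) can, however, be sketched as an inductive type of codes plus a recursive decoding function (the names `U` and `T` mirror the entry's notation):

```lean
-- a tiny universe a la Tarski without dependent codes:
-- U is inductively generated, T is defined by recursion on the codes
inductive U : Type where
  | nat   : U
  | arrow : U → U → U

def T : U → Type
  | U.nat       => Nat
  | U.arrow a b => T a → T b

-- the decoding equations compute definitionally
example : T (U.arrow U.nat U.nat) = (Nat → Nat) := rfl
```

As soon as a code for \(\Pi x {:} a.\, b\) is added, the family `b` must be indexed by `T a`, so `U` and `T` have to be defined simultaneously; this is precisely the inductive-recursive character of the universe.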
There are a number of other universe constructions which are
defined inductive-recursively: universe hierarchies, superuniverses
(Palmgren 1998; Rathjen, Griffor, and Palmgren 1998), and Mahlo
universes (Setzer 2000). These universes are analogues of certain
large cardinals in set theory: inaccessible, hyperinaccessible, and
Mahlo cardinals.
Other examples of inductive-recursive definitions include an
informal definition of computability predicates used by Martin-Lof in
an early normalization proof of intuitionistic type theory (Martin-Lof
1998 [1972]). There are also many natural examples of "small"
inductive-recursive definitions, where the recursively defined
(decoding) function returns an element of a type rather than a
type.
A large class of inductive-recursive definitions, including the
above, can be captured by a general schema (Dybjer 2000) which extends
the schema for inductive definitions mentioned above. As shown by
Setzer, intuitionistic type theory with this class of
inductive-recursive definitions is very strong proof-theoretically
(Dybjer and Setzer 2003). However, as proposed in recent unpublished
work by Setzer, it is possible to increase the strength of the theory
even further and define universes such as an *autonomous Mahlo
universe* which are analogues of even larger cardinals.
## 5. Meaning Explanations
The consistency of intuitionistic type theory relative to set
theory can be proved by model constructions. Perhaps the simplest
method is an interpretation whereby each type-theoretic concept is
given its corresponding set-theoretic meaning, as outlined
in
section 2.3. For example the type of functions \(A \rightarrow B\)
is interpreted as the set of all functions in the set-theoretic sense
between the set denoted by \(A\) and the set denoted by \(B\). To
interpret \(\U\) we need a set-theoretic universe which is closed under
all (set-theoretic analogues of) the type constructors. Such a
universe can be proved to exist if we assume the existence of an
inaccessible cardinal \(\kappa\) and interpret \(\U\) by \(V\_\kappa\)
in the cumulative hierarchy.
Alternatives are realizability models, and for intensional type
theory, a model of terms in normal forms. The latter can also be used
for proving decidability of the judgments of the theory.
Mathematical models only prove consistency relative to classical
set theory (or whatever other meta-theory we are using). Is it
possible to be convinced about the consistency of the theory in a more
direct way, so-called *simple minded consistency*
(Martin-Lof 1984)? In fact, is there a way to explain what
it *means* for a judgment to be correct in a
direct *pre-mathematical* way? And given that we know what the
judgments mean can we then be convinced that the inference rules of
the theory are valid? An answer to this problem was proposed by
Martin-Lof in 1979 in the paper "Constructive Mathematics
and Computer Programming" (Martin-Lof 1982) and elaborated
later on in numerous lectures and notes, see for example,
Martin-Lof (1984, 1987). These meaning explanations for
intuitionistic type theory are also referred to as the *direct
semantics*, *intuitive semantics*, *informal
semantics*, *standard semantics*, or
the *syntactico-semantical* approach to meaning theory.
This meaning theory follows the Wittgensteinian meaning-as-use
tradition. The meaning is based on rules for building objects
(introduction rules) of types and computation rules (elimination
rules) for computing with these objects. A difference from much of the
Wittgensteinian tradition is that higher-order types such as \(\N
\rightarrow \N\) are also given meaning by rules.
To explain the meaning of a judgment we must first know how the
terms in the judgment are computed to canonical form. Then the
formation rules explain how correct canonical types are built and the
introduction rules explain how correct canonical objects of such
canonical types are built. We quote (Martin-Lof 1982):
>
> A canonical type \(A\) is defined by prescribing how a canonical
> object of type \(A\) is formed as well as how two equal canonical
> objects of type \(A\) are formed. There is no limitation on this
> prescription except that the relation of equality which it defines
> between canonical objects of type \(A\) must be reflexive, symmetric
> and transitive.
In other words, a canonical type is equipped with an equivalence
relation on the canonical objects. Below we shall give a simplified
form of the meaning explanations, where this equivalence relation is
extensional identity of objects.
In spite of the *pre-mathematical* nature of this meaning
theory, its technical aspects can be captured as a mathematical model
construction similar to Kleene's *realizability* interpretation
of intuitionistic logic, see the next section. The realizers here are
the terms of type theory rather than the number realizers used by
Kleene.
### 5.1 Computation to Canonical Form
The meaning of a judgment is explained in terms of the computation
of the types and terms in the judgment. These computations stop when a
canonical form is reached. By canonical form we mean a term where the outermost form is a constructor (introduction form). These are the canonical forms used in lazy
functional programming (for example in the Haskell language).
For the purpose of illustration we consider meaning explanations
only for three type formers: \(\N, \Pi x {:} A.B\), and \(\U\). The
context free grammar for the terms of this fragment of Intuitionistic
Type Theory is as follows:
\[
a :: = 0 \mid \s(a) \mid \lambda
x.a \mid \N \mid \Pi x{:}a.a \mid \U \mid \R(a,a,xx.a) \mid a\,a .
\]
The canonical terms are generated by the following grammar:
\[v :: = 0 \mid \s(a) \mid \lambda x.a \mid \N \mid \Pi
x{:}a.a \mid \U ,\]
where \(a\) ranges over arbitrary, not necessarily canonical,
terms. Note that \(\s(a)\) is canonical even if \(a\) is not.
To explain how terms are computed to canonical form, we introduce the relation \(a \Rightarrow
v\) between *closed* terms \(a\) and canonical forms (values)
\(v\) given by the following computation rules:
\[
\frac{c \Rightarrow 0\hspace{1em}d \Rightarrow v}{\R(c,d,xy.e)\Rightarrow v}
\hspace{2em}
\frac{c \Rightarrow \s(a)\hspace{1em}e[x := a,y := \R(a,d,xy.e)]\Rightarrow v}{\R(c,d,xy.e)\Rightarrow v}
\]
\[
\frac{f\Rightarrow \lambda x.b\hspace{1em}b[x := a]\Rightarrow v}{f\,a \Rightarrow v}
\]
in addition to the rule
\[v \Rightarrow v\]
stating that a canonical term has itself as value.
### 5.2 The Meaning of Categorical Judgments
A categorical judgment is a judgment where the context is empty and
there are no free variables.
The meaning of the categorical judgment \(\vdash A\) is that \(A\)
has a canonical type as value. In our fragment this means that either
of the following holds:
* \(A \Rightarrow \N\),
* \(A \Rightarrow \U\),
* \(A \Rightarrow \Pi x {:} B. C\) and furthermore that \(\vdash B\) and
\(x {:} B \vdash C\).
The meaning of the categorical judgment \(\vdash a {:} A\) is that
\(a\) has a canonical term of the canonical type of \(A\) as value. In
our fragment this means that either of the following holds:
* \(A \Rightarrow \N\) and either \(a \Rightarrow 0\) or \(a
\Rightarrow \s(b)\) and \(\vdash b {:} \N\),
* \(A \Rightarrow \U\) and either \(a \Rightarrow \N\) or \(a
\Rightarrow \Pi x {:} b. c\) where furthermore \(\vdash b {:} \U\) and
\(x {:} b \vdash c {:} \U\),
* \(A \Rightarrow \Pi x {:} B. C\) and \(a \Rightarrow \lambda x.c\) and
\(x {:} B \vdash c {:} C\).
The meaning of the categorical judgment \(\vdash A = A'\) is
that \(A\) and \(A'\) have the same canonical types as values. In
our fragment this means that either of the following holds:
* \(A \Rightarrow \N\) and \(A' \Rightarrow \N\),
* \(A \Rightarrow \U\) and \(A' \Rightarrow \U\),
* \(A \Rightarrow \Pi x {:} B. C\) and \(A' \Rightarrow \Pi x {:}
B'. C'\) and furthermore that \(\vdash B = B'\) and \(x {:} B
\vdash C = C'\).
The meaning of the categorical judgment \(\vdash a = a' {:} A\)
is explained in a similar way.
It is a tacit assumption of the meaning explanations that the
repeated computation of canonical forms is well-founded. For example,
a natural number is the result of finitely many applications of the
successor function \(\s\), ending with \(0\). A term whose computation
unfolds to infinitely many applications of \(\s\) does not denote a
natural number in intuitionistic type theory. (However, there are extensions of type
theory, for example, partial type theory, and non-standard type
theory, where such infinite computations can occur,
see section 7.3. To justify the rules of such
theories the present meaning explanations do not suffice.)
### 5.3 The Meaning of Hypothetical Judgments
According to Martin-Lof (1982) the meaning of a hypothetical
judgment is reduced to the meaning of the categorical judgments by
substituting the closed terms of appropriate types for the free
variables. For example, the meaning of
\[x\_1 {:} A\_1, \ldots, x\_n {:} A\_n \vdash a {:} A\]
is that the categorical judgment
\[\vdash a[x\_1 := a\_1, \ldots , x\_n := a\_n] : A[x\_1 := a\_1, \ldots ,
x\_n := a\_n]\]
is valid whenever the categorical judgments
\[\vdash a\_1 {:} A\_1, \ldots , \vdash a\_n[x\_1 := a\_1, \ldots , x\_{n-1} :=
a\_{n-1}] {:} A\_n[x\_1 := a\_1, \ldots , x\_{n-1} := a\_{n-1}]\]
are valid.
## 6. Mathematical Models
### 6.1 Categorical Models
#### 6.1.1 Hyperdoctrines
Curry's correspondence between propositions and types was extended
to predicate logic in the late 1960s by Howard (1980) and de Bruijn
(1970). At around the same time Lawvere developed related ideas in
categorical logic. In particular he proposed the notion of
a *hyperdoctrine* (Lawvere 1970) as a categorical model of
(typed) predicate logic. A hyperdoctrine is an indexed category \(P {:}
T^{op} \rightarrow \mathbf{Cat}\), where \(T\) is a category where the
objects represent types and the arrows represent terms. If \(A\) is a
type then the *fibre* \(P(A)\) is a category of propositions
depending on a variable \(x {:} A\). The arrows in this category are
proofs \(Q \vdash R\) and can be thought of as
proof-objects. Moreover, since we have an indexed category, for each
arrow \(t\) from \(A\) to \(B\), there is a reindexing functor \(P(B)
\rightarrow P(A)\) representing substitution of \(t\) for a variable
\(y {:} B\). The category \(P(A)\) is assumed to be cartesian closed and
conjunction and implication are modelled by products and exponentials
in this category. The quantifiers \(\exists\) and \(\forall\) are
modelled by the left and right adjoints of the reindexing
functor. Moreover, Lawvere added further structure to hyperdoctrines
to model identity propositions (as left adjoints to a diagonal
functor) and a comprehension schema.
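For instance, along a projection \(p {:} A \times B \rightarrow A\), this characterization of the quantifiers reads (a standard formulation, spelled out here for concreteness):

\[\exists\_p \dashv P(p) \dashv \forall\_p\]

That is, \(\exists\_p Q \vdash R\) holds in \(P(A)\) just in case \(Q \vdash P(p)(R)\) holds in \(P(A \times B)\), and \(P(p)(R) \vdash Q\) holds just in case \(R \vdash \forall\_p Q\) does.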
#### 6.1.2 Contextual categories, categories with attributes, and categories with families
Lawvere's definition of hyperdoctrines preceded intuitionistic type
theory but did not go all the way to identifying propositions and types. Nevertheless Lawvere influenced
Scott's (1970) work on *constructive validity*, a somewhat
preliminary precursor of intuitionistic type theory. After Martin-Löf
(1998 [1972]) had presented a
more definite formulation of the theory, the first work on categorical
models was presented by Cartmell in 1978 with his notions of category
with attributes and contextual category (Cartmell 1986). However, we
will not define these structures here but will instead define the closely
related *categories with families* (Dybjer 1996) which are
formulated so that they directly model a variable-free version of a
formulation of intuitionistic type theory with explicit substitutions
(Martin-Löf 1995).
A category with families is a functor \(T {:} C^{op} \rightarrow
\mathbf{Fam}\), where \(\mathbf{Fam}\) is the category of families of
sets. The category \(C\) is the category of contexts and
substitutions. If \(\Gamma\) is an object of \(C\) (a context), then
\(T(\Gamma)\) is the family assigning, to each type \(A\) in context
\(\Gamma\), the set of terms of type \(A\) which depend on the
variables in \(\Gamma\). If \(\gamma\) is an arrow in \(C\)
representing a substitution, then the arrow part of the functor
represents substitution of \(\gamma\) in types and terms. A category
with families also has a terminal object and a notion of context
comprehension, reminiscent of Lawvere's comprehension in
hyperdoctrines. The terminal object captures the rules for empty
contexts and empty substitutions. Context comprehension captures the
rules for extending contexts and substitutions, and has projections
capturing weakening and assumption of the last variable.
Categories with families are algebraic structures which model the
general rules of dependent type theory, those which come before the
rules for specific type formers, such as \(\Pi\), \(\Sigma\), identity
types, universes, etc. In order to model a specific type former,
corresponding extra structure needs to be added.
#### 6.1.3 Locally cartesian closed categories
From a categorical perspective the above-mentioned structures may
appear somewhat special and ad hoc. A more regular structure which
gives rise to models of intuitionistic type theory is that of locally
cartesian closed categories. These are categories with a terminal
object, where each slice category is cartesian closed. It can be shown
that the pullback functor has a left and a right adjoint, representing
\(\Sigma\)- and \(\Pi\)-types, respectively. Locally cartesian closed
categories correspond to intuitionistic type theory with extensional
identity types and \(\Sigma\) and \(\Pi\)-types (Seely 1984,
Clairambault and Dybjer 2014). It should be remarked that the
correspondence with intuitionistic type theory is somewhat indirect,
since a coherence problem, in the sense of category theory, needs to
be solved. The problem is that in locally cartesian closed categories
type substitution is represented by pullbacks, but these are only
defined up to isomorphism; see Curien 1993 and Hofmann 1994.
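Spelled out, for an arrow \(f {:} A \rightarrow B\) the pullback functor \(f^{\ast} {:} C/B \rightarrow C/A\) sits in an adjoint triple (a standard formulation of the adjunctions just mentioned):

\[\Sigma\_f \dashv f^{\ast} \dashv \Pi\_f\]

where the left adjoint \(\Sigma\_f\), given by post-composition with \(f\), models \(\Sigma\)-types, and the right adjoint \(\Pi\_f\) models \(\Pi\)-types.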
### 6.2 Set-Theoretic Model
Intuitionistic type theory is a possible framework for constructive
mathematics in Bishop's sense. Such constructive mathematics is
compatible with classical mathematics: a constructive proof in
Bishop's sense can directly be understood as a proof in classical
logic. A formal way to understand this is by constructing a
set-theoretic model of intuitionistic type theory, where each concept
of type theory is interpreted as the corresponding concept in
Zermelo-Fraenkel Set Theory. For example, a type is interpreted as a
set, and the type of functions in \(A \rightarrow B\) is interpreted
as the set of all functions in the set-theoretic sense from the set
representing \(A\) to the set representing \(B\). The type of natural
numbers is interpreted as the set of natural numbers. The
interpretations of identity types, and \(\Sigma\) and \(\Pi\)-types
were already discussed in the introduction. And as already mentioned,
to interpret the type-theoretic universe we need an inaccessible
cardinal.
#### 6.2.1 Model in CZF
It can be shown that the interpretation outlined above can be
carried out in Aczel's constructive set theory CZF. Hence it does not
depend on classical logic or impredicative features of set theory.
### 6.3 Realizability Models
The set-theoretic model can be criticized on the grounds that it
models the type of functions as the set of all set-theoretic
functions, in spite of the fact that a function in type theory is
always computable, whereas a set-theoretic function may not be.
To remedy this problem one can instead construct
a *realizability model* whereby one starts with a set
of *realizers*. One can here follow Kleene's numerical
realizability closely where functions are realized by codes for Turing
machines. Or alternatively, one can let realizers be terms in a lambda
calculus or combinatory logic possibly extended with appropriate
constants. Types are then represented by sets of realizers, or often
as partial equivalence relations on the set of realizers. A partial
equivalence relation is a convenient way to represent a type with a
notion of "equality" on it.
There are many variations on the theme of realizability models. Some
such models tacitly assume set theory as the metatheory (Aczel 1980,
Beeson 1985), whereas others explicitly assume a constructive
metatheory (Smith 1984).
Realizability models are also models of the extensional version of
intuitionistic type theory (Martin-Löf 1982) which will be presented
in section 7.1 below.
### 6.4 Model of Normal Forms and Type-Checking
In intuitionistic type theory each type and each well-typed term
has a normal form. A consequence of this normal form property is that
all the judgments are decidable: for example, given a correct context
\(\Gamma\), a correct type \(A\) and a possibly ill-typed term \(a\),
there is an algorithm for deciding whether \(\Gamma \vdash a {:}
A\). This type-checking algorithm is the key component of
proof-assistants for Intensional Type Theory, such as Agda.
The correctness of the normal form property can be expressed as a
model of normal forms, where each context, type, and term is
interpreted as its respective normal form.
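To convey the flavour of such a type-checking algorithm, here is a minimal checker for a simply typed fragment (a sketch only: the choice of TypeScript, the de Bruijn representation of variables, and the type annotation on applications are simplifying assumptions of this illustration; a checker for full dependent types must in addition compute normal forms of types):

```typescript
// Decide the judgment  Γ ⊢ a : A  for a simply typed fragment.
type Ty = { tag: "base" } | { tag: "arr"; dom: Ty; cod: Ty };
type Tm =
  | { tag: "var"; ix: number }                  // de Bruijn index into Γ
  | { tag: "lam"; body: Tm }
  | { tag: "app"; fn: Tm; arg: Tm; argTy: Ty }; // annotated with argument type

const base: Ty = { tag: "base" };
const arr = (dom: Ty, cod: Ty): Ty => ({ tag: "arr", dom, cod });

// Decidable equality of types (judgmental equality in this tiny fragment).
const eqTy = (a: Ty, b: Ty): boolean =>
  a.tag === "base"
    ? b.tag === "base"
    : b.tag === "arr" && eqTy(a.dom, b.dom) && eqTy(a.cod, b.cod);

// check(ctx, a, ty) decides  ctx ⊢ a : ty  (most recent variable first).
const check = (ctx: Ty[], tm: Tm, ty: Ty): boolean => {
  switch (tm.tag) {
    case "var":
      return tm.ix >= 0 && tm.ix < ctx.length && eqTy(ctx[tm.ix], ty);
    case "lam":
      return ty.tag === "arr" && check([ty.dom, ...ctx], tm.body, ty.cod);
    case "app":
      return check(ctx, tm.fn, arr(tm.argTy, ty)) && check(ctx, tm.arg, tm.argTy);
  }
};

const idTm: Tm = { tag: "lam", body: { tag: "var", ix: 0 } }; // λx. x
```

Here `check([], idTm, arr(base, base))` succeeds while `check([], idTm, base)` fails, illustrating how every typing judgment is decided by an algorithm.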
## 7. Variants of the Theory
### 7.1 Extensional Type Theory
In extensional intuitionistic type theory (Martin-Löf 1982) the
rules of \(\I\)-elimination and \(\I\)-equality for the general identity
type are replaced by the following two rules:
\[\frac{\Gamma\vdash c {:} \I(A,a,a')} {\Gamma \vdash a=a' {:} A} \hspace{3em}
\frac{\Gamma\vdash c {:} \I(A,a,a')} {\Gamma\vdash c = \r {:} \I(A,a,a')}\]
The first causes the distinction between
propositional and judgmental equality to disappear. The second forces
identity proofs to be unique. Unlike the rules for the intensional
identity type former, the rules for extensional identity types do not
fit into the schema for inductively defined types mentioned above.
These rules are however justified by the meaning explanations in
Martin-Löf (1982). This is because the categorical judgment
\[\vdash c {:} \I(A,a,a')\]
is valid iff \(c \Rightarrow \r\) and the judgment \(\vdash a = a' {:}
A\) is valid.
However, these rules make it possible to define terms without
normal forms. Since the type-checking algorithm relies on the
computation of normal forms of types, it no longer works for
extensional type theory, see (Castellan, Clairambault, and Dybjer 2015).
On the other hand, certain constructions which are not available in
intensional type theory are possible in extensional type theory. For
example, function extensionality
\[(\Pi x {:} A. \I(B,f\,x,f'\,x)) \rightarrow \I(\Pi x{:}A.B,f,f')\]
is a theorem.
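The principle can be stated in a proof assistant as follows (a Lean illustration; note that Lean makes function extensionality provable by a different mechanism, namely quotient types, rather than via extensional identity types):

```lean
-- Function extensionality: pointwise equal functions are equal.
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g :=
  funext h
```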
Another example is that \(\W\)-types can be used for encoding other
inductively defined types in Extensional Type Theory. For example, the
Brouwer ordinals of the second and higher number classes can be
defined as special instances of the \(\W\)-type (Martin-Löf 1984). More
generally, it can be shown that all inductively defined types which
are given by a *strictly positive type operator* can be
represented as instances of well-founded trees (Dybjer 1997).
### 7.2 Univalent Foundations and Homotopy Type Theory
Univalent foundations refer to Voevodsky's programme for a new
foundation of mathematics based on intuitionistic type theory and
employing ideas from homotopy theory. Here every type \(A\) is
considered as a space, and the identity type \(\I(A,a,b)\) is the space
of paths from point \(a\) to point \(b\) in \(A\). Iterated identity types represent higher homotopies, e.g.
\[\I(\I(A,a,b),f,g)\]
is the space of homotopies between the paths \(f\) and \(g\).
An
ordinary set can then be thought of as a discrete space \(A\) in which
all paths in \(\I(A,a,b)\) are trivial.
The origin of these ideas
was the remarkable discovery by Hofmann and Streicher (1998) that the axioms of
intensional type theory do not force all proofs of an identity to be equal; that is, not all paths need to be trivial. This was
shown by a model construction where each type is interpreted as a
groupoid.
Further connections between identity
types and notions from homotopy theory and higher categories were
subsequently discovered by Awodey and Warren (2009), Lumsdaine (2010), and
van den Berg and Garner (2011). Voevodsky realized that the whole of intensional intuitionistic type
theory could be modelled by a well-known category studied in homotopy
theory, namely the Kan simplicial sets. Inspired by this model he
introduced the *univalence axiom*. For a universe
\(\U\) of small types, this axiom states that the substitution map associated with
the \(J\)-operator
\[\I(\U,a,b) \longrightarrow \T(a) \cong \T(b)\]
is an equivalence. Equivalence (\(\cong\)) here refers to a general notion of
equivalence of higher dimensional objects, as in the
sequence *equal elements, isomorphic sets, equivalent groupoids,
biequivalent bigroupoids*, etc. The univalence axiom expresses
that "everything is preserved by equivalence", thereby
realizing the informal categorical slogan that all categorical
constructions are preserved by isomorphism, and its generalization,
that all constructions of categories are preserved by equivalence of
categories, etc.
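In the notation that has since become standard, the axiom can equivalently be summarized as an equivalence between the identity type and the type of equivalences (a restatement of the formulation above):

\[\I(\U,a,b) \cong (\T(a) \cong \T(b))\]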
The axiom of univalence was originally justified by
Voevodsky's simplicial set model. This model is however not
constructive, and Bezem, Coquand and Huber (2014 [2013]) have more
recently proposed a model in Kan cubical sets.
Although univalent foundations concern preservation of mathematical
structure in general, strongly inspired by category theory,
applications within homotopy theory are particularly actively
investigated. Intensional type theory extended with the univalence
axiom and so-called higher inductive types is therefore also called
"homotopy type theory". We refer to the entry on
type theory for further details.
### 7.3 Partial and Non-Standard Type Theory
Intuitionistic type theory is not intended to model Brouwer's
notion of *free choice sequence*, although lawlike choice
sequences can be modelled as functions from \(\N\). However, there are
extensions of the theory which incorporate such choice sequences:
namely *partial type theory* and *non-standard type
theory* (Martin-Löf 1990). The types in partial type theory
can be interpreted as Scott domains (Martin-Löf 1986, Palmgren
and Stoltenberg-Hansen 1990, Palmgren 1991). In this way a type \(\N\)
which contains an infinite number \(\infty\) can be
interpreted. However, in partial type theory all types are inhabited
by a least element \(\bot\), and thus the propositions as types
principle is not maintained. Non-standard type theory incorporates
non-standard elements, such as an infinite number \(\infty {:} \N\)
without inhabiting all types.
### 7.4 Impredicative Type Theory
The inconsistent version of intuitionistic type theory of
Martin-Löf (1971a) was based on the strongly impredicative axiom that
there is a type of all types. However, Coquand and Huet (1988) showed with their
calculus of constructions that there is a powerful impredicative but
consistent version of type theory. In this theory the universe \(\U\)
(usually called \({\bf Prop}\) in this theory) is closed under the following formation rule
for cartesian product of families of types:
\[\frac{\Gamma \vdash A \hspace{2em} \Gamma, x {:} A \vdash B {:} \U}
{\Gamma \vdash \Pi x {:} A. B {:} \U}\]
This rule is more general than the rule for constructing small
cartesian products of families of small types in intuitionistic type
theory, since we can now quantify over arbitrary types \(A\),
including \(\U\), and not just small types. We say that \(\U\) is
impredicative since we can construct a new element of it by
quantifying over all its elements, including the very element being
constructed.
The motivation for this theory was that inductively defined types
and families of types become definable in terms of impredicative
quantification. For example, the type of natural numbers can be
defined as the type of Church numerals:
\[\N = \Pi X {:} \U. X \rightarrow (X \rightarrow X) \rightarrow X {:} \U\]
This is an impredicative definition, since it is a small type which
is constructed by quantification over all small types. Similarly we
can define an identity type by impredicative quantification:
\[\I(A,a,a')= \Pi X {:} A \rightarrow \U. X\,a \rightarrow X\,a' {:} \U\]
This is Leibniz' definition of equality: \(a\) and \(a'\) are
equal iff they satisfy the same properties (ranged over by \(X\)).
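The impredicative encoding of the natural numbers can be replayed in any language with rank-2 polymorphism; the following TypeScript sketch (the language choice and all names are assumptions of this illustration) also shows how instantiating the quantified type variable reads a numeral back:

```typescript
// Church numerals: N = ΠX:U. X -> (X -> X) -> X
type Church = <X>(z: X, s: (x: X) => X) => X;

const zero: Church = (z, _s) => z;   // the numeral 0
const succ = (n: Church): Church =>  // the successor operation
  (z, s) => s(n(z, s));

// Instantiating X := number reads an ordinary number back out.
const toNumber = (n: Church): number => n(0, (x) => x + 1);
```

For example, `toNumber(succ(succ(zero)))` evaluates to `2`. Note that `succ` builds a new element of the quantified type by an expression that itself quantifies over all types, which is exactly the impredicativity at issue.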
Unlike in intuitionistic type theory, the function type in
impredicative type theory cannot be interpreted set-theoretically in a
straightforward way; see Reynolds 1984.
### 7.5 Proof Assistants
In 1979 Martin-Löf wrote the paper "Constructive Mathematics
and Computer Programming" where he explained that intuitionistic
type theory is a programming language which can also be used as a
formal foundation for constructive mathematics. Shortly after that,
interactive proof systems which help the user to derive valid
judgments in the theory, so-called proof assistants, were
developed.
One of the first systems was the NuPrl system (PRL Group 1986),
which is based on an extensional type theory similar to that of
Martin-Löf (1982).
Systems based on versions of intensional type theory go back to the
type-checker for the impredicative calculus of constructions which was
written around 1984 by Coquand and Huet. This led to the Coq system,
which is based on the calculus of inductive constructions
(Paulin-Mohring 1993), a theory which extends the calculus of
constructions with primitive inductive types and families. The
impredicative encodings of inductive types in the pure calculus of
constructions were found to be inconvenient, since the full
elimination rules could not be derived and instead had to be
postulated. We also remark that the calculus of
inductive constructions has a subsystem, the predicative calculus of
inductive constructions, which follows the principles of
Martin-Löf's intuitionistic type theory.
Agda is another proof assistant which is based on the logical
framework formulation of intuitionistic type theory, but adds numerous
features inspired by practical programming languages (Norell 2008). It
is an intensional theory with decidable judgments and a type-checker
similar to Coq's. However, in contrast to Coq it is based on
Martin-Löf's predicative intuitionistic type theory.
There are several other systems based either on the calculus of
constructions (Lego, Matita, Lean) or on intuitionistic type theory
(Epigram, Idris); see (Pollack 1994; Asperti *et al.* 2011; de Moura *et al.* 2015;
McBride and McKinna 2004; Brady 2011).
## 1. Conceptual Foundations and Motivations
### 1.1 Definition of "ultimate"
Brahman, the Dao, emptiness, God, the One, Reasonableness--there,
in alphabetical order, are names of the central subjects of concern in
what are commonly parsed as some of the world's religions,
philosophies and
quasi-religious-philosophies.[1]
They are all names for what is ultimate, at least on some uses of the
names (for instance, "God" is not always taken to be
ultimate, more soon). But what is it to be ultimate, in this
sense?
To answer in terms of its use, the term "ultimacy", meaning
the state or nature of being ultimate, has Brahman, the Dao, emptiness
etc. as instances. To answer semantically, with a meaning, is
difficult for at least two reasons. First, though there is abundant
precedent in the literature for collecting these subjects as ideas of
ultimacy,[2]
doing so presupposes they have some shared characteristic or family
resemblance that makes them count as ultimate. But *is* there a
shared core idea of being ultimate? "Particularists" among
others argue no: the diverse range of cultural and historical contexts
from which these subjects come, coupled with how hard it is to talk
across such contexts, makes them all "separate cultural
islands" (Hedges 2014: 206; see also Berthrong 2001:
237-239,
255-256).[3]
The second concern is related, and not far from one Tomoko Masuzawa
(2005) among others has raised about religion: even if we found a
substantive account of ultimacy visible in multiple traditions, such
an account will necessarily be born of a cultural conceptual
context. Thus, far from delivering the notions at work in other
traditions, such an account actually risks de-forming them.
Regarding the first concern, John H. Berthrong, among others, is far
more optimistic than the particularists that concepts not only can be
shared across cultures but in fact are
>
>
> already comparative, having been generated by the interactions of
> people, texts, rituals, cultural sensibilities and the vagaries of
> history and local customs. (2001: 238)
>
>
>
Other theorists explore factors that could detail or add to
Berthrong's list--e.g., trade and conquests (Gayatriprana
2020), shared human evolutionary biology (Wildman 2017), and the
evolution of moral development (Wright
2010).[4]
Still, most take the second concern about enculturation to stick and
thus to temper the optimism: there is *both* a shared humanity
*and* real cultural difference to own in reaching a global idea
of ultimacy. Raimon Panikkar says it well:
>
>
> *Brahman* is certainly not the one true and living God of the
> Abrahamic traditions. Nor can it be said that Shang-ti or
> *kami* are the same as *brahman.* And yet they are not
> totally unrelated. (Panikkar 1987 [2005: 2254])
>
>
>
The Cross-Cultural Comparative Religious Ideas Project, run by Robert
C. Neville and Wesley Wildman from 1995-99, balanced the overlap
and difference when they concluded that an account of ultimacy should
be a "properly vague" category: it needs enough shared
content to count as a category, but enough vagueness to cover the
disparate instances generally taken to be
ultimates.[5]
There are multiple contenders for such vague categories, including,
e.g., Paul Tillich's "object of ultimate concern"
(1957a, e.g., 10-11), John Hick's "the Real"
(1989: 11ff); Keith Ward's "the Transcendent"
(1998); and the Project's own proposal as "that which is
most important to religious life because of the nature of
reality" (Neville & Wildman 2001, see 151ff for an
explanation of each part of the phrase).
Informed by the Project's finding, this entry will
"vague-ify" the content of John Schellenberg's
account of what is ultimate to use as a cross-cultural core idea of
ultimacy and as an organizing principle. To paraphrase, Schellenberg
takes being ultimate to mean being that which is (1) the most real,
(2) the most valuable, and (3) the source of deepest fulfillment among
all that is or perhaps could be. More carefully, and in
Schellenberg's words, being ultimate requires being ultimate in
three ways: (1) metaphysically ultimate, i.e., the "most
fundamental fact" about the nature of things (2016: 168), (2)
axiologically ultimate, i.e., that which "has unsurpassably
great value" (2009: 31), meaning greatness along all its
categories of being, and (3) soteriologically ultimate, i.e.,
"the source of an ultimate good (salvific)" (2009: xii,
also 2005: 15), meaning being the source of salvation or liberation of
the kind practitioners of the world's religions and philosophies
ardently seek (e.g., nirvana, communing with God, moksha, ascent to
the One, etc.), whether these all amount to the same salvation (e.g.,
Hick 1989: Ch. 14) or constitute radically distinct types of
salvations (Heim 1995: Ch. 5). Schellenberg's choice of these
three terms is insightful: most extant takes on ultimacy *per
se* and on Brahman, God and the Dao in particular are variations
on a theme of Schellenberg, as scrutiny of even the brief definitions
above as well as the models in
Section 2
will bear
out.[6]
Schellenberg's account of what it takes to be ultimate is
already somewhat vague: he recommends no further precisification of
his three terms in order to stay open about what counts as ultimate as
our knowledge grows (2009: 31). This entry will loosen his account
further by placing a disjunction between its terms instead of a
conjunction. That is, Schellenberg takes metaphysical, axiological and
soteriological ultimacy to be severally necessary and jointly
sufficient for something to count as ultimate; he requires
"triple ultimacy", as James Elliott calls it, for ultimacy
*per se* (2017: 103-04). Those who agree in principle
include, e.g., Clooney 2001 and Rubenstein
2019,[7]
as well as, e.g., Aquinas, Leibniz and Samuel Clarke, who in fact
argue that, given the entailments between the terms, there is triple
ultimacy or none at
all.[8]
Others take double or even single ultimacy to be not only possible
but also sufficient for being ultimate. For example, Neville defines
ultimacy in strictly metaphysical terms as "the ontological act
of creation" (2013: 1); Elliott and Paul Draper each take
soteriological ultimacy to be sufficient for ultimacy (Elliott 2017:
105-109; Draper 2019: 161); and John Bacon takes his
understanding of God as "<Creator, Good>" to be
metaphysically and soteriologically ultimate though a "let-down
axiologically" (2013: 548). Moreover, requiring triple ultimacy
stops some of the paradigmatic models of Brahman, God and the Dao from
counting as ultimacy, as
Section 2
will demonstrate (see also Diller 2013b). Thus, this entry will take
some combinations of the three types of ultimacy to be sufficient for
being ultimate, without settling which, provided nothing else in a
system has more. Note that replacing Schellenberg's conjunction
with a disjunction makes the field of ultimates a family resemblance
class.
Even disjunctivized, the Schellenbergian view adopted here is clearly
an inheritance from Abrahamic perfect being
theology,[9]
so two cautions about scope. First, when we look outward and find
that some non-Abrahamic traditions have ultimacy in the sense
here--as we will in
Section 2
for Hinduism and Daoism--we should acknowledge that this result
comes framed from the outside. Second, not all non-Abrahamic
traditions will have ultimates in the sense just adopted. For
instance, to offer just one example, Barbara Mann suggests that in
non-colonized interpretations of Native American spiritualities there
is nothing that is ultimate in Schellenberg's sense (2010:
33-36).[10]
So ultimacy as just defined may be widespread in the world's
religions and spiritualities, but it is not universal.
Finally, three points of clarification about terminology. First,
talking about ultimacy does not entail that anything ultimate exists.
The concern that it might is related to "the problem of singular
negative existential statements", to which Frege and Russell
offered solutions that in turn have been enshrined in predicate logic,
though not without complaint (for more see Kripke 2013 and the SEP
articles on
existence
and on
nonexistent objects).
Second, the term "ultimate reality" could be taken to
mean *how reality ultimately is*--i.e., that which is
metaphysically ultimate, perhaps a part of every complete
ontology--instead of meaning the richer sense of
"ultimacy" at issue here, which is not a part of every
complete
ontology.[11]
To avoid confusion, this entry will reserve the term "ultimate
reality" for metaphysical ultimacy *per se* and use
"ultimate" and variants for the combinations of the
Schellenbergian
disjunction.[12]
Thus framed, the distinction leaves open this central question: is
ultimate reality ultimate? Lastly, regarding the choice of the term
"ultimate" and its variants, there is a syntactic parallel
to the semantic issue above: we need a sufficiently vague kind of
speech to cover the diverse ontological kinds implicit in accounts of
ultimacy, including concrete or abstract particular things (e.g., God
or Brahman on some views); states of being (e.g.,
Existence-Consciousness-Bliss for Brahman, see
Section 2.1);
properties (e.g., everything is empty in Buddhism or divinely
intentional for Karl Pfeifer, see
Section 2.2);
actions and events that things perform or undergo (D. Cooper models
God as "a verb" as in "God-ing", 1997: 70; cf.
Bishop & Perszyk 2017); and grounds of being that are meant to be
category-less (e.g., as in "the creative source of the
categories themselves", Vallicella 2006 [2019], see also, e.g.,
Tillich in
Section 2.2,
the Dao in
Section 2.3).
Though no term is quite right given the diversity, this entry uses
the adjective form of being "ultimate" as primary, to
describe *x* as metaphysically, axiologically or
soteriologically ultimate per above, whatever *x*'s
ontological kind (cf. with the more familiar term from Abrahamic
monotheism of "divine"). It also uses "ultimacy"
as a noun for the nature or state of being ultimate (cf.
"divinity"), the nouns "the ultimate", "an
ultimate" or "ultimates" to function flexibly both as a
mass noun (such as "water" or "butter") for
the property or uncountable substance of being ultimate and as a count
noun for things, events and grounds of being (cf. "God" or
"gods"); and "to ultimize" as the verb form,
if ever we need
it.[13]
### 1.2 Definition of "model"
A "model" of what is ultimate is a way it can be conceived.
In general, a model is a representation of a target phenomenon for
some purpose. For instance, R. Axelrod developed a computational model
that represented a target of two "agents" caught in an
iterated prisoner's dilemma, with the purpose of solving the
dilemma.[14]
The term "model" in religious and philosophical contexts
is related to but not identical with its use in scientific contexts,
in ways Ian Barbour (1974) foundationally traced. Though there is some
disagreement between the fields on whether models describe their
target or not and what the point of modeling is, importantly, in both
contexts, the term "model" (1) connotes that its target is
somehow out of reach--not able to be directly examined--and,
perhaps by force of this, (2) encodes a conceptual distance between
the target and the model. In particular, the model is not a copy of
the target but rather chooses revealing aspects of the target to relay
by leaving out or distorting other
aspects.[15]
Think of a model of a city that is by design not to scale, precisely
so viewers can see the relationships between the city's streets,
buildings and neighborhoods. Models are thus simultaneously
epistemically instructive and humble.
Thus understood, "model" captures well the ways people
have thought about what is ultimate. Taking a model's target to
be obscured per (1) and the model itself to be fallible per (2) is not
only apt but also crucial for thinking about what is ultimate given
our necessarily limited knowledge of it (see
Section 1.4).
Moreover, among the choices on the linguistic menu,
"model" is a middle way. It is more specific than
"idea" understood in the Cartesian sense as
"whatever is immediately perceived by the
mind",[16]
since a model is the more particular kind of idea just relayed. At
the same time, "model" is general enough to cover the very
diverse kinds of extant linguistic accounts of what is ultimate in the
literature, including, e.g., "concepts" understood in the
classic sense as necessary and sufficient conditions for being
ultimate, such as Anselm's idea of God as "that than which
no greater can be conceived;" "conceptions" which
are "more particular fillers" for more general concepts,
such as Richard Swinburne's verdict about what it would take to
be that than which nothing greater can be
conceived;[17]
sustained "metaphors" such as Ramanuja's Brahman as
the "Soul" of the cosmos (see
Section 2.1)
or Sallie McFague's "God as Mother, Lover and
Friend" (1987); and "indexical signs" that point to
an indeterminate ultimate visible in Neville (2013). This entry will
call all these linguistic types "models" of what is
ultimate, given that they each in their own way represent or aim at a
target of what is ultimate for various purposes, at least as much as
language can (more in
Section 1.4).[18]
As
Section 1.5
will detail, some models of what is ultimate nest, e.g.,
Shankara's idea of Brahman is a species of Vedantic ideas of
Brahman which in turn are species of Hindu ideas of Brahman. To
simplify, this entry will use the term "model" for ideas
of what is ultimate at all levels, and take a model's target to
be "the ultimate" if the modeler thinks what is ultimate is
single or uncountable (as in models of e.g., Brahman, God and the Dao)
or "ultimates" if multiple (as in polytheistic or perhaps
communotheistic models).
### 1.3 Motivations
Understanding the scope of the work on modeling what is ultimate is
central in multiple ways for assessing claims about its existence.
First, the meaning we have in mind for "*x*" can
decide our take on whether *x* exists. For example, in the case
of God, some are convinced by arguments from suffering that there is
no God, but such arguments, even if they succeed, generally apply only
to God conceived as an omnipotent, omniscient and omnibenevolent
("OOO") person and as written do not apply to God
conceived differently ("generally" since there are notable
exceptions, e.g., Bishop 2007 and Bishop & Perszyk 2016). Further,
the act of abandoning one model while being aware that there are
alternative models can set up an exploration into the field of models
of the ultimate or ultimates. For example, if one decides there is no
OOO personal God, one might ask: are there any *other* models
of God genuinely worth the name "God" without these
properties? If not, are there perhaps any non-theistic (non-God)
models of what is ultimate without these properties? In other words,
for those who have a limited sense of what it takes to be ultimate
(e.g., what is ultimate is a OOO God or nothing at all), a search of
extant models can open up the range of options on their menu and
position them to see whether any are philosophically palatable. In
fact, such a search may be required in order to settle the question of
whether anything ultimate exists, because it is invalid to conclude
that there is nothing ultimate on the basis of arguments against
specific models of ultimacy if there could still be genuine ultimacy
of another kind (see Diller 2016: 123-124).
In addition to their pivotal role in deciding questions of existence,
notions of what is ultimate profoundly impact questions about
religious diversity, such as whether multiple religions can have
simultaneously true core beliefs. For example, on "plenitude of
being" models--where the ultimate is infinitely full of
incommensurable content--multiple religions are taken to be
grasping different aspects of it accurately. So multiple religions can
have true beliefs about what is ultimate, even if each has only a part
of the truth. Finally, models of what is ultimate are also intimately
bound up with philosophical questions of meaning, e.g., in
meaning's "Cosmic" and "Ultimate" senses
as Rivka Weinberg explains them, though she decides that even if there
were something ultimate, it would not finally deliver meaning in
either sense (2021).
### 1.4 Challenges
Though some model what is ultimate, others object to the entire
project of modeling it. There are three main kinds of concerns in the
literature: about the existence of ultimacy, about the coherence of
the very idea of ultimacy, and about whether talk and knowledge of
ultimacy is humanly possible, even assuming the idea is coherent. The
first concern is motivational: for those who think there is nothing
ultimate, there seems no point worrying about how it has been
conceived. However, as indicated in
Section 1.3,
it is invalid to conclude that there is nothing ultimate without
having some sense of the range of what it might be. So even those who
think there is nothing ultimate are thrown into the project of
modeling it--at least enough to license the conclusion that this
work is not worth doing.
To offer just one example of the second challenge, Stephen Maitzen
(2017) argues against the concept of ultimacy on multiple fronts,
including, e.g., that nothing can be metaphysically ultimate in
Schellenberg's sense because it would need to be simultaneously
*a se* *and* concrete so that it can be metaphysically
independent *and* explain concrete things, respectively. But no
concrete thing can explain itself because (to shorten the argument
considerably) even the necessity of such a thing is not identical to
the thing itself, so it is not the thing that explains itself (2017: 53). If Maitzen is right,
nothing can be metaphysically ultimate in Schellenberg's sense,
and thus not metaphysically, axiologically and soteriologically
ultimate at once.
The third kind of concern is more widespread--that it makes no
sense for us to model what is ultimate because it is beyond human
language (ineffable) or beyond human cognitive grasp (cognitively
unknowable), or both. Why think this? To combine a few common
arguments: what is ultimate (1) goes beyond the world, (2) is in a
class by itself, and (3) is infinite, while our predicates
(1') are suited to describe things in the world, (2') classify
things with other things, and (3') are limited (see, e.g.,
Wildman 2013: 770; Seeskin 2013: 794-795). Thus, for any
*P*, a statement of the form "ultimacy is
*P*" seems necessarily false, if it is meaningful at all.
Ineffability and unknowability are related: if we can say nothing true
about ultimacy, then we can know nothing about it--at least
nothing that can be said in words.
That last hesitation--"at least not in
words"--leaves room for, e.g., embodied ways of knowing by
way of religious, mystical or spiritual experiences which are reported
in the world's religious traditions and more generally (whether
such experiences actually happen or not, see, e.g., James 1902
[1961]). Still, the combined arguments above and the gut intuition
behind them represent an enormous challenge to the whole enterprise of
modeling what is ultimate *in words*. There are those who opt
for silence in the face of these arguments, and thus understandably
but alas "literally disappear from the conversation"
(Wildman 2013: 768, "alas" because they are missed). One
main move of those who do keep talking is to distinguish what is
ultimate as it is in itself, which they concede we can never talk
about or know, from it as it affects our experience, which they think
we *can* talk about and know. So some distinguish, e.g., the
absolute from the relative Dao (*Daodejing,* chapter 1); the
Godhead from God as revealed (Meister Eckhart, e.g., Sermon 97;
Panikkar 1987 [2005: 2254]); the noumenal from the phenomenal Real (Hick
1989) etc. and talk or make knowledge claims only about the latter
(see also, e.g., Paul Hedges 2020
[Other Internet Resources]).[19]
Some also claim to make true statements about what is ultimate by
restricting those statements to certain kinds of claims about it. One
tactic is to talk about how ultimacy is related to us or to other
parts of the natural (or non-natural) world instead of talking about
how it is in itself, i.e., to talk about its extrinsic vs. intrinsic
properties. For example, Maimonides suggests that one way to make
"ultimacy is *P*" true is to make *P* an
"attribute of action", i.e., an "action that he who
is described has performed, such as *Zayd carpentered this
door*..." (*The Guide of the Perplexed*,
I.52-3, italics added), an attribute which says nothing about
Zayd's intrinsic properties save that he has what it takes to
carpenter this door. An analogue of this for ultimacy is, e.g.,
*the Dao generated being*, again an attribute which indicates
only that the Dao, whatever it is or is not, can and has generated
being. Other possible ways of speaking truly about ultimacy include
famously the *via negativa* ("God is not
*P*", Shaffer 2013: 783), the *via eminentiae*
("God is better than *P*"), the way of analogy
("God is perfectly *P*", Copleston 1952: 351 on
Aquinas, e.g., *Summa Theologica* I, 13 and *Summa contra
Gentiles* I, 30, see also Kennedy 2013: 158-159), the way of
super-eminence ("God is beyond *P* or not
*P*", Pseudo-Dionysius, Shaffer 2013: 786ff)
and--though this seems doable in theory only--equivocal
predication ("God is *P*\*" where *P*\* is a
predicate outside of human language, Shaffer 2013: 783). What does all
this really allow us to say, know, and do philosophically, though?
More than one might think, says Neville: though ineffability might
seem to stop metaphysics, it actually tells us how to do the
metaphysics, e.g., "the *dao...*can be discussed
mainly by negations and indirections" (2008: 43), or in these
other ways.
### 1.5 Philosophical categories of ultimacy
For those who decide to model what is ultimate in the face of or
informed by these challenges, there are several common categories, or
"model types" as Philip Clayton once called them, which
distinguish kinds of models of what is ultimate from each other.
Knowing these categories can organize what might be an otherwise
haphazard array of models, in something like the way knowing what an
oak, maple and birch are can help sort the sights on a walk through a
forest. The categories of ultimacy are best grasped by framing them
with a question. For example, Hartshorne and Reese (1953) categorized
models of God in particular as the logically possible variations on a
theme of five questions, whose positive answers get symbolized by
ETCKW:
* E: Is God eternal?
* T: Is God temporal?
* C: Is God conscious?
* K: Does God know the world?
* W: Does God include the world?
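As an aside, the combinatorics of this scheme can be made vivid with a small sketch (a toy illustration of my own, not from Hartshorne and Reese): five yes/no questions yield 2^5 = 32 logically possible combinations, each labeled by the letters of the questions answered "yes".

```python
# Toy illustration (not from the source): enumerate the logically
# possible answer-combinations to Hartshorne and Reese's five questions.
from itertools import product

QUESTIONS = "ETCKW"  # Eternal, Temporal, Conscious, Knows world, includes World

def all_labels():
    """Every logically possible combination of yes/no answers,
    labeled by the letters answered 'yes' (ECKW, ETCKW, etc.)."""
    labels = []
    for answers in product([True, False], repeat=len(QUESTIONS)):
        label = "".join(q for q, yes in zip(QUESTIONS, answers) if yes)
        labels.append(label)
    return labels

labels = all_labels()
print(len(labels))        # 32 possible combinations
print("ECKW" in labels)   # True: the label the entry gives Shankara's pantheism
print("ETCKW" in labels)  # True: Hartshorne and Reese's own panentheism
```

Only some of the 32 combinations correspond to models anyone has actually held; the scheme's point is to lay out the full logical space so extant models can be located within it.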
To use models that will be discussed in
Section 2,
Shankara's pantheism is ECKW since he takes Brahman to be an
"Eternal Consciousness, Knowing and including the World"
that is atemporal and never changes; Hartshorne and Reese's
own panentheism is ETCKW, etc. (see their 1953 [2000: 17] and Viney
2013 for more examples). Wildman (2017) has his own set of questions
and entire system built out of them (more in
Section 3),
which adds another important question that ETCKW leaves out or
perhaps merely implies: Is what is ultimate
personal/anthropomorphic--i.e., is it aware and does it have
intentions--or is it impersonal/non-anthropomorphic? One might
also look for functional categories: Is what is ultimate the
efficient, material or final cause of the universe? Does it intervene
in it? Does it provoke religious experience? And so on.
Another question that may be asked about a particular model is: How
many ultimates does the model conceive ultimacy to be? Though the very
idea of plural ultimates sounds contradictory--wouldn't one
thing need to beat out all competitors in order to satisfy the
superlatives "most fundamental" fact,
"highest" value, "deepest" source of
fulfillment?--those who model "ultimate multiplicity"
see a tie for first place. Some models of Zoroaster's view, for
instance, take there to be two ultimates, the good Ahura Mazda and the
evil Angra Mainyu engaged in fundamental battle (though such models
will have to explain why Ahura Mazda is not the true ultimate given
his apparent axiological edge on Angra Mainyu). John Cobb and David
Ray Griffin at least some of the time take there to be three distinct
ultimates: the Supreme Being experienced in theistic experiences,
Being Itself (or emptiness) experienced in acosmic experiences, and
the Cosmos experienced in cosmic experiences (Griffin 2005:
47-49).[20]
Monica Coleman models what is ultimate in traditional Yoruba religion
as a "communotheism", in which "the Divine is a
community of gods who are fundamentally related to each other but
ontologically equal", including
*Olodumare* and the 401
*orisa* (2013: 345-349). All the
models we will look at in
Section 2
will be "ultimate unities" although there will be some
diversity in the unity for the panentheisms in particular. George
Mavrodes also cautions us to be wary of the unity/multiplicity
distinction:
> there are monotheisms that seem to include an element of
> multiplicity--e.g. Christianity with its puzzling idea of the
> divine trinity--and views of divine multiplicity--such as
> the African religions [have]--that seem to posit some sort of
> unity composed of a large number of individual divine entities. (2013:
> 660)
Probably the most frequently-used categories to sort models from each
other are the logically exhaustive answers to the question: How does
the ultimate or ultimates relate to the world? Though the answers (and
indeed the question) are commonly framed with the word
"God", as the *theos* root in some of the category
names below betrays, their descriptions at least are framed here with
"ultimacy" to include non-theistic ultimates,
too:[21]
* *Monism* (literally, one-ism): There exists just one thing
or one kind of thing (the "One" or "Unity"),
depending on how the monism is read: either ultimacy is identical to
the world, or all there is is ultimacy, or all there is is the world.
*Pantheism* is a species of monism in which the one thing or
kind of thing is God, or as Linda Mercadante once said in
conversation, it is a monism in which the emphasis is on the divinity
of the world instead of on the worldliness of the divine. Baruch
Spinoza's view of "God, or Nature" (*Deus sive
natura*) is often deemed a
pantheism,[22]
though some experts deem it a panentheism (1677, Part IV.4; see,
e.g., Edwin Curley 2013).
* *Panentheism* (all-in-God-ism): The world is a proper part
of ultimacy. I.e., though the world is in ultimacy, ultimacy is more
than the world. Panentheism is a wide tent since there is ambiguity in
the "in", but R. T. Mullins, for example, offers one
disambiguated panentheism: take space and time to be attributes of
God, and "affirm that the universe is literally in God because
the universe is spatially and temporally located in God" (2016:
343).
* *Merotheism* (part-God-ism): Ultimacy is a proper part of
the world. I.e., though ultimacy is in the world, the world is more
than ultimacy. This type is rare, but see, e.g., Alexander 1920 and
Draper 2019 (more soon).
* *Dualism* (two-ism): Ultimacy is not the world and the
world is not ultimacy, so there are two fundamentally different things
or kinds of things, depending on how the underlying ontology is read.
Dualism includes both the view that ultimacy and the world are
disjoint (they share no parts) and the view that they bear a relation
of proper overlap (they overlap in part, but not in
whole).[23]
The various models of God, Brahman and the Dao in
Section 2
taken together will instantiate all four of these categories.
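Read mereologically, these four categories exhaust the possible ultimate-world relations. A minimal sketch (my own illustration, treating "parts" as set elements; none of the code is from the entry itself) makes the partition explicit:

```python
# Toy mereology (illustration only): classify the ultimate-world
# relation by comparing the parts of ultimacy and the parts of the world.
def classify(ultimacy: set, world: set) -> str:
    if ultimacy == world:
        return "monism"        # ultimacy is identical to the world
    if world < ultimacy:       # proper subset
        return "panentheism"   # the world is a proper part of ultimacy
    if ultimacy < world:
        return "merotheism"    # ultimacy is a proper part of the world
    # Disjoint or properly overlapping: two fundamentally different things.
    return "dualism"

print(classify({"a", "b"}, {"a", "b"}))       # monism
print(classify({"a", "b", "c"}, {"a", "b"}))  # panentheism
print(classify({"a"}, {"a", "b"}))            # merotheism
print(classify({"a"}, {"b"}))                 # dualism (disjoint)
print(classify({"a", "b"}, {"b", "c"}))       # dualism (proper overlap)
```

Note that, as in the entry's definition, dualism covers both the disjoint case and the proper-overlap case, so every pair of non-identical, non-nested ontologies lands there.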
## 2. Models of Brahman, God, and the Dao
This section turns to multiple models of three ultimates extant in
living world traditions: Brahman in the Hindu traditions indigenous in
India, God in the Abrahamic traditions (Judaism, Christianity and
Islam) indigenous in the Middle East and the Arabian Peninsula, and
the Dao in the Daoist and Ru (Confucian) traditions indigenous in
China. The sections here are ordered historically, with the idea of
Brahman surfacing first in the Rig Veda ca. 1500-1000 BCE, the
idea of the Dao during the Warring States period ca. fifth century
BCE, and the idea of God in the Jewish tradition shifting from a
henotheistic to a monotheistic deity somewhere in between.
A main point of looking at these models is to grasp some ways people
across the globe have conceived and are conceiving of what is most
real, most valuable or most fulfilling to them--knowledge worth
having for its own sake. Looking at the models also deepens
understanding of ultimacy in general, by seeing how it plays out in
the specifics. Finally, studying the models is a window into how they
relate. Specifically, as we go, look for (1) the wide
intra-traditional disagreement about how to model what is ultimate
(e.g., there are Hindu monisms, panentheisms and dualisms) and related
(2) the significant inter-traditional agreement about it (e.g., there
are Hindu, Christian, and Daoist panentheisms). The amount of
disagreement *within* a tradition makes talk about "the
idea of Brahman" or "the idea of God" ambiguous, and
crucially so in some contexts, as indicated in
Section 1.2.
The amount of agreement *between* traditions creates strange
bedfellows across the landscape of models--strange because the
modelers disagree about their religious or philosophical tradition,
but bedfellows all the same because they agree about which
philosophical categories from
Section 1.5
best suit what is ultimate (see Diller 2013a). The existence of such
agreement at the very heart of diverse traditions is an important
fact.
One perplexing issue before we encounter the models is the range of
their connection to a lived religious tradition. The "religious
models" purport to provide an understanding of reality as it is
believed (or held by faith) to be in a particular tradition--as
evidenced by their efforts to capture as much as they can of a
tradition's key texts, figures, practices, symbols
etc.--while "philosophical models" do not do this.
For example, the models of the Dao in
Section 2.3
seem to be religious models given the regular citations of the
*Daodejing* and the *Zhuangzi* and liturgical practices
in explication and defense of the models. Some philosophical models
include Aristotle's Unmoved Mover or divine *Nous*
(*Physics* Bk. VIII, *Metaphysics* Bk. XII),
Plato's Demiurge in the *Timaeus* (29-30) or
Plotinus' One in the *Enneads*. None of these modelers
turn to traditional sources to explain or defend their views; indeed,
it is not even clear what traditional sources would be germane.
Perhaps there are intermediate "quasi-religio-philosophical
models", e.g., Spinoza 1677 or Hegel 1832, which have some
salient connection with a religious tradition, either because of the
model's status as a revision of a traditional model or because
they share some key features of ultimacy within a religious
tradition.
The puzzle: are all these modelers modeling the same thing? This
question may be just a rehash of the "God of Abraham Isaac and
Jacob" vs. "God of the philosophers" issue (see
Section 2.2).
The answer to it is yes in the vague-categorial sense that both
philosophical and religious models are talking about that which is
fundamentally real, valuable or liberating, but no in the sense that
philosophical models are about what is ultimate *per se* while
the religious models are about what is ultimate as it appears in their
tradition. In any case, *nota bene*:
Section 2.2
(Models of God) houses several recent philosophical models because
they use the term "God" to name the ultimate they are
modeling. Perhaps they are misnamed and should be renamed models of
"the ultimate" and housed in an entirely different
section. Or perhaps they are rightly named "God" and God
is not dead as Nietzsche had it, but is becoming increasingly unhinged
from the lived religious traditions.
### 2.1 Models of Brahman
Though the term "Hinduism" and its variants were originally
foreign impositions during the British Raj in India, they have become
widely used in the public realm to refer to forms of religiosity and
spirituality that had their start in the Indus River valley at least
since 2000 BCE and have endured with great internal diversification
ever
since.[24]
The idea of Brahman was birthed in the Rig Veda (ca. 1500-1000
BCE), was defined in the Upanishads (ca. 500 BCE), came to full flower
in the Bhagavad-Gita (ca. 200 BCE) and the Brahmasutras (400-450
CE), and has been refined in commentaries about them for over a
millennium, first by Adi Shankara (788-820 CE) and then by many
others after him which constitute the Vedanta system of Hindu
philosophy (*Vedanta* = "end of the
Vedas").[25]
Vedanta has been dominant among the traditional six Hindu
philosophical systems for at least the last 500
years.[26]
All the Vedanta schools agree on three things about Brahman, or
ultimacy (here is this entry's first, very general model of an
ultimate). First, Brahman's nature is essentially
*sat-chid-ananda*, "Existence-Consciousness-Bliss",
which means that the metaphysical rock bottom of reality
is--surprise, given the phenomenal mess--a blissful
consciousness. Second, most of the schools take Brahman to be what
Jeffery D. Long calls a "theocosm": (1) God and (2) a
cosmos, meaning a universe/multiverse of all natural
forms.[27]
The original Sanskrit in the Vedantic texts is conceptually more
precise than the God-world vocabulary in
Section 1.5
because there is a name for God and the world put together: the
theocosm gets called "Brahman", God is
"Ishvara" (or "Narayana" or
"Krishna" etc.), and the cosmos is "samsara",
and the whole thing and each part are
eternal.[28]
Third, the Vedanta schools all agree that their view about Brahman is
associated with (1) an epistemological license in direct experience,
either in texts heard by spiritual adepts (*sruti*) or in
proponents' own firsthand spiritual experience (for much more,
see Phillips 2019), and (2) a life expression that lives out such
experience and the view so deeply that it is hard to know which came
first, the life expression or the metaphysical commitments that enable
it.[29]
What the Vedanta schools disagree about is the kind of link between
God and the cosmos in the theocosm. These disagreements spread them
out in a range.
On one end is the Dvaita (*dvaita* = "two") Vedanta
school, which is *dualistic* in its ultimate-world relation and
not far from the classical monotheistic views (model 2 in this entry).
It reads the theocosm to be two really distinct things, viz., God and
the cosmos, with no organic link between them. Though there is no
creation *ex nihilo* in Dvaita or any other Vedanta school,
Ishvara (God) is the Divine creator and sustainer of the distinct
cosmos and ensouls it and resides within it. The ultimate is the two
eternally co-existing, like an eternal dweller eternally content to
live in an eternal house It built out of something other than Itself.
Of the four traditional Hindu yogas, Dvaita is a deeply
bhakti-oriented system (*bhakti* = "devotion"),
generally practicing devotion to Vishnu/Krishna.
On the other end of the range is the Advaita Vedanta school
(*advaita* = "not two"), which is well-known in the West but not
as dominant in India. It reads the theocosm/Brahman as one/non-dual,
and takes God and the cosmos not to be really distinct, i.e., God =
the cosmos. The paradigmatic filling out of this view, often and
perhaps mistakenly attributed to Shankara, is *pantheistic* and
world-denying.[30]
Specifically (here is model 3), such classical Advaitans hold that
since reality is one and the cosmos is many, the cosmos cannot be
real. It *looks* as if there is a cosmos filled with many
things, but the cosmos is merely an appearance (*maya*), and
taking it to be reality is like taking a rope at dusk to be a snake
(Shankara's famous metaphor, see, e.g., Tapasyananda 1990: 34).
With the cosmos out of the picture, the theocosm is just the
"*theo*" part; to use the metaphor above, the
ultimate is all dweller, no house. That move makes the terms
"Brahman" and "God" interchangeable, and on
it, God-Brahman is generally read as impersonal. The epistemological
license in direct experience for this view is *samadhi*, a
state of spiritual absorption in which all dualities vanish and one
experiences just infinite bliss. Since the classical Advaitan view
denies the world, Anantanand Rambachan, a contemporary Advaitan, takes
its best life expression to be that of the renunciant
(*sannyasi*) who lives out that denial (2006: 69).
In the middle range between Dvaita and Advaita are a dozen subtly
distinct schools of mainstream Vedanta, often called the "Bhakti
schools" given their emphasis on devotion (the distinctions
between them make a dozen distinct models here). These schools are all
*panentheistic* and world-affirming. Their best representative
is Ramanuja (ca. 1017-1137), the first comprehensive critic of
Shankara, who synthesized Advaita with bhakti in a system called
Vishishtadvaita (meaning "non-duality of the qualified
whole", model 4 for this
entry).[31]
Like classical Advaitans, Ramanuja takes the theocosm/Brahman to be
non-dual because it is all God, but unlike them, he takes the cosmos
and the many things within it to be real and distinct from Brahman.
How can the ultimate be one but have parts? Ramanuja's
*panentheistic* answer: the cosmic parts form an
"inseparable and integral union" (*aprthaksiddhi*)
with Brahman, like the union of body and soul (*sarira* and
*sariri*): Brahman is the cosmos' soul, and the cosmos is
Brahman's body--a body which, to use Ramanuja's
illustration from a quote from the Upanishads,
> "is born in, sustained by, and is dissolved in
> Brahman"....[just] as pearls strung on a thread...are
> held as a unity without impairing their manifoldness. (Tapasyananda
> 1990: 6)
So for Ramanuja, Brahman builds a house not out of something else as
in Dvaita but out of Brahmanself, and eternally dwells in this eternal
Brahman-house which (here is the mark of panentheism) depends on
Brahman for its existence but not vice versa, i.e., Brahman could
exist without the cosmos, but the cosmos could not exist without Brahman. Ramanuja also takes
Brahman to be not impersonal but the Divine Person, known under
different sacred names, e.g., "Vishnu",
"Narayana", "Isvara", etc. The moves to a real
cosmos and divine personality have Ramanuja deciding that
Brahman's essential nature as *sat-chid-ananda* is filled
out with countless extrinsic "auspicious properties"
("qualities manifested in him in relation to finite
beings"), among them some of the classical perfections such as
omnipotence, omniscience and immutability, as well as compassion,
generosity, lordship, creative power, and splendor (1990:
36-37). The experiential epistemological licenses for
Vishishtadvaita are the combination of *samadhi* (which
justifies the non-dual piece) and *darsan* (visual contact with
the divine through the eyes of images, which justifies the
qualification to non-duality). Its best life expression is the
practice of bhakti yoga, supported by the reality and personality of
both devotee and devoted in Ramanuja's metaphysics, and
exemplified by the Alvars' passionate devotion to Vishnu from
the second to eighth century CE (Tapasyananda 1990: 33).
In the midst of this centuries-long dispute between the Dvaitans,
Advaitans and Bhakti schools, a key modern figure arose: Ramakrishna
(1836-86), sometimes called "the Great Reconciler"
because he attempted to integrate all the schools into one
pluralistic, non-sectarian approach. His major insight (birthing model
5 for this entry) is that "God is infinite, and the paths to God
are infinite", and the schools are among these infinite paths
(Maharaj 2018: frontispiece). Ramakrishna came by this view firsthand
when, after being in a state of *samadhi* for six months, which
he said was "like reaching the roof of a house by leaving the
steps
behind",[32]
he had a divine command to come down and stay in a state he called
"*vijnana*" (intimate knowledge),
during which he could see the roof but also see that "the steps
are made of the same material as the roof (brick, lime, brick
dust)". In other words, in *vijnana* he saw
that Brahman is both non-dual *and* dual, and thus began to
affirm the "spiritual core" of *both* schools, and
eventually of all the Hindu schools and Christian and Muslim ones, too
(Long 2020: 166). He decided they all must be contacting different
"aspects" of one and the same reality, and that this
reality thus must be deeply, indeed infinitely complex in order to
make such diverse experiences possible (Maharaj 2018: Chapter 1, part
3, tenets 1 and 3; see also interpretive principle 4 and
K422/G423).[33]
Because he affirms the core of all schools, Ramakrishna is hard to
classify, but his view is probably best read as panentheistic and
world-affirming since he says over and over again that "all is
Brahman" and that Brahman "has become everything",
i.e., Brahman includes the cosmos but not vice versa (Long 2020,
especially 163).
Contemporary Vedanta is alive and well. To offer just three examples:
Long recently merged Ramakrishna's thought with
Whitehead's to develop a Hindu process theology that offers an
ultimate unity behind the God-world relations that was missing in both
Whitehead's and Griffin and Cobb's interpretations (see
Section 2.2
and Long 2013, our model 6). Ayon Maharaj has systematized
Ramakrishna's thought (no small task!) and put it into
conversation with major Western philosophers to advance global
philosophical work on the problem of evil, religious pluralism and
mystical experience (2018). Finally, Rambachan, a present-day
Advaitan, moves Advaita closer to Vishishtadvaita when he asserts that
"not-two is not one" (2015): non-dualism is not
pantheistic but rather panentheistic because Brahman is the
cosmos' material and efficient
cause;[34]
Brahman intentionally self-multiplies to make the
cosmos.[35]
The cosmos is thus not *maya* but rather a finite
"celebrative expression of Brahman's fullness"
(Rambachan 2006: 79). This reading (model 7 here) opens up Advaitic
theology to help heal a variety of human problems from low self-esteem
to the caste system because there is real power in re-seeing people as
infinite-conscious-bliss, so worthy of respect (Rambachan 2015).
Much of the explicit discussion among the schools is debate over
metaphysical ultimacy in general and over the Brahman-God-cosmos
relation in particular. Still, there is an implicit drumbeat in all
the schools that one can find ultimate fulfillment in Brahman, whether
that is in a state of *samadhi*, *vijnana*
or devotion (e.g., the "profound and mutual sharing in the life
of God...creates unsurpassed bliss [for the devotee]", Long
2013: 364). So Brahman seems to be soteriologically as well as
metaphysically ultimate. What is less clear is whether Brahman is
axiologically ultimate, especially in the moral category of being.
There are suggestive phrases sprinkled through the texts in the
affirmative, but as Alan Watts says:
> reason and the moral sense rebel at pantheistic monism which must
> reduce all things to a flat uniformity and assert that even the most
> diabolical things are precisely God, thus destroying all values.
> (Hartshorne & Reese 1953 [2000: 325], quoting Watts 1947)
Though this sentiment is softened by the panentheistic readings in
several of the schools since the world can take the fall, in the end,
all the schools affirm that Brahman is everything, in some sense. And
if Brahman is everything, and not everything seems to be good, then
Brahman seems not to be good, at least not full stop. There is no
question that Brahman is still the ultimate in Vedantic schools even
if Brahman is not axiologically ultimate; as indicated at the start,
metaphysical and soteriological ultimacy are sufficient for ultimacy
when nothing else in a system has all three marks. But it is jarring
and a real contribution to global thought about ultimacy to consider
dropping axiological ultimacy from the trio: what is most deeply real
may not be all good, though contact with it may still manage to
fulfill us deeply all the same.
### 2.2 Models of God
As in Vedanta, the extant models of God disagree about the
ultimate-world relation and those disagreements spread them out in a
range, with dualisms on one end and monisms on the other, and
panentheisms and--for the first time in this
entry--merotheisms in between. The idea of God seems to be
nothing if not flexible. Even the relatively common view that God is
by definition a personal ultimate--an ultimate that is conscious
and self-aware--has been on the move for millennia and is hotly
debated today.
The most venerable model of God that is often read dualistically is
known as "perfect being theology", which bears traces of
its origin in its name (this is model 8--a general model, species
coming). The idea fully grown, as we have it today, defines God as
that which is perfect (whether personal or not), where perfection is
typically taken to entail being unsurpassable in power, knowledge, and
goodness, and several models add being immutable, impassible, *a
se*, eternal, simple and necessary in some sense. Most perfect
being theologians take God to have created the universe out of nothing
(*ex nihilo*), and that view can be taken to entail dualism for
a variety of reasons. To offer one, as Brian Davies says, "God
makes things to be, but not *out* of anything" (italics
his), including not out of Godself, so the cosmos is entirely fresh
stuff--a second kind of stuff, distinct from and radically
dependent on God (2004:
3).[36]
Perfect being theology was birthed during the Hellenistic era from
the fusing of the Jewish idea of a single God that acts in history
(the *theos* in "perfect being theology") with the
Greek philosophical idea of perfect ultimacy ("perfect
being").[37]
From the very start, there were conceptual tensions in the
combination: how can the God who led us out of Egypt, who hears our
prayers and who intervenes in the world as the Jews say (Cohen 1987:
44) also be immutable, impassible and *a se* as the Greeks say
(e.g., Guthrie 1965: 26-39, 272-279; Guthrie 1981: 254-263)? This
question is sometimes framed: how can "the God of Abraham, Isaac
and Jacob" be the "God of the philosophers?" Even
after perfect being theology had passed for centuries from Judaism to
Christianity to Islam--with an important handoff in the midst by
Anselm who amped up the Greek perfection by taking God to be that than
which no greater can be conceived--the great medieval theologians
in all three faiths were still hitting up against the tensions and
finding ways to tamp them down. For instance, on the issue of
anthropomorphic descriptions of God in the Bible and the Quran, both
Maimonides and Aquinas read them as negations and said that God
"is not a body" (Dorff 2013: 113; Kennedy 2013: 158) and
both Ibn Rushd (Averroes) and al-Ghazali parted with
"theologians who took all these descriptions literally"
because "beings that have bodily form...have
characteristics incompatible with a perfect being" (Hasan 2013:
142).
The tensions continued into the modern era and are still felt in our
time. Perhaps as early as 1644, perfect being theology split into two
camps over them (see Davies 2004: chapter 1, and Page 2019). Both
camps take God to be absolutely perfect, but disagree over what it
takes to be perfect: "classical theists" deny or weaken
God's personhood to save the Greek perfections such as
impassibility, immutability and simplicity, while "theistic
personalists" (a species of "neoclassical theists")
conversely deny or weaken the Greek perfections to save God's
personhood. "Open theists" (model 9 in this entry), for
example, are theistic personalists: they call for new readings of,
e.g., omnipotence and omniscience and drop immutability and
impassibility to comport with God's desire "to be in an
ongoing, dynamic relationship with us" (Basinger 2013:
264-268, see also, e.g., Clark 1992; Pinnock et al. 1994;
Sanders
1998).[38]
Other neoclassical theists aim merely to resolve inconsistencies
among the perfections, as in Nagasawa's Maximal God Theism
(2008, 2017; model 10). In addition to its old challenges, perfect
being theology also hit new ones in the modern era from advances in
science. When it met Newtonian mechanics (and more) during the
Enlightenment, the combination spawned "deism", the idea
that God set the initial conditions of the universe and then left it
to play out on its own (model 11). Deism is a dualism because it
assumes God *can* leave the world behind and thus is neither
"in" it as in panentheism nor identical with it as in
pantheism. Picture all these theistic dualisms as close to Dvaita
Vedanta's image of the eternal builder building a house out of
something different from itself and dwelling in it as it pleases, but
make the house not necessarily eternal (it may have had a start and
may end), and for classical theism, give the builder all the
perfections; for neoclassical theisms, give it a few less and perhaps
have it throw better parties in the house; and for deism, have the
builder abandon the house altogether once it is built and leave it to
its own devices, like an "absentee landlord" (Mitchell
2008: 169).
On the other end of the spectrum from these varieties of theistic
dualism, we find pantheism, the species of monism that takes the One
to be God (a general model, 13). All monisms face a problem of
*unity*: how are the many things in the world integrated enough
to call them One? But pantheisms face an additional problem of
*divinity*: even if all is truly One, does the One have what it
takes to be God? Here we will focus on two contemporary pantheisms,
both in Buckareff & Nagasawa (2016): what we might call a
"one-thing" pantheism by Peter Forrest (2016) (a specific
pantheism, model 14), where the One is a count noun (as in "a
walrus is sleeping over there"), so the cosmos as a whole is One
thing, and a "one-stuff" pantheism by Karl Pfeifer (2016)
where the One is a mass term (as in "that little lamb is made of
*butter*"), so everything in the cosmos is made of the
same kind of One-stuff (model 15). Unlike the Advaita pantheists who
take the universe to be a mere appearance, Forrest and Pfeifer
definitely take the universe to exist. So for them (and pantheists
like them), the One will have to be identical to the universe, and the
work is to show how the universe can be identical to God. In other
words, this is not all builder no house as in Advaita; the builder is
the house, and the builder-house is special enough to call it
"God". Forrest's main move to effect this is to take
the universe to be a conscious self, by way of a "properly
anthropocentric" non-reductive physicalism: just as our brain
processes correlate with our mental states, so also the
universe's physical processes correlate with universal mental
states, which on the model involve a unity of consciousness and thus a
sense of self. Forrest has a strong reply to the problem of unity
here: the One is an integrated Self precisely because of what emerges
from the processes of the many. But is the Self conscious in high
enough ways to meet the problem of divinity, to count as God? Though
Forrest does not argue like this, the resources for nascent
perfections are here, such as omnipotence (the Self has all the power
in the universe), omniscience (It could know the entire universe by
biofeedback), good will for all (since to hurt any part of the
universe is to hurt Itself) etc.--enough in theory to count as
God in the classical or at least neoclassical sense. For
Pfeifer's view, instead of picturing the universe as a person,
picture it as an "intentional field", like an
electromagnetic field except that what is spread across space is
physical dispositional states instead of magnetic forces and electrons. Because those
physical dispositional states have the same extension as intentional
states,[39]
intentional states are effectively spread everywhere too. That
spreading means there is a kind of "panintentionalism",
and if the intentions are divine enough, then a panGodism, i.e.,
pantheism. So if Forrest is right, the universe is God itself, and if
Pfeifer is right, the universe is made of God-stuff, this field of
divine intentional states--both strong thoughts. How plausible,
though? On the plus side, Forrest's view is an instance of
"cosmopsychism" ("the cosmos as a whole is
phenomenal", i.e., the Cosmos as a whole has conscious states)
and Pfeifer's an instance of "panpsychism"
("everything in the cosmos is phenomenal", every
particular in the cosmos is conscious), both of which are receiving
growing attention in philosophy of mind since, e.g., cosmopsychism may
solve physicalism's problem of strong emergence and
panpsychism's combination problem at once (Nagasawa 2019, for
more on these views and their link with Hinduism and Buddhism see,
e.g., Shani 2015, Albahari 2019, Mathews 2019). However, even if
either the cosmo- or panpsychic aspect of Forrest's or
Pfeifer's views turns out to be true, the divine part seems
doubtful for a reason Pfeifer enunciates: the kind of intentionality
the universe would have within it (on Pfeifer's view) or that
would supervene on it (in Forrest's view) seems likely to be at
best the consciousness of an animal, or a comatose or schizoid human,
etc.--not even close to the kind of consciousness that would make
it count as God (see Pfeifer's footnote on 2016: 49).
In the middle, between the theistic dualisms and the pantheisms, stand
the merotheisms and the panentheisms (two general kinds of models, 16
and 17, again, species coming). As indicated above, the merotheisms
are rare, the "odd bird" idea that God is in the world,
but the world goes beyond God. Though the term
"merotheism" was coined only recently by Paul Draper for
his own view (2019: 160), merotheisms have been around for far longer,
for example, in divine emergence theories such as Samuel
Alexander's (1920, a specific model 18) on which the world is
metaphysically ultimate and God arises in
it.[40]
So on the metaphor, the house comes first, then God grows within it.
Alexander, for instance, thinks the rock-bottom reality is space-time,
and that when "patterns" or "groupings" of it
become complex enough, matter comes to evolve in it, then life, then
mind and then deity (257). The universe now is at mind, so we are
waiting for deity to emerge, not from small "groupings" of
things as with the other levels, but from the universe as a whole.
Because things can think only about the things *below*
themselves in the hierarchy, we cannot know what deity will be like
when it comes (Thomas 2016: 258)--a nice way to explain why God
is ineffable and unknowable, albeit one that gives no (other) content
to say why Alexander's "deity" should count as God.
In contrast to the emergence merotheisms, Draper (2019) offers a sheer
"meros" one (model 19), in which nature, instead of
growing God, always has God as one proper part. Specifically, nature
is composed of two parts which are both metaphysically ultimate:
fundamental matter, and fundamental mind. So what there is includes not only
all the familiar material stuff but also one and only one immaterial
mind, i.e., God--"the single subject of all phenomenally
conscious experiences", located in and coextensive with space
(2019: 163). Assuming that minds are the source of value, this one
mind is the fundamental "source of all the value there
is", and hence is axiologically ultimate (2019: 163).
Interestingly, just like prisms immersed in sunlight naturally
diffract the electromagnetic spectrum (2019: 167, originally from
William James), our brains, which, along with everything else, are immersed in
this omnipresent universal mind, naturally diffract what we might
think of as the divine spectrum--displaying aspects of the
universal consciousness by generating one of its "multiple
streams", "making use" of it for our own ends,
tuning in to it in mystical experiences, etc. (2019: 163, 170). So
brains don't produce consciousness--they tap into
it--and God doesn't make the universe or emerge in
it--God is the mental part of it that gives it value, and gives
us a hope for a form of life after death because the consciousness
that runs through our brains and that we mistakenly call our own
continues to live on after the brain dies as the aspect of the
enduring universal consciousness it always was. This hope secures some
soteriological ultimacy: though it makes sense to mourn our deaths, we
should "not despair" (2019: 170) since, if we ally
ourselves with our consciousnesses, we are even after death still what
we always were, an aspect of the mental fundamental reality, as
Shankara and Ramakrishna and others would tell us.
Panentheistic models of God (on which the world is in God but God goes
beyond the world) have been popular for millennia, to the point that
John Cooper calls them "the other God of the Philosophers"
in the title of his book on panentheism (Cooper 2006, a general model,
#20 in this entry). There are literally too many panentheistic models
of God to count, from a star-studded list of historical thinkers
including Plato, Pseudo-Dionysius, Ibn Arabi, Meister Eckhart,
Nicholas of Cusa, Kant, Hegel, Peirce and more, with a resurgence in
the last decade owing at least in part to Yujin Nagasawa and Andrei
Buckareff's Pantheism and Panentheism Project (2017-19
[see Other Internet Resources]). Though some complain
that the "in" in panentheism is so ambiguous it is not
obviously a single view (see Gasser 2019), Chad Meister suggests that
the recent appeal of panentheism is a direct result of (1) some of the
neoclassical revisions to the idea of God (more immanent, more
passible, etc.) which can be explained by the world's being in
God, as well as (2) the advent of emergentist theories in science
which make room not only for the emergence merotheisms sketched above
(on which God emerges from the world) but also for their converse, the
emergence panentheisms (on which the world emerges from God), among
other reasons (Meister 2017: section 4).
Hartshorne's process theology is a great example of the first
impulse Meister identifies (so it is a specific panentheism, model
21). Hartshorne's process view begins with Whitehead's
metaphysics from *Process and Reality*--with the idea that
the world is dynamic, not static, and indeed that the fundamental
units are events, "actual occasions", not substances,
which

> do not endure through a tiny bit of time unchanged but [take] a tiny
> bit of time to become...concrete ("concrescence", Cobb
> & Griffin 1976: 15)

and which are thus dynamic all the way down. Hartshorne then places
this dynamic world of events in God, by taking a page from
Ramanuja's book and saying that all of it--this
"totality of individuals as a physical or spatial whole *is
God's body, the Soul of which is God*" (Hartshorne
1984: 94, quoted in Meister 2017: section 5, italics added)--a
move which cements his view as a panentheism, since the world is
literally in God, but God, as Soul of the world, goes beyond the
world. The practical pay dirt of the view is that, in the same way we
feel our bodies, so also God as the Soul of the world feels the
world--feels every last "drop of experience" as
Whitehead says, every last bit of change happening in every last
actual occasion. Moreover, just as we respond to what we feel in our
bodies, so also God responds to each felt occasion, and in that
instant does two things: runs through a catalog of all possible next
occasions, next moves as it were, and then "lures the world
forward" with suggestions for the best next moves to actualize
in the next occasion. The world can "listen" or not to
these suggestions as the next occasion concresces, and then God will
regroup again, moment after moment after moment. This is the dynamic
process of perfecting--from the world to God back to the world
again--which gives process theology its name, and makes it a kind
of "becoming-perfect-being" theology.
John Bishop and Ken Perszyk (2016, 2017) propose a panentheism they
call a "euteleological conception of divinity" (model 22
here), on which (1) divinity is the property or activity of being the
supreme good ("*eu*" in
"euteleological") and (2) realizing this property or
activity is the point ("*telos*" or final cause) of
the universe. In addition--inspired by an unusual kind of
efficient causation called "axiarchism" on which final
causes can function as efficient ones, an idea visible at least since
Plato and having something of a revival in the last couple
decades--Bishop and Perszyk (3) take concrete realizations of the
supreme good to be the efficient cause of the
universe.[41]
Thus, on (2) and (3), these realizations of the good are both the
efficient and final cause of the universe, both *alpha* and
*omega*. This model is, as its authors say, "prone to be
met with incomprehension or blank incredulity" (2017: 613): how
can effects in the universe be the cause of the universe, and thus their
own causes? Though Bishop and Perszyk do not answer this question in
2016 or 2017, they do point out the eerie precedent in the Christian
tradition, the model's home context, for efficient causes to
double as final ones: Jesus is both the source and offspring of David,
both "root and flower;" Mary "gives birth to her own
creator;" the Divine word is both "without which was not
anything made that was made" and "born late in time"
(2017: 614). They also identify the supreme good in Christianity as
perfect love, take Jesus to have instantiated it in his person and
time and again in relationships, and take us to do so too when we
"love one another as he has loved us" (2017: 613). Note
such concrete occasions of love are per (1) literally divinity dotting
(and hopefully eventually overrunning) the universe, and that they
deserve to count as divinity because they are triply ultimate:
metaphysically since they are the efficient and final cause of the
universe; axiologically since they are the best of things; and
soteriologically since they are deeply fulfilling (to quote the
Beatles, "all you need is love"). Whether or not the
axiarchism at its heart is a strike against euteleological theism, an
enormous point in its favor is how profoundly it addresses the problem
of evil: it makes God the force in nature that defuses evil instead of
intending
it.[42]
This section would be incomplete without at least mentioning
Tillich's "ground of being theology" in closing
(model 23). His view is not filed into the range of God-world
relations above because it is famously difficult to categorize:
Christopher Demuth Rodkey (2013) says Tillich has been read variously
as a panentheist, deist (i.e., dualist), and pantheist, and that it is
in fact best to characterize him as none of the above but rather as an
"ecstatic naturalist", where the Power of Being delivers
the naturalism (since this Power is "the power in every thing
that has power") and the Depth of Being delivers the ecstasy
(persons experience this Power of Being ecstatically, as holy). This
interpretation tracks Tillich's method of correlative theology
in *Systematic Theology I and II*: ecstasy is a "state of
mind" which is "an exact correlate" to the
"state of reality" of the power of being which animates
and transcends the finite world (see, e.g., Tillich 1957b: 13). So for
Tillich, God is the power or energy that animates the world which,
when truly encountered, provokes ecstatic response. This view is spare
enough that it is not obvious how someone might work up an
"ultimate concern" about God, another of Tillich's
central ideas mentioned at the start of this entry (1957a, e.g.,
10-11). Tillich will have to hypothesize that the ecstasy
provoked is, for believers, strong enough to rouse such a concern.
These are, then, several models of God, sorted mainly by how they see
the relationship between God and the world. Is the God that is modeled
in each of these ways metaphysically, axiologically and
soteriologically ultimate, in Schellenberg's terms?
Interestingly, the answers differ dramatically for each model. To
offer just two examples, on classical theism we get a yes, yes, yes:
God as single-handed origin of the universe, making everything out of
nothing, is metaphysically the fundamental fact; and, in
Anselm's hands, God as the greatest not only actual but also
possible being in every category of being, is as axiologically
ultimate as anything can be; and in Aquinas' idea, God as our
very *telos*, the point of our being, is soteriologically
ultimate as well. In contrast, God on Alexander's view gets a
no, maybe, maybe. Alexander's deity is not metaphysically the
most fundamental fact in any of the ways collected in the models seen
so far: it is neither the efficient cause of the universe (as in the
dualisms), nor its material cause (as in the pantheisms and some
panentheisms) nor its final cause (as in Bishop and
Perszyk).[43]
Alexander also cannot say if deity will be axiologically or
soteriologically ultimate when it arrives, since deity is by
definition unknown for him. Thus, God as modeled in some ways is
ultimate and in others is not.
### 2.3 Models of the Dao
The idea of the Dao (Way, Path, Guide) emerged during the Warring
States period in China (fifth to second centuries BCE), when the
reigning idea of *Tian* (Heaven) as a kind of personal god or
God started to shatter along with the rest of the imperial structures
of the Zhou Dynasty. Chinese thinkers faced their version of the
problem of evil: "Why is *Tian* letting this chaos
persist?" and added "Where is the *dao* to
harmony?" (Perkins 2019; Miller 2003: 37). An extended debate
arose among different schools of thought arguing for different answers
(Zurn 2018: 300ff), including two schools that have endured: the
early Ru (Confucian) thinkers who said the *dao* could be
brought back into the human world by reestablishing right social
relationships and customs, and the early or
proto-Daoists[44]
who found a new focus in the *dao* in the impersonal,
consistent patterns of the non-human natural world. The
*Daodejing* (ca. sixth to fourth centuries BCE, hereafter
"*DDJ*") is the earliest Daoist text that reads
these natural patterns as evidence of a single force or principle of
all that there is--as a single metaphysical ultimate--and
"tentatively", as Perkins (2019) says well, names this
ultimate "the *dao*" or in some translations
"the *Dao*" or "*Dao*". Though
this entry will focus mainly on the Daoist tradition and use the word
"Dao" (hereafter not italicized) to refer to it, the
*res* in question runs under other important names and concepts
in both the Daoist and Ru traditions, including *Taiji* (Great
Ultimate or Grand One), *Xuan Tian* (Dark Heaven),
*Zhen* (Truth or noumenal Reality) and conjoined with
*Tian* as *Tiandao* in Ruism.
Gradually, the early Daoist thinkers took the Dao to have multiple
functional roles--metaphysically, as the cosmos' origin,
its pattern or structure (*ti*), its functioning
(*yong*); and soteriologically as a guide through the cosmos
for humans, as Robin Wang says (*DDJ* ch. 25, Wang 2012: 47).
Combining the Dao's role as the origin of all things with its
undeniable unitariness threw Daoist thinkers into the question of how
the One became Many, and thus into a focus on cosmogony. The Daoist
cosmogonists generally agreed, and agree now, on at least six things
about the Dao (the last general model this entry will
showcase)--though there is substantial diversity in
interpretations of each which help constitute various thinkers'
models of the Dao.
First, in a seeming nod to the consistent patterns of the universe
that encouraged postulation of the Dao in the first place, the Dao is
taken to be immanent in everything. As the *Zhuangzi* says,

> There's no place [the Dao] doesn't exist....It is in
> the panic grass....in the tiles and shards...in the piss and
> shit!..."Complete", "universal",
> "all-inclusive"--all point to a single reality.
> (*Zhuangzi*, sec. 22, Watson translation)

Second, because it is capable of singlehandedly originating
everything, the Dao is taken to be necessarily *ziran*, meaning
"self-so" or "spontaneous", which is read as
entailing something like the kind of necessity and aseity of being
*causa sui* in the Thomist tradition (Perkins 2019). Wang
explains the entailment in her explication of a famous passage
("Human beings follow earth, earth follows heaven, heaven
follows *dao*, and *dao* follows *ziran*"):
the Dao's following *ziran* arrests the regress because
"following" spontaneity is the opposite of following since
spontaneity is making it up yourself on the fly (*DDJ*, ch. 25;
Wang 2012: 51).
What is the nature of a *ziran* generator of all things, then?
Zhuangzi answers in his inimitable way: "what things things is
not itself a thing" (ch. 11, see Schipper 1982 [1993: 115]). In
other words, the third commonly held claim is that the Dao is no
thing, nothing, nonbeing (*wu*). Bin Song (2018) helpfully
disambiguates several readings of nonbeing, including as (a) sheer
nothingness, a great vacuum "before" time and things; or
(b) abstract forms not yet made concrete, e.g., Zhu-Xi's
"pattern-principles" or Wang's "patterns and
processes of interrelatedness" (2018: 48) or, instead of
nothingness or abstractness, (c) concrete no-thingness, i.e., a
totally undetermined whole of being, stuff without form. Song and Poul
Andersen favor option (c), translating a key phrase in *DDJ* 21
that describes the Dao as a "complete blend" and as having
"murky indistinctness", respectively (Song 2018:
224-230, Andersen 2019: 131-132). Another line in that
chapter also tells against the Dao's being sheer
nothingness:

> yet within it is a substance, within it is an essence, quite genuine,
> within it, something that can be tested. (*DDJ*, Lau
> translation)

On any of these readings of nonbeing, it is clear why the Dao is taken
to be impersonal: the Dao is not only not anthropomorphic; it is not
even thingmorphic. It is also clear why it is taken to be ineffable:
it is not just because its being is beyond us; it is also because it
is not a being at all, and most uses of words (to talk like Zhuangzi)
thing it. So we find Daoist texts using the tricks of the ineffability
trade to talk about the Dao, including famously, e.g., a use of the
*via negativa* in the opening line of the *DDJ*:
"the way that can be spoken of is not the constant
way..." Also, in an expression that is perhaps less about
the Dao's ineffability and more about the futility of finding it
intellectually, there is Zhuangzi's dynamic *semper
negativa* of "continuous self-negation" or
"unsaying", visible also in the Buddhist tradition and in
Tillich millennia after and oceans away:

> There is being, there is no-being, there is not yet beginning to be
> no-being, there is not yet beginning to be not yet beginning to be
> no-being. (*Zhuangzi*, sec. 2; on Tillich, Rodkey 2013:
> 491-493)

The fourth and fifth widely held views of the Dao are both about how
nonbeing generates being, namely with *wu wei*
("non-action"), and in stages. Andersen describes *wu
wei* more fully: "the Way does not cause [things] to come
into being but provides a gap that allows things to emerge"
(2019: 131). To reveal *wu wei*, Daoist literature frequently
uses images of the female and infants, e.g., twice over in
*DDJ* 10, where *wu wei* is likened first to
"keeping the role of the female" who with no apparent
(anyway) action naturally nurtures the fetus and then, to use
Andersen's words, provides a gap to allow it to emerge in birth;
and second to "being as supple as a babe" who is the
epitome of the *wu wei* ruler since a baby does not do anything
but gets everyone else to act to please it (Zurn in
conversation; see also Erkes and Ho-Shang-Kung1945: 128).
The Dao is also taken to generate in stages, and actually in four of
them (*DDJ* 40, 42; Perkins 2019; Robson 2015: 1483; Wang 2012:
48; etc.). There are various readings of the sequence, but one
prominent view takes 0 as nonbeing, the Dao; 1 as unity, Being; 2 as
duality,
*yinyang*;[45]
and 3 as multiplicity, namely heaven, earth and human beings. In a
prescient recognition of how natural (heaven and earth) and social
(human) constructions combine to make reality, 3 burgeons into 4,
i.e., the 10,000 or myriad things that comprise the universe.
Crucially, 4 returns to 0--to use the feminine imagery, returns
to the womb where everything is possible and everything
develops--and then the sequence repeats (Pregadio 2016 [2020:
sec. 4], Wang 2012: 51 citing the *Huainanzi*). This
world-to-Dao-and-back cycle is reminiscent of the
occasion-to-God-and-back cycle in process theology, though on a grand
vs. momentary
scale.[46]
Interestingly, in general, Daoists read the sequence as strictly
cyclical (so 4 returns to 0, i.e., 0, 1, 2, 3, 4, 0, 1, ...) while
Ruists read it as an "endless advance to novelty" so that
we never step into the same cycle twice (i.e., 0, 1, ..., 0',
1', ..., 0'', 1'', ... etc.) (Song 2018:
230-232). Moreover, there is an open question whether the
sequence is temporal or ontological (Song 2018: 225-226) which
is sometimes crystallized into a debate about whether 0 is temporally
(and thus ontologically) prior to 1 or merely ontologically so. If the
move from 0 to 1 is temporal, then 0 happens before 1 and they are
really distinct; as Andersen argues:

> the One is a product of the Way, not the Way itself... [because]
> identity and stability as a thing in the world depends on being one.
> (Andersen 2019: 180)

If, on the other hand, 0 is only ontologically prior to 1, then
nonbeing and being are always co-existing as two eternal aspects of
what there is, with being still depending for its existence on
non-being like a burning eternal candle depends for its existence on
its eternal flame. The ontological-only reading seems licensed by
*DDJ* 1 which talks about "the nameless [who] was the
beginning of heaven and earth" and "the named [who] was
the mother of the myriad creatures....*these two are the same
but diverge in name* as they issue forth" (italics mine, for
more see Pregadio 2016 [2020: sec. 4.1])--assuming that the
passage implies that they diverge in name "only". The
dispute about how nonbeing and being relate is vexing enough that it
is a relief, actually, when *DDJ* 81 throws up its hands and
says: "their coexistence is a mystery"!
Grasping all this together--that the Dao is the origin of all,
immanent in all, *ziran*, essentially nonbeing, generating
forms of being by *wu wei*, generating stage by stage until we
reach the roiling boil of being in the myriad things--we are able
to catch the last and perhaps deepest thought of all about the Dao:
that it is generative by nature, or as Neville says: "nonbeing
is simply fecund from the perspective of being" (2008: 3). This
idea hails back to the *I Ching*, which takes "the
foundation of the changes" to be *sheng sheng*, literally
meaning "life life" or "generating
generating", sometimes glossed with the phrase *shengsheng
buxi* "generating generating never
ceasing!"--held as the highest metaphysical principle by
Ruists (especially neo-Ruists) and not far from Spinoza's
*natura naturans* (Gao Heng 1998: 388 cited in Perkins 2019).
Thus, at the bottom of it all, there is just endless, unformed,
spontaneous life stuff, which is generating increasingly formed
spontaneous life stuff which, because it is shot through with the Dao,
in turn generates even more, even further formed, spontaneous life
stuff, and we are off and running to the 10,000 things, until, as yang
wanes to yin, life life goes back to 0 (death death?), and all is
still until it waxes to yang again. In spite of the cycle that always
sends it back to yin, it is fitting to call the Dao "life
life" because it always waxes to life again; the life force is
irrepressible, no matter how many times it temporarily dies.
Though the Daoist literature does not use these philosophical
distinctions much, and though occasionally it is read there as a
monism in passing, the standard model of the Dao just recounted is
best understood as an impersonal panentheism--as are the
classical theisms and Bishop and Perszyk's model of God
(Section 2.2).
The Dao-world relation has the asymmetry that defines panentheism:
all the forms of being at 1-4 (or at least 2-4, if Being
is read as identical to the Dao) depend for their existence on the
Dao, but the Dao does not depend on them, or anything, for its
existence.[47]
In addition, if we take the theme of immanence in a full-throated way
so that really there is nowhere down the line of the stages that the
Dao is not, then the Dao is both the efficient and material cause of
the universe, as we saw in the Bhakti Vedanta views. So the view of
the Dao traced here is close to the idea of Brahman as the builder of
a house out of Brahmanself, who eternally dwells in this eternal
Brahman-house which depends on Brahman for its existence but not vice
versa. But "builder" is too intentional for the Dao, and,
at least on the temporal reading of the sequence, there is no eternal
house and thus no eternal dwelling in it. So try this: think of the
Dao as an eternal seed for a house. The seed sprouts naturally in
stages into a house filled with 10,000 things which depends on the
seed but not vice versa, until the house dies back into the seed,
which then lies dormant, pregnant with being, until it sprouts into
being again, and so on, eternally.
After all that has been said, the Dao is clearly metaphysically
ultimate in Schellenberg's sense: it is "the most
fundamental fact about the nature of things" (2016: 168). The
Dao is soteriologically ultimate as well, but, as with Brahman, it is
not clearly axiologically so. Regarding axiology, first, if we
understand axiology as greatness along all the categories of being, we
can see immediately that the Dao, at least understood as 0, has
greatness along no category of being since it is not being at all (for
more see Kohn 2001:
18).[48]
Moreover, if we narrow the idea of axiology just to the moral
category of being, the Dao is still not axiologically ultimate. In
multiple texts, the Dao is taken to be neither good nor bad; it is
taken to be what is. Famous among them is *DDJ*, ch. 5
("Heaven and earth are not kind, they treat the ten thousand
beings as straw dogs", see also chs. 18, 62). Interpreters take
these passages to mean that the Dao is either amoral--as
nonbeing, not the kind of thing that has moral interests in the first
place; or "value-contrarian" (Hansen 2020); or, to add a
thought, perhaps "ananthropocentric", having moral
concerns that are not human-centered (see Mulgan 2015 and 2017). Wang
Bi's commentary on the straw dogs passage suggests that these
amoral or anti-moral moments spring from the Dao's and the
sage's *wu wei*:

> The one who is kind necessarily creates and erects, impacts and
> transforms. He shows mercy and acts. If someone creates and erects,
> impacts and transforms things, then these things lose their true
> reality. If he shows mercy and acts, then these things are not
> entirely there. (quoted in Andersen 2019: 130)

In other words, acting *wu wei* actually requires not being
kind in the usual sense. But if Wang Bi is right, maybe there is an
axiology after all to treating the myriad things like straw dogs:
being kind in the traditional sense may not be being kind deeply since
it destroys a thing's power to be itself.
Regarding soteriology, it is agreed that a--or even
the--central goal of Daoist practices such as inner alchemy,
t'ai chi, etc. is to return to the Dao (*fandao,
huandao*, *DDJ* 16 and 40; Andersen 2019: 126), and
specifically to return to nonbeing, which is the Dao at its most
creative, powerful and sublime, on the crest of becoming being (Song
2018: 234-5). If we can return to 0, we embody this power and
sublimity in human form and, as Andersen says, also do our part to
return the cosmos to the start for a new beginning (2019: 123). There
are specific rituals Daoists do in communal contexts to return. In one
of the important rites in Daoist liturgy called *bugang*
("walking along the guideline"), a Daoist high priest
walks through the ritual space, with the audience making their own
occasional movements too, to embody a complete motion of return with a
successful arrival back to 0 by the end of the rite, when
>
>
> the forces that animated the universe at the beginning of time may
> once again be channeled into the community on behalf of which the
> ritual is performed. (2019: 118-123)
>
>
>
Practitioners outside of ritual contexts also try to return by
inwardly cultivating the skill of acting as the Dao does when it
generates being: with *wu wei*. Miller reminds us that *wu
wei* is not some loose form of letting go, but is rather a
specific "spiritual technology" of intervening very gently
at the right time, in the right place--as Neville says, with
"a subtle infinitesimal dose" when there is a rare
"opening for spontaneity [in the otherwise hard-to-beat]
inertial forces of the Dao" (Miller 2003: 140; Neville 2008:
47-51). Andersen's take on these efforts is haunting:
>
>
> An accomplished Daoist resides in the gap between being and nonbeing.
> The fundamental truth of Daoism is in this gap, in the Way and its
> manifestation as true and real. (2019: 130)
>
>
>
This thought suggests an answer to the puzzle that surfaced about
Brahman--about why contact with a metaphysical ultimate that is
not axiologically so might still be fulfilling to us. At least for
those seeking awareness of Brahman and harmony with the Dao, our whole
desire looks like it is to be in the presence of what is true and real
(*Zhen*), whether what is true and real is bad or good or
neither or both. We are fans of unvarnished reality.
## 3. Responses to the Diversity of Models of What is Ultimate
After surveying these many models of Brahman, God, and the Dao, and
recognizing that they are just a small sample of the range of options
for modeling Brahman, God, and the Dao, which are in turn a small
sample of the range of ultimates that could be modeled, one may
wonder: What should one do with all this information?
People respond in various ways after grasping the diversity of the
models. Some abandon the models altogether, either exhausted by their
complexity (embodied in Watts's wonderful phrase "the which
than which there is no whicher", 1972: 110), or convinced by
their number and inconsistency that some models logically must be
making a mistake, and it will be very hard to tell which ones. In
other words, one response to the diversity is to decide more deeply
that the nature of what is ultimate is indeed beyond us, if there is
anything ultimate at all, so it is not worth thinking about it.
In sharp contrast, others actually embrace the diversity of models as
part of the path to understanding what is ultimate. The comparative
theologians, for example, study novel models in order to carry fresh
insights from them back to their own tradition and re-see their own
models more deeply (see Clooney 2010 for an introduction and, e.g.,
Feldmeier 2019 for the method applied to Buddhist and Christian models
of the ultimate). An emerging movement, Theology Without Walls (TWW),
draws on the models to understand the nature of a globally shared
ultimate, one to which all the models may be intending to refer,
reading the body of models as data and their number and
inconsistencies as an interpretive challenge instead of a deal breaker
(see, e.g., Martin 2020). Ramakrishna offers one such interpretation
in the TWW spirit: he decides there is no need to choose between the
models because each is a finite start on the "infinite paths to
an infinite reality" (see
Section 2.1)--each
is news about an ultimate whose nature is so full that we actually
need all the models to help us see it. Both TWW and Ramakrishna will
have to explain how it is possible for many or all the models to
deliver news of what is ultimate given their inconsistency, e.g., by
relying on perspectivalism, a phenomenal/noumenal distinction, the
models' incommensurability, etc. (see Ruhmkorff 2013 for a
survey of options).
Others fall somewhere between abandoning and embracing the plurality
of models by recommending that we hold the models loosely somehow,
that we attenuate our commitment to them. Kierkegaard for instance
tells us to move our focus from the content of our model to our
orientation to what we are attempting to model: it is better to pray
to a false God truly than to a true God falsely (paraphrase of 1846,
Part Two, Chapter II).[49]
Similarly, J.R. Hustwit cautions us to
"balance engagement with non-attachment" to models to
avoid ego-reinforcement and more (2013: 1003-1007)--not far
perhaps from Zhuangzi's and Tillich's *semper
negativa* of holding and letting go model after model, a view
which converts the pile of models into grist for the mill of a
spiritual practice. For his part, Schellenberg suggests that instead
of having faith in a specific model of, e.g., Brahman, God or the Dao,
we do better to have it in the more general thing (*res*) that
underlies them all--the axiological, soteriological, and
metaphysical ultimate which has been the organizing principle of this
entry. One advantage of reading ultimacy in Schellenberg's way
is that the general ultimate is more likely to exist than any of the
particular ultimates it covers, since it exists if any of them
do.[50]
A second advantage is that Schellenberg's general ultimate is
by design the core, the overlooked "heart" of the many
models--the same thing that the particular models it covers are
about, just at a higher level of description. So faith in
Schellenberg's ultimate permits a "faith without
details" (2009: ch. 2) in many of the world's religions
and philosophies at once.
For those who, after this long journey through the landscape of models
and now these responses to them, still hope to discover which model is
philosophically the best of them all, know that Wildman (2017) has a
plan for "think[ing] our way through the morass" (2017:
viii). In brief:
1. identify the models worth your time;
2. place them in a "reverent competition" that scores
them on "comparative criteria" (2017: viii-ix, ff.),
then
3. adopt the winner, at least provisionally, since the entire inquiry
is "fallibilist" (2017: 161 and elsewhere).
Wildman demonstrates his plan by following it himself (2017).
Interestingly, he frames his options for step 1 in terms of
"entire systems of thought" comprised of combinations of
models of what is ultimate (which he calls "U types") plus
"ontological cosmologies" ("C types")--an
idea which may really do a better job of identifying our choices than
the models *per se* do, given their fuller capture of an entire
worldview. His U types include agential models on which the ultimate
is personal, ground of being models on which it is impersonal, and
"subordinate deity" models such as process theology on
which it is "disjoint": one or more personal deities
operate in an impersonal ultimacy (2017: 13, 165, 182). His C types
include supernaturalism, which involves disembodied agency;
naturalism, which does not; and monism. Combining the U and C types
produces nine U + C views, and to live out step 1 of his plan, he
chooses his top three to place in competition: supernaturalist
theistic personalism (God as a personal perfect being), naturalist
ground of being (think, e.g., the panentheistic impersonal Dao), and
Whitehead's or Hartshorne's process theism. For step (2),
he then subjects these three views to his comparative criteria, which
include coherence, ability to handle the problems of evil and the One
and the Many, fit with the sciences, and most importantly
non-anthropomorphism, his main criterion since he takes
anthropomorphism to result from misapplying human cognitive structures
that were naturally selected for mere survival purposes to ideas of
ultimacy (2017: 217).
When Wildman ran his competitors against these criteria, the
naturalist ground of being system won. But it is obviously up to each
of us interested in such a project to run our own competitions on the
models we take to be worth our time, with comparative criteria we
think make a model truth-conducive, in order to light on the most
philosophically satisfying model of what is ultimate that we can. That
model would be the one to then subject to the best arguments for and
against the existence of God and other ultimates, to discover in a
fully researched and now clarified way, whether there is anything
ultimate.
## 1. The Formative Period
Abu'l Fath 'Umar ibn Ibrahim Khayyam, commonly known as Umar Khayyam, is the best known Iranian poet-scientist in the West. He was born in the district of Shadyakh of Nayshabur (originally "Nayshapur") in the province of Khorasan sometime around 439 AH/1048 CE,[1] and died there between 515 and 520 AH/1124 and 1129 CE.[2] The word "Khayyam" means "tent maker," and thus it is likely that his father Ibrahim or his forefathers were tent makers. Khayyam is said to have been quiet, reserved, and humble. His reluctance to accept students drew criticism from opponents, who claimed that he was impatient, bad-tempered, and uninterested in sharing his knowledge. Given the radical nature of his views in the *Ruba'iyyat*, he may merely have wished to remain intellectually inconspicuous.
>
> The secrets which my book of love has bred,
>
> Cannot be told for fear of loss of head;
>
> Since none is fit to learn, or cares to know,
>
> 'Tis better all my thoughts remain unsaid.
>
> (*Ruba'iyyat*, Tirtha 1941, 266)
>
>
>
Khayyam's reference to Ibn Sina as "his teacher" has led some to speculate that he actually studied with Ibn Sina. Although this is chronologically impossible (Ibn Sina died in 428 AH/1037 CE, about a decade before Khayyam's birth), several traditional biographers indicate that Umar Khayyam may have studied with Bahmanyar, an outstanding student of Ibn Sina.[3]
Following a number of journeys to Herat, Ray, and Isfahan (the latter being the capital of the Seljuqs) in search of libraries and in pursuit of astronomical calculations, Khayyam's declining health caused him to return to Nayshabur, where he died in the district of Shadyakh.
## 2. The Philosophical Works and Thoughts of Umar Khayyam
Khayyam wrote little, but his works--some fourteen treatises identified to date--were remarkable. They can be categorized primarily in three genres: mathematics, philosophy, and poetry. His philosophical works which have been edited and published recently are (we have not translated *risalah* (*treatise*) in the titles):
1. [**Lucid Discourse**] "A Translation of Ibn Sina's (Avicenna's) *Lucid Discourse*" (*Khutbah al-gharra'*).
2. [**On Being and Obligation**] "On Being and Obligation" (*Risalah fi'l-kawn wa'l-taklif*).
3. [**The Necessity of Contrariety**] "The Response to Three Problems: The Necessity of Contrariety in the World, Predeterminism and Persistence" (*Al-jawab 'an thalath masa'il: Darurat al-tadad fi'l-'alam wa'l-jabr wa'l-baqa'*).
4. [**The Light of the Intellect**] "The Light of the Intellect on the Subject of Universal Knowledge" (*Risalah al-diya' al-'aqli fi mawdu' al-'ilm al-kulli*).
5. [**Principles of Existence**] "On the Knowledge of the Universal Principles of Existence" (*Risalah dar 'ilm kulliyat-i wujud*).
6. [**On Existence**] "On Existence" (*Risalah fi'l-wujud*).
7. [**A Response**] "A Response to Three Problems" (*Risalah jawaban li thalath masa'il*). (3 and 7 are distinct works.)
Except for the first work mentioned above, which is a free translation of and commentary on a discourse by Ibn Sina, the other six philosophical treatises represent Khayyam's own independent philosophical views. It is noteworthy that Khayyam's philosophical treatises were written in the Peripatetic tradition at a time when philosophy in general and rationalism in particular were under attack by orthodox Muslim jurists--so much so that Khayyam had to defend himself against the charge of "being a philosopher."
>
> "A philosopher I am," my enemies falsely say,
>
> But God knows I am not what they say;
>
> While in this sorrow-laden nook, I reside
>
> Need to know who I am, and why Here stay
>
> (translation by Aminrazavi)
>
>
>
Khayyam identifies the main types of inquiry in "philosophy" along the Peripatetic line: "The essential and real inquiries that are discussed in philosophy are three inquiries, [first], 'is it?'...second, 'what is it?'...third, 'why is it?'" (*On Being and Obligation*, Malik (ed.) 1998, 335). While these are standard Aristotelian questions, for Khayyam they have a wider range of philosophical implications, especially with regard to the following topics:
1. The existence of God, His attributes and knowledge
2. Hierarchy of Existents and the problem of multiplicity
3. Eschatology
4. Theodicy
5. Predeterminism and free will
6. Subjects and predicates
7. Existence and essence
### 2.1 The existence of God, His attributes and knowledge
In accordance with the Avicennan tradition, Khayyam refers to God as the "Necessary Existence" (or as it is more common in English translation, "Necessary Existent") and "that which cannot be conceived unless being existent," and "the one whose existence is from its essence, from the intellect's [point of view]" (*On Existence*, Jamshid Nijad Awwal (ed.), 112) and then offers cosmological, teleological, and ontological arguments for His existence.[4] The most fundamental principle about the necessary existent is its absolute oneness. Khayyam discusses issues such as unity, necessity, causality, and the impossibility of a chain of causes and effects continuing *ad infinitum*. In his translation of Avicenna's treatise, as Malik (ed.) (1998, 307) has rightly pointed out, Khayyam adds short remarks and explications to clarify and support Avicenna's view after each sub-section. He completely follows Avicenna's view on God's existence and His attributes. God, the necessary existent, is neither a substance nor an accident (*Principles of Existence*, Malik (ed.) 1998, 388). There is no motion in Him, and thus He is not in time. He has no intention (*qasd*), end or goal (*gharad*), since having an end or a goal indicates *not having something* [*desirable*], and this in turn implies *imperfection* (*Lucid Discourse*, Malik (ed.) 1998, 313-6). Among other topics pertaining to God which Khayyam discusses are God's knowledge of universals and particulars, the absolutely simple essence of the Necessary Existent, and that "all His attributes, with regard to Its essence, are [rational] considerations (*i'itbarat*)" (ibid). We will return to this last point, on rational considerations, below.
### 2.2 Hierarchy of existents and the Problem of Unity and Multiplicity
For Khayyam, one of the most complex philosophical problems is to account for the hierarchy of existents and the manner in which they are ranked in terms of their nobility. In *On Being and Obligation*, Khayyam asserts:
>
> What remains from among the most important and difficult problems [to solve] is the difference among the existents in their degree of nobility.... Perhaps I, and my teacher, the master of all who have proceeded before him, Avicenna, have thoughtfully reflected upon this problem, and consequently have reached the point of convincing ourselves [of its truth]. This conviction is either because of the weakness of our souls, which are convinced by something inwardly faulty and apparently fancy, or because of the strength in that view, which makes us convince ourselves [of its truth], and we will shortly express part of that view symbolically. (*On Being and Obligation*, Malik (ed.) 1998, 338)
>
>
>
How to read the last sentence of the above quote may be subject to scholarly debate. In particular, Khayyam's formulation may suggest that he is not fully satisfied with the Neoplatonic view he is going to present. Nonetheless, in his treatise "On the Knowledge of the Universal Principles of Existence" (*Principles of Existence*, Malik (ed.) 1998, 381-3), as well as some other works, Khayyam adopts the Neoplatonic scheme of emanation and offers an analysis of a number of traditional philosophical themes within this context. Specifically, he follows the theory of hierarchical intellects and souls associated with the celestial spheres, as interpreted in the Islamic philosophical "emanationist" tradition, to explain creation and celestial motions (ibid, 382).
### 2.3 Eschatology
Khayyam has been accused of believing in the transmigration of the soul and even corporeal resurrection in this world. This is partially due to some of the inauthentic quatrains attributed to him. Khayyam's philosophical treatises indicate that he did believe in life after death, and in this regard his views were in line with traditional Islamic eschatological doctrine. Khayyam the poet, however, plays with the notion of life after death in a variety of ways. First, he casts doubt on the very existence of a life beyond our earthly existence; second, he says that based on our very experience in this world, all things seem to perish and not return. Some of his poems play with the idea of the transmigration of the soul (*tanasukh*). This can be taken symbolically, rather than literally; in numerous poems he tells us that we turn to dust and it is from our dust that other living beings rise. Khayyam's comments regarding the possibility of life after death may well have been an indirect criticism of the orthodox jurists who spoke of the intricacies of heaven and hell with certainty.[5]
Khayyam's ideas about the human soul and its persistence are scarce and scattered in his philosophical writings. Toward the end of *The Necessity of Contrariety* (Naji Isfahani (ed.) 2000, 169-70), he explains the notion of *persistence* as *existence through time*, and rejects the view that the concept of *existence* itself implies *duration* (in time). It seems to follow that if an entity exists atemporally, its ontological status cannot be described as "persisting." In his translation of Avicenna's treatise, Khayyam is explicit that "when it [the rational soul] is separated from the matter, it becomes like the angels, in simplicity and apprehension of the intelligible [meanings], till immortal persistence inevitably accompanies it" (*Lucid Discourse*, Malik (ed.) 1998, 317). Describing the fate of the rational soul as "immortally persisting" suggests that for Khayyam, like Avicenna, the rational soul is an incorporeal entity with temporal existence.
### 2.4 Theodicy (The Problem of Evil)
The problem of theodicy, which Khayyam handles both philosophically and poetically, is one of the most prevalent themes in his quatrains, yet his approach differs in each medium. It is an irony that while in his philosophy Khayyam offers a rational explanation for the existence of evil, in his *Ruba'iyyat* he strongly criticizes the presence of evil and finds no fully satisfying justification for it. One may argue that such an inconsistency bears witness to the fact that the philosophical treatises and the *Ruba'iyyat* are not authored by the same person. While this remains a possibility, it is also plausible that these seemingly contradictory works belong to the same person. The discrepancy speaks to the human condition: despite our rationalization of the problem of evil, on a practical and emotional level we remain fundamentally bewildered by the unnecessary presence of so much pain and suffering.
Qadi Abu Nasr, a statesman and scholar from Shiraz, posed the following question to Khayyam:
>
> It is therefore implied that the Necessary Existent is the cause of the occurrence of evil, contrariety and corruption in the world. This is not worthy of the Divine status. So how can we resolve this problem and the conflict, so evil will not be attributed to the Necessary Existent? (*The Necessity of Contrariety*, Malik (ed.), 1998, 362-3)
>
>
>
The first sentence of the above quote is in fact the conclusion of a short argument, described on the previous page. Here is a slightly revised formulation of that argument:
1. Evil and contrary events occur in this world.
2. The evil and contrary events that occur in this world are either necessary existents or contingent existents.
3. They cannot be necessary existents (because, this implies that multiple things are necessary existents, but it has been demonstrated that there is one and only one Necessary Existent, which is absolutely simple and one in all respects).
So,
4. The evil and contrary events that occur in this world are contingent existents.
But
5. The Necessary Existent is the ultimate cause of all contingent existents.
Thus,
6. The Necessary Existent is the ultimate cause of the occurrence of evil and contrary events.
7. If A is the cause of B and B is the cause of C, then A is the cause of C (Khayyam has formulated this premise explicitly in his response to the question).
Therefore,
8. The Necessary Existent is (not only the ultimate cause but) the cause of the occurrence of evil and contrary events.
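The move from (5) and (6) to (8) turns entirely on premise (7), the transitivity of causation. As a purely hypothetical sketch (the relation `Causes`, the constant `god`, and the mediating entities are our labels, not Khayyam's or Qadi Abu Nasr's), the core inference can be checked in Lean:

```lean
-- Hypothetical formalization of steps (5)-(8): a primitive causation
-- relation over entities, with premise (7) taken as an assumption.
variable (Entity : Type) (Causes : Entity → Entity → Prop)
variable (god : Entity)

-- Premise 7: if A is the cause of B and B is the cause of C,
-- then A is the cause of C.
variable (trans : ∀ a b c : Entity, Causes a b → Causes b c → Causes a c)

-- Steps 5-8: if the Necessary Existent causes a contingent existent m
-- (premise 5), and m in turn causes an evil event e (steps 1-4), then
-- the Necessary Existent causes e (conclusion 8).
example (m e : Entity) (h1 : Causes god m) (h2 : Causes m e) :
    Causes god e :=
  trans god m e h1 h2
```

The sketch makes vivid why Khayyam's reply targets the notion of causation itself: granting (7) as stated, the conclusion follows immediately, so his distinction between essential and accidental causation (discussed below in the entry) amounts to denying that a single undifferentiated `Causes` relation is transitive in the morally relevant sense.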
In *The Necessity of Contrariety*, Khayyam offers an extended argument to exonerate God from being morally blameworthy for the creation of evil (the expression "morally blameworthy" is ours). Some have understood his response in terms of associating evil with non-existence or absence. Accordingly, God has created the essences of all contingent beings, which are good in and of themselves since any being, ontologically speaking, is better than non-being. Evil therefore represents an absence, a non-being for which God cannot be blamed (Nasr, *The Poet Scientist Khayyam as Philosopher*; Aminrazavi 2007, 172-5).[6]
A closer reading of the text, however, may suggest a different and novel solution. Evil is sometimes introduced as nonexistence (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 118) and sometimes as following from nonexistence (*ibid*. and *The Necessity of Contrariety*, Naji Isfahani 2000, 168), which itself follows from the contrary relations that hold between contingent objects or events. Many instances of evil, according to Khayyam, do exist in reality. Khayyam's main point is not that evil acts and events are absences or negative facts, as it were. Though he talks about nonexistence and associates evil with absence, the contribution of his argument is not to conceptualize all instances of evil in terms of nonexistence or merely mental existence. Rather, evil is characterized as the concomitant of the contrary relations that obtain between contingent existents, which themselves are essentially created by God. In this formulation, two notions call for clarification. First, the notion of "concomitant" (*lawazim*) as a "necessary non-essential predicable": this notion is well known in the Avicennan school and borrowed from the Aristotelian notion of *per se accidents* (*kath' hauto symbebekos*). The second notion is the distinction between *essential* and *accidental* causation. This is also rooted in an Aristotelian theory of causation and Avicenna's reconstruction of it. Evil, as the concomitant of the contrary relations that hold between contingent existents, is only accidentally created by God. Otherwise put, God essentially causes the ("quiddative," this is our term) contingent beings and only accidentally causes contrary relations, and, as a result, the evil that follows from the co-existence of those essences. Furthermore, Khayyam claims, the problem in this case is whether God is the *essential* cause of the occurrence of evil.
This can be interpreted as a value claim: causing something is not worthy of Divine status only if God essentially and intentionally causes it. (Khayyam immediately modifies this claim, explaining that the First, properly speaking, has no "intention" (*qasd*) but there is eternal guardianship (*'inaya*).) Since God does not essentially and intentionally cause evil (in fact, the existence of some contrary relations is a necessary consequence of creating (quiddative) contingent existents), it follows that God accidentally causes evil and this is not morally problematic.
In the appendix to his response to the first question, Khayyam raises another objection: Given that evil is accidentally created by God and assuming that God is omniscient, "Why did God essentially create things, i.e. (quiddative) contingent existents, if He knew their existence would imply contrary relations, nonexistence, and eventually evil?" Khayyam's response is based on three premises: (1) the existence of evil can be justified if the benefits following from essentially creating (quiddative) contingent beings massively outweigh the harms following from accidentally creating contrary relations (and eventually evil). Furthermore, (2) withholding an act whose benefits massively outweigh the harms following it, given an appropriate principle of proportionality, would imply massive evil. And (3), in fact the benefits following from essentially creating (quiddative) contingent beings massively outweigh the harms following from accidentally creating contrary relations (and eventually evil). These premises are intended to justify God's essential creation of contingent beings despite His knowledge of the ensuing evil.
### 2.5 Determinism and Free Will
Both his Western and Eastern expositors consider Khayyam to be a predeterminist (*jabri*). However, his views on the subject matter are far more complex. Some (Nasr, *The Poet Scientist Khayyam as Philosopher*; Aminrazavi 2007, 175) have interpreted "On Being and Obligation" as a treatise on the problem of determinism and free will. According to Aminrazavi (2007, 175), Khayyam uses the term *taklif* (translated by Nasr (ibid) and Aminrazavi (ibid) as "necessity" as well) to denote "determinism." There is a short paragraph in *The Necessity of Contrariety* that Nasr and Aminrazavi interpret as indicating that Khayyam is inclined to "predeterminism," or "determinism" as they use the term, provided it is not taken to its extreme:
>
> As to his question [i.e., *Qadi Nasawi's* question] concerning which of the two groups [pre-determinists or free will theorists] are closer to truth, I say *prima facie* and at the first sight, perhaps the pre-determinists are closer to truth, provided they do not enter into their nonsensical and absurd [claims], for in this case they verily depart far from truth. (*The Necessity of Contrariety*, Naji Isfahani (ed.) 2000, 169)
>
>
>
Aminrazavi (2007, 177-80), then, identifies and explains three types of "determinism" in Khayyam's view as follows:
1. Universal-cosmic
2. Socio-economic
3. Ontological
In the universal and cosmic sense, our presence in this world and our entry and exit are predetermined, a condition that Khayyam bemoans throughout his *Ruba'iyyat*. This universal-cosmic determinism is the cause of our bewilderment and existential anxiety. Khayyam expresses this when he says:
>
> With Earth's first Clay They did the Last Man knead,
>
> And there of the Last Harvest sow'd the Seed:
>
> And the first Morning of Creation wrote
>
> What the Last Dawn of Reckoning shall read.
>
> (*Ruba'iyyat*, FitzGerald 1859, 41)
>
>
>
The second sense of determinism is socio-economic, which is rarely addressed by Muslim philosophers. Khayyam refers to this notion in passing, for example:
>
> God created the human species such that it is not possible for it to survive and reach perfection unless it is through reciprocity, assistance, and help. Until food, clothes, and a home that are the essentials of life are not prepared, the possibility of the attainment of perfection does not exist. (*On Being and Obligation*, Hashemipour (ed.) 2000, 143).
>
>
>
Finally there is "ontological determinism," which relies on a Neoplatonic scheme of emanation which Khayyam considers to be "among the most significant and complex of all questions," since "the order of the world is in accordance to how the wisdom of God decreed it" (*On Being and Obligation*, Hashemipour (ed.) 2000, 145). He continues, "obligation (*taklif*) is a command issued from God Most High, so people may attain those perfections that lead them to happiness" (*ibid*, 143). This Greek concept of happiness, restated by Farabi as "For every being is made to achieve the ultimate perfection it is susceptible of achieving according to its specific place in the order of being," (al-Farabi 1973, 224) implies that our ontological status or capacity is (at least to some extent) pre-determined.
Here, we have translated *taklif* as "obligation" (not "necessity") and *jabr* as "*predeterminism*" (not "determinism"). This may be justified as follows: "Determinism" as we use it today is considered to be compatible with free will (from the compatibilist point of view) but *jabr*, both in its historical context and in contemporary usage, is not considered to be compatible with free will. Furthermore, we are not sure how to interpret the short quote in response to the second question in *The Necessity of Contrariety* (Naji Isfahani (ed.) 2000, 169). It can be interpreted as a formulation of Khayyam's dissatisfaction with both predeterminism and the free will view. This is because he says that predeterminism is closer to truth "*prima facie* and at the first sight" and then suggests that if the view is expanded, or fully applied, it can lead to nonsensical consequences. It is noteworthy that in the same passage he does not attempt to defend the free will view either. The original term for what has been translated as the "free will" view here is *qadariyya* (*Danish namah-yi* *Khayyami*, Malik (ed.) 1998, 344, note 2). The term is associated with different views and problems, one of which is whether humans are the *creators* (*khaliq*) of their sinful acts. The term "creator" is important in this context, particularly because it is not necessarily synonymous with "cause" and more importantly, because some, particularly in the early Islamic Kalam tradition, held the view that the only real creator is God. So, the context of this discussion is not just the metaphysics of causation, as we use the term. Finally, the above considerations make reconstructing Khayyam's view on this matter and mapping it onto contemporary views on "determinism" more difficult than it might initially appear. Khayyam has the concept of "necessitation" (*wujub*) and discusses it at length in his treatises on existence [references 5 and 6 above].
Thus, the three notions of "determinism" introduced by Aminrazavi may require further investigation.
### 2.6 Subjects, Predicates, and Attributes
In a complex discussion, Khayyam presents his views on the relationship between the subject, predicate, and attributes using a mixture of original insight and Aristotelian precedent. Dividing the attributes (*al-awsaf*) into two categories, essential and accidental, he discusses both categories and their subdivisions, such as concomitant accidental attributes vs. detachable accidental attributes (the latter, in turn, divided into merely detachable in estimation vs. detachable in estimation *and* in existence), in detail (*On Existence*, Jamshid Nijad Awwal (ed.), 101-2).
Developing the same line of argument in *The Necessity of Contrariety* (Naji Isfahani (ed.) 2000, 164), Khayyam proposes the following characterization of an "essential attribute":
>
> [An attribute (*wasf*)] is essential if [1] it is not possible to conceive *the subject* (*al-mawsuf*) [of the attribution] except by conceiving *that* as having the attribute in the first place, and it is required for it [2] to belong to the subject [of the attribution] with no [extra] cause, like *animality* [as an essential attribute] of *human*, and [3 it is required for it] to be prior to *the subject* [of the attribution] essentially (*bi al-dat*), that is, it [i.e. the essential attribute] is the cause of the subject [of the attribution] and not its effect, like *animal* [as an essential attribute] of *human* and [also] *rational* [as an essential attribute of *human*] (*The Necessity of Contrariety*, Naji Isfahani 2000, 164; emphasis is ours).
>
>
>
(We have translated *wasf* as *attribute* and *mawsuf* as *the subject* of attribution.) Khayyam's explication includes three conditions for an essential attribute (in relation to the subject of attribution): (1) conceptual priority, (2) no causal dependence (or posteriority), and (3) causal priority. First, conceptual priority indicates that Khayyam understands "essential attributes" not merely as metaphysically essential properties; they should satisfy some epistemological condition as well. Second, "no causal dependence" distinguishes essential attributes from all other attributes that do not follow from the essence of something (and whose attribution to the subject thus requires a cause distinct from the subject). This is a strong, and well-known, metaphysical condition in the Islamic Aristotelian tradition. Nonetheless, "concomitants" may satisfy the "no causal dependence" requirement as well, because their attribution to the subject does not require any extra cause. Third, causal priority is supposed to exclude concomitants. So, no concomitant, even though necessarily predicable of its subject, is an essential attribute in the sense under discussion.
According to Aminrazavi's interpretation of the above quote, "essential attributes are those which are not possible to conceive of without the preconception of these *a priori* (*badawi*) attributes, such as 'animality which is an essential attribute of man'" (Aminrazavi 2007, 183). The term *badawi* is not in the original Arabic text; in fact, it can only be found in a Persian translation of *The Necessity of Contrariety* (Naji Isfahani 2000, 171). Moreover, *badawi* can be translated in many ways, and "*a priori*" is not its most straightforward translation, if it is a proper translation at all. Finally, the above observations suggest that Khayyam's notion of "essential attributes" only implies conceptual priority (not *a prioricity*) of the essential attribute(s) to its subject. Conceiving *animal* is prior to conceiving *human*; this does not imply that the attribute *animal* is *a priori* (in its Kantian sense). If one could find further pieces of evidence in Khayyam's philosophy, with regard to *a priori* concepts or *innate* ideas, in the Cartesian sense, one might attempt to bolster Aminrazavi's claim. At present, we are not aware of any such evidence or argument.
Khayyam also divides (essential and accidental) attributes into *considerational* (*i'tibari*) and *existential* (*wujudi*) (*i'tibari* has also been translated in the literature of Islamic/Arabic philosophy as *item of consideration*, *abstract*, *secondary*, *fictional*, *intentional*, *mental*, and *conceptual*; see, for example, *Online Dictionary of Arabic Philosophical Terms*. In this context, these translations may have misleading connotations. Thus, we suggest *considerational*, and use it in contrast with *existential*). Khayyam attempts to provide different criteria for this distinction:
>
> And the existential [attribute] is like the attribute *black* [attributed to a] *material object* (*jism*), when [in fact] it is black. Thus, *being black* is an existential attribute, that is, it is an additional meaning to the essence [to which] black [is attributed], as [it is] existing *in re* (*mawjudun fi al-a'yan*). Thus, if *being black* is an existential attribute (*wasf*), then *black* is [called] an existential description (*sifa*). (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 102)
>
>
>
Accordingly, an existential attribute is something that exists *in re* as an additional attribute of its subject. This point, i.e. being additional to the subject as it exists *in re*, is crucial to Khayyam's formulation of existential attributes since considerational attributes are only additional to their subject *in intellectu* (or, more specifically, in the intellect or in the estimative faculty). Khayyam finds existential attributes easily understandable and then quickly moves to introduce considerational attributes:[7]
>
> If the intellect intellects a meaning (*ma'na*) and details that intelligible [meaning] in an intelligible manner and considers its status/conditions (*al-ahwal*), if that meaning happens/occurs to some simple, not multiple, [meaning] like all existing accidents *in re* (*fi al-a'yan*), and it [i.e., the simple meaning] happens to have some attributes, then [the intellect] knows that all those attributes belong to that [simple meaning] on the basis of consideration, not on the basis of the existence *in re*. (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 103)
>
>
>
A considerational attribute is an intelligible meaning whose subject of attribution is another intelligible meaning that is simple, not multiple, *in re* and has multiple attributes. These attributes of the simple meaning are considerational attributes. This is because simple things, with no multiplicity *in re*, are not hylomorphically composite, that is, they are not composed of matter and form. So, their attributes are not grounded on real multiplicity or composition. And according to the Peripatetic philosophical school, accidents (in the sense of the nine Aristotelian categories) are simple *in re*, with no matter or form. If *black* is an accident (quality), *being a color* is a considerational attribute of that. Likewise, if *two* is an accident (quantity), *being half of four* is a considerational attribute of that as well. Let us explain these two examples in more detail.
Considerational attributes, as well as existential ones, come in two varieties: essential and accidental. Khayyam illustrates both cases. An example of an essential considerational attribute is *being a color* as an attribute of *black*. Conceiving *being a color* is conceptually prior to conceiving *black* and is the cause of *black*, that is, *being a color* is essentially prior to *black* (or *white*). Furthermore, an existential attribute of a subject is additional to its essence. But *black* itself is an accident, and Khayyam seems to assume here that no accident can be the subject (in the metaphysical sense) of another accident. Finally, *being a color* and *black* cannot be accidents of the same subject because if this were the case, then *black* could come apart from *being a color*, at least in estimation, but this is impossible. So, *being a color* is an essential considerational attribute of *black* (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 103-4). An example of an *accidental* considerational attribute is *being half of four* as an attribute of *two*. Conceiving *being half of four* is not conceptually prior to conceiving *two*, nor is *being half of four* the cause of *two*. So, *being half of four* is not an essential attribute of *two*; hence, it is an accidental attribute of *two*. Furthermore, an existential attribute of a subject is additional to its essence. If *being half of four* were something additional (*za'id*) to *two* (*in re*), then *two* would have infinite meanings added to its essence (*in re*). But this is impossible, according to Khayyam. Therefore, *being half of four* is not an accidental existential attribute of *two*; rather, it is an accidental considerational attribute of *two* (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 102-3). (This example suggests that *two*, as an accident, is a *simple* meaning, with no matter or form, even though it is true of two entities.)
By way of analogy, and in some respects, the problem of considerational vs. existential attributes, as Khayyam discusses it, is like the problem of non-natural vs. natural properties in contemporary analytic philosophy. This semantic division plays a significant role in Khayyam's metaphysics, to which we shall turn next.
### 2.7 Existence (*wujud*) and Essence (*mahiyyah*)
Khayyam offers a series of arguments for the thesis that "existence is a 'considerational' attribute" (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 106; *The Light of the Intellect*, Nadvi (ed.) 1933 [2010, 347-9]). In section seventeen of *On Existence*, entitled "Existence is an added meaning (*ma'na*) to the intelligible essence," he writes, "And it is as if [the soul/intellect] encounters existence in all things, by way of accidents, and there is no doubt that existence is a meaning added to the intelligible essence" (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 111). By "by way of accidents" Khayyam must mean *as a form of accident*, and "accident" must be used here in the sense of *predicable*, not one of the nine Aristotelian categories of accidents. The "additionality" must also mean "being additional *in intellectu*," not *in re*. By relying on *reductio ad absurdum*, he concludes that if *existence* were an existential attribute, it would have to exist prior to itself, which is impossible. Khayyam states that "essence (*dat*) was non-existent and then became existent." He goes on to argue that "essence does not need existence [before coming to existence] or a relation to existence because the essence prior to existing was non-existing (*ma'dum*), thus how can something need something else prior to its existence?" (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 110). Towards the end of this treatise he uses the Neoplatonic scheme of emanation to explain the origin of essences and states: "Therefore, it became clear that all essences (*dawat*) and quiddities (*mahiyyat*) emanate from the essence of the First Exalted Origin, in an orderly fashion, may glory be upon Him" (*On Existence*, Jamshid Nijad Awwal (ed.) 2000, 118).
In Aminrazavi and Van Brummelen's (2017) earlier edition of this entry, as well as in Nasr (*The Poet Scientist Khayyam as Philosopher*) and Aminrazavi (2007, 168-71), it is suggested that "Clearly Khayyam supports the principality of essence" (i.e., the second view) among the following three views:
1. The existence of an existent is the same as its essence. This view is attributed to Abu'l-Hasan Ash'ari, Abu'l-Hasan Basri, and some of the other Ash'arite theologians.
2. Commonly known as the principality of essence (*asalat al-mahiyyah*), this view maintains that essence is primary and existence is added to it. Many philosophers such as Abu Hashim Juba'i and later Suhrawardi and Mir Damad came to advocate this view.
3. Commonly known as the principality of existence (*asalat al-wujud*), this view maintains that existence is primary and essence is then added (Aminrazavi 2007, 168).
Nasr (*The Poet Scientist Khayyam as Philosopher*), who prefers the same reading, that is, "Khayyam asserts that for each existent, it is the quiddity that is principal while *wujud* is a conceptual (*i'tibari*) quality," immediately qualifies his view by adding that "the distinction between the principality of *wujud* (*asalat al-wujud*) and the principality of *mahiyyah* (*asalat al-mahiyyah*) goes back to the School of Isfahan and especially Mulla Sadra," and Khayyam "does not use the term *asalat al-mahiyyah* as was done by Mulla Sadra." We find Nasr's comment on the notion of "principality" important and take it to indicate that there may be room to argue that not only the term "principality of quiddity" but also the corresponding concept is absent from Khayyam's philosophical treatises. What one finds in Khayyam's work is a series of arguments for the thesis that existence is a considerational (or "secondary," as translated by Aminrazavi) attribute. If one further grants the conditional claim that if existence is a considerational attribute, then quiddity is principal (in the sense at stake in the debate over the principality of existence vs. the principality of quiddity in the post-Isfahan school), then one might derive the conclusion that Khayyam supports the principality of quiddity. The latter conditional presumption, however, is in need of a sound argument, and as far as we can see, no such argument has yet been provided.
Aminrazavi and Van Brummelen (2017) then explain that "a more careful reading reveals an interesting twist: namely, that Khayyam's understanding of how essences came to be casts doubt on his belief in the principality of essence" (Note: we have translated *al-mahiyyah* as "quiddity," not "essence"). This comment may strengthen our hypothesis that Khayyam does not support the principality of quiddity in the first place, at least in the sense rejected by Mulla Sadra and his followers. Perhaps Khayyam does not offer an argument against it either (if he does not work with the notion of *principality* in question). Aminrazavi and Van Brummelen (2017) then continue: "Towards the end of the *Risalah fi'l-wujud* he uses the Neoplatonic scheme of emanation to explain the origin of essences [...] Khayyam replaces essence with existence here and the question is whether he equates them and thereby deviates from his teacher Ibn Sina. [...] It appears that Khayyam equates existence and essence as having emanated from God in an orderly fashion, but there is no explanation of how essence becomes primary and existence secondary. In fact, if existence did not exist how could essences come to be?" After discussing all these "problems," they conclude: "Although the distinction between the principality of *wujud* (*asalat al-wujud*) and the principality of *mahiyyah* (*asalat al-mahiyyah*) can be found among early Muslim philosophers, the subject matter became particularly significant in later Islamic philosophy, especially through the School of Isfahan and the work of its most outstanding figure, Mulla Sadra. This is important for our discussion since Umar Khayyam may simply have presented the arguments for and against the priority and posterity of essence and existence without attaching much significance to their philosophical consequences, as was the case in later Islamic philosophy" (Aminrazavi and Van Brummelen 2017).
The above concerns and the ensuing questions are based on the assumption that Khayyam supports the principality of quiddity, which in turn stems from casting a post-Isfahan-school conceptual framework on Khayyam's philosophy. This latter interpretational strategy, however, requires proper doctrinal and textual justification. In the absence of such evidence, Khayyam's view that existence is a considerational attribute does not seem to imply the principality of quiddity and hence may not be subject to the above criticisms.
In *The Light of the Intellect*, Khayyam offers three reasons why existence is not added to essence *in re*. A summary of his reasons goes as follows:
1. Existence cannot be added to essence *in re*; otherwise an infinite regress will follow, which is untenable.
2. Existence is not added to essence *in re*; otherwise essence should have existed prior to its existence, and this is absurd.
3. With regard to the Necessary Existence, existence clearly is not added to essence *in re*, for dualism would follow.
Some have also read *The Light of the Intellect* as providing an argument for the principality of essence (or quiddity); we believe such readings face problems similar to the ones discussed above. This treatise reconfirms the view that Khayyam believes that *existence*, *oneness*, and many other key metaphysical and scientific attributes (e.g., *color*), are considerational, not existential, attributes.
Khayyam's philosophical works are the least studied aspects of his thought, and were not even available in published form until a few years ago. They permit a fresh look at overall Khayyamian thought and prove indispensable to an understanding of his *Ruba'iyyat*. In his philosophical works, Khayyam writes as a Muslim philosopher and treats a variety of traditional philosophical problems in a careful manner; but in his *Ruba'iyyat*, our Muslim philosopher morphs into an agnostic Epicurean, or so it seems. A detailed study of Khayyam's philosophical works reveals several explanations for this dichotomy, the most likely of which is the conflict between pure and practical reasoning. Whereas such questions as theodicy, the existence of God, soul and the possibility of life after death may be argued for philosophically, such arguments hardly seem relevant to the human condition, given our daily share of suffering.
It is in light of the distinction between "is" and "ought," the "ideal" and the "actual," that discrepancies between Khayyam's *Ruba'iyyat* and his philosophical views should be understood. Khayyam's *Ruba'iyyat* are the works of a sober philosopher and not those of a hedonistic poet. Whereas Khayyam the philosopher-mathematician justifies theism based on the existing order in the universe, Khayyam the poet, for whom suffering in the world remains insoluble, does not talk about theism, or any type of eschatological doctrine, as a solution to the problem of the meaning of human existence.
## 3. The *Ruba'iyyat* (*Quatrains*)
>
> Here with a Loaf of Bread beneath the Bough,
>
> A Flask of Wine, a Book of Verse--and Thou
>
> Beside me singing in the Wilderness--
>
> And Wilderness is Paradise enow.
>
> (*Ruba'iyyat*, FitzGerald 1859, 30; 2009, 21)
>
>
>
Although Umar Khayyam's *Ruba'iyyat* have been admired in the Persian-speaking world for many centuries, they have been widely known in the West only since the mid-19th century, when Edward FitzGerald rendered the *Ruba'iyyat* into English.
The word *Ruba'i* (plural: *Ruba'iyyat*), meaning "quatrain," comes from the word *al-Rabi'*, the number four in Arabic. It refers to a poetic form consisting of a stanza of two lines, each divided into two hemistiches, for a total of four parts. *Taranah* (snatch) and *dobaiti* (two-liner) are very similar to quatrains; they differ from each other in terms of rhyme scheme and themes or message. They all have a short and simple form that provides a type of "poetic punch line."
The overwhelming majority of the literary works on the *Ruba'iyyat* have been devoted to the monumental task of distinguishing the authentic *Ruba'iyyat* from the inauthentic ones. In our current discussion, we shall bypass that controversy and rely on the most authoritative *Ruba'iyyat* in order to provide a commentary on Khayyam's critique of the fundamental tenets of organized religion(s). The salient features of his critique address the following:
1. Impermanence and the quest for the meaning of life
2. Theodicy
3. The here and now
4. Epistemology
5. Eschatology
6. Determinism and free will
7. Philosophical wisdom
### 3.1 Impermanence and the quest for the meaning of life
The *Ruba'iyyat*'s overarching theme is the temporality of human existence and the suffering that one endures during a seemingly senseless existence. Clearly, such a view, based on his observation of the world around him, is in sharp contrast with the Islamic view presented in the Quran: "I (Allah) have not created the celestial bodies and the earth in vain" (Quran, 38:27). Umar Khayyam was caught between the rationalistic tradition of the Peripatetics, deeply entrenched in the Islamic religious universe, and his own failure to find any meaning or purpose in human existence on a more immediate and experiential level. Khayyam the poet criticizes the meaninglessness of life, whereas Khayyam the philosopher remains loyal to the Islamic Peripatetic tradition, which adheres to a theocentric world view.
Using the imagery of a *kuzah* ("jug") and clay throughout the *Ruba'iyyat*, Khayyam alludes to the temporality of life and its senselessness:[8]
>
> I saw the potter in the market yesterday
>
> Pounding and pounding upon a piece of clay
>
> "Behold," said the clay to the potter
>
> Treat me gently for once like you, now I am clay
>
> (translation by Aminrazavi)
>
>
>
The above verse, for example, may be read as suggesting that Khayyam does not see a profound meaning in human existence; his existential anxiety is compounded by the fact that we are subject to our daily share of suffering, a concept that runs contrary to that of the all merciful and compassionate God of Islam.
### 3.2 Theodicy and Justice
The problem of suffering has an ominous presence in the *Ruba'iyyat*, which contains both Epicurean and Stoic themes. On theodicy, Khayyam remarks:
>
> In what life yields in this Two-door monastery
>
> Your share in the pain of heart and death will tarry
>
> The one who does not bear a child is happy
>
> And he not born of a mother, merry
>
> (translation by Aminrazavi)
>
>
>
And also:
>
> Life is dark and maze-like, it is
>
> Suffering cast upon us and comfort in abyss
>
> Praise the Lord for all the means of evil
>
> Ask none other than He for malice
>
> (translation by Aminrazavi)
>
>
>
(It is an irony that while Khayyam complains about theodicy and human suffering throughout his *Ruba'iyyat*, in his philosophical works he offers a treatise almost entirely devoted to a philosophical justification of the problem of evil.) It might seem that theodicy as a theological and philosophical problem in Islam never received the attention it did in the Western intellectual tradition; however, a series of problems associated with evil have been discussed under different names in this tradition. In the discussion of *Tawhid*, the absolute oneness and unity of God in Islam, the problem of the existence of evil, particularly the manner in which evil is created by God, is discussed (see section 2.4 above). In the debate over God's attribute of justice, between the Ash'arites and the Mu'tazilites, some aspects of the problem of evil, e.g., whether our morally wrong actions are ultimately created by God or attributed to him, are carefully examined as well. Also, in the discussion of relative and negative (privative) attributes, the question of the nature of evil is raised again. Khayyam's poem seems to acknowledge both the reality and malicious nature of evil and the ensuing suffering, on the one hand, and God's role in allowing evil and his "answerability," on the other. This makes questioning God on the existence of evil not illegitimate: "Ask none other than He for malice."
### 3.3 Here and Now
For Khayyam the poet, "the tale of the seventy-two nations," by which he refers to organized branches of Islam, and of religions in general, and traditional metaphysical beliefs (mainly Platonic and neo-Platonic) are merely flights of fancy, for the human condition, which he describes as a "sorrow laden nest," at least in part contradicts many such beliefs. The art of living in the present, a theme dealt with in Sufi literature, is a type of wisdom that must be acquired, since living for the hereafter and heavenly rewards is conventional wisdom, more suitable for the masses. On this point, Khayyam asserts:
>
> Today is thine to spend, but not to-morrow,
>
> Counting on morrow breedeth naught but sorrow;
>
> Oh! Squander not this breath that heaven hath lent thee,
>
> Nor make too sure another breath to borrow
>
> (Whinfield 2001, 30; modified by Aminrazavi)
>
>
>
And also:
>
> What matters if I feast, or have to fast?
>
> What if my days in joy or grief are cast?
>
> Fill me with Thee, O Guide! I cannot ken
>
> If breath I draw returns or fails at last.
>
> (Whinfield 2001, 144)
>
>
>
Khayyam's emphasis on living in the present, or as Sufis say, "Sufi is the son of time," along with his use of other Sufi metaphors such as wine, intoxication and love making, have been interpreted by some scholars as merely mystical allegories.[9] Although a mystical interpretation of the *Ruba'iyyat* has been advocated by some, it remains the view of a minority of scholars.
The complexity of the world, according to Khayyam the mathematician-astronomer, necessitates the existence of a creator and sustainer of the universe; and yet on a more immediate and existential level, he finds no reason or meaning for human existence. He seems to want to hold both the idea of "a necessary existence" and what necessarily follows from it (a divine character is always present, and sometimes critically addressed, in his quatrains) and the immediate intuition of the presence of apparently unexplainable suffering, evil, and meaninglessness in this world. This leads to the theme of doubt and bewilderment. A practical response to this existential crisis makes room for a number of Epicurean and Stoic themes in Khayyam's view, such as being "the son of time," or the art of living in the present.
### 3.4 Doubt and Bewilderment
Humans, Khayyam tells us, are thrown into an existence they cannot make sense of:
>
> The sphere upon which mortals come and go,
>
> Has no end nor beginning that we know;
>
> And none there is to tell us in plain truth:
>
> Whence do we come and whither do we go.
>
> (Whinfield 2001, 132)
>
>
>
Again, the vivid and apparent inconsistency between a seemingly senseless existence and a complex and orderly world leads to existential and philosophical doubt and bewilderment. The tension between Khayyam's philosophical writings in which he embraces the Islamic Peripatetic philosophical tradition and his *Ruba'iyyat* where he expresses his profound skepticism stems from this paradox. In his *Ruba'iyyat* Khayyam embraces, or comes very close to embracing, humanism and agnosticism, leaving the individual human being disoriented, anxious and bewildered; whereas in his philosophical writings he operates within a theistic world where the most fundamental questions, like the existence of God and explanation of evil, find (relatively) decisive solutions. Lack of certainty with regard to religious truth leaves the individual in an epistemologically suspended state where one has to live in the here and now, irrespective of the question of truth:
>
> Since neither truth nor certitude is at hand
>
> Do not waste your life in doubt for a fairyland
>
> O let us not refuse the goblet of wine
>
> For, sober or drunken, in ignorance we stand
>
> (translation by Aminrazavi)
>
>
>
This skepticism towards organized religions runs deep in Khayyam's quatrains; he explicitly questions orthodoxy, suggesting that the final disclosure of ultimate truth might surprise all "believers," and perhaps "nonbelievers" as well (note: your reward is neither *here* nor *there*):
>
> Alike for those who for To-day prepare,
>
> And those that after a To-morrow stare,
>
> A Muezzin from the Tower of Darkness cries
>
> "Fools! your Reward is neither Here nor There!"
>
> (*Ruba'iyyat*, FitzGerald 2009, 28)
>
>
>
### 3.5 Eschatology
The *Ruba'iyyat* casts doubt on Islamic eschatological and soteriological views. Once again the tension between Khayyam's poetic and philosophical modes of thought surfaces; experientially there is evidence to conclude that death is the end:
>
> Behind the curtain none has found his way
>
> None came to know the secret as we could say
>
> And each repeats the dirge his fancy taught
>
> Which has no sense--but never ends the lay
>
> (Whinfield 2001, 229)
>
>
>
In the *Ruba'iyyat*, Khayyam portrays the universe as a beautiful ode which reads "from dust we come and to dust we return," and "every brick is made from the skull of a man." While Khayyam does not explicitly deny the existence of life after death, perhaps for political reasons and fear of being labeled a heretic, there are subtle references throughout his *Ruba'iyyat* that the hereafter should be taken with a grain of salt. In contrast, in his philosophical writings we see him argue for the incorporeality of the soul, which paves the way for the existence of life after death. The irreconcilable conflict between Khayyam's observation that death is the inevitable end for all beings and his philosophical reflections in favor of the possibility of life after death remains an insoluble riddle.[10]
### 3.6 Free Will, Determinism and Predestination
Khayyam is known as a determinist in both the East and the West, and deterministic themes can be seen in much of the *Ruba'iyyat*. But if we read his *Ruba'iyyat* together with his philosophical writings, the picture that emerges may be more rightly called "soft determinism." One of Khayyam's best known quatrains in which determinism is clearly conveyed asserts:
>
> The Moving Finger writes; and, having writ,
>
> Moves on: nor all your Piety nor Wit
>
> Shall lure it back to cancel half a Line,
>
> Nor all your Tears wash out a Word of it
>
> (*Ruba'iyyat*, FitzGerald 1859, 20)
>
>
>
Again, in his philosophical treatise *The Necessity of Contrariety*, Khayyam adheres to three types of determinism. On a universal or cosmic level, our birth is determined in the sense that we had no choice in the matter. Ontologically speaking, our essence and our place in the overall hierarchy of beings appear also to be predetermined. However, the third category of determinism, socio-economic determinism, is man-made and thus changeable. Our attempt to face this determined end of our life, however, will again lead us to nothing but bewilderment:
>
> At first they brought me perplexed in this way
>
> Amazement still enhances day by day
>
> We all alike are tasked to go but Oh!
>
> Why are we brought and sent? This none can say
>
> (*Ruba'iyyat*, Tirtha 1941, 18)
>
>
>
Thus, a reading of the *Ruba'iyyat* in conjunction with Khayyam's philosophical reflections brings forward a more sophisticated view of free will and determinism, indicating that Khayyam believed in free will within a form of cosmic determinism and did not support a full-blown predeterminism.
### 3.7 Philosophical Wisdom
Khayyam uses the concept of "wine and intoxication" throughout his *Ruba'iyyat* in three distinct ways:
1. The intoxicant wine
2. The mystical wine
3. The wine of wisdom
The pedestrian use of wine in the *Ruba'iyyat*, devoid of any intellectual significance, emphasizes the need to forget our daily suffering. The mystical allusions to wine pertain to a type of intoxication which stands opposed to discursive thought. The esoteric use of wine and drinking, which has a long history in Persian Sufi literature, refers to the state of ecstasy in which one is intoxicated with Divine love. Those supporting the Sufi interpretation of *Ruba'iyyat* rely on this literary genre. While Khayyam was not a Sufi in the traditional sense of the word, he includes the mystical use of wine among his allusions.
Khayyam's use of wine in the profound sense in his *Ruba'iyyat* is a type of Sophia that provides a sage with philosophical wisdom, allowing one to come to terms with the temporality of life and to live in the here and now:
>
> Those imprisoned by the intellect's need to decipher
>
> Humbled; knowing being from non-being, they proffer
>
> Seek ignorance and drink the juice of the grape
>
> Those fools acting as wise, scoffer.
>
> (translation by Aminrazavi)
>
>
>
*Khirad* is the type of wisdom that brings about a rapprochement between the poetic and discursive modes of thought, one that sees the fundamental irony in what appears to be a senseless human existence within an orderly and complex physical universe. For Khayyam the mathematician-astronomer, the universe cannot be the result of random chance; on the other hand, Khayyam the poet fails to find any purpose for an individual human existence in this orderly universe.
>
> As Spring and Fall make their appointed turn,
>
> The leaves of life one aft another turn;
>
> Drink wine and brood not--as the Sage has said:
>
> "Life's cares are poison, wine the cure in turn."
>
> (*Ruba'iyyat*, Sa'idi 1994, 58)
>
>
>
## 4. Khayyam the Mathematician and Scientist
>
> And the part of philosophy known as *mathematics* is the easiest of its parts to grasp both as to conception and as to assent. As to the numerical part of it, it is a very evident matter. And as to the geometrical, likewise hardly anything of it will be unknown to someone who has an unimpaired constitution, a sharp view, an excellent intuition. (*Commentary on the Difficulties of Certain Postulates of Euclid's Work*, Rashed and Vahabzadeh, 2000, 217-218).
>
>
>
In several respects Khayyam's mathematical writings are similar to his texts in other genres: they are relatively few in number, but deal with well-chosen topics and carry deep implications. Some of his mathematics relate in passing to philosophical matters (in particular, reasoning from postulates and definitions), but his most significant work deals with issues internal to mathematics and in particular the intersection between geometry and algebra.
Khayyam's mathematical works, available in recent editions and translations, are:
1. "Treatise on the division of a quadrant of a circle" (*Risala fi taqsim rub' al-da'ira*) in *Hakim Omare Khayyam as an Algebraist*, Arabic text and Persian translation in Mossaheb, 1960, 59-74, 251-291; English translation and critical edition in Rashed and Vahabzadeh, 2000, 165-180.
2. "Treatise on Algebra" (*Risala* *fi'**l**-barahin 'ala* *masa*'*il* *al-jabr* *wa*'*l**-muqabala*) in *Hakim Omare Khayyam as an Algebraist*, Persian translation in Mossaheb, 1960, 159-250; English translation and critical edition in Rashed and Vahabzadeh, 2000, 111-164.
3. "Commentary on the Difficulties of Certain Postulates of Euclid's Work" (*Sharh ma ashkal min musadirat al-uqlidis*) in *Danish namah-yi Khayyami*, Malik (ed.) 1377 (A.H.s.), 71-112; English translation and critical edition in Rashed and Vahabzadeh, 2000, 217-255.
4. "Problems of Arithmetic" (*Mushkilat al-hisab*), (only the title page is extant at the University of Leiden library collection Cod. or. 199, but Khayyam references the work in his *Algebra*).
### 4.1 Solutions of Cubic Equations
Khayyam seems to have been attracted to cubic equations originally through his consideration of the following geometric problem: in a quadrant of a circle, drop a perpendicular from some point on the circumference to one of the radii so that the ratio of the perpendicular to the radius is equal to the ratio of the two parts of the radius on which the perpendicular falls. In the short "Treatise on the division of a quadrant of a circle," Khayyam leads us from one case of this problem to the equation \(x^3 + 200x = 20x^2 + 2000\).[11] Khayyam states that an exact solution is not possible and provides an approximation. Khayyam also generates a direct geometric solution: he uses the numbers in the equation to determine intersecting curves of two conic sections (a circle and a hyperbola) and demonstrates that the solution \(x\) is equal to the length of a particular line segment in the diagram.
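Khayyam's resort to approximation for this cubic can be illustrated numerically. The sketch below is a modern check, not Khayyam's method (he intersected a circle with a hyperbola); it brackets the equation's real root by bisection:

```python
def f(x):
    # Khayyam's cubic from the quadrant problem,
    # x^3 + 200x = 20x^2 + 2000, rewritten as f(x) = 0.
    return x**3 - 20 * x**2 + 200 * x - 2000

def bisect(lo, hi, tol=1e-10):
    # Halve the bracketing interval [lo, hi] until it is tiny;
    # f(lo) and f(hi) must differ in sign on entry.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(0, 30)   # f(0) < 0 < f(30)
print(round(root, 4))  # ~15.437
```

Since \(f'(x) = 3x^2 - 40x + 200\) has negative discriminant, \(f\) is strictly increasing and this is the equation's only real root.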
Solving algebraic problems using geometric tools was not new; for quadratic equations such geometric methods date back to the Greeks and even to the Babylonians. Khayyam's predecessors such as al-Khwarizmi (early 9th century) and Thabit ibn Qurra (late 9th century) had already solved quadratic equations using the straightedge-and-compass geometry of Euclid's *Elements*. Since negative numbers were avoided in Arabic mathematics, Muslim mathematicians needed to solve several different types of quadratic equations with positive coefficients: for instance, \(x^2 = mx + n\) was treated as fundamentally different from \(x^2 + mx = n\). Among cubic equations there are fourteen distinct types that cannot be reduced to a linear or quadratic form by dividing through by \(x\) or \(x^2\). In his "Treatise on Algebra"[12] Khayyam notes that four of these fourteen types had been solved, and says that al-Khazin (ca. 900-971) had used conic sections to solve a problem from Archimedes' treatise *On the Sphere and Cylinder* that al-Mahani (fl. ca. 860) had previously converted into a cubic equation. Khayyam also states that al-Layth (fl. 10th century) had not treated these cubic forms exhaustively.
In the *Algebra*, Khayyam claims to be the first to deal systematically with all fourteen types of cubic equations. He solves each one in turn, again through the use of intersecting conic sections. Khayyam also considers circumstances when certain cubic equations have multiple solutions. Although he does not handle this topic perfectly, his effort nonetheless exceeds previous efforts. In an algebra where powers of \(x\) corresponded to geometrical dimensions, the solution of cubic equations was the apex of the discipline. Nevertheless, even here Khayyam advanced algebra by considering its unknowns as dimension-free abstractions of continuous quantities.[13]
A geometric solution to a polynomial equation may seem peculiar to modern eyes, but the study of cubic equations (and indeed much of medieval algebra) was motivated by geometric problems. Khayyam was explicitly aware that an algebraic solution of the cubic equation remained to be found. He never produced such a solution, nor did anyone else until Scipione del Ferro, Niccolo Fontana Tartaglia, and Gerolamo Cardano in the mid-16th century.
Khayyam's work intersecting algebra and geometry is a precursor to the analytic geometry popularized in Descartes' *La Geometrie* published in 1637. Descartes' *La Geometrie* refines and generalizes Khayyam's methods and forms a bridge from the medieval mathematics of Khayyam to the modern mathematics of Newton and Leibniz (Rashed and Vahabzadeh, 2000, 20-29).
### 4.2 The Parallel Postulate and the Theory of Ratios
The process of reasoning from postulates and definitions has been basic to mathematics at least since the time of Euclid. Islamic geometers were well versed in this art, but also spent some effort examining the logical foundations of the method. They were unafraid to revise and improve upon Euclid's starting points, and they rebuilt Euclid's *Elements* from the foundation in several ways. Khayyam's "Commentary on the Difficulties of Certain Postulates of Euclid's Work"[14] deals with the two most important issues in this context, the parallel postulate and the definition of equality of ratios.
Euclid's fifth "parallel" postulate states that if a line falls on two given lines such that the two interior angles add up to less than two right angles, then the given lines must meet on that side. This statement is logically equivalent to several more easily understood assertions, such as: there is exactly one parallel to a given line that passes through a given point not on the given line; or, the angles of a triangle sum to two right angles. It has been known since the early 19th century that non-Euclidean geometries violate these properties; indeed, it is not yet known whether the space in which we live satisfies them.
The parallel postulate, however, was not subject to doubt at Khayyam's time, so it is more appropriate to think of Islamic efforts in this area as part of the tradition of improving upon Euclid rather than as the origin of non-Euclidean geometry. Khayyam's reconstruction of Euclid is one of the better ones: he does not try to prove the parallel postulate. Rather, he replaces it with two statements, which he attributes to Aristotle,[15] that are both simpler and more self-evident: two lines that converge must intersect, and two lines that converge can never diverge in the direction of convergence. Khayyam then replaces Euclid's 29th proposition, the first in which the parallel postulate is used, with a new sequence of eight propositions. Khayyam's insertion amounts to determining that the so-called Saccheri quadrilateral (one with two equal sides perpendicular to the base) is in fact a rectangle. As Giovanni Girolamo Saccheri published this quadrilateral in his book *Euclides ab omni naevo vindicatus* in 1733, it is also known as the Khayyam-Saccheri quadrilateral. Khayyam believed his approach was an improvement on that of his predecessor Ibn al-Haytham (ca. 965-1040) because his method does not rely on the concept of motion, which Khayyam said should be excluded from geometry. Apparently Nasir al-Din al-Tusi agreed, since he followed Khayyam's path in the next century (Rashed and Vahabzadeh, 2000, 186).
Book II of "Commentary on the Difficulties of Certain Postulates of Euclid's Work" takes up the question of the proper definition of ratio. This topic is obscure to the modern reader, but it was fundamental to Greek and medieval mathematics. If the quantities joined in a ratio are whole numbers, then the definition of their ratio poses no difficulty. If the quantities are geometric magnitudes, the situation is more complicated because the two line segments might be incommensurable (in modern terms, their ratio corresponds to an irrational number). Euclid, following Eudoxus, asserts that \(A/B = C/D\) when, for any whole numbers \(x\) and \(y\), the multiples \(xA\) and \(xC\) are both (i) greater than, (ii) equal to, or (iii) less than the multiples \(yB\) and \(yD\) respectively. There is little wonder that Khayyam and others were unhappy with this definition, for while it is clearly true, it does not get at the heart of what it means for ratios to be equal.
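The Eudoxus criterion can be sketched directly. Note that a finite computation can only refute equality, by exhibiting a pair of multipliers on which the comparisons disagree; confirming equality would require checking all whole numbers, so the sampling bound below is an illustrative assumption:

```python
def same_ratio_eudoxus(A, B, C, D, bound=50):
    # Euclid V, def. 5: A/B = C/D iff for all whole numbers x, y,
    # xA compares to yB (greater / equal / less) exactly as xC
    # compares to yD.  Here we can only sample x, y up to a bound.
    def cmp(u, v):
        return (u > v) - (u < v)
    return all(
        cmp(x * A, y * B) == cmp(x * C, y * D)
        for x in range(1, bound + 1)
        for y in range(1, bound + 1)
    )

print(same_ratio_eudoxus(3, 2, 6, 4))  # True: 3/2 = 6/4
print(same_ratio_eudoxus(3, 2, 7, 4))  # False: x=5, y=8 disagrees
```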
An alternate approach, which may have existed in ancient Greece but certainly existed in the 9th century, is the "anthyphairetic" definition (Hogendijk 2002). The Euclidean algorithm is an iterative process that is used to find the greatest common divisor of a pair of numbers. The algorithm may be applied equally well to find the greatest common measure of two geometric magnitudes, but the algorithm will never terminate if the ratio between the two magnitudes is irrational. A sequence of divisions within the algorithm results in a "continued fraction" that corresponds to the ratio between the original two quantities. Khayyam, following several earlier Islamic mathematicians, defines the equality of \(A/B\) and \(C/D\) according to whether their continued fractions are equal.
One may wonder why the proponents of the anthyphairetic definition felt that it was more natural than Euclid's approach. There is no doubt, however, that it was preferred; Khayyam even refers to the anthyphairetic definition as the "true" nature of proportionality. Part of the explanation might be simply that the Euclidean algorithm applied to geometric quantities was much more familiar to medieval mathematicians than to us. Khayyam's preference is due to the fact that the anthyphairetic definition allows a ratio to be considered on its own, rather than always in equality to some other ratio. Khayyam's achievement in this topic was not to invent a new definition, but rather to demonstrate that each of the existing definitions logically implies the other. Thus Islamic mathematicians could continue to use ratio theorems from the *Elements* without having to prove them again according to the anthyphairetic definition.
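The anthyphairetic definition can be sketched with the Euclidean algorithm. For simplicity the magnitudes below are whole numbers (an assumption of this sketch; for incommensurable magnitudes the quotient sequence never terminates, and one would compare initial segments):

```python
def anthyphairesis(a, b, max_terms=20):
    # Run the Euclidean algorithm on the pair (a, b), recording the
    # successive quotients; this list is the continued-fraction
    # expansion of the ratio a/b.  For commensurable magnitudes the
    # process terminates; for incommensurable ones it never would.
    terms = []
    while b and len(terms) < max_terms:
        q, r = divmod(a, b)
        terms.append(q)
        a, b = b, r
    return terms

# The anthyphairetic criterion: A/B = C/D iff their expansions agree.
print(anthyphairesis(45, 16))                            # [2, 1, 4, 3]
print(anthyphairesis(45, 16) == anthyphairesis(90, 32))  # True
```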
Book III continues the discussion of ratios; Khayyam sets himself the task of demonstrating the seemingly innocuous proposition \(A/C = (A/B) (B/C)\), a fact which is used in the *Elements* but never proved. During this process Khayyam sets an arbitrary fixed magnitude to serve as a unit, to which he relates all other magnitudes of the same kind. This arrangement allows Khayyam to incorporate both numbers and geometric magnitudes within the same system. Thus Khayyam thinks of irrational magnitudes as numbers themselves, effectively defining the set of "real numbers" that we take for granted today, composed of the rational numbers together with the irrational numbers. This revolutionary step of considering irrational numbers as mathematical entities in their own right was one of the most significant advances of conception to occur between ancient Greek and modern mathematics (Youschkevitch and Rosenfeld 1973, 327).
### 4.3 Root Calculations and the Binomial Theorem
We know that Khayyam wrote a treatise, now lost, titled "Problems of Arithmetic," involving the determination of \(n\)-th roots (Youschkevitch and Rosenfeld 1973, 325-326). In his *Algebra* Khayyam writes that methods for calculating square and cube roots come from India, and that he is the first to extend these methods to the determination of roots of any order (Rashed and Vahabzadeh 2000, 6). Even more interestingly, Khayyam states that he has demonstrated the validity of his methods using proofs that "are only numerical demonstrations based on the arithmetical Books of the *Elements*" (Rashed and Vahabzadeh 2000, 116-117). If both of these statements are true, then it is hard to avoid the conclusion that Khayyam had used the binomial theorem
\[(a + b)^n = a^n + na^{n-1}b + \cdots + nab^{n-1}+ b^n\]
in his method, improving on and extending Abu Bakr al-Karaji's (ca. 953-1029) work on binomial expansions. "Pascal's" celebrated arithmetic triangle is a triangular arrangement of the coefficients of the binomial expansion. Scholars such as Chavoshi advocate renaming Pascal's triangle the "al-Karaji-Khayyam triangle", given that European mathematicians later adopted the same method of extracting roots as Khayyam (Chavoshi 1380 A.H.s., 262-273).
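The coefficients at issue can be generated by the triangle's additive rule. The sketch below merely illustrates the connection between the triangle and the binomial expansion; it is not a reconstruction of Khayyam's lost root-extraction procedure:

```python
def pascal_row(n):
    # Row n of the arithmetic triangle via the additive rule:
    # each entry is the sum of the two entries above it.
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

# Row n lists the coefficients of (a + b)^n; spot-check n = 5.
coeffs = pascal_row(5)  # [1, 5, 10, 10, 5, 1]
a, b, n = 2, 3, 5
expansion = sum(c * a**(n - k) * b**k for k, c in enumerate(coeffs))
print(expansion == (a + b)**n)  # True
```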
The history of Pascal's triangle appears to originate in India with Acharya Pingala's (ca. 200 BCE) study of meter and sound combinations in Sanskrit poetry. The Indian commentator Halayudha wrote a commentary on Pingala's *Chandahsutra* at the end of the 10th century CE giving the first appearance of a triangular arrangement of the binomial coefficients, solely as a means of finding the number of combinations of short and long syllable patterns. Meanwhile in China, Jia Xian (ca. 1050) had also discovered the method of finding \(n\)-th roots with Pascal's triangle, as relayed by the 13th-century mathematician Yang Hui. However, Jia Xian's work is not extant. Thus both the Islamic and the Chinese origins of Pascal's triangle as a means of finding \(n\)-th roots rest on manuscripts, from roughly the same period, that are now lost (Joseph 2011, 247, 270-281, 354-355).
### 4.4 Astronomy and Other Works
Khayyam moved to Isfahan in 1074 to help establish a new observatory under the patronage of Jalal al-Din Malik-shah, the Seljuk sultan, and his vizier, Nizam al-Mulk. Undoubtedly, Khayyam played a major role in the creation of the Maliki or Jalali calendar, the observatory's most significant project. The calendar Khayyam devised is more accurate than the Gregorian calendar still in Western use: it requires a correction of one day every 5,000 years, compared with one day every 3,333 years for the Gregorian. In addition to the calendar, the Isfahan observatory produced the *Zij Malik-shahi* (of which only a fragment of its star catalogue survives), apparently one of the more important astronomical handbooks (Youschkevitch and Rosenfeld 1973, 324).
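The quoted accuracies can be roughly checked with back-of-the-envelope arithmetic. The year lengths below are commonly quoted modern figures, and the 33-year intercalation cycle is one common reconstruction of the Jalali rule; both are assumptions of this sketch, not data taken from the entry's sources:

```python
# Assumed modern value for the mean tropical year, in days.
TROPICAL_YEAR = 365.2422

gregorian_year = 365 + 97 / 400  # Gregorian rule: 97 leap days per 400 years
jalali_year = 365 + 8 / 33       # one reconstruction: 8 leap days per 33 years

def years_per_day_of_drift(calendar_year):
    # Years until the calendar drifts one full day from the seasons.
    return 1 / abs(calendar_year - TROPICAL_YEAR)

print(round(years_per_day_of_drift(gregorian_year)))  # ~3333
print(round(years_per_day_of_drift(jalali_year)))     # larger than the Gregorian figure
```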
Remaining works attributed to Khayyam in other fields are:
1. "Treatise on the Difficulties of the 'Book of Music' (by Euclid)" (*Sharh al-mushkil min kitab al-musiqi*). Only one chapter titled "Discussion on Genera Contained in a Fourth" (*Al-Qawl 'ala ajnas allati* *bi'**l**-arba'a*) is extant. For a critical edition see Chavoshi, "Omar al-Khayyam wa'l-musiqi al-nadariyyah," translated into English by M.M. 'Abd al-Jalil, *Farhang*, vol. 1, 14 (1378 A.H.s.): 29-32 and 203-214.
2. "The Balance of Wisdoms" (*Mizan al-hikmah*), "Two Treatises on the Level Balance" (*Fi*'*l**-qustas al-mustaqim)*, and "On the Art of Determination of Gold and Silver in a Body Consisting of Them" (*Risalah fi'l- ihtiyaj lima 'rifat miqdari al-dhahab wa'l-fidah fi jism murakkab minha*) in *Danish namah-yi Khayyami*, Malik (ed.) 1377 A.H.s., 293-300, and Persian trans., 301-304.
3. "On discovering the truth of Noruz" (*Risalah dar kashf haqiqat Noruz)* in *Danish namah-yi Khayyami*, Malik (ed.) 1377 A.H.s., 424-438.
In Book III of "Commentary on the Difficulties of Certain Postulates of Euclid's Work", Khayyam discusses how the compounding of ratios in musical composition is numerical in character. In this discussion, Khayyam refers to his earlier commentary on Euclid's "Book of Music" (Rashed and Vahabzadeh, 2000, 251-252). However, only one chapter of Khayyam's commentary, "Discussion on Genera Contained in a Fourth", which treats tetrachords (a series of four notes separated by three intervals), survives (Aminrazavi 2007, 198-199).
Khayyam's treatise "On the Art of Determination of Gold and Silver in a Body Consisting of Them" appears in the book *Mizan al-hikmah* written by his student Abd'l-Rahman Khazeni (fl. 1115-1130). Khazeni attributes this article, as well as the "Two Treatises on the Level Balance," to Khayyam. Techniques for determining the proportions of gold and silver in a compound date back to Archimedes, and Khayyam's contribution appears to be the precision of his method along with the intricate designs of his level balance (Hall 1981, 341-343).
Another treatise, "On discovering the truth of Noruz", is of doubtful authenticity: only one of its three parts is believed to have been written by Khayyam. In it Khayyam allegedly writes of the history and mythology of the Persian new year, which begins at the spring equinox, although the style and errors of the text cast doubt on Khayyam's authorship (Aminrazavi 2007, 38).
As with his philosophical and mathematical writings, Khayyam's texts in other disciplines appear to have been taken seriously by his contemporaries and later scholars.
## 5. Khayyam in the West
### 5.1 Orientalism and the European Khayyam
The earliest extant translation of the *Ruba'iyyat* was produced by Thomas Hyde, whose translation of a single quatrain appeared in his *Veterum Persarum et Parthorum et Medorum Religionis* (1700). It was not until the 19th century, however, that the Western world and literary circles discovered Umar Khayyam in all his richness.
The voyage of the *Ruba'iyyat* to the West began when Sir Gore Ouseley, the British ambassador to Iran, presented his collection to the Bodleian Library at Oxford University upon his return to England. In the 1840s Professor Edward Byles Cowell of Oxford University discovered a copy of the *Ruba'iyyat* of Khayyam and translated several of the *Ruba'iyyat*. Amazed by their profundity, he shared them with Edward FitzGerald, who took an immediate interest and published the first edition of his own translation in 1859. Four versions of FitzGerald's *Ruba'iyyat* were published over his lifetime as new quatrains were discovered. Realizing the free nature of his work in his first translation, FitzGerald chose the word *rendered* to appear on the title page in later editions instead of "translation" (Lange 1968).
### 5.2 The Impact of Khayyam on Western Literary and Philosophical Circles
While the connection between the Pre-Raphaelites and Umar Khayyam should not be exaggerated, the relationship that Algernon Charles Swinburne, George Meredith, and Dante G. Rossetti shared with Edward FitzGerald and their mutual admiration of Khayyam cannot be ignored. The salient themes of the *Ruba'iyyat* became popular among the Pre-Raphaelites and their circle (Lange 1968). Khayyam's popularity led to the formation of the "Omar Khayyam Club of London" (Conway 1893, 305) in 1892, which attracted a number of literary figures and intellectuals. The success of the Club soon led to the simultaneous formation of the Omar Khayyam Clubs of Germany and America.
In America, Umar Khayyam was well received in the New England area, where his poetry was propagated by the official members of the Omar Khayyam Club of America. The academic community discovered Khayyam's mathematical writings and poetry in the 1880s, when scholarly articles on and translations of his works were published. Some, such as William Edward Story, praised Umar as a mathematician and compared his views with those of Johannes Kepler, Gottfried Wilhelm Leibniz, and Isaac Newton, while others drew their inspiration from his literary tradition and called themselves "Umarians". This new literary movement soon attracted such figures as Mark Twain, who composed forty-five burlesque versions of FitzGerald's quatrains and integrated them with two of FitzGerald's stanzas in a work entitled *AGE-A Ruba'iyat* (Twain, 1983, 14). The movement also drew the attention of T.S. Eliot's grandfather William Greenleaf Eliot (1811-1887), two of T.S. Eliot's cousins, and T.S. Eliot himself. Umar Khayyam's *Ruba'iyyat* seems to have elicited two distinct responses among many of his followers in general and the Eliot family in particular: admiration for a rational theology on the one hand, and concern with the rise of skepticism and moral decay in America on the other.
Among other figures influenced by the *Ruba'iyyat* of Umar Khayyam were certain members of the New England School of Transcendentalism, including Henry Wadsworth Longfellow, Ralph Waldo Emerson, and Henry David Thoreau (Aminrazavi 2013; for a complete discussion on Umar Khayyam in the West see Aminrazavi 2007, 204-278).
## 6. Conclusion
In the foregoing discussion, we have seen that Umar Khayyam was a philosopher-sage (*hakim*) and a spiritual-pragmatist whose *Ruba'iyyat* should be seen as a philosophical commentary on the human condition. The salient features of Umar Khayyam's pioneering work in various branches of mathematics were also discussed. Khayyam's mathematical genius not only produced the most accurate calendar to date, but the issues he treated remained pertinent up until the modern period.
For Khayyam, there are two discourses, each of which pertains to one dimension of human existence: philosophical and poetic. Philosophically, Khayyam defended rationalism against the rise of orthodoxy and made an attempt to revive the spirit of rationalism which was so prevalent in the first four centuries in Islam. Poetically, Khayyam represents a voice of protest against what he regards to be a fundamentally unjust world. Many people found in him a voice they needed to hear, and centuries after he had died, his works became a venue for those who were experiencing the same trials and tribulations as Khayyam had. |
## 1. Introduction
The uncertainty principle is certainly one of the most famous aspects
of quantum mechanics. It has often been regarded as the most
distinctive feature in which quantum mechanics differs from classical
theories of the physical world. Roughly speaking, the uncertainty
principle (for position and momentum) states that one cannot assign
exact simultaneous values to the position and momentum of a physical
system. Rather, these quantities can only be determined with some
characteristic "uncertainties" that cannot become
arbitrarily small simultaneously. But what is the exact meaning of
this principle, and indeed, is it really a principle of quantum
mechanics? (In his original work, Heisenberg only speaks of
uncertainty relations.) And, in particular, what does it mean to say
that a quantity is determined only up to some uncertainty? These are
the main questions we will explore in the following, focusing on the
views of Heisenberg and Bohr.
The notion of "uncertainty" occurs in several different
meanings in the physical literature. It may refer to a lack of
knowledge of a quantity by an observer, or to the experimental
inaccuracy with which a quantity is measured, or to some ambiguity in
the definition of a quantity, or to a statistical spread in an
ensemble of similarly prepared systems. Also, several different names
are used for such uncertainties: inaccuracy, spread, imprecision,
indefiniteness, indeterminateness, indeterminacy, latitude, etc. As we
shall see, even Heisenberg and Bohr did not decide on a single
terminology for quantum mechanical uncertainties. Forestalling a
discussion about which name is the most appropriate one in quantum
mechanics, we use the name "uncertainty principle" simply
because it is the most common one in the literature.
## 2. Heisenberg
### 2.1 Heisenberg's road to the uncertainty relations
Heisenberg introduced his famous relations in an article of 1927,
entitled *Ueber den anschaulichen Inhalt der quantentheoretischen
Kinematik und Mechanik*. A (partial) translation of this title is:
"On the *anschaulich* content of quantum theoretical
kinematics and mechanics". Here, the term *anschaulich*
is particularly notable. Apparently, it is one of those German words
that defy an unambiguous translation into other languages.
Heisenberg's title is translated as "*On the physical
content* ..." by Wheeler and Zurek (1983). His
collected works (Heisenberg 1984) translate it as "*On the
perceptible content* ...", while Cassidy's
biography of Heisenberg (Cassidy 1992), refers to the paper as
"*On the perceptual content* ...". Literally,
the closest translation of the term *anschaulich* is
"visualizable". But, as in most languages, words that make
reference to vision are not always intended literally. Seeing is
widely used as a metaphor for understanding, especially for immediate
understanding. Hence, *anschaulich* also means
"intelligible" or
"intuitive".[1]
Why was this issue of the *Anschaulichkeit* of quantum
mechanics such a prominent concern to Heisenberg? This question has
already been considered by a number of commentators (Jammer 1974;
Miller 1982; de Regt 1997; Beller 1999). For the answer, it turns out,
we must go back a little in time. In 1925 Heisenberg had developed the
first coherent mathematical formalism for quantum theory (Heisenberg
1925). His leading idea was that only those quantities that are in
principle observable should play a role in the theory, and that all
attempts to form a picture of what goes on inside the atom should be
avoided. In atomic physics the observational data were obtained from
spectroscopy and associated with atomic transitions. Thus, Heisenberg
was led to consider the "transition quantities" as the
basic ingredients of the theory. Max Born, later that year, realized
that the transition quantities obeyed the rules of matrix calculus, a
branch of mathematics that was not so well-known then as it is now. In
a famous series of papers Heisenberg, Born and Jordan developed this
idea into the matrix mechanics version of quantum theory.
Formally, matrix mechanics remains close to classical mechanics. The
central idea is that all physical quantities must be represented by
infinite self-adjoint matrices (later identified with operators on a
Hilbert space). It is postulated that the matrices \(\bQ\)
and \(\bP\) representing the canonical position and
momentum variables of a particle satisfy the so-called canonical
commutation rule
\[\tag{1}
\bQ\bP - \bP\bQ = i\hslash\]
where \(\hslash = h/2\pi\), \(h\) denotes
Planck's constant, and boldface type is used to represent
matrices (or operators). The new theory scored spectacular empirical
success by encompassing nearly all spectroscopic data known at the
time, especially after the concept of the electron spin was included
in the theoretical framework.
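The commutation rule (1) can be illustrated with finite truncations of the infinite matrices. The sketch below (an illustration, assuming harmonic-oscillator units with mass and frequency set to 1) builds \(\bQ\) and \(\bP\) from the standard ladder operator. A finite truncation necessarily fails in the last diagonal entry, since no pair of finite matrices can satisfy (1) exactly: the trace of the left side vanishes, while the trace of the right side does not.

```python
import numpy as np

hbar = 1.0
N = 8  # truncation dimension

# Lowering operator a in the oscillator basis: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
Q = np.sqrt(hbar / 2) * (a + a.T)       # position matrix (unit mass/frequency)
P = 1j * np.sqrt(hbar / 2) * (a.T - a)  # momentum matrix

C = Q @ P - P @ Q
# Away from the truncation edge, C agrees with i*hbar*I ...
print(np.allclose(C[:-1, :-1], 1j * hbar * np.eye(N - 1)))  # True
# ... but the last diagonal entry is forced off, consistent with
# tr(QP - PQ) = 0 for finite matrices.
print(C[-1, -1])
```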
It came as a big surprise, therefore, when one year later, Erwin
Schrodinger presented an alternative theory, that became known as
wave mechanics. Schrodinger assumed that an electron in an atom
could be represented as an oscillating charge cloud, evolving
continuously in space and time according to a wave equation. The
discrete frequencies in the atomic spectra were not due to
discontinuous transitions (quantum jumps) as in matrix mechanics, but
to a resonance phenomenon. Schrodinger also showed that the two
theories were equivalent.[2]
Even so, the two approaches differed greatly in interpretation and
spirit. Whereas Heisenberg eschewed the use of visualizable pictures,
and accepted discontinuous transitions as a primitive notion,
Schrodinger claimed as an advantage of his theory that it was
*anschaulich*. In Schrodinger's vocabulary, this
meant that the theory represented the observational data by means of
continuously evolving causal processes in space and time. He
considered this condition of *Anschaulichkeit* to be an
essential requirement on any acceptable physical theory.
Schrodinger was not alone in appreciating this aspect of his
theory. Many other leading physicists were attracted to wave mechanics
for the same reason. For a while, in 1926, before it emerged that wave
mechanics had serious problems of its own, Schrodinger's
approach seemed to gather more support in the physics community than
matrix mechanics.
Understandably, Heisenberg was unhappy about this development. In a
letter of 8 June 1926 to Pauli he confessed that "The more I
think about the physical part of Schrodinger's theory, the
more disgusting I find it", and: "What Schrodinger
writes about the *Anschaulichkeit* of his theory, ... I
consider *Mist*" (Pauli 1979: 328). Again, this last
German term is translated differently by various commentators: as
"junk" (Miller 1982) "rubbish" (Beller 1999)
"crap" (Cassidy 1992), "poppycock"
(Bacciagaluppi & Valentini 2009) and perhaps more literally, as
"bullshit" (Moore 1989; de Regt 1997). Nevertheless, in
published writings, Heisenberg voiced a more balanced opinion. In a
paper in *Die Naturwissenschaften* (1926) he summarized the
peculiar situation that the simultaneous development of two competing
theories had brought about. Although he argued that
Schrodinger's interpretation was untenable, he admitted
that matrix mechanics did not provide the *Anschaulichkeit*
which made wave mechanics so attractive. He concluded:
>
>
> to obtain a contradiction-free *anschaulich* interpretation, we
> still lack some essential feature in our image of the structure of
> matter.
>
>
>
The purpose of his 1927 paper was to provide exactly this lacking
feature.
### 2.2 Heisenberg's argument
Let us now look at the argument that led Heisenberg to his uncertainty
relations. He started by redefining the notion of
*Anschaulichkeit*. Whereas Schrodinger associated this
term with the provision of a causal space-time picture of the
phenomena, Heisenberg, by contrast, declared:
>
>
> We believe we have gained *anschaulich* understanding of a
> physical theory, if in all simple cases, we can grasp the experimental
> consequences qualitatively and see that the theory does not lead to
> any contradictions. (Heisenberg 1927: 172)
>
>
>
His goal was, of course, to show that, in this new sense of the word,
matrix mechanics could lay the same claim to *Anschaulichkeit*
as wave mechanics.
To do this, he adopted an operational assumption: terms like
"the position of a particle" have meaning only if one
specifies a suitable experiment by which "the position of a
particle" can be measured. We will call this assumption the
"measurement=meaning principle". In general, there is no
lack of such experiments, even in the domain of atomic physics.
However, experiments are never completely accurate. We should be
prepared to accept, therefore, that in general the meaning of these
quantities is also determined only up to some characteristic
inaccuracy.
As an example, he considered the measurement of the position of an
electron by a microscope. The accuracy of such a measurement is
limited by the wave length of the light illuminating the electron.
Thus, it is possible, in principle, to make such a position
measurement as accurate as one wishes, by using light of a very short
wave length, e.g., \(\gamma\)-rays. But for \(\gamma\)-rays, the
Compton effect cannot be ignored: the interaction of the electron and
the illuminating light should then be considered as a collision of at
least one photon with the electron. In such a collision, the electron
suffers a recoil which disturbs its momentum. Moreover, the shorter
the wave length, the larger is this change in momentum. Thus, at the
moment when the position of the particle is accurately known,
Heisenberg argued, its momentum cannot be accurately known:
> At the instant of time when the position is determined, that is, at
> the instant when the photon is scattered by the electron, the electron
> undergoes a discontinuous change in momentum. This change is the
> greater the smaller the wavelength of the light employed, i.e., the
> more exact the determination of the position. At the instant at which
> the position of the electron is known, its momentum therefore can be
> known only up to magnitudes which correspond to that discontinuous
> change; thus, the more precisely the position is determined, the less
> precisely the momentum is known, and conversely. (Heisenberg 1927:
> 174-5)
This is the first formulation of the uncertainty principle. In its
present form it is an epistemological principle, since it limits what
we can *know* about the electron. From "elementary
formulae of the Compton effect" Heisenberg estimated the
"imprecisions" to be of the order
\[\tag{2} \delta p\delta q \sim h\]
He continued: "In this circumstance we see the direct
*anschaulich* content of the relation \(\boldsymbol{QP} -
\boldsymbol{PQ} = i\hslash\)."
He went on to consider other experiments, designed to measure other
physical quantities and obtained analogous relations for time and
energy:
\[\tag{3} \delta t \delta E \sim h\]
and action \(J\) and angle \(w\)
\[\tag{4} \delta w \delta J \sim h\]
which
he saw as corresponding to the "well-known" relations
\[\tag{5}
\boldsymbol{tE} - \boldsymbol{Et} =
i\hslash \text{ or } \boldsymbol{wJ} - \boldsymbol{Jw} = i\hslash\]
However, these generalisations are not as straightforward as
Heisenberg suggested. In particular, the status of the time variable
in his several illustrations of relation
(3)
is not at all clear (Hilgevoord 2005; see also
Section 2.5).
Heisenberg summarized his findings in a general conclusion: all
concepts used in classical mechanics are also well-defined in the
realm of atomic processes. But, as a pure fact of experience (*rein
erfahrungsgemäß*), experiments that serve to provide
such a definition for one quantity are subject to particular
indeterminacies, obeying relations
(2)-(4)
which prohibit them from providing a simultaneous definition of two
canonically conjugate quantities. Note that in this formulation the
emphasis has slightly shifted: he now speaks of a limit on the
definition of concepts, i.e., not merely on what we can *know*,
but what we can meaningfully *say* about a particle. Of course,
this stronger formulation follows by application of the above
measurement=meaning principle: if there are, as Heisenberg claims, no
experiments that allow a simultaneous precise measurement of two
conjugate quantities, then these quantities are also not
simultaneously well-defined.
Heisenberg's paper has an interesting "Addition in
proof" mentioning critical remarks by Bohr, who saw the paper
only after it had been sent to the publisher. Among other things, Bohr
pointed out that in the microscope experiment it is not the change of
the momentum of the electron that is important, but rather the
circumstance that this change cannot be precisely determined in the
*same* experiment. An improved version of the argument,
responding to this objection, is given in Heisenberg's Chicago
lectures of 1930.
Here (Heisenberg 1930: 16), it is assumed that the electron is
illuminated by light of wavelength \(\lambda\) and that the scattered
light enters a microscope with aperture angle \(\varepsilon\).
According to the laws of classical optics, the accuracy of the
microscope depends on both the wave length and the aperture angle;
Abbe's criterion for its "resolving power", i.e.,
the size of the smallest discernible details, gives
\[\tag{6}
\delta q \sim \frac{\lambda}{\sin \varepsilon}.\]
On the other hand, the direction of a scattered photon, when it enters
the microscope, is unknown within the angle \(\varepsilon\), rendering
the momentum change of the electron uncertain by an amount
\[\tag{7}
\delta p \sim \frac{h \sin \varepsilon}{\lambda}\]
leading again to the result
(2).
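Note that in this estimate the wavelength and the aperture angle cancel when relations (6) and (7) are multiplied, so the product \(\delta q\, \delta p\) comes out of order \(h\) however the microscope is configured. The following minimal numerical sketch of this cancellation is our illustration, not part of Heisenberg's text (and the exact equality of the product with \(h\) is an artifact of the order-of-magnitude formulas):

```python
import math

H = 6.62607015e-34  # Planck's constant in J·s

def microscope_uncertainties(wavelength, aperture):
    """Order-of-magnitude estimates (6) and (7) for the gamma-ray
    microscope: position resolution via Abbe's criterion, and the
    momentum kick left unknown by the photon's unobserved direction."""
    delta_q = wavelength / math.sin(aperture)      # relation (6)
    delta_p = H * math.sin(aperture) / wavelength  # relation (7)
    return delta_q, delta_p

# The product delta_q * delta_p equals h for any choice of parameters:
for lam, eps in [(1e-12, math.radians(10)), (5e-11, math.radians(60))]:
    dq, dp = microscope_uncertainties(lam, eps)
    assert math.isclose(dq * dp, H)
```

Shortening the wavelength sharpens the position at the exact rate at which it enlarges the momentum kick, which is the point of Heisenberg's argument.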
Let us now analyse Heisenberg's argument in more detail. Note
that, even in this improved version, Heisenberg's argument is
incomplete. According to Heisenberg's "measurement=meaning
principle", one must also specify, in the given context, what
the meaning is of the phrase "momentum of the electron",
in order to make sense of the claim that this momentum is changed by
the position measurement. A solution to this problem can again be
found in the Chicago lectures (Heisenberg 1930: 15). Here, he assumes
that initially the momentum of the electron is precisely known, e.g.,
it has been measured in a previous experiment with an inaccuracy
\(\delta p\_{i}\), which may be arbitrarily small. Then, its position
is measured with inaccuracy \(\delta q\), and after this, its final
momentum is measured with an inaccuracy \(\delta p\_{f}\). All three
measurements can be performed with arbitrary precision. Thus, the
three quantities \(\delta p\_{i}, \delta q\), and \(\delta p\_{f}\) can
be made as small as one wishes. If we assume further that the initial
momentum has not changed until the position measurement, we can speak
of a definite momentum until the time of the position measurement.
Moreover we can give operational meaning to the idea that the momentum
is changed during the position measurement: the outcome of the second
momentum measurement (say \(p\_{f}\)) will generally differ from the
initial value \(p\_{i}\). In fact, one can also show that this change
is discontinuous, by varying the time between the three
measurements.
Let us try to see, adopting this more elaborate set-up, if we can
complete Heisenberg's argument. We have now been able to give
empirical meaning to the "change of momentum" of the
electron, \(p\_{f} - p\_{i}\). Heisenberg's argument claims that
the order of magnitude of this change is at least inversely
proportional to the inaccuracy of the position measurement:
\[\tag{8} \abs{p\_{f} - p\_{i}} \delta q \sim h\]
However, can we now draw the conclusion that the momentum is only
imprecisely defined? Certainly not. Before the position measurement,
its value was \(p\_{i}\), after the measurement it is \(p\_{f}\). One
might, perhaps, claim that the value at the very instant of the
position measurement is not yet defined, but we could simply settle
this by a convention, e.g., we might assign the mean value \((p\_{i} +
p\_{f})/2\) to the momentum at this instant. But then, the momentum is
precisely determined at all instants, and Heisenberg's
formulation of the uncertainty principle no longer follows. The above
attempt of completing Heisenberg's argument thus overshoots its
mark.
A solution to this problem can again be found in the Chicago Lectures.
Heisenberg admits that position and momentum can be known exactly. He
writes:
> If the velocity of the electron is at first known, and the position
> then exactly measured, the position of the electron for times previous
> to the position measurement may be calculated. For these past times,
> \(\delta p\delta q\) is smaller than the usual bound. (Heisenberg
> 1930: 15)
Indeed, Heisenberg says: "the uncertainty relation does not hold
for the past".
Apparently, when Heisenberg refers to the uncertainty or imprecision
of a quantity, he means that the value of this quantity cannot be
given *beforehand*. In the sequence of measurements we have
considered above, the uncertainty in the momentum after the
measurement of position has occurred, refers to the idea that the
value of the momentum is not fixed just *before* the final
momentum measurement takes place. Once this measurement is performed,
and reveals a value \(p\_{f}\), the uncertainty relation no longer
holds; these values then belong to the past. Clearly, then, Heisenberg
is concerned with *unpredictability*: the point is not that the
momentum of a particle changes, due to a position measurement, but
rather that it changes by an unpredictable amount. It is, however,
always possible to measure, and hence define, the size of this change
in a subsequent measurement of the final momentum with arbitrary
precision.
Although Heisenberg admits that we can consistently attribute values
of momentum and position to an electron in the past, he sees little
merit in such talk. He points out that these values can never be used
as initial conditions in a prediction about the future behavior of the
electron, or subjected to experimental verification. Whether or not we
grant them physical reality is, as he puts it, a matter of personal
taste. Heisenberg's own taste is, of course, to deny their
physical reality. For example, he writes,
> I believe that one can formulate the emergence of the classical
> "path" of a particle succinctly as follows: *the
> "path" comes into being only because we observe it*.
> (Heisenberg 1927: 185)
Apparently, in his view, a measurement does not only serve to give
meaning to a quantity, it *creates* a particular value for this
quantity. This may be called the "measurement=creation"
principle. It is an ontological principle, for it states what is
physically real.
This then leads to the following picture. First we measure the
momentum of the electron very accurately. By "measurement=
meaning", this entails that the term "the momentum of the
particle" is now well-defined. Moreover, by the
"measurement=creation" principle, we may say that this
momentum is physically real. Next, the position is measured with
inaccuracy \(\delta q\). At this instant, the position of the particle
becomes well-defined and, again, one can regard this as a physically
real attribute of the particle. However, the momentum has now changed
by an unpredictable amount, of order of magnitude
\(\abs{p\_{f} - p\_{i}} \sim h/\delta q\). The meaning and validity of
this claim can be verified by a subsequent momentum measurement.
The question is then what status we shall assign to the momentum of
the electron just before its final measurement. Is it real? According
to Heisenberg it is not. Before the final measurement, the best we can
attribute to the electron is some unsharp, or fuzzy momentum. These
terms are meant here in an ontological sense, characterizing a real
attribute of the electron.
### 2.3 The interpretation of Heisenberg's uncertainty relations
Heisenberg's relations were soon considered to be a cornerstone
of the Copenhagen interpretation of quantum mechanics. Just a few
months later, Kennard (1927) already called them the "essential
core" of the new theory. Taken together with Heisenberg's
contention that they provide the intuitive content of the theory and
their prominent role in later discussions on the Copenhagen
interpretation, a dominant view emerged in which the uncertainty
relations were regarded as a fundamental principle of the theory.
The interpretation of these relations has often been debated. Do
Heisenberg's relations express restrictions on the experiments
we can perform on quantum systems, and, therefore, restrictions on the
information we can gather about such systems; or do they express
restrictions on the meaning of the concepts we use to describe quantum
systems? Or else, are they restrictions of an ontological nature,
i.e., do they assert that a quantum system simply does not possess a
definite value for its position and momentum at the same time? The
difference between these interpretations is partly reflected in the
various names by which the relations are known, e.g., as
"inaccuracy relations", or: "uncertainty",
"indeterminacy" or "unsharpness relations".
The debate between these views has been addressed by many authors, but
it has never been settled completely. Let it suffice here to make only
two general observations.
First, it is clear that in Heisenberg's own view all the above
questions stand or fall together. Indeed, we have seen that he adopted
an operational "measurement=meaning" principle according
to which the meaningfulness of a physical quantity was equivalent to
the existence of an experiment purporting to measure that quantity.
Similarly, his "measurement=creation" principle allowed
him to attribute physical reality to such quantities. Hence,
Heisenberg's discussions moved rather freely and quickly from
talk about experimental inaccuracies to epistemological or ontological
issues and back again.
However, ontological questions seemed to be of somewhat less interest
to him. For example, there is a passage (Heisenberg 1927: 197), where
he discusses the idea that, behind our observational data, there might
still exist a hidden reality in which quantum systems have definite
values for position and momentum, unaffected by the uncertainty
relations. He emphatically dismisses this conception as an unfruitful
and meaningless speculation, because, as he says, the aim of physics
is only to describe observable data. Similarly, in the Chicago
Lectures, he warns that human language permits
the utterance of statements which have no empirical content, but
nevertheless produce a picture in our imagination. He notes,
> One should be especially careful in using the words
> "reality", "actually", etc., since these words
> very often lead to statements of the type just mentioned. (Heisenberg
> 1930: 11)
So, Heisenberg also endorsed an interpretation of his relations as
rejecting a reality in which particles have simultaneous definite
values for position and momentum.
The second observation is that although for Heisenberg experimental,
informational, epistemological and ontological formulations of his
relations were, so to say, just different sides of the same coin, this
is not so for those who do not share his operational principles or his
view on the task of physics. Alternative points of view, in which
e.g., the ontological reading of the uncertainty relations is denied,
are therefore still viable. The statement, often found in the
literature of the thirties, that Heisenberg had *proved* the
impossibility of associating a definite position and momentum to a
particle is certainly wrong. But the precise meaning one can
coherently attach to Heisenberg's relations depends rather
heavily on the interpretation one favors for quantum mechanics as a
whole. And because no agreement has been reached on this latter issue,
one cannot expect agreement on the meaning of the uncertainty
relations either.
### 2.4 Uncertainty relations or uncertainty principle?
Let us now move to another question about Heisenberg's
relations: do they express a *principle* of quantum theory?
Probably the first influential author to call these relations a
"principle" was Eddington, who, in his Gifford Lectures of
1928 referred to them as the "Principle of Indeterminacy".
In the English literature the name uncertainty principle became most
common. It is used both by Condon and Robertson in 1929, and also in
the English version of Heisenberg's Chicago Lectures (Heisenberg
1930), although, remarkably, nowhere in the original German version of
the same book (see also Cassidy 1998). Indeed, Heisenberg never seems
to have endorsed the name "principle" for his relations.
His favourite terminology was "inaccuracy relations"
(*Ungenauigkeitsrelationen*) or "indeterminacy
relations" (*Unbestimmtheitsrelationen*). We know only
one passage, in Heisenberg's own Gifford lectures, delivered in
1955-56 (Heisenberg 1958: 43), where he mentioned that his
relations "are usually called relations of uncertainty or
principle of indeterminacy". But this can well be read as his
yielding to common practice rather than his own preference.
But does the relation
(2)
qualify as a principle of quantum mechanics? Several authors,
foremost Karl Popper (1967), have contested this view. Popper argued
that the uncertainty relations cannot be granted the status of a
principle on the grounds that they are derivable from the theory,
whereas one cannot obtain the theory from the uncertainty relations.
(The argument being that one can never derive any equation, say, the
Schrödinger equation, or the commutation relation
(1),
from an inequality.)
Popper's argument is, of course, correct but we think it misses
the point. There are many statements in physical theories which are
called principles even though they are in fact derivable from other
statements in the theory in question. A more appropriate point of
departure for this issue is not the question of logical priority but
rather Einstein's distinction between "constructive
theories" and "principle theories".
Einstein proposed this famous classification in Einstein 1919.
Constructive theories are theories which postulate the existence of
simple entities behind the phenomena. They endeavour to reconstruct
the phenomena by framing hypotheses about these entities. Principle
theories, on the other hand, start from empirical principles, i.e.,
general statements of empirical regularities, employing no or only a
bare minimum of theoretical terms. The purpose is to build up the
theory from such principles. That is, one aims to show how these
empirical principles provide sufficient conditions for the
introduction of further theoretical concepts and structure.
The prime example of a theory of principle is thermodynamics. Here the
role of the empirical principles is played by the statements of the
impossibility of various kinds of perpetual motion machines. These are
regarded as expressions of brute empirical fact, providing the
appropriate conditions for the introduction of the concepts of energy
and entropy and their properties. (There is a lot to be said about the
tenability of this view, but that is not our topic here.)
Now obviously, once the formal thermodynamic theory is built, one can
also *derive* the impossibility of the various kinds of
perpetual motion. (They would violate the laws of energy conservation
and entropy increase.) But this derivation should not mislead one
into thinking that they were not principles of the theory after all.
The point is just that empirical principles are statements that do not
rely on the theoretical concepts (in this case entropy and energy) for
their meaning. They are interpretable independently of these concepts
and, further, their validity on the empirical level still provides the
physical content of the theory.
A similar example is provided by special relativity, another theory of
principle, which Einstein deliberately designed after the ideal of
thermodynamics. Here, the empirical principles are the light postulate
and the relativity principle. Again, once we have built up the modern
theoretical formalism of the theory (Minkowski space-time), it is
straightforward to prove the validity of these principles. But again
this does not count as an argument for claiming that they were not
principles after all. So the question whether the term
"principle" is justified for Heisenberg's relations,
should, in our view, be understood as the question whether they are
conceived of as empirical principles.
One can easily show that this idea was never far from
Heisenberg's intentions. We have already seen that Heisenberg
presented the relations as the result of a "pure fact of
experience". A few months after his 1927 paper, he wrote a
popular paper "*Über die Grundprincipien der
Quantenmechanik*" ("On the fundamental principles of
quantum mechanics") where he made the point even more clearly.
Here Heisenberg described his recent break-through in the
interpretation of the theory as follows: "It seems to be a
general law of nature that we cannot determine position and velocity
simultaneously with arbitrary accuracy". Now actually, and in
spite of its title, the paper does not identify or discuss any
"fundamental principle" of quantum mechanics. So, it must
have seemed obvious to his readers that he intended to claim that the
uncertainty relation was a fundamental principle, forced upon us as an
empirical law of nature, rather than a result derived from the
formalism of the theory.
This reading of Heisenberg's intentions is corroborated by the
fact that, even in his 1927 paper, applications of his relation
frequently present the conclusion as a matter of principle. For
example, he says "In a stationary state of an atom its phase is
*in principle* indeterminate" (Heisenberg 1927: 177,
[emphasis added]). Similarly, in a paper of 1928, he described the
content of his relations as:
> It has turned out that it is *in principle* impossible to know,
> to measure the position and velocity of a piece of matter with
> arbitrary accuracy. (Heisenberg 1984: 26, [emphasis added])
So, although Heisenberg did not originate the tradition of calling his
relations a principle, it is not implausible to attribute the view to
him that the uncertainty relations represent an empirical principle
that could serve as a foundation of quantum mechanics. In fact, his
1927 paper expressed this desire explicitly:
> Surely, one would like to be able to deduce the quantitative laws of
> quantum mechanics directly from their *anschaulich*
> foundations, that is, essentially, relation
> [(2)].
> (*ibid*: 196)
This is not to say that Heisenberg was successful in reaching this
goal, or that he did not express other opinions on other
occasions.
Let us conclude this section with three remarks. First, if the
uncertainty relation is to serve as an empirical principle, one might
well ask what its direct empirical support is. In Heisenberg's
analysis, no such support is mentioned. His arguments concerned
thought experiments in which the validity of the theory, at least at a
rudimentary level, is implicitly taken for granted. Jammer (1974: 82)
conducted a literature search for high precision experiments that
could seriously test the uncertainty relations and concluded they were
still scarce in 1974. Real experimental support for the uncertainty
relations in experiments in which the inaccuracies are close to the
quantum limit has come about only more recently (see Kaiser, Werner,
and George 1983; Uffink 1985; Nairz, Arndt, and Zeilinger 2002).
A second point is the question whether the theoretical structure or
the quantitative laws of quantum theory can indeed be derived on the
basis of the uncertainty principle, as Heisenberg wished. Serious
attempts to build up quantum theory as a full-fledged theory of
principle on the basis of the uncertainty principle have never been
carried out. Indeed, the most Heisenberg could and did claim in this
respect was that the uncertainty relations created "room"
(Heisenberg 1927: 180) or "freedom" (Heisenberg 1931: 43)
for the introduction of some non-classical mode of description of
experimental data, not that they uniquely lead to the formalism of
quantum mechanics. A serious proposal to approach quantum mechanics as
a theory of principle was provided more recently by Bub (2000) and
Chiribella & Spekkens (2016). But, remarkably, this proposal does
not use the uncertainty relation as one of its fundamental principles.
Third, it is remarkable that in his later years Heisenberg put a
somewhat different gloss on his relations. In his autobiography
*Der Teil und das Ganze* of 1969 he described how he had found
his relations inspired by a remark by Einstein that "it is the
theory which decides what one can observe"--thus giving
precedence to theory above experience, rather than the other way
around. Some years later he even admitted that his famous discussions
of thought experiments were actually trivial since
> ... if the process of observation itself is subject to the laws
> of quantum theory, it must be possible to represent its result in the
> mathematical scheme of this theory. (Heisenberg 1975: 6)
### 2.5 Mathematical elaboration
When Heisenberg introduced his relation, his argument was based only
on qualitative examples. He did not provide a general, exact
derivation of his
relations.[3]
Indeed, he did not even give a definition of the uncertainties
\(\delta q\), etc., occurring in these relations. Of course, this was
consistent with the announced goal of that paper, i.e., to provide
some qualitative understanding of quantum mechanics for simple
experiments.
The first mathematically exact formulation of the uncertainty
relations is due to Kennard. He proved in 1927 the theorem that for
all normalized state vectors \(\ket{\psi}\) the following
inequality holds:
\[\tag{9}
\Delta\_{\psi}\bP \Delta\_{\psi}\bQ \ge \hslash/2 \]
Here, \(\Delta\_{\psi}\bP\) and
\(\Delta\_{\psi}\bQ\) are standard deviations of position
and momentum in the state vector \(\ket{\psi}\), i.e.,
\[\tag{10}
\begin{align\*}
(\Delta\_{\psi}\bP)^2 &=
\expval{\bP^2}\_{\psi} - \expval{\bP}\_{\psi}^2 \\
(\Delta\_{\psi}\bQ)^2 &=
\expval{\bQ^2}\_{\psi} - \expval{\bQ}\_{\psi}^2
\end{align\*}\]
where \(\expval{\cdot}\_{\psi} = \expvalexp{\cdot}{\psi}\)
denotes the expectation value
in state \(\ket{\psi}\). Equivalently we can use the wave
function \(\psi(q)\) and its Fourier transform:
\[\begin{align\*}
\tag{11} \psi(q) &= \braket{q}{\psi} \\
\notag \tilde{\psi}(p) & = \braket{p}{\psi}
=\frac{1}{\sqrt{2\pi \hbar} }\int \! \! dq\, e^{-ipq/\hbar} \psi(q)
\end{align\*}\]
to write
\[\begin{align\*}
(\Delta\_\psi {\bQ})^2 & = \! \int\!\! dq\, \abs{\psi(q)}^2 q^2 -
\left(\int \!\!dq \, \abs{\psi(q)}^2 q \right)^2 \\
(\Delta\_\psi {\bP})^2 & = \! \int \!\!dp \, \abs{\tilde{\psi}(p)}^2 p^2 -
\left(\int\!\!dp \, \abs{\tilde{\psi}(p)}^2 p \right)^2
\end{align\*}\]
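Kennard's bound (9) is in fact saturated by Gaussian wave packets, for which \(\Delta\_{\psi}\bQ \Delta\_{\psi}\bP = \hslash/2\) exactly. The following numerical sketch (our illustration, in units with \(\hslash = 1\)) computes both standard deviations from a discretized version of the Fourier pair (11) and checks the bound:

```python
import numpy as np

# Numerical check of Kennard's inequality (9), in units with hbar = 1.
# A Gaussian wave packet of width sigma saturates the bound:
# Delta_q = sigma and Delta_p = 1/(2*sigma), so the product is 1/2.
sigma = 0.7
q = np.linspace(-15, 15, 1201)
step_q = q[1] - q[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-q**2 / (4 * sigma**2))

# Momentum-space wave function from a discretization of the
# Fourier transform (11), evaluated on a grid of p values.
p = np.linspace(-15, 15, 1201)
step_p = p[1] - p[0]
psi_t = np.exp(-1j * np.outer(p, q)) @ psi * step_q / np.sqrt(2 * np.pi)

def std_dev(x, density, step):
    """Standard deviation of x under the probability density |psi|^2."""
    mean = np.sum(density * x) * step
    mean_sq = np.sum(density * x**2) * step
    return np.sqrt(mean_sq - mean**2)

delta_q = std_dev(q, np.abs(psi)**2, step_q)
delta_p = std_dev(p, np.abs(psi_t)**2, step_p)
assert abs(delta_q * delta_p - 0.5) < 1e-3  # Kennard bound, saturated
```

For non-Gaussian states the product exceeds \(1/2\), in accordance with (9).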
The inequality
(9)
was generalized by Robertson (1929) who proved that for all
observables (self-adjoint operators) \(\bA\) and
\(\bB\):
\[\tag{12}
\Delta \_{\psi}\bA \Delta\_{\psi}\bB \ge
\frac{1}{2} \abs{\expval{[\bA,\bB]}\_{\psi}}
\]
where \([\bA,\bB] := \bA\bB - \bB\bA\) denotes the commutator.
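Robertson's inequality (12) can be checked directly in a finite-dimensional example. The sketch below (our illustration, not part of Robertson's paper) takes \(\bA = \sigma\_x\) and \(\bB = \sigma\_y\) for a spin-1/2 system, where \([\sigma\_x, \sigma\_y] = 2i\sigma\_z\), and verifies the inequality on random pure states:

```python
import numpy as np

# Check Robertson's inequality (12) for A = sigma_x, B = sigma_y on
# random pure spin-1/2 states; here [sigma_x, sigma_y] = 2i * sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def expval(op, v):
    """Expectation value <v|op|v> for a normalized state vector v."""
    return v.conj() @ op @ v

def spread(op, v):
    """Standard deviation of the observable op in the state v."""
    var = expval(op @ op, v).real - expval(op, v).real ** 2
    return np.sqrt(max(var, 0.0))  # clamp tiny negative rounding errors

rng = np.random.default_rng(42)
for _ in range(1000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    lhs = spread(sx, v) * spread(sy, v)
    rhs = 0.5 * abs(expval(sx @ sy - sy @ sx, v))
    assert lhs >= rhs - 1e-12
```

Note that the right-hand side of (12) is state-dependent: for a state with \(\expval{\sigma\_z}\_{\psi} = 0\) the bound is trivial, which already hints at the limitations of (12) discussed below.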
Since the above inequalities
(9)
and
(12)
have the virtue of being exact, in contrast to Heisenberg's
original semi-quantitative formulation, it is tempting to regard them
as the exact counterpart of Heisenberg's relations
(2)-(4).
Indeed, such was Heisenberg's own view. In his Chicago Lectures
(Heisenberg 1930: 15-19), he presented Kennard's
derivation of relation
(9)
and claimed that "this proof does not differ at all in
mathematical content" from his semi-quantitative argument, the
only difference being that now "the proof is carried through
exactly".
But it may be useful to point out that both in status and intended
role there is a difference between Kennard's inequality and
Heisenberg's previous formulation
(2).
The inequalities discussed here are not statements of empirical fact,
but theorems of the quantum mechanical formalism. As such, they
presuppose the validity of this formalism, and in particular the
commutation relation
(1),
rather than elucidating its intuitive content or creating
"room" or "freedom" for the validity of this
formalism. At best, one should see the above inequalities as showing
that the formalism is consistent with Heisenberg's empirical
principle.
This situation is similar to that arising in other theories of
principle where, as noted in
Section 2.4,
one often finds that, next to an empirical principle, the formalism
also provides a corresponding theorem. And similarly, this situation
should not, by itself, cast doubt on the question whether
Heisenberg's relation can be regarded as a principle of quantum
mechanics.
There is a second notable difference between
(2)
and
(9).
Heisenberg did not give a general definition for the
"uncertainties" \(\delta p\) and \(\delta q\). The most
definite remark he made about them was that they could be taken as
"something like the mean error". In the discussions of
thought experiments, he and Bohr would always quantify uncertainties
on a case-by-case basis by choosing some parameters which happened to
be relevant to the experiment at hand. By contrast, the inequalities
(9)
and
(12)
employ a single specific expression as a measure for
"uncertainty": the standard deviation. At the time, this
choice was not unnatural, given that this expression is well-known and
widely used in error theory and the description of statistical
fluctuations. However, there was very little or no discussion of
whether this choice was appropriate for a general formulation of the
uncertainty relations. A standard deviation reflects the spread or
expected fluctuations in a series of measurements of an observable in
a given state. It is not at all easy to connect this idea with the
concept of the "inaccuracy" of a measurement, such as the
resolving power of a microscope. In fact, even though Heisenberg had
taken Kennard's inequality as the precise formulation of the
uncertainty relation, he and Bohr never relied on standard deviations
in their many discussions of thought experiments, and indeed, it has
been shown (Uffink and Hilgevoord 1985; Hilgevoord and Uffink 1988)
that these discussions cannot be framed in terms of standard
deviations.
Another problem with the above elaboration is that the
"well-known" relations
(5)
are actually false if energy \(\boldsymbol{E}\) and action
\(\boldsymbol{J}\) are to be positive operators (Jordan 1927). In that
case, self-adjoint operators \(\boldsymbol{t}\) and \(\boldsymbol{w}\)
do not exist and inequalities analogous to
(9)
cannot be derived. Also, these inequalities do not hold for angle and
angular momentum (Uffink 1990). These obstacles have led to a quite
extensive literature on time-energy and angle-action uncertainty
relations (Busch 1990; Hilgevoord 1996, 1998, 2005; Muga et al. 2002;
Hilgevoord and Atkinson 2011; Pashby 2015).
## 3. Bohr
In spite of the fact that Heisenberg's and Bohr's views on
quantum mechanics are often lumped together as (part of) "the
Copenhagen interpretation", there is considerable difference
between their views on the uncertainty relations.
### 3.1 From wave-particle duality to complementarity
Long before the development of modern quantum mechanics, Bohr had been
particularly concerned with the problem of particle-wave duality,
i.e., the problem that experimental evidence on the behaviour of both
light and matter seemed to demand a wave picture in some cases, and a
particle picture in others. Yet these pictures are mutually exclusive.
Whereas a particle is always localized, the very definition of the
notions of wavelength and frequency requires an extension in space and
in time. Moreover, the classical particle picture is incompatible with
the characteristic phenomenon of interference.
His long struggle with wave-particle duality had prepared him for a
radical step when the dispute between matrix and wave mechanics broke
out in 1926-27. For the main contestants, Heisenberg and
Schrödinger, the issue at stake was which view could claim to
provide a single coherent and universal framework for the description
of the observational data. The choice was, essentially between a
description in terms of continuously evolving waves, or else one of
particles undergoing discontinuous quantum jumps. By contrast, Bohr
insisted that elements from both views were equally valid and equally
needed for an exhaustive description of the data. His way out of the
contradiction was to renounce the idea that the pictures refer, in a
literal one-to-one correspondence, to physical reality. Instead, the
applicability of these pictures was to become dependent on the
experimental context. This is the gist of the viewpoint he called
"complementarity".
Bohr first conceived the general outline of his complementarity
argument in early 1927, during a skiing holiday in Norway, at the same
time when Heisenberg wrote his uncertainty paper. When he returned to
Copenhagen and found Heisenberg's manuscript, they got into an
intense discussion. On the one hand, Bohr was quite enthusiastic about
Heisenberg's ideas which seemed to fit wonderfully with his own
thinking. Indeed, in his subsequent work, Bohr always presented the
uncertainty relations as the symbolic expression of his
complementarity viewpoint. On the other hand, he criticized Heisenberg
severely for his suggestion that these relations were due to
discontinuous changes occurring during a measurement process. Rather,
Bohr argued, their proper derivation should start from the
indispensability of both particle and wave concepts. He pointed out
that the uncertainties in the experiment did not exclusively arise
from the discontinuities but also from the fact that in the experiment
we need to take into account both the particle theory and the wave
theory. It is not so much the unknown disturbance which renders the
momentum of the electron uncertain but rather the fact that the
position and the momentum of the electron cannot be simultaneously
defined in this experiment (see the "Addition in Proof" to
Heisenberg's paper).
We shall not go too deeply into the matter of Bohr's
interpretation of quantum mechanics since we are mostly interested in
Bohr's view on the uncertainty principle. For a more detailed
discussion of the former we refer to Scheibe (1973), Folse (1985),
Honner (1987) and Murdoch (1987). It may be useful, however, to sketch
some of the main points. Central in Bohr's considerations is
the *language* we use in physics. No matter how abstract and
subtle the concepts of modern physics may be, they are essentially an
extension of our ordinary language and a means to communicate the
results of our experiments. These results, obtained under
well-defined experimental circumstances, are what Bohr calls the
"phenomena". A phenomenon is "the comprehension of
the effects observed under given experimental conditions" (Bohr
1939: 24), it is the resultant of a physical object, a measuring
apparatus and the interaction between them in a concrete experimental
situation. The essential difference between classical and quantum
physics is that in quantum physics the interaction between the object
and the apparatus cannot be made arbitrarily small; the interaction
must at least comprise one quantum. This is expressed by Bohr's
quantum postulate:
>
>
> [... the] essence [of the formulation of the quantum theory] may
> be expressed in the so-called quantum postulate, which attributes to
> any atomic process an essential discontinuity or rather individuality,
> completely foreign to classical theories and symbolized by
> Planck's quantum of action. (Bohr 1928: 580)
>
>
>
A phenomenon, therefore, is an indivisible whole and the result of a
measurement cannot be considered as an autonomous manifestation of the
object itself independently of the measurement context. The quantum
postulate forces upon us a new way of describing physical
phenomena:
>
>
> In this situation, we are faced with the necessity of a radical
> revision of the foundation for the description and explanation of
> physical phenomena. Here, it must above all be recognized that,
> however far quantum effects transcend the scope of classical physical
> analysis, the account of the experimental arrangement and the record
> of the observations must always be expressed in common language
> supplemented with the terminology of classical physics. (Bohr 1948:
> 313)
>
>
>
This is what Scheibe (1973) has called the "buffer
postulate" because it prevents the quantum from penetrating into
the classical description: A phenomenon must always be described in
classical terms; Planck's constant does not occur in this
description.
Together, the two postulates induce the following reasoning. In every
phenomenon the interaction between the object and the apparatus
comprises at least one quantum. But the description of the phenomenon
must use classical notions in which the quantum of action does not
occur. Hence, the interaction cannot be analysed in this description.
On the other hand, the classical character of the description allows
us to speak in terms of the object itself. Instead of saying:
"the interaction between a particle and a photographic plate has
resulted in a black spot in a certain place on the plate", we
are allowed to forgo mentioning the apparatus and say: "the
particle has been found in this place". The experimental
context, rather than changing or disturbing pre-existing properties of
the object, defines what can meaningfully be said about the
object.
Because the interaction between object and apparatus is left out in
our description of the phenomenon, we do not get the whole picture.
Yet, any attempt to extend our description by performing the
measurement of a different observable quantity of the object, or
indeed, on the measurement apparatus, produces a new phenomenon and we
are again confronted with the same situation. Because of the
unanalyzable interaction in both measurements, the two descriptions
cannot, generally, be united into a single picture. They are what Bohr
calls complementary descriptions:
>
>
> [the quantum of action]...forces us to adopt a new mode of
> description designated as complementary in the sense that any given
> application of classical concepts precludes the simultaneous use of
> other classical concepts which in a different connection are equally
> necessary for the elucidation of the phenomena. (Bohr 1929: 10)
>
>
>
The most important example of complementary descriptions is provided
by the measurements of the position and momentum of an object. If one
wants to measure the position of the object relative to a given
spatial frame of reference, the measuring instrument must be rigidly
fixed to the bodies which define the frame of reference. But this
implies the impossibility of investigating the exchange of momentum
between the object and the instrument and we are cut off from
obtaining any information about the momentum of the object. If, on the
other hand, one wants to measure the momentum of an object the
measuring instrument must be able to move relative to the spatial
reference frame. Bohr here assumes that a momentum measurement
involves the registration of the recoil of some movable part of the
instrument and the use of the law of momentum conservation. The
looseness of the part of the instrument with which the object
interacts entails that the instrument cannot serve to accurately
determine the position of the object. Since a measuring instrument
cannot be rigidly fixed to the spatial reference frame and, at the
same time, be movable relative to it, the experiments which serve to
precisely determine the position and the momentum of an object are
mutually exclusive. Of course, in itself, this is not at all typical
for quantum mechanics. But, because the interaction between object and
instrument during the measurement can neither be neglected nor
determined, the two measurements cannot be combined. This means that in
the description of the object one must choose between the assignment
of a precise position or of a precise momentum.
Similar considerations hold with respect to the measurement of time
and energy. Just as the spatial coordinate system must be fixed by
means of solid bodies so must the time coordinate be fixed by means of
unperturbed, synchronised clocks. But it is precisely this requirement
which prevents one from taking into account the exchange of energy
with the instrument if this is to serve its purpose. Conversely, any
conclusion about the object based on the conservation of energy
prevents following its development in time.
The conclusion is that in quantum mechanics we are confronted with a
complementarity between two descriptions which are united in the
classical mode of description: the space-time description (or
coordination) of a process and the description based on the
applicability of the dynamical conservation laws. The quantum forces
us to give up the classical mode of description (also called the
"causal" mode of description by
Bohr[4]):
it is impossible to form a classical picture of what is going on when
radiation interacts with matter as, e.g., in the Compton effect.
>
>
> Any arrangement suited to study the exchange of energy and momentum
> between the electron and the photon must involve a latitude in the
> space-time description sufficient for the definition of wave-number
> and frequency which enter in the relation [\(E = h\nu\) and \(p =
> h\sigma\)]. Conversely, any attempt of locating the collision between
> the photon and the electron more accurately would, on account of the
> unavoidable interaction with the fixed scales and clocks defining the
> space-time reference frame, exclude all closer account as regards the
> balance of momentum and energy. (Bohr 1949: 210)
>
>
>
A causal description of the process cannot be attained; we have to
content ourselves with complementary descriptions. "The
viewpoint of complementarity may be regarded", according to
Bohr, "as a rational generalization of the very ideal of
causality".
In addition to complementary descriptions Bohr also talks about
complementary phenomena and complementary quantities. Position and
momentum, as well as time and energy, are complementary
quantities.[5]
We have seen that Bohr's approach to quantum theory puts heavy
emphasis on the language used to communicate experimental
observations, which, in his opinion, must always remain classical. By
comparison, he seemed to put little value on arguments starting from
the mathematical formalism of quantum theory. This informal approach
is typical of all of Bohr's discussions on the meaning of
quantum mechanics. One might say that for Bohr the conceptual
clarification of the situation has primary importance while the
formalism is only a symbolic representation of this situation.
This is remarkable since, after all, it is the formalism which needs to
be interpreted. This neglect of the formalism is one of the reasons
why it is so difficult to get a clear understanding of Bohr's
interpretation of quantum mechanics and why it has aroused so much
controversy. We close this section by citing from an article of 1948
to show how Bohr conceived the role of the formalism of quantum
mechanics:
>
>
> The entire formalism is to be considered as a tool for deriving
> predictions, of definite or statistical character, as regards
> information obtainable under experimental conditions described in
> classical terms and specified by means of parameters entering into the
> algebraic or differential equations of which the matrices or the
> wave-functions, respectively, are solutions. These symbols themselves,
> as is indicated already by the use of imaginary numbers, are not
> susceptible to pictorial interpretation; and even derived real
> functions like densities and currents are only to be regarded as
> expressing the probabilities for the occurrence of individual events
> observable under well-defined experimental conditions. (Bohr 1948:
> 314)
>
>
>
### 3.2 Bohr's view on the uncertainty relations
In his Como lecture, published in 1928, Bohr gave his own version of a
derivation of the uncertainty relations between position and momentum
and between time and energy. He started from the relations
\[\tag{13} E = h\nu \text{ and } p = h/\lambda\]
which connect the notions of energy \(E\) and momentum
\(p\) from the particle picture with those of frequency \(\nu\) and
wavelength \(\lambda\) from the wave picture. He noticed that a wave
packet of limited extension in space and time can only be built up by
the superposition of a number of elementary waves with a large range
of wave numbers and frequencies. Denoting the spatial and temporal
extensions of the wave packet by \(\Delta x\) and \(\Delta t\), and
the extensions in the wave number \(\sigma := 1/\lambda\) and
frequency by \(\Delta \sigma\) and \(\Delta \nu\), it follows from
Fourier analysis that in the most favorable case \(\Delta x \Delta
\sigma \approx \Delta t \Delta \nu \approx 1\), and, using (13), one
obtains the relations
\[\tag{14} \Delta t \Delta E \approx \Delta x \Delta p \approx h\]
Note that \(\Delta x, \Delta \sigma\), etc., are not standard
deviations but unspecified measures of the size of a wave packet. (The
original text has equality signs instead of approximate equality
signs, but, since Bohr does not define the spreads exactly, the use of
approximate equality signs seems more in line with his intentions.
Moreover, Bohr himself used approximate equality signs in later
presentations.) These equations determine, according to Bohr:
>
>
> the highest possible accuracy in the definition of the energy and
> momentum of the individuals associated with the wave field. (Bohr
> 1928: 571).
>
He noted,
>
>
> This circumstance may be regarded as a simple symbolic expression of
> the complementary nature of the space-time description and the claims
> of causality.
> (*ibid*).[6]
>
>
>
>
We note a few points about Bohr's view on the uncertainty
relations. First of all, Bohr does not refer to *discontinuous
changes* in the relevant quantities during the measurement
process. Rather, he emphasizes the possibility of *defining*
these quantities. This view is markedly different from
Heisenberg's view. A draft version of the Como lecture is even
more explicit on the difference between Bohr and Heisenberg:
>
>
> These reciprocal uncertainty relations were given in a recent paper of
> Heisenberg as the expression of the statistical element which, due to
> the feature of discontinuity implied in the quantum postulate,
> characterizes any interpretation of observations by means of classical
> concepts. It must be remembered, however, that the uncertainty in
> question is not simply a consequence of a discontinuous change of
> energy and momentum say during an interaction between radiation and
> material particles employed in measuring the space-time coordinates of
> the individuals. According to the above considerations the question is
> rather that of the impossibility of defining rigorously such a change
> when the space-time coordination of the individuals is also
> considered. (Bohr 1985: 93)
>
>
>
Indeed, Bohr not only rejected Heisenberg's argument that these
relations are due to discontinuous disturbances implied by the act of
measuring, but also his view that the measurement process
*creates* a definite result:
>
>
> The unaccustomed features of the situation with which we are
> confronted in quantum theory necessitate the greatest caution as
> regard all questions of terminology. Speaking, as it is often done of
> disturbing a phenomenon by observation, or even of creating physical
> attributes to objects by measuring processes is liable to be
> confusing, since all such sentences imply a departure from conventions
> of basic language which even though it can be practical for the sake
> of brevity, can never be unambiguous. (Bohr 1939: 24)
>
>
>
Nor did he approve of an epistemological formulation or one in terms
of experimental inaccuracies:
>
>
> [...] a sentence like "we cannot know both the momentum and
> the position of an atomic object" raises at once questions as to
> the physical reality of two such attributes of the object, which can
> be answered only by referring to the mutual exclusive conditions for
> an unambiguous use of space-time concepts, on the one hand, and
> dynamical conservation laws on the other hand. (Bohr 1948: 315; also
> Bohr 1949: 211)
>
>
>
> It would in particular not be out of place in this connection to warn
> against a misunderstanding likely to arise when one tries to express
> the content of Heisenberg's well-known indeterminacy relation by
> such a statement as "the position and momentum of a particle
> cannot simultaneously be measured with arbitrary accuracy".
> According to such a formulation it would appear as though we had to do
> with some arbitrary renunciation of the measurement of either the one
> or the other of two well-defined attributes of the object, which would
> not preclude the possibility of a future theory taking both attributes
> into account on the lines of the classical physics. (Bohr 1937:
> 292)
>
>
>
Instead, Bohr always stressed that the uncertainty relations are first
and foremost an expression of complementarity. This may seem odd since
complementarity is a dichotomic relation between two types of
description whereas the uncertainty relations allow for intermediate
situations between two extremes. They "express" the
dichotomy in the sense that if we take the energy and momentum to be
perfectly well-defined, symbolically \(\Delta E = \Delta p = 0\), the
position and time variables are completely undefined, \(\Delta x =
\Delta t = \infty\), and vice versa. But they also allow intermediate
situations in which the mentioned uncertainties are all non-zero and
finite. This more positive aspect of the uncertainty relation is
mentioned in the Como lecture:
>
>
> At the same time, however, the general character of this relation
> makes it possible to a certain extent to reconcile the conservation
> laws with the space-time coordination of observations, the idea of a
> coincidence of well-defined events in space-time points being replaced
> by that of unsharply defined individuals within space-time regions.
> (Bohr 1928: 571)
>
>
>
However, Bohr never followed up on this suggestion that we might be
able to strike a compromise between the two mutually exclusive modes
of description in terms of unsharply defined quantities. Indeed, an
attempt to do so would take the formalism of quantum theory more
seriously than the concepts of classical language, and this step Bohr
refused to take. Instead, in his later writings he would be content
with stating that the uncertainty relations simply defy an unambiguous
interpretation in classical terms:
>
>
> These so-called indeterminacy relations explicitly bear out the
> limitation of causal analysis, but it is important to recognize that
> no unambiguous interpretation of such a relation can be given in words
> suited to describe a situation in which physical attributes are
> objectified in a classical way. (Bohr 1948: 315)
>
>
>
Finally, on a more formal level, we note that Bohr's derivation
does not rely on the commutation relations
(1)
and
(5),
but on Fourier analysis. These two approaches are equivalent as far
as the relationship between position and momentum is concerned, but
this is not so for time and energy since most physical systems do not
have a time operator. Indeed, in his discussion with Einstein (Bohr
1949), Bohr considered time as a simple classical variable. This even
holds for his famous discussion of the "clock-in-the-box"
thought-experiment where the time, as defined by the clock in the box,
is treated from the point of view of classical general relativity.
Thus, in an approach based on commutation relations, the
position-momentum and time-energy uncertainty relations are not on
equal footing, which is contrary to Bohr's approach in terms of
Fourier analysis. For more details see (Hilgevoord 1996 and 1998).
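Bohr's reciprocity relations (14) rest on a purely classical fact about wave packets: a packet confined to a small spatial region can only be built by superposing elementary waves with a wide spread of wave numbers. The following sketch illustrates this numerically (illustrative numbers only; the full width at half maximum is used as a stand-in for Bohr's unspecified measures of extension):

```python
import numpy as np

def packet_fwhm(delta_sigma, n_waves=200):
    """Superpose plane waves with wave numbers spread over delta_sigma
    around a carrier, and return the spatial full width at half maximum
    of the resulting packet."""
    sigmas = np.linspace(5.0 - delta_sigma / 2, 5.0 + delta_sigma / 2, n_waves)
    x = np.linspace(-5.0, 5.0, 4001)
    # packet amplitude: sum of elementary waves exp(2*pi*i*sigma*x)
    psi = np.exp(2j * np.pi * np.outer(x, sigmas)).sum(axis=1)
    intensity = np.abs(psi) ** 2
    dx = x[1] - x[0]
    return (intensity >= intensity.max() / 2).sum() * dx

# Doubling the spread of wave numbers halves the packet's spatial
# extent: the product Delta-x * Delta-sigma stays of order one.
w2 = packet_fwhm(2.0)
w4 = packet_fwhm(4.0)
print(2.0 * w2, 4.0 * w4)   # both of order one
```

Multiplying such a product by \(h\), via \(p = h\sigma\), reproduces the order of magnitude of \(\Delta x \Delta p \approx h\) in (14).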
## 4. The Minimal Interpretation
In the previous two sections we have seen how both Heisenberg and Bohr
attributed a far-reaching status to the uncertainty relations. They
both argued that these relations place fundamental limits on the
applicability of the usual classical concepts. Moreover, they both
believed that these limitations were inevitable and forced upon us.
However, we have also seen that they reached such conclusions by
starting from radical and controversial assumptions. This entails, of
course, that their radical conclusions remain unconvincing for those
who reject these assumptions. Indeed, the operationalist-positivist
viewpoint adopted by these authors has long since lost its appeal
among philosophers of physics.
So the question may be asked what alternative views of the uncertainty
relations are still viable. Of course, this problem is intimately
connected with that of the interpretation of the wave function, and
hence of quantum mechanics as a whole. Since there is no consensus
about the latter, one cannot expect consensus about the interpretation
of the uncertainty relations either. Here we only describe a point of
view, which we call the "minimal interpretation", that
seems to be shared by both the adherents of the Copenhagen
interpretation and of other views.
In quantum mechanics a system is supposed to be described by its wave
function, also called its quantum state or state vector. Given the
state vector \(\ket{\psi}\), one can derive probability
distributions for all the physical quantities pertaining to the
system, usually called its observables, such as its position,
momentum, angular momentum, energy, etc. The operational meaning of
these probability distributions is that they correspond to the
distribution of the values obtained for these quantities in a long
series of repetitions of the measurement. More precisely, one imagines
a great number of copies of the system under consideration, all
prepared in the same way. On each copy the momentum, say, is measured.
Generally, the outcomes of these measurements differ and a
distribution of outcomes is obtained. The theoretical momentum
distribution derived from the quantum state is supposed to coincide
with the hypothetical distribution of outcomes obtained in an infinite
series of repetitions of the momentum measurement. The same holds,
*mutatis mutandis*, for all the other physical quantities
pertaining to the system. Note that no simultaneous measurements of
two or more quantities are required in defining the operational
meaning of the probability distributions.
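This ensemble reading can be mimicked in a toy simulation: assume (hypothetically) a state whose position distribution \(\abs{\psi(q)}^2\) is a Gaussian of spread \(s\), and let each random draw play the role of one position measurement on a freshly prepared copy of the system:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
s = 2.0  # hypothetical spread of |psi(q)|^2
# each draw simulates a single position measurement on one copy
outcomes = rng.normal(loc=0.0, scale=s, size=200_000)

# the empirical spread of outcomes converges to the spread encoded in
# the quantum state; no simultaneous measurements are involved
print(outcomes.std())  # close to 2.0
```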
The uncertainty relations discussed above can be considered as
statements about the spreads of the probability distributions of the
several physical quantities arising from the same state. For example,
the uncertainty relation between the position and momentum of a system
may be understood as the statement that the position and momentum
distributions cannot both be arbitrarily narrow--in some sense of
the word "narrow"--in any quantum state. Inequality
(9)
is an example of such a relation in which the standard deviation is
employed as a measure of spread. From this characterization of
uncertainty relations follows that a more detailed interpretation of
the quantum state than the one given in the previous paragraph is not
required to study uncertainty relations as such. In particular, a
further ontological or linguistic interpretation of the notion of
uncertainty, as limits on the applicability of our concepts given by
Heisenberg or Bohr, need not be supposed.
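For instance, relation (9) can be checked numerically for particular states. A sketch with \(\hbar = 1\), obtaining the momentum distribution from the position wave function by a discrete Fourier transform:

```python
import numpy as np

HBAR = 1.0

def spreads(psi, x):
    """Standard deviations of position and momentum for wave function psi."""
    dx = x[1] - x[0]
    psi = psi / np.sqrt((np.abs(psi) ** 2).sum() * dx)
    rho_q = np.abs(psi) ** 2 * dx
    mq = (rho_q * x).sum()
    dq = np.sqrt((rho_q * x ** 2).sum() - mq ** 2)
    # momentum-space amplitude via FFT; p = 2*pi*hbar*f
    phi = np.fft.fftshift(np.fft.fft(psi)) * dx
    p = 2 * np.pi * HBAR * np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))
    rho_p = np.abs(phi) ** 2
    rho_p = rho_p / rho_p.sum()
    mp = (rho_p * p).sum()
    dp = np.sqrt((rho_p * p ** 2).sum() - mp ** 2)
    return dq, dp

x = np.linspace(-40.0, 40.0, 8192, endpoint=False)
dq1, dp1 = spreads(np.exp(-x ** 2 / 4), x)          # Gaussian: saturates (9)
dq2, dp2 = spreads(np.exp(-(x - 6) ** 2 / 4)
                   + np.exp(-(x + 6) ** 2 / 4), x)  # superposition: exceeds it
print(dq1 * dp1, dq2 * dp2)   # ~0.5, and a much larger value
```

The Gaussian attains the bound \(\hbar/2\); a superposition of two displaced Gaussians has a large position spread at essentially unchanged momentum spread, so its product lies well above the bound.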
Of course, this minimal interpretation leaves the question open
whether it makes sense to attribute precise values of position and
momentum to an individual system. Some interpretations of quantum
mechanics, e.g., those of Heisenberg and Bohr, deny this; while
others, e.g., the interpretation of de Broglie and Bohm insist that
each individual system has a definite position and momentum (see the
entry on
Bohmian mechanics).
The only requirement is that, as an empirical fact, it is not
possible to prepare pure ensembles in which all systems have the same
values for these quantities, or ensembles in which the spreads are
smaller than allowed by quantum theory. Although interpretations of
quantum mechanics, in which each system has a definite value for its
position and momentum are still viable, this is not to say that they
are without strange features of their own; they do not imply a return
to classical physics.
We end with a few remarks on this minimal interpretation. First, it
may be noted that the minimal interpretation of the uncertainty
relations is little more than filling in the empirical meaning of
inequality
(9).
As such, this view shares many of the limitations we have noted above
about this inequality. Indeed, it is not straightforward to relate the
spread in a statistical distribution of measurement results with the
*inaccuracy* of this measurement, such as, e.g., the resolving
power of a microscope, or of a *disturbance* of the system by
the measurement. Moreover, the minimal interpretation does not address
the question whether one can make *simultaneous* accurate
measurements of position and momentum.
As a matter of fact, one can show that the standard formalism of
quantum mechanics does not allow such simultaneous measurements. But
this is not a consequence of relation
(9).
Rather, it follows from the fact that this formalism simply does not
contain any observable that would accomplish such a task. The
extension of this formalism that allows observables to be represented
by positive-operator-valued measures or POVM's, does allow the
formal introduction of observables describing joint measurements (see
also
section 6.1).
But even here, for the case of position and momentum, one finds that
such measurements have to be "unsharp", which entails that
they cannot be regarded as simultaneous accurate measurements.
If one feels that statements about inaccuracy of measurement, or the
possibility of simultaneous measurements, belong to any satisfactory
formulation of the uncertainty principle, one will need to look for
other formulations of the uncertainty principle. Some candidates for
such formulations will be discussed in
Section 6.
First, however, we will look at formulations of the uncertainty
principle that stay firmly within the minimal interpretation, and
differ from
(9)
only by using measures of uncertainty other than the standard
deviation.
## 5. Alternative measures of uncertainty
While the standard deviation is the most well-known quantitative
measure for uncertainty or the spread in the probability distribution,
it is not the only one, and indeed it has distinctive drawbacks that
other such measures may lack. For example, in the definition of the
standard deviations
(11)
one can see that the probability density function
\(\abs{\psi(q)}^2\) is weighted by a quadratic factor \(q^2\) that
puts increasing emphasis on its tails. Therefore, the value of
\(\Delta\_\psi \bQ\) will depend predominantly on how this
density behaves at the tails: if these fall off very quickly, e.g.,
like a Gaussian, it will be small, but if the tails drop off only
slowly the standard deviation may be very large, even when most of the
probability is concentrated in a small interval.
The upshot of this objection is that having a lower bound on the
product of the standard deviations of position and momentum, as the
Heisenberg-Kennard uncertainty relation
(9)
gives, does not by itself rule out a state where *both* the
probability densities for position and momentum are extremely
concentrated, in the sense of having more than \((1- \epsilon)\) of
their probability concentrated in a region of size smaller than
\(\delta\), for any choice of \(\epsilon, \delta >0\). This means,
in our view, that relation
(9)
actually fails to express what most physicists would take to be the
very core idea of the uncertainty principle.
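A toy probability distribution (made-up numbers) makes the contrast explicit: put 99% of the mass at the origin and half a percent far out on either side; the standard deviation is then driven entirely by the remote tails, while the bulk of the distribution has essentially zero width:

```python
import numpy as np

# made-up three-point distribution: 99% of the probability at q = 0,
# 0.5% at each of q = -100 and q = +100
q = np.array([-100.0, 0.0, 100.0])
prob = np.array([0.005, 0.99, 0.005])

mean = (prob * q).sum()                        # 0.0
std = np.sqrt((prob * (q - mean) ** 2).sum())  # 10.0
# the smallest interval carrying 99% of the probability has width 0,
# yet the standard deviation is 10
print(std)
```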
One way to deal with this objection is to consider alternative
measures to quantify the spread or uncertainty associated with a
probability density. Here we discuss two such proposals.
### 5.1 Landau-Pollak uncertainty relations
The most straightforward alternative is to pick some value \(\alpha\)
close to one, say \(\alpha = 0.9\), and ask for the width of the
smallest interval that supports the fraction \(\alpha\) of the total
probability distribution in position and similarly for momentum:
\[\begin{align\*}
\tag{15} W\_{\alpha}(\bQ, \psi) &:=
\inf\_{I} \left\{ \abs{I} : \int\_I {\abs{\psi(q)}}^2 dq \geq \alpha \right\} \\
\notag W\_{\beta}(\bP,\psi) &:= \inf\_{I} \left\{ \abs{I} : \int\_I \abs{\tilde\psi(p)}^2 dp \geq \beta \right\}
\end{align\*}\]
In a previous work (Uffink and Hilgevoord 1985) we called such
measures *bulk widths*, because they indicate how concentrated
the "bulk" (i.e., fraction \(\alpha\) or \(\beta\)) of the
probability distribution is. Landau and Pollak (1961) obtained an
uncertainty relation in terms of these bulk widths.
\[\begin{align\*}
\tag{16} W\_\alpha (\bQ, \psi) W\_\beta (\bP, \psi) &\geq
2\pi \hbar \left( \alpha \beta - \sqrt{(1-\alpha)(1-\beta)} \right)^2 \\
\notag &\mbox{if } \alpha+ \beta \geq 1/2
\end{align\*}\label{LP}\]
This Landau-Pollak inequality shows that if the choices of \(\alpha,
\beta\) are not too low, there is a state-independent lower bound on
the product of the bulk widths of the position and momentum
distribution for any quantum state.
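A sketch of such a check for a Gaussian state with \(\hbar = 1\) (a Gaussian of position spread \(s\) has momentum spread \(1/2s\); the bulk width is found by scanning a discretized density for the smallest interval carrying the required probability):

```python
import numpy as np

def bulk_width(x, density, alpha):
    """Width of the smallest interval holding at least fraction alpha of
    the probability (two-pointer scan over a discretized density)."""
    dx = x[1] - x[0]
    cum = np.concatenate([[0.0], np.cumsum(density) * dx])
    best, j = np.inf, 0
    for i in range(len(cum)):
        j = max(j, i)
        while j < len(cum) - 1 and cum[j] - cum[i] < alpha:
            j += 1
        if cum[j] - cum[i] >= alpha:
            best = min(best, (j - i) * dx)
    return best

def gaussian(x, spread):
    return np.exp(-x ** 2 / (2 * spread ** 2)) / np.sqrt(2 * np.pi * spread ** 2)

alpha = beta = 0.9
s = 1.0                       # position spread; momentum spread is 1/(2s)
x = np.linspace(-10.0, 10.0, 4001)
p = np.linspace(-5.0, 5.0, 4001)
Wq = bulk_width(x, gaussian(x, s), alpha)
Wp = bulk_width(p, gaussian(p, 1 / (2 * s)), beta)

lhs = Wq * Wp
rhs = 2 * np.pi * (alpha * beta - np.sqrt((1 - alpha) * (1 - beta))) ** 2
print(lhs, rhs)               # lhs ~5.4 comfortably exceeds rhs ~3.17
```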
Note that bulk widths are not so sensitive to the behavior of the
tails of the distributions and, therefore, the Landau-Pollak
inequality is immune to the objection above. Thus, this inequality
expresses constraints on quantum mechanical states not contained in
relation
(9).
Further, by the well-known Bienayme-Chebyshev inequality, one
has
\[\begin{align\*}
\tag{17} W\_\alpha (\bQ,\psi) &\leq \frac{2}{\sqrt {1- \alpha}} \Delta\_\psi \bQ \\
\notag W\_\beta (\bP, \psi) &\leq \frac{2}{\sqrt {1- \beta}} \Delta\_\psi \bP
\end{align\*}\]
so that inequality
(16)
implies (by choosing \(\alpha,\beta\) optimally) that \( \Delta\_\psi
\bQ \Delta\_\psi \bP \geq 0.12 \hbar \). This,
obviously, is not the best lower bound for the product of standard
deviations, but the important point is here that the Landau-Pollak
inequality
(16)
in terms of bulk widths *implies* the existence of a lower
bound on the product of standard deviations, while conversely, the
Heisenberg-Kennard inequality
(9)
does *not imply* any bound on the product of bulk widths. A
generalization of this approach to non-commuting observables in a
finite-dimensional Hilbert space is discussed in Uffink 1990.
### 5.2 Entropic uncertainty relations
Another approach to express the uncertainty principle is to use
entropic measures of uncertainty. The foremost example of these is the
*Shannon entropy*, which for the position and momentum
distribution of a given state vector \(\ket{\psi}\) may be
defined as:
\[\begin{align\*}
\tag{18} H(\bQ, \psi) &:= -\int \abs{\psi(q)}^2 \ln \abs{\psi(q)}^2 dq \\
\notag H(\bP, \psi) &:= -\int \abs{\tilde{\psi}(p)}^2 \ln \abs{\tilde{\psi}(p)}^2 dp
\end{align\*}\]
One can then show (see Beckner 1975;
Bialynicki-Birula and Mycielski 1975) that
\[\tag{19} H(\bQ, \psi) + H(\bP,\psi) \geq \ln (e \pi \hbar) \]
A nice feature of this entropic uncertainty relation is that it
provides a strict improvement of the Heisenberg-Kennard relation. That
is to say, one can show (independently of quantum theory) that for any
probability density function \(p(x)\)
\[\tag{20} -\int\! p(x) \ln p(x) dx \leq \ln (\sqrt{2 \pi e} \Delta x )\]
Applying this to the inequality
(19)
we get:
\[\tag{21}
\frac{\hbar}{2} \leq (2\pi e)^{-1} \exp (H(\bQ, \psi) + H(\bP,\psi)) \leq
\Delta\_\psi \bQ\Delta\_\psi \bP
\]
showing that the entropic uncertainty relation
implies the Heisenberg-Kennard uncertainty relation. A drawback of
this relation is that it does not completely evade the objection
mentioned above, (i.e., these entropic measures of uncertainty can
become as large as one pleases while \(1-\epsilon\) of the probability
in the distribution is concentrated on a very small interval), but the
examples needed to show this are admittedly more far-fetched.
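For a Gaussian state the bound (19) is in fact saturated, whatever the spread. A numerical sketch with \(\hbar = 1\), so the right-hand side is \(\ln(e\pi) \approx 2.1447\), using the known fact that a Gaussian of position spread \(s\) has a Gaussian momentum distribution of spread \(1/2s\):

```python
import numpy as np

def shannon_entropy(x, density):
    """Differential Shannon entropy -integral of rho*ln(rho) (Riemann sum)."""
    dx = x[1] - x[0]
    return -(density * np.log(density)).sum() * dx

def gaussian(x, spread):
    return np.exp(-x ** 2 / (2 * spread ** 2)) / np.sqrt(2 * np.pi * spread ** 2)

s = 1.5                                  # position spread (hbar = 1)
q = np.linspace(-20.0, 20.0, 20001)
p = np.linspace(-8.0, 8.0, 20001)
Hq = shannon_entropy(q, gaussian(q, s))
Hp = shannon_entropy(p, gaussian(p, 1 / (2 * s)))

# the sum is independent of s and equals the bound ln(e*pi)
print(Hq + Hp, np.log(np.e * np.pi))
```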
For non-commuting observables in a \(n\)-dimensional Hilbert space,
one can similarly define an entropic uncertainty in the probability
distribution \(\abs{\braket{a\_i}{\psi}}^2\) for a
given state \(\ket{\psi}\) and a complete set of
eigenstates \(\ket{a\_i}\), \( (i= 1, \ldots n)\), of the
observable \(\bA\):
\[\tag{22}
H(\bA ,\psi) := -\sum\_{i=1}^n \abs{\braket{a\_i}{\psi}}^2 \ln \abs{\braket{a\_i}{\psi}}^2
\]
and \(H(\bB,\psi)\) similarly in terms of the probability distribution
\(\abs{\braket{b\_j}{\psi}}^2\) for a complete set
of eigenstates \(\ket{b\_j}\), (\(j =1, \ldots, n\)) of
observable \(\bB\). Then we obtain the uncertainty relation
(Maassen and Uffink 1988):
\[\tag{23}
H(\bA, \psi) + H(\bB, \psi) \geq -2 \ln \max\_{i,j} \abs{\braket{a\_i}{b\_j}},
\]
which was further generalized and improved
by Frank and Lieb (2012). The most important advantage of these
relations is that, in contrast to Robertson's inequality
(12),
the lower bound is a positive constant, independent of the state.
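A minimal check of this finite-dimensional relation for a single qubit, sketched in Python (the choice \(\bA = \sigma\_z\), \(\bB = \sigma\_x\) and the sampled states are illustrative assumptions, not from the text): the eigenbases of \(\sigma\_z\) and \(\sigma\_x\) are mutually unbiased, so the state-independent lower bound \(-2 \ln \max\_{i,j} \abs{\braket{a\_i}{b\_j}}\) works out to \(\ln 2\).

```python
import numpy as np

# Single-qubit illustration of the Maassen-Uffink relation: A = sigma_z,
# B = sigma_x.  Their eigenbases are mutually unbiased, so the lower bound
# -2 ln max |<a_i|b_j>| equals ln 2.  The sampled states are arbitrary.

def shannon(probs):
    """Shannon entropy (natural log) of a probability vector."""
    probs = probs[probs > 0]
    return -np.sum(probs * np.log(probs))

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

bound = -2 * np.log(max(abs(np.vdot(a, b))
                        for a in z_basis for b in x_basis))

worst = np.inf
for theta in np.linspace(0.0, np.pi, 101):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    H_A = shannon(np.array([abs(np.vdot(a, psi))**2 for a in z_basis]))
    H_B = shannon(np.array([abs(np.vdot(b, psi))**2 for b in x_basis]))
    worst = min(worst, H_A + H_B)

print(worst, bound)   # the minimum over these states equals the bound ln 2
```

At the extremes \(\theta = 0\) and \(\theta = \pi\) the state is a \(\sigma\_z\) eigenstate, so \(H(\bA,\psi) = 0\) and the whole burden \(\ln 2\) falls on \(H(\bB,\psi)\): the bound is tight.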
## 6. Uncertainty relations for inaccuracy and disturbance
Both the standard deviation and the alternative measures of
uncertainty considered in the previous subsection (and many more that
we have not mentioned!) are designed to indicate the width or spread
of a single given probability distribution. Applied to quantum
mechanics, where the probability distributions for position and
momentum are obtained from a given quantum state vector, one can use
them to formulate uncertainty relations that characterize the spread
in those distributions for any given state. The resulting inequalities
then express limitations on what state-preparations quantum mechanics
allows. They are thus expressions of what may be called a
*preparation uncertainty principle*:
>
>
> In quantum mechanics, it is impossible to prepare any system in a
> state \(\ket{\psi}\) such that its position and momentum
> are both precisely predictable, in the sense of having both the
> expected spread in a measurement of position and the expected spread
> in a momentum measurement arbitrarily small.
>
>
>
The relations
(9,
16,
19)
all belong to this category; the only difference being that they
employ different measures of spread: viz. the standard deviation, the
bulk width or the Shannon entropy.
Note that in this formulation, there is no reference to simultaneous
or joint measurements, nor to any notion of accuracy like the
resolving power of the measurement instrument, nor to the issue of how
much the system in the state that is being measured is
*disturbed* by this measurement. This section is devoted to
attempts that go beyond the mold of this preparation uncertainty
principle.
### 6.1 The recent debate on error-disturbance relations
We have seen that in 1927 Heisenberg argued that the measurement of
(say) position must necessarily disturb the conjugate variable (i.e.,
momentum) by an amount that is inversely proportional to the
inaccuracy of measurement of the former. We have also seen that this
idea was not maintained in Kennard's uncertainty relation
(9),
a relation that was embraced by Heisenberg (1930) and most textbooks.
A rather natural question thus arises whether there are further
inequalities in quantum mechanics that would address
Heisenberg's original thinking more directly, i.e., that do deal
with how much one variable is disturbed by the accurate measurement of
another. That is, we will look at attempts that would establish a
claim which may be called a *measurement uncertainty
principle*.
>
>
> In quantum mechanics, there is no measurement procedure by which one
> can accurately measure the position of a system without disturbing its
> momentum, in the sense that some measure of inaccuracy in position and
> some measure of the disturbance of momentum of the system by the
> measurement cannot both be arbitrarily small.
>
>
>
This formulation of the uncertainty principle has always remained
controversial. Uncertainty relations that would express this alleged
principle are often called "error-disturbance" relations
or "noise-disturbance" relations. We will look at two
recent proposals to search for such relations: Ozawa (2003) and Busch,
Lahti, and Werner (2013).
In Ozawa's approach, we assume that a system \(\cal S\) of
interest in state \(\ket{\psi}\) is coupled to a
measurement device \(\cal M\) in state \(\ket{\chi}\), and
their interaction is governed by a unitary operator \(U\). On the
Hilbert space of the joint system the observable \(\bQ\) of
the system \(\cal S\) we are interested in is represented by
\[\tag{24} \bQ\_{\rm in} = \bQ \otimes \mathbb{1}\]
The measurement interaction will allow us to perform an
(inaccurate) measurement of this quantity by reading off a pointer
observable \(\boldsymbol{Q'}\) of the measurement device after the
interaction. Hence this inaccurate observable may be represented as
\[\tag{25}
\bQ'\_{\rm out} = U^\dagger( \mathbb{1} \otimes \bQ') U\]
The measure of noise in the measurement of \(\bQ\) is then
chosen as:
\[\tag{26}
\epsilon\_\psi(\bQ) := \expval{(\bQ'\_{\rm out} - \bQ\_{\rm in})^2}\_{\psi \otimes \chi}^{1/2}\]
A comparison of the initial momentum
\(\bP\_{\rm in} = \bP \otimes \mathbb{1}\)
and the final momentum after the measurement
\(\bP\_{\rm out} = U^\dagger (\bP \otimes \mathbb{1})U\) is made by choosing
a measure of the disturbance of \(\bP\) by the measurement procedure:
\[\tag{27}
\eta\_\psi(\bP):= \expval{(\bP\_{\rm in} - \bP\_{\rm out})^2}\_{\psi\otimes\chi}^{1/2}
\]
Ozawa
obtained an inequality involving those two measures, which, however,
is more involved than previous uncertainty relations. For our
purposes, however, the important point is that Ozawa showed that the
product \(\epsilon\_\psi (\bQ) \eta\_\psi (\bP)\)
has no positive lower bound. His conclusion from this was that
Heisenberg's noise-disturbance relation is violated.
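The following toy computation (a finite-dimensional sketch of our own devising, not Ozawa's continuous-variable argument) illustrates how the product \(\epsilon\_\psi (\bQ) \eta\_\psi (\bP)\) can vanish: a qubit "position" \(\bQ = \sigma\_z\) is measured perfectly by coupling a pointer qubit via a CNOT, giving \(\epsilon = 0\), while the "momentum" \(\bP = \sigma\_x\) is disturbed by a finite amount.

```python
import numpy as np

# Toy (qubit) instance of Ozawa's definitions (26)-(27): system observable
# Q = sigma_z, pointer read out after a CNOT coupling U, "momentum"
# P = sigma_x.  An illustrative finite-dimensional analogue only.

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)  # control: system, target: pointer

def expval(op, state):
    """Expectation value <state| op |state> for a real vector state."""
    return np.real(state.conj() @ op @ state)

psi = np.array([0.6, 0.8])   # arbitrary system state
chi = np.array([1.0, 0.0])   # pointer ready state |0>
joint = np.kron(psi, chi)

Q_in   = np.kron(Z, I2)                      # Q x 1, as in (24)
Qp_out = CNOT.T @ np.kron(I2, Z) @ CNOT      # U^dag (1 x Q') U, as in (25)
P_in   = np.kron(X, I2)
P_out  = CNOT.T @ P_in @ CNOT

eps = np.sqrt(expval((Qp_out - Q_in) @ (Qp_out - Q_in), joint))  # noise (26)
eta = np.sqrt(expval((P_in - P_out) @ (P_in - P_out), joint))    # disturbance (27)

print(eps, eta, eps * eta)
```

Here \(\epsilon\) vanishes exactly while \(\eta = \sqrt 2\), so the product has no positive lower bound in this toy model, mirroring Ozawa's conclusion for position and momentum.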
Yet, whether Ozawa's result indeed succeeds in formulating
Heisenberg's qualitative discussion of disturbance and accuracy
in the microscope example has come under dispute. See Busch, Lahti and
Werner (2013, and 2014 (Other Internet Resources)), and Ozawa (2013,
Other Internet Resources).
An objection raised in this dispute is that a quantity like
\(\expval{(\bQ'\_{\rm out} - \bQ\_{\rm in})^2}^{1/2}\) tells us very
little about how good the observable \({\bQ'}\_{\rm out}\) can stand in
as an inaccurate measurement of \(\bQ\_{\rm in}\). The main point to
observe here is that these operators generally do not commute, and
that measurements of \(\bQ'\_{\rm out}\), of \(\bQ\_{\rm in}\) and of
their difference will require altogether three different measurement
contexts. To require that \(\epsilon\_\psi(\bQ)\) vanishes, for
example, means only that the state prepared belongs to the linear
subspace corresponding to the zero eigenvalue of the operator
\(\bQ'\_{\rm out} - {\bQ}\_{\rm in}\), and therefore that
\(\expval{\bQ'\_{\rm out}}\_\psi = \expval{\bQ\_{\rm in}}\_\psi\),
but this does not preclude that the probability
distribution of \(\bQ'\_{\rm out}\) in state \(\psi\) might be wildly
different from that of \(\bQ\_{\rm in}\). But then no one would think
of \(\bQ'\_{\rm out}\) as an accurate measurement of \(\bQ\_{\rm in}\)
so that the definition of \(\epsilon\_\psi(\bQ)\) does not express what
it is supposed to express. A similar objection can also be raised
against \(\eta\_\psi (\bP)\).
Another observation is that Ozawa's conclusion that there is no
lower bound for his error-disturbance product is not at all
surprising. That is, even without probing the system by a measurement
apparatus, one can show that such a lower bound does not exist. If the
initial state of a system is prepared at time \(t=0\) as a Gaussian
quasi-monochromatic wave packet with \(\expval{\bQ\_0}\_\psi =0\) and
evolves freely, we can use a time-of-flight
measurement to learn about its later position. Ehrenfest's
theorem tells us: \(\expval{\bQ\_t}\_\psi = \frac{t}{m} \expval{\bP}\_\psi\).
Hence, as an approximative measurement of the position
\(\bQ\_t\), one could propose the observable
\(\bQ'\_t = \frac{t}{m}\bP\). It is known that
under the stated conditions (and with \(m\) and \(t\) large) this
approximation holds very well, i.e., we do not only have
\(\expval{\bQ'\_t -\bQ\_t}\_\psi =0\), but also
\(\expval{(\bQ'\_t -\bQ\_t)^2} \approx 0\),
as nearly as we please. But since \(\bQ'\_t\) is just the
momentum multiplied by a constant, its measurement will obviously not
disturb the momentum of the system. In other words, for this example,
one has \(\epsilon\_\psi (\bQ)\) as small as we please with
zero disturbance of the momentum. Therefore, any hopes that there
could be a positive lower bound for the product \(\epsilon\_\psi
(\bQ) \eta\_\psi (\bP)\) seem to be dashed, even
with the simplest of measurement schemes, i.e. a free evolution.
Ozawa's results do not show that Heisenberg's analysis of
the microscope argument was wrong. Rather, they throw doubt on the
appropriateness of the definitions he used to formalize
Heisenberg's informal argument.
An entirely different analysis of the problem of substantiating a
measurement uncertainty relation was offered by Busch, Lahti, and
Werner (2013). These authors consider a measurement device \(\cal M\)
that makes a joint unsharp measurement of both position and momentum.
To describe such joint unsharp measurements, they employ the extended
modern formalism that characterizes observables not by self-adjoint
operators but by positive-operator-valued measures (POVM's). In
the present case, this means that the measurement procedure is
characterized by a collection of positive operators, \(M(p,q)\), where
the pair \(p,q\) represent the outcome variables of the measurement,
with
\[\tag{28}
M(p,q) \geq 0, \iint \! dp dq \, M(p,q) =\mathbb{1} .\]
The two marginals of this POVM,
\[\tag{29}
\begin{align\*}
M\_1(q) &= \int\! dp M(p,q)\\
M\_2(p) &= \int\! dq M(p,q)
\end{align\*}
\]
are also POVM's in
their own right and represent the unsharp position \(Q'\) and unsharp
momentum \(P'\) observables respectively. (Note that these do
*not* refer to a self-adjoint operator!)
For a system prepared in a state \(\ket{\psi}\), the joint
probability density of obtaining outcomes \((p,q)\) in the joint
unsharp measurement
(28)
is then
\[\tag{30}
\rho(p,q) := \expvalexp{M(p,q)}{\psi},\]
while the marginals of this joint probability
distribution give the distributions for \(Q'\) and \(P'\).
\[\begin{align\*}
\tag{31} \mu'(q) &:= \int \! dp \, \rho(p,q) = \expvalexp{M\_1(q)}{\psi} \\
\notag \nu'(p) &:= \int \! dq \, \rho(p,q) = \expvalexp{M\_2(p)}{\psi}
\end{align\*}\]
Since a joint sharp measurement of position and momentum
is impossible in quantum mechanics, these marginal distributions
(31)
obtained from \(M\) will differ from that of ideal measurements of
\(\bQ\) and of \(\bP\) on the system of interest
in state \(\ket{\psi}\). However, one can indicate how
much these marginals deviate from separate exact position and momentum
measurements on the state \(\ket{\psi}\) by a pairwise
comparison of
(31)
to the exact distributions
\[\begin{align\*}
\tag{32} \mu(q) &:= \abs{\braket{q}{\psi}}^2 \\
\notag \nu(p) &:= \abs{\braket{p}{\psi}}^2
\end{align\*}\]
In order to do so, BLW propose a distance function \(D\) between
probability distributions, such that \(D(\mu, \mu')\) tells us how
close the marginal position distribution \(\mu'(q)\) for the unsharp
position \(Q'\) is to the exact distribution \(\mu(q)\) in a sharp
position measurement, and likewise, \(D(\nu ,\nu')\) tells us how
close the marginal momentum distribution \(\nu'(p)\) for \(P'\) is to
the exact momentum distribution \(\nu(p)\).
The distance they chose is the Wasserstein-2 distance, a.k.a. (a
variation on) the earth-mover's distance.
*Definition* (Wasserstein-2 distance)
Let \(\mu(x)\) and \(\mu'(y)\) be any two probability distributions on
the real line, and \(\gamma(x,y)\) any joint probability distribution
that has \(\mu'\) and \(\mu\) as its marginals. Then:
\[\tag{33}
D(\mu, \mu') := \inf\_\gamma \left(\iint (x-y)^2 \gamma (x,y) dx dy \right)^{1/2}\]
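On the real line the infimum in (33) has a closed form in terms of quantile functions: \(D(\mu, \mu')^2 = \int\_0^1 (F^{-1}(t) - G^{-1}(t))^2 dt\), where \(F\) and \(G\) are the cumulative distribution functions of \(\mu\) and \(\mu'\). A numerical sketch in Python (the Gaussian test densities and grid sizes are arbitrary illustrative choices), checked against the exact Gaussian value \(\sqrt{(m\_1 - m\_2)^2 + (s\_1 - s\_2)^2}\):

```python
import numpy as np

def wasserstein2(pdf1, pdf2, grid, n_t=200000):
    """Wasserstein-2 distance between two densities on a common grid,
    via the one-dimensional quantile-function formula for the infimum in (33)."""
    dx = grid[1] - grid[0]
    F = np.cumsum(pdf1(grid)) * dx          # cumulative distribution of mu
    G = np.cumsum(pdf2(grid)) * dx          # cumulative distribution of mu'
    t = np.linspace(1e-6, 1 - 1e-6, n_t)    # quantile levels (tails trimmed)
    Finv = np.interp(t, F, grid)            # F^{-1}(t)
    Ginv = np.interp(t, G, grid)            # G^{-1}(t)
    dt = t[1] - t[0]
    return np.sqrt(np.sum((Finv - Ginv)**2) * dt)

def gaussian_pdf(m, s):
    """Density of a Gaussian with mean m and standard deviation s."""
    return lambda x: np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-30.0, 30.0, 400001)
d = wasserstein2(gaussian_pdf(0.0, 1.0), gaussian_pdf(2.0, 1.5), x)

# For two Gaussians the exact value is sqrt((m1-m2)^2 + (s1-s2)^2)
print(d, np.sqrt(2.0**2 + 0.5**2))
```

The quantile formula realizes the optimal coupling \(\gamma\) in (33) for one-dimensional distributions, so no explicit optimization over joint distributions is needed.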
Applying this definition to the case at hand, i.e. pairwise to the
quantum mechanical distributions \(\mu'(q)\) and \(\mu(q)\) and to
\(\nu'(p)\) and \(\nu(p)\) in
(31)
and
(32),
BLW's final step is to take a supremum over all possible input
states \(\ket{\psi}\) to obtain
\[\tag{34}
\begin{align\*}
\Delta(Q, Q') & = \sup\_{\ket{\psi}} D(\mu, \mu') \\
\Delta(P, P') & = \sup\_{\ket{\psi}} D(\nu, \nu')
\end{align\*}
\]
From these
definitions, they obtain
\[\tag{35}\Delta(Q, Q') \Delta (P,P') \geq \frac{\hbar}{2}\]
Arguing that \(\Delta(Q, Q')\) provides a sensible measure for the
inaccuracy or noise about position, and \(\Delta(P, P')\) for the
disturbance of momentum by any such joint unsharp measurement, the
authors conclude, in contrast to Ozawa's analysis, that an
error-disturbance uncertainty relation does hold, which they take as
"a remarkable vindication of Heisenberg's
intuitions" in the microscope thought experiment.
Comparing the two, there are a few positive remarks to make
about the Busch-Lahti-Werner (BLW) approach. First of all, by focusing
on the distance
(33)
this approach is comparing entire *probability distributions*
rather than just the expectations of operator differences. When this
distance is very small, one is justified to conclude that the
distribution has changed very little under the measurement procedure.
This brings us closer to the conclusion that the error or disturbance
introduced is small. Secondly, by introducing a supremum over all
states to obtain \(\Delta( Q, Q')\), it follows that when this latter
expression is small, the measured distribution \(\mu'\) differs only
little from the exact distribution \(\mu\) *whatever the state of
the system* is. As the authors argue, this means that
\(\Delta(Q,Q')\) can be seen as a figure-of-merit of the measurement
device alone, and in this sense analogous to the resolving power of a
microscope.
But we also think there is an undesirable feature of the BLW approach.
This is due to the supremum over states appearing *twice*, both
in \(\Delta(Q,Q')\) and in \(\Delta(P,P')\). This feature, we argue,
deprives their result of practical applicability.
To elucidate: In concrete applications, one would prepare a system in
some state (not exactly known) and perform a given joint measurement
\(M\) of \(Q'\) and \(P'\). If it is given that, say, \(\Delta(Q,Q')\)
is very small, one can safely infer that \(Q\) has been measured with
small inaccuracy, since this guarantees that the measured position
distribution differs very little from what an exact position
measurement would give, regardless of the state of the system. Now,
one would like to be able to infer that in this case the disturbance
of the momentum (the deviation of \(P'\) from \(P\)) must be considerable
*for the state prepared*. But the BLW relation only gives us:
\[
\Delta(P, P') = \sup\_{\ket{\psi}} D(\nu, \nu') \geq
\frac{\hbar}{2 \Delta(Q, Q')}
\]
and this
does not imply anything for the state in question! Thus, the BLW
uncertainty relation does not rule out that for some states it might
be possible to perform a joint measurement in which both \(D(\mu,
\mu')\) and \(D(\nu, \nu')\) are very small, and in this sense have
negligible error and disturbance. It seems premature to say that this
vindicates Heisenberg's intuitions.
Summing up, we emphasize that there is no contradiction between the
BLW analysis and the Ozawa analysis: where Ozawa claims that the
product of two quantities might for some states be less than the usual
limit, BLW show that the product of two different quantities will always
satisfy this limit. The dispute is not about mathematical validity, but
about how well these quantities capture Heisenberg's
qualitative considerations. The present authors feel that, in this
dispute, Ozawa's analysis fails to be convincing. On the other
hand, we also think that the BLW uncertainty relation is not
satisfactory. Also, we would like to remark that both protagonists
employ measures that are akin to standard deviations in being very
sensitive to the tail behavior of probability distributions, and thus
face a similar objection as raised in
section 5.
The final word in this dispute on whether a measurement uncertainty
principle holds has not been reached, in our view.
## 1. A First Look: Duhem, Quine, and the Problems of Underdetermination
The scope of the epistemic challenge arising from underdetermination
is not limited only to scientific contexts, as is perhaps most readily
seen in classical skeptical attacks on our knowledge more generally.
René Descartes ([1640] 1996) famously sought to doubt any and
all of his beliefs which could possibly be doubted by supposing that
there might be an all-powerful Evil Demon who sought to deceive him.
Descartes' challenge appeals to a form of underdetermination: he
notes that all our sensory experiences would be just the same if they
were caused by this Evil Demon rather than an external world of rocks
and trees. Likewise, Nelson Goodman's (1955) "New Riddle
of Induction" turns on the idea that the evidence we now have
could equally well be taken to support inductive generalizations quite
different from those we usually take them to support, with radically
different consequences for the course of future
events.[1]
Nonetheless, underdetermination has been thought to arise in
scientific contexts in a variety of distinctive and important ways
that do not simply recreate such radically skeptical
possibilities.
The traditional locus classicus for underdetermination in science is
the work of Pierre Duhem, a French physicist as well as historian and
philosopher of science who lived at the turn of the 20th
Century. In *The Aim and Structure of Physical Theory*, Duhem
formulated various problems of scientific underdetermination in an
especially perspicuous and compelling way, although he himself argued
that these problems posed serious challenges only to our efforts to
confirm theories in physics. In the middle of the 20th
Century, W. V. O. Quine suggested that such challenges applied not
only to the confirmation of all types of scientific theories, but to
all knowledge claims whatsoever. His incorporation and further
development of these problems as part of a general account of human
knowledge was one of the most significant developments of
20th Century epistemology. But neither Duhem nor Quine was
careful to systematically distinguish a number of fundamentally
distinct lines of thinking about underdetermination found in their
work. Perhaps the most important division is between what we might
call holist and contrastive forms of underdetermination. Holist
underdetermination (Section 2 below) arises whenever our inability to
test hypotheses in isolation leaves us underdetermined in our
*response* to a failed prediction or some other piece of
disconfirming evidence. That is, because hypotheses have empirical
implications or consequences only when *conjoined* with other
hypotheses and/or background beliefs about the world, a failed
prediction or falsified empirical consequence typically leaves open to
us the possibility of blaming and abandoning one of these background
beliefs and/or 'auxiliary' hypotheses rather than the
hypothesis we set out to test in the first place. But contrastive
underdetermination (Section 3 below) involves the quite different
possibility that for any body of evidence confirming a theory, there
might well be *other theories* that are also well confirmed by
that very same body of evidence. Moreover, claims of
underdetermination of either of these two fundamental varieties can
vary in strength and character in any number of ways: one might, for
example, suggest that the choice between two theories or two ways of
revising our beliefs is *transiently* underdetermined simply by
the evidence we happen to have *at present*, or instead
*permanently* underdetermined by *all possible*
evidence. Indeed, the variety of forms of underdetermination that
confront scientific inquiry, and the causes and consequences claimed
for these different varieties, are sufficiently heterogeneous that
attempts to address "the" problem of underdetermination
for scientific theories have often engendered considerable confusion
and argumentation at
cross-purposes.[2]
Moreover, such differences in the character and strength of various
claims of underdetermination turn out to be crucial for resolving the
significance of the issue. For example, in some recently influential
discussions of science it has become commonplace for scholars in a
wide variety of academic disciplines to make casual appeal to claims
of underdetermination (especially of the holist variety) to support
the idea that *something* besides evidence must step in to do
the further work of determining beliefs and/or changes of belief in
scientific contexts. Perhaps most prominent among these are adherents
of the sociology of scientific knowledge (SSK) movement and some
feminist science critics who have argued that it is typically the
sociopolitical interests and/or pursuit of power and influence by
scientists themselves which play a crucial and even decisive role in
determining which beliefs are actually abandoned or retained in
response to conflicting evidence. As we will see in Section 2.2,
however, Larry Laudan has argued that such claims depend upon simple
equivocation between comparatively weak or trivial forms of
underdetermination and the far stronger varieties from which they draw
radical conclusions about the limited reach of evidence and
rationality in science. In the sections that follow we will seek to
clearly characterize and distinguish the various forms of both holist
and contrastive underdetermination that have been suggested to arise
in scientific contexts (noting some important connections between them
along the way), assess the strength and significance of the
heterogeneous argumentative considerations offered in support of and
against them, and consider just which forms of underdetermination pose
genuinely consequential challenges for scientific inquiry.
## 2. Holist Underdetermination and Challenges to Scientific Rationality
### 2.1 Holist Underdetermination: The Very Idea
Duhem's original case for holist underdetermination is, perhaps
unsurprisingly, intimately bound up with his arguments for
confirmational holism: the claim that theories or hypotheses can only
be subjected to empirical testing in groups or collections, never in
isolation. The idea here is that a single scientific hypothesis does
not by itself carry any implications about what we should expect to
observe in nature; rather, we can derive empirical consequences from
an hypothesis only when it is conjoined with many other beliefs and
hypotheses, including background assumptions about the world, beliefs
about how measuring instruments operate, further hypotheses about the
interactions between objects in the original hypothesis' field
of study and the surrounding environment, etc. For this reason, Duhem
argues, when an empirical prediction is falsified, we do not know
whether the fault lies with the hypothesis we originally sought to
test or with one of the many other beliefs and hypotheses that were
also needed and used to generate the failed prediction:
>
> A physicist decides to demonstrate the inaccuracy of a proposition; in
> order to deduce from this proposition the prediction of a phenomenon
> and institute the experiment which is to show whether this phenomenon
> is or is not produced, in order to interpret the results of this
> experiment and establish that the predicted phenomenon is not
> produced, he does not confine himself to making use of the proposition
> in question; he makes use also of a whole group of theories accepted
> by him as beyond dispute. The prediction of the phenomenon, whose
> nonproduction is to cut off debate, does not derive from the
> proposition challenged if taken by itself, but from the proposition at
> issue joined to that whole group of theories; if the predicted
> phenomenon is not produced, the only thing the experiment teaches us
> is that among the propositions used to predict the phenomenon and to
> establish whether it would be produced, there is at least one error;
> but where this error lies is just what it does not tell us. ([1914]
> 1954, 185)
>
Duhem supports this claim with examples from physical theory,
including one designed to illustrate a celebrated further consequence
he draws from it. Holist underdetermination ensures, Duhem argues,
that there cannot be any such thing as a "crucial
experiment" (experimentum crucis): a single experiment whose
outcome is predicted differently by two competing theories and which
therefore serves to definitively confirm one and refute the other. For
example, in a famous scientific episode intended to resolve the
ongoing heated battle between partisans of the theory that light
consists of a stream of particles moving at extremely high speed (the
particle or "emission" theory of light) and defenders of
the view that light consists instead of waves propagated through a
mechanical medium (the wave theory), the physicist Foucault designed
an apparatus to test the two theories' competing claims about
the speed of transmission of light in different media: the particle
theory implied that light would travel faster in water than in air,
while the wave theory implied that the reverse was true. Although the
outcome of the experiment was taken to show that light travels faster
in air than in
water,[3]
Duhem argues that this is far from a refutation of the hypothesis of
emission:
>
> in fact, what the experiment declares stained with error is the whole
> group of propositions accepted by Newton, and after him by Laplace and
> Biot, that is, the whole theory from which we deduce the relation
> between the index of refraction and the velocity of light in various
> media. But in condemning this system as a whole by declaring it
> stained with error, the experiment does not tell us where the error
> lies. Is it in the fundamental hypothesis that light consists in
> projectiles thrown out with great speed by luminous bodies? Is it in
> some other assumption concerning the actions experienced by light
> corpuscles due to the media in which they move? We know nothing about
> that. It would be rash to believe, as Arago seems to have thought,
> that Foucault's experiment condemns once and for all the very
> hypothesis of emission, i.e., the assimilation of a ray of light to a
> swarm of projectiles. If physicists had attached some value to this
> task, they would undoubtedly have succeeded in founding on this
> assumption a system of optics that would agree with Foucault's
> experiment. ([1914] 1954, p. 187)
>
From this and similar examples, Duhem drew the quite general
conclusion that our response to the experimental or observational
falsification of a theory is always underdetermined in this way. When
the world does not live up to our theory-grounded expectations, we
must give up *something*, but because no hypothesis is ever
tested in isolation, no experiment ever tells us precisely which
belief it is that we must revise or give up as mistaken:
>
> In sum, the physicist can never subject an isolated hypothesis to
> experimental test, but only a whole group of hypotheses; when the
> experiment is in disagreement with his predictions, what he learns is
> that at least one of the hypotheses constituting this group is
> unacceptable and ought to be modified; but the experiment does not
> designate which one should be changed. ([1914] 1954, 187)
>
The predicament Duhem here identifies is no mere rainy day puzzle for
philosophers of science, but a methodological challenge that
consistently arises in the course of scientific practice itself. It is
simply not true that for practical purposes and in concrete contexts
there is always just a single revision of our beliefs in response to
disconfirming evidence that is obviously correct, most promising, or
even most sensible to pursue. To cite a classic example, when
Newton's celestial mechanics failed to correctly predict the
orbit of Uranus, scientists at the time did not simply abandon the
theory but protected it from refutation by instead challenging the
background assumption that the solar system contained only seven
planets. This strategy bore fruit, notwithstanding the falsity of
Newton's theory: by calculating the location of a hypothetical
eighth planet influencing the orbit of Uranus, the astronomers Adams
and Leverrier were eventually led to discover Neptune in 1846. But the
very same strategy failed when used to try to explain the advance of
the perihelion in Mercury's orbit by postulating the existence
of "Vulcan", an additional planet located between Mercury
and the sun, and this phenomenon would resist satisfactory explanation
until the arrival of Einstein's theory of general relativity. So
it seems that Duhem was right to suggest not only that hypotheses must
be tested as a group or a collection, but also that it is by no means
a foregone conclusion which member of such a collection should be
abandoned or revised in response to a failed empirical test or false
implication. Indeed, this very example illustrates why Duhem's
own rather hopeful appeal to the 'good sense' of
scientists themselves in deciding when a given hypothesis ought to be
abandoned promises very little if any relief from the general
predicament of holist underdetermination.
As noted above, Duhem thought that the sort of underdetermination he
had described presented a challenge only for theoretical physics, but
subsequent thinking in the philosophy of science has tended to the
opinion that the predicament Duhem described applies to theoretical
testing in all fields of scientific inquiry. We cannot, for example,
test an hypothesis about the phenotypic effects of a particular gene
without presupposing a host of further beliefs about what genes are,
how they work, how we can identify them, what other genes are doing,
and so on. In the middle of the 20th Century, W. V. O.
Quine would incorporate confirmational holism and its associated
concerns about underdetermination into an extraordinarily influential
account of knowledge in general. As part of his famous (1951) critique
of the widely accepted distinction between truths that are analytic
(true by definition, or as a matter of logic or language alone) and
those that are synthetic (true in virtue of some contingent fact about
the way the world is), Quine argued that *all* of the beliefs
we hold at any given time are linked in an interconnected web, which
encounters our sensory experience only at its periphery:
>
> The totality of our so-called knowledge or beliefs, from the most
> casual matters of geography and history to the profoundest laws of
> atomic physics or even of pure mathematics and logic, is a man-made
> fabric which impinges on experience only along the edges. Or, to
> change the figure, total science is like a field of force whose
> boundary conditions are experience. A conflict with experience at the
> periphery occasions readjustments in the interior of the field. But
> the total field is so underdetermined by its boundary conditions,
> experience, that there is much latitude of choice as to what
> statements to reevaluate in the light of any single contrary
> experience. No particular experiences are linked with any particular
> statements in the interior of the field, except indirectly through
> considerations of equilibrium affecting the field as a whole. (1951,
> 42-3)
>
One consequence of this general picture of human knowledge is that all
of our beliefs are tested against experience only as a corporate
body--or as Quine sometimes puts it, "The unit of empirical
significance is the whole of science" (1951, p.
42).[4]
A mismatch between what the web as a whole leads us to expect and the
sensory experiences we actually receive will occasion *some*
revision in our beliefs, but which revision we should make to bring
the web as a whole back into conformity with our experiences is
radically underdetermined by those experiences themselves. To use
Quine's example, if we find our belief that there are brick
houses on Elm Street to be in conflict with our immediate sense
experience, we might revise our beliefs about the houses on Elm
Street, but we might equally well modify instead our beliefs about the
appearance of brick, our present location, or innumerable other
beliefs constituting the interconnected web. In a pinch, we might even
decide that our present sensory experiences are simply hallucinations!
Quine's point was not that any of these are particularly likely
or reasonable responses to recalcitrant experiences (indeed, an
important part of his account is the explanation of why they are not),
but instead that they would serve equally well to bring the web of
belief as a whole in line with our experience. And if the belief that
there are brick houses on Elm Street were sufficiently important to
us, Quine insisted, it would be possible for us to preserve it
"come what may" (in the way of empirical evidence), by
making sufficiently radical adjustments elsewhere in the web of
belief. It is in principle open to us, Quine argued, to revise even
beliefs about logic, mathematics, or the meanings of our terms in
response to recalcitrant experience; it might seem a tempting solution
to certain persistent difficulties in quantum mechanics, for example,
to reject classical logic's law of the excluded middle (allowing
physical particles to both have and not have some determinate
classical physical property like position or momentum at a given
time). The only test of a belief, Quine argued, is whether it fits
into a web of connected beliefs that accords well with our experience
*on the whole*. And because this leaves any and all beliefs in
that web at least potentially subject to revision on the basis of our
ongoing sense experience or empirical evidence, he insisted, there
simply are no beliefs that are analytic in the originally supposed
sense of immune to revision in light of experience, or true no matter
what the world is like.
Quine recognized, of course, that many of the logically possible ways
of revising our beliefs in response to recalcitrant experiences that
remain open to us nonetheless strike us as ad hoc, perfectly
ridiculous, or worse. He argues (1955) that our actual revisions of
the web of belief seek to maximize the theoretical
"virtues" of simplicity, familiarity, scope, and
fecundity, along with conformity to experience, and elsewhere suggests
that we typically seek to resolve conflicts between the web of our
beliefs and our sensory experiences in accordance with a principle of
"conservatism", that is, by making the smallest possible
number of changes to the least central beliefs that will
suffice to reconcile the web with experience. That is, Quine
recognized that when we encounter recalcitrant experience we are not
usually at a loss to decide which of our beliefs to revise in response
to it, but he claimed that this is simply because we are strongly
disposed as a matter of fundamental psychology to prefer whatever
revision requires the most minimal mutilation of the existing web of
beliefs and/or maximizes virtues that he explicitly recognizes as
pragmatic in character. Indeed, it would seem that on Quine's
view the very notion of a belief being more central or peripheral or
in lesser or greater "proximity" to sense experience
should be cashed out simply as a measure of our willingness to revise
it in response to recalcitrant experience. That is, it would seem that
what it *means* for one belief to be located
"closer" to the sensory periphery of the web than another
is simply that we are more likely to revise the first than the second
if doing so would enable us to bring the web as a whole into
conformity with otherwise recalcitrant sense experience. Thus, Quine
saw the traditional distinction between analytic and synthetic beliefs
as simply registering the endpoints of a psychological continuum
ordering our beliefs according to the ease and likelihood with which
we are prepared to revise them in order to reconcile the web as a
whole with our sense experience as a whole.
### 2.2 Challenging the Rationality of Science
It is perhaps unsurprising that such holist underdetermination has
been taken to pose a threat to the fundamental rationality of the
scientific enterprise. The claim that the empirical evidence alone
underdetermines our response to failed predictions or recalcitrant
experience might even seem to *invite* the suggestion that what
systematically steps into the breach to do the further work of
singling out just one or a few candidate responses to disconfirming
evidence is something irrational or at least arational in character.
Imre Lakatos and Paul Feyerabend each suggested that because of
underdetermination, the difference between empirically successful and
unsuccessful theories or research programs is largely a function of
the differences in talent, creativity, resolve, and resources of those
who advocate them. And at least since the influential work of Thomas
Kuhn, one important line of thinking about science has held that it is
ultimately the social and political interests (in a suitably broad
sense) of scientists themselves which serve to determine their
responses to disconfirming evidence and therefore the further
empirical, methodological, and other commitments of any given
scientist or scientific community. Mary Hesse suggests that Quinean
underdetermination showed why certain "non-logical" and
"extra-empirical" considerations must play a role in
theory choice, and claims that "it is only a short step from
this philosophy of science to the suggestion that adoption of such
criteria, that can be seen to be different for different groups and at
different periods, should be explicable by social rather than logical
factors" (1980, 33). Perhaps the most prominent modern day
defenders of this line of thinking are those scholars in the sociology
of scientific knowledge (SSK) movement and in feminist science studies
who argue that it is typically the career interests, political
affiliations, intellectual allegiances, gender biases, and/or pursuit
of power and influence by scientists themselves which play a crucial
or even decisive role in determining precisely which beliefs are
abandoned or retained when faced with conflicting evidence (classic
works in SSK include Bloor 1991, Collins 1992, and Shapin and Schaffer
1985; in feminist science studies, see Longino, 1990, 2002, and for a
recent review, Nelson 2022). The shared argumentative schema here is
one on which holist underdetermination ensures that the evidence alone
cannot do the work of picking out a unique response to failed
predictions or recalcitrant experience, thus something else must step
in to do the job, and sociologists of scientific knowledge, feminist
critics of science, and other interest-driven theorists of science
each have their favored suggestions close to hand. (For useful further
discussion, see Okasha 2000. Note that historians of science have also
appealed to underdetermination in presenting "counterfactual
histories" exploring the ways in which important historical
developments in science might have turned out quite differently than
they actually did; see, for example, Radick 2023.)
In response to this line of argument, Larry Laudan (1990) argues that
the significance of such underdetermination has been greatly
exaggerated. Underdetermination actually comes in a wide variety of
strengths, he insists, depending on precisely what is being asserted
about the character, the availability, and (most importantly) the
*rational defensibility* of the various competing hypotheses or
ways of revising our beliefs that the evidence supposedly leaves us
free to accept. Laudan usefully distinguishes a number of different
dimensions along which claims of underdetermination vary in strength,
and he goes on to insist that those who attribute dramatic
significance to the thesis that our scientific theories are
underdetermined by the evidence defend only the weaker versions of
that thesis, yet draw dire consequences and shocking morals regarding
the character and status of the scientific enterprise from much
stronger versions. He suggests, for instance, that Quine's
famous claim that any hypothesis can be preserved "come what
may" in the way of evidence can be defended simply as a
description of what it is *psychologically* possible for human
beings to do, but Laudan insists that in this form the thesis is
simply bereft of interesting or important consequences for
epistemology-- the study of *knowledge*. Along this
dimension of variation, the strong version of the thesis asserts that
it is always normatively or rationally *defensible* to retain
any hypothesis in the light of any evidence whatsoever, but this
latter, stronger version of the claim, Laudan suggests, is one for
which no convincing evidence or argument has ever been offered. More
generally, Laudan argues, arguments for underdetermination turn on
implausibly treating all logically possible responses to the evidence
as equally justified or rationally defensible. For example, Laudan
suggests that we might reasonably hold the resources of *deductive
logic* to be insufficient to single out just one acceptable
response to disconfirming evidence, but not that deductive logic
*plus the sorts of ampliative principles of good reasoning
typically deployed in scientific contexts* are insufficient to do
so. Similarly, defenders of underdetermination might assert either the
*nonuniqueness* claim that for any given theory or web of
beliefs there is at least one alternative that can also be
reconciled with the available evidence, or the much stronger claim
that *all* of the contraries of any given theory can be
reconciled with the available evidence equally well. And the claim of
such "reconciliation" itself disguises a wide range of
further alternative possibilities: that our theories can be made
*logically compatible* with any amount of disconfirming
evidence (perhaps by the simple expedient of removing any claim(s)
with which the evidence is in conflict), that any theory may be
reformulated or revised so as to *entail* any piece of
previously disconfirming evidence, or so as to *explain*
previously disconfirming evidence, or that any theory can be made to
be *as well supported empirically* by any collection of
evidence as any other theory. And in *all* of these respects,
Laudan claims, partisans have defended only the weaker forms of
underdetermination while founding their further claims about and
conceptions of the scientific enterprise on versions much stronger
than those they have managed or even attempted to defend.
Laudan is certainly right to distinguish these various versions of
holist underdetermination, and he is equally right to suggest that
many of the thinkers he confronts have derived grand morals concerning
the scientific enterprise from much stronger versions of
underdetermination than they are able to defend, but the underlying
situation is somewhat more complex than he suggests. Laudan's
overarching claim is that champions of holist underdetermination show
only that a wide variety of responses to disconfirming evidence are
logically possible (or even just psychologically possible), rather
than that these are all rationally defensible or equally
well-supported by the evidence. But his straightforward appeal to
further epistemic resources like ampliative principles of belief
revision that are supposed to help narrow the merely logical
possibilities down to those which are reasonable or rationally
defensible is itself problematic, at least as part of any attempt to
respond to Quine. This is because on Quine's holist picture of
knowledge such further ampliative principles governing legitimate
belief revision are *themselves* simply part of the web of our
beliefs, and are therefore open to revision in response to
recalcitrant experience as well. Indeed, this is true even for the
principles of deductive logic and the (consequent) demand for
particular forms of logical consistency between parts of the web
itself! So while it is true that the ampliative principles we
currently embrace do not leave all logically or even psychologically
possible responses to the evidence open to us (or leave us free to
preserve any hypothesis "come what may"), our continued
adherence to *these very principles*, rather than being willing
to revise the web of belief so as to abandon them, is part of the
phenomenon to which Quine is using underdetermination to draw our
attention, and so cannot be taken for granted without begging the
question. Put another way, Quine does not simply ignore the further
principles that function to ensure that we revise the web of belief in
one way rather than others, but it follows from his account that such
principles are themselves part of the web and therefore candidates for
revision in our efforts to bring the web of beliefs into conformity
(by the resulting web's own lights) with sensory experience.
This recognition makes clear why it will be extremely difficult to say
how the shift to an alternative web of belief (with alternative
ampliative or even deductive principles of belief revision) should or
even can be evaluated for its rational defensibility. Each proposed
revision is likely to be maximally rational by the lights of the
principles it itself
sanctions.[5]
Of course we can rightly say that many candidate revisions would
violate our *presently accepted* ampliative principles of
rational belief revision, but the preference we have for those rather
than the alternatives is itself simply generated by their position in
the web of belief we have inherited, and the role that they themselves
play in guiding the revisions we are inclined to make to that web in
light of ongoing experience.
Thus, if we accept Quine's general picture of knowledge, it
becomes quite difficult to disentangle normative from descriptive
issues, or questions about the psychology of human belief revision
from questions about the justifiability or rational defensibility of
such revisions. It is in part for this reason that Quine famously
suggests (1969, 82; see also pp. 75-76) that epistemology itself
"falls into place as a chapter of psychology and hence of
natural science." His point is not that epistemology should
simply be abandoned in favor of psychology, but instead that there is
ultimately no way to draw a meaningful distinction between the two.
(James Woodward, in comments on an earlier draft of this entry,
pointed out that this makes it all the harder to assess the
significance of Quinean underdetermination in light of Laudan's
complaint or even know the rules for doing so, but in an important way
this difficulty was Quine's point all along!) Quine's
claim is that "[e]ach man is given a scientific heritage plus a
continuing barrage of sensory stimulation; and the considerations
which guide him in warping his scientific heritage to fit his
continuing sensory promptings are, where rational, pragmatic"
(1951, 46), but the role of these pragmatic considerations or
principles in selecting just one of the many possible revisions of the
web of belief in response to recalcitrant experience is not to be
contrasted with those same principles having rational or epistemic
justification. Far from conflicting with or even being orthogonal to
the search for truth and our efforts to render our beliefs maximally
responsive to the evidence, Quine insists, revising our beliefs in
accordance with such pragmatic principles "at bottom, is what
evidence is" (1955, 251). Whether or not this strongly
naturalistic conception of epistemology can ultimately be defended, it
is misleading for Laudan to suggest that the thesis of
underdetermination becomes trivial or obviously insupportable the
moment we inquire into the rational defensibility rather than the mere
logical or psychological possibility of alternative revisions to the
holist's web of belief.
In fact, there is an important connection between this lacuna in
Laudan's discussion and the further uses made of the thesis of
underdetermination by sociologists of scientific knowledge, feminist
epistemologists, and other vocal champions of holist
underdetermination. When faced with the invocation of further
ampliative standards or principles that supposedly rule out some
responses to disconfirmation as irrational or unreasonable, these
thinkers typically respond by insisting that the embrace of such
further standards or principles (or perhaps their application to
particular cases) is *itself* underdetermined, historically
contingent, and/or subject to ongoing social negotiation. For this
reason, they suggest, such appeals (and their success or failure in
convincing the members of a given community) should be explained by
reference to the same broadly social and political interests that they
claim are at the root of theory choice and belief change in science
more generally (see, e.g., Shapin and Schaffer, 1985). On both
accounts, then, our response to recalcitrant evidence or a failed
prediction is constrained in important ways by features of the
existing web of beliefs. But for Quine, the continuing force of these
constraints is ultimately imposed by the fundamental principles of
human psychology (such as our preference for minimal mutilation of the
web, or the pragmatic virtues of simplicity, fecundity, etc.), while
for constructivist theorists of science such as Shapin and Schaffer,
the continuing force of any such constraints is limited only by the
ongoing negotiated agreement of the communities of scientists who
respect them.
As this last contrast makes clear, recognizing the limitations of
Laudan's critique of Quine and the fact that we cannot dismiss
holist underdetermination with any straightforward appeal to
ampliative principles of good reasoning by itself does nothing to
establish the further *positive* claims about belief revision
advanced by interest-driven theorists of science. Conceding that
theory choice or belief revision in science is underdetermined by the
evidence in just the ways that Duhem and/or Quine suggested leaves
entirely open whether it is instead the (suitably broad) social or
political interests of scientists themselves that do the further work
of singling out the particular beliefs or responses to falsifying
evidence that any particular scientist or scientific community will
actually adopt or find compelling. Even many of those philosophers of
science who are most strongly convinced of the general significance of
various forms of underdetermination remain deeply skeptical of this
latter thesis and thoroughly unconvinced by the empirical evidence
that has been offered in support of it (usually in the form of case
studies of particular historical episodes in science).
Appeals to underdetermination have also loomed large in recent
philosophical debates concerning the place of values in science, with
a number of authors arguing that the underdetermination of theory by
data is among the central reasons that values (or
"non-epistemic" values) do and perhaps must play a central
role in scientific inquiry. Feminist philosophers of science have
sometimes suggested that it is such underdetermination which creates
room not only for unwarranted androcentric values or assumptions to
play central roles in the embrace of particular theoretical
possibilities, but also for the critical and alternative approaches
favored by feminists themselves (e.g. Nelson 2022). But appeals to
underdetermination also feature prominently in more general arguments
against the possibility or desirability of value-free science. Perhaps
most influentially, Helen Longino's "contextual
empiricism" suggests that a wide variety of non-epistemic values
play important roles in determining our scientific beliefs in part
because underdetermination prevents data or evidence alone from doing
so. For this and other reasons she concludes that objectivity in
science is therefore best served by a diverse set of participants who
bring a variety of different values or value-laden assumptions to the
enterprise (Longino 1990, 2002).
## 3. Contrastive Underdetermination, Empirical Equivalents, and Unconceived Alternatives
### 3.1 Contrastive Underdetermination: Back to Duhem
Although it is also a form of underdetermination, what we described in
Section 1 above as contrastive underdetermination raises fundamentally
different issues from the holist variety considered in Section 2 (Bonk
2008 raises many of these issues). John Stuart Mill articulated the
challenge of contrastive underdetermination with impressive clarity in
*A System of Logic*, where he writes:
>
> Most thinkers of any degree of sobriety allow, that an hypothesis...is
> not to be received as probably true because it accounts for all the
> known phenomena, since this is a condition sometimes fulfilled
> tolerably well by two conflicting hypotheses...while there are
> probably a thousand more which are equally possible, but which, for
> want of anything analogous in our experience, our minds are unfitted
> to conceive. ([1867] 1900, 328)
>
This same concern is also evident in Duhem's original writings
concerning so-called crucial experiments, where he seeks to show that
even when we explicitly suspend any concerns about holist
underdetermination, the contrastive variety remains an obstacle to our
discovery of truth in theoretical science:
>
> But let us admit for a moment that in each of these systems
> [concerning the nature of light] everything is compelled to be
> necessary by strict logic, except a single hypothesis; consequently,
> let us admit that the facts, in condemning one of the two systems,
> condemn once and for all the single doubtful assumption it contains.
> Does it follow that we can find in the 'crucial
> experiment' an irrefutable procedure for transforming one of the
> two hypotheses before us into a demonstrated truth? Between two
> contradictory theorems of geometry there is no room for a third
> judgment; if one is false, the other is necessarily true. Do two
> hypotheses in physics ever constitute such a strict dilemma? Shall we
> ever dare to assert that no other hypothesis is imaginable? Light may
> be a swarm of projectiles, or it may be a vibratory motion whose waves
> are propagated in a medium; is it forbidden to be anything else at
> all? ([1914] 1954, 189)
>
Contrastive underdetermination is so-called because it questions the
ability of the evidence to confirm any given hypothesis *against
alternatives*, and the central focus of discussion in this
connection (equally often regarded as "the" problem of
underdetermination) concerns the character of the supposed
alternatives. Of course the two problems are not entirely
disconnected, because it is open to us to consider alternative
possible modifications of the web of beliefs as alternative theories
between which the empirical evidence alone is powerless to decide. But
we have already seen that one *need* not think of the
alternative responses to recalcitrant experience as competing
theoretical alternatives to appreciate the character of the
holist's challenge, and we will see that one need not embrace
any version of holism about confirmation to appreciate the quite
distinct problem that the available evidence might support more than
one theoretical alternative. It is perhaps most useful here to think
of holist underdetermination as starting from a particular theory or
body of beliefs and claiming that our revision of those beliefs in
response to new evidence may be underdetermined, while contrastive
underdetermination instead starts from a given body of evidence and
claims that more than one theory may be well-supported by that
evidence. Part of what has contributed to the conflation of these two
problems is the holist presuppositions of those who originally made
them famous. After all, on Quine's view, we simply revise the
web of belief in response to recalcitrant experience, and so the
suggestion that there are multiple possible revisions of the web
available in response to any particular evidential finding just
*is* the claim that there are in fact many different
"theories" (i.e. candidate webs of belief) that are
equally well-supported by any given body of
data.[6]
But if we give up such extreme holist views of evidence, meaning,
and/or confirmation, the two problems take on very different
identities, with very different considerations in favor of taking them
seriously, very different consequences, and very different candidate
solutions. Notice, for instance, that even if we somehow knew that no
other hypothesis on a given subject was well-confirmed by a given body
of data, that would not tell us where to place the blame or which of
our beliefs to give up if the remaining hypothesis in conjunction with
others subsequently resulted in a failed empirical prediction. And as
Duhem suggests in the passage cited above, even if we supposed that we
somehow knew exactly which of our hypotheses to blame in response to a
failed empirical prediction, this would not help us to decide whether
or not there are other hypotheses available that are also
well-confirmed by the data we actually have.
One way to see why not is to consider an analogy that champions of
contrastive underdetermination have sometimes used to support their
case. If we consider any finite group of data points, an elementary
proof reveals that there are an infinite number of distinct
mathematical functions describing different curves that will pass
through all of them. As we add further data to our initial set we will
eliminate functions describing curves which no longer capture all of
the data points in the new, larger set, but no matter how much data we
accumulate, there will always be an infinite number of functions
*remaining* that define curves including all the data points in
the new set and which would therefore seem to be equally well
supported by the empirical evidence. *No* finite amount of data
will *ever* be able to narrow the possibilities down to just a
single function or indeed, any finite number of candidate functions,
from which the distribution of data points we have might have been
generated. Each new data point we gather eliminates an infinite number
of curves that *previously* fit all the data (so the problem
here is not the holist's challenge that we do not know which
beliefs to give up in response to failed predictions or disconfirming
evidence), but also leaves an infinite number still in contention.
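The elementary proof behind this analogy can be made concrete with a
minimal sketch (the particular data set and functions below are
illustrative choices, not examples drawn from the literature): take any
function that passes through a finite set of data points, and add to it
any multiple of a polynomial that vanishes at every one of those
points. Each choice of multiplier yields a *distinct* curve that fits
the data exactly while diverging from its rivals at every untested
point.

```python
# Data points sampled from the curve y = x**2.
points = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]

def base(x):
    """One curve passing through all the data points."""
    return x ** 2

def vanishing(x):
    """A polynomial that is zero at every data point:
    the product (x - x1)(x - x2)...(x - xn)."""
    prod = 1.0
    for xi, _ in points:
        prod *= (x - xi)
    return prod

def rival(x, c):
    """For each constant c, a distinct curve through the same points."""
    return base(x) + c * vanishing(x)

# Every rival reproduces the data exactly...
for c in (1.0, -2.5, 100.0):
    assert all(abs(rival(xi, c) - yi) < 1e-9 for xi, yi in points)

# ...yet the rivals disagree at any point not in the data set,
# e.g. at x = 0.5, so no finite data set singles out one of them.
print([rival(0.5, c) for c in (0.0, 1.0, -2.5)])
```

Since there are infinitely many choices of `c` (and infinitely many
other vanishing functions besides this one), adding a new data point
eliminates infinitely many rivals while leaving infinitely many still
in contention, just as the passage above describes.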
### 3.2 Empirically Equivalent Theories
Of course, generating and testing fundamental scientific hypotheses is
rarely if ever a matter of finding curves that fit collections of data
points, so nothing follows directly from this mathematical analogy for
the significance of contrastive underdetermination in most scientific
contexts. But Bas van Fraassen has offered an extremely influential
line of argument intended to show that such contrastive
underdetermination is a serious concern for scientific theorizing more
generally. In *The Scientific Image* (1980), van Fraassen uses
a now-classic example to illustrate the possibility that even our best
scientific theories might have *empirical equivalents*: that
is, alternative theories making the very same empirical predictions,
and which therefore cannot be better or worse supported by any
*possible* body of evidence. Consider Newton's cosmology,
with its laws of motion and gravitational attraction. As Newton
himself realized, exactly the same predictions are made by the theory
whether we assume that the entire universe is at rest or assume
instead that it is moving with some constant velocity in any given
direction: from our position within it, we have no way to detect
constant, absolute motion by the universe as a whole. Thus, van
Fraassen argues, we are here faced with empirically equivalent
scientific theories: Newtonian mechanics and gravitation conjoined
either with the fundamental assumption that the universe is at
absolute rest (as Newton himself believed), or with any one of an
infinite variety of alternative assumptions about the constant
velocity with which the universe is moving in some particular
direction. All of these theories make all and only the same empirical
predictions, so no evidence will ever permit us to decide between them
on empirical
grounds.[7]
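The invariance at issue here can be illustrated with a toy computation
(a one-dimensional two-body system, not Newton's actual cosmology; the
masses, time step, and boost velocity are arbitrary assumptions made
for illustration). Because Newtonian gravitational forces depend only
on the bodies' *relative* separations, giving every body the same
constant additional velocity leaves every internal observable
unchanged, so no measurement made from within the system can detect
the boost:

```python
# Two bodies attracting each other gravitationally in one dimension,
# integrated once in a "rest" frame and once with the whole system
# boosted by a constant velocity v. The observable recorded is the
# relative separation, which comes out the same in both frames.

G, m1, m2 = 1.0, 1.0, 2.0  # illustrative units

def simulate(v_boost, steps=300, dt=0.001):
    x1, x2 = 0.0, 1.0          # initial positions
    v1, v2 = v_boost, v_boost  # both bodies share the same boost
    history = []
    for _ in range(steps):
        r = x2 - x1
        a1 = G * m2 / r**2     # acceleration of body 1 toward body 2
        a2 = -G * m1 / r**2    # acceleration of body 2 toward body 1
        v1 += a1 * dt
        v2 += a2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
        history.append(x2 - x1)  # the internal observable
    return history

rest = simulate(0.0)
boosted = simulate(5.0)
# The separation histories agree (to numerical precision) at every
# step: the two "theories" make the same empirical predictions.
assert all(abs(a - b) < 1e-9 for a, b in zip(rest, boosted))
```

Each distinct value of `v_boost` plays the role of one of van
Fraassen's infinitely many rival hypotheses about the universe's
absolute velocity, all empirically indistinguishable from within.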
Van Fraassen is widely (though mistakenly) regarded as holding that
the prospect of contrastive underdetermination grounded in such
empirical equivalents demands that we restrict our epistemic ambitions
for the scientific enterprise itself. His constructive empiricism
holds that the aim of science is not to find true theories, but only
theories that are empirically adequate: that is, theories whose claims
about *observable phenomena* are all true. Since the empirical
adequacy of a theory is not threatened by the existence of another
that is empirically equivalent to it, fulfilling this aim has nothing
to fear from the possibility of such empirical equivalents. In reply,
many critics have suggested that van Fraassen gives no reasons for
restricting belief to empirical adequacy that could not also be used
to argue for suspending our belief in the *future* empirical
adequacy of our best present theories. Of course there *could*
be empirical equivalents to our best theories, but there could also be
theories equally well-supported by all the evidence up to the present
which diverge in their predictions about observables in future cases
not yet tested. This challenge seems to miss the point of van
Fraassen's epistemic voluntarism: his claim is that we should
believe no more *but also no less* than we need to take full
advantage of our scientific theories, and a commitment to the
empirical adequacy of our theories, he suggests, is the least we can
get away with in this regard. Of course it is true that we are running
some epistemic risk in believing in even the full empirical adequacy
of our present theories, but this is the minimum we need to take full
advantage of the fruits of our scientific labors, and the risk is
considerably less than what we assume in believing in their truth: as
van Fraassen famously suggests, "it is not an epistemic
principle that one might as well hang for a sheep as a lamb"
(1980, 72).
In an influential discussion, Larry Laudan and Jarrett Leplin (1991)
argue that philosophers of science have invested even the bare
possibility that our theories might have empirical equivalents with
far too much epistemic significance. Notwithstanding the popularity of
the presumption that there are empirically equivalent rivals to every
theory, they argue, the conjunction of several familiar and relatively
uncontroversial epistemological theses is sufficient to defeat it.
Because the boundaries of what is observable change as we develop new
experimental methods and instruments, because auxiliary assumptions
are always needed to derive empirical consequences from a theory (cf.
confirmational holism, above), and because these auxiliary assumptions
are themselves subject to change over time, Laudan and Leplin conclude
that there is no guarantee that any two theories judged to be
empirically equivalent at a given time will remain so as the state of
our knowledge advances. Accordingly, any judgment of empirical
equivalence is both defeasible and relativized to a particular state
of science. So even if two theories are empirically equivalent at a
given time, this is no guarantee that they will *remain* so, and
thus there is no foundation for a general pessimism about our ability
to distinguish theories that are empirically equivalent to each other
on empirical grounds. Although they concede that we could have good
reason to think that particular theories have empirically equivalent
rivals, this must be established case-by-case rather than by any
general argument or presumption.
One fairly standard reply to this line of argument is to suggest that
what Laudan and Leplin really show is that the notion of empirical
equivalence must be applied to larger collections of beliefs than
those traditionally identified as scientific theories--at least
large enough to encompass the auxiliary assumptions needed to derive
empirical predictions from them. At the extreme, perhaps this means
that the notion of empirical equivalents (or at least timeless
empirical equivalents) cannot be applied to anything less than
"systems of the world" (i.e. total Quinean webs of
belief), but even that is not fatal: what the champion of contrastive
underdetermination asserts is that there are empirically equivalent
systems of the world that *incorporate* different theories of
the nature of light, or spacetime, or whatever (for useful discussion,
see Okasha 2002). On the other hand, it might seem that quick examples
like van Fraassen's variants of Newtonian cosmology do not serve
to make *this* thesis as plausible as the more limited claim of
empirical equivalence for individual theories. It seems equally
natural, however, to respond to Laudan and Leplin simply by conceding
the variability in empirical equivalence but insisting that this is
not enough to undermine the problem. Empirical equivalents create a
serious obstacle to belief in a theory so long as there is
*some* empirical equivalent to that theory at any given time,
but it need not be the same one at each time. On this line of
thinking, cases like van Fraassen's Newtonian example illustrate
how easy it is for theories to admit of empirical equivalents at any
given time, and thus constitute a reason for thinking that there
probably are or will be empirical equivalents to any given theory at
any particular time, assuring that whenever the question of belief in
a given theory arises, the challenge posed to it by contrastive
underdetermination arises as well.
Laudan and Leplin also suggest, however, that even if the universal
existence of empirical equivalents were conceded, this would do much
less to establish the significance of underdetermination than its
champions have supposed, because "theories with exactly the same
empirical consequences may admit of differing degrees of evidential
support" (1991, 465). A theory may be better supported than an
empirical equivalent, for instance, because the former but not the
latter is derivable from a more general theory whose consequences
include a third, well supported, hypothesis. More generally, the
belief-worthiness of an hypothesis depends crucially on how it is
connected or related to other things we believe and the evidential
support we have for those other
beliefs.[8]
Laudan and Leplin suggest that we have invited the specter of rampant
underdetermination only by failing to keep this familiar home truth in
mind and instead implausibly identifying the evidence bearing on a
theory exclusively with the theory's own entailments or
empirical consequences (but cf. Tulodziecki 2012). This impoverished
view of evidential support, they argue, is in turn the legacy of a
failed foundationalist and positivistic approach to the philosophy of
science which mistakenly assimilates epistemic questions about how to
decide whether or not to believe a theory to semantic questions about
how to establish a theory's meaning or truth-conditions.
John Earman (1993) has argued that this dismissive diagnosis does not
do justice to the threat posed by underdetermination. He argues that
worries about underdetermination are an aspect of the more general
question of the reliability of our inductive methods for determining
beliefs, and notes that we cannot decide how serious a problem
underdetermination poses without specifying (as Laudan and Leplin do
not) the inductive methods we are considering. Earman regards some
version of Bayesianism as our most promising form of inductive
methodology, and he proceeds to show that challenges to the long-run
reliability of our Bayesian methods can be motivated by considerations
of the empirical indistinguishability (in several different and
precisely specified senses) of hypotheses stated in any language
richer than that of the evidence itself that do not amount simply to
general skepticism about those inductive methods. In other words, he
shows that there are more reasons to worry about underdetermination
concerning inferences to hypotheses about unobservables than concerning, say,
inferences about unobserved observables. He also goes on to argue that
at least two genuine cosmological theories have serious, nonskeptical,
and nonparasitic empirical equivalents: the first essentially replaces
the gravitational field in Newtonian mechanics with curvature in
spacetime itself,[9]
while the second recognizes that Einstein's General Theory of
Relativity permits cosmological models exhibiting different global
topological features which cannot be distinguished by any evidence
inside the light cones of even idealized observers who live
forever.[10]
And he suggests that "the production of a few concrete examples
is enough to generate the worry that only a lack of imagination on our
part prevents us from seeing comparable examples of underdetermination
all over the map" (1993, 31) even as he concedes that his case
leaves open just how far the threat of underdetermination extends
(1993, 36).
Most philosophers of science, however, have not embraced the idea that
it is only lack of imagination which prevents us from finding
empirical equivalents to our scientific theories generally. They note
that the convincing examples of empirical equivalents we do have are
all drawn from a single domain of highly mathematized scientific
theorizing in which the background constraints on serious theoretical
alternatives are far from clear, and suggest that it is therefore
reasonable to ask whether even a small handful of such examples should
make us believe that there are probably empirical equivalents to most
of our scientific theories most of the time. They concede that it is
always *possible* that there are empirical equivalents to even
our best scientific theories concerning any domain of nature, but
insist that we should not be willing to suspend belief in any
*particular* theory until some convincing alternative to it can
actually be produced: as Philip Kitcher puts it, "give us a
rival explanation, and we'll consider whether it is sufficiently
serious to threaten our confidence" (1993, 154; see also Leplin
1997, Achinstein 2002). That is, these thinkers insist that until we
are able to *actually construct* an empirically equivalent
alternative to a given theory, the bare possibility that such
equivalents exist is insufficient to justify suspending belief in the
best theories we do have. For this same reason most philosophers of
science are unwilling to follow van Fraassen into what they regard as
constructive empiricism's unwarranted epistemic modesty. Even if
van Fraassen is right about the most minimal beliefs we must hold in
order to take full advantage of our scientific theories, most thinkers
do not see why we should believe the least we can get away with rather
than believing the most we are entitled to by the evidence we
have.
Champions of contrastive underdetermination have most frequently
responded by trying to establish that *all* theories have
empirical equivalents, typically by proposing something like an
algorithmic procedure for generating such equivalents from any theory
whatsoever. Stanford (2001, 2006) suggests that these efforts to
*prove* that all our theories must have empirical equivalents
fall roughly but reliably into global and local varieties, and that
neither makes a convincing case for a distinctive scientific problem
of contrastive underdetermination. Global algorithms are
well-represented by Andre Kukla's (1996) suggestion that from
any theory *T* we can immediately generate such empirical
equivalents as *T*' (the claim that *T*'s
observable consequences are true, but *T* itself is false),
*T*'' (the claim that the world behaves according to
*T* when observed, but some specific incompatible alternative
otherwise), and the hypothesis that our experience is being
manipulated by powerful beings in such a way as to make it appear that
*T* is true. But such possibilities, Stanford argues, amount to
nothing more than the sort of Evil Deceiver to which Descartes
appealed in order to doubt any of his beliefs that could possibly be
doubted (see Section 1, above). Such radically skeptical scenarios
pose an equally powerful (or powerless) challenge to any knowledge
claim whatsoever, no matter how it is arrived at or justified, and
thus pose no *special* problem or challenge for beliefs offered
to us by theoretical science. If global algorithms like Kukla's
are the only reasons we can give for taking underdetermination
seriously in a scientific context, then there is no distinctive
problem of the underdetermination of scientific theories by data, only
a salient reminder of the irrefutability of classically Cartesian or
radical
skepticism.[11]
In contrast to such global strategies for generating empirical
equivalents, local algorithmic strategies instead begin with some
particular scientific theory and proceed to generate alternative
versions that will be equally well supported by all possible evidence.
This is what van Fraassen does with the example of Newtonian
cosmology, showing that an infinite variety of supposed empirical
equivalents can be produced by ascribing different constant absolute
velocities to the universe as a whole. But Stanford suggests that
empirical equivalents generated in this way are also insufficient to
show that there is a distinctive and genuinely troubling form of
underdetermination afflicting scientific theories, because they rely
on simply saddling particular scientific theories with further claims
for which those theories themselves (together with whatever background
beliefs we actually hold) imply that we cannot have any evidence. Such
empirical equivalents invite the natural response that they simply
tack on to our theories further commitments that are or should be no
part of those theories themselves. Such claims, it seems, should
simply be excised from our theories, leaving over just the claims that
sensible defenders would have held were all we were entitled to
believe by the evidence in any case. In van Fraassen's Newtonian
example, for instance, this could be done simply by undertaking no
commitment concerning the absolute velocity and direction (or lack
thereof) of the universe as a whole. Note also that if we believe a
given scientific theory when one of the empirical equivalents we could
generate from it by the local algorithmic strategy is correct instead,
most of what we originally believed will nonetheless turn out to be
straightforwardly true.
Philosophers of science have responded in
a variety of ways to the suggestion that a few or even a small handful
of serious examples of empirical equivalents does not suffice to
establish that there are probably such equivalents to most scientific
theories in most domains of inquiry. One such reaction has been to
invite more careful attention to the details of particular examples of
putative underdetermination: considerable work has been devoted to
assessing the threat of underdetermination in the case of particular
scientific theories (for recent examples see Pietsch 2012; Tulodziecki
2013; Werndl 2013; Belot 2014; Butterfield 2014; Miyake 2015; Kovaka
2019; Fletcher 2021, and others). Other thinkers have sought to
investigate whether certain types of scientific theories or theories
in particular scientific fields differ in the extent to which they are
genuinely threatened by underdetermination. Some have argued that
underdetermination poses a less serious (Cleland 2002; Carman 2005) or
a more serious (Turner 2005, 2007) challenge for
'historical' sciences like geology, paleontology, and
archaeology than it does for 'experimental' sciences like
particle physics. Others have resisted such generalizations but
nonetheless argued that underdetermination predicaments arise in
distinctive or characteristic ways in historical sciences and that
scientists in these fields have different resources and strategies for
addressing them (Currie 2018; Forber and Griffith 2011; Stanford
2010). Finally, some thinkers have sought to defend particular forms
of explanation frequently deployed in historical sciences, such as
narrative explanation (Sterelny and Currie 2017), from the charge that
they suffer from rampant underdetermination.
### 3.3 Unconceived Alternatives and A New Induction
Stanford (2001, 2006) concludes that no convincing general case has
been made for the presumption that there are empirically equivalent
rivals to all or most scientific theories, or to any theories besides
those for which such equivalents can actually be constructed. But he
goes on to insist that empirical equivalents are no essential part of
the case for a significant problem of contrastive underdetermination.
Our efforts to confirm scientific theories, he suggests, are no less
threatened by what Larry Sklar (1975, 1981) has called
"transient" underdetermination, that is, theories which
are *not* empirically equivalent but are equally (or at least
reasonably) well confirmed by all the evidence we happen to have in
hand at the moment, so long as this transient predicament is also
"recurrent", that is, so long as we think that there is
(probably) at least one such (fundamentally distinct) alternative
available--and thus the transient predicament
re-arises--whenever we are faced with a decision about whether to
believe a given theory at a given time. Stanford argues that a
convincing case for contrastive underdetermination of this recurrent,
transient variety can indeed be made, and that the evidence for it is
available in the historical record of scientific inquiry itself.
Stanford concedes that present theories are not transiently
underdetermined by the theoretical alternatives we have actually
developed and considered to date: we think that our own scientific
theories are considerably better confirmed by the evidence than any
rivals we have actually produced. The central question, he argues, is
whether we should believe that there are well confirmed alternatives
to our best scientific theories that are *presently
unconceived* by us. And the primary reason we should believe that
there are, he claims, is the long history of repeated transient
underdetermination by *previously* unconceived alternatives
across the course of scientific inquiry. In the progression from
Aristotelian to Cartesian to Newtonian to contemporary mechanical
theories, for instance, the evidence available at the time each
earlier theory dominated the practice of its day also offered
compelling support for each of the later alternatives (unconceived at
the time) that would ultimately come to displace it. Stanford's
"New Induction" over the history of science claims that
this situation is typical; that is, that "we have, throughout
the history of scientific inquiry and in virtually every scientific
field, repeatedly occupied an epistemic position in which we could
conceive of only one or a few theories that were well confirmed by the
available evidence, while subsequent inquiry would routinely (if not
invariably) reveal further, radically distinct alternatives as well
confirmed by the previously available evidence as those we were
inclined to accept on the strength of that evidence" (2006, 19).
In other words, Stanford claims that in the past we have repeatedly
failed to exhaust the space of fundamentally distinct theoretical
possibilities that were well confirmed by the existing evidence, and
that we have every reason to believe that we are probably also failing
to exhaust the space of such alternatives that are well confirmed by
the evidence we have at present. Much of the rest of his case is taken
up with discussing historical examples illustrating that earlier
scientists did not simply ignore or dismiss, but instead genuinely
*failed to conceive of* the serious, fundamentally distinct
theoretical possibilities that would ultimately come to displace the
theories they defended, only to be displaced in turn by others that
were similarly unconceived at the time. He concludes that "the
history of scientific inquiry itself offers a straightforward
rationale for thinking that there typically are alternatives to our
best theories equally well confirmed by the evidence, even when we are
unable to conceive of them at the time" (2006, 20; for
reservations and criticisms concerning this line of argument, see
Magnus 2006, 2010; Godfrey-Smith 2008; Chakravartty 2008; Devitt 2011;
Ruhmkorff 2011; Lyons 2013). Stanford concedes, however, that the
historical record can offer only fallible evidence of a distinctive,
general problem of contrastive scientific underdetermination, rather
than the kind of deductive proof that champions of the case from
empirical equivalents have typically sought. Thus, claims and
arguments about the various forms that underdetermination may take,
their causes and consequences, and the further significance they hold
for the scientific enterprise as a whole continue to evolve in the
light of ongoing controversy, and the underdetermination of scientific
theory by evidence remains very much a live and unresolved issue in
the philosophy of science.
# Understanding
## 1. Contexts
The concept of understanding has been sometimes prominent, sometimes
neglected, and sometimes viewed with suspicion, across a number of
different areas of philosophy (for a partial overview, see Zagzebski
2001). This section traces some of that background, beginning with the
place of understanding in Ancient Greek philosophy. It then considers
how the topic of understanding was lost and then
"recovered" in contemporary discussions in epistemology
and the philosophy of science.
### 1.1 Ancient Philosophy
The ancient Greek word *episteme* is at the root of our
contemporary word "epistemology", and among philosophers
it has been common to translate *episteme* simply as
"knowledge" (see, e.g., Parry 2003 [2020]).
For the last several decades, however, a case has been made that
"understanding" is a better translation of
*episteme*. (For early influential arguments see, e.g.,
Moravcsik 1979; Burnyeat 1980, 1981; Annas 1981.) To appreciate why,
note that knowledge, as now commonly conceived, can apparently be
quite easy to get. Thus it seems I can know a proposition such as
*that it is raining outside* just by opening my eyes. It also
seems that items of knowledge can be in principle isolated or
atomistic. I can therefore apparently know that it is raining while
knowing very little about other things, such as why it is raining, or
what constitutes rain, or when it will stop.
Understanding seems to be different from knowledge in both respects.
For one thing, understanding typically seems harder to acquire, and
more of an epistemic accomplishment, than knowledge (Pritchard 2010).
For another, the objects of understanding seem more structured and
interconnected (Zagzebski 2019). Thus the subject matters we try to
understand are often highly complex (quantum mechanics, the U.S. Civil
War), and even when we try to understand isolated events (such as the
spilling of my coffee cup), we typically do so by drawing connections
with other events (such as the jostling of the table by my knee).
With contrasts such as these in mind, it has seemed to several
scholars of Ancient philosophy that *episteme* has more in
common with what we would now call "understanding" than
what we would now call "knowledge". Thus Julia Annas notes
that, for Plato, the person with *episteme* has a
*systematic* understanding of things, and is not a mere
possessor of various truths (Annas 1981: ch. 10). Jonathan Lear
similarly claims that for Aristotle,
>
>
> To have *episteme* one must not only know a thing, one must
> also grasp its cause or explanation. This is to understand it: to know
> in a deep sense what it is and how it has come to be. (Lear 1988:
> 6)
>
>
>
Granted, what Greek philosophers had in mind by *episteme*
often does not fully align with our contemporary ideas about
understanding, because in the hands of philosophers such as Plato and
Aristotle *episteme* is an exceptionally high-grade epistemic
accomplishment. For Plato, full *episteme* seems to require a
grasp of the basic elements of reality--in its most complete
form, a grasp that traces back to the Form of the Good itself (Schwab
2016, 2020; Moss 2020). For Aristotle, it seems to require an
appreciation of the deductive relationships that allegedly hold
between natures or first principles and observable phenomena (Burnyeat
1981). In our contemporary use, by contrast, we often happily ascribe
understanding to quite low-grade cases, where forms or first
principles do not seem to be grasped or even relevant--as when we
take ourselves to understand why the coffee cup spilled. Still, with
its stress on systematicity and interconnectedness, *episteme*
plausibly has more in common with our contemporary concept
*understanding* than our contemporary concept
*knowledge*.
If an understanding-like state was of fundamental epistemic importance
to the Greeks, it is interesting to ask why the focus of epistemology
shifted over time, and why an interest in knowledge, especially
propositional knowledge, came to predominate--including and maybe
especially quite isolated bits of propositional knowledge, such as
*that I am sitting in front of a fire*, or *that I have two
hands*.
Perhaps the shift occurred in response to the rise of scepticism in
Hellenistic philosophy (Burnyeat 1980: 188; cf. Zagzebski 2001: 236).
Or perhaps the modern focus on propositional knowledge was a response
to the wars of religion in sixteenth and seventeenth century Europe,
where it became increasingly important to distinguish good knowledge
claims from bad, even with respect to quite isolated claims. (For more
on the modern rise of interest in propositional knowledge, see Pasnau
2010; 2017.) Regardless of why the shift occurred, a desire to grasp
"how things hang together" undoubtedly remained a part of
the human condition. It is therefore unsurprising that a move to
revive understanding as a topic of philosophical inquiry eventually
emerged.
### 1.2 Epistemology & Philosophy of Science
Although understanding as an epistemic good was largely neglected by
modern epistemologists in favor of theorizing about knowledge (or
related epistemic properties, such as justification and rationality),
it reappeared as a central object of concern at the end of the
twentieth century, for a few different reasons.
Catherine Elgin, for one, influentially argued that we cannot make
sense of some of our greatest intellectual accomplishments, especially
the accomplishments we associate with science and art, without
appreciating the way they are often oriented not towards knowledge,
but rather understanding (see especially Elgin 1991, 1996). From the
perspective of virtue epistemology, Linda Zagzebski claimed that if we
think of an intellectual virtue as an "excellence of the
mind", attuned to a variety of epistemic goods, then there is
something one-sided about focusing attention only on the good of
knowledge, while neglecting other highly prized goods such as
understanding and wisdom (see especially Zagzebski 1996: 43-50;
2001). Finally, Jonathan Kvanvig forcefully argued that while
understanding is distinctively valuable from an epistemic point of
view--i.e., more valuable than any of its proper parts, such as
truth, or justification, or a combination of the two--knowledge
is not (Kvanvig 2003). For all these thinkers, the spotlight of
concern within epistemology needed to be broadened so that goods such
as understanding could be given their proper due, and their claims
resonated with other epistemologists (for overviews, see Gordon 2017;
Hannon *forthcoming*).
While the notion of understanding was often simply neglected in
epistemology, in the philosophy of science it was for many years
actively downplayed. A primary figure in this dynamic was Carl Hempel
(see especially Hempel 1965). Although Hempel helped bring the notion
of explanation back into respectability in the philosophy of science,
he had significant reservations about tying explanation too closely to
the notion of understanding.
Part of this seemed to stem from the fact that the idea of
understanding that prevailed in his day was highly subjective and
psychological--it emphasized more a subjective
"sense" of understanding, often tied to a felt sense of
familiarity. As Hempel notes, however, poor explanations might excel
along this dimension because they might
>
>
> give the questioner a sense of having attained some understanding;
> they may resolve his perplexity and in this sense "answer"
> his question.
>
>
>
"But", he continues,
>
>
> however satisfactory these answers may be psychologically, they are
> not adequate for the purposes of science, which, after all, is
> concerned to develop a conception of the world that has a clear,
> logical bearing on our experience and is thus capable of objective
> test. (Hempel 1966: 47-48)
>
>
>
The goodness of an explanation therefore seems to have little obvious
connection to whether it manages to generate understanding in a
particular audience. A good explanation might do that. But then again,
it might not. Patently poor explanations are also able to generate a
rich "sense" of understanding in some audiences (think of
conspiracy theorists), despite their shortcomings.
Philosophers such as Michael Friedman responded to Hempel's
concerns by noting that simply because there seems to be a
psychological element to understanding it does not follow that
understanding is merely subjective or up for grabs (Friedman 1974).
After all, knowledge has a psychological element, in light of the
belief condition, but few hold that knowledge is merely subjective or
up for grabs. Others, such as Jaegwon Kim, argued that leaving
considerations about understanding out of accounts of explanation was
deeply mistaken (Kim 1994 [2010]). After all, Kim claimed, we desire
to explain things *because* we want to understand them.
Despite the efforts of Friedman, Kim, and others, significant
reservations about the notion of understanding continued to linger in
the philosophy of science (see, for example, Trout 2002). A notable
shift occurred with Henk de Regt's distinction between the
"feeling" or phenomenology of understanding and genuine
understanding (de Regt 2004, 2009: ch. 1; cf. de Regt and Dieks 2005).
In particular, he argued that the feeling is neither necessary nor
sufficient for genuine understanding. De Regt's important
distinction helped pave the way for a new surge of work on the topic
over the last two decades, and helped philosophers move beyond
thinking of understanding mainly in terms of felt
"*aha!*" or "*eureka!*"
experiences.
## 2. Theoretical Frameworks
As we look to particular accounts of understanding, it will help to
consider in turn:
1. understanding's distinctive object (or objects),
2. its distinctive psychology, and
3. the distinctive sort of normative relationship that needs to hold
between the psychology of the person who understands and the object of
his or her understanding.
By way of comparison, consider the traditional "justified true
belief" analysis of knowledge. On this view, knowledge involves
a distinctive object (roughly, the truth, or a true proposition), a
distinctive psychology (the psychological act of belief or assent),
and a distinctive normative relationship that needs to hold between
the psychology of the believer and the thing believed (namely, that
one's belief in the true proposition needs to be justified, in
some sense).
What can be said, in a parallel way, about the elements of
understanding?
### 2.1 Objects of Understanding
At least at first glance, the objects of understanding appear to be so
varied that it is not obvious where one might find a common thread.
Thus, we can understand fields of study, particular states of affairs,
institutions, other people, and on and on (cf. Elgin
1996: 123).
In line with the discussion in Ancient philosophy, and setting aside
for the moment special issues related to understanding other people
(see Section 5), let us start with the generic view that the objects
of understanding are something like "connections" or
"relations". Following a distinction from Kim (1994
[2010]), we can contrast two ways of thinking about these connections
and relations--i.e., these plausible objects of
understanding.
According to *explanatory internalism*, the connections or
explanatory relations one grasps are "logico-linguistic"
relations that hold among a person's beliefs or attitudes, or
more exactly the contents of those beliefs or attitudes (Kim 1994
[2010]). *What* we grasp or see, when we understand some
phenomenon, is how these various contents are logically or
semantically related to one another.
Kim argues that Hempel is a paradigm example of an explanatory
internalist (Kim 1994 [2010]; cf. 1999 [2010]). For instance, suppose
that one wants to explain and hence understand why a particular bar of
metal began to rust (McCain 2016: ch. 9). A good Hempelian explanation
would be one in which a sentence describing the rusting follows
inferentially from (a) a statement of the initial conditions (the
moisture in the air, the constitution of the bar) and (b) a further
law-like statement connecting the moisture, the constitution of the
bar, and the onset of rust.
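Kim's reading of Hempel can be made vivid with the deductive-nomological schema itself. On one standard (and simplified) presentation of Hempel's covering-law model, with the rusting example's statements slotted in for illustration, the explanation takes the form:

\[
\frac{C\_1, \ldots, C\_k \qquad L\_1, \ldots, L\_r}{E}
\]

where \(C\_1, \ldots, C\_k\) state the initial conditions (the moisture in the air, the constitution of the bar), \(L\_1, \ldots, L\_r\) are the law-like statements, and the explanandum \(E\) (a sentence describing the onset of rust) follows deductively from the premises above the line. The relations at issue are thus relations of logical derivability among sentences, which is just what makes the view "internalist" in Kim's sense.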
This Hempelian framework seems internalist because what you see or
grasp, when you understand the phenomenon, are connections among the
propositions you accept--more exactly, you see or grasp different
inferential or probabilistic connections among the contents of your
beliefs that bear on the rusting. As Kim puts it:
>
>
> the basic relation that generates an explanatory relation is a
> logico-linguistic one that connects descriptions of events, and the
> job of formulating an explanation consists, it seems, in merely
> re-arranging appropriate items in the body of propositions that
> constitute our total knowledge at a time. In explaining something,
> then, all action takes place *within* the epistemic system, on
> the subjective side of the divide between knowledge and the reality
> known, or between representation and the world represented. (Kim 1994
> [2010: 171-172])
>
>
>
An *explanatory externalist* by comparison holds that the basic
connections or relationship one grasps, when one understands, are not
logico-linguistic but metaphysical. What you grasp, when you
understand why the metal began to rust, are not primarily
relationships among your beliefs or their contents. Rather, you
primarily grasp real, mind-independent relationships that obtain in
the world.
Theorists who hold that the objects of understanding are
internal--i.e., logico-linguistic relations, the grasp of which
yields understanding--vary somewhat about which relations count.
Although it seems clear in Hempel that the objects are inferential
relationships of deductive and inductive support, epistemologists
often appeal in a general way to relations of *coherence*. Thus
Carter and Gordon write:
>
>
> We think it is clear that *objectual understanding*--for
> example, as one attains when one grasps the relevant coherence-making
> relations between propositions comprising some subject matter--is
> a particularly valuable epistemic good... Understanding wider
> subject matters will tend to be more cognitively demanding than
> understanding narrow subject matters because *more*
> propositions must be believed and their relations grasped. (Carter
> & Gordon 2014: 7-8)
>
>
>
Kvanvig likewise points to the importance of coherence:
>
>
> Central to the notion of understanding are various coherence-like
> elements: to have understanding is to grasp explanatory and conceptual
> connections between various pieces of information involved in the
> subject matter in question. Such language involves a subjective
> element (the grasping or seeing of the connections in question) and a
> more objective, epistemic element. The more objective, epistemic
> element is precisely the kind of element identified by coherentists as
> central to the notion of epistemic justification or rationality, as
> clarified, in particular, by Lehrer (1974), BonJour (1985) and Lycan
> (1988). (Kvanvig 2018: 699)
>
>
>
Other epistemologists similarly point to coherence relations as the
objects of understanding (e.g., Riggs 2003: 192). Since
"coherence" is presumably a relation that holds among the
contents of beliefs, and not among items out there in the world, these
views would qualify as internalist accounts of the objects of
understanding, according to the Kimean framework. (For further
examples and criticism, see Khalifa 2017a.)
For externalists about the object of understanding, especially
concerning phenomena "out there in the world", the
proposed objects vary. For instance, there is some support for the
idea that the objects are *nomic relations*, or relationships
according to which individual events and other phenomena are explained
by laws, and upper-level laws are explained by lower-level or more
fundamental laws (Railton 1978). Another view is that the objects are
*causal relations* (Salmon 1984; Lipton 1991 [2004]), or more
generally *dependence relations* (Kim 1974, 1994; Grimm 2014,
2017; Greco 2014; Dellsén 2020), where causation is usually
taken to be just one species of dependence.
A variation on the "dependence relationships" idea is that
the objects of understanding are "possibility spaces" (cf.
le Bihan 2017). Indeed, on the plausible assumption that dependence is
an essentially modal notion--that is, fundamentally tied to ideas
of possibility and necessity--dependence relations ineliminably
give rise to or generate possibility spaces. This is in keeping with a
suggestion by Robert Nozick that
> explanation locates something in actuality, showing its actual
> connections with other things, while *understanding* locates it
> in a network of possibility, showing the connections it would have to
> other nonactual things or processes. (Nozick 1981: 12; cf. Grimm
> 2008)
Note that the distinction between internal and external explanation
relations is related to, but cross-cuts, another influential
distinction by Wesley Salmon, between ontic and epistemic accounts of
explanation (Salmon 1989; for discussion see Bechtel & Abrahamsen
2005 and Craver 2014). Salmon's distinction has come in for
criticism (see, e.g., Illari 2013; Bokulich 2016, 2018), because among
other things it is not clear why epistemic accounts should not be
ontic or world-involving; knowledge is an epistemic concept, after
all, but it is ontic or world-involving in virtue of the truth
condition. The distinction between explanatory internalism and
externalism plausibly avoids this concern.
Suppose in any case that the objects of understanding--the
connections or relations we grasp when we understand--are out
there in the world, and are not simply logico-linguistic items or
elements of our psychology. According to a further important
distinction from John Greco (2014), it does not follow that these
logico-linguistic/psychological items--or more generally these
*mental representations*--do not play a crucial role in
our coming to understand, because the representations typically play
the role of *vehicles* of understanding, even if they are not
themselves understanding's object. More generally, Greco notes,
it is important to distinguish between the *object of
understanding* vs. the *vehicle of understanding*, i.e.,
"between the *thing* understood and its
*representation*" (Greco 2014: 293).
Thus consider that a good map--say, of Midtown Manhattan--is
typically a vehicle of understanding rather than an object of
understanding. When it is accurate, it allows the mind to grasp how
the streets and landmarks of Midtown are laid out, and to appreciate
the relationships they bear to one another. Or again, typically when
you take a look at your car's gas gauge and form the belief that
the tank is half empty, the gauge is the *vehicle* by which you
form your belief about the gas, but it is not the *object* of
your belief (see Dretske 1981). Assuming that the gauge is functioning
properly, the object of your belief is the gas itself, while the gauge
is simply the vehicle that represents the state of the gas to you.
(Of course, either the map or the gauge could *become* an
object of understanding--one could think about how the streets
and landmarks are represented, e.g., the different colors or shapes or
fonts the map uses to represent Midtown, or the length of the
gauge--but this would be to "involve a representation of a
representation" (Greco 2014: 293). And while the mind is capable
of such an act, it represents an element of abstraction, and plausibly
a departure from our usual way of engaging with the world.)
This distinction between objects and vehicles of understanding leaves
open the possibility that mental representations might not be the only
vehicles of understanding, or the only way of latching on to real
relations in the world. Perhaps a person might come to understand the
world by first manually manipulating it, thus noting the possibilities
that it affords and how its different elements are connected (cf.
Lipton 2009; Kuorikoski and Ylikoski 2015). This would be a way to
"directly" grasp causal structure, a structure that could
then--cognitively downstream, as it were--be represented by
the mind, perhaps in the form of mental maps, or "dependence
maps" (see Section 2.2).
Appreciating the contrast between vehicle of understanding and object
of understanding helps to reveal that in addition to pure internalist
views about the object of understanding (where the objects of
understanding are logico-linguistic relations), and pure externalist
views (where the objects are worldly), there are also possible hybrid
views.
For instance, on the sort of view we find in Michael Strevens's
*Depth* (2008), what we grasp in the first instance are
relations that hold among the contents of our beliefs: relations of
deductive entailment or coherence or probabilistic support. This view
is not simply internalist, however, because grasping these
logico-linguistic relationships, and in particular the relationships
of deductive entailment, "mirror" the real relationships
that obtain in the world, and thus provide a vehicle for apprehending
those relationships (Strevens 2008: 72). Thus Strevens holds that his
theory is able to:
> show how a deductive argument, or similar formal structure, can
> represent causal influence.... When a deductive argument is of
> the right sort to represent a causal process that has as a consequence
> the state of affairs asserted to hold by the argument's
> conclusion, I say that the argument *causally entails* the
> conclusion. A derivation that causally entails its conclusion, then,
> is one that can be used to represent (whether truly or not) a causal
> process that has the concluding state of affairs as its consequence.
> (Strevens 2012: 449)
On this view, by grasping or appreciating the necessitating force of
an entailment relation one thereby grasps or appreciates the
necessitating force of a causal relation--or, at least, something
importantly like it.
According to another hybrid approach, the internal logical or
probabilistic relationships do not *mirror* the relationships
out there in the world, but rather *provide evidence* for the
existence of relationships out there in the world. Thus some hold that
when things go well an appreciation or grasp of the probabilistic
connections among the various things we believe allows us to infer the
existence of real causal connections in the world (see Spirtes,
Glymour, & Scheines 1993; Pearl 2000 [2009]). As
Wesley Salmon characterizes an earlier version of this idea,
> The explanatory significance of statistical relations is indirect.
> Their fundamental import lies in the fact... that they constitute
> evidence for causal relations. (Salmon 1984: 192)
Bearing in mind earlier distinctions, these would be accounts on which
one's appreciation of statistical relations is the vehicle
through which one grasps real (external) causal relations in the
world.
### 2.2 Psychology of Understanding
With respect to the psychology of understanding, recall that the
psychological element of propositional knowledge is typically
construed in terms of *belief*, where belief is taken to be a
kind of assent or saying "Yes" to the content of the
proposition. Thus to believe that the sky is blue is to assent to the
proposition *that the sky is blue*; it is "taking it to
be true" *that the sky is blue*.
When we turn to understanding, by contrast, some have claimed that a
new suite of cognitive abilities comes onto the scene, abilities that
we did not find in ordinary cases of propositional knowledge. In
particular, some philosophers claim that the kind of mental action
verbs that naturally come to the fore when we think about
understanding--"grasping" and "seeing",
for example--evoke mental abilities "beyond belief",
i.e., beyond simple assent or taking-to-be-true (for an overview, see
Baumberger, Beisbart, & Brun 2017).
For instance, Elgin, de Regt, and Wilkenfeld argue that those who
grasp how things are connected are able to cognitively
"do" things that others cannot--they can apply their
understanding to new cases, for example, and draw new inferences.
(See, for example, Elgin 1996: 122-24; de Regt 2004, 2017;
Wilkenfeld 2013). Thus car mechanics who understand how engines work
are able to make sense of engines that they have not encountered
before, and to anticipate how changes in one part of the engine will
typically lead (or fail to lead) to changes in other parts.
Grimm claims that the distinctive nature of these abilities flows from
the distinctive nature of the objects of understanding (Grimm 2017).
Suppose that the connections or relations one grasps are complex
enough to constitute what we might call
"structures"--structures with parts that depend upon
one another in various ways. Arguably, for the mind to take up these
structures in the right way, it is not enough simply to assent to
their existence. Rather, one needs to appreciate how the structure
"works", or how changes in its various parts will lead, or
fail to lead, to changes in other parts.
For instance, suppose the structures to be grasped are represented by
"causal maps" or "dependence maps"--maps
with nodes representing variables that can take on different values
and thus represent directions of dependence (Gopnik & Glymour
2002; Gopnik et al. 2004). In the schematic image below, from Gopnik
et al. (2004), the map represents a system in which changes to the
value of *Z* bring about changes to the value of *S*,
changes to the value of *S* bring about changes to the values of
*X* and *R*, and so on.
![6 nodes: Z, S, R, X, W, and Y. Z is above S, R is below S, X is to the right of S, W is to the right of X, Y is below X. Arrows go from Z to S, S to R, S to X, X to W, Y to X and Y to S](figure1.svg)
According to Grimm (2017), a striking feature of these maps is that
they are, as it were, "mobile" maps. That is, they are
cognitive representations that by their very nature can adapt and
change as the variables represented by the map take on different
values. Put another way, they are "unsaturated" maps, in
the sense that they are characterized in terms of unsaturated
variables that can *become* saturated by taking on different
values. What this means in terms of cognitive uptake is important,
Grimm argues, because if the maps are mobile or unsaturated in this
way, then the mind that aptly takes them up must itself be mobile. In
particular, the mind that takes up causal maps in a way that yields
understanding must be able to anticipate how varying or adjusting the
value of one of the variables will lead (or fail to lead) to changes
in the values of the other variables (cf. Woodward 2003).
Relatedly, Alison Hills characterizes the distinctive psychological
abilities that undergird understanding in terms of "cognitive
control" (Hills 2016). For example, Hills claims that in order
to understand why it is right to give money to charity, it is not
enough to simply believe the proposition *that it is right to give
money to charity because we owe assistance to the very needy*
(Hills 2016: 669), because I could accept a "because"
claim along these lines on the basis of testimony while having only a
very dim sense of how the two ingredient claims (that it is right to
give money and we owe assistance to the very needy) are related. What
is needed in addition is an appreciation of the relationship between
these two claims. And what this involves, according to the cognitive
control view, is an ability to "manipulate" the things
standing in relationship: for example, to be able to make a variety of
inferences in light of the relationship (Hills 2016: 663).
Although the views we have considered so far agree in thinking that
there is "something special" from a psychological point of
view about understanding, other accounts claim that there is nothing
particularly special about understanding's psychological
profile. More exactly, it is said that understanding does not appeal
to any special abilities or capacities that we do not find in ordinary
cases of knowledge.
Emily Sullivan, for instance, claims that even if we grant that
understanding involves abilities, it does not follow that there is
something special about understanding from a psychological point of
view, because ordinary propositional knowledge itself requires
abilities (Sullivan 2018). Thus I can apparently only know that the
traffic light is red, in the actual world, if I would have gotten the
color right in close possible worlds as well. More generally, knowing
seems to require the ability to track the truth about the world, so
that when the world changes, my cognitive attitude about the world
changes with it. But this ability to be responsive to changing
conditions is not obviously anything exotic, nor does it seem to
entail that the object of the mind is not a proposition (*that the
traffic light is red* seems like a paradigm example of a
proposition, after all).
Along the same lines, Kareem Khalifa claims that abilities are
involved in understanding, but holds that they are "especially
unspecial" (Khalifa 2017b: 56). For Khalifa, the modal character
of abilities is evident in normal scientific practice, because as
scientists evaluate explanations on the basis of testing they
naturally rule out inadequate explanations and gravitate towards
well-supported ones. But evaluating and testing hypotheses in this way
does not appear to require anything exotic or special from a
psychological point of view, or to entail that the object of
understanding is not propositional.
Finally, psychologists themselves have increasingly focused on
characterizing the cognitive profile of understanding. Thus Tania
Lombrozo and colleagues have explored the question of why we seek
understanding, and how activities such as offering explanations aid in
the acquisition and retention of understanding (Lombrozo, 2012;
Williams & Lombrozo, 2010, 2013; Lombrozo & Wilkenfeld 2019).
Psychologists have also explored the empirical question of how good
(or bad) we are at identifying real relationships and dependencies in
the world. According to Frank Keil, for example, we are not good at
this at all, and we frequently fall prey to illusions of understanding
(Keil 2006; cf. Ylikoski 2009). For instance, we often think we
understand how a helicopter achieves lift, or how moving the
pedals on a bicycle helps to propel the bike forward. But in both
cases, we are often well wide of the mark (Keil 2006; cf. Grimm 2009;
Sloman & Fernbach 2017).
### 2.3 Normativity of Understanding
Suppose you accurately grasp that your house burned down because of
faulty wiring. You do not think it burned down because of a lightning
strike, or a stray cigarette. Faulty wiring was the cause, and you
accurately grasp it as the cause (see Pritchard 2009, 2010 for more
details on a case like this).
Is that all there is to understanding why your house burned down? Or
does it also matter, for example, *how one comes to grasp* this
relationship? These questions push us to ask about the normative
dimension of understanding, and in particular to ask: Are there better
and worse ways to come by one's grasp, and does this matter for
the acquisition of understanding? For example, are careful experiments
or good evidence needed? Or can one come by one's grasp more or
less by luck, and yet still in a way that generates understanding?
A parallel question with respect to knowledge would ask: Supposing you
have a true belief, is that all there is to knowledge? And here most
would say "no". In addition to the true belief, *how you
came by that true belief* is also important. Is it
based on good evidence, or a reliable process, or a well-functioning
design plan? When it comes to knowledge, these further normative
questions clearly matter. It is therefore not surprising that the same
sorts of questions have been asked with respect to understanding.
While some argue that the normative profile of understanding is
essentially the same as the normative profile of knowledge (Grimm
2006; Khalifa 2013, 2017b; Greco 2014), others claim that they differ
in important ways, and especially with respect to the way in which
understanding, but not knowledge, seems compatible with *luck*.
Among theorists who see a difference between knowledge and
understanding here, we can distinguish those who claim that
understanding can be *fully externally lucky* from those who
think it can only be *partly externally lucky*.
A leading advocate of the *fully externally lucky* view is
Jonathan Kvanvig, who argues that how one comes by one's
accurate grasp matters--what we might call the
"etiology" of the accurate grasp--but it does not
matter in all the same ways that we find in cases of knowledge (see
especially Kvanvig 2003). In particular, it matters that the grasp was
acquired in a way that was *internally appropriate*
(especially, in accord with the evidence in one's possession),
but it does not matter that the grasp was *externally
appropriate*--for example, that one came by one's
evidence in a reliable way. Thus Kvanvig argues:
> What is distinctive about understanding has to do with the way in
> which an individual combines pieces of information into a unified
> body. This point is not meant to imply that truth is not important for
> understanding, for we have noted already the factive character of both
> knowledge and understanding. But once we move past its facticity, the
> grasping of relations between items of information is central to the
> nature of understanding. By contrast, when we move past the facticity
> of knowledge, the central features involve non-accidental connections
> between mind and world. (Kvanvig 2003: 197)
For instance, suppose you read a history book full of inaccurate facts
about the Comanche dominance of the Southern Plains in the nineteenth
century, but your dyslexia miraculously transforms them all into
truths (Kvanvig 2009). For Kvanvig, despite the one-in-a-billion
luckiness of your grasp's origins, you could nonetheless come to
an understanding of this topic. So long as the accuracy and internal
appropriateness conditions are satisfied, how you came by your
accurate grasp is not important.
Other philosophers have objected to Kvanvig's account. On their
view, it is implausible to think you can come to understand the world
through sheer external luck, or (perhaps worse) through being the
victim of massive deception (Grimm 2006; Pritchard 2010; Khalifa
2017b: ch. 7; Kelp 2021). Thus Pritchard claims that one needs to
acquire one's accurate grasp "in the right fashion"
(Pritchard 2010: 108) or "in the right kind of way"
(Pritchard 2010: 110)--in other words, by means of a reliable
source or method. One's grasp therefore cannot be the result of
"Gettier luck"--where, for instance, by sheer chance
a story intended to deceive you happens to be right (Pritchard
2010).
At the same time, philosophers such as Pritchard and Alison Hills hold
that understanding does tolerate *certain* kinds of luck, and
in a way that propositional knowledge does not (Pritchard 2010; Hills
2016). They therefore hold that understanding can be *partly
externally lucky.* Thus imagine your environment is filled with
misleading information about some event--the fire that destroyed
your house, for example. Suppose that only Source *X* will offer
you the truth about the fire--that faulty wiring was the
cause--while all of the other sources will offer plausible but
mistaken accounts. If you luckily rely on Source *X*, that source
can enable you to understand why the event occurred, even though it
cannot generate knowledge about why it occurred, due to the presence
of what Pritchard calls "environmental luck". The upshot,
on this view, is that while understanding is compatible with
environmental luck, it is not compatible with Gettier luck. They
conclude from this that since knowledge is compatible with neither
Gettier luck nor environmental luck, understanding is not a species of
knowledge.
(For recent empirical studies regarding the compatibility of judgments
of understanding with luck, and finding mixed support for this
compatibility, see Wilkenfeld et al. 2018; Carter et al. 2019.)
Regarding the normative profile of understanding and its relation to
knowledge, others have argued that understanding is not vulnerable to
defeat, especially in the face of known counterevidence, in the way
that knowledge is (Hills 2016; Dellsén 2017). Dellsén
for instance argues that if I come to believe or grasp that a car
engine works a certain way, and it really does work that way, then my
understanding of how the engine works is secure even if I have
(misleading) evidence that the person telling me about the engine is
unreliable (Dellsén 2017). To these philosophers, this provides
still another reason for thinking that understanding is not a species
of knowledge.
## 3. Special Issues in Epistemology
### 3.1 The Epistemic Value of Understanding
As part of what Wayne Riggs has called the "value turn in
epistemology", epistemologists have increasingly attempted to
identify the fundamental bearers of epistemic value (Riggs 2008). From
a purely epistemic point of view, is knowledge of just any sort
valuable? Or is the fundamentally valuable thing instead the ability
to provide reasons, or to possess beliefs that are in some way
"Gettier proof"?
Into this mix some have argued that the really valuable things from an
epistemic point of view are "higher grade" cognitive
accomplishments, such as understanding or wisdom (Riggs 2003; cf.
Baehr 2014). Thus by nature we do not seem to have any desire to
acquire trivial pieces of information, such as the name of the 589th
person in the 1971 Dallas phone book. But *any* instance of
understanding might seem worth pursuing or in some way worthwhile;
thus Riggs writes,
> it seems to me that any understanding, even of some subject matter we
> may consider trivial or mundane, contributes to the epistemic value of
> one's life. (2003: 217)
Understanding might therefore be a better candidate for a
fundamentally valuable epistemic state than knowledge or truth.
Just why understanding might have this property is up for debate. Some
proposals focus on the "internal" benefits that
understanding is alleged to provide. According to Zagzebski (2001),
understanding has a kind of first-person transparency, tied to an
ability to articulate reasons, that we do not always find in cases of
knowledge. Thus to say a chicken-sexer *knows* the sex of a
particular chicken without being able to cite his or her grounds is
one thing, but to say that someone *understands* some subject
matter--say, the U.S. Civil War--without being able to
explain it or to describe how the subject's different elements
depend upon and relate to one another seems like another, more
implausible step (cf. Pritchard 2010). Understanding therefore
arguably provides "a more natural home" (cf. Kvanvig 2003:
193) for internalist intuitions in epistemology than knowledge,
because it seems to more naturally appeal to notions such as
transparency, articulacy, and reason-giving than knowledge. (For the
view that understanding too can be inarticulate, see Grimm 2017.)
Another view, from Pritchard (2009, 2010), is that any instance of
understanding is valuable because it counts as an achievement, and
achievements are the kinds of things that are distinctively and
finally valuable. Instances of understanding count as achievements
(and indeed what Pritchard calls *strong* achievements) either
because they involve overcoming an obstacle or because they bring to
bear "significant cognitive ability". For Pritchard,
however, the same cannot be said for any instance of knowledge. For
example, in many cases I might come to believe the truth on the basis
of testimony, but the credit for my true belief seems to belong more
to the testifier than to me (cf. Lackey 2007). I do not reach the
truth on this question primarily because of my ability or skill, but
because of the ability or skill of the testifier; such items of
knowledge therefore do not count as achievements, and hence lack final
value.
Against this, some have objected that there are many cases of
"easy understanding" where no significant obstacle seems
to be involved, and no significant cognitive ability at play (e.g.,
Lawler 2019). For instance, it seems I could come to understand why my
tumble-dryer isn't working--because it is
unplugged--quite easily and without bringing to bear any
particularly significant cognitive ability (Carter & Gordon 2014:
5). It has therefore been claimed that not just *any* item of
understanding is valuable, and particularly not just any item of
"understanding why". Rather, only so-called *objectual
understanding* is distinctively valuable, where objectual
understanding is said to concern "wide" subject matters
that involve "a large network of propositions and relations
between those propositions" (Carter & Gordon 2014: 8; cf.
Khalifa 2017b: ch. 8).
A final proposal is that understanding is a better candidate for the
*goal of inquiry* than knowledge (Pritchard 2016; cf. Kvanvig
2003: 202 and Kelp 2021). Suppose you learn from a reliable source
that the rising and falling of the tides are due to the moon's
gravitational pull. You will then apparently have knowledge of why the
tides rise and fall, but according to Pritchard this knowledge will
not properly close your inquiry or satisfy your curiosity.
> Indeed, one would expect our subject to continue asking questions of
> her informant until she gains a proper explanatory grip on how cause
> and effect are related; mere knowledge of the cause will not suffice.
> (Pritchard 2016: 34)
On this view, an explanation of how cause and effect are related is
essential to understanding why, and one's inquiry will not reach
its natural end until one resolves this question.
### 3.2 Testimony
A key issue in social epistemology is whether understanding, like
ordinary propositional knowledge, can be transmitted *via*
testimony. Thus it seems that I can transmit my knowledge to you that
the next train is arriving at 4:15, just by telling you. Transmitting
understanding, however, does not seem to work so easily, if it is
possible at all. According to Myles Burnyeat,
> Understanding is not transmissible in the same sense as knowledge is.
> It is not the case that in normal contexts of communication the
> expression of understanding imparts understanding to one's
> hearer as the expression of knowledge can and often does impart
> knowledge. (Burnyeat 1980: 186)
Thus a teacher might try to convey her understanding of some subject
matter--say, of how Type II Diabetes works--but there is no
guarantee that the understanding will in fact *be* transmitted,
or that her students will come to see or grasp what she herself sees
and grasps.
Supposing that understanding cannot be transmitted by testimony, there
are a few explanations for why this might be the case. For one,
Zagzebski argues that the kind of "seeing" or
"grasping" that seems integral to understanding is
something a person can only do "first hand"--it
cannot be inherited from anyone else (Zagzebski 2008: ch. 6). For
another, and according to Hills, if we agree that understanding is a
skill or ability, then understanding will be difficult or impossible
to transmit because skills and abilities in general are difficult or
impossible to transmit (Hills 2009, 2020). It is therefore a specific
case of a more general phenomenon.
Others have argued that understanding is not, in fact, so difficult to
transmit. Suppose I ask why you are late for our meeting, and you tell
me "traffic". It then seems like you have transmitted your
understanding to me quite directly--apparently just as directly
as when you communicate your knowledge to me that the next train is
coming at 4:15 (Grimm 2020). What seems required for successful
transmission of understanding is thus that the right conceptual
scaffolding is in place, on the part of the recipient. But supposing
the scaffolding is in place, understanding can plausibly be
transmitted in much the way knowledge can (Boyd 2017; cf. Malfatti
2019, 2020). Others argue that since something like propositional
knowledge of causes entails at least a low degree of understanding, a
low degree of understanding can be transmitted via testimony in the
same way as knowledge can (Hu 2019).
It has also been pointed out that even if we agree that seeing or
grasping needs to be first-hand, and that others cannot do this for
me, the same "first-handedness" apparently holds for
belief. That is, no one else can *believe* for me, because
believing is a first-personal act (Hazlett forthcoming: sec. 2.3). But
just as this fact about belief does not lead us to think that
propositional knowledge cannot be transmitted, so too it should not
lead us to think that understanding cannot be transmitted (Boyd
2017).
In relation to moral testimony in particular, understanding has been
invoked to explain why it often seems odd or "fishy" to
defer to others about moral issues, such as whether eating meat is
morally wrong (for an overview, see Callahan 2020; cf. Riaz 2015). One
explanation of the fishiness is that the epistemic good we really want
with respect to moral questions is not mere knowledge but rather
understanding. It is then said, for reasons tied to those just
mentioned, that understanding cannot be easily (if at all) transmitted
by testimony--either because it is an ability, and abilities
cannot easily be transmitted (Hills 2009, 2013, 2016) or because
genuine understanding in moral matters involves a suite of emotional
and affective responses that cannot easily be transmitted by testimony
(Callahan 2018). Deferring to moral testimony is therefore fishy, it
is argued, because it does not get us the epistemic good we really
want--moral understanding.
## 4. Special Issues in the Philosophy of Science
### 4.1 Explanation and Understanding
We seek explanations for an epistemic benefit, but how should we think
about that benefit? We noted above (Section 1.2) that philosophers
such as Hempel have significant reservations about thinking about the
benefit in terms of understanding. As they point out, just as a
mistaken explanation can generate a "feeling" of
understanding in some people, so too an accurate explanation might
leave people cold.
In response to this concern, others note that the feeling of
understanding (or the phenomenology of understanding) should not be
confused with the epistemic state itself, any more than the feeling of
knowing should be confused with the epistemic state of knowledge (de
Regt 2004, 2017). But even if this move is granted, and we think of
understanding as a full-bodied epistemic state and not a mere feeling,
the relationship between explanation and understanding remains
controversial.
According to what we might call an "understanding-first"
approach to the relationship, thinking about the epistemology of
understanding is indispensable for thinking about what makes for a
good explanation. More exactly, because we seem to assess the goodness
or badness of an explanation in terms of its ability to generate
understanding, understanding is in some sense conceptually prior to,
or more basic than, the notion of explanation. Thus intuitions about
understanding are often taken to be diagnostic of the goodness (the
explanatoriness) of an explanation (Wilkenfeld 2013; cf. Wilkenfeld
& Lombrozo 2020), and if we can think of a case where all of the
conditions for the theory of explanation are met, but understanding
does not result, that is a reason to reject the sufficiency of the
conditions (see, e.g., Woodward 2003: 195, in diagnosing
Hempel's view). Similarly, if we have understanding in ways not
sanctioned by the theory, that is reason to think the conditions are
not necessary. More generally, an advocate of the
"understanding-first" approach would likely concur with
Paul Humphreys's claim that:
>
>
> Scientific understanding provides a far richer terrain than does
> scientific explanation and the latter is best viewed as a vehicle to
> understanding, rather than an end in itself. (Humphreys 2000: 267; cf.
> Potochnik 2017: ch. 5)
>
>
>
Against understanding-first approaches, there are
"debunking" approaches according to which understanding is
of little if any help to accounts of explanation. (More positively,
these might simply be considered "explanation-first"
approaches.) Thus Khalifa argues that attempts to revive understanding
as a central notion in the philosophy of science have amounted to
little more than a repackaging of existing models of explanation
(Khalifa 2012, 2017b: ch. 3), and that strictly speaking all one needs
for a plausible account of understanding is a plausible account of
what counts as a good or correct explanation, combined with a
plausible account of knowledge. Understanding therefore amounts to
*knowing a correct explanation*. But then nothing new or
special is needed to theorize about understanding; our accounts of
explanation and our theories of knowledge do all the important
theoretical work (see especially Khalifa 2017b; cf. Kelp 2015).
Similarly, consider the following attempt, from Bradford Skow, to
characterize the relationship between explanation and
understanding:
>
>
> *Something E is an explanation of why Q only if
> someone who possesses E understands why Q*. (Skow
> 2018: 214)
>
>
>
This condition on explanation cannot help us as theorists, Skow
argues, because it is not even true. Thus suppose we construe
"possessing" an explanation as *knowing* an
explanation. Plugged into the formula, the result is that one cannot
know an explanation of why *Q* unless that knowledge constitutes
an understanding of why *Q*. But, plausibly, we can know an
explanation of why *Q* without understanding why *Q*. For
example, someone can know why the litmus paper turned
red--because it was dipped in acid--without understanding
why it turned red (Skow 2018: 215). This suggests it takes
*more* to understand why *p* than simply to know why *p*. In the
case of the litmus paper, Skow claims one also needs to appreciate
*how* the acid turns the paper red, or *why* it turns it
red.
Opinions differ about the force of such examples. According to some,
to require that understanding why *p* involves knowledge of mechanisms
or deeper processes in this way would annihilate most of our everyday
understanding (Grimm 2019a). Thus I can apparently understand why my
eyes are watering--because I am chopping onions--without
appreciating anything about the mechanism or connection that underlies
the watering. Others argue that there is indeed understanding in these
cases, although perhaps not a lot (Sliwa 2015).
### 4.2 Understanding and Idealization
A longstanding puzzle in the philosophy of science concerns how
idealized models and representations ("idealizations", for
short) allow cognitive access to the world. The puzzle is that
although idealizations seem to provide real epistemic benefits, the
benefits cannot apparently be identified with truth, because to the
extent that they falsify or misrepresent the world, idealizations often
fail to reflect the truth. For example, idealizations often appeal to
entities that do not and perhaps cannot exist--fully rational
agents, frictionless planes, etc. Or they subtract important worldly
elements from their accounts--e.g., long range inter-molecular
forces.
Yet if idealizations provide epistemic benefits, and we cannot readily
think of the benefits in terms of truth, then how exactly should we
think about them? According to some philosophers, we should think not
in terms of truth but rather in terms of understanding. Understanding
is the epistemic benefit we receive from idealizations, and
understanding and truth can come apart. On this view, understanding
(unlike knowledge) can therefore be "non-factive" (Elgin
2004, 2017; Potochnik 2017; cf. Sullivan & Khalifa 2019).
For instance, Elgin asks us to consider a paint sample we might see in
a hardware store--say, of the shade jonquil yellow (Elgin 2017:
187). While the sample might in fact be subtly different than the
actual shade, and hence in that sense misrepresent it, it nonetheless
seems to provide epistemic access to the shade. Idealizations such as
the Ideal Gas Law or Snell's law are then held to work in a
similar way. They represent not by mirroring but rather by
exemplifying certain aspects of the target system, and via the
mechanism of exemplification enable understanding of the system (Elgin
2017).
Others agree that idealizations enable scientists to understand the
world even though, strictly speaking, the idealizations
misrepresent their target systems. According to Angela Potochnik
(2017), this is because the systems studied by scientists are
enormously complex networks of causal patterns and interactions.
Particular interests and cognitive limitations will lead scientists to
focus on some of these causal patterns and neglect others, and they
might get those particular patterns right. In the process, however,
they will misrepresent the actual messy complexity of the target
system, and the understanding of phenomena that results will thereby
be non-factive.
For Strevens, idealizations enable understanding because they draw
attention to the difference makers that bring about the thing to be
explained (Strevens 2017). More exactly, idealizations help us to
appreciate the factors that make a difference to bringing about the
phenomena we want to explain, and to identify the factors that do
not make a difference. The Ideal Gas Law helps us to understand the
Boylean behavior of a gas, for example, even though the law imagines
that there are zero long-range forces at work, because the existence
of those long-range forces does not make a difference to the derivation
of the Boylean behavior: they are insignificant enough that their
values can be set to zero without loss.
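The role the idealization plays here can be made explicit with a simple sketch (this is an illustrative reconstruction, not Strevens's own formalism): at fixed temperature and amount of gas, the Ideal Gas Law already yields Boyle's law, and no parameter representing long-range intermolecular forces appears anywhere in the derivation.

```latex
% Ideal Gas Law -- note that no term for long-range
% intermolecular forces occurs on either side:
\[
  PV = nRT
\]
% Holding the temperature T and amount of gas n fixed:
\[
  PV = nRT = \text{const}
  \quad\Longrightarrow\quad
  P \propto \frac{1}{V}
  \qquad \text{(Boyle's law)}
\]
```

Since no force parameter occurs in the derivation, setting such forces to zero costs nothing: on Strevens's picture, the idealization flags them as non-difference-makers for the Boylean pattern rather than asserting their literal absence.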
One difference between theorists such as Strevens and Potochnik may
lie in the nature of the explanandum. For Potochnik, it seems clear
that the thing to be explained is a concrete, fine-grained
phenomenon: the behavior of a gas in a container, for example. For
Strevens, it is a more abstract, coarse-grained, and high-level event:
the expansion of the gas. Thus for Strevens the thing to be explained
is something along the lines of: *that* the gas expanded, not
*how* it expanded. If one were to hold the explanandum
consistent, there might be notable agreement between these views
(though see Potochnik 2016 for further discussion).
## 5. Humanistic Understanding
To this point we have mainly focused on what it takes to understand
phenomena in the natural world, such as rusting metal bars, or the
rising and falling of the tides, or the behavior of gases. A
longstanding question, however, especially in the Continental
tradition of philosophy and especially in the philosophy of the social
sciences, is whether understanding human beings and their artifacts
requires something different and distinctive from an epistemic point
of view--perhaps, for example, a distinctive set of abilities or
methodologies, tied to the distinctive objects (or, importantly,
subjects!) we are trying to understand.
Moreover, a long string of influential thinkers--starting roughly
with Giambattista Vico and continuing through figures such as Wilhelm
Dilthey and R. G. Collingwood--answer this question
affirmatively. Distinctive abilities and perhaps methodologies
*do* come on the scene when we try to understand human beings,
at least when we try to understand them in a particular way. In broad
strokes, the idea is that there is a kind of understanding of other
human beings that we can only acquire by reconstructing their
perspectives--in some sense, "from within" those
perspectives and according to their own terms (cf. Ismael 2018).
Sometimes this approach has been referred to as "the
*verstehen* tradition"--in honor of its special
roots in the German tradition, and leaving untranslated the German
word for understanding (Martin 2000; Feest 2010). Alternative labels
for this approach include "the interpretative tradition",
"the historicist tradition", "the hermeneutic
tradition", and "the humanistic tradition" (for
overviews, see Hiley et al. 1992; Stueber 2012). For convenience, we
will refer to this as "the *humanistic* tradition".
It is not an essential part of this tradition that human beings can
*only* be understood by reconstructing their perspectives, but
the tradition does characteristically claim that there is special
value in trying to do so. It also typically claims that taking up
perspectives "from within" requires distinctive cognitive
resources and perhaps distinctive methods--resources and methods
that are not clearly needed when we attempt to understand natural
phenomena.
Giambattista Vico (1668-1744) was among the first to try to
articulate how exactly understanding human beings differs from
understanding the natural world (see especially Vico 1725 [2002]).
According to Vico, just as we have a special *scienza*
(knowledge or understanding) of things we have made or produced
ourselves, so too we can have special insight into things that other
human beings have made or produced--where the things made or
produced included not just physical artifacts but also human actions.
Vico further postulated a special ability--*fantasia*, or
reconstructive imagination--by which we can enter into the minds
of others and see the world through their eyes and in terms of their
categories of thought (Miller 1993: ch. 5).
Further north in Europe, Johann Gottfried Herder (1744-1803)
held that because human nature is not static, or fixed irrespective of
time and place, understanding alien cultures requires a process of
*Einfühlung*--a "feeling one's way
into"--the culture in question, in all its specificity and
perhaps with an eye to its particular genius. In Herder's hands,
however, *Einfühlung* was not a quasi-mystical or
irrational attempt to leap into the minds of others. Instead, it was
often a slow, methodical process that needed to be aided by careful
historical-philological inquiry (Forster 2002: xvii). (For central
texts, see Herder 2002.)
In the late nineteenth and early twentieth century, Wilhelm Dilthey
(1833-1911) became the most celebrated advocate of the idea that
the human sciences--in German, the
*Geisteswissenschaften*, such as history--had a different
focus than the natural sciences, and centered on the idea of inner or
*lived* experience. (See Dilthey 1883 [1989] for a
characteristic work.) Someone's lived experience, moreover, was
constituted not only by their beliefs or attitudes, but also by their
emotions and volitions. For Dilthey, as Frederick Beiser puts the
idea,
>
>
> We understand an episode in a person's life only when we see how
> it plays a role in realizing his basic values, his conception of what
> makes life worth living. (Beiser 2011: 334)
>
>
>
In England, the humanistic approach was championed especially by R. G.
Collingwood (1889-1943). According to Collingwood, what
distinguishes human history from, say, geological history, is that
human history is suffused with thought--i.e., it is a record of
human aspirations, goals, beliefs, frustrations, and so on. We can,
moreover, have two possible stances towards these thoughts. We can try
to map them from a third person point of view, and identify the
various ways the thoughts depend upon and relate to one another. But
we can also try to "re-enact" or "rethink"
them (D'Oro 2000, 2004). Thus Collingwood held that,
>
>
> The events of history are never mere phenomena, never mere spectacles
> for contemplation, but things which the historian looks, not at, but
> through, to discern the thought within them. (Collingwood 1946 [1993:
> 214])
>
>
>
### 5.1 Perspective-taking and its Critics
A central tenet of the humanistic tradition is that when an agent has
a first-person perspective on the world, there is value in trying to
"take up" or "assume" that perspective, rather
than simply trying to map that agent's perspective from a
detached, third-person perspective. Abraham Lincoln, for instance,
could presumably be understood from a third-person, detached
perspective by a psychologist adept at mapping the various ways in
which Lincoln's beliefs and desires, hopes and fears, seemed to
depend upon and relate to one another. What the humanistic tradition
says is that even if the psychologist were to do this exquisitely, so
that all of these relationships were accurately plotted, there would
still be an important sort of understanding missing--an
understanding of what it was like to be Lincoln from the inside, or to
regard the world as he regarded it, at least in part.
Philosophers differ about the epistemic value of this project of
perspective-taking. Thus some claim that there are certain
*facts* that only show up from a first-person point of view
(Nagel 1974), and hence can only be apprehended from that point of
view. For instance, if I want to apprehend facts about "what it
is like" to experience the world from your perspective, I
arguably need to try to place myself, somehow, in that perspective.
Others claim that certain *concepts* only show up for us from a
first-personal, engaged perspective. For instance, perhaps the concept
of chronic pain can only be acquired by someone who has experienced
it, or the concept of romantic love, or racial bigotry. A related view
is that while these concepts can perhaps be *acquired* without
the relevant first-person experience, they cannot be adequately
grasped or mastered without that experience. As Stephen Turner
writes:
>
>
> When a mother tells her 13 year old daughter that she does not know
> what "love" is, she is not making a comment about
> semantics; she is pointing to the nonlinguistic experiential
> conditions that are bound up with the understanding of the term that
> the daughter does not share. (Turner 2019: 254)
>
>
>
The thought here seems to be that while the daughter might have the
concept of love, her grasp of the concept will be extremely poor
without the benefit of the first-person experience, and her attempts
to apply it will be unreliable.
In fields such as anthropology, ethnography, and sociology, and in the
philosophical areas that reflect on these disciplines (primarily, the
philosophy of social science), questions related to the epistemic
value of perspective-taking gave rise to debates about whether it is
necessary to immerse oneself in a society in order to understand it
adequately. According to some, the answer is "yes": if I
want to understand a society or culture, I need to understand it from
the participants' perspective, with special sensitivity to the
concepts and rules that guide the participants' perspective,
even if only implicitly (see especially Winch 1958, 1964). It is then
sometimes claimed that one can only really grasp or master these
concepts by participating in the relevant form of life. As the
ethnographer James Spradley writes:
>
>
> Immersion is the time-honored strategy used by most ethnographers. By
> cutting oneself off from other interests and concerns, by listening to
> informants hours on end, by participating in the cultural scene, and
> by allowing one's mental life to be taken over by the new
> culture, themes [the implicit beliefs of a culture] often emerge
> ... This type of immersion will often reveal new relationships
> among domains and bring to light cultural themes you cannot discover
> any other way. (Spradley 1980: 145; cf. Kampa 2019)
>
>
>
Another view is that immersion, while perhaps helpful, is not strictly
necessary. What is necessary is recognizing the priority of the
participants' perspective, and using that perspective to
characterize the activities that need to be understood in the first
place (Taylor 1971; McCarthy 1973). If we want to make sense of what
looks like (say) voting or praying in another culture, we need to
adopt that culture's way of identifying what counts as voting or
praying (which perhaps will turn out to be a distinct but closely
related activity, such as voting\* or praying\*). Carving up the
behavior according to our *own* categories will only
misrepresent what is happening and lead to misunderstanding.
Peter Winch, among others, further argues that it is hard or
impossible to try to adopt another culture's way of carving up
the world without somehow taking that culture on board in a holistic
way, because it is hard or impossible to characterize particular bits
of behavior in isolation (Winch 1958). To make sense of what a
medieval knight means by praying (or praying\*), for example, I might
need to connect this with other concepts (perhaps such as honor, or
duty, or salvation) that again might be quite different than my own.
The result might then be that one gradually acquires what Alasdair
MacIntyre calls a "second first language", as one's
grasp of the relationships between different concepts and their
applications increases (MacIntyre 1988: ch. 19).
Against the push for perspective-taking as a source of understanding,
especially in the social sciences, at least three strains of criticism
arose: Positivist, Critical, and Gadamerian.
The Positivist critique--associated with thinkers such as
Theodore Abel (1948), Ernest Nagel (1953), and Richard Rudner
(1966)--had a number of prongs. For one thing, there were grave
doubts about the reliability of the processes associated with
perspective taking. After all, it seems all too easy to project
one's own cares and concerns onto the minds of others. It was
also argued that reliving experiences, to the extent that this is
possible, does not actually amount to an explanation of why the
experiences happened. I could, after all, have lived through a series
of events myself, but not been able to explain or understand
them--so why should imaginatively living through someone
else's experiences, to the extent this is possible,
automatically grant me understanding? (For an overview of these
objections, see Martin 2000; Fay 2016; Beiser 2019.)
Critical Theorists have argued that the first-person perspective is
not the most theoretically or politically important perspective,
because it often masks deeper and more significant sources of behavior
(see Warnke 2019). The deeper sources might include an agent's
own, unacknowledged motivations, or they might include the
presuppositions, power dynamics, and economic conditions of the
society that shaped the possibility spaces in which the agent moves.
It is thus what happens "behind the back" of the subject
that is often crucial for truly understanding behavior--the
hidden impulses and power dynamics and systems of oppression that are
often obscured from the perspective of the agent, sometimes in an
unacknowledged way by the agent herself. (Here we also find
influential Marxist and Feminist analyses and critiques. See Alcoff
2005; Warnke 2014.) A hybrid view, from Habermas, is that we need
to be able to shift back and forth between the subject's
perspective and the structural perspective, if we are to properly
understand the reality of social life (Habermas 1981 [1984/1987]; for
an overview, see Baynes 2016).
The Gadamerian critique, finally, is that we can never fully jump
outside of our own cares and concerns in order to adopt the cares and
concerns of others. Reliving or re-experiencing--in the sense of
transposing ourselves *out* of our framework and *into*
the framework of other agents or cultures--is thus an impossible
ideal. Instead, we always take our frameworks with us (Gadamer 1960
[1989]). For Gadamer, what understanding others--and especially
their written or spoken words--requires is not transposing
ourselves into their worldviews, but trying to fuse our worldviews in
some way (*Horizontverschmelzung*). Perhaps, for example, my
conception of voting or prayer will become enlarged or modified by
engaging with the conceptions I find in other cultures, and perhaps
especially through a dialogue with the texts of those cultures.
Whether this fusion essentially amounts to a project of translation
(from one worldview to another), or something else is a matter for
debate. (See Vessey 2009 for an overview.)
### 5.2 Understanding as an Ontological Category
An additional question, especially in the Continental tradition, is
whether we should think of understanding primarily in epistemic terms.
Perhaps it is better to think of understanding as an
*ontological* category--a way of being in the
world--rather than an epistemic one (for an overview, see
Fehér 2016).
According to Heidegger's influential account, for instance,
understanding is not a cognitive act that we might or might not
perform. Rather, it is our fundamental way of living in the world, and
we are "always already" engaged in understanding. Thus he
writes that:
>
>
> Understanding is not an acquaintance derived from knowledge, but a
> primordially existential kind of being, which, more than anything
> else, makes such knowledge and acquaintance possible (123-4;
> trans. by Wrathall 2013).
>
>
>
Just as Descartes spoke of himself as a *res cogitans*--a
thinking thing--so for Heidegger we are, fundamentally,
understanding things. As Gadamer elaborates the Heideggerian idea:
>
>
> Understanding is... the original form of the realization of
> Dasein, which is being-in-the-world. Before any differentiation of
> understanding into the various directions of pragmatic or theoretical
> interest, understanding is Dasein's mode of being. (Gadamer 1960
> [1989: 259])
>
>
>
One central element of this view is that we are always projecting
possibilities onto the world around us. Or rather, it is not as if the
world first presents itself as a bare thing empty of possibilities,
and then we construe it as possessing these possibilities; instead,
our experience of the world is from the first suffused with this sense
of possibility. To take an example that Samantha Matherne (2019) uses
to illustrate Heidegger's view: when I first apprehend the
martini in front of me, I take it as offering a variety of
possibilities--to be sipped, to be thrown, to be shaken, to be
stirred. If I then take the martini as to be sipped, I am seizing on
one of these possibilities and interpreting the martini in light of
this specific possibility. But my experience of the drink was never
devoid of an apprehension of possibilities altogether.
A further question is why, on the Heideggerian framework, the state of
projecting possibilities does not count as epistemic or
truth-evaluable, even if we allow that it is also in some sense
ontological. After all, the possibilities that we project onto the
world might not be genuine or grounded in reality. As Grimm (2008)
notes, someone might think that apparently solid things (like
baseballs) could not possibly pass through other apparently solid
things (like tables), but learning about quantum tunneling might call
this into question, and transform one's sense of possibility.
There thus seem to be facts about the possibilities that the world
affords, and it seems like a mind could either track these facts
accurately, or not. But this appears to be a topic of interest not
just to ontologists or metaphysicians, but to epistemologists as well
(cf. Westphal 1999).
## 1. Introduction
When we think of evolutionary theory and natural selection, we usually
think of organisms, say, a herd of deer, in which some deer are faster
than others at escaping their predators. These swifter deer will, all
things being equal, leave more offspring, and these offspring will
have a tendency to be swifter than other deer. Thus, we get a change
in the average swiftness of deer over evolutionary time. In a case
like this, the unit of selection, sometimes called the
"target" of selection, is the single organism, the
individual deer, and the property being selected, swiftness, also lies
at the organismic level, in that it is exhibited by the intact and
whole deer, and not by either parts of deer, such as cells, or groups
of deer, such as herds. But there are other levels of biological
organization that have been proposed to be units or targets of
selection--levels at which selection may act to increase a given
property at that level, and at which units increase or decrease as a
result of selection at that specific level of biological
organization.
But for over thirty years, some participants in the "units of
selection" debates have argued that more than one issue is at
stake. The notions of "replicator" and
"vehicle" were introduced, to stand for different roles in
the evolutionary process (Dawkins 1978, 1982a,b). In this case, the
individual deer would be called the "vehicles" and their
genes that make them tend to be swifter would be called the
"replicators." The genic selection argument proceeded to
assert that the units of selection debates should not be about
vehicles, as they formerly had been, but about replicators. It was then
asserted that the "replicator" actually subsumes two
distinct functional roles, which can be broken up into
"replicator" and "interactor":
>
>
> Dawkins...has replicators interacting with their environment in
> two ways--to produce copies of themselves and to influence their
> own survival and the survival of their copies. (Hull 1980: 318)
>
>
>
The new view would call the individual deer the
"interactors." It was then argued that the force of this
distinction between replicator and interactor had been
underappreciated, and if the units of selection controversies were
analyzed further, that the question about interactors should more
accurately be called the "levels of selection" debate to
distinguish it from the dispute about replicators, which should be
allowed to keep the "units of selection debate" title
(Brandon 1982; Mitchell 1987).
The purpose of this article is to delineate further the various
questions pursued under the rubric of "units and levels of
selection."[1]
Four quite distinct questions will be isolated that have, in fact,
been asked in the context of considering, what is a unit of selection?
In
section 2,
these distinct questions are described.
Section 3
returns to the sites of several very confusing, occasionally heated
debates about "the" unit of selection. Several leading
positions on the issues are analyzed utilizing the taxonomy of
distinct questions.
This analysis is not meant to resolve any of the conflicts about which
research questions are most worth pursuing; moreover, there is no
attempt to decide which of the questions or combinations of questions
discussed ought to be considered "the" units of selection
question.
## 2. Four Basic Questions
Four basic questions can be delineated as distinct and separable. As
will be demonstrated in
section 3,
these questions are often used in combination to represent the units
of selection problem. But let us begin by clarifying terms (see Lloyd
1992, 2001). (See the entry on
the biological notion of individual
for more on this topic.)
The term *replicator*, originally introduced in the 1970s but
since modified by philosophers in the 1980s, is used to refer to any
entity of which copies are made (Dawkins 1976, 1982a,b; Hull 1980; Brandon
1982). Replicators were originally described using two orthogonal
distinctions. A "germ-line" replicator, as distinct from a
"dead-end" replicator, is "the potential ancestor of
an indefinitely long line of descendant replicators" (Dawkins
1982a: 46). For instance, DNA in a chicken's egg is a germ-line
replicator, whereas that in a chicken's wing is a dead-end
replicator. Note that DNA molecules are, but chickens are not, replicators,
since the latter do not replicate themselves as wholes. An
"active" replicator is "a replicator that has some
causal influence on its own probability of being propagated,"
whereas a "passive" replicator is never transcribed and
has no phenotypic expression whatsoever (Dawkins 1982a: 47). There is
special interest in *active germ-line replicators*,
"since adaptations 'for' their preservation are
expected to fill the world and to characterize living organisms"
(Dawkins 1982a: 47).
The original terminology of "replicator" was introduced
along with the term "vehicle", which is defined as
>
>
> any relatively discrete entity...which houses replicators, and
> which can be regarded as a machine programmed to preserve and
> propagate the replicators that ride inside it. (Dawkins 1982b: 295)
>
>
>
>
On this view, most replicators' phenotypic effects are
represented in vehicles, which are themselves the proximate targets of
natural selection (Dawkins 1982a: 62).
In the introduction of the term "interactor", it was
observed that the previous theory has replicators interacting with
their environments in two distinct ways: they produce copies of
themselves, and they influence their own survival and the survival of
their copies through the production of secondary products that
ultimately have phenotypic expression (Hull 1980). The term
"interactor" was suggested for the entities that function
in this second process. An interactor denotes that entity which
interacts, as a cohesive whole, directly with its environment in such
a way that replication is differential--in other words, an entity
on which selection acts directly (Hull 1980: 318). The process of
evolution by natural selection is
>
>
> a process in which the differential extinction and proliferation of
> interactors cause the differential perpetuation of the replicators
> that produced them. (Hull 1980: 318; see Brandon 1982: 317-318)
>
>
>
>
One challenge to the term "interactor" was that
"interacting is not conspicuous during the process of
elimination that results in natural selection" (Mayr 1997:
2093). It's difficult to imagine why anyone would say this,
given the original description of the interactor as "an entity
that directly interacts ... *in such a way that replication is
differential*". Perhaps more interestingly, the
"target of selection" language is rejected because
selection is seen as more of an elimination process; thus, it would be
misleading to call the "leftovers" of the elimination
process the "targets" of selection. The term
"selecton", was proposed, which is defined as
>
>
> a discrete entity and a cohesive whole, an individual or a social
> group, the survival and successful reproduction of which is favored by
> selection owing to its possession of certain properties. (Mayr 1997:
> 2093)
>
>
>
This seems remarkably similar to an interactor, with the difference
that no differential reproduction, and thus no evolution, is
mentioned.
At the birth of the "interactor" concept, the concept of
"evolvers" was also introduced, which are the entities
that evolve as a result of selection on interactors: these are usually
called *lineages* (Hull 1980). So far, no one has directly
claimed that evolvers are units of selection. They can be seen,
however, to be playing a role in considering the question of who owns
an adaptation and who benefits from evolution by selection, which we
will consider in sections
2.3
and
2.4.
### 2.1 The Interactor Question
In its traditional guise, the interactor question is, what units are
being actively selected in a process of natural selection? As such,
this question is involved in the oldest forms of the units of
selection debates (Darwin 1859 [1964], Haldane 1932, Wright 1945). In
an early review of "units of selection", the purpose of
the article was stated as "to contrast the levels of
selection, especially as regards their efficiency as causers of
evolutionary change" (Lewontin 1970: 7). Similarly, others
assumed that a unit of selection is something that "responds to
selective forces as a unit--whether or not this corresponds to a
spatially localized deme, family, or population" (Slobodkin
& Rapoport 1974: 184).
Questions about interactors focus on the description of the selection
process itself, that is, on the interaction between an entity, that
entity's traits and environment, and on how this interaction
produces evolution; they do not focus on the outcome of this process
(see Wade 1977; Vrba & Gould 1986). The interaction between some
interactor at a certain level and its environment is assumed to be
mediated by "traits" that affect the interactor's
expected survival and reproductive success. Here, the interactor is
possibly at any level of biological organization, including a group, a
kin-group, an organism, a gamete, a chromosome, or a gene. Some
portion of the expected fitness of the interactor is directly
correlated with the value of the trait in question. The expected
fitness of the interactor is commonly expressed in terms of genotypic
fitness parameters, that is, in terms of the fitness of combinations
of replicators. Hence, interactor success is most often reflected in,
and counted through, replicator success, either through simple
summation of the fitnesses of their traits, or some more complicated
relation. Several methods are available for expressing the correlation
between interactor trait and (genotypic or genic) fitness, including
partial regression, variances, and
covariances.[2]
In fact, much of the interactor debate has been played out through the
construction of mathematical genetic models--with the exception
of work on group selection and on female-biased sex ratios (Wade 1980,
1985, 2016; D.S. Wilson & Colwell 1981; see especially Griesemer
& Wade 1988). The point of building such models is to determine
what kinds of selection, operating at which levels, may be effective
in producing evolutionary change.
It has been widely held, for instance, that the conditions under which
group selection can effect evolutionary change are quite stringent and
rare. Typically, group selection was seen to require small group size,
low migration rate, and extinction of entire
demes.[3]
Some modelers, however, disagree that these stringent conditions are
necessary, and show that in the evolution of altruism by group
selection, very small groups may not be necessary (Matessi &
Jayakar 1976: 384; *contra* Maynard Smith 1964). Others also
argue that small effective deme size is not a necessary prerequisite
to the operation of group selection (Wade & McCauley 1980: 811).
Similarly, another shows that strong extinction pressure on demes is
not necessary (Boorman 1978: 1909). And finally, there was an early
group selection model that violates all three of the
"necessary" conditions usually cited (Uyenoyama 1979; see
Wade 2016).
That different researchers reach such disparate conclusions about the
efficacy of group selection is partly because they are using different
models with different parameter values. Several assumptions routinely
used in group selection models have been highlighted as biasing the
results of these models against the efficacy of group selection (Wade
1978). For example, many group selection models use a specific
mechanism of migration; it is assumed that the migrating individuals
mix completely, forming a "migrant pool" from which
migrants are assigned to populations randomly. All populations are
assumed to contribute migrants to a common pool from which colonists
are drawn at random. Under this approach, which is used in all models
of group selection prior to 1978, small sample size is needed to get a
large genetic variance between populations (Wade 1978: 110; see
discussion in Okasha 2003, 2006).
If, in contrast, migration occurs by means of large propagules,
higher heritability of traits and a more representative sampling of
the parent population will result. Each propagule is made up of
individuals derived from a single population, and there is no mixing
of colonists from the different populations during propagule
formation. On the basis of further analysis, much more
between-population genetic variance can be maintained with the
propagule model (Slatkin & Wade 1978: 3531). Thus, by using
propagule pools as the assumption about colonization, one can greatly
expand the set of parameter values for which group selection can be
effective (Slatkin & Wade 1978, cf. Craig 1982).
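The contrast between the two colonization assumptions can be sketched in a toy simulation. The following Python sketch (with hypothetical parameter values and a made-up function name, not any published model) founds new colonies either from a pooled migrant pool or from single-deme propagules, and compares the resulting between-colony variance in allele frequency:

```python
import random
import statistics

def found_colonies(pop_freqs, colony_size, mode, rng):
    """Found new colonies from parent demes under two colonization models.

    mode="migrant": colonists are drawn from one pooled gene pool.
    mode="propagule": each colony is founded from a single parent deme.
    Returns the allele frequencies of the newly founded colonies.
    """
    pooled = statistics.mean(pop_freqs)  # migrant-pool allele frequency
    new_freqs = []
    for _ in range(len(pop_freqs)):
        p = pooled if mode == "migrant" else rng.choice(pop_freqs)
        # Binomial sampling of 2N gene copies among the founders.
        copies = sum(rng.random() < p for _ in range(2 * colony_size))
        new_freqs.append(copies / (2 * colony_size))
    return new_freqs

rng = random.Random(42)
# Parent demes differ sharply in allele frequency (high between-deme variance).
parents = [0.1] * 10 + [0.9] * 10
for mode in ("migrant", "propagule"):
    colonies = found_colonies(parents, colony_size=50, mode=mode, rng=rng)
    print(mode, round(statistics.variance(colonies), 4))
```

Under the migrant-pool assumption, between-colony variance collapses to mere binomial sampling noise, while the propagule model preserves most of the parental between-deme variance, which illustrates why the pre-1978 models understated the scope for group selection.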
Another aspect of this debate that has received a great deal of
consideration concerns the mathematical tools necessary for
identifying when a particular level of biological organization meets
the criteria for being an
interactor.[4]
Overall, while many of the suggested techniques have had strengths,
no one approach to this aspect of the interactor question has been
generally accepted and indeed it remains the subject of debate in
biological circles (Okasha 2004b,c). Detailed work on two of the major
techniques, the Price equation and contextual analysis, has indicated
that neither approach is universally applicable, on the grounds that
neither provides a proper causal decomposition in all varieties of
selection (Okasha 2006). Specifically, it appears that while
contextual analysis may be superior in most cases of multi-level
selection, the Price equation may be more useful in certain cases of
genic selection (Okasha 2006).
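The multilevel Price partition at issue here can be illustrated with a minimal numerical sketch, assuming equal-sized groups and perfect transmission; the trait and fitness values below are made up for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

# Two equal-sized groups of (trait z, fitness w) pairs.
# Altruists (z = 1) lower their own fitness but raise their group's mean.
groups = [
    [(1, 2.0), (1, 2.0), (0, 2.5)],  # altruist-rich group: high mean fitness
    [(0, 1.5), (0, 1.5), (1, 1.0)],  # altruist-poor group: low mean fitness
]
z = [zi for g in groups for zi, _ in g]
w = [wi for g in groups for _, wi in g]
wbar = mean(w)

# Price equation with perfect transmission: change in mean z = Cov(w, z) / wbar.
total = cov(w, z) / wbar

# Multilevel partition: between-group term + average within-group term.
Zk = [mean([zi for zi, _ in g]) for g in groups]
Wk = [mean([wi for _, wi in g]) for g in groups]
between = cov(Wk, Zk) / wbar
within = mean([cov([wi for _, wi in g], [zi for zi, _ in g]) for g in groups]) / wbar

# The partition is exact: total equals between + within.
print(round(total, 4), round(between, 4), round(within, 4))
```

In this toy case selection favors altruism between groups (positive between-group term) but disfavors it within groups (negative within-group term), and the net change in the trait is their sum; whether such a statistical partition licenses a causal claim about group-level selection is precisely what the debate over the Price equation and contextual analysis concerns.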
It is important to note that, even in the midst of deciding among the
various methods for representing selection processes, these choices
have consequences for the empirical adequacy of the selection models.
This is true even if the models are denied to have a causal
interpretation, as is done by those promoting a
"statisticalist" interpretation of selection theory
(Walsh, Lewens, & Ariew 2002). On this view, evolution is seen as
a purely statistical phenomenon, and population genetics studies
statistical relations estimated by census, and not causal
relationships. The claim is that the "deeply
uninteresting" units of selection problem has been dissolved,
whereas in fact, it has simply been restricted to the interactor
question (while ignoring the other three "units" questions
entirely); the problem of how to deliver an empirically adequate
selection model is not directly addressed (2002: 470-471).
Instead, an unspecified method is assumed that "identifies
classes [that] ... adequately predict and explain changes in the
structure of the population" (Walsh, Lewens, & Ariew 2002:
471), with no acknowledgment that this involves making a commitment to
one or another of the above methods of determining an interactor,
whether under a causal interpretation or not. Thus, the interactor
problem has not been escaped, whether or not it is interpreted
causally (see Otsuka 2016).
Note that the "interactor question" does not involve
attributing adaptations or benefits to the interactors, or indeed, to
any candidate unit of selection. Interaction at a particular level
involves only the presence of a trait at that level with a special
relation to genic or genotypic expected success that is not reducible
to interactions at a lower level. A claim about interaction indicates
only that there is an evolutionarily significant event occurring at
the level in question; it says nothing about the existence of
adaptations at that level. As we shall see, the most common error made
in interpreting many of the interactor-based approaches is that the
presence of an interactor at a level is taken to imply that the
interactor is also a manifestor of an adaptation at that level.
### 2.2 The Replicator Question
The focus of discussions about replicators concerns just which organic
entities actually meet the definition of replicator. Answering this
question obviously turns on what one takes the definition of
replicator to be. In this connection the revision of the original
meaning of "replicator" turned out to be central. The
revised meaning refined and restricted the meaning of
"replicator," which was defined as "an entity that
passes on its structure directly in replication" (Hull 1980:
318). The terms *replicator* and *interactor* will be
used in this latter sense in the rest of this entry.
The revised definition of replicator corresponds more closely than the
original one to a long-standing debate in genetics about how large or
small a fragment of a genome ought to count as a replicating
unit--something that is copied, and which can be treated
separately in evolutionary theory (see especially Lewontin 1970; Hull
1980). This debate revolves critically around the issue of linkage
disequilibrium and led some biologists to advocate the usage of
parameters referring to the entire genome rather than to allele and
genotypic frequencies in genetical models (Lewontin
1974).[5]
The basic point is that with much linkage disequilibrium, individual
genes cannot be considered as replicators because they do not behave
as separate units during reproduction. Although this debate remains
pertinent to the choice of state space of genetical model, it has been
eclipsed by concerns about interactors in evolutionary genetics.
This is not to suggest that the replicator question has been solved.
Work on the replicator question is part of a rich and continuing
research program; it is simply no longer a large part of the units
debates. That this parting of ways took place is largely due to the
fact that evolutionists working on the units problems tacitly adopted
the original suggestion that the replicator, whatever it turned out to
be, be called the "gene" (Dawkins 1982b: 84-86; see
section 3.3).
This move neatly removes the replicator question from consideration.
Exactly why this move should have met with near-universal acceptance
is to some extent a historical question; however, the fact that the
intellectual tools (largely mathematical models) of the participants in
the units debates were better suited to aspects of the debate other
than the replicator question, which requires mainly biochemical
investigation, surely contributed to this outcome.
There is a very important class of exceptions to this general
abandonment of the replicator question. Developmental Systems Theory
was formulated as a radical alternative to the interactor/replicator
dichotomy (Oyama 1985; Griffiths & Gray 1994, 1997; Oyama,
Griffiths, & Gray 2001). Here the evolving unit is understood to
be the developing system as a whole, privileging neither the
replicator nor the interactor.
There has also been a profound reconception of the evolution by
selection process, which has rejected the role of replicator as
misconceived. In its place the role of "reproducer" is
proposed, which focuses on the material transference of genetic and
other matter from generation to generation (Griesemer 2000a,b; see
Forsdyke 2010; see
section 3.5).
On this approach, thinking in terms of reproducers incorporates
development into heredity and the evolutionary process. It also allows
for both epigenetic and genetic inheritance to be dealt with within
the same framework. The reproducer plays a central role, along with a
hierarchy of interactors, in work on the units of evolutionary
transition (see
Evolutionary Transition;
Griesemer 2000c). This topic concerns the major transitions of life
from one level of complexity to the next, for example, the transition
from unicellularity to multicellularity. More recently, another notion
was introduced of a "reproducer" that is more broadly
inclusive, in that it relaxes the material overlap requirement and
focuses on an understanding of "who came from whom, and roughly
where one begins and another ends" (Godfrey-Smith 2009: 86).
These two definitions of "reproducer" disagree about
retroviral reproduction, and what counts as a salient material bond
between generations. On one side is the claim that there is no
material overlap in the case of retroviral reproduction, and that the
key is formal or informational relations (Godfrey-Smith 2009). On the
other side, is a view that sees material overlap due to RNA strand
hybridization guiding and channeling flows of information (Griesemer
2014, 2016). There is also the introduction of a notion of reproducer
that involves only the copying of a property, with no substance
overlapping involved (Nanay 2011). Like the second view of reproducer,
it appeals to the case of retroviruses having no material overlap (cf.
Griesemer 2014, 2016).
### 2.3 The Beneficiary Question
Who benefits from a process of evolution by selection? There are two
predominant interpretations of this question: Who benefits ultimately
in the long term, from the evolution by selection process? And who
gets the benefit of possessing adaptations as a result of a selection
process? Take the first of these, the issue of the ultimate
beneficiary.
There are two obvious answers to this question--two different
ways of characterizing the long-term survivors and beneficiaries of
the evolution by selection process. One might say that the species or
lineages (the previous "evolvers") are the ultimate
beneficiaries of the evolutionary process. Alternatively, one might
say that the lineages characterized on the genic level, that is, the
surviving alleles, are the relevant long-term beneficiaries. I have
not located any authors holding the first view, but, for Richard
Dawkins, the latter interpretation is the *primary fact* about
evolution. To arrive at this conclusion, he adds the requirement of
agency to the notion of beneficiary (see Hampe & Morgan 1988). A
beneficiary, by definition, does not simply passively accrue credit in
the long term; it must function as the initiator of a causal pathway
(Dawkins 1982a,b). Under this
definition, the replicator is causally responsible for all of the
various effects that arise further down the biochemical or phenotypic
pathway, irrespective of which entities might reap the long-term
rewards (Sapienza 2010).
A second and quite distinct version of the beneficiary question
involves the notion of adaptation. The evolution by selection process
may be said to "benefit" a particular level of entity
under selection, through producing adaptations at that level (Williams
1966, Maynard Smith 1976, Eldredge 1985, Vrba 1984). On this approach,
the level of entity actively selected (the interactor) benefits from
evolution by selection at that level through its acquisition of
adaptations.
It is crucial to distinguish the question concerning the level at
which adaptations evolve from the question about the identity of the
ultimate beneficiaries of that selection process. One can think that
organisms have adaptations without thinking that organisms are the
"ultimate beneficiaries" of the selection
process.[6]
This sense of "beneficiary" that concerns adaptations
will be treated as a separate issue, discussed in the next
section.
### 2.4 The Manifestor-of-Adaptation Question
At what level do adaptations occur? Or, "When a population
evolves by natural selection, what, if anything, is the entity that
does the adapting?" (Sober 1984: 204).
As mentioned previously, the presence of adaptations at a given level
of entity is sometimes taken to be a requirement for something to be a
unit of
selection.[7]
Significantly, group selection for "group advantage"
should be distinguished from group selection *per se* (Wright
1980). In fact, the combination of the interactor question with the
question of what entity has adaptations has created a great deal of
confusion in the units of selection debates in general.
Some, if not most, of this confusion is a result of a very important
but neglected duality in the meaning of "adaptation" (in
spite of useful discussions in Brandon 1978, Burian 1983, Krimbas
1984, Sober 1984). Sometimes "adaptation" is taken to
signify any trait at all that is a direct result of a selection
process at that level. In this view, any trait that arises directly
from a selection process is claimed to be, by definition, an
adaptation (e.g., Sober 1984; Brandon 1985, 1990; Arnold &
Fristrup 1982). Sometimes, on the other hand, the term
"adaptation" is reserved for traits that are "good
for" their owners, that is, those that provide a "better
fit" with the environment, and that intuitively satisfy some
notion of "good
engineering."[8]
These two meanings of adaptation, the *selection-product* and
*engineering* definitions respectively, are distinct, and in
some cases, incompatible.
Consider the peppered moth case: natural selection is acting on the
color of the moths over time, and the population evolves, but no
"engineering" adaptation emerges. Rather, the proportion
of dark moths simply increases over time, relative to the industrial
environmental conditions, a clear case of evolution by natural
selection, on which a good fit to the environment is reinforced. Note
that the dark moths lie within the range of variation of the ancestral
population; they are simply more frequent now, due to their superior
fit with the environment. The dark moths are a
"selection-product" adaptation. Contrast the moth case to
the case of Darwin's finches, in which different species evolved
distinct beak shapes specially adapted to their diet of particular
seeds and foods (Grant & Grant 1989; Grant 1999). Natural
selection here occurred against constantly changing genetic and
phenotypic backgrounds in which accumulated selection processes had
changed the shapes of the beaks, thus producing
"engineering" adaptations when natural selection occurred.
The finches now possess evolved traits that especially
"fit" them to their environmental demands; their newly
shaped beaks are new mechanisms beyond the original range of variation
in the ancestral population (Lloyd 2015).
Some evolutionary biologists have strongly advocated an engineering
definition of adaptation (e.g., Williams 1966). The basic idea is that
it is possible to have evolutionary change result from direct
selection favoring a trait without having to consider that changed
trait as an adaptation. Consider, for example, Waddington's
(1956) genetic assimilation experiments. How should we interpret the
results of Waddington's experiments in which latent genetic
variability was made to express itself phenotypically because of an
environmental pressure (Williams 1966: 70-81; see the lucid
discussion in Sober 1984: 199-201)? The question is whether the
bithorax condition (resulting from direct artificial selection on that
trait) should be seen as an adaptive trait, and the engineering
adaptationist's answer is that it should not. Instead, the
bithorax condition is seen as "a disruption...of
development," a failure of the organism to respond (Williams
1966: 75-78). Hence, this analysis drives a wedge between the
notion of a trait that is a direct product of a selection process and
a trait that fits the stronger engineering definition of an adaptation
(see Gould & Lewontin 1979; Sober 1984: 201; cf. Dobzhansky
1956).[9]
In sum, when asking whether a given level of entity possesses
adaptations, it is necessary to state not only the level of selection
in question but also which notion of adaptation--either
*selection-product* or *engineering*--is being
used. This distinction between the two meanings of adaptation also
turns out to be pivotal in the debates about the efficacy of higher
levels of selection, as we will see in sections
3.1
and
3.2.
### 2.5 Summary
In this section, four distinct questions have been described that
appear under the rubric of "the units of selection"
problem: What is the interactor? What is the replicator? What is the
beneficiary? And what entity manifests any adaptations resulting from
evolution by selection? There is a serious ambiguity in the meaning of
"adaptation"; which meaning is in play has had deep
consequences for both the group selection debates and the species
selection debates (Lloyd 2001). Commenting on this analysis, John
Maynard Smith wrote in *Evolution*:
> [Lloyd 2001] argues, correctly I believe, that much of the confusion
> has arisen because the same terms have been used with different
> meanings by different authors ... [but] I fear that the
> confusions she mentions will not easily be ended. (Maynard Smith 2001:
> 1497)
In
section 3,
this taxonomy of questions is used to sort out some of the most
influential positions in five debates: group selection
(3.1),
species selection
(3.2),
genic selection
(3.3),
genic pluralism
(3.4),
as well as units of evolutionary transition
(3.5).
## 3. An Anatomy of the Debates
### 3.1 Group Selection
The near-deathblow in the nineteen sixties to group panselectionism
was, oddly enough, about benefit (Williams 1966). The interest was in
cases in which there was selection among groups and the groups as a
whole *benefited from* organism-level traits (including
behaviors) that seemed disadvantageous to the organism (Wynne-Edwards
1962; Williams 1966; Maynard Smith 1964). The argument was that the
presence of a benefit to the group was not sufficient to establish the
presence of group selection. This was demonstrated by showing that a
group benefit was not necessarily a group adaptation (Williams 1966).
Hence, here the term "benefit" was being used to signify
the manifestation of an adaptation at the group level. The assumption
was that a genuine group selection process results in the evolution of
a group-level trait--a real adaptation--that serves a design
purpose for the group. The mere existence, however, of traits that
benefit the group is not enough to show that they are adaptations; in
order to be an adaptation, under this view, the trait must be an
*engineering* adaptation that evolved by natural selection. The
argument was that group benefits do not, in general, exist
*because* they benefit the group; that is, they do not have the
appropriate causal history (Williams 1966; see Brandon 1981, 1985: 81;
Sober 1984: 262 ff.; Sober & Wilson 1998).
Implicit in this discussion is the assumption that being a unit of
selection at the group level requires two things: (1) having the group
as an interactor, and (2) having a group-level engineering-type
adaptation. That is, the approach taken combines two different
questions, the interactor question and the manifestor-of-adaptation
question, and calls this combined set *the* unit of selection
question. These requirements for "group selection" make
perfect sense given that the prime target was a view of group
selection that incorporated this same two-pronged definition of a unit
of selection (see Borrello 2010 for a philosophically-oriented history
of Wynne-Edwards and his views on group selection).
This combined requirement of engineering group-level adaptation in
addition to the existence of an interactor at the group level is a
very popular version of the necessary conditions for being a unit of
selection within the group selection debates. For example, it was
claimed that the group selection issue hinges on "whether
entities more inclusive than organisms exhibit adaptations"
(Hull 1980: 325). Another view states that the unit of selection is
determined by "Who or what is best understood as the possessor
and beneficiary of the trait" (Cassidy 1978: 582). Similarly,
paleontological approaches required adaptations for an entity to count
as a unit of selection (Eldredge 1985: 108; Vrba 1983, 1984).
The engineering notion of adaptation was also tied into the version of
the units of selection question in other contexts (Maynard Smith
1976). In an argument separating group and kin selection, it was
concluded that group selection is favored by small group size, low
migration rates, and rapid extinction of groups infected with a
selfish allele and that
> the ultimate test of the group selection hypothesis will be whether
> populations having these characteristics tend to show
> "self-sacrificing" or "prudent" behavior more
> commonly than those which do not. (Maynard Smith 1976: 282)
This means that the presence of group selection or the effectiveness
of group selection is to be measured by the existence of nonadaptive
behavior on the part of individual organisms along with the presence
of a corresponding group-level adaptation. Therefore, this approach to
kin and group selection does require a group-level adaptation from
groups to count as units of selection. As with the previous view, it
is significant that the *engineering* notion of adaptation is
assumed rather than the weaker *selection-product* notion:
> [A]n explanation in terms of group advantage should always be
> explicit, and always calls for some justification in terms of the
> frequency of group extinction. (Maynard Smith 1976: 278; cf. Wade
> 1978; Wright 1980)
More recently, geneticists have attempted to make precise the notion
of a group adaptation through the "Formal Darwinism
Project" (Grafen 2008), in which the general concept of
adaptation can be applied to groups (Gardner & Grafen 2009).
However, it is unclear how the notion of adaptation developed within
the formal Darwinism project relates to the previously discussed
engineering notion of adaptation. Philosophers have offered a
different analysis of group adaptation, one based on an earlier
analysis of selection and adaptation (Okasha & Paternotte 2012;
Grafen 2008). The key distinction, in the original view, is between a
trait that merely benefits the group, i.e.,
"fortuitous group benefit," and one that is a genuine
group adaptation, a feature evolved *because* it benefited the
group, i.e., "for the right reason" (Okasha &
Paternotte 2012: 1137). Contextual analysis, as well as the Price
equation, can provide a formal definition of group adaptation, but
both need to be supplemented by causal reasoning (Okasha &
Paternotte 2012).
In contrast to the preceding approach, we can separate the interactor
and manifestor-of-adaptation questions in our group selection models
(Wright 1980; see Lewontin 1978; Gould & Lewontin 1979). This is
done by distinguishing between what is called "intergroup
selection," that is, interdemic selection in the shifting
balance process, and "group selection for group advantage"
(Wright 1980: 840; see Wright 1929, 1931). The term
"altruist" originally denoted, in genetics, a phenotype
"that contributes to group advantage at the expense of
disadvantage to itself" (1980: 840; Haldane 1932). This earlier
debate is connected to the main group selection debate in the 1960s,
in which the group selectionists asserted the evolutionary importance
of "group selection for group advantage" (Wright 1980).
The argument is that the primary kin selection model is "very
different" from "group selection for the uniform advantage
of a group" (1980: 841; like Arnold & Fristrup 1982; Damuth
& Heisler 1988; Heisler & Damuth 1987). There are excellent
summaries of the empirical and theoretical discoveries enabled by
"intergroup" selection models (Goodnight & Stevens
1997; Wade 2016).
Those supporting a genic selection view in the 1970s were taken to
task for mistakenly thinking that because they have successfully
criticized group selection for group advantage, they can conclude that
"natural selection is practically wholly genic" (Wright
1980: 841).
> [N]one of them discussed group selection for organismic advantage to
> individuals, the dynamic factor in the shifting balance process,
> although this process, based on irreversible local peak-shifts is not
> fragile at all, in contrast with the fairly obvious fragility of group
> selection for group advantage, which they considered worthy of
> extensive discussion before rejection. (Wright 1980: 841)
This is a fair criticism of the genic selectionist view. The problem
is that these authors failed to distinguish between two questions: the
interactor question and the manifestor-of-adaptation question. The
form of group selection that involves interdemic group selection
models involves groups only as interactors, not as manifestors of
group-level adaptations. More recently, modelers following Sewall
Wright's interest in structured populations have created a new
set of genetical models that are also called "group
selection" models and in which the questions of group
adaptations and group benefit play little or no
role.[10]
For a period spanning two decades, however, the genic selectionists
did not acknowledge that the position they attacked, namely group
selection as engineering adaptation, is significantly different from
other available approaches to group selection, such as those that
primarily treat groups as interactors. Ultimately, however, genic
selectionists did recognize the significance of the distinction
between the interactor question and the manifestor-of-adaptation
question. In 1985, for example, we have progress towards mutual
understanding:
> If some populations of species are doing better than others at
> persistence and reproduction, and if such differences are caused in
> part by genetic differences, this selection at the population level
> must play a role in the evolution of the species.... [But]
> selection at any level above the family (group selection in a broad
> sense) is unimportant for the origin and maintenance of adaptation.
> (Williams 1985: 7-8)
And in 1987, we have an extraordinary concession:
> There has been some semantic confusion about the phrase "group
> selection," for which I may be partly responsible. For me, the
> debate about levels of selection was initiated by Wynne-Edwards'
> book. He argued that there are group-level adaptations...which
> inform individuals of the size of the population so that they can
> adjust their breeding for the good of the population. He was clear
> that such adaptations could evolve *only* if populations were
> units of selection.... Perhaps unfortunately, he referred to the
> process as "group selection." As a consequence, for me and
> for many others who engaged in this debate, the phrase came to imply
> that groups were sufficiently isolated from one another reproductively
> to act as units of evolution, and not merely that selection acted on
> groups.
>
> The importance of this debate lay in the fact that group-adaptationist
> thinking was at that time widespread among biologists. It was
> therefore important to establish that there is no reason to expect
> groups to evolve traits ensuring their own survival unless they are
> sufficiently isolated for like to beget like.... When Wilson
> (1975) introduced his trait-group model, I was for a long time
> bewildered by his wish to treat it as a case of group selection and
> doubly so by the fact that his original model...had interesting
> results only when the members of the group were genetically related, a
> process I had been calling kin selection for ten years. I think that
> these semantic difficulties are now largely over. (Maynard Smith 1987:
> 123)
Even the originator of the replicator/vehicle distinction seems
to have rediscovered the evolutionary efficacy of higher-level
selection processes in an article on artificial life. In this article,
the primary concern is with modeling the course of selection
processes, and a species-level selection interpretation is offered for
an aggregate species-level trait (Dawkins 1989a). Still, Dawkins seems
not to have recognized the connection between this evolutionary
dynamic and the controversies surrounding group selection because in
his second edition of *The Selfish Gene* (Dawkins 1989b) he had
yet to accept the distinction made so clearly by group selectionists
in 1980 (Wright 1980). This was in spite of the fact that by 1987, the
importance of distinguishing between evolution by selection processes
and any engineering adaptations produced by these processes had been
acknowledged by the workers he claimed to be following most closely
(Williams 1985, 1992; Maynard Smith 1987). More recently, a related
debate between genic selection and group selection has flared up in
the journals over the definitions of group and kin selection (E.O.
Wilson 2008; see below). But this debate cannot go anywhere without
tight enough definitions of these kinds of selection (Shavit &
Millstein 2008). The adoption of Wade's strict definitions would
help, following the prescriptions of early group selectionists (Shavit
& Millstein 2008).
There has been an even more recent challenge to the received
understanding of kin selection, favoring a group selection
interpretation, which has been rebutted by those defending a strict
separation between kin and group selection (Nowak, Tarnita, &
Wilson 2010; Holldobler & Wilson 2009; rebutted by Gardner,
West, & Wild 2011; Abbot et al. 2011). This view has, in turn,
been rebutted by others (van Veelen et al. 2012; Allen, Nowak, &
Wilson 2013; E.O. Wilson & Nowak 2014). Philosophers have offered
a very useful analysis of these debates about kin and group selection
(Birch & Okasha 2015). The basic prediction of kin selection
theory is that social behavior, especially social behavior that
benefits others, should correlate with genetic relatedness. This is
commonly expressed through Hamilton's
rule, *rb* > *c*, where
*r* is relatedness, *b* is the benefit that behavior
offers the conspecific, and *c* is the cost to the actor (see
Hamilton 1975 for an expansion to multilevel selection). The critics
claiming that kin selection is a form of group selection assert that
Hamilton's rule "almost never holds" (Nowak et al. 2010:
1059).
That is, it almost never states the true conditions under which a
social behavior will evolve by selection. Their opponents claim the
opposite: that it is incorrect to claim that Hamilton's rule
requires restrictive assumptions, or that it almost never holds. On
the contrary, they claim, it holds a great deal of the time (Gardner,
West, & Wild 2011).
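Stripped of the dispute over how its parameters should be defined and estimated, the rule itself is just an inequality in three numbers. A minimal sketch (all values invented purely for illustration):

```python
def favored_by_kin_selection(r, b, c):
    """Hamilton's rule: a costly social behavior is favored when r*b > c.

    r: genetic relatedness between actor and recipient
    b: fitness benefit conferred on the recipient
    c: fitness cost to the actor
    """
    return r * b > c

# Invented values: toward a full sibling (r = 0.5), a benefit of 3
# justifies a cost of 1, since 0.5 * 3 = 1.5 > 1.
print(favored_by_kin_selection(0.5, 3.0, 1.0))    # True
# Toward a first cousin (r = 0.125), the same trade-off fails,
# since 0.125 * 3 = 0.375 < 1.
print(favored_by_kin_selection(0.125, 3.0, 1.0))  # False
```

The controversy concerns what *b* and *c* should denote (model payoffs, regression coefficients, or approximations thereof), not this arithmetic.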
On a philosophical analysis, there are really three distinct versions
of Hamilton's rule, and thus three distinct versions of kin
selection theory under discussion. One involves many substantial
assumptions, including
>
>
> weak selection, additive gene action, (i.e., no dominance or
> epistasis), and the additivity of fitness payoffs (i.e., a relatively
> simple payoff structure). (Birch & Okasha 2015: 23)
>
>
>
When these assumptions are weakened, we get more variants of
Hamilton's rule. Particularly important are the payoff
parameters *c* and *b*. Sometimes these denote the payoffs
of a particular model, as in HRS (Hamilton’s Rule,
special); at other times they denote average effects or partial
regression coefficients, as in HRG (Hamilton’s Rule,
general). A third approach, HRA (Hamilton’s Rule, approximate),
which uses first-order approximations of regression coefficients, is the
approach most commonly used in contemporary kin selection theory. The
special version is very restrictive, while the general version allows
a wide variety of cases. According to the philosophical analysis of
the cases, Nowak et al. are using the special version of
Hamilton's rule when they say it "almost never
holds," whereas Gardner, West, and Wild are using the
regression-based, general version of the rule, which allows a great
deal of leeway in application. In other words, they are talking past
each other (Birch & Okasha 2015). Significantly,
>
>
> Neither [Nowak et al. nor Gardner, West, & Wild] is referring to
> HRA, even though this approximate version of the rule is the version
> most commonly used by kin selection theorists. (Birch & Okasha
> 2015: 24)
>
>
>
On the same philosophical analysis, it is also argued that kin
selection and multilevel selection represented using the Price
equation are formally equivalent, and that preferences for kin
selection models may not be as well justified as is usually assumed (Birch &
Okasha 2015; cf. West et al. 2008). Some
biologists, for example, have argued that kin selection is more easily
applicable than group selection, and that kin selection can be applied
whenever there is group selection (West, Griffin, & Gardner 2007,
2008). Moreover, they deny that kin selection is a form of group
selection, despite formal similarities and derivations (West, Griffin,
& Gardner 2007: 424). Problems about using multilevel selection
models may stem from the group beneficiary problem arising from the
early group selection context wherein group selection was assumed to
involve both group benefit and group engineering adaptation (Birch
& Okasha 2015).
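For reference, the multilevel version of the Price equation invoked here partitions total evolutionary change into a between-group and a within-group component. In a standard form (assuming faithful transmission within groups; this rendering is not quoted from the cited works):

```latex
\bar{w}\,\Delta\bar{z}
  = \underbrace{\mathrm{Cov}(w_k,\, z_k)}_{\text{between-group selection}}
  + \underbrace{\mathrm{E}\big[\mathrm{Cov}_k(w_{jk},\, z_{jk})\big]}_{\text{within-group selection}}
```

Here \(k\) indexes groups, \(w_k\) and \(z_k\) are the group means of fitness and character, and \(w_{jk}\), \(z_{jk}\) are the values for individual \(j\) in group \(k\). Kin selection re-partitions the same total change \(\Delta\bar{z}\) into direct and relatedness-weighted indirect components, which is the sense in which the two are equivalent as statistical decompositions of change.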
>
>
> [A]lthough kin and multilevel selection are equivalent as statistical
> decompositions of evolutionary change, there are situations in which
> one approach provides a more accurate representation of the causal
> structure of social interaction. (Birch & Okasha 2015: 30; see
> also Okasha & Paternotte 2012 vs. Gardner & Grafen 2009 on
> group adaptations)
>
>
>
Geneticists have offered an effective critique of the “Formal
Darwinism Project” approach to units of selection and adaptation,
arguing that the latter’s preference for the level of the individual
organism is arbitrary, as is its bias against multilevel selection
(Shelton & Michod 2014a).
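Contextual analysis, one of the multilevel-selection methods discussed here, is a partial-regression technique: individual fitness is regressed jointly on the individual character and a group-level (“contextual”) character, and a nonzero partial coefficient on the latter is read as group-level selection. A minimal sketch on synthetic data (all numbers invented; not a reconstruction of any cited model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hierarchical data: 20 groups of 10 individuals (invented).
n_groups, n_per = 20, 10
z = rng.normal(size=(n_groups, n_per))     # individual character
Z = z.mean(axis=1, keepdims=True)          # group mean: the "context"

# Fitness built from an individual-level and a group-level component.
w = 1.0 + 0.5 * z + 0.8 * Z + 0.1 * rng.normal(size=z.shape)

# Contextual analysis: regress fitness on z and Z jointly.  A nonzero
# partial coefficient on Z indicates selection at the group level.
X = np.column_stack([
    np.ones(z.size),
    z.ravel(),
    np.repeat(Z.ravel(), n_per),
])
beta, *_ = np.linalg.lstsq(X, w.ravel(), rcond=None)
print(np.round(beta, 2))  # roughly [1.0, 0.5, 0.8]
```

The design choice that matters is including both predictors at once: regressing on the group mean alone would misattribute individual-level selection to the group level.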
In an analysis of the contextual analysis and Price equation methods
of representing hierarchical selection models, it was argued that
contextual analysis is superior overall, except in cases of meiotic
drive (Okasha 2006). However, it was recently argued that contextual
analysis can even handle cases of meiotic drive, thus making it the
superior approach to multilevel selection (Earnshaw 2015). In a
separate and helpful analysis, the relationship between kin and
multilevel models was spelled out using causal graphs (Okasha 2015).
Just because the two models can produce the same changes in gene
frequencies, it does not follow that they represent the same causal
structure, which is illustrated using causal graph theory and examples
from biology, such as group adaptation and meiotic drive (Okasha 2015;
see
Genic Selection: The Pluralists).
This goes very much against the claims of equivalence of the two
model types, kin and group selection (West, Griffin, & Gardner
2007, 2008; West & Gardner 2013), in which these technical
equivalences are taken to signify total equivalence of the
evolutionary systems (see also Frank 2013; Queller 1992; Dugatkin
& Reeve 1994; Sober & Wilson 1998). Take, for instance, the
claim: "There is no theoretical or empirical example of group
selection that cannot be explained with kin selection" (West,
Griffin, & Gardner 2008: 375).
It is good to keep in mind that there are dissenters from this claim
of the equivalence between group and kin selection models (e.g.,
Lloyd, Lewontin, & Feldman 2008; van Veelen 2009; van Veelen et
al. 2012; Holldobler & Wilson 2009; Traulsen 2010; Nowak,
Tarnita, & Wilson 2010, 2011; E.O. Wilson 2012). Those who claim
full equivalence discuss the evolution of cooperation and altruism,
arguing that only kin selection allows an easy solution to these
evolutionary problems (West, Griffin, & Gardner 2008). From the
more robust group selectionist point of view, the so-called free rider
problems with kin and group selection--such as those that are
contemplated by evolutionists puzzling over the evolution of
altruism--are pseudo-problems based on misconceptions (Wade 2016;
see also Bowles & Gintis 2011; Planer 2015; Sterelny 2012).
One approach notes that kin selection (either in its inclusive fitness
form or the direct fitness approach) and multilevel selection
"differ primarily in the types of questions being
addressed" (Goodnight 2013: 1546). Whereas kin selection aims at identifying
character states that maximize fitness, multilevel selection methods
have the goal of looking at the effects of selection on trait changes.
While the two have formal similarities, the kin selection models arose
out of game theory and evolutionarily stable strategies (ESS) and are
used to identify the optimal solution, but they cannot be used to
examine the *process* by which the population will achieve that
optimum or equilibrium. In contrast, the multilevel selection methods,
such as contextual analysis, which arise out of the quantitative
genetics traditions, are used to describe the processes acting on the
population in its current
state.[11]
Thus, the two methods are not the same, nor are they competing
paradigms.
>
>
> Rather they should be considered complementary approaches that when
> used together give a clearer picture of social evolution than either
> one can when used in isolation. (Goodnight 2013: 1547; cf. Maynard Smith
> 1976; Dawkins 1982b; West, Griffin, & Gardner 2007, 2008; Gardner
> & Grafen 2009)
>
>
>
In the laboratory, the hierarchical genetic approach of multilevel
selection has been used to demonstrate that populations respond
rapidly to experimentally imposed group selection, and that indirect
genetic effects are primarily responsible for the surprising strength
and effectiveness of group selection experiments, *contra* the
full equivalence claims (Goodnight 2013; Goodnight & Stevens 1997; cf. West, Griffin,
& Gardner 2007, 2008). Field studies using contextual analysis
have shown that multilevel selection is far more common in nature than
previously expected (Goodnight 2013; e.g., Stevens, Goodnight, & Kalisz 1995; Tsuji 1995; Aspi et al. 2003; Weinig et al. 2007; Eldakar et al. 2010;
Wade 2016). There is much less emphasis on the evolution of altruism
within the hierarchical genetic approach, as selection is observed as
it is occurring, and this includes group selection going in the same
direction as organismic selection, not just in opposition to it,
*contra* early genic selectionist recommendations (Maynard
Smith 1976). This sort of group selection of interactors is not based
on group-level engineering adaptations, although some still persist in
confusing group selection itself with the combination of the two
features, group selection and group engineering adaptation (e.g.,
Ramsey & Brandon 2011).
Most recently, a new topic has arisen in the context of multilevel
selection, involving the evolution of "holobionts," i.e.,
the combination of a eukaryotic organism with its microbiotic load
(Zilber-Rosenberg & Rosenberg 2008). It has become clear that each
"individual" human being is actually a community of
organisms co-evolved for mutual benefit (Gilbert, Sapp, & Tauber
2012). Our microbiota (the collection of bacteria, viruses, and fungi
living in our gut, mouth, and skin) is necessary for our survival and
development, and our species is also needed in turn for their
survival. Bacterial symbionts help induce and sustain the human immune
system, T-cells, and B-cells (Lee & Mazmanian 2010; Round,
O’Connell, & Mazmanian 2010), as well as providing
essential vitamins to the host human being. Gut bacteria are necessary
for cognitive development (Sampson & Mazmanian 2015), as well as
for the development of blood vessels in the gut; lipid metabolism,
detoxification of dangerous bacteria, viruses, and fungi; and the
regulation of colonic pH and intestinal permeability (Nicholson et al.
2012).
The holobiont--the combination of the host and its
microbiota--functions as a unique biological entity anatomically,
metabolically, immunologically, and developmentally (Gilbert,
Rosenberg, & Zilber-Rosenberg forthcoming; Gilbert 2011). Similarly, a
holobiont is seen as an "integrated community of species,
[which] becomes a unit of natural selection" (Gilbert, Sapp,
& Tauber 2012: 334). That is, in essence, theorists claim that the
holobiont can function as an interactor since it has features that
bind it together as a functional whole in such a way that it can
interact in a natural selection process. So what ties the different
species together to produce an interactor? According to pioneering
philosophical thought on holobionts and symbionts, it is the
community's common evolutionary fate, its being a
"functioning whole," that characterizes it as an
evolutionary interactor, "objects between which natural
selection selects" (Dupre 2012: 160; see also
Dupre & O'Malley 2013; Zilber-Rosenberg &
Rosenberg 2008). This community can also be described as a
"team" of consortia undergoing selection (Gilbert et al.
forthcoming). Others describe
them as "collaborators" or "polygenomic
consortia", which has the advantage of encompassing both
competition and cooperation within the holobiont (Dupre &
O'Malley 2013: 314; Lloyd forthcoming; see also Huttegger & Smead 2011 on stag
hunt game-theoretic results regarding the range of collaboration).
Holobionts can also be reproducers, where the host usually reproduces
vertically and the microbiota reproduce either vertically,
horizontally, or both. This situation has provoked discussion among
philosophers (Godfrey-Smith 2009, 2011; Sterelny 2011; Griesemer 2014,
2016; Booth 2014; Lloyd
forthcoming). Holobionts' microbiota can reproduce
outside the context of the original host organism, so some holobionts,
e.g., the Hawaiian bobtail squid and its luminescent bacteria, are not
"Darwinian populations" (Godfrey-Smith 2009, 2011), and
therefore not units of selection (see Booth 2014). This contrasts
with the original reproducer approach, which would include the
squid-bacteria system and also the retroviruses excluded under
the "Darwinian populations" account (Griesemer2000a,
2016).
>
>
> As in [my book, *Darwinian Populations and Natural Selection*],
> I hold that it is a mistake to see things that do not reproduce as
> units of selection. (Godfrey-Smith 2011: 509; Booth 2014)
>
>
>
This exclusion rests on the merging of the interactor with the
reproducer requirements, and as such will not hold sway over those who
do not buy such a confounding of roles (e.g., Dupre &
O'Malley 2013). This is yet another case wherein distinguishing
the interactor question from the replicator/reproducer question can be
"more illuminating" (Sterelny 2011: 496; see Dupre
& O'Malley 2013; Gilbert, Sapp, & Tauber 2012; Lloyd forthcoming).
Finally, holobionts can also be manifestors of adaptations, as in the
case of the evolution of placental mammals in the acquisition by
horizontal gene transfer from a retrovirus of a crucial gene coding
for the protein syncytin (Lloyd forthcoming; Dupressoir, Lavialle, & Heidemann
2012). Syncytin allows fetuses to fuse to their mother's
placenta, a role crucial to the evolution of placental mammals.
Moreover, it seems that several retrovirally derived enhancers played
critical roles in the formation of a key cell in the uterine wall,
also crucial for maintaining pregnancy, enhancing the
holobiont's role as a manifestor of adaptation (Wagner et al.
2014).
There are several other significant entries into the group selection
discussions, including the book, *Unto Others: The Evolution and
Psychology of Unselfish Behavior* (Sober & Wilson 1998). Here,
a case for group selection is developed based on the need to account
for the existence of biological altruism. Biological altruism is any
behavior that benefits another organism at some cost to the actor.
Such behavior must always reduce the actor’s fitness, but it may
(following the work on interdemic selection) increase the fitness of
certain groups within a structured population. The book brought group
selection models to the attention of the larger philosophical
community, explaining them in an accessible fashion, and thus moved
this aspect of the units of selection controversy onto the main stage
of philosophical thought.
The "Darwinian populations" view previously mentioned
provides a considerably different view of the necessary conditions for
group selection, one which rejects many of the currently accepted
cases of the phenomenon. For a given selection story to be
descriptively valid, a "Darwinian population" must exist
at the level of selection being described, which requires the presence
of both an interactor and a reproducer at that level, thus putting
together what others have pulled apart (Godfrey-Smith 2009: 112). A
Darwinian population is conceived as, at minimum,
>
>
> a collection of causally connected individual things in which there is
> variation in character, which leads to differences in reproductive
> output (differences in how much or how quickly individuals reproduce),
> and which is inherited to some extent. (2009: 39)
>
>
>
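The three conditions in this minimal definition (variation in character, resulting differences in reproductive output, and inheritance) can be seen at work in a toy simulation; every number below is invented purely for illustration:

```python
import random

random.seed(1)

# Toy "Darwinian population": 100 individuals carrying a heritable
# character (0 or 1).  Character 1 confers higher reproductive output.
pop = [0] * 90 + [1] * 10

def next_generation(pop):
    offspring = []
    for z in pop:
        n_kids = 2 if z == 1 else 1      # variation -> reproductive differences
        offspring.extend([z] * n_kids)   # offspring inherit the character
    return random.sample(offspring, 100)  # population size held fixed

for _ in range(20):
    pop = next_generation(pop)

# The frequency of character 1 rises from 0.10 toward fixation.
print(sum(pop) / len(pop))
```

Because all three conditions hold at the level of the individuals, this collection is a Darwinian population in the minimal sense; the further grades (paradigmatic, marginal) turn on how faithfully and intrinsically the conditions are met.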
There are further differentiations between paradigmatic, minimal, and
marginal Darwinian populations based on a variety of criteria, such as
the fidelity of heritability, continuity (the degree to which small
shifts in phenotype correlate to small changes in fitness), and the
dependence of reproductive differences on intrinsic features of
individuals (Godfrey-Smith 2009: Chapter 3).
For example, under this view, the case of the evolution of altruism,
which is commonly attributed to group selection, should not be
considered as such, because of the lack of a true reproducer at the
group level; the group level description depicts at best a marginal
Darwinian population (Godfrey-Smith 2009: 119). Rather, the argument
is that a neighborhood selection model, in which individuals are
affected by the phenotypes of their neighbors but cannot be seen as
"collectives competing at a higher level," is fully
capable of capturing the selective process involved, and represents a
Darwinian population, in which the individuals are seen both as the
interactors and the reproducers (2009: 118). This would seem to entail
that many group selection accounts (e.g., Sober & Wilson 1998), as
well as any models classified as Multi-level selection 1 (MLS1) models
(Heisler & Damuth 1987), cannot be properly considered as such.
This view grants that there are empirical examples in which a
group-level reproducer clearly exists (for example, Wade &
Griesemer 1998; Griesemer & Wade 2000; there is otherwise no
discussion of Wrightian approaches to group selection). The approach
using Darwinian populations and reproducers is claimed to present an
advantage over other available analyses of units of selection because
it can account for previously neglected examples such as epigenetic
inheritance systems (Godfrey-Smith 2009). The question remains as to
whether gaining an account to deal with these is worth rejecting an
entire class of accepted group selection models, and whether such a
loss is truly necessary to deal with epigenesis, given that we have an
epigenetic account with reproducers that allows for group selection
(see Griesemer 2000c).
### 3.2 Species Selection
Ambiguities about the definition of a unit of selection have also
snarled the debate about selection processes at the species level. One
response to the notion of species selection comes with a classic
confusion: "It is individual selection discriminating against
the individuals of the losing species that causes the
extinction" (Mayr 1997: 2093). The individual death of species
members is confused with extinction: "the actual selection takes
place at the level of competing individuals of the two species"
(Mayr 1997: 2093). Once we overcome such difficulties, and succeed in
conceiving of species as unified interactors, we are still faced with
two questions. The combining of the interactor question and the
manifestor-of-adaptation question (in the engineering sense) led to
the rejection of research aimed at considering the role of species as
interactors, *simpliciter*, in evolution. Once it is understood
that species-level interactors may or may not possess design-type
adaptations, it becomes possible to distinguish two research
questions: Do species function as interactors, playing an active and
significant role in evolution by selection? And does the evolution of
species-level interactors produce species-level engineering
adaptations and, if so, how often?
For the early history of the species selection debate, these questions
were lumped together; asking whether species could be units of
selection meant asking whether they fulfilled *both* the
interactor and manifestor-of-adaptation roles. For example, early
species selection advocates used a genic selectionist treatment of the
evolution of altruism as a touchstone in the definition of species
selection (e.g., Vrba 1984). The relevant argument is that kin
selection could cause the spread of altruistic genes but that it
should not be called group selection (Maynard Smith 1976). Again, this
was because the groups were not considered to possess design-type
adaptations themselves. Some species selectionists agreed that the
spread of altruism should not be considered a case of group selection
because "there is no group adaptation involved; altruism is not
emergent at the group level" (Vrba 1984: 319; Maynard Smith
gives different reasons for his rejection). This amounts to assuming
that there must be group benefit in the sense of a design-type
group-level adaptation in order to say that group selection can occur.
This species selection view was that evolution by selection is not
happening at a given level unless there is a benefit or engineering
adaptation at that level. The early species selection position
explicitly equates units of selection with the existence of an
interactor plus adaptation at that level (Vrba 1983: 388);
furthermore, it seems that the stronger, engineering definition of
adaptation had been adopted.
It was generally accepted among early species selectionists that
species selection does not happen unless there are species-level
adaptations (Eldredge 1985: 196, 134). Certain cases are rejected as
higher-level *selection processes* overall because
>
>
> frequencies of the properties of lower-level individuals which are
> part of a high-level individual simply do not make convincing
> higher-level adaptations. (Eldredge 1985: 133)
>
>
>
Most of those defending species selection early on defined a unit of
selection as requiring an emergent, adaptive property (Vrba 1983,
1984; Vrba & Eldredge 1984; Vrba & Gould 1986). This amounts to
asking a combination of the interactor and manifestor-of-adaptation
questions. But the relevant question is not "whether *some
particle-level causal processes or other* bear the causal
responsibility," but rather "whether particle-level
*selection* bears the causal responsibility" (Okasha
2006: 107). An emergent character requirement conflates these two
questions. Such a character may be the result of a selection process
at the group/species level, but it should not be treated as a
pre-condition of such a process.
But consider the lineage-wide trait of variability. Treating species
as interactors has a long tradition (Dobzhansky 1956; Thoday 1953;
Lewontin 1958). If species are conceived as interactors (and not
necessarily manifestors of adaptations), then the notion of species
selection is not vulnerable to the original anti group-selection
objections from the early genic selectionists (Williams
1966).[12]
The old idea was that lineages with certain properties of being able
to respond to environmental stresses would be selected for, and thus
that the trait of variability itself would be selected for and would
spread in the population of populations. In other words, lineages were
treated as interactors. The earlier researchers spoke loosely of
adaptations where adaptations were treated in the weak sense as
equivalent simply to the outcome of selection processes (at any
level). They were explicitly *not* concerned with the effect of
species selection on organismic level traits but with the effect on
species-level characters such as speciation rates, lineage-level
survival, and extinction rates of species. Some argued, including the
present author, that this sort of case represents a perfectly good
form of species selection, using so-called "emergent
fitnesses," even though some balk at the thought that
variability would then be considered, under a weak definition, a
species-level adaptation (Lloyd & Gould 1993; Lloyd 1988 [1994]).
Paleontologists used this approach to species selection in their
research on fossil gastropods (Jablonski 2008, 1987; Jablonski & Hunt 2006), and the approach has also been used in
the leading text on speciation (Coyne & Orr 2004).
Early species selectionists also eventually recognized the advantages
of keeping the interactor question separate from a requirement for an
engineering-type adaptation, dropping the former requirement that, in
order for species to be units of selection, they must possess
species-level adaptations (Vrba 1989). Ultimately, the current
widely-accepted definition of species selection is in conformity with
a simple interactor interpretation of a unit of selection (Vrba 1989;
see Damuth & Heisler 1988; Lloyd 1988 [1994]; Jablonski 2008).
It is easy to see how the two-pronged definition of a unit of
selection--as interactor and manifestor of adaptation--held
sway for so long in the species selection debates. After all, it had
dominated much of the group selection debates for so long. Some of the
confusion and conflict over higher-level units of selection arose
because of an historical contingency--the early group
selectionist's implicit definition of a unit of selection and
the responses it provoked (Wynne-Edwards 1962; Borrello 2010).
### 3.3 Genic Selection: The Originators
One may understandably think that the early genic selectionists were
interested in the replicator question because of the claims that the
unit of selection ought to be the replicator. This would be a mistake.
Rather, the primary interest is in a specific ontological issue about
benefit (Dawkins 1976, 1982a,b). This amounts to asking a special version of the beneficiary
question, and the answer to that question dictates the answers to the
other three questions flying under the rubric of the "units of
selection".
Briefly, the argument is that because replicators are the only
entities that "survive" the evolutionary process, they
must be the beneficiaries (Dawkins 1982a,b). What happens in the
process of evolution by natural selection happens *for their
sake*, for their benefit. Hence, interactors interact for the
replicators' benefit, and adaptations belong to the replicators.
Replicators are the only entities with real agency as initiators of
causal chains that lead to the phenotypes; hence, they accrue the
credit and are the real units of selection.
This version of the units of selection question amounts to a
combination of the beneficiary question plus the
manifestor-of-adaptation question. There is little evidence that they
are answering the predominant interactor question; rather, the
argument is that people who focus on interactors are laboring under a
misunderstanding of evolutionary theory (Dawkins 1976, 1982a,b). One reason for
thinking this might be that the opponents are taken to be those who
hold a combination of the interactor plus manifestor-of-adaptations
definition of a unit of selection (e.g., Wynne-Edwards).
Unfortunately, leading genic selectionists ignore those who are
pursuing the interactor question alone; these researchers are not
vulnerable to the criticisms posed against the combined
interactor-adaptation view (Dawkins 1982a,b; Williams 1966). Some
insist that the early genic selectionists have misunderstood
evolutionary selection, an argument that is based upon interpreting
the units of selection controversy as a debate about interactors
(Gould 1977; Istvan 2013); however, because the early genic
selectionists say that the debate concerns the ultimate beneficiary,
the two sides are arguing past one another (Istvan 2013).
Section 3.4, Genic Selection: The Pluralists,
addresses those who interpret themselves as arguing against the
interactor question itself.
In the next few paragraphs, two aspects of Dawkins' specific
version of the units of selection problem shall be characterized. I
will attempt to clarify the key issues of interest to Dawkins and to
relate these to the issues of interest to others.
There are two mistakes that Dawkins is *not* making. First, he
does not deny that interactors are involved in the evolutionary
process. He emphasizes that it is not necessary, under his view, to
believe that replicators are directly "visible" to
selection forces (1982b: 176). Dawkins has recognized from the
beginning that his question is completely distinct from the interactor
question. He remarks, in fact, that the debate about group versus
organismic selection is "a factual dispute about the level at
which selection is most effective in nature," whereas his own
point is "about what we ought to mean when we talk about a unit
of selection" (1982a: 46). He also states that genes or other
replicators do not "literally face the cutting edge of natural
selection. It is their phenotypic effects that are the proximal
subjects of selection" (1982a: 47). We shall return to this
issue in
section 3.4, Genic Selection: The Pluralists.
Second, Dawkins does not specify how large a chunk of the genome he
will allow as a replicator; there is no commitment to the notion that
single exons are the only possible replicators. He argues that if
Lewontin, Franklin, Slatkin and others are right, his view will not be
affected (see
Replicators).
If linkage disequilibrium is very strong, then the "effective
replicator will be a very large chunk of DNA" (Dawkins 1982b:
89; Sapienza 2010). We can conclude from this that Dawkins is not
interested in the replicator question at all; his claim here is that
his framework can accommodate any of its possible answers.
On what basis, then, does Dawkins reject the question about
interactors? I think the answer lies in the particular question in
which he is most interested, namely, What is "the nature of the
entity *for whose benefit adaptations* may be said to
exist?"[13]
On the face of it, it is certainly conceivable that one might identify
the beneficiary of the adaptations as--in some cases,
anyway--the individual organism or group that exhibits the
phenotypic trait taken to be the adaptation. In fact, some writers
seem to have done just that in the discussion of group selection (see
Williams
1966).[14]
But the original genic selectionist rejects this move, introducing an
*additional* qualification to be fulfilled by a unit of
selection; it must be "the unit that actually survives or fails
to survive" (Dawkins 1982a: 60). Because organisms, groups, and
even genomes do not actually survive the evolution-by-selection
process, the answer to the survival question must be the replicator.
(Strictly speaking, this is false; it is copies of the replicators
that survive. Replicators must therefore be taken in some sense as
information and not as biological entities (see Hampe & Morgan
1988; cf. Griesemer 2005).)
But there is still a problem. Although the conclusion is that
"there should be no controversy over replicators versus
vehicles. Replicator survival and vehicle selection are two aspects of
the same process" (1982a: 60), the genic selectionist does not
just leave the vehicle selection debate alone. Instead, the argument
is that we do not need the concept of discrete vehicles at all. This
is what we shall investigate in
section 3.4 Genic Selection: The Pluralists.
The important point for now is that, on Dawkins' analysis, the
fact that replicators are the only survivors of the
evolution-by-selection process automatically answers also the question
of who owns the adaptations. Adaptations must be seen as being
designed for the good of the active-germ-line replicator for the
simple reason that replicators are the only entities around long
enough to enjoy them over the course of natural selection. The genic
selectionist acknowledges that the phenotype is "the all
important instrument of replicator preservation," and that
genes' phenotypic effects are organized into organisms (that
thereby might benefit from them in their lifetimes) (1982b: 114). But
because only the active germ-line replicators survive, they are the
*true locus of adaptations* (1982b: 113; emphasis added). The
other things that benefit over the short term (e.g., organisms with
adaptive traits) are merely the tools of the real survivors, the real
owners. Hence, Dawkins rejects the vehicle approach partly because he
identifies it with the manifestor-of-adaptation approach, which he has
answered by definition, in terms of the long-term beneficiary.
The second key aspect of genic selectionists' views on
interactors is the desire to do away with them entirely. Dawkins is
aware that the vehicle concept is "fundamental to the
predominant orthodox approach to natural selection" (1982b:
116). Nevertheless, he rejects this approach in *The Extended
Phenotype*, claiming, "the main purpose of this book is to
draw attention to the weaknesses of the whole vehicle concept"
(1982b: 115). But this "vehicle" approach is not
equivalent to "the interactor question"; it is a
much more restricted approach.
In particular, when arguing against "the vehicle concept,"
Dawkins is only arguing against the desirability of seeing the
individual organism as the one and only possible vehicle. His target
is explicitly those who hold what he calls the "Central
Theorem," which says that *individual organisms should be
seen as maximizing their own inclusive fitness* (1982b: 5, 55).
These arguments are indeed damaging to the Central Theorem, but they
are ineffective against other approaches that define units of
selection as interactors.
One way to interpret the Central Theorem is that it implies that the
individual organism is always the beneficiary of any selection
process. The genic selectionists seem to mean by
"beneficiary" both the manifestor of adaptation and that
which survives to reap the rewards of the evolutionary process.
Dawkins argues, rightly and persuasively, I think, that it does not
make sense always to consider the individual organism to be the
beneficiary of a selection process.
But it is crucial to see that Dawkins is not arguing against the
importance of the interactor question in general, but rather against a
particular definition of a unit of selection. The view being
criticized assumes that the individual organism is the interactor,
*and* the beneficiary, *and* the manifestor of
adaptation. Consider the main argument against the utility of
considering vehicles: the primary reason to abandon thinking about
vehicles is that it confuses people (1982b: 189). But look at the
examples; their point is that it is inappropriate always to ask how an
organism's behavior benefits that organism's inclusive
fitness. We should ask instead, "whose inclusive fitness the
behavior is benefiting" (1982b: 80). Dawkins states that his
purpose in the book is to show that "theoretical dangers attend
the assumption that adaptations are for the good of...the
individual organism" (1982b: 91).
So, Dawkins is quite clear about what he means by the "vehicle
selection approach": it always assumes that the organism is the
beneficiary of its accrued inclusive fitness. Dawkins advances
powerful arguments against the assumption that the organism is always
the interactor cum beneficiary cum manifestor of adaptations. This
approach is clearly not equivalent to the approach to units of
selection characterized as the interactor approach. Unfortunately,
genic selectionists extend Dawkins' conclusions to these other
approaches, which he has, in fact, not addressed. The genic
selectionists' lack of consideration of the interactor
definition of a unit of selection leads to two grave problems with
this view.
One problem is the tendency to interpret all group selectionist claims
as being about beneficiaries and manifestors of adaptations as well as
interactors. This is a serious misreading of authors who are pursuing
the interactor question alone.
Consider, for example, this argument that groups should not be
considered units of selection:
>
>
> To the extent that active germ-line replicators benefit from the
> survival of the group of individuals in which they sit, over and above
> the [effects of individual traits and altruism], we may expect to see
> adaptations for the preservation of the group. But all these
> adaptations will exist, fundamentally, through differential replicator
> survival. The basic beneficiary of any adaptation is the active
> germ-line replicator (Dawkins 1982b: 85).
>
>
>
Notice that this argument begins by admitting that groups can function
as interactors, and even that group selection may effectively produce
group-level adaptations. The argument that groups should not be
considered real units of selection amounts to the claim that the
groups are not the ultimate beneficiaries. To counteract the intuition
that the groups do, of course, benefit, in some sense, from the
adaptations, the terms "fundamentally" and
"basic" are used, thus signaling what the author considers
the most important level. Even if a group-level trait is effecting a
change in gene frequencies, "it is still genes that are regarded
as the replicators which actually survive (or fail to survive) as a
consequence of the (vehicle) selection process" (Dawkins 1982b:
115). Thus, the replicator is the unit of selection because it is the
beneficiary, and the real owner, of all adaptations that exist.
Saying all this does not, however, address the fact that other
researchers investigating group selection are asking the interactor
question and sometimes also the manifestor-of-adaptation question,
rather than Dawkins' special version of the (ultimate)
beneficiary question. He gives no additional reason to reject these
other questions as legitimate; he simply reasserts the superiority of
his own preferred unit of selection. In sum, Dawkins has identified
three criteria as necessary for something to be a unit of selection:
it must be a replicator; it must be the most basic beneficiary of the
selection process; and it is automatically the ultimate manifestor of
adaptation through being the beneficiary.
Finally, further work in the philosophy of biology brings the level of
the unit of selection down even further than the original genic
selectionists do (Rosenberg 2006). Higher level selection is reducible
to more fundamental
levels.[15]
Taking a reductionist stance, which is taken to be necessary to avoid
an "untenable dualism" in biology between physicalism and
antireductionism, the argument is that the principle of natural
selection (PNS) should be properly viewed as a basic law of physical
science (specifically chemistry), which can operate at the level of
atoms and molecules (Rosenberg 2006: 189-191). Different
molecular environments would favor different chemical types, and those
that "more closely approximate an environmentally optimal
combination of stability and replication," are thus the
"fittest" and would predominate (2006: 190). This could
then be applied at each step of the way from simple molecules to
compounds, organelles, cells, tissues, and so on, such that
>
>
> the result at each level of chemical aggregation is the instantiation
> of another PNS, grounded in, or at least in principle derivable from,
> the molecular interactions that follow the PNS in the environment
> operating at one or more lower levels of aggregation. (Rosenberg 2006:
> 192)
>
>
>
This approach addresses antireductionist arguments regarding group
level properties. The claim is that this new envisioning of the PNS as
a purely physical law allows us to better understand the lower level
origins of apparently higher level causes, thus revealing that
"the appearance of 'downward causation' is just
that: mere appearance" (Rosenberg 2006: 197). For example, the
claim is that group level selection explanations, such as are commonly
given for altruism, do not require an antireductionist stance, since
physical laws, such as the second law of thermodynamics, can allow
for local unfavorable changes (in this case, local decreases in
entropy) as long as compensation is made elsewhere. With regard to the
physical PNS,
>
>
> groups of biological individuals may experience fitness increases at
> the expense of fitness decreases among their individual members for
> periods of time that will depend on the size and composition of the
> group and the fitness effects of their traits. What the PNS will not
> permit is long-term fitness changes at the level of groups without
> long-term fitness changes in the same direction among some or all of
> the individuals composing them. (Rosenberg 2006: 198)
>
>
>
In other words, this is supposed to show that there is no need to
think in terms of irreducible group level interactors. Again, note
that this analysis merges characteristics of interactors and
replicators.
In the next section, we will consider some relatively more recent work
in which genic selectionism is defended through a pluralist approach
to modeling. What matters in the final analysis, though, is exactly
what matters to the original genic selectionists, and that is the
search for the ultimate beneficiary of the evolution by selection
process.
### 3.4 Genic Selection: The Pluralists
As we saw in the previous section, the original genic selectionists
had particular problems with their treatment of the interactor. While
they admitted that the "vehicle" was necessary for the
selection process, they did not want to accord it any weight in the
units of selection debate because it was not the beneficiary, but
rather an agent of the beneficiary. Soon, however, there emerged a new
angle available to genic selectionists (Waters
1986).[16]
The new "genic pluralism" appears to let one bypass the
interactor question, by, in effect, turning genes into interactors
(Sterelny & Kitcher 1988). The proposal is that there are two
"images" of natural selection, one in which selection
accounts are given in terms of a hierarchy of entities and their
traits' environments, the other of which is given in terms of
*genes* having properties that affect their abilities to leave
copies of themselves (Sterelny & Kitcher 1988; see Kitcher,
Sterelny, & Waters 1990, Sterelny 1996a,b; Waters 1986,
1991).[17]
Something significant follows from the fact that hierarchical models
of selection processes can be reformulated in terms of the genic
level. These claims have been resisted on a variety of grounds (see
objections in R.A. Wilson 2003, Stanford 2001, Van der Steen & van
den Berg 1999, Gannett 1999, Shanahan 1997, Glennan 2002, Sober 1990,
Sober & Wilson 1998, Brandon & Nijhout 2006, Sarkar 2008).
The big payoff of the genic point of view is:
>
>
> Once the possibility of many, equally adequate, representations of
> evolutionary processes has been recognized, philosophers and
> biologists can turn their attention to more serious projects than that
> of quibbling about the real unit of selection. (Kitcher, Sterelny,
> & Waters 1990: 161)
>
>
>
By "quibbling about the real unit of selection," here, the
authors seem to be referring to the large range of articles in which
evolutionists have tried to give concrete evidence and requirements
for something to serve as an interactor in a selection process.
As an aside, it is important to note that none of the philosophers are
advocating genic selectionism to the exclusion of other views. What
interests them is a proposed equivalence between being able to tell
the selection story one way, in terms of interactors and replicators,
and to tell the same story another way, purely in terms of
"genic agency". Thus, they are pluralists, in that they
are not ultimately arguing in favor of the genic view; they are,
however, expanding the genic selectionist view beyond its previous
limits.
The pluralists attack the view that "for any selection process,
there is a uniquely correct identification of the operative selective
forces and the level at which each impinges" (Waters 1991: 553).
Rather, they claim, "We believe that asking about the real unit
of selection is an exercise in muddled metaphysics" (Kitcher,
Sterelny, & Waters 1990: 159). The basic view is that "the
causes of one and the same selection process can be correctly
described at different levels" (including the genic one) (Waters
1991: 555). Moreover, these descriptions are on equal ontological
footing. Equal, that is, except when Sterelny and Kitcher slip into a
genuinely reductionist genic view, stating that it is an error
to claim
>
>
> that selection processes must be described in a particular way, and
> their error involves them in positing entities, "targets of
> selection," that do not exist. (1988: 359)
>
>
>
Here they seem to be denying the existence of interactors altogether.
If interactors don't exist, then clearly a genic level account
of the phenomena would be preferable to, not merely equivalent to, a
hierarchical view.
The pluralists do seem to be arguing against the utility of the notion
of the interactor in studying the selection process. Echoing the
original genic selectionists, their idea is that the whole causal
story can be told at the level of genes, and that no higher level
entities need be proposed or considered in order to have an accurate
and complete explanation of the selection process. But, arguably, the
genic level story cannot be told without taking the functional role of
interactors into account, and thus the pluralists cannot avoid
quibbling about interactors, as they claim to (see Lloyd 2005). Nor is
the genic account adequate to all selection cases; the genic account
fails when drift is factored in (Brandon & Nijhout 2006).
Let us recall what the interactor question in the units of selection
debate amounts to: What levels of entities interact with their
environments through their traits in such a way that it makes a
difference to replicator success? As mentioned before, there has been
much discussion in the literature about how to delineate and locate
interactors among multilayered processes of selection. Each of these
suggestions leads to slightly different results and different problems
and limitations, but each also takes the notion of the interactor
seriously as a necessary component to understanding a selection
process.
The genic pluralists state that "All selective episodes (or,
perhaps, almost all) can be interpreted in terms of genic selection.
That is an important fact about natural selection" (Kitcher,
Sterelny, & Waters 1990: 160). Thus, the functional claim of the
pluralists is that anything that a hierarchical selection model can
do, a genic selection model can do just as well. Much attention is
paid to showing that the two types of models can represent certain
patterns of selection equally well, even those that are conventionally
considered hierarchical selection. This is argued for using both
specific examples and schema for translating hierarchical models into
genic ones. Let us consider one challenging case here.
Take the classic account of the efficacy of interdemic or group
selection, the case that even G.C. Williams acknowledged was
hierarchical selection. Lewontin and Dunn (Lewontin & Dunn 1960
and Lewontin 1962), in investigating the house mouse, found first,
that there was segregation distortion, in that over 80% of the sperm
from mice heterozygous for the t-allele also carried the t-allele,
whereas the expected rate would be 50%. Second, they also found that
male homozygotes (those with two t-alleles) tended to be sterile. (In
several populations the t-alleles were homozygous lethal, but in the
populations in question, homozygous males were sterile.) Third, they
also found a substantial effect of group extinction based on the fact
that female mice would often find themselves in groups in which all
males were sterile, and the group itself would therefore go
extinct. This, then, is how a genuine and empirically robust
hierarchical model was developed.
The point the pluralists want to make about this case is quite
narrow:
>
>
> whether there are real examples of processes that can be modeled as
> group selection can be asked and answered entirely *within the
> genic* point of view. (Kitcher, Sterelny, & Waters 1990: 160)
>
>
>
Just as a warning to the unwary, the key to understanding the genic
reinterpretation of this case is to grasp that the pluralists use a
concept of genetic environment that their critics ignore.
The pluralists tell how to "construct" a genic model of
the causes responsible for the frequency of the t-allele. We must
first distinguish
>
>
> genetic environments that are contained within female mice that are
> trapped in small populations with only sterile males from genetic
> environments that are not contained within such females. In effect,
> the interactions at the group level would be built in as a part of one
> kind of genetic environment. (Waters 1991: 563)
>
>
>
In other words, various very detailed environments would have to be
specified for various different t-alleles and wild-type alleles. In
order to determine the invariant fitness parameter of a specific
allele, let's call it "A" for example, we would need
to know what kind of environment it is in at the allelic level, e.g.,
whether it is paired with a t-allele. Then we would need to know a
further detailed layer of the environment of "A", such as
what the sex is of the "environment" it is in. If it is in
a t-allele arrangement, and it is also in a male environment, the
allelic fitness of "A" would be changed. Finally, we need
to know the type of subpopulation or deme the "A" allele
is in. Is it in a small deme with many t-alleles? Then it is more
likely to become extinct. So, as we can see, various aspects of the
allele's environment are built up from the gene out, depending
on what would make a difference to the gene's fitness in that
very particular kind of environment. If you want to know the overall
fitness of the "A" allele, you add up the fitnesses in
each set of specialized, detailed environments and weight them
according to the frequency of that environment.
The idea is:
>
>
> What appears as a multiple level selection process (e.g., selection of
> the t-allele) to those who draw the conceptual divide [between
> environments] at the traditional level, appears to genic selectionists
> of Williams's style as several selection processes being carried
> out at the same level within different genetic environments. (Waters
> 1991: 571)
>
>
>
The "same level" here means the "genic level,"
while the genetic environments include everything from the other
allele at the locus, to whether the genotype is present in a male or
female mouse, to the size and composition of the deme the mouse is in.
This completes the sketch of the genic pluralist position. We now turn
to its reception.
Genic pluralism's impact has been largely philosophical rather
than biological (but see Shanahan 1997 and Van der Steen & Van den
Berg 1999). Within philosophy, the view has been widely disseminated
and taught, and a steady stream of critical responses to the genic
pluralist position has been forthcoming. These responses fall into two
main categories: pragmatic and causal.
The pragmatic response to genic pluralism simply notes that in any
given selective scenario the genic perspective provides no information
that is not also available from the hierarchical point of view. This
state of affairs is taken by critics of this type as sufficient reason
to prefer whichever perspective is most useful for solving the
problems facing a particular researcher (Glymour 1999; Van der Steen
& Van den Berg 1999; and Shanahan 1997). The weakness of this
approach as a critique of genic pluralism is that it does not so much
criticize genic pluralism as simply ignore it.
The other major form of critique of genic pluralism is based on
arguments concerning the causal structure of selective episodes. The
idea here is that while genic pluralism gets the "genetic
book-keeping" (i.e., the input/output relations) correct, it
does not accurately reflect the causal processes that bring about the
result in question (Wimsatt 1980a,b). Some examples of this approach used against the
genic pluralists (including Sober 1990; Sober & Wilson 1994,
1998) also appeal to aspects of the manifestor-of-adaptations and
beneficiary questions to establish the failure of genic pluralism to
represent certain selective events correctly. Causal concerns are also
raised in some other work (Shanahan 1997, Van der Steen and Van den
Berg 1999, Stanford 2001, and Glennan 2002), though without the focus
on other units questions. The weakness of this line of criticism is
its inability to isolate a notion of cause that is both plausible and
plausibly true of hierarchical but not genic level models. This
feature--that the genic and hierarchical models are so similar as
to be indistinguishable--which appears as an insurmountable
problem in the context of debates about differing causal structure,
turns out to be the locus of critical response to genic pluralism,
which denies that the genic selectionists have any distinct and
coherent genic level causes at all (Lloyd 2005; Lloyd et al.
2005).
Genic pluralism presents alleles as independent causal entities, with
the claim that the availability of such models makes hierarchical
selection models--and the ensuing debates about how to identify
interactors in selection processes--moot. Or, in a less
contentious version of the argument, the hierarchical and genic models
are fully developed causal alternatives (Waters 1991). However, in
each case of the causal allelic models, these models are directly and
completely derived from precisely the hierarchical models the authors
reject. Moreover, causal claims made on behalf of alleles are utterly
dependent on hierarchically identified and established interactors
*as causes*, thus undermining their claims that the units of
selection (interactor) debates are mere "quibbles" and are
irrelevant to the representation of selection processes. Moreover, and
contrary to the claims of pluralists, cases of frequency-dependence,
such as in heterosis and in game-theoretic models of selection,
necessitate selection at higher than genic levels because the relevant
properties of the entities at the genic level are only definable
relative to higher levels of organization. Thus, they cannot be
properly described as properties of alleles nor are they "even
definable at the allelic level" (Sarkar 2008: 219). In addition,
when drift is taken into account, the genic accounts fail to be
empirically adequate (Brandon & Nijhout 2006).
We can say that the allelic level models are completely derivative
from higher level models of selection processes using the following
guidelines (Lloyd 2005). Two models that are mathematically equivalent
may be semantically different, that is, they have different
interpretations. Such models can be independent from one another or
one may be derivative of the other. In the genic selection case, the
pluralists appear to be claiming that the genic level models are
independent from the hierarchical models. The claim is: although the
genic models are mathematically equivalent, they have different
parameters, and a different interpretation, and they are completely
independent from hierarchical models.
But, despite the pluralists' repeated claims, we can see from
their own calculations and examples that theirs are
*derivative* models, and thus, that their "genic"
level causes are derivative from and dependent on higher level causes.
Their genic level models depend for their empirical, causal, and
explanatory adequacy on entire mathematical structures taken from the
hierarchical models and refashioned.
As reviewed above, one example from their own writing comes from the
treatment of the t-allele case, a universally recognized case of three
levels of selection operating simultaneously on a single allele. Right
before the t-allele case, a suggestion is offered that a
Williams's type analysis could be based on an application of
Lloyd's additivity criterion for identifying
*interactors*, which is strictly hierarchical (Waters 1991:
563; Lloyd [1988] 1994: Ch. 5). Thus, the pluralist suggestion is to
borrow a method for identifying potential higher-level interactors in
order to determine the genic environments and thus to have more
adequate genic level models. Similarly, other pluralists resort to a
traditional approach to identifying interactors in order to make their
genic models work. It had earlier been proposed that the statistical
idea of screening off be used to identify which levels of entities are
causally effective in the selection process; i.e., it is a method used
to isolate interactors (Brandon 1982). But some pluralists propose
using screening off to identify layers of allelic environments, and
also show how Sober's probabilistic causal account could be used
for a genic account (Sterelny & Kitcher 1988: 354).
Hence, the pluralists all use the same methods for isolating relevant
genic-level environments as others do for the traditional isolating of
interactors. What, we may ask, is the real difference? Both can be
seen as attempting to get the causal influences on selection right,
because they are using the same methods. What is different is that the
genic selectionists want to tell the causal story in terms of genes
and not in terms of interactors and genes. Moreover, they propose
doing away with interactors altogether, by renaming them the
genic-level environments. Are we to think that renaming changes the
metaphysics of the situation?
It seems that levels of interaction important to the outcome of the
selection process are being discovered in the usual ways, i.e., by
using approaches to interactors and their environments, and that that
exact same information is being translated into talk of the
differentiated and layered environments of the genes.
The issue concerning renaming model structures is especially confusing
in the genic pluralists' presentations, because they repeatedly rely on
an assumption or intuition that, given an allelic state space, we are
dealing with allelic causes. This last assumption is easily traced
back to the original genic selectionist views that alleles are the
ultimate beneficiaries of any long term selection process (Williams
1966; Dawkins 1982a,b); thus, the genic pluralist argument rests
substantially on a view regarding the superior importance of the
beneficiary question, which has been clearly delineated from the
interactor question, above.
Let us summarize the consequences of derivativeness in terms of the
science and metaphysics of the processes discussed. First, the genic
pluralists end up offering not, as they claim, a variety of genuinely
diverse causal versions of the selection process at different levels.
This is because the causes of the hierarchical models, however
determined, are simply transformed and renamed in the lower level
models, but remain fully intact as relevant causes at the full range
of higher and lower levels. More importantly, no new allelic causes
are introduced. Second, while genic models may be derived from
hierarchical models, they fail to sustain the necessary supporting
methodology. Third, the lack of genuine alternative causal accounts
destroys the claims of pluralism or, at least, of any interesting
philosophical variety, since there are no genuine alternatives being
presented, unless you count renaming model structures as
metaphysically significant (see also Okasha
2011).[18]
Thus, the picture of proposing an alternative
"interactor" at the genic level is not fulfilled (vs.
Sterelny & Kitcher 1988). Perhaps the best way to save the
pluralist vision is to appeal to the work on neighbor selection
(Godfrey-Smith 2008),
which can be cast within a pluralist program. This effort is to revive
and discuss an alternative fashion of modeling altruism or group
benefit, within the terms of a lower, individual level (see the
discussion in
section 3.1, Genic Selection: The Originators).
There is a further complication with respect to the nature of the
genic selection models put forward by genic pluralists. These models
function under the presupposition that they are at least
mathematically equivalent to hierarchical models. This claim has
largely depended on the work of Dugatkin and Reeve in establishing
this equivalence (Dugatkin & Reeve 1994, Sterelny 1996b, Sober
& Wilson 1998, Sterelny & Griffiths 1999, Kerr &
Godfrey-Smith 2002a, Waters 2005). However, foundational work has
indicated that this equivalence does not in fact hold. In Dugatkin and
Reeve and the rest of this literature, comparison of population
genetic models was largely based on predictions of allele frequency
changes; in other words, if two models made the same predictions as to
the changes of allelic frequencies in a given situation, the
models were deemed equivalent. However, this is an overly simplistic method
for testing model equivalence which pays little mind to the details of
the models themselves. When the notion of representational adequacy of
the models is taken into account, specifically through the inclusion
of parametric and dynamical sufficiency as important points of
comparison, this equivalence between genic and hierarchical models
disappears (Lloyd, Lewontin, & Feldman 2008; Lewontin 1974; see
also
group selection
for more on formal equivalence).
Parametric sufficiency concerns what state space and variables are
sufficient to capture the relevant properties of a given system, while
dynamical sufficiency
>
>
> concerns what state space and variables are sufficient to describe the
> evolution of a system given the parameters being used in the specific
> system. (Lloyd, Lewontin, & Feldman 2008: 146; Lewontin 1974)
>
>
>
Utilizing these concepts allows for a more detailed and meaningful
evaluation of a given mathematical model. And under such an analysis,
the claims regarding the equivalency of genic and hierarchical models
cannot be sustained. Since allelic parameters and the changes in
allelic frequencies depend on genotypic fitnesses, the genic models
claimed to be equivalent to the hierarchical models are neither
parametrically nor dynamically
sufficient.[19]
### 3.5 Units of Evolutionary Transition
In our preceding discussions of units of selection, we have restricted
ourselves to situations in which the various units were
pre-established entities. Our approach has been synchronic, one in
which the relevant units, be they genes, organisms, or populations,
are the same both before and after a given evolutionary process.
However, not all evolutionary processes can be captured
under such a perspective. In particular, recent discussions regarding
so-called "evolutionary transitions" present a unique
complication to the debates over units and levels of selection.
Evolutionary transition is "the process that creates
*new* levels of biological organization" (Griesemer 2000c:
69), such as the origins of chromosomes,
multicellularity, eukaryotes, and social groups (Maynard-Smith and
Szathmary 1995: 6-7). These transitions all share a
common feature, namely that "entities that were capable of
independent replication before the transition can replicate only as
part of a larger whole after it" (Maynard-Smith and
Szathmary 1995: 6).
Evolutionary transitions create new potential levels and units of
selection by creating new kinds of entities that can have variances in
fitness. Thus, it is the "project of a theory of evolutionary
transition to explain the evolutionary origin of entities with such
capacities" (Griesemer 2000c: 70). However, since such cases involve the *evolutionary
origin* of a given level of selection, traditional synchronic
approaches to units and levels of selection, which assume the
pre-existence of a "hierarchy of entities that are
*potential* candidates for units of selection", may be
insufficient, since it is the evolution of those very properties that
allow entities to serve as, for example, interactors or replicators
that is being addressed (Griesemer 2000c: 70). Such a task requires a
diachronic perspective, one under which the properties of our
currently extant units of selection cannot be presupposed.
>
>
> ...[A]s long as evolutionary theory concerns the function of
> *contemporary* units at *fixed* levels of the biological
> hierarchy..., the functionalist approach may be adequate to its
> intended task. However, if a philosophy of units is to address
> problems going beyond this scope--for example to problems of
> evolutionary *transition*,... then a different approach is
> needed. (Griesemer 2003: 174)
>
>
>
The "reproducer" concept (discussed in
section 2.2),
which incorporates the notion of development into the treatment of
units and levels of selection, is a step toward meeting the goal of
addressing such evolutionary transitions, and
>
>
> the dependency of formerly independent replicators on the
> "replication" of the wholes--the basis for the
> definition of evolutionary transition ... is a
> *developmental* dependency that should be incorporated into the
> analysis of units. (2000c: 75)
>
>
>
Those adopting the reproducer concept argue that thinking in broader
terms of reproducers avoids the presupposition of evolved coding
mechanisms implicit to the concept of replicators. In the case of
evolutionary transitions, this allows us to separate the basic
development involved in the origin of a new biological level from the
later evolution of sophisticated developmental mechanisms for the
"stabilization and maintenance of a new level of
reproduction" (Griesemer 2000c: 77).
Explaining evolutionary transitions in Darwinian terms poses a
particular challenge: "Why was it advantageous for the
lower-level units to sacrifice their individuality and form themselves
into a corporate body?" (Okasha 2006: 218). On one analysis,
three stages of such a transition, each defined in terms of the
connection between fitness at the level of the collective and the
individual fitness of its component particles, are identified (Okasha
2006: 238). Initially, collective fitness is simply defined as average
particle fitness. As fitness at the two levels begins to be decoupled,
collective fitness remains proportional to average particle fitness,
but is not defined by it; at such a stage, "the emerging
collective lacks 'individuality', and has no
collective-level functions of its own" (Okasha 2006: 237).
Finally, collective fitness "starts to depend on the
functionality of the collective itself" (Okasha 2006:
237-8; see Okasha 2015 for a representation of this in terms of
causal graphs).
On this analysis, the different stages of an evolutionary transition
involve different conceptions of multi-level selection (Okasha 2006,
2015). Using the distinction defended by Lorraine Heisler and John
Damuth (Heisler & Damuth 1987; Damuth & Heisler 1988) in their
"contextual analysis" of units of selection, this analysis
claims that early on in the process of an evolutionary transition,
multi-level selection 1 (MLS1), in which the particles themselves are
the "'focal' units" upon which selection
directly acts, applies. However, by the end of the transition, both
the collectives and the particles are focal units of selection
processes, with independent fitnesses, a case of Damuth and
Heisler's multi-level selection 2 (MLS2) (Okasha 2006: 4). An
easy way to capture this distinction is that, under MLS1, the lower
level particles are the interactors as well as the replicators, while
in MLS2, both the upper level collectives as well as the particles are
interactors. Thus, the issues surrounding evolutionary transitions
involve both the interactor question and the replicator question.
Understanding evolutionary transitions hence provides additional
significance to Damuth and Heisler's distinction:
>
>
> Rather than simply describing selection processes of different sorts,
> which should be kept separate in the interests of conceptual clarity,
> MLS1 and MLS2 represent different *temporal* stages of an
> evolutionary transition. (Okasha 2006: 239)
>
>
>
On a different approach, evolutionary transitions are seen as the
appearance of a "new kind of Darwinian population", of
"new entities that can enter into Darwinian processes in their
own right" (Godfrey-Smith 2009: 122). These transitions involve
a "de-Darwinizing" of the lower-level entities such that
>
>
> an initial collective has come to engage in definite high-level
> reproduction, and this has involved the curtailing of independent
> evolution at the lower level. (Godfrey-Smith 2009: 124)
>
>
>
This can be accomplished in a variety of ways, such as through the
bottleneck caused by the production of new collectives from single
individuals, coupled with germ-line segregation (as in the transitions
to multicellularity), or by a single member of the collective
preventing all other members from reproducing (for example, among
eusocial insects), or by a single member having primary but not total
control over the other constituents (as in the evolution of
eukaryotes) (Godfrey-Smith 2009: 123-124).
These processes all involve restrictions on the ability of the
lower-level entities to function as interactors and replicators, and
the emergence of upper-level collectives as both interactors and
replicators. The degree to which lower-level entities are thus
restricted can vary. For example, somatic cells are still capable of
bearing individual fitness, of outcompeting neighboring cells, and of
producing more progeny. Thus, they are not yet
"post-populational"; they "retain crucial Darwinian
features in their own right" (Godfrey-Smith 2009: 126). However,
they are dependent on the germ-line cells for the propagation of new
*collectives*, and thus their ability to act as replicators is
necessarily curtailed. Thus, in order to prevent subversion and
encourage cooperation, such a transition requires both the
"*generation of benefit*" and the
"*alignment of reproductive interests*"
(Godfrey-Smith 2009: 124, with terminology from Calcott 2008; see
Booth's 2014 analysis of heterokaryotic Fungi using
Godfrey-Smith's approach). For example, in the case of
multicellularity, the latter can be accomplished by "close
kinship within the collective" (Godfrey-Smith 2009: 124).
In a useful analysis of the volvocine algae, other hierarchical
selectionists use optimality modeling at the group level to search for
a group level adaptation, in aid of modeling evolutionary transitions
(Shelton & Michod 2014b). They look for selection and adaptation
at the higher level in their model of transition, in contrast with
other views that look only for selection at the higher level, but not
for engineering adaptations. The emphasis is on the distinction
between fortuitous group benefit and real group adaptation. In places,
however, they seem to embrace the product-of-selection definition of
group adaptation, even though they are committed to denying its
applicability (2014b: 454). Their point is to decompose levels of
selection and adaptation using a model organism to get evolutionary
emergence of levels, i.e., evolutionary transition.
Thus, there are a variety of philosophical approaches to analyzing
evolutionary transition on offer, whether in terms of reproducers,
multilevel selection, or Darwinian populations. The essential
diachronic nature of the problem poses a unique challenge, and
involves not just the interactor and replicator (or reproducer)
questions, but also the questions of who is the beneficiary of the
selection process, and how that new level emerges.
## 4. Conclusion
It makes no sense to treat different answers as competitors if they
are answering different questions. We have reviewed a framework of
four questions with which the debates appearing under the rubric of
"units of selection" can be classified and clarified. The
original discussants of the units of selection problem separated the
classic question about the level of selection or interaction (the
interactor question) from the issue of how large a chunk of the genome
functions as a replicating unit (the replicator question). The
interactor question should also be separated from the question of
which entity should be seen as acquiring adaptations as a result of
the selection process (the manifestor-of-adaptation question). In
addition, there is a crucial ambiguity in the meaning of adaptation
that is routinely ignored in these debates: adaptation as a selection
product and adaptation as an engineering design. Finally, we can
distinguish the issue of the entity that ultimately benefits from the
selection process (the beneficiary question) from the other three
questions.
This set of distinctions has been used to analyze leading points of
view about the units of selection and to clarify precisely the
question or combination of questions with which each of the
protagonists is concerned. There are many points in the debates in
which misunderstandings may be avoided by a precise characterization
of which of the units of selection questions is being addressed. |
## 1. Historical development in philosophy and science from Greek philosophy to Logical Empiricism in America
### 1.1 From Greek thought to Western science
Unity has a history as well as a logic. Different formulations and
debates express intellectual and other resources and interests in
different contexts. Questions about unity belong partly in a tradition
of thought that can be traced back to pre-Socratic Greek cosmology, in
particular to the preoccupation with the question of the One and the
Many. In what senses is the world and, as a result, our knowledge of
it one? A number of representations of the world in terms of a few
simple constituents that were considered fundamental emerged:
Parmenides' static substance, Heraclitus' flux of
becoming, Empedocles' four elements, Democritus' atoms,
Pythagoras' numbers, Plato's forms, and Aristotle's
categories. The underlying question of the unity of our types of
knowledge was explicitly addressed by Plato in the *Sophist* as
follows: "Knowledge also is surely one, but each part of it that
commands a certain field is marked off and given a special name proper
to itself. Hence language recognizes many arts and many forms of
knowledge" (*Sophist*, 257c). Aristotle asserted in
*On the Heavens* that knowledge concerns what is primary, and
different "sciences" know different kinds of causes; it is
metaphysics that comes to provide knowledge of the underlying
kind.
With the advent and expansion of Christian monotheism, the
organization of knowledge reflected the idea of a world governed by
the laws dictated by God, its creator and legislator. From this
tradition emerged encyclopedic efforts such as the
*Etymologies*, compiled in the sixth century by the Andalusian
Isidore, Bishop of Seville, the works of the Catalan Ramon Llull in
the Middle Ages and those of the Frenchman Petrus Ramus in the
Renaissance. Llull introduced iconic tree-diagrams and
forest-encyclopedias representing the organization of different
disciplines including law, medicine, theology and logic. He also
introduced more abstract diagrams--not unlike some found in
Cabbalistic and esoteric traditions--in an attempt to
combinatorially encode the knowledge of God's creation in a
universal language of basic symbols. Their combination would be
expected to generate knowledge of the secrets of creation and help
articulate knowledge of universal order (*mathesis
universalis*), which would, in turn, facilitate communication with
different cultures and their conversion to Christianity. Ramus
introduced diagrams representing dichotomies and gave prominence to
the view that the starting point of all philosophy is the
classification of the arts and sciences. The encyclopedic organization
of knowledge served the project of its preservation and
communication.
The emergence of a distinctive tradition of scientific thought
addressed the question of unity through the designation of a
privileged method, which involved a privileged language and set of
concepts. Formally, at least, it was modeled after the Euclidean ideal
of a system of geometry. In the late-sixteenth century, Francis Bacon
held that one unity of the sciences was the result of our organization
of records of discovered material facts in the form of a pyramid with
different levels of generalities. These could be classified in turn
according to disciplines linked to human faculties. Concomitantly, the
controlled interaction with phenomena of study characterized so-called
experimental philosophy. In accordance with at least three
traditions--the Pythagorean tradition, the Bible's dictum
in the Book of Wisdom and the Italian commercial tradition of
bookkeeping--Galileo proclaimed at the turn of the seventeenth
century that the Book of Nature had been written by God in the
language of mathematical symbols and geometrical truths, and in it,
the story of Nature's laws was told in terms of a reduced set of
objective, quantitative primary qualities: extension, quantity of
matter and motion. A persisting rhetorical role for some form of
theological unity of creation should not be neglected when considering
pre-twentieth-century attempts to account for the possibility and
desirability of some form of scientific knowledge. Throughout the
seventeenth century, mechanical philosophy and Descartes' and
Newton's systematization from basic concepts and first laws of
mechanics became the most promising framework for the unification of
natural philosophy. After the demise of Laplacian molecular physics in
the first half of the nineteenth century, this role was taken over by
ether mechanics and, unifying forces and matter, energy physics.
### 1.2 Rationalism and Enlightenment
Descartes and Leibniz gave this tradition a rationalist twist that was
centered on the powers of human reason and the ideal of system of
knowledge founded on rational principles. It became the project of a
universal framework of exact categories and ideas, a *mathesis
universalis* (Garber 1992; Gaukroger 2002). Adapting the
scholastic image of knowledge, Descartes proposed an image of a tree
in which metaphysics is depicted by the roots, physics by the trunk,
and the branches depict mechanics, medicine and morals. Leibniz
proposed a *general science* in the form of a *demonstrative
encyclopedia*. This would be based on a "catalogue of simple
thoughts" and an algebraic language of symbols,
*characteristica universalis*, which would render all knowledge
demonstrative and allow disputes to be resolved by precise
calculation. Both defended the program of founding much of physics on
metaphysics and ideas from life science (Smith 2011) (Leibniz's
unifying ambitions with symbolic language and physics extended beyond
science, to settle religious and political fractures in Europe). By
contrast, while sharing a model of a geometric, axiomatic structure of
knowledge, Newton's project of natural philosophy was meant to
be autonomous from a system of philosophy and, in the new context,
still endorsed, for its model of organization and its empirical
reasoning, values of formal synthesis and ontological simplicity (see
the entry on
Newton;
also Janiak 2008).
Belief in the unity of science or knowledge, along with the
universality of rationality, was at its strongest during the European
Enlightenment. The most important expression of the encyclopedic
tradition came in the mid-eighteenth century from Diderot and
D'Alembert, editors of the *Encyclopédie, ou
dictionnaire raisonné des sciences, des arts et des
métiers* (1751-1772). Following earlier
classifications by Nichols and Bacon, their diagram presenting the
classification of intellectual disciplines was organized in terms of a
classification of human faculties. Diderot stressed in his own entry,
"Encyclopaedia", that the word signifies the unification
of the sciences. The function of the encyclopedia was to exhibit the
unity of human knowledge. Diderot and D'Alembert, in contrast to
Leibniz, made classification by subject the primary focus, and
introduced cross-references instead of logical connections. The
Enlightenment tradition in Germany culminated in Kant's critical
philosophy.
### 1.3 German tradition since Kant
For Kant, one of the functions of philosophy was to determine the
precise unifying scope and value of each science. For him, the unity
of science is not the reflection of a unity found in nature, or, even
less, assumed in a real world behind the apparent phenomena. Rather,
it has its foundations in the unifying a priori character or function
of concepts, principles and of Reason itself. Nature is precisely our
experience of the world under the universal laws that include such
concepts. And science, as a system of knowledge, is "a whole of
cognition ordered according to principles", and the principles
on which proper science is grounded are a priori (Preface to
*Metaphysical Foundations of Natural Science*). A devoted but
not exclusive follower of Newton's achievements and insights, he
maintained through most of his life that mathematization and a priori
universal laws given by the understanding were preconditions for
genuine scientific character (like Galileo and Descartes earlier, and
Carnap later, Kant believed that mathematical exactness constituted
the main condition for the possibility of objectivity). Here Kant
emphasized the role of mathematics coordinating a priori cognition and
its determined objects of experience. Thus, he contrasted the methods
employed by the chemist, a "systematic art" organized by
empirical regularities, with those employed by the mathematician or
physicist, which were organized by a priori laws, and he held that
biology is not reducible to mechanics--as the former involves
explanations in terms of final causes (see *Critique of Pure
Reason*, *Critique of Judgment* and *Metaphysical
Foundations of Natural Science*). With regards to
biology--insufficiently grounded in the fundamental forces of
matter--its inclusion requires the introduction of the idea of
purposiveness (McLaughlin 1991). More generally, for Kant, unity was a
regulative principle of reason, namely, an ideal guiding the process
of inquiry toward a complete empirical science with its empirical
concepts and principles grounded in the so-called concepts and
principles of the understanding that constitute and objectify
empirical phenomena. (On systematicity as a distinctive aspect of this
ideal and on its origin in reason, see Kitcher 1986 and
Hoyningen-Huene 2013).
Kant's ideas set the frame of reference for discussions of the
unification of the sciences in German thought throughout the
nineteenth century (Wood and Hahn 2011). He gave philosophical
currency to the notion of worldview (*Weltanschauung*) and,
indirectly, world-picture (*Weltbild*), establishing among
philosophers and scientists the notion of the unity of science as an
intellectual ideal. From Kant, German-speaking Philosophers of Nature
adopted the image of Nature in terms of interacting forces or powers
and developed it in different ways; this image found its way to
British natural philosophy. In Great Britain this idealist, unifying
spirit (and other notions of an idealist and romantic turn) was
articulated in William Whewell's philosophy of science. Two
unifying dimensions are these: his notion of mind-constructed
fundamental ideas, which form the basis for organizing axioms and
phenomena and classifying sciences, and the argument for the reality
of explanatory causes in the form of *consilience of
inductions*, wherein a single cause is independently arrived at
as the hypothesis explaining different kinds of phenomena.
In face of expanding research, the unifying emphasis on organization,
classification and foundation led to exploring differences and
rationalizing boundaries. The German intellectual current culminated
in the late-nineteenth century in the debates among philosophers such
as Windelband, Rickert and Dilthey. In their views and those of
similar thinkers, a worldview often included elements of evaluation
and life meaning. Kant had established the basis for the famous
distinction between the natural sciences
(*Naturwissenschaften*) and the cultural, or social, sciences
(*Geisteswissenschaften*) popularized in theory of science by
Wilhelm Dilthey and Wilhelm Windelband. Dilthey, Windelband, his
student Heinrich Rickert, and Max Weber (although the first two
preferred *Kulturwissenschaften*, which excluded psychology)
debated over how differences in subject matter between the two kinds
of sciences forced a distinctive difference between their respective
methods. Their preoccupation with the historical dimension of the
human phenomena, along with the Kantian emphasis on the conceptual
basis of knowledge, led to the suggestion that the natural sciences
aimed at generalizations about abstract types and properties, whereas
the human sciences studied concrete individuals and complexes. The
human case suggested a different approach based on valuation and
personal understanding (Weber's *verstehen*). For
Rickert, individualized concept formation secured knowledge of
historical individuals by establishing connections to recognized
values (rather than personal valuations). In biology, Ernst Haeckel
defended a monistic worldview (Richards 2008).
The *Weltbild* tradition influenced the physicists Max Planck
and Ernst Mach, who engaged in a heated debate about the precise
character of the unified scientific world-picture. Mach's more
influential view was both phenomenological and Darwinian: the
unification of knowledge took the form of an analysis of ideas into
biologically embodied elementary sensations (neutral monism) and was
ultimately a matter of adaptive economy of thought. Planck adopted a
realist view that took science to gradually approach complete truth
about the world, and he fundamentally adopted the thermodynamical
principles of energy and entropy (on the Mach-Planck debate see
Toulmin 1970). These world-pictures constituted some of the
alternatives to a long-standing mechanistic view that, since the rise
of mechanistic philosophy with Descartes and Newton, had informed
biology as well as most branches of physics. In the background was the
perceived conflict between the so-called mechanical and
electromagnetic worldviews, which resulted throughout the first two
decades of the twentieth century in the work of Albert Einstein
(Holton 1998).
In the same German tradition, and amidst the proliferation of work on
energy physics and books on unity of science, the German energeticist
Wilhelm Ostwald declared the twentieth century the "Monistic
century". During the 1904 World's Fair in St. Louis, the
German psychologist and Harvard professor Hugo Münsterberg organized a
congress under the title "Unity of Knowledge"; invited
speakers were Ostwald, Ludwig Boltzmann, Ernest Rutherford, Edward
Leamington Nichols, Paul Langevin and Henri Poincaré. In 1911,
the International Committee of Monism held its first meeting in
Hamburg, with Ostwald
presiding.[1]
Two years later it published Ostwald's monograph, *Monism as
the Goal of Civilization*. In 1912, Mach, Felix Klein, David
Hilbert, Einstein and others signed a manifesto aiming at the
development of a comprehensive worldview. Unification remained a
driving scientific ideal. In the same spirit, Mathieu Leclerc du
Sablon published his *L'Unité de la Science*
(1919), exploring metaphysical foundations, and Johan Hjorst published
*The Unity of Science* (1921), sketching out a history of
philosophical systems and unifying scientific hypotheses.
### 1.4 Positivism and logical empiricism
The German tradition stood in opposition to the prevailing empiricist
views that, since the time of Hume, Comte and Mill, held that the
moral or social sciences (even philosophy) relied on conceptual and
methodological analogies with geometry and the natural sciences, not
just astronomy and mechanics, as well as with biology. In the Baconian
tradition, Comte emphasized a pyramidal hierarchy of disciplines in
his "encyclopedic law" or order, from the most general
sciences about the simplest phenomena to the most specific sciences
about the most complex phenomena, each depending on knowledge from its
more general antecedent: from inorganic physical sciences (arithmetic,
geometry, mechanics, astronomy, physics and chemistry) to the organic
physical ones, such as biology and the new "social
physics", soon to be renamed sociology (Comte 1830-1842).
Mill, instead, pointed to the diversity of methodologies for
generating, organizing and justifying associated knowledge with
different sciences, natural and human, and the challenges to impose a
single standard (Mill 1843, Book VI). He came to view political
economy eventually as an art, a tool for reform more than a system of
knowledge (Snyder 2006).
Different yet connected currents of nineteenth-century positivism,
first in European philosophy in the first half of the
century--Auguste Comte, J.S. Mill and Herbert Spencer--and
subsequently in North American philosophy--John Fiske, Chauncey
Wright and William James--arose out of intellectual tensions
between metaphysics and the sciences and identified positivism as
synthetic and scientific philosophy; accordingly, they were concerned
with the ordering and unification of knowledge through the
organization of the sciences. The synthesis was either of methods
alone--Mill and Wright--or else also of
doctrines--Comte, Spencer and Fiske. Some used the term
"system" especially in relation to a common logic or
method (Pearce 2015).
In the twentieth century the unity of science became a distinctive
theme of the scientific philosophy of logical empiricism (Cat 2021).
The question of unity engaged science and philosophy alike. In their
manifesto, logical empiricists--known controversially also as
logical positivists--and most notably the founding members of the
Vienna Circle, adopted the Machian banner of "unity of science
without metaphysics". This was a normative criterion of unity
with a role in social reform based on the demarcation between science
and metaphysics: a unity of method and language that included all the
sciences, natural and social. A common method did not necessarily
imply a more substantive unity of content involving theories and their
concepts.
A stronger reductive model within the Vienna Circle was recommended by
Rudolf Carnap in his *The Logical Construction of the World*
(1928). While embracing the Kantian connotation of the term
"constitutive system", it was inspired by recent formal
standards: Hilbert's axiomatic approach to formulating theories
in the exact sciences and Frege's and Russell's logical
constructions in mathematics. It was also predicated on the formal
values of simplicity, rationality, (philosophical) neutrality and
objectivity associated with scientific knowledge. In particular,
Carnap tried to explicate such notions in terms of a rational
reconstruction of science in terms of a method and a structure based
on logical constructions out of (1) basic concepts in axiomatic
structures and (2) rigorous, reductive logical connections between
concepts at different levels.
Different constitutive systems or logical constructions would serve
different (normative) purposes: a theory of science and a theory of
knowledge. Both foundations raised the issue of the nature and
universality of a physicalist language.
One such system of unified science is the theory of science, in which
the construction connects concepts and laws of the different sciences
at different levels, with physics and its genuine laws as fundamental,
lying at the base of the hierarchy. Because of the emphasis on the
formal and structural properties of our representations, objectivity,
rationality and unity go hand in hand. Carnap's formal emphasis
developed further in *Logical Syntax of Language* (1934).
Alternatively, all scientific concepts could be constituted or
constructed in a different system in the protocol language out of
classes of elementary complexes of experiences, scientifically
understood, representing experiential concepts. Carnap subsequently
defended the epistemological and methodological universality of
physicalist language and physicalist statements. The unity of science
in this context was an epistemological project (for a survey of the
epistemological debates, see Uebel 2007; on different strands of the
anti-metaphysical normative project of unity see Frost-Arnold
2005).
Whereas Carnap aimed at rational reconstructions, another member of
the Vienna Circle, Otto Neurath, favored a more naturalistic and
pragmatic approach, with a less idealized and reductive model of
unity. His evolving standards of unity were generally motivated by the
complexity of empirical reality and the application of empirical
knowledge to practical goals. He spoke of an
"encyclopedia-model", opposed to the classic ideal of a
pyramidal, reductive "system-model". The
encyclopedia-model took into account the presence within science of
ineliminable and imprecise terms from ordinary language and the social
sciences and emphasized a unity of language and the local exchanges of
scientific tools. Specifically, Neurath stressed the material-thing
language called "physicalism", not to be confounded with
the emphasis on the vocabulary of physics. Its motivation was partly
epistemological, and Neurath endorsed anti-foundationalism: unified
science, like a boat at sea, would never rest on firm foundations. The
scientific spirit abhorred dogmatism. This weaker model of unity
emphasized empiricism and the normative unity of the natural and the
human sciences.
Like Carnap's unified reconstructions, Neurath's had
pragmatic motivations. Unity without reductionism provided a tool for
cooperation and it was motivated by the need for successful
treatment--prediction and control--of complex phenomena in
the real world that involved properties studied by different theories
or sciences (from real forest fires to social policy): unity of
science at the point of action. It is an argument from holism, the
counterpart of Duhem's claim that only clusters of hypotheses
are confronted with experience. Neurath spoke of a "boat",
a "mosaic", an "orchestration", and a
"universal jargon". Following institutions such as the
International Committee on Monism and the International Council of
Scientific Unions, Neurath spearheaded a movement for Unity of Science
in 1934 that encouraged international cooperation among scientists and
launched the project of an International Encyclopedia of Unity of
Science. It expressed the internationalism of his socialist
convictions and the international crisis that would lead to the Second
World War (Kamminga and Somsen 2016).
At the end of the Eighth International Congress of Philosophy, held in
Prague in September of 1934, Neurath proposed a series of
International Congresses for the Unity of Science. These took place in
Paris, 1935; Copenhagen, 1936; Paris, 1937; Cambridge, England, 1938;
Cambridge, Massachusetts, 1939; and Chicago, 1941. For the
organization of the congresses and related activities, Neurath founded
the Unity of Science Institute in 1936 (renamed in 1937 as the
International Institute for the Unity of Science) alongside the
International Foundation for Visual Education, founded in 1933. The
Institute's executive committee was composed of Neurath, Philip
Frank and Charles Morris.
After the Second World War, a discussion of unity engaged philosophers
and scientists in the Inter-Scientific Discussion Group, first as the
Science of Science Discussion Group, in Cambridge, Massachusetts,
founded in October 1940 primarily by Philip Frank and Carnap
(themselves founders of the Vienna Circle), Quine, Feigl, Bridgman,
and the psychologists E. Boring and S.S. Stevens. This would later
become the Unity of Science Institute. The group was joined by
scientists from different disciplines, from quantum mechanics (Kemble
and Van Vleck) and cybernetics (Wiener) to economics (Morgenstern), as
part of what was both a self-conscious extension of the Vienna Circle
and a reflection of local concerns within a technological culture
increasingly dominated by the interest in computers and nuclear power.
The characteristic features of the new view of unity were the ideas of
consensus and, subsequently, especially within the USI,
cross-fertilization. These ideas were instantiated in the emphasis on
scientific operations (operationalism) and the creation of war-boosted
cross-disciplines such as cybernetics, computation, electro-acoustics,
psycho-acoustics, neutronics, game theory and biophysics (Galison
1998; Hardcastle 2003).
In the late 1960s, Michael Polanyi and Marjorie Grene organized a
series of conferences funded by the Ford Foundation on unity of
science themes (Grene 1969a, 1969b, 1971). Their general character was
interdisciplinary and anti-reductionist. The group was originally
called "Study Group on Foundations of Cultural Unity", but
this was later changed to "Study Group on the Unity of
Knowledge". By then, a number of American and international
institutions were already promoting interdisciplinary projects in
academic areas (Klein 1990). For both Neurath and Polanyi, the
organization of knowledge and science, the Republic of Science, was
inseparable from ideals of political organization.
Over the last four decades much historical and philosophical
scholarship has challenged the ideals of monism and reductionism and
pursued a growing critical interest in pluralism and interdisciplinary
collaboration (see below). Along the way, the distinction between the
historical human sciences and ahistorical natural sciences has
received much critical and productive attention. One outcome has been
the application and development of concepts and accounts in philosophy
of history in understanding different human and natural sciences as
historical, with a special focus on the epistemic role of unifying
narratives and standards and problems in disciplines such as
archeology, geology, cosmology and paleontology (see, for instance,
Morgan and Wise 2017; Currie 2019). Another outcome has been a renewed
critique of the autonomy of the social sciences. One concern is, for
instance, the rising social and economic value of the natural
sciences and their influence on the models and methods of the social
sciences; another is the influence on both of certain particular
political and economic views. Other related concerns are the role of
the social sciences in political life and the assumption that humans
are unique and superior to animals and emerging technologies (see, for
instance, van Bouwel 2009b; Kincaid and van Bouwel 2023).
## 2. Varieties of Unity
The historical introductory sections have aimed to show the
intellectual centrality, varying formulations, and significance of the
concept of unity. The rest of the entry presents a variety of modern
themes and views. It will be helpful to introduce a number of broad
categories and distinctions that can sort out different kinds of
accounts and track some relations between them, as well as additional
significant philosophical issues. (The categories are not mutually
exclusive, and they sometimes partly overlap. Therefore, while they
help label and characterize different positions, they cannot provide a
simple, easy and neatly ordered conceptual map.)
Connective unity is a weaker and more general notion than the specific
ideal of *reductive unity*; this requires asymmetric relations
of reduction (see below), which typically rely on assumptions about
hierarchies of levels of description and the primacy--conceptual,
ontological, epistemological and so on--of a fundamental
representation. The category of connective unity helps accommodate and
bring attention to the diversity of non-reductive accounts.
Another useful distinction is between *synchronic* and
*diachronic unity*. Synchronic accounts are ahistorical,
assuming no meaningful temporal relations. Diachronic accounts, by
contrast, introduce genealogical hypotheses involving asymmetric
temporal and causal relations between entities or states of the
systems described. Evolutionary models are of this kind; they may be
reductive to the extent that the posited original entities are simpler
and on a lower level of organization and size. Others simply emphasize
connection without overall directionality.
In general, it is useful to distinguish between *ontological
unity* and *epistemological unity*, even if many accounts
bear both characteristics and fall under both rubrics. In some cases,
one kind supports the other salient kind in the model. Ontological
unity is here broadly understood as involving relations between
descriptive conceptual elements; in some cases, the concepts will
describe entities, facts, properties or relations, and descriptive
models will focus on metaphysical aspects of the unifying connections
such as holism, emergence, or downwards causation. Epistemological
unity applies to epistemic relations or goals such as explanation.
Methodological connections and formal (logical, mathematical, etc.)
models may belong in this kind. This article does not draw any strict
or explicit distinction between epistemological and methodological
dimensions or modes of unity.
Additional possible categories and distinctions include the following:
*vertical unity* or *inter-level unity* is unity of
elements attached to levels of analysis, composition or organization
on a hierarchy, whether for a single science or more, whereas
*horizontal unity* or *intra-level unity* applies to one
single level and to its corresponding kind of system (Wimsatt 2007).
*Global unity* is unity of any other variety with a universal
quantifier of all kinds of elements, aspects or descriptions
associated with individual sciences as a kind of *monism*, for
instance, taxonomical monism about natural kinds, while *local
unity* applies to a subset. (Cartwright has distinguished this
same-level global form of reduction, or "imperialism", in
Cartwright 1999; see also Mitchell 2003). Obviously, vertical and
horizontal accounts of unity can be either global or local. Finally,
the rejection of global unity has been associated with both
*isolationism*--keeping independent competing alternative
representations of the same phenomena or systems--as well as with
*local integration*--the local connective unity of the
alternative perspectives. A distinction of methodological nature
contrasts *internal* and *external* perspectives,
according to whether the accounts are based naturalistically on the
local contingent practices of certain scientific communities at a
given time or based on universal metaphysical assumptions broadly
motivated (Ruphy 2017). (Ruphy has criticized Cartwright and
Dupré for having adopted external metaphysical positions, and has
defended the internal perspective, also present in the program of the
so-called Minnesota School, i.e., Kellert et al. 2006.)
## 3. Epistemological unities
### 3.1 Reduction
The project of unity, as mentioned above, has long guided models of
scientific understanding by privileging descriptions of more
fundamental entities and phenomena such as the powers and behaviors of
atoms, molecules and machines. Since at least the 1920s and until the
1960s, unity was understood in terms of formal approaches to theories,
of semantic relations between vocabularies and of logical relations
between linguistic statements in those vocabularies. The ideal of
unity was formulated, accordingly, in terms of reduction relations to
more fundamental terms and statements. Different accounts have since
stood and fallen with any such commitments. Also, note that
identifying relations of *reduction* must be distinguished from
the ideal of *reductionism.* Reductionism is the adoption of
reduction relations as the global ideal or standard of a properly
unified structure of scientific knowledge; proximity to that ideal was
considered a measure of a science's unifying progress.
In general, reduction reconciles ontological and epistemological
considerations of identity and difference. Claims such as
"mental states are reducible to neurochemical states",
"chemical differences reduce to differences between atomic
structures" and "optics can be reduced to
electromagnetism" imply the truth of identity
statements--a strong ontological reduction. However, the
relations of reduction between entities or properties in such claims
also get additional epistemic value from the expressed semantic
diversity of conceptual content and the asymmetry of the relation
described in such claims between the reducing and the reduced
claims--concepts, laws, theories, etc. (Riel 2014).
Elimination and eliminativism are the extreme forms of reduction and
reductionism that commit to aligning the epistemic content with the
ontological content fixed by the identity statements: since only x
exists and claims about x best meet scientific goals, only talk of x
merits scientific consideration and use.
Two formulations of unification in the logical positivist tradition of
the ideal logical structure of science placed the question of unity at
the core of philosophy of science: Carl Hempel's
deductive-nomological model of explanation and Ernst Nagel's
model of reduction. Both are fundamentally epistemological models, and
both are specifically explanatory, at least in the sense that
explanation serves unification. The emphasis on language and logical
structure makes explanatory reduction a form of unity of the
synchronic kind. Still, Nagel's model of reduction is a model of
scientific structure and explanation as well as of scientific
progress. It is based on the problem of relating different theories as
different sets of theoretical predicates (Nagel 1961).
Reduction requires two conditions: *connectability* and
*derivability*. Connectability of laws of different theories
requires *meaning invariance* in the form of extensional
equivalence between descriptions, with bridge principles between
coextensive but distinct terms in different theories.
Nagel's account distinguishes two kinds of reductions:
*homogenous* and *heterogeneous*. When both sets of
terms overlap, the reduction is homogeneous. When the related terms
are different, the reduction is heterogeneous. Derivability requires a
deductive relation between the laws involved. In the quantitative
sciences, the derivation often involves taking a limit. In this sense
the reduced science is considered an *approximation* to the
reducing new one.
Neo-Nagelian accounts have attempted to solve Nagel's problem of
reduction between putatively incompatible theories. The following
paragraphs list a few.
Nagel's two-term relation account has been modified by weaker
conditions of analogy and a role for conventions, requiring it to be
satisfied not necessarily by the two original theories, \(T\_1\) and
\(T\_2\), which are respectively new and old and more and less general,
but by the modified theories \(T'\_1\) and \(T'\_2\). Explanatory
reduction is strictly a four-term relation in which \(T'\_1\) is
"strongly analogous" to \(T\_1\) and, with the insight that the more
fundamental theory can offer, corrects the older theory \(T\_2\),
changing it to \(T'\_2\). Nagel's account also requires
that bridge laws be synthetic identities, in the sense that they be
factual, empirically discoverable and testable; in weaker accounts,
admissible bridge laws may include elements of convention (Schaffner
1967; Sarkar 1998). The difficulty lay especially with the task of
specifying or giving a non-contextual, transitive account of the
relations between \(T\) and \(T'\) (Wimsatt 1976).
An alternative set of semantic and syntactic conditions of reduction
bears counterfactual interpretations. For instance, syntactic
conditions may take the form of limit relations (e.g., classical
mechanics reduces to relativistic mechanics in the limit of the speed
of light \(c \rightarrow \infty\) and to quantum mechanics in the
limit of the Planck constant \(h \rightarrow 0\)) and *ceteris
paribus* assumptions
(e.g., when optical laws can be identified with results of the theory
of electromagnetism); then, each condition helps explain why the
reduced theory works where it does and fails where it does not
(Glymour 1969).
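The first kind of limit relation can be illustrated numerically. The following sketch (an illustration, not from the source; the function names, masses and speeds are invented for the example) shows the relativistic kinetic energy \((\gamma - 1)mc^2\) approaching the classical \(\tfrac{1}{2}mv^2\) as the ratio \(v/c\) is made small:

```python
import math

# A minimal numerical sketch (not from the source): the relativistic
# kinetic energy (gamma - 1) * m * c^2 approaches the classical
# (1/2) * m * v^2 as v/c -> 0, illustrating a limit relation between
# the reducing and the reduced theory. Units are arbitrary.

def kinetic_relativistic(m, v, c):
    """Relativistic kinetic energy for mass m, speed v, light speed c."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

def kinetic_classical(m, v):
    """Classical kinetic energy (1/2) m v^2."""
    return 0.5 * m * v ** 2

m, v = 1.0, 10.0
# As c grows relative to v, the two values converge.
for c in (30.0, 300.0, 3000.0):
    print(c, kinetic_relativistic(m, v, c), kinetic_classical(m, v))
```

The divergence at small \(c/v\), and the convergence at large \(c/v\), is what the *ceteris paribus* reading exploits: the limit explains both where the reduced theory works and where it fails.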
A different approach to reductionism acknowledges a commitment to
providing explanation but rejects the value of a focus on the role of
laws. This approach typically draws a distinction between hard
sciences such as physics and chemistry and special sciences such as
biology and the social sciences. It claims that laws that are in a
sense operative in the hard sciences are not available in the special
ones, or play a more limited and weaker role, and this on account of
historical character, complexity or reduced scope. The rejection of
empirical laws in biology, for instance, has been argued on grounds of
historical dependence on contingent initial conditions (Beatty 1995),
and as matter of supervenience (see the entry on
supervenience)
of spatio-temporally restricted functional claims on lower-level
molecular ones, and the multiple realization (see the entry on
multiple realizability)
of the former by the latter (Rosenberg 1994; Rosenberg's
argument from supervenience to reduction without laws must be
contrasted with Fodor's physicalism about the special sciences
about laws without reduction (see below and the entry on
physicalism);
for a criticism of these views see Sober 1996). This non-Nagelian
approach assumes further that explanation rests on identities between
predicates and deductive derivations (reduction and explanation might
be said to be justified by derivations, but not constituted by them;
see Spector 1978). Explanation is provided by lower-level mechanisms;
their explanatory role is to replace final why-necessarily questions
(functional) with proximate how-possibly questions (molecular).
One suggestion to make sense of the possibility of the supervening
functional explanations without Nagelian reduction may rely on an
ontological type of relation of reduction such as the composition of
powers in explanatory mechanisms (Gillett 2010, 2016). The reductive
commitment to the lower level is based on relations of composition, at
play in epistemological analysis and metaphysical synthesis, but is
merely formal and derivational. We may be able to infer what composes
the higher level, but we cannot simply get all the relevant knowledge
of the higher level from our knowledge of the lower level (see also
Auyang 1998). This kind of proposal points to epistemic
anti-reductionism.
Note that significant proposals of reductive unity rely on ontological
assumptions about some hierarchy of levels organizing reality and
scientific theories of it (more below).
A more general characterization views reductionism as a research
strategy. On this methodological view reductionism can be
characterized by a set of so-called heuristics (non-algorithmic,
efficient, error-based, purpose-oriented, problem-solving tasks)
(Wimsatt 2006): heuristics of conceptualization (e.g., descriptive
localization of properties, system-environment interface determinism,
level and entity-dependence), heuristics of model-building and theory
construction (e.g., model intra-systemic localization with emphasis of
structural properties over functional ones, contextual simplification
and external generalization) and heuristics of observation and
experimental design (e.g., focused observation, environmental control,
local scope of testing, abstract shared properties, behavioral
regularity and context-independence of results).
### 3.2 Antireductionism
From the 1930s on, the focus was on a syntactic approach, with physics
as the paradigm of science, deductive logical relations as the form of
cognitive or epistemic goals such as explanation and prediction, and
theory and empirical laws as paradigmatic units of scientific
knowledge (Suppe 1977; Grünbaum and Salmon 1988). The historicist
turn in the 1960s, the semantic turn in philosophy of science in the
1970s and a renewed interest in the special sciences have changed this
focus. The very structure of a hierarchy of levels has lost its
credibility, even for those who believe in it as a model of autonomy
of levels rather than as an image of fundamentalism. The rejection of
such models and their emendations have occupied the last four decades
of philosophical discussion about unity in and of the sciences
(especially in connection to psychology and biology, and more recently
chemistry). A valuable consequence has been the strengthening of
philosophical projects and communities devoting more sustained and
sophisticated attention to special sciences other than physics.
The first target of antireductionist attacks has been Nagel's
demand of extensional equivalence. It has been dismissed as an
inadequate demand of "meaning invariance" and approximation, and with
it the possibility of deductive connections has been dismissed as
well.
Mocking the positivist legacy of progress through unity, empiricism
and anti-dogmatism, these constraints have been decried as
intellectually dogmatic, conceptually weak and methodologically overly
restrictive (Feyerabend 1962). The emphasis is placed, instead, on the
merits of the new theses of incommensurability and methodological
pluralism.
A similar criticism of reduction involves a different move: that the
deductive connection be guaranteed provided that the old, reduced
theory was "corrected" beforehand (Schaffner 1967). The
evolution and the structure of scientific knowledge could be neatly
captured, using Schaffner's expression, by "layer-cake
reduction". The terms "length" and
"mass"--or the symbols \(l\) and \(m\)--for
instance, may be the same in Newtonian and relativistic mechanics, or
the term "electron" the same in classical physics and
quantum mechanics, or the term "atom" the same in quantum
mechanics and in chemistry, or "gene" in Mendelian
genetics and molecular genetics (see, for instance, Kitcher 1984). But
the corresponding concepts, it is argued, are not. Concepts or words
are to be understood as getting their content or meaning within a
holistic or organic structure, even if the organized wholes are the
theories that include them. From this point of view, different wholes,
whether theories or Kuhnian paradigms, manifest degrees of conceptual
incommensurability. As a result, the derived, reducing theories
typically are not the allegedly reduced, older ones; and their
derivation sheds no relevant insight into the relation between the
original, older one and the new (Feyerabend 1962; Sklar 1967).
From a historical standpoint, the positivist model collapsed the
distinction between synchronic and diachronic reduction, that is,
between reductive models of the structure and the evolution, or
succession, of scientific theories. By contrast, historicism, as
embraced by Kuhn and Feyerabend, drove a wedge between the two
dimensions and rejected the linear model of scientific change in terms
of accumulation and replacement. For Kuhn, replacement becomes partly
continuous, partly non-cumulative change in which one world--or,
less literally, one world-picture, one paradigm--replaces another
(after a revolutionary episode of crisis and proliferation of
alternative contenders) (Kuhn 1962). This image constitutes a form of
*pluralism*, and, like the reductionism it is meant to replace,
it can be either *synchronic* or *diachronic*. Here is
where Kuhn and Feyerabend parted ways. For Kuhn, synchronic pluralism
only describes the situation of crisis and revolution between
paradigms. For Feyerabend, history is less monistic, and pluralism is
and should remain a synchronic and diachronic feature of science and
culture (Feyerabend, here, thought science and society inseparable,
and followed Mill's philosophy of liberal individualism and
democracy).
A different kind of antireductionism addresses a more conceptual
dimension, the problem of *categorial reduction*:
Meta-theoretical categories of description and interpretation for
mathematical formalisms, e.g., criteria of causality, may block full
reduction. Basic interpretative concepts that are not just variables
in a theory or model are not reducible to counterparts in fundamental
descriptions (Cat 2000; the case of individuality in quantum physics
has been discussed in Healey 1991; Redhead and Teller 1991; Auyang
1995; and, in psychology, in Block 2003).
### 3.3 Epistemic roles: From demarcation to explanation and evidence. Varieties of connective unity. Aesthetic value
Unity has been considered an epistemic virtue and goal, with different
modes of unification associated with roles such as demarcation,
explanation and evidence. It can also be traced to synthetic cognitive
tasks of categorization and reasoning--also applied in the
sciences--especially through relations of similarity and
difference (Cat 2022b).
*Demarcation*. Certain models of unity, which we may call
container models, attempt to demarcate science from non-science. The
criteria adopted are typically methodological and normative, not
descriptive. Unlike connective models, they serve a dual function of
drawing up and policing a boundary that (1) encloses and endorses the
sciences and (2) excludes other practices. As noted above, some
demarcation projects have aimed to distinguish between natural and
special sciences. The more notorious ones, however, have aimed to
exclude practices and doctrines dismissed under the labels of
metaphysics, pseudo-science or popular knowledge. Empirical or not,
the applications of standards of epistemic purity are not merely
identification or labeling exercises for the sake of carving out
scientific inquiry as a natural kind or mapping out intellectual
landscapes. The purpose is to establish authority and the stakes
involve educational, legal and financial interests. Recent
controversies include not just the teaching of creation science, but
also polemics over the scientific status of, for instance, homeopathy,
vaccination and models of plant neurology and climate change.
The most influential demarcation criterion has been Popper's
original anti-metaphysics barrier: the condition of empirical
falsifiability of scientific statements. It required the logically
possible relation to basic statements, linked to experience, that can
prove general hypotheses to be false with certainty. For this purpose,
he defended the application of a particular deductive argument, the
*modus tollens* (Popper 1935/1951). Another demarcation
criterion is explanatory unity, empirically grounded. Hempel's
deductive-nomological model characterizes the scientific explanation
of events as a logical argument that expresses their expectability in
terms of their subsumption under an empirically testable
generalization. Explanations in the historical sciences too must fit
the model if they are to count as scientific. They could then be
brought into the fold as bona fide scientific explanations even if
they could qualify only as explanation sketches.
Since their introduction, Hempel's model and its weaker versions
have been challenged as neither generally applicable nor appropriate.
The demarcation criterion of unity is undermined by criteria of
demarcation between natural and historical sciences. For instance,
historical explanations have a genealogical or narrative form, or else
they require the historian's engaging problems or issuing a
conceptual judgment that brings together meaningfully a set of
historical facts (recent versions of such decades-old arguments are in
Cleland 2002, Koster 2009, Wise 2011). According to more radical
views, natural sciences such as geology and biology are historical in
their contextual, causal and narrative forms; as well, Hempel's
model, especially the requirement of empirically testable strict
universal laws, is satisfied by neither the physical sciences nor the
historical sciences, including archeology and biology (Ereshefsky
1992).
A number of legal decisions have appealed to Popper's and
Hempel's criteria, adding the epistemic role of peer review,
publication and consensus around the sound application of
methodological standards. A more recent criterion has sought a
different kind of demarcation: it is comparative rather than absolute;
it aims to compare science and popular science; it adopts a broader
notion of the German tradition of *Wissenschaften*, that is,
roughly of scholarly fields of research that include formal sciences,
natural sciences, human sciences and the humanities; and it emphasizes
the role of systematicity, with an emphasis on different forms of
epistemic connectedness as weak forms of coherence and order
(Hoyningen-Huene 2013).
*Explanation*. Unity has been defended in the wake of authors
such as Kant and Whewell as an epistemic criterion of explanation or
at least fulfilling an explanatory role. In other words, rather than
modeling unification in terms of explanation, explanation is modeled
in terms of unification. A number of proposals introduce an
explanatory measure in terms of the number of independent explanatory
laws or phenomena conjoined in a theoretical structure. On this
representation, unity contributes understanding and confirmation from
the fewest basic kinds of phenomena, regardless of explanatory power
in terms of derivation or argument patterns (Friedman 1974; Kitcher
1981; Kitcher 1989; Wayne 1996; within a probabilistic framework,
Myrvold 2003; Sober 2003; Roche and Sober 2017; see below).
A weaker position argues that unification is not explanation on the
grounds that unification is simply systematization of old beliefs and
operates as a criterion of theory-choice (Halonen and Hintikka
1999).
The unification account of explanation has been defended within a more
detailed cognitive and pragmatist approach. The key is to think of
explanations as question-answer episodes involving four
elements: the explanation-seeking question \(?P\) about \(P\), the
cognitive state \(C\) of the questioner/agent for whom \(P\) calls for
explanation, the answer \(A\), and the cognitive state \(C+A\) in
which the need for explanation of \(P\) has disappeared. A related
account models unity in the cognitive state in terms of the
comparative increase of coherence and elimination of spurious
unity--such as circularity or redundancy (Schurz 1999).
Unification is also based on information-theoretic transfer or
inference relations. Unification of hypotheses is only a virtue if it
unifies data. The last two conditions imply that unification yields
also empirical confirmation. Explanations are global increases in
unification in the cognitive state of the cognitive agent (Schurz
1999; Schurz and Lambert 1994).
The unification-explanation link can be defended on the grounds
that laws make unifying similarity expectable (hence
Hempel-explanatory), and this similarity becomes the content of a new
belief (Weber and Van Dyck 2002 contra Halonen and Hintikka 1999).
Unification is not the mere systematization of old beliefs. Contra
Schurz, they argue that scientific explanation is provided by novel
understanding of facts and the satisfaction of our curiosity (Weber
and Van Dyck 2002 contra Schurz 1999). In this sense, causal
explanations, for instance, are genuinely explanatory and do not
require an increase of unification.
A contextualist and pluralist account argues that understanding is a
legitimate aim of science that is pragmatic and not necessarily
formal, or a subjective psychological by-product of explanation (De
Regt and Dieks 2005). In this view explanatory understanding is
variable and can have diverse forms, such as causal-mechanical and
unification, without conflict (De Regt and Dieks 2005). In the same
spirit, Salmon linked unification to the epistemic virtue or goal of
explanation and distinguished between unification and
causal-mechanical explanation as forms of scientific explanatory
understanding (Salmon 1998).
Explanation may also provide unification of different fields of
research through explanatory dependence so that one uses the
other's results to explain one's own (Kincaid 1997).
The views on scientific explanation have evolved away from the formal
and cognitive accounts of the epistemic categories. Accordingly, the
source of understanding provided by scientific explanations has been
misidentified according to some (Barnes 1992). The genuine source for
important, but not all, cases lies in causal explanation or causal
mechanism (Cartwright 1983; Cartwright 1989; see also Glennan 1996;
Craver 2007). Mechanistic models of explanation have become entrenched
in philosophical accounts of the life sciences (Darden 2006; Craver
2007). As an epistemic virtue, the role of unification has been traced
to the causal form of the explanation, for instance, in statistical
regularities (Schurz 2015). The challenge extends to the alleged
extensional link between explanation on the one hand, and truth and
universality on the other (Cartwright 1983; Dupré 1993;
Woodward 2003). In this sense, explanatory unity, which rests on
metaphysical assumptions about components and their properties, also
involves a form of ontological or metaphysical unity. (For a
methodological criticism of external, metaphysical perspectives, see
Ruphy 2016.)
Similar criticisms extend to the traditionally formalist arguments in
physics about fundamental levels; there, unification fails to yield
explanation in the formal scheme based on laws and their symmetries.
Unification and explanation conflict on the grounds that in biology
and physics only causal mechanical explanations answering
why-questions yield understanding of the connections that contribute
to "true unification" (Morrison
2000;[2]
Morrison's choice of standard for evaluating the epistemic
accounts of unity and explanation and her focus on systematic
theoretical connections without reduction has not been without
critics, e.g., Wayne 2002; Plutynski 2005; Karaca
2012).[3]
*Methodology*. Unity has long been understood as a
methodological principle, primarily, but not exclusively, in
reductionist versions (e.g., for the case of biology, see Wimsatt
1976, 2006). This is different from the case of unity through
methodological prescriptions. One methodological criterion appeals to
the epistemic virtues of simplicity or parsimony, whether
epistemological or ontological (Sober 2003). As a formal probabilistic
principle of curve-fitting or average predictive accuracy, the
relevance of unity is objective. Unity plays the role of an empirical
background theory.
The methodological role of unification may track scientific progress.
Unlike in the case of the explanatory role of unification presented
above, this account of progress and interdisciplinarity relies on a
unifying role of explanation: as in evo-devo in biology, unification
is a process of advancement in which two fields of research are in the
process of unification through mutual explanatory relevance, that is,
when results in one field are required to pose and address questions
in the other, raising explananda and explanations (Nathan 2017).
Heuristic dependence involves one influencing, accommodating,
contributing to and addressing the other's research questions
(for examples relating chemistry and physics see Cat and Best
2023).
*Evidence*. As in the relation of unification to explanation,
unification is considered an epistemic criterion of evidence in
support of the unified account (for a non-probabilistic account of the
relation between unification and confirmation, see Schurz 1999). The
resulting evidence and demonstration may be called *synthetic*.
Synthetic evidence may be the outcome of synthetic
modes of reasoning that rely on assumptions of similarity and
difference, for instance in cases of robustness, cross-checking and
meta-analysis (Cat 2022b).
Like probabilistic models of explanation, recent formal discussions of
unity and coherence within the framework of Bayesianism place unity in
evidentiary reasoning (Forster and Sober 1994, sect. 7; Schurz and
Lambert 2005 is also a formal model, with an algebraic approach). More
generally, the probabilistic framework articulates formal
characterizations of unity and introduces its role in evaluations of
evidence.
A criterion of unity defended for its epistemic virtue in relation to
evidence is simplicity or parsimony (Sober 2013, 2016). Comparatively
speaking, simpler hypotheses, models or theories present a higher
likelihood of truth, empirical support and accurate prediction. From a
methodological standpoint, however, appeals to parsimony might not be
sufficient. Moreover, the connection between unity as parsimony and
likelihood is not interest-relative, at least in the way that the
connection between unity and explanation is (Sober 2003; Forster and
Sober 1994; Sober 2013, 2016).
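The curve-fitting framework Forster and Sober appeal to is Akaike's, on which a model's estimated predictive accuracy penalizes its fit by its number of adjustable parameters. A minimal sketch, with purely hypothetical numbers (not drawn from any of the cited works):

```python
# Akaike-style comparison (illustrative): AIC = 2k - 2*ln(L) scores a model
# by penalizing its maximized log-likelihood ln(L) with its number of
# adjustable parameters k. Lower AIC estimates better predictive accuracy.

def aic(k, log_likelihood):
    """Akaike Information Criterion for a fitted model."""
    return 2 * k - 2 * log_likelihood

# Hypothetical log-likelihoods: the complex model fits slightly better,
# but the simpler, more unified model wins once parameters are penalized.
simple_model = aic(k=2, log_likelihood=-50.0)    # 104.0
complex_model = aic(k=6, log_likelihood=-49.0)   # 110.0
print(simple_model < complex_model)  # → True
```

The sketch makes vivid the sense in which parsimony here is not a mere aesthetic preference: the parameter penalty is part of an estimate of average predictive accuracy.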
On the Bayesian approach, the rational comparison and acceptance of
probabilistic beliefs in the light of empirical data is constrained by
Bayes' Theorem for conditional probabilities (where \(h\) and
\(d\) are the hypothesis and the data respectively):
\[ \P(h \mid d) = \frac{\P(d \mid h) \cdot \P(h)}{\P(d)} \]
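As a toy illustration of the update rule (all probabilities hypothetical), the denominator P(d) can be expanded by the law of total probability:

```python
# Toy Bayesian update (all numbers hypothetical). The denominator P(d) is
# expanded by the law of total probability over h and not-h.

def posterior(prior_h, p_d_given_h, p_d_given_not_h):
    """Return P(h | d) via Bayes' Theorem."""
    p_d = p_d_given_h * prior_h + p_d_given_not_h * (1 - prior_h)
    return p_d_given_h * prior_h / p_d

# A hypothesis with prior 0.3 that makes the data four times as likely as
# its negation does ends up substantially confirmed:
print(round(posterior(0.3, 0.8, 0.2), 3))  # → 0.632
```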
One explicit Bayesian account of unification as an epistemic,
methodological virtue, has introduced the following measure of unity:
a hypothesis \(h\) unifies phenomena \(p\) and \(q\) to the degree
that given \(h\), \(p\) is statistically/probabilistically relevant to (or
correlated with) \(q\) (Myrvold 2003; for a probabilistically
equivalent measure of unity in Bayesian terms see McGrew 2003; on the
equivalence, Schupbach 2005). This measure of unity has been
criticized as neither necessary nor sufficient (Lange 2004;
Lange's criticism assumes the unification-explanation
link; in a rebuttal, Schupbach 2005 rejects this and other assumptions
behind Lange's criticism). In a recent development, Myrvold
argues for mutual information unification: hypotheses are supported
by their ability to increase what he calls the mutual information of
the set of evidence statements (see
Myrvold 2017). The explanatory unification contributed by hypotheses
about common causes is an instance of the information condition.
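One schematic way to render "given \(h\), \(p\) is probabilistically relevant to \(q\)" is via an informational-relevance measure, comparing relevance conditional on the hypothesis with unconditional relevance. The specific function and all numbers below are illustrative simplifications, not Myrvold's exact formulation:

```python
import math

def relevance(p_joint, p_first, p_second):
    """Informational relevance log2[P(p&q) / (P(p)*P(q))]: positive when
    the two phenomena are positively correlated, zero when independent."""
    return math.log2(p_joint / (p_first * p_second))

# Hypothetical common-cause scenario: given h the phenomena are correlated;
# unconditionally they are independent.
given_h = relevance(p_joint=0.45, p_first=0.5, p_second=0.5)        # > 0
unconditional = relevance(p_joint=0.25, p_first=0.5, p_second=0.5)  # = 0

# Degree of unification: how much h raises the phenomena's mutual relevance.
degree_of_unification = given_h - unconditional
print(round(degree_of_unification, 3))  # → 0.848
```

On this rendering, a common-cause hypothesis scores well precisely because it makes otherwise unrelated phenomena informationally relevant to one another.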
Evidentiary unification may contribute to the unification of different
fields of research in the form of evidentiary dependence: the appeal
to one field's results in the evaluation of another's (Kincaid 1997).
*Aesthetic value*. Finally, epistemic values of unity may rely
on subsidiary considerations of aesthetic value. Nevertheless,
consideration of beauty, elegance or harmony may also provide
autonomous grounds for adopting or pursuing varieties of unification
in terms of simplicity and patterns of order (regularity of specific
relations) (McAllister 1996; Glynn 2010; Orrell 2012). Whether
aesthetic judgments have any epistemic import depends on metaphysical,
cognitive or pragmatic assumptions.
*Unification without reduction*. Reduction is not the sole
standard of unity, and models of unification without reduction have
proliferated. In addition, such models introduce new units of
analysis. An early influential account centers around the notion of
*interfield theories* (Darden and Maull 1977; Darden 2006). The
orthodox central place of theories as the unit of scientific knowledge
is replaced by that of fields. Examples of such fields are genetics,
biochemistry and cytology. Different levels of organization correspond
in this view to different fields: Fields are individuated
intellectually by a focal problem, a domain of facts related to the
problem, explanatory goals, methods and a vocabulary. Fields import
and transform terms and concepts from others. The model is based on
the idea that theories and disciplines do not match neat levels of
organization within a hierarchy; rather, many of them in their scope
and development cut across different such levels. Reduction is a
relation between theories within a field, not across fields.
*Interdependence and hybridity*. In general, the higher-level
theories (for instance, cell physiology) and the lower-level theories
(for instance, biochemistry) are ontologically and epistemologically
interdependent on matters of informational content and evidential
relevance; one cannot be developed without the other (Kincaid 1996;
Kincaid 1997; Wimsatt 1976; Spector 1977). The interaction between
fields (through researchers' judgments and borrowings) may
provide enabling conditions for subsequent interactions. For instance,
Maxwell's adoption of statistical techniques in color research
enabled the introduction of similar ideas from social statistics in
his research on reductive molecular theories of gases. The reduction,
in turn, enabled experimental evidence from chemistry and acoustics;
similarly, different chemical and spectroscopic bases for colors
provided chemical evidence in color research (Cat 2014).
The emergence and development of hybrid disciplines and theories is
another instance of non-reductive cooperation or interaction between
sciences. Noted above is the post-war emergence of interdisciplinary
areas of research: the so-called hyphenated sciences such as
neuro-acoustics, radioastronomy, biophysics, etc. (Klein 1990; Galison
1997). On a smaller scale, in the domain of, for instance, physics, one
can find semiclassical models in quantum physics or models developed
around phenomena where the limiting reduction relations are singular
or catastrophic, such as caustic optics and quantum chaos (Cat 1998;
Batterman 2002; Belot 2005). Such semiclassical explanatory models
have not found successful quantum substitutes and have placed
structural explanations at the heart of the relation between classical
and quantum physics (Bokulich 2008). The general form of pervasive
cases of emergence has been characterized with the notion of
contextual emergence (Bishop and Atmanspacher 2006) where
properties, behaviors and their laws on a restricted, lower-level,
single-scale domain are necessary but not sufficient for the
properties and behaviors of another, e.g., higher-level one, not even
of itself. The latter are also determined by contingent contexts
(contingent features of the state space of the relevant system). The
interstitial formation of more or less stable small-scale syntheses
and cross-boundary "alliances" has been common in most
sciences since the early twentieth century. Indeed, it is crucial to
development in model building and growing empirical relevance in
fields ranging anywhere from biochemistry to cell ecology, or from
econophysics to thermodynamical cosmology. Similar cases can be found
in chemistry and the biomedical sciences.
*Conceptual unity*. The conceptual dimension of cross-cutting
has been developed in connection with the possibility of cross-cutting
natural kinds that challenges taxonomical monism. Categories of
taxonomy and domains of description are interest-relative, as are
rationality and objectivity (Khalidi 1998; his view shares positions
and attitudes with Longino 1989; Elgin 1996, 1997). Cross-cutting
taxonomic systems, then, are not conceptually inconsistent or
inapplicable. Both the interest-relativity and hybridity feature
prominently in the context of ontological pluralism (see below).
Another, more general, unifying element of this kind is Holton's
notion of *themata*. Themata are conceptual values that are a
priori yet contingent (both individual and social). They are forming
and organizing presuppositions that factor centrally in the evolution
of the science and include continuity/discontinuity, harmony,
quantification, symmetry, conservation, mechanicism and hierarchy
(Holton 1973). Unity of some kind is itself a thematic element. A more
complex and comprehensive unit of organized scientific practice is the
notion of the various *styles of reasoning*, such as
statistical, analogical modeling, taxonomical, genetic/genealogical or
laboratory styles; each is a cluster of epistemic standards,
questions, tools, ontology, and self-authenticating or stabilizing
protocols (Hacking 1996; see below for the relevance of this account
of a priori elements to claims of global disunity--the account
shares distinctive features of Kuhn's notion of paradigm).
Another model of non-reductive unification is historical and
diachronic: it emphasizes the genealogical and historical identity of
disciplines, which has become complex through interaction. The
interaction extends to relations between specific sciences, philosophy
and philosophy of science (Hull 1988). Hull has endorsed an image of
science as a process, modeling historical unity after a
Darwinian-style pattern of evolution (developing an earlier suggestion
by Popper). Part of the account is the idea of disciplines as
evolutionary historical individuals, which can be revised with the
help of more recent ideas of biological individuality: hybrid unity
as an external model of unity as integration or coordination of
individual disciplines and disciplinary projects, e.g., characterized
by a form of occurrence, evolution or development whose tracking and
identification involves a conjunction with other disciplines, projects
and domains of resources, from within science or outside science.
This diachronic perspective can accommodate models of discovery, in
which genealogical unity integrates a variety of resources that can be
both theoretical and applied, or scientific and non-scientific (an
example, from physics, the discovery of superconductivity, can be
found in Holton, Chang and Jurkowitz 1996). Some models of unity below
provide further examples.
A generalization of the notion of interfield theories is the idea that
unity is *interconnection*: Fields are unified theoretically
and practically (Grantham 2004). This is an extension of the original
modes of unity or identity that single out individual disciplines.
Theoretical unification involves conceptual, ontological and
explanatory relations. Practical unification involves heuristic
dependence, confirmational dependence and methodological integration.
The social dimension of the epistemology of scientific disciplines
relies on institutional unity. With regard to disciplines as
professions, this kind of unity has rested on institutional
arrangements such as professional organizations for
self-identification and self-regulation, university mechanisms of
growth and reproduction through certification, funding and training,
and communication and record through journals.
Many examples of unity without reduction are local rather than global.
These are not merely a phase in a global and linear project or
tradition of unification (or integration), and they are typically
focused on science as a human activity. From that standpoint,
unification is typically understood or advocated as a piecemeal
description and strategy of collaboration (see Klein 1990 on the
distinction between global integration and local interdisciplinarity).
Cases are restricted to specific models, phenomena or situations.
*Material unity*. A more recent approach to the connection
between different research areas has focused on a material level of
scientific practice, with attention to the use of instruments and
other material objects (Galison 1997; Bowker and Star 1999). For
instance, the *material unity* of natural philosophy in the
sixteenth and seventeenth centuries relied on the circulation,
transformation and application of objects in their concrete and
abstract representations (Bertoloni-Meli 2006). The latter correspond
to the imaginary systems and their representations, which we call
models. The evolution of objects and images across different theories
and experiments and their developments in nineteenth-century natural
philosophy provide a historical model of scientific development, but
the approach is not meant to illustrate reductive materialism, since
the same objects and models work and are perceived as vehicles for
abstract ideas, institutions, cultures, etc., or are prompted by them.
On one view, objects are regarded as elements in so-called trading
zones (see below) with shifting meanings in the evolution of
twentieth-century physics, such as the cloud chamber, which was
first relevant to meteorology and later to particle physics (Galison
1997). Alternatively, material objects have been given the status of
*boundary objects*, which provide the opportunity for experts
from different fields to collaborate through their respective
understandings of the system in question and their respective goals
(Bowker and Star 1999).
*Graphic unity*. At the concrete perceptual level, recent
accounts emphasize the role of visual representations in the sciences
and suggest what may be called *graphic unification* of the
sciences. Their cognitive roles, methodological and rhetorical,
include establishing and disseminating facts and their so-called
virtual witnessing, revealing empirical relations, testing their fit
with available patterns of more abstract theoretical relations
(theoretical integration), suggesting new ones, aiding in
computations, serving as aesthetic devices etc. But these uses are not
homogeneous across different sciences and make visible disciplinary
differences. We may equally speak of *graphic pluralism*. The
rates in the use of diagrams in research publications appear to vary
along the hard-soft axis of pyramidal hierarchy, from physics,
chemistry, biology, psychology, economics and sociology, and political
science (Smith et al. 2000). The highest use can be found in physics,
intuitively identified as hardest in terms of the highest degree of
consensus, codification, theoretical integration and factual
stability; use declines toward the soft end of the axis, marked by
interpretive flexibility and instability of results.
Similarly, the same variation occurs among sub-disciplines within each
discipline. The kinds of images and their contents also vary across
disciplines and within disciplines, ranging from hand-made images of
particular specimens to hand-made or mechanically generated images of
particulars standing in for types, to schematic images of geometric
patterns in space or time, or to abstract diagrams representing
quantitative relations. Importantly, graphic tools circulate like
other cognitive tools between areas of research that they in turn
connect (Galison 1997; Daston and Galison 2007; Lopes 2009; see also
Lynch and Woolgar 1990; Baigrie 1996; Jones and Galison 1998; Galison
1997; Cat 2014; Kaiser 2005).
*Disciplinary unity and collaboration*. The relation between
disciplines or fields of research has often been tracked by relations
between their respective theories or epistemic products. But
disciplines constitute broader and richer units of analysis of
connections in the sciences characterized, for instance, by their
domain of inquiry, cognitive tools and social structure (Bechtel
1987).
Unification of disciplines, in that sense, has been variously
categorized, for instance, as *interdisciplinary*,
*multidisciplinary*, *crossdisciplinary* or
*transdisciplinary* (Klein 1990; Graff 2005; Kellert 2008;
Repko 2012). It might involve a researcher borrowing from different
disciplines or the collaboration of different researchers. Neither
modality of connection amounts to a straightforward generalization of,
or reduction to, any single discipline, theory, etc. In either case,
the strategic development is typically defended for its heuristic
problem-solving or innovative powers, as it is prompted by a problem
that is considered complex in that it does not arise or cannot be
fully treated within the purview of one specific discipline unified or
individuated around some potentially non-unique set of elements such
as scope of empirical phenomena, rules, standards, techniques,
conceptual and material tools, aims, social institutions, etc.
Indicators of disciplinary unity may vary (Kuhn 1962; Klein 1990;
Kellert 2008). *Interdisciplinary* research or collaboration
creates a new discipline or project, such as interfield research,
often leaving the existence of the original ones intact.
*Multidisciplinary* work involves the juxtaposition of the
treatments and aims of the different disciplines involved in
addressing a common problem. *Crossdisciplinary* work involves
borrowing resources from one discipline to serve the aims of a project
in another. *Transdisciplinary* work is a synthetic creation
that encompasses work from different disciplines (Klein 1990; Kellert
2008; Brigandt 2010; Hoffmann, Schmidt and Nersessian 2012; Osbeck et
al. 2011; Repko 2012). These different modes of synthesis or
connection are not mutually exclusive.
Models of interdisciplinary cooperation and their corresponding
outcomes are often described using metaphors of different kinds:
*cartographic* (domains, boundaries, trading zone, etc.),
*linguistic* (pidgin language, communication, translation,
etc.), *architectural* (building blocks, tiles, etc.),
*socio-political* (imperialism, hierarchy, republic,
orchestration, negotiation, coordination, cooperation, etc.) or
*embodied* (cross-training). Each selectively highlights and
neglects different aspects of scientific practice and properties of
scientific products. Cartographic and architectural images, for
instance, focus on spatial and static synchronic relations and simply
connected, compatible elements. Socio-political and embodied images
emphasize activity and non-propositional elements (Kellert 2008
defends the image of cross-training).
In this context, methodological unity often takes the form of
borrowing standards and techniques for the application of formal and
empirical methods. They range from calculational techniques and tools
for theoretical modeling and simulation of phenomena to techniques for
modeling of data, using instruments and conducting experiments (e.g.,
the culture of field experiments and, more recently, randomized
control trials across natural and social sciences). A key element of
scientific practice, often ignored by philosophical analysis, is
expertise. As part of different forms of methodological unity, it is
key to the acceptance and successful appropriation of techniques.
Recent accounts of multidisciplinary collaboration as a human activity
have focused on the dynamics of integrating different kinds of
expertise around common systems or goals of research (Collins and
Evans 2007; Gorman 2002). The same perspective can accommodate the
recent interest in so-called mixed methods, e.g., different forms of
integration of quantitative and qualitative methods and approaches in
the social sciences (but mixed-method approaches do not typically
involve mixed disciplines).
As teamwork and collaboration within and between research units has
steadily increased, so has the degree of specialization (Wuchty et al.
2007). Between both trends, new reconfigurations keep forming with
different purposes and types of institutional expression and support;
for example, STEM, Earth sciences, physical sciences, mind-brain
sciences, etc., in addition to hybrid fields mentioned above. Yet,
in educational research and funding, disciplinarity has acquired
new normative purchase in opposite directions: while funding agencies
and universities promote interdisciplinarity, university
administrations encourage both disciplinary competition and more
versatile adisciplinarity (Griffiths 2022). In the face of defenses of
different forms of interdisciplinarity (Graff 2015), there is also a
renewed attention to the critical value of autonomy of disciplines, or
disciplinarity, as the key resource in division of labor and
collaboration alike (Jacobs 2013).
The social epistemology of interdisciplinarity is complex. It develops
both top-down from managers and bottom-up from practitioners
(Mäki 2016; Mäki and MacLeod 2016), relying on a variety of
kinds of interactions with heuristic and normative dimensions
(Boyer-Kassem et al. 2018). More generally, collaborative work
arguably provides epistemic, ethical and instrumental advantages,
e.g., more comprehensive relevant understanding of complex problems,
expanded justice to interests of direct knowledge users and increased
resources (Laursen et al. 2021). Yet, collaboration relies on a
plurality of values or norms that may lead to conflicts such as a
paralyzing practical incoherence of guiding values and moral
incoherence and problematic forms of oppression (Laursen et al.
2021; see more on the limits of pluralism below).
Empirical work in sociology and cognitive psychology on scientific
collaboration has led to a broader perspective, including a number of
dimensions of interdisciplinary cooperation involving identification
of conflicts and the setting of sufficient so-called common ground
integrators. These include shared (pre-existing, revised and newly
developed) concepts, terminology, standards, techniques, aims,
information, tools, expertise, skills (abstract, dialectical, creative
and holistic thinking), cognitive and social ethos (curiosity,
tolerance, flexibility, humility, receptivity, reflexivity, honesty,
team-play) social interaction, institutional structures and
geography (Cummings and Kiesler 2005; Klein 1990; Kockelmans 1979;
Repko 2012). Sociological studies of scientific collaboration can in
principle place the connective models of unity within the more general
scope of social epistemology, for instance, in relation to
distributed cognition (beyond the focus on strategies of consensus
within communities).
The broad and dynamical approach to processes of interdisciplinary
integration may effectively be understood to describe the production
of different sorts and degrees of epistemic emergence. The integrated
accounts require shared (old or new) assumptions and may involve a
case of ontological integration, for instance in causal models.
Suggested kinds of interdisciplinary causal-model integration are the
following: sequential causal order in a process or mechanism cutting
across disciplinary divides; horizontal parallel integration of
different causal models of different elements of a complex phenomenon;
horizontal joint causal model of the same effect; and vertical or
cross-level causal integration (see emergent or top-down causality,
below) (Repko 2012; Kockelmans 1979).
The study of the social epistemology of interdisciplinary and
collaborative research has been carried out and proposed from
different perspectives that illuminate different dimensions and
implications. These include historical typology (Klein 1990), formal
modeling (Boyer-Kassem et al. 2018), ethnography (Nersessian 2022),
ethical, scientometrics and multidimensional philosophical
perspectives (Mäki 2016). Other approaches aim to design and
offer tools that facilitate collaboration such as the Toolbox Dialogue
Initiative (Hubbs et al. 2020). The different forms of plurality and
connection also ultimately inform the organization of science in the
social and political terms of diversity and democracy (Longino 1998,
2001). In terms of cooperation and coordination, unity, in this sense,
cannot be reduced to consensus (Rescher 1993; van Bouwel 2009a; Repko
2012; Hoffmann, Schmidt and Nersessian 2012).
A general model of local interconnection, which has acquired
widespread attention and application in different sciences, is the
anthropological model of *trading zone*, where hybrid languages
and meanings are developed that allow for interaction without
straightforward extension of any party's original language or
framework (Galison 1997). Galison has applied this kind of
anthropological analysis to the subcultures of experimentation. This
strategy aims to explain the strength, coherence and continuity of
science in terms of local *coordinations* of intercalated
levels of symbolic procedures and meanings, instruments and
arguments.
At the experimental level, instruments, as found objects, acquire new
meanings, developments and uses as they bridge over the transitions
between theories, observations or theory-laden observations.
Instruments and experimental projects in the case of Big Science also
bring together, synchronically and interactively, the skills,
standards and other resources from different communities, and they
change each in turn (on interdisciplinary experimentation see also
Osbeck et al. 2011). Patterns of laboratory research are shared by
the different sciences, including not just instruments but the general
strategies of reconfiguration of human researchers and the natural
entities researched (Knorr-Cetina 1992). This includes statistical
standards (e.g., statistical significance) and ideals of
replication. At the same time, attention has been paid to the
different ways in which experimental approaches differ among the
sciences (Knorr-Cetina 1992; Guala 2005; Weber 2005) as well as to how
they have been transferred (e.g., field experiments and randomized
control trials) or integrated (e.g., mixed methods combining
quantitative and qualitative techniques).
Interdisciplinary research has been claimed to revolve around shared boundary
objects (Gorman 2002) and to yield evidentiary and heuristic
integration through cooperation (Kincaid 1997). Successful
interdisciplinary research, however, does not seem to require
integration, e.g., evolutionary game theory in economics and biology
(Grüne-Yanoff 2016). Heuristic cooperation has also led to new
stable disciplines through stronger integrative forms of problem
solving. In the life sciences, for instance, computational,
mathematical and engineering techniques for modeling and data
collection in molecular biology have led to integrative systems
biology (MacLeod and Nersessian 2016; Nersessian 2022). Similarly,
constraints and resources for problem-solving in physics, chemistry
and biology led to the development of quantum chemistry (Gavroglu and
Simões 2012; see also the case of nuclear chemistry in Cat and
Best 2023).
## 4. Ontological unities
### 4.1 Ontological unities and reduction
Since Nagel's influential model of reduction by derivation, most
discussions of the unity of science have been cast in terms of
reductions between concepts and the entities they describe, and
between theories incorporating the descriptive concepts. Ontological
unity is expressed by a preferred set of such ontological units. In
this regard, it should be noted that the selection of units typically
relies on assumptions about classification or natural order such as
commitments to natural kinds or a hierarchy of levels.
In terms of concepts featured in preferred descriptions, explanatory
or not, reduction endorses taxonomical monism: a privileged set of
fundamental kinds of things. These privileged kinds are often known as
so-called natural kinds, although, as has been argued, monism does
not entail realism (Slater 2005) and the notion admits of multiple
interpretations, ranging from the more conventionalist to the more
essentialist (Tahko 2021; see a critique in Cat 2022a). Natural
kindness has further been debated in terms, for instance, of the
contrast between so-called cluster and causal theories. Connectedly,
pluralism does not entail anti-realism (Dupré 1993), nor does
realism entail monism or essentialism (Khalidi 2021). Without
additional metaphysical assumptions, the fundamental units are
ambiguous with respect to their status as either entity or property.
Reduction may determine the fundamental kinds or level through the
analysis of entities.
A distinctive ontological model is as follows. The hierarchy of levels
of reduction is fixed by *part-whole* relations. The levels of
aggregation of entities run all the way down to atomic particles and
field parts, rendering microphysics the fundamental science (Gillett
2016). The focus of recent accounts has been placed on the relation
between the causal powers at different levels and how lower-level
entities and powers determine higher-level ones. Discussions have
centered on whether the relation is of identity between types or
tokens of things (Thalos 2013; Gillett 2016; Wilson 2021; see more
below) or even if causal powers are the relevant unit of analysis
(Thalos 2013).
A classic reference to this compositional type of account is Oppenheim
and Putnam's "The Unity of Science as a Working
Hypothesis" (Oppenheim and Putnam 1958; Oppenheim and Hempel had
worked in the 1930s on taxonomy and typology, a question of broad
intellectual, social and political relevance in Germany at the time).
Oppenheim and Putnam intended to articulate an idea of science as a
reductive unity of concepts and laws reduced to those of the most
elementary elements. They also defended it as an empirical
hypothesis--not an a priori ideal, project or
precondition--about science. Moreover, they claimed that its
evolution manifested a trend in that unified direction out of the
smallest entities and lowest levels of aggregation. In an important
sense, the evolution of science recapitulates, in the reverse, the
evolution of matter, from aggregates of elementary particles to the
formation of complex organisms and species (we find a similar
assumption in Weinberg's downward arrow of explanation). Unity,
then, is manifested not just in *mereological* form, but also
diachronically, *genealogically* or *historically*.
A weaker form of ontological reduction has been advocated for the biomedical
sciences with the causal notion of *partial reductions*:
explanations of localized scope (focused on parts of higher-level
systems only) laying out a causal mechanism connecting different
levels in the hierarchy of composition and organization (Schaffner
1993; Schaffner 2006; Scerri has similarly discussed degrees of
reduction in Scerri 1994). An extensional, domain-relative approach
introduces the distinction between "domain preserving" and
"domain combining" reductions. Domain-preserving
reductions are intra-level reductions and occur between \(T\_1\) and
its predecessor \(T\_2\). In this parlance, however, \(T\_2\)
"reduces" to \(T\_1\). This notion of
"reduction" does not refer to any relation of explanation
(Nickles 1973).
The claim that reduction, as a relation of explanation, needs to be a
relation between theories or even involve any theory has also been
challenged. One such challenge focuses on "inter-level"
explanations in the form of *compositional redescription* and
causal mechanisms (Wimsatt 1976). The role of biconditionals or even
Schaffner-type identities, as factual relations, is heuristic (Wimsatt
1976). The heuristic value extends to the preservation of the
higher-level, reduced concepts, especially for cognitive and pragmatic
reasons, including reasons of empirical evidence. This amounts to
rejecting the structural, formal approach to unity and reductionism
favored by the logical-positivist tradition. Reductionism is another
example of the functional, purposive nature of scientific practice.
The metaphysical view that follows is a pragmatic and non-eliminative
realism (Wimsatt 2006). As a heuristic, this kind of non-eliminative
pragmatic reductionism is a complex stance. It is, across levels,
integrative and intransitive, compositional, mechanistic and
functionally localized, approximative and abstractive. It is bound to
adopting false idealizations, focusing on regularities and stable
common behavior, circumstances and properties. It is also constrained
in its rational calculations and methods, tool-binding and
problem-relative. The heuristic value of eliminative, inter-level
reductions has been defended as well (Poirier 2006).
The appeal to formal laws and deductive relations is dropped for sets
of concepts or vocabularies in the *replacement analysis*
(Spector 1978). This approach allows for talk of entity reduction or
branch reduction, and even direct theory replacement, without the
operation of laws, and it circumvents vexing difficulties raised by
bridge principles and the deductive derivability condition
(self-reduction, infinite regress, etc.). Formal relations only
guarantee, but do not define, the reduction relation. Replacement
functions are meta-linguistic statements. As Sellars argued in the
case of explanation, this account distinguishes between reduction and
the testing for reduction, and it highlights the role of derivations
in both. Finally, replacement can be in practice or in theory.
Replacement in practice does not advocate elimination of the reduced
or replaced entities or concepts (Spector 1978).
As indicated above, reductive models and associated organization of
scientific theories and disciplines assume an epistemic hierarchy
grounded on an ontological hierarchy of levels of organization of
entities, properties or phenomena, from societies down to cells,
molecules and subatomic elements. Patterns of behavior of entities at
lower, more fundamental levels are considered more general, stable and
explanatory.
Levels forming a hierarchy are discrete and stratified so that
entities at level \(n\) strongly depend on entities at level
\(n-1\). The different levels may be distinguished in different ways:
by differences in scale, by a relation of realization and, especially,
by a relation of composition such that entities at level \(n\) are
considered composed of, or decomposable into, entities at level
\(n-1\).
This assumption has been the target of criticism (Thalos 2013;
Potochnik 2017). Neither scales, realization nor decomposition can fix
an absolute hierarchy of levels. Critics have noted, for instance,
that entities may participate in causal interactions at multiple
levels and across levels in physical models (Thalos 2013) and
across biological and neurobiological models (Haug 2010; Eronen
2013).
In addition, the compartmentalization of theories and their concepts
or vocabulary into levels neglects the existence of empirically
meaningful and causally explanatory relations between entities or
properties at different levels. If they are neglected as theoretical
knowledge and left as only bridge principles, the possibility of
completeness of knowledge is jeopardized. Maximizing completeness of
knowledge requires a descriptive unity of all phenomena at all levels
and anything between these levels. Any bounded region or body of
knowledge neglecting such cross-boundary interactions is radically
incomplete, and not just confirmationally or evidentially so; we may
call this the problem of *cross-boundary incompleteness*, which
takes two forms: at the intra-level, *horizontal
incompleteness* and, on a hierarchy, at the inter-level,
*vertical incompleteness* (Kincaid 1997; Cat 1998).
If levels cannot track, for instance, same-level causal relations,
then either their causal explanatory relevance is derivative and
contextual (Craver 2007), their role is merely cognitive and pragmatic
(as conceptual coordinate systems) or, more radically, they are
altogether dispensable (Thalos 2013; Potochnik 2017). As a result, they fail to
fix a corresponding hierarchy of fields and subfields of scientific
research defended by reductionist accounts.
The most radical form of reduction as replacement is often called
*eliminativism*. The position has made a considerable impact in
philosophy of psychology and philosophy of mind (Churchland 1981;
Churchland 1986). On this view the vocabulary of the reducing theories
(neurobiology) eliminates and replaces that of the reduced ones
(psychology), leaving no substantive relation between them (which is
only a replacement rule) (see also
eliminative materialism).
From a semantic standpoint, one may distinguish different kinds of
reduction in terms of four criteria, two epistemological and two
ontological: fundamentalism, approximation, abstract hierarchy and
spatial hierarchy. *Fundamentalism* implies that the features
of a system can be explained in terms only of factors and rules from
another realm. *Abstract hierarchy* is the assumption that the
representation of a system involves a hierarchy of levels of
organization with the explanatory factors being located at the lower
levels. *Spatial hierarchy* is a special case of abstract
hierarchy in which the criterion of hierarchical relation is a spatial
part-whole or containment relation. Strong reduction satisfies the
three "substantive" criteria, whereas weak reduction only
satisfies fundamentalism. Approximate reductions--strong and
hierarchical--are those which satisfy the criterion of
fundamentalism only approximately (Sarkar 1998; the merit of
Sarkar's proposal resides in its systematic attention to
hierarchical conditions and, more originally, to different conditions
of *approximation*; see also Ramsey 1995; Lange 1995).
The semantic turn extends to more recent notions of models that do not
fall under the strict semantic or model-theoretic notion of
mathematical structures (Giere 1999; Morgan and Morrison 1999). This
is a more flexible framework about relevant formal relations and the
scope of relevant empirical situations. It is implicitly or explicitly
adopted by most accounts of unity without reduction. One may add the
primacy of temporal representation and temporal parts, *temporal
hierarchy* or *temporal compositionality*, first emphasized
by Oppenheim and Putnam as a model of genealogical or diachronic
unity. This framework applies to processes both of evolution and
development (a more recent version is in McGivern 2008 and in Love and
Hüttemann 2011).
The shift in the accounts of scientific theory from syntactic to
semantic approaches has changed conceptual perspectives and,
accordingly, formulations and evaluations of reductive relations and
reductionism. However, examples of the semantic approach focusing on
mathematical structures and satisfaction of set-theoretic relations
have focused on syntactic features--including the axiomatic form
of a theory--in the discussion of reduction (Sarkar 1998; da
Costa and French 2003). In this sense, the structuralist approach can
be construed as a neo-Nagelian account, while an alternative line of
research has championed the more traditional structuralist semantic
approach (Balzer and Moulines 1996; Moulines 2006; Ruttkamp 2000;
Ruttkamp and Heidema 2005).
### 4.2 Ontological unities and antireductionism
From the opposite direction, arguments concerning new concepts such as
*multiple realizability* and *supervenience*, introduced
by Putnam, Kim, Fodor and others, have led to talk of higher-level
functionalism, a distinction between type-type and token-token
reductions and the examination of its implications. The concepts of
emergence, supervenience and downward causation are related
metaphysical tools for generating and evaluating proposals about unity
and reduction in the sciences. This literature has enjoyed its chief
sources and developments in general metaphysics and in philosophy of
mind and psychology (Davidson 1969; Putnam 1975; Fodor 1975; Kim
1993).
*Supervenience*, first introduced by Davidson in discussions of
mental properties, is the notion that a system with properties on one
level is composed of entities on a lower level and that its properties
are determined by the properties of the lower-level entities or
states. The relation of determination is that no changes at the
higher level occur without changes at the lower level. Like
token-reductionism, supervenience has been adopted by many as the poor
man's reductionism (see the entry on
supervenience).
A different case for the autonomy of the macrolevel is based on the
notion of multiple supervenience (Kincaid 1997; Meyering 2000).
The autonomy of the special sciences from physics has been defended in
terms of a distinction between *type-physicalism* and
*token-physicalism* (Fodor 1974; Fodor countered Oppenheim and
Putnam's hypothesis under the rubric "the disunity of
science"; see the entry on
physicalism).
The key logical assumption is the type-token distinction, whereby
types are realized by more specific tokens: e.g., the type
"animal" is instantiated by different species, and the type
"tiger" or "electron" can be instantiated by
multiple individual token tigers or electrons. Type-physicalism is
characterized by a type-type identity between the
predicates/properties in the laws of the special sciences and those of
physics. By contrast, token-physicalism is based on the token-token
identity between the predicates/properties of the special sciences and
those of physics; every event under a special law falls under a law of
physics and bridge laws express contingent token-identities between
events. Token-physicalism operates as a demarcation criterion for
materialism. Fodor argued that the predicates of the special sciences
correspond to infinite or open-ended disjunctions of physical
predicates, and these disjunctions do not constitute natural kinds
identified by an associated law. Token-physicalism is the only
alternative. All special kinds of events are physical, but the special
sciences are not physics (for criticisms based on the presuppositions
in Fodor's argument, see Sober 1999).
The denial of remedial, weaker forms of reductionism is the basis for
the concept of *emergence* (Humphreys 1997; Bedau and Humphreys
2008; Wilson 2021). Different accounts have attempted to articulate
the idea of a whole being different from or more than the mere sum of
its parts (see the entry on
emergent properties).
Emergence has been described beyond logical relations, synchronically
as an ontological property and diachronically as a material process of
fusion, in which the powers of the separate constituents lose their
separate existence and effects (Humphreys 1997). This concept has been
widely applied in discussions of *complexity*. Unlike the
earliest antireductionist models of complexity in terms of holism and
cybernetic properties, more recent approaches track the role of
constituent parts (Simon 1996). Weak emergence has been opposed to
nominal and strong forms of emergence. The nominal kind simply
represents that some macro-properties cannot be properties of
micro-constituents. The strong form is based on supervenience and
irreducibility, with a role for the occurrence of autonomous downwards
causation upon any constituents (see below). Weak emergence is linked
to processes stemming from the states and powers of constituents, with
a reductive notion of downwards causation of the system as a resultant
of constituents' effects (Wilson 2021); however, the connection
is not a matter of Nagelian formal derivation, but of so-called
universality classes (Batterman 2002; Thalos 2013), self-organization
(Mitchell 2012) and implementation through computational aggregation,
compression and iteration. Weak emergence, then, can be defined in
terms of simulation: a macro-property, state or fact is weakly
emergent if and only if it can be derived from its micro-constituents
only by simulation (Bedau and Humphreys 2008; see the entry on
simulations in science).
Computational models of emergence or complexity straddle the boundary
between formal epistemology and ontology. They are based on
simulations of chaotic dynamical processes such as cellular automata
(Wolfram 1984, 2002). Their supposed superiority to combinatorial
models based on aggregative functions of parts of wholes does not lack
defenders (Crutchfield 1994; Crutchfield and Hanson 1997; Humphreys
2004, 2007, 2008; Humphreys and Huneman 2008; Huneman 2008a, 2008b,
2010).
Connected to the concept of emergence is *top-down* or
*downward causation*, which captures the autonomous and genuine
causal power of higher-level entities or states, especially upon
lower-level ones. The most extreme and most controversial version
includes a violation of laws that regulate the lower level (Meehl and
Sellars 1956; Campbell 1974). Weaker forms require compatibility with
the microlaws (for a brief survey and discussion see Robinson 2005; on
downward causation without top-down causes, see Craver and Bechtel
2007; Bishop 2012). The very concept has become the subject of some
interdisciplinary interest in the sciences (Ellis, Noble and
O'Connor 2012).
Another general argument for the autonomy of the macrolevel in the
form of non-reductive materialism has been a cognitive type of
functionalism, namely, cognitive pragmatism (Van Gulick 1992). This
account links ontology to epistemology. It discusses four pragmatic
dimensions of representations: the nature of the causal interaction
between theory-user and the theory, the nature of the goals to the
realization of which the theory can contribute, the role of indexical
elements in fixing representational content, and differences in the
individuating principles applied by the theory to its types (Wimsatt
and Spector's arguments above are of this kind). A more
ontologically substantive account of functional reduction is
Ramsey's bottom-up *construction by reduction*:
transformation reductions streamline formulations of theories in such
a way that they extend basic theories upwards by engineering their
application to specific contexts or phenomena. As a consequence, they
reveal, by construction, new relations and systems that are
antecedently absent from a scientist's understanding of the
theory--independently of a top or reduced theory (Ramsey 1995). A
weaker framework of ontological unification is *categorial
unity*, wherein abstract categories such as causality,
information, etc., are attached to the interpretation of the specific
variables and properties in models of phenomena.
## 5. Disunity and pluralism
A more radical departure from logical-positivist standards of unity is
the recent criticism of the methodological values of reductionism and
unification in the sciences and also its position in culture and
society. From the descriptive standpoint, many views under the
rubric of disunity are versions of positions mentioned above. The
difference is mainly normative and a matter of emphasis, scope and
perspective. Such views reject global or universal standards of
unity--including unity of method--by emphasizing disunity
and endorsing different forms of epistemological and ontological
pluralism.
### 5.1 The Stanford School
An influential picture of disunity comes from related works by members
of the so-called Stanford School such as John Dupré, Ian
Hacking, Peter Galison, Patrick Suppes and Nancy Cartwright. Disunity
is, in general terms, a rejection of universalism and uniformity, both
methodological and metaphysical. Through their work, the rubric of
disunity has acquired a visibility parallel to the one once acquired
by unity, as an inspiring philosophical rallying cry.
From a methodological point of view, members of the school have
defended, from analysis of actual scientific practice, a model of
local unity such as the so-called trading zone (Galison 1998), a
plurality of scientific methods (Suppes 1978), a plurality of
scientific styles with the function of establishing spaces of
epistemic possibility and a plurality of kinds of unities (Hacking
1996; Hacking follows the historian A.C. Crombie; for a criticism of
Hacking's historical epistemology see Kusch 2010).
From a metaphysical point of view, the disunity of science can be
given adequate metaphysical foundations that make pluralism compatible
with realism (Dupré 1993; Cartwright 1983, 1999). Dupré
opposes a mechanistic paradigm of unity characterized by determinism,
reductionism and essentialism. The paradigm spreads the values and
methods of physics to other sciences, a spread that he thinks is
scientifically and socially deleterious. Disunity appears characterized by three
pluralistic theses: against essentialism--there is always a
plurality of classifications of reality into kinds; against
reductionism--there exists equal reality and causal efficacy of
systems at different levels of description (that is, the microlevel is
not causally complete, leaving room for downward causation); and
against epistemological monism--there is no single methodology
that supports a single criterion of scientificity, nor a universal
domain of its applicability, leaving only a plurality of epistemic and
non-epistemic virtues. The unitary concept of science should be
understood, following the later Wittgenstein, as a family-resemblance
concept. (For a criticism of Dupré's ideas, see Mitchell
2003; Sklar 2003.)
Against the universalism of explanatory laws, Cartwright has argued
that laws cannot be both universal and exactly true, as Hempel
required in his influential account of explanation and demarcation;
there exist only patchworks of laws and local cooperation. Like
Dupré, Cartwright adopts a kind of scientific realism but
denies that there is a universal order, whether represented by a
theory of everything or a corresponding a priori metaphysical
principle (Cartwright 1983). Theories apply only locally, where and to
the extent that their interpretive models fit the phenomena studied,
*ceteris paribus* (Cartwright 1999). Cartwright's
pluralism is not just opposed to vertical reductionism but also
horizontal imperialism, or universalism and globalism. She explains
their more or less general domain of application in terms of causal
capacities and arrangements she calls *nomological machines*
(Cartwright 1989, 1999). The regularities they bring about depend on a
shielded environment. As a matter of empiricism, this is the reason
that it is in the controlled environment of laboratories and
experiments, where causal interference is shielded off, that factual
regularities are manifested. The controlled, stable, regular world is
an engineered world. Representation rests on intervention (cf. Hacking
1983; for criticisms see Winsberg et al. 2000; Hoefer 2003; Sklar
2003; Hohwy 2003; Teller 2004; McArthur 2006; Ruphy 2016).
Disunity and autonomy of levels have been associated, conversely, with
antirealism, meaning instrumentalist or empiricist heuristics. This
includes, for Fodor and Rosenberg, higher-level sciences such as
biology and sociology (Fodor 1974; Rosenberg 1994; Huneman 2010). It
is against this picture that Dupré's and
Cartwright's attacks on uniformly global unity and reductionism,
above, might seem surprising in that they include an endorsement, in
causal terms, of
realism.[4]
Rohrlich has defended a similar realist position about weaker,
conceptual (cognitive) antireductionism, although on the grounds of
the mathematical success of derivational explanatory reductions
(Rohrlich 2001). Ruphy, however, has argued that antireductionism
merely amounts to a general methodological prescription and is too
weak to yield uncontroversial metaphysical lessons; these are in fact
based on general metaphysical commitments external to scientific
practice (Ruphy 2005, 2016).
### 5.2 Pluralism
Unlike more descriptive accounts of plurality, pluralism is a
normative endorsement of plurality. The question of the metaphysical
significance of disunity and anti-reductionism takes one straight to
the larger issue of the epistemology and metaphysics (and aesthetics,
social culture and politics) of pluralism. And here one encounters the
familiar issues and notions such as conceptual schemes, frameworks and
worldviews, incommensurability, relativism, contextualism and
perspectivalism about goals and standards of concepts and methods (for
a general discussion see Lynch 1998; on perspectivalism about
scientific models see Giere 1999, 2006; Rueger 2005; Massimi and McCoy
2020).
In connection with relativism and instrumentalism, pluralism has
typically been associated with antirealism about taxonomical
practices. But it has been defended from the standpoint of realism
(for instance, Dupré 1993; Chakravartty 2011). Pluralism about
knowledge of mind-independent facts can be formulated in terms of
different ways to distribute properties (sociability-based pluralism),
with more specific commitments about the ontological status of the
related elements and their plural contextual manifestations of powers
or dispositions (Chakravartty 2011; Cartwright 2007).
From a more epistemological standpoint, pluralism applies widely to
concepts, explanations, virtues, goals, methods, models and kinds of
representations (see above for graphic pluralism), etc. In this sense,
pluralism has been defended as a general framework that rejects the
ideal of consensus in cognitive, evaluative and practical matters,
against pure skepticism (nothing goes) or indifferentism (anything
goes), including a defense of preferential and contextual rationality
that notes the role of contextual rational commitments, by analogy
with political forms of engagement (Rescher 1993; van Bouwel 2009a;
Cat 2012).
Consider at least four distinctions--they are formulated
about concepts, facts, and descriptions, and they apply also to
values, virtues, methods, etc.:
* *Vertical vs. horizontal pluralism*. Vertical pluralism is
inter-level pluralism, the view that there is more than one level of
factual description or kind of fact and that each is irreducible,
equally fundamental, or ontologically/conceptually autonomous.
Horizontal pluralism is intra-level pluralism, the view that there may
be incompatible descriptions or facts on the same level of discourse
(Lynch 1998). For instance, the plurality of explanatory causes to be
chosen from or integrated in biology or physics has been defended as a
lesson in pluralism (Sober 1999).
* *Global vs. local pluralism*. Global pluralism is pluralism
about every type of fact or description. Global horizontal pluralism
is the view that there may be incompatible descriptions of the same
type of fact. Global vertical pluralism is the view that no type of
fact or description reduces to any other. Local horizontal and
vertical pluralism is about one type of fact or description (Lynch
1998). It may also concern situated standpoints informed by, among
others, social differences (Wylie 2015).
* *Difference vs. integrative pluralism*. Difference
pluralism has been defended in terms of division of labor in the face
of complexity and cognitive limitations (Giere 2006), epistemic
humility (Feyerabend 1962 and his final writings in the 1990s; Chang
2002), scientific freedom, empirical testability and theory choice
(Popper 1935; Feyerabend 1962) and underdetermination.
Underdetermination arguments concern choices from a disjunction of
equivalent types of descriptions (Mitchell 2003, 2009) or of
incompatible partial representations or models of phenomena in the
same intended scope (Longino 2002, 2013). The representational
incompatibility may be traced to competing values or aims, or
assumptions in *ceteris paribus* laws.
Critiques of difference pluralism point to different consequences such
as failure to address complex and boundary phenomena and problems
in, for instance, the life and social sciences (Gorman 2002;
Mitchell 2003, 2008); detrimental effects of taxonomic instability
(Sullivan 2017) and so-called intraconcept variability (Cunningham
2021) in the mind-brain sciences. When the variability characterizes
what are taken to be the same concepts and terminology, several
contexts of detrimental effect have been identified: (1) science
education, (2) collaborative research, intra- and cross-disciplinary
(e.g., the practical and moral problems of value conflicts mentioned
above), (3) clinical practice, and (4) metascientific research
(Cunningham 2021).
Integrative, or connective, types of pluralism are the conjunctive or
holistic requirement of different types of descriptions, methods or
perspectives (Mitchell 2003, 2009; contrast with the more isolationist
position in Longino 2002 and her essay in Kellert, Longino and Waters
2006; Longino 2013).
To a merely syncretistic or tolerant, non-interactive pluralism, a
recent body of literature has opposed a dynamic, coordinated,
interactive disciplinary kind of pluralism (Chang 2012; Wylie 2015;
Sullivan 2017). The former requires only respectful division of labor;
the latter may involve either a limited cross-fertilization through
communication and assimilation--borrowing or co-optation--or
a more robust, integrative epistemic engagement. The more robust kind
may involve, for instance, communicative expertise--also known as
interactional expertise--without contributory expertise in the
other practice, as well as exchange and collaboration on the same
project.
Borrowing and cross-fertilization across divides and over distances
concern more than theories and change the respective practices,
processes and products. They extend to data, concepts, models,
methods, evidence, heuristic techniques, technology and other
resources. To mention one distinction, while theoretical integration
concerns concepts, ontology, explanations, models, etc., practical
integration concerns methods, heuristics and testing (Grantham 2004).
Thus, accounts including different versions of taxonomical pluralism
range from the more conventional and contingent (from Elgin 1997 to
astronomical kinds in Ruphy 2016) and the more grounded in contexts of
practices (categorization work in Bowker and Star 1999; life sciences
and chemistry in Kendig 2016) and the interactive (Hacking's
interactive kinds in the human sciences) to the more metaphysically
substantive. Some methodological prescriptions of pluralism rely on
pluralism in metascientific research including history (Chang
2012).
Connective varieties of pluralism have been endorsed on grounds of
their epistemic value. The relevant considerations include empirical
adequacy and predictive power (grounds that can be traced back to
Neurath), the explanatory and methodological value of
cross-fertilization, the epistemic benefits of a stance of openness to
new kinds of facts and ways of inquiry and learning (Wylie 2015) and
evidential value.
From the standpoint of evidence, we find second-order varieties of
mixed evidence patterns--triangulation, security and
integration--that connect different kinds of evidence to provide
enhanced evidential value. The relevant plurality or difference
requires independence, a condition to be explicated in different ways
(Kuorikoski and Marchionni 2022). Regarding triangulation, enhanced
evidence results from multiple theory-laden but theoretically
independent lines of evidence in, for instance, microscopy (Hacking
1983), archeology (Wylie 1999) and interdisciplinary research in the
social sciences, for example, neuroeconomics, which lends support to
existence claims about phenomena and descriptions of phenomena but not
to more general theories about them (Kuorikoski and Marchionni 2016).
Connective forms of pluralism have been modeled in terms of relations
between disciplines (see above) and defended at the level of social
epistemology by analogy with political models of liberal democracy and
as a model of social governance between the extremes of so-called
consensual mainstreaming and antagonistic exclusivism (van Bouwel
2009a). Through dialogue, a plurality of represented perspectives
enables the kind of critical scrutiny of questions and knowledge
claims that exposes error, bias, unexamined norms of justification and
acceptance and the complexity of issues and relevant implications. As
a result, it has been argued, pluralism supports a corresponding
standard of critical objectivity (Longino 2002; Wylie 2015).
* *Internal vs. external pluralism*. From a methodological
standpoint, an internal perspective is naturalistic in its reliance on
the contingent plurality of scientific practice by any of its
standards. This has been defended by members of the so-called
Minnesota School (Kellert, Longino and Waters 2006) and Ruphy (2016).
The alternative, which Ruphy has attributed to Dupré and
Cartwright, is the adoption of a metaphysical commitment external to
actual scientific practice.
As a matter of actual practice, pluralism has been identified as part
of a plurality of projects and perspectives in, for instance,
cognitive and mind-brain sciences, where attitudes towards a plurality
of ways of researching and understanding cognition vary (Miłkowski and
Hohol 2021). These attitudes include (1) lamenting the variability of
meaning and systems of classification (Sullivan 2007; Cunningham
2021); (2) embracing complementarity between hierarchical mechanistic
and computational approaches--in the face of, for instance,
models with propositional declarative and lawlike statements and
computational models with software codes; (3) seeking integration with
mutual consistency and evidential support; (4) seeking reduction
privileging neural models; and (5) seeking grand unification that
values simplicity and generality over testability.
### 5.3 Metapluralism
The preference for one kind of pluralism over another is typically
motivated by epistemic virtues or constraints. Meta-pluralism, or
pluralism about pluralism, is obviously conceivable in similar terms,
as it can be found in the formulation of the so-called pluralist
stance (Kellert, Longino and Waters 2006). The pluralist stance
replaces metaphysical principles with scientific or empirical
methodological rules and aims that have been "tested".
Like Dupré's and Cartwright's metaphysical
positions, its metascientific position must be empirically tested.
Metascientific conclusions and assumptions cannot be considered
universal or necessary, but are local, contingent and relative to
scientific interests and purposes. Thus, on this view, complexity does
not always require interdisciplinarity (Kellert 2008), and in some
situations the pluralist stance will defend reductions or
specialization over interdisciplinary integration (Kellert, Longino
and Waters 2006; Cat 2012; Rescher 1993).
## 6. Conclusion: Why unity? And what difference does it really make?
From Greek philosophy to current debates, justifications for adopting
positions on matters of unification have varied from the metaphysical
and theological to the epistemic, social and pragmatic. Whether as a
matter of truth (Thalos 2013) or consequence, views on matters of
unity and unification make a difference in both science and
philosophy, and, by application, in society as well. In science they
provide strong heuristic or methodological guidance and even
justification for hypotheses, projects, and specific goals. In this
sense, different rallying cries and idioms such as simplicity, unity,
disunity, emergence or interdisciplinarity, have been endowed with a
normative value. Their evaluative role extends broadly. They are used
to provide legitimacy, even if rhetorically, in social contexts,
especially in situations involving sources of funding and profit. They
set a standard of what carries the authority and legitimacy of what it
is to be scientific. As a result, they make a difference in scientific
evaluation, management and application, especially in public domains
such as healthcare and economic decision-making. For instance,
pointing to the complexity of causal structures challenges traditional
deterministic or simple causal strategies of policy decision-making
with known risks and unknown effects of known properties (Mitchell
2009). Last but not least is the influence that implicit assumptions
about what unification can do have on science education (Klein
1990).
Philosophically, assumptions about unification help choose what sort
of philosophical questions to pursue and what target areas to explore.
For instance, fundamentalist assumptions typically lead one to address
epistemological and metaphysical issues in terms of only results and
interpretations of fundamental levels of disciplines. Assumptions of
this sort help define what counts as scientific and shape scientistic
or naturalized philosophical projects. In this sense, they determine,
or at least strongly suggest, what relevant science carries authority
in philosophical debate.
At the end of the day, one should not lose sight of the larger context
that sustains problems and projects in most disciplines and practices.
We are as free to pursue them as Kant's dove is free to fly,
that is, not without the surrounding air resistance to flap its
wings upon and against. Philosophy was once thought to stand for the
systematic unity of the sciences. The foundational character of unity
became the distinctive project of philosophy, in which conceptual
unity played the role of the standard of intelligibility. In addition,
the ideal of unity, frequently under the guise of harmony, has long
been a standard of aesthetic virtue, although this image has been
eloquently challenged by, for instance, John Bailey and Iris Murdoch
(Bailey 1976; Murdoch 1992). Unities and unifications help us meet
cognitive and practical demands upon our life as well as cultural
demands upon our self-images that are both cosmic and earthly. It is
not surprising that talk of the many meanings and levels of
unity--the fundamental level, unification, system, organization,
universality, simplicity, atomism, reduction, harmony, complexity or
totality--can place an urgent grip on our intellectual
imagination.
## 1. The "most famous pair"
Some historians of medieval philosophy describe what they see as an
Augustinian "doctrinal complex" that emerged in the late twelfth and
early thirteenth centuries and that was the "common teaching" among
scholastics after the
1220s.[1]
Weisheipl [1980], pp. 242-43, lists five ingredients of this
"Augustinian"
synthesis:[2]
1. voluntarism (an emphasis on the role of the will as distinct from
the intellect)
2. universal hylomorphism
3. plurality of forms
4. divine illumination, interpreted through the influence of
Avicenna
5. the real identity of the soul with its powers or faculties.
The second and third items on this list together make up what is
sometimes called the *"binarium famosissimum,"* the "most famous
pair."
### 1.1 Universal hylomorphism
Paul Woodruff has defined hylomorphism as "the doctrine, first
taught by Aristotle, that concrete substance consists of forms in
matter (*hyle*)" (Audi [1999], p. 408). One might therefore
expect *universal* hylomorphism to be the doctrine that
*all* substances consist of forms in matter. But in fact the
medieval theory of universal hylomorphism maintained something slightly
weaker than that; it held that all substances *except God* were
composed of matter and form, whereas God is entirely immaterial.
This view seems to be the result of two more basic theses:
* the explicit claim that only God is metaphysically simple in all
respects, so that all creatures are metaphysically composite;
* the view, not always explicitly spelled
out, that all composition is in some way a composition of matter and
form, an indeterminate factor and a determining element.
The reasoning behind the second thesis is murky. On the other hand,
it is easy to find medieval authors who argue in detail for the
first
claim.[3]
Still, it is surprisingly hard to find any medieval author who gives
a good *motivation* for it. That is, why should it be
*important* to maintain that God and only God is metaphysically
simple? What rests on the claim?
It is tempting, and plausible, to suppose that the implicit
reasoning goes something like this: "Composite" (*com* +
*positus* = "put together") things don't just *happen* to
be composite; something put them together. In short, composition
requires an efficient cause. It follows therefore that God, as first
cause, cannot be composite. Conversely, anything that is caused is in
some sense composite. Hence, since everything besides God is created
and therefore caused, everything other than God is composite. In short,
the unique simplicity of God is important to maintain because it is
required by the doctrine of creation. Nevertheless, while this line of
reasoning is plausible, it is not found clearly stated in any medieval
author I know
of.[4]
The notion that all creatures are composites of matter and form
requires that something be said about what we might otherwise call
"immaterial" substances--angels, Aristotelian "separated
substances," the human soul after death. For universal hylomorphism,
such entities cannot be truly immaterial, and yet they are obviously
quite unlike familiar physical objects. As a result, universal
hylomorphists distinguished between "corporeal" matter, i.e., the
matter of physical, sensible objects, and another kind of matter
sometimes called
"spiritual matter."[5]
### 1.2 Plurality of forms
The theory known as "plurality of forms" is not just the theory that
there are typically many forms in a material substance. That would
have been an innocuous claim; everyone agreed that material substances
routinely have many *accidental* forms. The theory of plurality
of forms is instead the theory that there is a plurality of
*substantial* forms in a given material substance. Details of
the theory varied widely from author to
author.[6]
There was some disagreement over how many substantial forms were
involved. Most people who held a version of this theory agreed that at
least a "form of corporeity" was required in all physical substances,
but they disagreed over how many additional substantial forms were
required for a given kind of body. Particular attention was given to
the case of the *human* body.
The arguments in support of this theory were quite diverse, and come
from a variety of directions. William of Ockham, for instance, held
that if the form of corporeity were not distinct from the intellective
soul and were not essentially present in the human being both during
his life and after the intellective soul departs at death, then once a
saint had died it would be false to say that the body that remains is
the body that saint ever had. Hence the cult of venerating the bodies
of the saints would make no sense. (*Quodlibet* II, q.
11.[7])
Again, Thomas Aquinas, although he rejects plurality of forms,
nevertheless records several arguments given on its behalf. Among
them:[8]
>
>
>
> Furthermore, before the coming of the rational soul the body in the
> womb of the mother has some form. Now when the rational soul comes, it
> cannot be said that this form disappears, because it does not lapse
> into nothingness, nor would it be possible to specify anything into
> which it might return. Therefore, some form exists in the matter
> previous to the rational soul ...
>
>
>
>
> Furthermore, in VII *Metaphysica* [11, 1036a 26] it is said
> that every definition has parts, and that the parts of a definition are
> forms. In anything that is defined, therefore, there must be several
> forms. Since, therefore, man is a kind of defined thing, it is
> necessary to posit in him several forms; and so some form exists before
> the rational soul.
>
>
>
## 2. Proponents and opponents
The theories of universal hylomorphism and plurality of forms are
found together in many authors from the twelfth and thirteenth
century. They are both held, for instance, by the author known to the
scholastics as "Avicebron" (or Avicebrol, Avencebrol, etc.), who is to
be identified with the Spanish Jewish philosopher and poet Solomon Ibn
Gabirol (c. 1022-c. 1052/c. 1070), and whose *Mekor Hayyim*
(= *Fons vitae,* "Fountain of Life") was translated into Latin
in the late-twelfth
century.[9]
Indeed, medieval as well as modern scholars have sometimes looked to
Ibn Gabirol as the main source for the two doctrines in the thirteenth
century.[10]
Other authors who held both theories included the translator of the
*Fons vitae,* one Dominic Gundisalvi
(Gundissalinus)[11]
as well as people as diverse as Thomas of
York,[12]
St.
Bonaventure,[13]
the anonymous thirteenth-century *Summa philosophiae* once
ascribed to Robert
Grosseteste,[14]
John Pecham, Richard of
Mediavilla,[15]
and many
others.[16]
But other authors rejected these two views. The best known is no
doubt Thomas Aquinas. Aquinas held that the
uniqueness of God's absolute simplicity does *not* require positing a
kind of matter in all creatures. Rather all creatures, including
incorporeal substances such as angels or human souls, are composite
insofar as they have a composition of essence and *esse* ("to
be," the act of existing), whether or not they have an additional
composition of matter and
form.[17]
Again, he argues that if any substance has a plurality of forms, only
the *first* form that comes to it can be a substantial form;
all the others must be accidental
forms.[18]
Godfrey of Fontaines likewise rejected both theories. John Duns
Scotus accepted plurality of forms, but denied universal
hylomorphism.[19]
## 3. The expression *"binarium famosissimum"*
The expression *"binarium famosissimum"* is not a medieval label
for this pairing of doctrines; its use in this sense appears to be
purely a twentieth-century development. Oddly enough, only *a single
occurrence* of the expression has been found in a medieval text,
the anonymous *Summa philosophiae* cited
above.[20]
But it is clear that this author uses the expression to refer not to a
pair of doctrines, whether to universal hylomorphism and plurality of
forms or to any other pair, but to the division of substance into
corporeal and
incorporeal[21]:
>
> Now the first contrariety in the categorial line of substance, from
> the nature of genus, partly by reason of the matter related to both
> sides [i.e., to corporeal and incorporeal], partly from the nature of
> the most common form tending to particularity by degrees according to
> the proportion of the matter's receptivity, contains the *binarium
> famosissimum,* that is, corporeal and incorporeal.
>
On the other hand, as early as 1943 Daniel Callus writes of a
certain John Blund, an early-thirteenth century Englishman who rejected
both universal hylomorphism and plurality of forms. Callus says [1943], p. 252:
>
> More than Gundissalinus, we see delineated in Blund the great
> questions which in the second half of that century were to divide the
> different schools into two opposing armies, the outlining of the
> conflict between philosophers and theologians, Aristotelians and the
> so-called Augustinians. In Blund we meet with the earliest, clear, and
> unmistakable account of the *binarium famosissimum* of the
> Augustinians, Plurality of Forms and hylomorphic composition of
> spiritual substances, the angels and the human soul.
>
Again, in a paper delivered in 1946, Callus [1955], p. 4, mentions
the "*binarium famosissimum,* the twofold pillar on which the
whole structure of the Augustinian school was supposed to stand."
Although he does not there say what this "twofold pillar" is, a few
pages later (p. 9) he cites "the *famosissimum binarium
Augustinianum,* namely, the hylomorphic composition of all created
beings, not only corporeal but also spiritual substances, the angels
and the human soul; and plurality of forms in one and the same
individual."
A few years later Etienne Gilson [1955], p. 377, in the
discussion of Thomas Aquinas in his monumental *History of Christian
Philosophy in the Middle Ages,* says:
>
>
> The radical elimination of the *binarium famosissimum,* i.e.,
> hylomorphism and the plurality of forms, was not due to a more correct
> understanding of the metaphysics of Aristotle but to the introduction,
> by Thomas Aquinas, of a new metaphysical notion of being.
>
>
>
Again, Weisheipl [1980], p. 250, writing about Albert the Great,
says (the emphasis is Weisheipl's):
>
> Thus Albert is quick to point out that Avicebron in the *Fons
> vitae* is "the only [philosopher] who says that from one simple
> principle two [things] must immediately proceed in the order of
> nature, since the number 'two' follows upon unity." And
> Saint Thomas notes: "Some say that the soul and absolutely every
> substance besides God is composed of matter and form; indeed the
> *first author to hold this position is Avicebron,* the author
> of *Liber fons vitae*." This is the origin of the later
> *binarium famosissimum*: after One must come Two.
>
In this passage, Weisheipl uses the phrase *"binarium
famosissimum"* in a sense perhaps loosely related to that used by
the author of the *Summa philosophiae.* But he also links it,
via the quotation from Aquinas, to the doctrine of universal
hylomorphism.[22]
A few years later, Weisheipl ([1984], p. 451), speaking about Robert Grosseteste, remarks that he:
>
> saw no problem whatever in accepting universal hylomorphism or a plurality of forms in a single material composite. Not only were these two tenets, known as the *binarium famosissimum,* common in the 1220s and 1230s, they were accepted as 'traditional' teaching by Franciscans at Oxford and Paris throughout the thirteenth century. John Peckham, Matthew of Aquasparta, Richard of Mediavilla, Roger Bacon, the pseudo-Grosseteste and even John Duns Scotus took the *binarium famosissimum* as orthodox Augustinian doctrine against the 'novelties' of Albert the Great and Thomas Aquinas.[23]
>
Again, E. A. Synan [1993], p. 236, refers to Aquinas's denial that
there can be a plurality of substantial forms in any given substance,
and calls it a "rejection of one half of the *binarium
famosissimum.*"
## 4. Conceptual issues
Thus, although Gilson and some other twentieth-century scholars have
paired the theories of universal hylomorphism and plurality of forms
under the title *"binarium famosissimum,"* and although it is
certainly true that many medieval authors held both theories, there is
no evidence that in medieval times they were ever thought of as a
"pair" in this way.
Why then have some recent scholars linked the two theories so
closely? They certainly have done so. Weisheipl ([1980], p. 243), for
example, describes the plurality of forms as "simply a logical
consequence of" universal hylomorphism. And Zavalloni ([1951], p. 437,
n. 61) likewise claims that universal hylomorphism necessarily implies
plurality of forms, although he says the latter does not imply the
former.[24]
What may be going on is this. The theory of universal hylomorphism
closely fits the view that the structure of what we truly say about
things mirrors the structure of the things themselves. Thus, if I truly
say 'The cat is black', then there is a cat that corresponds to the
subject term, and that cat is qualified by the quality blackness corresponding to the predicate term.
Without the blackness, the cat is to that extent indeterminate. It is
the addition of blackness that determines the cat to the particular
color it has. In general then, the relation of subject to predicate in
a true affirmative judgment is the relation of what is at least
relatively indeterminate to what at least partially determines it. Now
the relation of something indeterminate to what determines it is a
relation of "matter" to
"form."[25]
Hence everything we can truly say about a subject reflects the fact
that the thing is composed of an indeterminate side and a determining
element--of matter and form. In short, hylomorphic composition
is involved in anything we can make true affirmations
about--thus, universal
hylomorphism.[26]
Of course, we can truly say many things about a given subject. (E.g.,
'The cat is black', 'The cat is fat',
'The cat is asleep', etc.) The predicates of all these
true statements correspond to forms really inhering in the relatively
indeterminate subject. Hence we can speak of a "plurality of forms."
But the "plurality of forms," in the sense in which our authors speak
of it, refers to something more restricted, to the fact that we can
predicate predicates of a given subject *in a certain "nested"
order.* We can, for example, while talking about the very same cat,
say "This is a body" (i.e., it is corporeal), "This body is alive"
(i.e., it is an organism), "This organism is sensate" (i.e., it has
sensation, it is an animal), "This animal is a cat," "This cat is
black," etc. Each such predication attributes a form to the
underlying, "material," indeterminate subject, and each such subject
is in turn a composite of a form and a *deeper,* underlying
material subject. The picture we get then is the picture of some kind
of primordial *matter,* corresponding to the bare 'this'
of the first predication ("This is a body"), to which is added a
series of forms *one on top of the other* in a certain order,
each one limiting and narrowing down the preceding ones. The structure
that results is a kind of laminated structure, a metaphysical "onion"
with several layers. On this picture, of course, substantial and
accidental forms are both "layers of the onion" in exactly the same
sense. The distinction between essential and accidental features of a
thing would therefore have to be drawn in some other way.
If this reconstruction is more or less correct, then it is clear why
universal hylomorphism and plurality of forms can be viewed as
conceptually linked. Both fit nicely with the view that the structure
of reality is accurately mirrored in true predication. Ibn Gabirol,
who held both theories, seems to have been thinking along
approximately these lines. But some of the arguments cited above in
favor of plurality of forms show that at least that half of the "pair"
was sometimes held for entirely different reasons.
By contrast with the above reasoning, there is another view of
predication, an "Aristotelian" (and later, Thomistic) view that
rejects this picture. On this view, true predication in language is
not a reliable guide to the metaphysical structure of what makes it
true. One can truly *describe* a given subject in narrower and
narrower terms without there being any corresponding distinction of
real metaphysical components in what we are describing. We can, for
example, describe our cat as a body, as an organism, as an animal, and
as in particular a cat, all the while referring to *a single*
metaphysical configuration, a particular combination (in this case) of
matter and a feline form. Calling it a body, an organism, an animal,
before calling it a cat in no way reflects any sequence of
metaphysical distinctions in the entity itself. It is only when we
call it "black" that we introduce a new entity into the structure, an
*accident.* In short, on this alternative view, the structure
of reality is *not* accurately mirrored in true predication, at
least not in any straightforward way. This view is not committed to
any "plurality" of substantial forms, and is not committed to
universal hylomorphism
either.[27]
## 5. Conclusion
The issues here are complex and the historical facts are not yet
well sorted out. But it appears that twentieth-century scholars who saw
a close conceptual link between universal hylomorphism and the
plurality of forms were perhaps thinking of the two theories as
motivated by a common commitment to the first theory of predication
described in §4 above. In some cases (for example the *Fons vitae*),
they probably were so motivated. Still, the fact that the two theories
are regarded as a "pair" is a recent phenomenon, not a medieval
one. It seems to have arisen first in the writings of Daniel Callus.[28]
## 1. Properties: Basic Ideas
There are some crucial terminological and conceptual distinctions that
are typically made in talking of properties. There are also various
sorts of reasons that have been adduced for the existence of
properties and different traditional views about whether and in what
sense properties should be acknowledged. We shall focus on such
matters in the following subsections.
### 1.1 How We Speak of Properties
Properties are *expressed*, as meanings, by
*predicates*. In the past "predicate" was often
used as synonym of "property", but nowadays predicates are
linguistic entities, typically contrasted with *singular
terms*, i.e., simple or complex noun phrases such as
"Daniel", "this horse" or "the President
of France", which can occupy subject positions in sentences and
purport to *denote*, or *refer to*, a single thing.
Following Frege, predicates are *verbal phrases* such as
"is French" or "drinks". Alternatively,
predicates are *general terms* such as "French",
with the copula "is" (or verbal inflection) taken to
convey an exemplification link (P. Strawson 1959; Bergmann 1960). We
shall conveniently use "predicate" in both ways.
Predicates are *predicated* of singular terms, thereby
generating sentences such as "Daniel is French". In the
familiar formal language of first-order logic, this would be rendered,
say, as "\(F(d)\)", thus representing the predicate with a
capital letter. The set or class of objects to which a predicate
veridically applies is often called the *extension* of the
predicate, or of the corresponding property. This property, in
contrast, is called the *intension* of the predicate, i.e., its
meaning. This terminology traces back to the Middle Ages and in the
last century has led to the habit of calling sets and properties
*extensional* and *intensional* entities, respectively.
Extensions and intensions can hardly be identified; this is
immediately suggested by paradigmatic examples of co-extensional
predicates that appear to differ in meaning, such as "has a
heart", and "has kidneys" (see
§3.1).
Properties can also be referred to by singular terms, or so it seems.
First of all, there are singular terms, e.g., "being
honest" or "honesty", that result from the
*nominalization* of predicates, such as "is honest"
or "honest" (some think that "being \(F\)" and
"\(F\)-ness" stand for different kinds of property
(Levinson 1991). Further, there are definite descriptions, such as
"Mary's favorite property". Finally, though more
controversially, there are demonstratives, such as "that shade
of red", deployed while pointing to a red object (Heal
1997).
Frege (1892) and Russell (1903) had different opinions regarding the
ontological import of nominalization. According to the former,
nominalized predicates stand for a "correlate" of the
"unsaturated" entity that the predicate stands for (in
Frege's terminology they are a "concept correlate"
and a "concept", respectively). According to the latter,
who speaks of "inextricable difficulties" in Frege's
view (Russell 1903: §49), they stand for exactly the same entity.
*Mutatis mutandis*, they similarly disagreed about other
singular terms that seemingly refer to properties. The ontological
distinction put forward by Frege is mainly motivated by the fact that
grammar indeed forbids the use of predicates in subject position. But
this hardly suffices for the distinction and it is dubious that other
motivations can be marshalled (Parsons 1986). We shall thus take for
granted Russell's line here, although many philosophers support
Frege's view or at least take it very seriously
(Castaneda 1976; Cocchiarella 1986a; Landini 2008).
### 1.2 Arguments for Properties
Properties are typically invoked to explain phenomena of philosophical
interest. The most traditional task for which properties have been
appealed to is to provide a solution to the so-called "one over
many problem" via a corresponding "one over many
argument". This traces back at least to Socrates and Plato
(e.g., *Phaedo*, 100 c-d) and keeps being rehearsed (Russell
1912: ch. 9; Butchvarov 1966; Armstrong 1978a: ch. 7; Loux 1998:
ch.1). The problem is that certain things are many, they are
numerically different, and yet they are somehow one: they appear to be
similar, in a way that suggests a uniform classification, their being
grouped together into a single class. For example, some objects have
the same shape, certain others have the same color, and still others
the same weight. Hence, the argument goes, something is needed to
explain this phenomenon and properties fill the bill: the objects in
the first group, say, all have the property *spherical*, those
in the second *red*, and those in the third *weighing 200
grams*.
Relatedly, properties have been called for to explain our use of
general terms. How is it, e.g., that we apply "spherical"
to those balls over there and refuse to do it for the nearby bench? It
does not seem to be due to an arbitrary decision concerning where, or
where not, to stick a certain label. It seems rather the case that the
recognition of a certain property in some objects but not in others
elicits the need for a label, "spherical", which is then
used for objects having that property and not for others. Properties
are thus invoked as meanings of general terms and
predicates (Plato, *Phaedo*, 78e; Russell 1912: ch.
9). In contrast with this, Quine (1948; 1953 [1980: 11, 21, 131])
influentially argued that the use of general terms and predicates in
itself does not involve an ontological commitment to entities
corresponding to them, since it is by deploying singular terms that we
purport to refer to something (see also Sellars 1960). However, as
noted, predicates can be nominalized and thus occur as singular terms.
Hence, even if one agrees with Quine, nominalized predicates still
suggest the existence of properties as their referents, at least to
the extent that the use of nominalized predicates cannot be
paraphrased away (Loux 2006: 25-34; entry on
platonism in metaphysics, §4).
After Quine (1948), quantificational idiom is the landmark of
ontological commitment. We can thus press the point even more by
noting that we make claims and construct arguments that appear to
involve quantification over properties, with quantifiers reaching (i)
over predicate positions, or even (ii) over both subject and predicate
positions (Castaneda
1976).[2]
As regards (i), consider: this apple is red; this tomato is red;
hence, there is something that this apple is and this tomato also is.
As for (ii), consider: wisdom is more important than beauty; Mary is
wise and Elisabeth is beautiful; hence, there is something that Mary
is which is more important than something that Elisabeth is.
Quantification over properties seems ubiquitous not just in ordinary
discourse but in science as well. For example, the inverse square law
for dynamics and the reduction of temperature to mean molecular energy
can be taken to involve quantification over properties such as masses,
distances and temperatures: the former tells us that the attraction
force between any two bodies depends on such bodies' having
certain masses and being at a certain distance, and the latter informs
us that the fact that a sample of gas has a given temperature depends
on its having such and such mean kinetic energy.
Swoyer (1999: §3.2) considers these points within a long list of
arguments that have been, or can be, put forward to motivate an
ontological commitment to properties. He touches on topics such as
*a priori* knowledge, change, causation, measurement, laws of
nature, intensional logic, natural language semantics, numbers (we
shall cover some of this territory in
§5 and §6).
Despite all this, whether, and in what sense, properties should be
admitted in one's ontology appears to be a perennial issue,
traditionally shaped as a controversy about the existence of
universals.
### 1.3 Traditional Views about the Existence of Universals
Do universals really exist? There are three long-standing answers to
this question: *realism*, *nominalism*, and
*conceptualism*.
According to realists, universals exist as mind-independent entities.
In *transcendent* realism, put forward by Plato, they exist
even if uninstantiated and are thus "transcendent" or
"*ante res*" ("before the things"). In
*immanent* realism, defended by Aristotle in opposition to his
master, they are "immanent" or "*in
rebus*" ("in things"), as they exist only if
instantiated by objects. Contemporary notable supporters are Russell
(1912) for the former and Armstrong (1978a) for the latter (for recent
takes on this old dispute see the essays by Loux, Van Inwagen, Lowe
and Galluzzo in Galluzzo & Loux 2015). Transcendentism is of
course a less economical position and elicits epistemological worries
regarding our capacity to grasp *ante res* universals.
Nevertheless, such worries may be countered in various ways (cf. entry
on
platonism in metaphysics, §5;
Bealer 1982: 19-20; 1998: §2; Linsky & Zalta 1995),
and uninstantiated properties may well have work to do, particularly
in capturing the intuitive idea that there are unrealized
possibilities and in dealing with cognitive content (see
SS3).
A good summary of pro-transcendentist arguments and immanentist
rejoinders is offered by Allen (2016: §2.3). See also Costa
forthcoming for a new criticism of immanentism, based on the
notion of grounding.
Nominalists eschew mind-independent universals. They may either resort
to tropes in their stead, or accept *predicate nominalism*,
which tries to make do without mind-independent properties at all, by
taking the predicates themselves to do the classifying job that
properties are supposed to do. This is especially indigestible to a
realist, since it seems to put the cart before the horse by making
language and mind responsible for the similarities we find in the rich
varieties of things surrounding us. Some even say that this involves
an idealist rejection of a mind-independent world (Hochberg 2013).
Conceptualists also deny that there are mind-independent universals,
and because of this they are often assimilated to nominalists. Still,
they can be distinguished insofar as they replace such universals with
concepts, understood as non-linguistic mind-dependent entities,
typically functioning as meanings of predicates. The mind-dependence
of concepts however makes conceptualism liable to the same kind of
cart/horse worry just voiced above in relation to predicate
nominalism.[3]
The arguments considered in
§1.2
constitute the typical motivation for realism, which is the stance
that we take for granted here. They may be configured as an abductive
inference to the best explanation (Swoyer 1999). Thus, of course, they
are not foolproof, and in fact nominalism is still a popular view,
which is discussed in detail in the entry on
nominalism in metaphysics,
as well as in the entry on
tropes.
Conceptualism appears to be less common nowadays, although it still
has supporters (cf. Cocchiarella 1986a: ch. 3; 2007), and it is worth
noting that empirical research on concepts is flourishing.
### 1.4 Properties in Propositions and States of Affairs
We have talked above in a way that might give the impression that
predication is an activity that *we* perform, e.g., when we say
or think that a certain apple is red. Although some philosophers might
think of it in this way, predication, or *attribution*, may
also be viewed as a special link that connects a property to a thing
in a way that gives rise to a
proposition,
understood as a complex
featuring the property and the thing (or concepts of them) as
constituents with different roles: the latter occurs in the
proposition *as logical subject* or *argument*, as is
often said, and the former *as attributed* to such an argument.
If the proposition is true (the predication is veridical), the
argument exemplifies the property, viz. the former is an instance of
the latter. The idea that properties yield propositions when
appropriately connected to an argument motivates Russell's
(1903) introduction of the term
"propositional
function" to speak of properties. We
take for granted here that predication is univocal. However, according
to some neo-Meinongian philosophers, there are two modes of
predication, sometimes characterized as "external" and
"internal" (Castaneda 1972; Rapaport 1978; Zalta
1983; see entry on
nonexistent objects).
Zalta (1983) traces back the distinction to Mally and uses
"exemplification" to characterize the former and
"encoding" to characterize the latter. Roughly, the idea
is that non-existent objects may encode properties that existent
objects exemplify. For instance, *winged* is exemplified by
that bird over there and is encoded by the winged horse.
It is often assumed nowadays that, when an object exemplifies a
property, there is a further complex entity, a fact or state of
affairs (Bergmann 1960; Armstrong 1997, entries on
facts
and
states of affairs),
having the property (*qua* attributed) and the object
(*qua* argument) as constituents (this compositional conception
is not always accepted; see, e.g., Bynoe 2011 for a dissenting voice).
Facts are typically taken to fulfill the theoretical roles of
truthmakers (the entities that make true propositions true, see entry
on
truthmakers)
and causal *relata* (the entities connected by causal
relations; see entry on
the metaphysics of causation, §1).
Not all philosophers, however, distinguish between propositions and
states of affairs; Russell (1903) acknowledges only propositions and,
for a more recent example, so does Gaskin (2008).
It appears that properties can have a double role that no other
entities can have: they can occur in propositions and facts both as
arguments and as attributed (Russell 1903: §48). For example, in
truly saying that this apple is red and that red is a color, we
express a proposition wherein *red* occurs as attributed, and
another proposition wherein *red* occurs as argument.
Correspondingly, there are two facts with *red* in both roles,
respectively. This duplicity grounds the common distinction between
different *orders* or *types* of properties:
*first-order* ones are properties of things that are not
themselves predicables; *second-order* ones are properties of
first-order properties; and so on. Even though the formal and
ontological issues behind this terminology are controversial, it is
widely used and is often connected to the subdivision between
first-order and higher-order logics (see, e.g., Thomason 1974; Oliver
1996; Williamson 2013; entry on
type theory).
It originates from Frege's and Russell's logical
theories, especially Russell's type theory, wherein distinctions
of types and orders are rigidly regimented in order to circumvent the
logical paradoxes (see
§6).
### 1.5 Relations
A relation is typically attributed to a plurality of objects. These
*jointly* instantiate the relation in question, if the
attribution is veridical. In this case, the *relata* (as
arguments) and the relation (as attributed) are constituents of a
state of affairs. Depending on the number of objects that it can
relate, a relation is usually taken to have a number of
"places" or a "degree" ("adicity",
"arity"), and is thus called "dyadic"
("two-place"), "triadic"
("three-place"), etc. For example, *before* and
*between* are dyadic (of degree 2) and triadic (of degree 3),
respectively. In line with this, properties and propositions are
"monadic" and "zero-adic" predicables, as they
are predicated of one, and of no, object, respectively, and may then
be seen as limiting cases of relations (Bealer 1982, where properties,
relations and propositions are suggestively grouped under the acronym
"PRP"; Dixon 2018; Menzel 1986; 1993; Swoyer 1998; Orilia
1999; Van Inwagen 2004; 2015; Zalta 1983). This terminology is also
applied to predicates and sentences; for example, the predicate
"between" is triadic, and the sentence "Peter is
between Tom and May" is zero-adic. Accordingly, standard
first-order logic employs predicates with a fixed degree, typically
indicated by a superscript, e.g., \(P^1\), \(Q^2\), \(R^3\), etc.
In natural language, however, many predicates appear to be
*multigrade* or *variably polyadic*; i.e., they can be
used with different numbers of arguments, as they can be true of
various numbers of things. For example, we say "John is lifting
a table", with "lifting" used as dyadic, as well as
"John and Mary are lifting a table", with
"lifting" used as triadic. Moreover, there is a kind of
inference, called "argument deletion", which also suggests
that many predicates that *prima facie* could be assigned a
certain fixed degree are in fact multigrade. For example, "John
is eating a cake" suggests that "is eating" is
dyadic, but since, by argument deletion, it entails "John is
eating", one could conclude that it is also monadic and thus
multigrade. Often one can resist the conclusion that there are
multigrade predicates. For example, it could be said that "John
is eating" is simply short for "John is eating
something". But it seems hard to find a systematic and
convincing strategy that allows us to maintain that natural language
predicates have a fixed degree. This has motivated the construction of
logical languages that feature multigrade predicates in order to
provide a more appropriate formal account of natural language (Grandy
1976; Graves 1993; Orilia 2000a). Since natural language predicates
appear to be multigrade, one may be tempted to take the properties and
relations that they express to also be multigrade, and the metaphysics
of science may lend support to this conclusion (Mundy 1989).
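The contrast between the two regimentation strategies can be put schematically. The following sketch is ours, not drawn from the works cited; the predicate names are illustrative, with superscripts marking fixed degrees as in the notation introduced above:

```latex
% Fixed-degree regimentation: two distinct predicates
\mathrm{Eat}^2(j, c) \quad \text{(John is eating a cake)} \\
\mathrm{Eat}^1(j) \quad \text{(John is eating)} \\
% The paraphrase strategy defines the monadic use away:
\mathrm{Eat}^1(j) \;=_{df}\; \exists x\, \mathrm{Eat}^2(j, x) \\
% A multigrade predicate instead takes variably many arguments:
\mathrm{Lift}(j, t), \qquad \mathrm{Lift}(j, m, t)
```

On the paraphrase strategy, argument deletion is validated by existential generalization, since \(\mathrm{Eat}^2(j, c)\) entails \(\exists x\, \mathrm{Eat}^2(j, x)\); a multigrade language instead lets a single predicate \(\mathrm{Lift}\) take variably many arguments, as in the lifting example.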
Seemingly, relations are not jointly instantiated
*simpliciter*; *how* the instantiation occurs also plays
a role. This comes to the fore in particular with non-symmetric
relations such as *loving*. For example, if John loves Mary,
then *loving* is jointly instantiated by John and Mary in a
certain way, whereas if it is Mary who loves John, then
*loving* is instantiated by John and Mary in another way.
Accordingly, relations pose a special problem: explicating the
difference between facts, such as *Abelard loves Eloise* and
*Eloise loves Abelard*, that at least *prima facie*
involve exactly the same constituents, namely a non-symmetric relation
and two other items (*loving*, Abelard, Eloise). Such facts are
often said to differ in "relational order" or in the
"differential application" of the non-symmetric relation
in question, and the problem then is that of characterizing what this
relational order or differential application amounts to.
Russell (1903: §218) attributed enormous importance to this
issue and attacked it repeatedly. Despite this, it was largely
neglected until the end of the last century, with only a few others
confronting it systematically (e.g., Bergmann 1992; Hochberg 1987).
However, Fine (2000) has forcefully brought it back onto the
ontological agenda and proposed a novel approach that has received
some attention. Fine identifies *standard* and
*positionalist* views (analogous to two approaches defended by
Russell at different times (1903; 1984); cf. Orilia 2008). According
to the former, relations are intrinsically endowed with a
"direction", which allows us to distinguish, e.g.,
*loving* and *being loved*: *Abelard loves
Eloise* and *Eloise loves Abelard* differ, because they
involve two relations that differ in direction (e.g., the former
involves *loving* and the latter *being loved*).
According to the latter, relations have different
"positions" that can somehow host *relata*:
*Abelard loves Eloise* and *Eloise loves Abelard*
differ, because the two positions of the very same *loving*
relation are differently occupied (by Abelard and Eloise in one case
and by Eloise and Abelard in the other case). Fine goes on to propose
and endorse an alternative, "anti-positionalist"
standpoint, according to which relations have neither direction nor
positions. The literature on this topic keeps growing and there are
now various proposals on the market, including new versions of
positionalism (Orilia 2014; Donnelly 2016; Dixon 2018), and
*primitivism*, according to which differential application
cannot be analyzed (MacBride 2014).
Russell (1903: ch. 26) also had a key role in leading philosophers to
acknowledge that at least some relations, in particular
spatio-temporal ones, are *external*, i.e., cannot be reduced
to monadic properties, or the mere existence, of the *relata*,
in contrast to internal relations that can be so reduced. This was a
breakthrough after a long tradition tracing back at least to Aristotle
and the Scholastics wherein there seems to be hardly any place for
external relations (see entry on
medieval theories of relations).[4]
### 1.6 Universals *versus* Tropes
According to some philosophers, universals and tropes may coexist in
one ontological framework (see, e.g., Lowe 2006 for a well-known
general system of this kind, and Orilia 2006a, for a proposal based on
empirical data from quantum mechanics). However, nowadays they are
typically seen as alternatives, with the typical supporter of
universals ("universalist") trying to do without tropes
(e.g., Armstrong 1997) and the typical supporter of tropes
("tropist") trying to dispense with universals (e.g.,
Maurin
2002).[5]
In order to clarify how differently they see matters, we may take
advantage of states of affairs. Both parties may agree, say, that
there are two red apples, \(a\) and \(b\). They will immediately
disagree, however, for the universalist will add that
1. there are two distinct states of affairs, *that a is red*
and *that b is red*,
2. such states are similar in having the universal *red* as
constituent, and
3. they differ insofar as the former has \(a\) as constituent,
whereas the latter has \(b\).
The tropist will reject these states of affairs with universals as
constituents and rather urge that there are two distinct tropes, the
redness of \(a\) and the redness of \(b\), which play a theoretical
role analogous to the one that the universalist would invoke for such
states of affairs. Hence, tropists claim that tropes can be causal
relata (D. Williams 1953) and truthmakers (Mulligan, Simons, &
Smith 1984).
Tropes are typically taken to be *simple*, i.e., without any
subconstituent (see Section 2.2 of the entry on
tropes).
Their playing the role of states of affairs with universals as
constituents depends on this: universals combine two functions, only
one of which is fulfilled by tropes. On the one hand, universals are
*characterizers*, inasmuch as they characterize concrete
objects. On the other hand, they are also *unifiers*, to the
extent that different concrete objects may be characterized by the
very same universal, which is thus somehow shared by all of them; when
this is the case, there is, according to the universalist, an
objective similarity among the different objects (see
§1.2).
In contrast, tropes are only characterizers, for, at least as
typically understood, they cannot be shared by distinct concrete
objects. Given its dependency on one specific object, say, the apple
\(a\), a trope can do the work of a state of affairs with \(a\) as
constituent. But for tropes to play this role, the tropist will have
to pay a price and introduce additional theoretical machinery to
account for objective similarities among concrete objects. To this
end, she will typically resort to the idea that there are objective
resemblances among tropes, which can then be grouped together in
resemblance classes. These resemblance classes play the role of
unifiers for the tropist. Hence, from the tropist's point of
view "property" is ambiguous, since it may stand for the
characterizers (tropes) or for the unifiers (resemblance classes) (cf.
entry on
mental causation, SS6.5).
Similarly, "exemplification" and related words may be
regarded as ambiguous insofar as they can be used either to indicate
that an object exemplifies a certain trope or to indicate that the
object relates to a certain resemblance class by virtue of
exemplifying a trope in that
class.[6]
### 1.7 Kinds of Properties
Many important and often controversial distinctions among different
kinds of properties have been made throughout the whole history of
philosophy until now, often playing key roles in all sorts of disputes
and arguments, especially in metaphysics. Here we shall briefly review
some of these distinctions and others will surface in the following
sections. More details can be found in other more specialized entries,
to which we shall refer.
Locke influentially distinguished between *primary* and
*secondary* qualities; the former are objective features of
things, such as shapes, sizes and weights, whereas the latter are
mind-dependent, e.g., colors, *tastes*,
*sounds*, and *smells*. This contrast was already
emphasized by the Greek atomists and was revived in modern times by
Galileo, Descartes, and Boyle.
At least since Aristotle, the *essential* properties of an
object have been contrasted with its *accidental* properties;
the object could not exist without the former, whereas it could fail
to have the latter (see entry on
essential vs. accidental properties).
Among essential properties, some acknowledge *individual
essences* (also called "haecceities" or
"thisnesses"), which univocally characterize a certain
individual. Adams (1979) conceives of such properties as involving,
via the identity relation, the very individual in question, e.g.,
Socrates: *being identical to Socrates*, which cannot exist if
Socrates does not exist. In contrast, Plantinga (1974) views them as
capable of existing without the individuals of which they are
essences, e.g., *Socratizing*, which could have existed even if
Socrates had not existed. See
§5.2
on the issue of the essences of properties themselves.
*Sortal* properties are typically
expressed by count nouns like "desk" and "cat"
and are taken to encode principles of individuation and persistence
that allow us to objectively count objects. For example, there is a
fact of the matter regarding how many things in this room instantiate
*being a desk* and *being a cat*. On the other hand,
non-sortal properties such as *red* or *water* do not
allow us to count in a similarly obvious way. This distinction is
often appealed to in contemporary metaphysics (P. Strawson 1959: ch.
5, §2; Armstrong 1978a: ch. 11, §4), where, in contrast, the
traditional one between *genus* and
*species*
plays a relatively small role. The latter figured conspicuously in
Aristotle and in much subsequent philosophy inspired by him. We can
view a genus as a property more general than a corresponding species
property, in a hierarchically relative manner. For example, *being
a mammal* is a genus relative to the species *being a
human*, but it is a species relative to the genus *being an
animal*. The possession of a property called *differentia*
is appealed to in order to distinguish between different species
falling under a common genus; e.g., as the tradition has it, the
differentia for being human is *being rational* (Aristotle,
*Categories*, 3a). Similar hierarchies of properties, though
without anything like differentiae, come with the distinction of
determinables and determinates,
which appears to be
more prominent in current metaphysics. Color properties provide
typical examples of such hierarchies, e.g., with *red* and
*scarlet* as determinable and determinate, respectively.
## 2. Exemplification
We saw right at the outset that objects exemplify, or instantiate,
properties. More generally, items of all sorts, including properties
themselves, exemplify properties, or, in different terminology,
*bear*, *have* or *possess* properties. Reversing
order, we can also say that properties *characterize*, or
*inhere in*, the items that exemplify them. There is then a
very general phenomenon of exemplification to investigate, which has
been labeled in various ways, as the variety of terms of art just
displayed testifies. All such terms have often been given special
technical senses in the rich array of different explorations of this
territory since Ancient and Medieval times up to the present age (see,
e.g., Lowe 2006: 77). These explorations can hardly be disentangled
from the task of providing a general ontological picture with its own
categorial distinctions. In line with what most philosophers do
nowadays, we choose "exemplification", or, equivalently,
"instantiation" (and their cognates), to discuss this
phenomenon in general and to approach some different accounts that
have been given of it in recent times. This sweeping use of these
terms is to be kept distinct from the more specialized uses of them
that will surface below (and to some extent have already surfaced
above) in describing specific approaches by different philosophers
with their own terminologies.
### 2.1 Monist vs. Pluralist Accounts
We have taken for granted that there is just one kind of
exemplification, applying indifferently to different categories of
entities. This *monist* option may indeed be considered the
default one. A typical recent case of a philosopher who endorses it is
Armstrong (1997). He distinguishes three basic
categories, particulars, properties or relations, and states of
affairs, and takes exemplification as cutting across them: properties
and relations are exemplified not only by particulars, but by
properties or relations and states of affairs as well. But some
philosophers are *pluralist*: they distinguish different kinds
of exemplification, in relation to categorial distinctions in their
ontology.
One may perhaps attribute different kinds of exemplification to the
above-considered Meinongians in view of the different sorts of
predication that they admit (see, e.g., Monaghan's (2011)
discussion of Zalta's theory). A more typical example of the
pluralist alternative is however provided by Lowe (2006), who
distinguishes "instantiation",
"characterization" and "exemplification" in
his account of four fundamental categories: objects, and three
different sorts of properties, namely kinds (substantial universals),
attributes and modes
(tropes).[7]
To illustrate, Fido is a dog insofar as it *instantiates* the
kind *dog*, \(D\), which in turn is *characterized* by
the attribute of barking, \(B\). Hence, when Fido is barking, it
*exemplifies* \(B\) *occurrently* by virtue of being
characterized by a barking mode, \(b\), that instantiates \(B\); and,
when Fido is silent, it exemplifies \(B\) *dispositionally*,
since \(D\), which Fido instantiates, is characterized by
\(B\) (see Gorman 2014 for a critical discussion of this sort of
view).
### 2.2 Compresence and Partial Identity
Most philosophers, whether tacitly or overtly, appear to take
exemplification as primitive and unanalyzable. However, on certain
views of particulars, it might seem that exemplification is reduced to
something more fundamental.
A well-known such approach is the *bundle theory*, which takes
particulars to be nothing more than "bundles" of
universals connected by a special relation, commonly called
*compresence*, after Russell (1948: Pt. IV, ch.
8)[8].
Despite well-known problems (Van Cleve 1985), this view, or
approaches in its vicinity, keep having supporters (Casullo 1988;
Curtis 2014; Dasgupta 2009; Paul 2002; Shiver 2014; J. Russell 2018;
see Sider 2020, ch. 3, for a recent critical analysis). From this
perspective, that a particular exemplifies a property amounts to the
property's being compresent with the properties that constitute
the bundle with which the particular in question is identified. It
thus looks as if exemplification is reduced to compresence.
Nevertheless, compresence itself is presumably jointly exemplified by
the properties that constitute a given bundle, and thus at most there
is a reduction, to compresence, of exemplification *by a
particular* (understood as a bundle), and not an elimination of
exemplification in general.
Another, more recent, approach is based on *partial identity*.
Baxter (2001) and, inspired by him, Armstrong (2004), have proposed
related assays of exemplification, which seem to analyze it in terms
of such partial identity. These views have captured some interest and
triggered discussions (see, e.g., Mumford 2007; Baxter's (2013)
reply to critics and Baxter's (2018) rejoinder to Brown
2017).
Baxter (2001) relies on the notion of *aspect* and on the
relativization of numerical identity to *counts*. In his view,
both particulars and properties *have* aspects, which can be
similar to distinct aspects of other particulars or properties. The
numerical identity of aspects is relative to standards for counting,
*counts*, which group items in count collections: aspects of
particulars in the *particular* collection, and aspects of
universals, in the *universal* collection. There can then be a
*cross-count identity*, which holds between an aspect in the
particular collection, and an aspect in the universal collection,
e.g., the aspects *Hume as human* and *humanity as had by
Hume*. In this case, the universal and the particular in question
(humanity and Hume, in our example) are *partially identical*.
Instantiation, e.g., that Hume instantiates humanity, then amounts to
this partial identity of a universal and a particular. One may have
the feeling, as Baxter himself worries (2001: 449), that in this
approach instantiation has been traded for something definitely more
obscure, such as aspects and an idiosyncratic view of identity. It can
also be suspected that particulars' and properties'
*having* aspects is presupposed in this analysis, where this
*having* is a relation rather close to exemplification
itself.
Armstrong (2004) tries to do without aspects. At first glance it seems
as if he analyzes exemplification, for he takes the exemplification of
a property (a universal) by a particular to be a partial identity of
the property and the particular; as he puts it (2004: 47), "[i]t
is not a mere mereological overlap, as when two streets intersect, but
it is a partial identity". However, when we see more closely
what this partial identity amounts to, the suspicion arises that it
presupposes exemplification. For Armstrong appears to identify a
particular via the properties that it *instantiates* and
similarly a property via the particulars that *instantiate* it.
So that we may identify a particular, \(x\), via a collection of
properties *qua* instantiated by \(x\), say \(\{F\_x,\) \(G\_x,\)
\(H\_x,\) ..., \(P\_x,\) \(Q\_x,\) \(\ldots\};\) and a property,
\(P\), via a collection of particulars *qua* instantiating
\(P\), say \(\{P\_a, P\_b , \ldots ,P\_x, P\_y , \ldots \}\). By putting
things in this way, we can then say that a particular is partially
identical to a property when the collection that identifies the
particular has an element in common with the collection that
identifies the property. To illustrate, the \(x\) and the \(P\) of our
example are partially identical because they have the element \(P\_x\)
in common. Now, the elements of these collections are neither
properties *tout court* nor particulars *tout court*,
which led us to talk of properties *qua* instantiated and
particulars *qua*
instantiating.[9]
But this of course presupposes instantiation. Moreover, there is the
unwelcome consequence that the world becomes dramatically less
contingent than we would have thought at first sight, for neither a
concrete particular nor a property can exist without it being the case
that the former has the properties it happens to have, and that the
latter is instantiated by the same particulars that actually
instantiate it; we get, as Mumford puts it (2007: 185), "a major
new kind of necessity in the world".
### 2.3 Bradley's Regress
One important motivation, possibly the main one, behind attempts at
analysis such as the ones we have just seen is the worry to avoid the
so-called Bradley's regress
regarding
exemplification (Baxter 2001: 449; Mumford 2007: 185), which goes as
follows. Suppose that the individual \(a\) has the property \(F\). For
\(a\) to instantiate \(F\) it must be *linked to* \(F\) by a
(dyadic) relation of instantiation, \(I\_1\). But this requires a
further (triadic) relation of instantiation, \(I\_2\), that connects
\(I\_1, F\) and \(a\), and so on without end. At each stage a further
connecting relation is required, and thus it seems that nothing
*ever* gets connected to anything else (it is not clear to what
extent Bradley had this version in mind; for references to analogous
regresses prior to Bradley's, see Gaskin 2008: ch. 5,
§70).
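Schematically, the regress generates an unending series of instantiation relations of ever higher adicity (the following rendering is ours, using the symbols already introduced):

```latex
F(a) \\
I_1(F, a) \\
I_2(I_1, F, a) \\
I_3(I_2, I_1, F, a) \\
\vdots
```

At stage \(n\) a new \((n+2)\)-adic relation \(I\_n\) is invoked; the dispute over viciousness turns on whether each stage is needed to *explain* the previous one or is merely *entailed* by it.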
This regress has traditionally been regarded as vicious (see, e.g.,
Bergmann 1960), although philosophers such as Russell (1903: §55)
and Armstrong (1997: 18-19) have argued that it is not. In doing
so, however, they seem to take for granted the fact that \(a\) has the
property \(F\) (pretty much as in the *brute fact approach*;
see below) and go on to see \(a\)'s and \(F\)'s
instantiating \(I\_1\) as a further fact that is merely
*entailed* by the former, which in turn entails \(a\)'s,
\(F\)'s and \(I\_1\)'s instantiating \(I\_2\), and so on.
This way of looking at the matter tends to be regarded as a standard
response to the regress. But those who see the regress as vicious
assume that the various exemplification relations are introduced in an
effort to *explain* the very existence of the fact that \(a\)
has the property \(F\). Hence, from their explanatory standpoint,
taking the fact in question as an unquestioned ground for a chain of
entailments is beside the point (cf. Loux 1998: 31-36;
Vallicella 2002).
It should be noted, however, that this perspective suggests a
distinction between an "internalist" and an
"externalist" version of the regress (in the terminology
of Orilia 2006a). In the former, at each stage we postulate a new
constituent of the fact, or state of affairs, \(s\), that exists
insofar as \(a\) has the property \(F\), and there is viciousness
because \(s\) can never be appropriately
characterized.[10]
In the latter, at each stage we postulate a new, distinct, state of
affairs, whose existence is required by the existence of the state of
affairs of the previous stage. This amounts to admitting infinite
explanatory and metaphysical dependence chains. However, according to
Orilia (2006b: §7), since no decisive arguments against such
chains exist, the externalist regress should not be viewed as vicious
(for criticisms, see Maurin 2015 and Allen 2016: §2.4.1; for a
similar view about predication, see Gaskin 2008).
A typical line for those convinced that the regress is vicious has
consisted in proposing that instantiation is not a relation, or at
least not a normal one. Some philosophers hold that it is a *sui
generis* linkage that hooks things up without intermediaries.
Peter Strawson (1959) calls it a *non-relational tie* and
Bergmann (1960) calls it a *nexus*. Broad (1933: 85) likened
instantiation to glue, which *just* sticks two sheets of paper
together, without needing anything additional; similarly,
instantiation *just* relates. An alternative line has been to
reject instantiation altogether. According to Frege, it is not needed,
because properties have "gaps" that can be filled, and
according to a reading of Wittgenstein's *Tractatus*,
because objects and properties can be connected like links in a chain.
However, both strategies are problematic, as argued by Vallicella
(2002). His basic point is that, if \(a\) has property \(F\), we need
an ontological explanation of why \(F\) and \(a\) happen to be
connected in such a way that \(a\) has \(F\) as one of its properties
(unless \(F\) is a property that \(a\) has necessarily). But none of
these strategies can provide this explanation. For example, the appeal
to gaps is pointless: \(F\) has a gap whether or not it is filled by
\(a\) (for example, it could be filled in by another object), and thus
the gap cannot explain the fact that \(a\) has \(F\) as one of its
properties.
Before turning to exemplification as partial identity, Armstrong
(1997: 118) had claimed that Bradley's regress can be avoided by
taking a state of affairs, say \(x\)'s being \(P\), as capable
by itself of holding together its constituents, i.e., the object \(x\)
and the property \(P\) (see also Perovic 2016). Thus, there is no need
to invoke a relation of exemplification linking \(x\) and \(P\) in
order to explain how \(x\) and \(P\) succeed in giving rise to a
unitary item, namely the state of affairs in question. There seems to
be a circularity here for it appears that we want to explain how an
object and a property come to be united in a state of affairs by
appealing to the result of this unification, namely the state of
affairs itself. But perhaps this view can be interpreted as simply the
idea that states of affairs should be taken for granted in a
primitivist fashion without seeking an explanation of their unity by
appealing to exemplification or otherwise; this is the *brute fact
approach*, as we may call it (for supporters, see Van Inwagen
(1993: 37) and Oliver (1996: 33); for criticisms and a possible
defense, see Vallicella 2002, and Orilia 2016, respectively).
Lowe (2006) has tried to tackle Bradley's regress within his
pluralist approach to exemplification. In his view, characterization,
instantiation and exemplification are "formal" and thus
quite different from garden-variety relations such as *giving* or
*loving*. This guarantees that these three relations escape
Bradley's regress (Lowe 2006: 30, 80,
90).[11]
Let us illustrate how, by turning back to the Fido example of
§2.1.
What a mode instantiates and what it characterizes belong to its
essence. In other words, a mode cannot exist without instantiating the
attribute it instantiates and characterizing the object it
characterizes. Hence, mode \(b\), by simply existing, instantiates
attribute \(B\) and characterizes Fido. Moreover, since
exemplification (the occurrent one, in this case) results from
"composing" characterization and instantiation,
\(b\)'s existence also guarantees that Fido exemplifies \(B\).
According to Lowe, we thus have some truths, that \(b\) characterizes
Fido, that \(b\) instantiates \(B\), and that Fido exemplifies \(B\)
(i.e., is barking), all of which are made true by \(b\). Hence, there
is no need to postulate as truthmakers states of affairs with
constituents, Fido and \(b\), related by characterization, or \(b\)
and \(B\), related by exemplification, or Fido and \(B\), related by
exemplification. This, in Lowe's opinion, eschews
Bradley's regress, since this arises precisely because we appeal
to states of affairs with constituents in need of a glue that
contingently keeps them together. Nevertheless, there is no loss of
contingency in Lowe's world picture, for an object need not be
characterized by the modes that happen to characterize it. Thus, for
example, mode \(b\) might have failed to exist and there could have
been a Fido silence mode in its stead, in which case the proposition
that Fido is barking would have been false and the proposition that
Fido is silent would have been true. One may wonder however what makes
it the case that a certain mode is a mode of just a certain object and
not of another one, say another barking dog. Even granting that it is
essential for \(b\) to be a mode of Fido, rather than of another dog,
it remains true that it is *of* Fido, rather than of the other
dog, and one may still think that this *being of* is also glue
of some sort, perhaps with a contingency inherited from the
contingency of \(b\) (which might have failed to exist). The suspicion
then is that the problem of accounting for the relation between a mode
and an object has replaced the Armstrongian one of what makes it the
case that a universal \(P\) and an object \(x\) make up the state of
affairs *\(x\)'s being \(P\)*. But the former problem,
one may urge, is no less thorny than the latter, and some
universalists like Armstrong may consider uneconomical Lowe's
acceptance of tropes in addition to universals (for accounts of
Bradley's regress analogous to Lowe's, but within a
thoroughly tropist ontology, see Section 3.2 of the entry on
tropes.)
As should be clear from this far-from-exhaustive survey,
Bradley's regress deeply worries ontologists, and attempts to
tame it keep
coming.[12]
### 2.4 Self-exemplification
Presumably, properties exemplify properties. For example, if
properties are abstract objects, as is usually thought, then seemingly
every property exemplifies abstractness. But then we should also grant
that there is self-exemplification, i.e., a property exemplifying
itself. For example, abstractness is itself abstract and thus
exemplifies itself. Self-exemplification however has raised severe
perplexities at least since Plato.
Plato appears to hold that *all* properties exemplify
themselves, when he claims that forms participate in themselves. This
claim is crucially involved in his so-called *third man
argument*, which led him to worry that his theory of forms is
incoherent (*Parmenides*, 132 ff.). As we see matters now, it
is not clear why we should hold that all properties exemplify
themselves (Armstrong 1978a: 71); for instance, people are honest, but
honesty itself is not honest (see, however, the entry on
Plato's *Parmenides*,
and Marmodoro forthcoming).
Nowadays, a more serious worry related to self-exemplification is
Russell's famous paradox, here formulated in terms of the property
of non-self-exemplification, which appears to exemplify itself iff it
does not, thus defying the laws of logic, at least classical logic.
The discovery of his paradox (and then the awareness of related
puzzles) led Russell to introduce a theory of types, which institutes
a total ban on self-predication by a rigid segregation of properties
into a hierarchy of *types* (more on this in
§6.1).
The account became more complex and rigid, as Russell moved from
*simple* to *ramified* type theory, which involves a
distinction of *orders* within types (see entries on
type theory
and
Russell's paradox,
and, for a detailed reconstruction of how Russell reacted to the
paradox, Landini 1998).
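In modern notation the paradox can be displayed in two lines; the λ-notation and the symbol ε for exemplification are conventions assumed here for exposition, not Russell's own:

```latex
\begin{align*}
R &\mathrel{=_{\mathrm{df}}} \lambda x\,\neg(x \mathrel{\varepsilon} x)
   && \text{(non-self-exemplification)}\\
R \mathrel{\varepsilon} R &\leftrightarrow \neg(R \mathrel{\varepsilon} R)
   && \text{(taking $x$ to be $R$ itself)}
\end{align*}
```

Classically, a biconditional of this form yields a contradiction, which is why Russell's typed hierarchy blocks the instantiation of \(R\) to itself.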
In type theory all properties are, we may say, *typed*. This
approach has never gained unanimous consensus and its many problematic
aspects are well-known (see, e.g., Fitch 1952: Appendix C; Bealer
1989). Just to mention a few, the type-theoretical hierarchy imposed
on properties appears to be highly artificial and multiplies
properties *ad infinitum* (e.g., since presumably properties
are abstract, for any property \(P\) of type \(n\), there is an
abstractness of type \(n+1\) that \(P\) exemplifies). Moreover, many
cases of self-exemplification are innocuous and common. For example,
the property of *being a property* is itself a property, so it
exemplifies itself. Accordingly, many recent proposals are type-free
(see
§6.1)
and thus view properties as *untyped*, capable of being
self-predicated, sometimes veridically. An additional motivation to
move in this direction is a new paradox proposed by Orilia and Landini
(2019), which affects simple type theory. It is
"contingent" in that it is derived from a contingent
assumption, namely that someone, say John, is thinking just about the
property of being a property \(P\) such that John is thinking of
something that is not exemplified by \(P\).
## 3. Existence and Identity Conditions for Properties
Quine (1957 [1969: 23]) famously claimed that there should be no
entity without identity. His paradigmatic case concerns sets: two of
them are identical iff they have exactly the same members. Since then
it has been customary in ontology to search for identity conditions
for given categories of entities and to rule out categories for want
of identity conditions (against this, see Lowe 1989). Quine started
this trend precisely by arguing against properties and this has
strictly intertwined the issues of which properties there are and of
their identity
conditions.[13]
### 3.1 From Extensionality to Hyperintensionality
In an effort to provide identity conditions for properties, one could
mimic those for sets, or equate the former with the latter (as in
class nominalism; see note 3), and provide the following
*extensionalist* identity conditions: two properties are
identical iff they are co-extensional. This criterion can hardly work,
however, since there are seemingly distinct properties with the same
extension, such as *having a heart* and *having
kidneys*, and even wildly different properties such as
*spherical* and *weighing 2 kilos* could by accident be
co-extensive.
One could then try the following *intensional* identity
conditions: two properties are identical iff they are
*co-intensional*, i.e., necessarily co-extensional, where the
necessity in question is logical necessity. This guarantees that
*spherical* and *weighing 2 kilos* are different even if
they happen to be co-extensional. Following this line, one may take
properties to be *intensions*, understood as, roughly,
functions that assign extensions (sets of objects) to predicates at
logically possible worlds. Thus, for instance, the predicates
"has a heart" and "has kidneys" stand for
different intensions, for even if they have the same extension in the
actual world they have different extensions at worlds where there are
creatures with heart and no kidneys or vice versa. This approach is
followed by Montague (1974) in his pioneering work in natural language
semantics, and in a similar way by Lewis (1986b), who reduces
properties to sets of possible objects in his modal realism,
explicitly committed to possible worlds and mere *possibilia*
inhabiting them. Most philosophers find this commitment unappealing.
Moreover, one may wonder how properties can do their causal work if
they are conceived of in this way (for further criticisms see Bealer
1982: 13-19, and 1998: §4; Egan 2004). However, the
criterion of co-intensionality may be accepted without also buying the
reduction of properties to sets of *possibilia* (Bealer views
this as the identity condition for his conception 1 properties; see
below). Still, co-intensionality must face two challenges coming from
opposite fronts.
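The contrast between co-extensionality and co-intensionality can be made vivid with a toy model in which intensions are represented, Montague-style, as functions from possible worlds to extensions. The worlds and creatures below are invented purely for illustration:

```python
# A toy model of properties as intensions: functions from possible
# worlds to extensions (sets of objects). All names are illustrative.

worlds = ["actual", "w1"]

# In the actual world every creature with a heart has kidneys;
# in world w1 one creature ("zog") has a heart but no kidneys.
def has_heart(world):
    return {"ann", "bob", "zog"} if world == "w1" else {"ann", "bob"}

def has_kidneys(world):
    return {"ann", "bob"}

def coextensional(p, q, world):
    # Same extension at the given world.
    return p(world) == q(world)

def cointensional(p, q):
    # Same extension at every world, i.e., the same intension.
    return all(p(w) == q(w) for w in worlds)

print(coextensional(has_heart, has_kidneys, "actual"))  # True
print(cointensional(has_heart, has_kidneys))            # False
```

On the extensionalist criterion the two properties would be identified; the intensional criterion keeps them apart, as the second result shows.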
On the one hand, from the perspective of empirical science,
co-intensionality may appear too strong as a criterion of identity.
For the identity statements of scientific reductions, such as that of
temperature to mean kinetic energy, could suggest that some properties
are identical even if not co-intensional. For example, one may accept
that *having absolute temperature of 300K* is *having mean
molecular kinetic energy of \(6.21 \times 10^{-21}\) joules* (Achinstein
1974: 259; Putnam (1970: §1) and Causey (1972) speak of
"synthetic identity" and "contingent
identity", respectively). Rather than logical necessity, it is
*nomological* necessity, necessity on the basis of the causal
laws of nature, that becomes central in this line of thought.
Following it, some have focused on the *causal* and
*nomological* roles of properties, i.e., roughly, the causes
and effects of their being instantiated, and their involvement in laws
of nature, respectively. They have thus advanced *causal or
nomic* criteria. According to them, two properties are identical
iff they have the same causal (Achinstein 1974: §XI; Shoemaker
1980) or nomological role (Swoyer 1982; Kistler 2002). This line has
been influential, as it connects to the "pure
dispositionalism" discussed in
§5.2.
There is however a suspicion of circularity here, since causal and
nomological roles may be viewed as higher-order properties (see entry
on
dispositions, §3).
On the other hand, once matters of meaning and mental content are
taken into account, co-intensionality might seem too weak, for it
makes a property \(P\) identical to any logically equivalent property,
e.g., assuming classical logic, *\(P\) and (\(Q\) or not
\(Q\))*. And with a sufficiently broad notion of logical
necessity, even, for example, *being triangular* and *being
trilateral* are identical. However, one could insist that
"trilateral" and "triangular" appear to have
different meanings, which somehow involve the different geometrical
properties *having a side*, and *having an angle*,
respectively. And if *being triangular* were really identical
to *being trilateral*, from the fact that John believes that a
certain object has the former property, one should be able to infer
that John also believes that such an object has the latter property.
Yet, John's ignorance may make this conclusion unwarranted. In
the light of this, borrowing a term from Cresswell (1975), one may
move from intensional to *hyperintensional* identity
conditions, according to which two properties, such as trilaterality
and triangularity, may be different even if they are co-intensional.
In order to implement this idea, Bealer (1982) takes two properties to
be identical iff they have the same analysis, i.e., roughly, they
result from the same ultimate primitive properties and the same
logical operations applied to them (see also Menzel 1986; 1993). Zalta
(1983; 1988) has developed an alternative available to those who admit
two modes of predication (see
SS1.4):
two properties are identical iff they are (necessarily) encoded by
the same objects.
Hyperintensional conditions make of course for finer distinctions
among entities than the other criteria we considered. Accordingly, the
former are often called "fine-grained" and the latter
"coarse-grained", and the same denominations are
correspondingly reserved for the entities that obey the conditions in
question. Coarse- or fine-grainedness is a relative matter. The
extensional criterion is more coarse-grained than the intensional or
causal/nomological ones. And hyperintensional conditions themselves
may be more or less fine-grained: properties could be individuated
almost as finely as the predicates expressing them, to the point that,
e.g., even *being \(P\) and \(Q\)* and *being \(Q\) and
\(P\)* are kept distinct, but one may also envisage less stringent
conditions that make for the identification of properties of that sort
(Bealer 1982: 54). It is conceivable, however, that one could be
logically obtuse to the point of believing that something has a
certain property without believing that it has a trivially equivalent
property. Thus, to properly account for mental content,
*maximal* hyperintensionality appears to be required, and it is
in fact preferred by Bealer. Even so, the paradox of analysis
raises a serious issue.
One could say, for example, that *being a circle* is *being
a locus of points equidistant from a point*, since the latter
provides the analysis of the former. In reply, Bealer distinguishes
between "being a circle" as designating a simple
"undefined" property, which is an *analysandum*
distinct from the *analysans*, *being a locus of points
equidistant from a point*, and "being a circle" as
designating a complex "defined" property, which is indeed
identical to that *analysans*. Orilia (1999: §5.5)
similarly distinguishes between the simple *analysans* and the
complex *analysandum*, without admitting that expressions for
properties such as "being a circle" could be ambiguous in
the way Bealer suggests. Orilia rather argues that the
"is" of analysis does not express identity but a weaker
relation, which is asymmetrical in that *analysans* and
*analysandum* play different roles in it. The matter remains under
discussion. Rosen (2015) appeals to grounding to characterize the
different role played by the *analysans*. Dorr (2016) provides
a formal account of the "is" used in identity statements
involving properties, according to which it stands for a symmetric
"identification" relation.
### 3.2 The Sparse and the Abundant Conceptions
Bealer (1982) distinguishes between *conception 1* properties,
or *qualities*, and *conception 2* properties, or
*concepts* (understood as mind-independent). With a different
and now widespread terminology, Lewis (1983, 1986b) followed suit,
speaking of a *sparse* and an *abundant conception* of
properties. According to the former, there are relatively few
properties, namely just those responsible for the objective
resemblances and causal powers of things; they cut nature at its
joints, and science is supposed to individuate them *a
posteriori*. According to the latter, there are immensely many
properties, corresponding to all meaningful predicates we could
possibly imagine and to all sets of objects, and they can be assumed
*a priori*. (It should be clear that "few" and
"many" are used in a comparative sense, for the number of
sparse properties may be very high, possibly infinite). To illustrate,
the sparse conception admits properties currently accepted by
empirical science such as *having negative charge* or
*having spin up* and rejects those that are no longer supported
such as *having a certain amount of caloric*, which featured in
eighteenth century chemistry; in contrast, the abundant conception may
acknowledge the latter property and all sorts of other properties,
strange as they may be, e.g., *negatively charged or disliked by
Socrates*, *round and square* or Goodman's (1983)
notorious *grue* and *bleen*. Lewis (1986b: 60) tries to
further characterize the distinction by taking the sparse properties
to be *intrinsic*, rather than *extrinsic* (e.g.,
*being 6 feet tall*, rather than *being taller than
Tom*), and *natural*, where naturalness is something that
admits of degrees (for example, he says, masses and charges are
perfectly natural, colors are less natural and *grue* and
*bleen* are paradigms of unnaturalness). Much work on such
issues has been done since then (see entries on
intrinsic vs. extrinsic properties
and
natural properties).
Depending on how sparse or abundant properties are, we can have two
extreme positions and other more moderate views in between.
At one end of the spectrum, there is the most extreme version of the
sparse conception, *minimalism*, which accepts all of these
principles:
1. there are only coarse-grained properties,
2. they exist only if instantiated and thus are contingent
beings,
3. they are all instantiated by things in space-time (setting aside
those instantiated by other properties),
4. they are fundamental and thus their existence must be sanctioned
by microphysics.
This approach is typically motivated by physicalism and
epistemological qualms regarding transcendent universals. The
best-known contemporary supporter of minimalism is Armstrong (1978a,b,
1984). Another minimalist is Swoyer (1996).
By dropping or mitigating some of the above principles, we get less
minimalist versions of the sparse conception. For example, some have
urged uninstantiated properties to account for features of measurement
(Mundy 1987), vectors (Bigelow & Pargetter 1991: 77), or natural
laws (Tooley 1987), and some even that there are all the properties
that can be possibly exemplified, where the possibility in question is
causal or nomic (Cocchiarella 2007: ch. 12). In line with
positions traditionally found in emergentism (see
§5.1),
Schaffer (2004) proposes that there are sparse properties as
fundamental, and in addition, as grounded on them, the properties that
need be postulated at all levels of scientific explanations, e.g.,
chemical, biological and psychological ones. Even Armstrong goes
beyond minimalist strictures, when in his later work (1997)
distinguishes between "first-class properties"
(universals), identified from a minimalist perspective, and
"second-class properties" (supervening on universals). All
these positions appear to be primarily concerned with issues in the
metaphysics of science (see
§5)
and typically display too short a supply of properties to deal with
meaning and mental content, and thus with natural language semantics
and the foundations of mathematics (see
§6).
However, minimalists might want to resort to concepts, understood as
mind-dependent entities, in dealing with such issues (e.g., along the
lines proposed in Cocchiarella 2007).
Matters of meaning and mental content are instead what typically
motivate the views at the opposite end of the spectrum. To begin with,
there is *maximalism*, i.e., the abundant conception in all its glory
(Bealer 1982; 1993; Carmichael 2010; Castañeda 1976; Jubien
1989; Lewis 1986b; Orilia 1999; Zalta 1988; Van Inwagen 2004):
properties are fine-grained necessary entities that exist even if
uninstantiated, or even incapable of being instantiated. In its most
extreme version, maximalism adopts identity conditions that
differentiate properties as much as possible, but more moderate
versions can be obtained by slightly relaxing such conditions. Views
of this sort are hardly concerned with physicalist constraints or the
like and rather focus on the explanatory advantages of
hyperintensionality. These may well go beyond the typical motivations
considered above: Nolan (2014) argues that hyperintensionality is
increasingly important in metaphysics in dealing with issues such as
counter-possible conditionals, explanation, essences, grounding,
causation, confirmation and chance.
Rather than choosing between the sparse and abundant conceptions, the
very promoters of this distinction have opted in different ways for a
*dualism of properties*, according to which there are
properties of both kinds. Lewis endorses abundant properties as
reduced to sets of *possibilia*, and sparse properties either
viewed as universals, and corresponding to some of the abundant
properties (1983), or as themselves sets of possibilia (1986b: 60).
Bealer (1982) proposes a systematic account wherein qualities are the
coarse-grained properties that "provide the world with its
causal and phenomenal order" (1982: 183) and concepts are the
fine-grained properties that can function as meanings and as
constituents of mental contents. He admits a stock of simple
properties, which are both qualities and concepts (1982: 186),
wherefrom complex qualities and complex concepts are differently
constructed: on the one hand, by *thought-building operations*,
which give rise to fine-grained qualities; on the other hand, by
*condition-building operations*, which give rise to
coarse-grained qualities. Orilia (1999) has followed Bealer's
lead in also endorsing both coarse-grained qualities and fine-grained
concepts, without however identifying simple concepts and simple
qualities. In Orilia's account concepts are never identical to
qualities, but may correspond to them; in particular, the identity
statements of intertheoretic reductions should be taken to express the
fact that two different concepts correspond to the same quality.
Despite its advantages in dealing with a disparate range of phenomena,
dualism has not gained any explicit consensus. Its implicit presence
may however be widespread. For example, Putnam's (1970: §1)
distinction between *predicates* and *physical
properties*, and Armstrong's above-mentioned recognition of
second-class properties may be seen as forms of dualism.
## 4. Complex Properties
It is customary to distinguish between *simple* and
*complex* properties, even though some philosophers take all
properties to be simple (Grossmann 1983: §§58-61). The
former are not characterizable in terms of other properties, are
primitive and unanalyzable and thus have no internal structure,
whereas the latter somehow have a structure, wherein other properties,
or more generally entities, are parts or constituents. It is not
obvious that there are simple properties, since one may imagine that
all properties are analyzable into constituents *ad infinitum*
(Armstrong 1978b: 67). Even setting this aside, to provide examples is
not easy. Traditionally, determinate colors are cited, but nowadays
many would rather appeal to fundamental physical properties such as
*having a certain electric charge*. It is easier to provide
putative examples of complex properties, once some other properties
are taken for granted, e.g., *blue and spherical* or *blue
or non-spherical*. These *logically compound* properties,
which involve logical operations, will be considered in
§4.1.
Next, in
§4.2
we shall discuss other kinds of complex properties, called
*structural* (after Armstrong 1978b), which are eliciting a
growing interest, as a recent survey testifies (Fisher 2018). Their
complexity has to do with the subdivisions of their instances into
subcomponents. Typical examples come from chemistry, e.g.,
*H2O* and *methane* understood as properties
of molecules.
It should be noted that it is not generally taken for granted that
complex properties literally have parts or constituents. Some
philosophers take this line (Armstrong 1978a: 36-39, 67ff.;
Bigelow & Pargetter 1989; Orilia 1999), whereas others demur, and
rather think that talk in terms of structures with constituents is
metaphorical and dependent on our reliance on structured terms such as
"blue and spherical" (Bealer 1982; Cocchiarella 1986a;
Swoyer 1998: §1.2).
### 4.1 Logically Compound Properties
At least *prima facie*, our use of complex predicates suggests
that there are corresponding complex properties involving all sorts of
logical operations. Thus, one can envisage *conjunctive*
properties such as *blue and spherical*, *negative* properties
such as *non-spherical*, *disjunctive* properties such
as *blue or non-spherical*, *existentially* or
*universally quantified* properties such as *loving
someone* or *loved by everyone*, *reflexive*
properties such as *loving oneself*, etc. Moreover, one could
add to the list properties with individuals as constituents, which are
then denied the status of a *purely qualitative* property,
e.g., *taller than Obama*.
It is easy to construct complex predicates. But whether there really
are corresponding properties of this kind is a much more difficult and
controversial issue, tightly bound to the sparse/abundant distinction.
In the sparse conception the tendency is to dispense with them. This
is understandable since in this camp one typically postulates
properties *in rebus* for empirical explanatory reasons. But
then, if we explain some phenomena by attributing properties \(F\) and
\(G\) to an object, while denying that it does not exemplify \(H\), it
seems of no value to add that it also has, say, *F and G*,
*F or H*, and *non-H*. Nevertheless, Armstrong, the
leading supporter of sparseness, has defended over the years a mixed
stance without disjunctive and negative properties, but with
conjunctive ones. Armstrong's line has of course its opponents
(see, e.g., Paolini Paoletti 2014; 2017a; 2020a; Zangwill 2011 argues
that there are negative properties, but they are less real than
positive ones). On the other hand, in the abundant conception, all
sorts of logically compound properties are acknowledged. Since the
focus now is on meaning and mental content, it appears natural to
postulate such properties to account for our understanding of complex
predicates and of how they differ from simple ones. Even here,
however, there are disagreements, when it comes to problematic
predicates such as the "does not exemplify itself" of
Russell's paradox. Some suggest that in this case one should
deny there is a corresponding property, in order to avoid the paradox
(Van Inwagen 2006). However, we understand this predicate, and it thus
seems *ad hoc* to hold that the general strategy of postulating a
corresponding property fails in just this case. It seems better to
confront the paradox with other
means (see
§6).
### 4.2 Structural Properties
Following Armstrong (1978b: 68-71; see also Fisher 2018:
§2), a structural property \(F\) is typically viewed as a
universal, and can be characterized thus:
1. an object exemplifying \(F\), say \(x\), must have
"relevant" proper parts that do not exemplify \(F\);
2. there must be some other "relevant" properties somehow
involved in \(F\) that are not exemplified by the object in
question;
3. the relevant proper parts must rather exemplify one or the other
of the relevant properties involved in \(F\).
For instance, a molecule \(w\) exemplifying *H2O*
has as parts two atoms \(h\_1\) and \(h\_2\) and an atom \(o\) that do
not exemplify *H2O*; this property involves
*hydrogen* and *oxygen*, which are not exemplified by
\(w\); \(h\_1\) and \(h\_2\) exemplify *hydrogen*, and \(o\)
exemplifies *oxygen*. For *relational* structural
properties, there is the further condition that
4. a certain "relevant" relation that links the relevant
proper parts must also be involved.
*H2O* is a case in point: the relevant (chemical)
relation is *bonding*, which links the three atoms in question.
Another, often considered, example is *methane* as a structural
property of methane-molecules (*CH4*), which
involves a *bonding* relation between atoms and the joint
instantiation of *hydrogen* (by four atoms) and *carbon*
(by one atom). *Non-relational* structural properties do not
require condition (iv). For instance, *mass 1 kg* involves many
relevant properties of the kind *mass n kg*, for \(n \lt 1\),
which are instantiated by relevant proper parts of any object that is
1 kg in mass, and these proper parts are not linked by any relevant
relation (Armstrong 1978b: 70).
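Conditions (i)-(iii) can be checked mechanically on a toy model of the water-molecule example; the dictionary representation below is purely illustrative and simplifies the chemistry:

```python
# Toy check of Armstrong-style conditions (i)-(iii) for a structural
# property, using an H2O-like example. All names are illustrative.

water_molecule = {
    "properties": {"H2O"},
    "parts": [
        {"properties": {"hydrogen"}},   # h1
        {"properties": {"hydrogen"}},   # h2
        {"properties": {"oxygen"}},     # o
    ],
}

def satisfies_structural_conditions(obj, structural, involved):
    parts = obj["parts"]
    # (i) the relevant proper parts do not exemplify the structural property
    c1 = all(structural not in p["properties"] for p in parts)
    # (ii) the involved properties are not exemplified by the object itself
    c2 = all(q not in obj["properties"] for q in involved)
    # (iii) each relevant part exemplifies one of the involved properties
    c3 = all(p["properties"] & involved for p in parts)
    return c1 and c2 and c3

print(satisfies_structural_conditions(
    water_molecule, "H2O", {"hydrogen", "oxygen"}))  # True
```

Condition (iv) for relational structural properties would require, in addition, modeling the *bonding* relation among the three parts, which the sketch above deliberately omits.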
In most approaches, including Armstrong's, the relevant
properties and the relevant relation, if any, are conceived of as
parts, or constituents, of the structural property in question, and
thus their being involved in the latter is a parthood relation. For
instance, *hydrogen*, *oxygen* and *bonding* are
constituents of *H2O*. Moreover, the composition of
structural properties is isomorphic to that of the complexes
exemplifying them. These two theses characterize what Lewis (1986a)
calls the "pictorial conception". It is worth noting that
structural properties differ from conjunctive properties such as
*human and musician*, in that the constituents of the latter
(i.e., *human* and *musician*) are instantiated by
whatever entity exemplifies the conjunctive property (i.e., a human
being who is also a musician), whereas the constituents of a
structural property (e.g., *hydrogen*, *oxygen* and
*bonding*) are not instantiated by the entities that
instantiate it (e.g., *H2O* molecules).
Structural properties have been invoked for many reasons. Armstrong
(1978b: ch. 22) appeals to them to explain the resemblance of
universals belonging to a common class, e.g., lengths and colors.
Armstrong (1988; 1989) also treats physical quantities and numbers
through structural properties. The former involve, as in the above
*mass 1 kg* example, smaller quantities such as *mass 0.1
kg* and *mass 0.2 kg* (this view is criticized by Eddon
2007). The latter are internal proportion relations between structural
properties (e.g., *being a nineteen-electrons aggregate*) and
unit-properties (e.g., *being an electron*). Moreover,
structural properties have been appealed to in treatments of laws of
nature (Armstrong 1989; Lewis 1986a), some natural kinds (Armstrong
1978b, 1997; Hawley & Bird 2011), possible worlds (Forrest 1986),
*ersatz* times (Parsons 2005), emergent properties
(O'Connor & Wong 2005), linguistic types (Davis 2014), ontic
structural realism (Psillos 2012). However, structural universals have
also been questioned on various grounds, as we shall now see.
Lewis (1986a) raises two problems for the pictorial conception. First,
it is not clear how one and the same universal (e.g.,
*hydrogen*) can recur more than once in a structural universal.
Let us dub this "multiple recurrence problem". Secondly,
structural universals violate the Principle of Uniqueness of
Composition of classical
mereology.
According to this principle, given a certain collection of parts,
there is only one whole that they compose. Consider isomers, namely,
molecules that have the same number and types of atoms but different
structures, e.g., butane and isobutane
(*C4H10*). Here, *butane* and
*isobutane* are different structural universals. Yet, they
arise from the same universals recurring the same number of times. Let
us dub this "isomer problem."
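The isomer problem can be stated as a fact about multisets: the two structural universals would have to be built from exactly the same constituents with the same multiplicities. A minimal sketch, with the chemical bookkeeping simplified for illustration:

```python
from collections import Counter

# Butane and isobutane (C4H10) draw on the same constituent universals
# with the same multiplicities (counts simplified for illustration).
butane_constituents = Counter({"carbon": 4, "hydrogen": 10, "bonding": 13})
isobutane_constituents = Counter({"carbon": 4, "hydrogen": 10, "bonding": 13})

# Identical multisets of parts...
print(butane_constituents == isobutane_constituents)  # True
# ...so the Principle of Uniqueness of Composition would license only
# ONE whole composed of them, yet there are two distinct universals.
```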
Two further problems may be pointed out. First, even molecules with
the same number and types of atoms and the same structures may vary in
virtue of their spatial orientation (Kalhat 2008). This phenomenon is
known as *chirality*. How can structural properties account for
it? Secondly, the composition of structural properties is restricted:
not every collection of properties gives rise to a structural
property. This violates another principle of classical mereology: the
Principle of Unrestricted Composition.
Lewis dismisses alternatives to the pictorial conception, such as the
"linguistic conception" (in which structural universals
are set-theoretic constructions out of simple universals) and the
"magic conception" (in which structural universals
are not complex and are primitively connected to the relevant
properties). Moreover, he also rejects the possibility of there being
*amphibians*, i.e., particularized universals lying between
particulars and full-fledged universals. Amphibians would solve the
multiple recurrence problem, as they would be identical with multiple
recurrences of single universals. For example, in *methane*,
there would be four amphibians of *hydrogen*.
To reply to Lewis' challenges, two main strategies have been
adopted on behalf of the pictorial conception: a non-mereological
composition strategy, according to which structural properties do not
obey the principles of classical mereology, and a mereology-friendly
strategy. Let us start with the former.
Armstrong (1986) admits that the composition of structural universals
is non-mereological. In his 1997, he emphasizes that states of affairs
do not have a mereological kind of composition, and views universals,
including structural ones, as *states of affairs-types* which,
as such, are not mereological in their composition. Structural
universals themselves turn out to be state of affairs-types of a
conjunctive sort (1997: 34 ff.). To illustrate, let us go back to the
above example with *H2O* exemplified by the water
molecule \(w\). It involves the conjunction of these states of
affairs:
1. atom \(h\_1\) being hydrogen,
2. atom \(h\_2\) being hydrogen,
3. atom \(o\) being oxygen,
4. \(h\_1\), \(h\_2\), and \(o\) being bonded.
This conjunction of states of affairs provides an example of a
state of affairs-type identifiable with the structural universal
*H2O*.
In pursuit of the non-mereological composition strategy, Forrest
(1986; 2006; 2016) relies on logically compound properties, in
particular (in his 2016) conjunctive, existentially quantified and
reflexive ones. Bennett (2013) argues that entities are part of
further entities by occupying specific slots within the latter.
Parthood slots are distinct from each other. Therefore, which entity
occupies which slot matters to the composition of the resultant
complex entity. Hawley (2010) defends the possibility of there being
multiple composition relations. In a more general way, McDaniel (2009)
points out that the relation of structure-making does not obey some of
the principles of classical mereology. Mormann (2010) argues that
distinct categories (to be understood as in
category theory)
come together with distinct parthood and composition relations.
As regards the mereology-friendly strategy, it has been suggested that
structural properties actually include extra components accounting for
structures, so that the Principle of Uniqueness of Composition is
safe. Considering *methane*, Pagès (2002) claims that
such a structural property is composed of the properties
*carbon* and *hydrogen*, a *bonding* relation and
a peculiar first-order, formal relation between the atoms. According
to Kalhat (2008), the extra component is a second-order arrangement
relation between the first-order properties. Ordering formal
relations--together with causally individuated
properties--are also invoked by McFarland (2018).
At the intersection of these strategies, Bigelow and Pargetter (1989,
1991) claim that structural universals are internally relational
properties that supervene on first-order properties and relations and
on second-order proportion relations. Second-order proportion
relations are relations such as *having four times as*. They
relate first-order properties and relations. For example, in
*methane*, the relation *having four times as* relates
two conjunctive properties: (i) *hydrogen and being part of this
molecule*; (ii) *carbon and being part of this
molecule*.
Campbell (1990) solves the multiple recurrence problem and the isomer
problem by appealing to tropes. In *methane*, there are
multiple hydrogen tropes. In *butane* and *isobutane*,
distinct tropes stand in distinct structural relations.
Despite Lewis' dismissal, a number of theories appeal to
amphibians (i.e., particularized universals lying between particulars
and full-fledged universals) or, more precisely, to entities similar
to amphibians. Fine (2017) suggests that structural properties should
be treated by invoking arbitrary objects (Fine 1985). Davis'
(2014) account of linguistic types as structural properties confronts
the problem of the multiple occurrence of one type, e.g.,
"dog", in another (e.g., "every dog likes another
dog") (Wetzel 2009). In general, the idea that types are
properties of tokens is a natural one, although there are also
alternative views on the matter (see entry on
types and tokens).
Be this as it may, Davis proposes that types cannot occur within
further types as tokens: they can only occur as *subtypes*.
Therefore, subtypes lie, like amphibians, between types and tokens.
Subtypes are individuated by their positions in asymmetric wholes,
whereas they are primitively and non-qualitatively distinct in
symmetric ones. The approach could be extended to properties such as
*methane*, which would be taken to have four distinct
*hydrogen* subtypes.
## 5. Properties in the Metaphysics of Science
When it comes to the metaphysical underpinnings of scientific
theories, properties play a prominent role: it appears that science
can hardly be done without appealing to them. This adds to the case
for realism about properties and to our understanding of them. We
shall illustrate this here first by dwelling on some miscellaneous
topics and then by focusing on a debate regarding the very nature of
the properties invoked in science, namely whether or not they are
*essentially* dispositional. Roughly, an object exemplifies a
*dispositional* property, such as *soluble* or
*fragile*, by having a power or disposition to act or being
acted upon in a certain way in certain conditions. For example, a
glass' disposition of *being fragile* consists in its
possibly shattering in certain conditions (e.g., if struck with a
certain force). In contrast, something exemplifies a
*categorical* property, e.g., *made of salt* or
*spherical*, by merely being in a certain way (see entry on
dispositions).
### 5.1 Miscellaneous Topics
Many predicates in scientific theories (e.g., "being a
gene" and "being a belief") are functionally
defined. Namely, their meaning is fixed by appealing to some function
or set of functions (e.g., encoding and transmitting genetic
information). To make sense of functions, properties are needed.
First, functions can be thought of as webs of causal relationships
between properties or property-instances. Secondly, predicates such as
"being a gene" may be taken to refer to
*higher-level* properties, or at least to something that has
such properties. In this case, the property of *being a gene*
would be the higher-level property of *possessing some further
properties* (e.g., biochemical ones) *that play the relevant
functions* (i.e., encoding and transmitting genetic information).
Alternatively, *being a gene* would be a property that
satisfies the former higher-level property. Thirdly, the ensuing
realization relationships between higher-level and function-playing
properties can only be defined by appealing to properties (see entry
on
functionalism).
More generally, many ontological
dependence and reduction
relationships primarily concern properties: type-identity (Place 1956;
Smart 1959; Lewis 1966) and token-identity (Davidson 1980) (see entry
on
the mind/brain identity theory),
supervenience (Horgan 1982; Kim 1993; entry on
supervenience);
realization (Putnam 1975; Wilson 1999; Shoemaker 2007),
scientific reduction
(Nagel 1961). Moreover,
a non-reductive relation such as ontological emergence (Bedau 1997;
Paolini Paoletti 2017b) is typically characterized as a relation
between emergent properties
and more fundamental
properties. Even mechanistic explanations often appeal to properties
and relations in characterizing the organization, the components, the
powers and the phenomena displayed by mechanisms (Glennan & Illari
2018; entry on
mechanisms in science).
Entities in nature are typified by
natural kinds.
These are mostly thought of as properties or property-like entities
that carve nature at its joints (Campbell, O'Rourke, & Slater
2011) at distinct layers of the universe: microphysical (e.g.,
*being a neutron*), chemical (e.g., *being gold*),
biological (e.g., *being a horse*).
Physical quantities such as mass or length are typically treated as
properties, specifiable in terms of a magnitude, a certain number, and
a unit measure, e.g., kg or meter, which can itself be specified in
terms of properties (Mundy 1987; Swoyer 1987). Following Eddon's
(2013) overview, it is possible to distinguish two main strands here:
*relational* and *monadic properties* theories (see also
Dasgupta 2013). In relational theories (Bigelow & Pargetter 1988;
1991), quantities arise from proportion relations, which may in turn
be related by higher-order proportion relations. Consider a certain
quantity, e.g., *mass 3 kg*. Here the *thrice as massive
as* proportion relation is relevant, which holds of certain pairs
of objects \(a\) and \(b\) such that \(a\) is thrice as massive as
\(b\). By virtue of such relational facts, the first members of such
pairs have a mass of 3 kg. Another relational approach is by Mundy
(1988), who claims that *mass 3 kg* is a relation between
(ordered pairs of) objects and numbers. For example, *mass 3
kg* is a relation holding between the ordered pair \(\langle a,
b\rangle\) (assuming that \(a\) is thrice as massive as \(b\), and not
the other way round) and the number 3; or--if relations have
built-in relational order--between \(a, b\) and 3. Knowles (2015)
holds a similar view, wherein physical quantities are relations that
objects bear to numbers. According to the monadic properties approach,
quantities are intrinsic properties of objects (Swoyer 1987; Armstrong
1988; 1989). As we saw, Armstrong develops such a view by taking
quantities to be structural properties. Related metaphysical inquiries
concern the dimensions of quantities (Skow 2017) and the status of
forces (Massin 2009).
Lastly, properties play a prominent role in two well-known accounts of
the
laws of nature:
the *nomological necessitation account* and *law
dispositionalism*. The former has been expounded by Tooley (1977;
1987), Dretske (1977) and Armstrong (1983). Roughly, following
Armstrong, a law of nature consists in a second-order and external
nomological necessitation relation \(\rN\), contingently holding
between first-order universals \(P\) and \(Q\): \(\rN(P, Q)\). Such a
higher-order fact necessitates certain lower-order regularities, i.e.,
all the objects that have \(P\) also have \(Q\) (by nomological
necessity, in virtue of \(\rN(P, Q))\). Law dispositionalism
interprets laws of nature by appealing to dispositional properties.
According to it, laws of nature "flow" from the essence of
such properties. This implies that laws of nature hold with
metaphysical necessity: whenever the dispositions are in place, the
relevant laws must be in place too (Cartwright 1983; Ellis 2001; Bird
2007; Chakravartty 2007; Fischer 2018; see also Schrenk 2017 and
Dumsday 2019). (For criticisms, see, e.g., van Fraassen 1989 as
regards the former approach, and McKitrick 2018 and Paolini Paoletti
2020b, as regards the latter).
### 5.2. Essentially Categorical vs. Essentially Dispositional Properties
There is a domestic dispute among supporters of properties in the
metaphysics of science, regarding the very nature of such properties
(or at least the fundamental ones). We may distinguish two extreme
views, *pure dispositionalism* and *pure
categoricalism*. According to the former, all properties are
essentially dispositional ("dispositional", for
brevity's sake, from now on), since they are nothing more than
causal powers; their *causative roles*, i.e., which effects
their instantiations can cause, exhaust their essences. According to
pure categoricalism, all properties are essentially categorical (in
brief, "categorical", in the following), because their
causative roles are not essential to them. If anything is essential to
a property, it is rather a non-dispositional and intrinsic aspect
called "quiddity" (after Armstrong 1989), which need not
be seen as an additional entity over and above the property itself
(Locke 2012, in response to Hawthorne 2001). Between such views there
lie a number of intermediate positions.
Pure dispositionalism has been widely supported in the last few
decades (Mellor 1974; 2000; Shoemaker 1984: ch. 10 and 11 [in terms of
causal roles]; Mumford 1998; 2004; Bird 2005; 2007; Chakravartty 2007;
Whittle 2008; see Tugby 2013 for an attempt to argue from this sort of
view to a Platonist conception of properties). There are three main
arguments in favor of it. First, pure dispositionalism easily accounts
for the natural necessity of the laws of nature, insofar as such a
necessity just "flows" from the essence of the
dispositional properties involved. Secondly, dispositional properties
can be easily known as they really are, because it is part of their
essence that they affect us in certain ways. Thirdly, at the
(presumably fundamental) micro-physical level, properties are only
described dispositionally, which is best explained by their being
dispositional (Ellis & Lierse 1994; N. Williams 2011).
Nevertheless, pure dispositionalism is affected by several problems.
First, some authors believe that it is not easy to provide a clear-cut
distinction between dispositional and would-be non-dispositional
properties (Cross 2005). Secondly, it seems that the essence of
certain properties does not include, or is not exhausted by, their
causative roles: qualia, inert and epiphenomenal
properties,[14]
structural and geometrical properties, spatio-temporal properties
(Prior 1982; Armstrong 1999; for some responses, see Mellor 1974 and
Bird 2007; 2009). Thirdly, there can be *symmetrical* causative
roles. Consider three distinct properties \(A\), \(B\) and \(C\) such
that \(A\) can cause \(B\), \(B\) can cause \(A\), \(A\) and \(B\) can
together cause \(C\) and nothing else characterizes \(A\) and \(B\).
\(A\) and \(B\) have the same causative role. Therefore, in pure
dispositionalism, they turn out to be identical, against the
hypothesis (Hawthorne 2001; see also Contessa 2019). Fourthly,
according to some, pure dispositionalism falls prey to (at least)
three distinct regresses (for a fourth regress, see Psillos 2006).
Such regresses arise from the fact that the essential causative role
of a dispositional property \(P\) "points towards" further
properties \(S\), \(T\), etc. *qua* possible effects. The
essential causative roles of the latter "point towards"
still other properties, and so on. The first regress concerns the
identity of \(P\), which is never fixed, as it depends on the identity
of further properties, which depend for their identity on still other
properties, and so on (Lowe 2006; Barker 2009; 2013). The second
regress concerns the knowability of \(P\) (Swinburne 1980). \(P\) is
only knowable through its possible effects (i.e., the instantiation of
\(S\), \(T\), etc.), included in its causative role. Yet, such
possible effects are only knowable through their possible effects, and
so on. The third regress concerns the actuality of \(P\) (Armstrong
1997). \(P\)'s actuality is never reached, since \(P\) is
nothing but the power to give rise to \(S\), \(T\), etc., which are
nothing but the powers to give rise to further properties, and so on.
For responses to these regresses, see Molnar 2003; Marmodoro 2009;
Bauer 2012; McKitrick 2013; see also Ingthorsson 2013.
Pure categoricalism seems to imply that causative roles are only
contingently associated with a property. Therefore, on pure
categoricalism, a property can possibly have distinct causative roles,
which allows it to explain--among other things--the apparent
contingency of causative roles and the possibility of recombining a
property with distinct causative roles. Its supporters include Lewis
(1986b, 2009), Armstrong (1999), Schaffer (2005), and more recently
Livanios (2017), who provides further arguments based on the
metaphysics of science. Kelly (2009) and Smith (2016) may be added to
the list, although they take roles to be non-essential and necessary
(for further options, see also Kimpton-Nye 2018, Yates 2018a, Coates
forthcoming and Tugby forthcoming).
However, pure categoricalism falls prey to two sorts of difficulties.
First, the contingency of causative roles has some unpalatable
consequences: unbeknownst to us, two distinct properties can swap
their roles; they can play the same role at the same time; the same
role can be played by one property at one time and by another property
at a later time; an "alien" property can replace a
familiar one by playing its role (Black 2000). Secondly, and more
generally, we are never able to know which properties play which
roles, nor are we able to know the intrinsic nature of such
properties. This consequence should be accepted with "Ramseyan
humility" (Lewis 2009; see also Langton 1998 for a related,
Kantian, sort of humility) or it should be countered in the same way
as we decide to counter any broader version of skepticism (Schaffer
2005). On this issue, see also Whittle (2006); Locke (2009); Kelly
(2013); Yates (2018b).
Let us now turn to some intermediate positions.
According to *dualism* (Ellis 2001; Molnar 2003), there are
both dispositional and categorical properties. Dualism is meant to
combine the virtues of pure dispositionalism and pure categoricalism.
It then faces the charge of adopting a less parsimonious ontology,
since it accepts two classes of properties rather than one, i.e.,
dispositional and categorical ones.
According to the *identity theory* (Heil 2003; 2012; Jaworski
2016; Martin 2008; G. Strawson 2008; see also N. Williams 2019), every
property is both dispositional and categorical (or qualitative). The
main problem here is how to characterize the distinction and relation
of these two "sides". Martin and Heil suggest that they
are two distinct ways of partially considering one and the same
property, whereas Mumford (1998) explores the possibility of seeing
them as two distinct ways of conceptualizing the property in question.
Heil claims that the qualitative and the dispositional sides need to
be identified with one another and with the whole property. Jacobs
(2011) holds that the qualitative side consists in the possession of
some qualitative nature by the property, whereas the dispositional
side consists in that property being (part of) a sufficient truthmaker
for certain counterfactuals. Dispositional and qualitative sides may
also be seen as essential, higher-order properties of properties, as
supervenient and ontologically innocent aspects of properties
(Giannotti 2019), or as constituents of the essence of properties
(Taylor 2018). In general, the identity theory is between Scylla and
Charybdis. If it reifies the dispositional and qualitative sides, it
runs the risk of implying some sort of dualism. If it insists on the
identity between them, it runs the opposite risk of turning into a
pure dispositionalist theory (Taylor 2018).
## 6. Formal Property Theories and Their Applications
Formal property theories are logical systems that aim at formulating
"general noncontingent laws that deal with properties"
(Bealer & Mönnich 1989: 133). In the next subsection we shall
outline how they work. In the two subsequent subsections we shall
briefly consider their deployment in natural language semantics and
the foundations of mathematics, which can be taken to provide further
reasons for the acknowledgment of properties in one's ontology,
or at least of certain kinds of properties.
### 6.1 Logical Systems for Properties
These systems allow for terms corresponding to properties, in
particular variables that are meant to range over properties and that
can be quantified over. This can be achieved in two ways. Either
(option 1; Cocchiarella 1986a) the terms standing for properties are
predicates or (option 2; Bealer 1982) such terms are subject terms
that can be linked to other subject terms by a special predicate that
is meant to express a predication relation (let us use
"pred") pretty much as in standard set theory a special
predicate, "\(\in\)", is used to express the membership
relation. To illustrate, given the former option, an assertion such as
"there is a property that both John and Mary have" can be
rendered as
\[\notag \exists P(P(j) \amp P(m)).\]
Given the second option, it can be rendered as
\[\notag \exists x(\textrm{pred}(x, j) \amp \textrm{pred}(x, m)).\]
(The two options can be combined as in Menzel 1986; see Menzel 1993
for further discussion).
Whatever option one follows, in spelling out such theories one
typically postulates a rich realm of properties. Traditionally, this
is done by a so-called comprehension principle which, intuitively,
asserts that, for any well-formed formula ("wff") \(A\),
with \(n\) free variables, \(x\_1 , \ldots ,x\_n\), there is a
corresponding \(n\)-adic property. Following option 1, it goes as
follows:
\[\tag{CP} \exists R^n \forall x\_1 \ldots \forall x\_n(R^n(x\_1 ,
\ldots ,x\_n) \leftrightarrow A). \]
Alternatively, one can use a variable-binding operator, \(\lambda\),
that, given an open wff, generates a term (called a "lambda
abstract") that is meant to stand for a property. This way to
proceed is more flexible and is followed in the most recent versions
of property theory. We shall thus stick to it in the following. To
illustrate, we can apply "\(\lambda\)" to the open
formula, "\((R(x) \amp S(x))\)" to form the one-place
complex predicate "[\(\lambda x (R(x) \amp S(x))\)]"; if
"\(R\)" denotes *being red* and "\(S\)"
denotes *being square*, then this complex predicate denotes the
compound, conjunctive property *being red and square*.
Similarly, we can apply the operator to the open formula
"\(\exists y(L(x, y))\)" to form the one-place predicate
"[\(\lambda x \exists y(L(x, y))\)]"; if
"\(L\)" stands for *loves*, this complex predicate
denotes the compound property *loving someone* (whereas
"[\(\lambda y \exists x(L(x, y))\)]" would denote
*being loved by someone*). To ensure that lambda abstracts
designate the intended property, one should assume a "principle
of lambda conversion". Given option 1, it can be stated
thus:
\[\tag{\(\lambda\)-conv} [\lambda x\_1\ldots x\_n A](t\_1 , \ldots
,t\_n) \leftrightarrow A(x\_1 /t\_1 , \ldots ,x\_n /t\_n).\]
\(A(x\_1 /t\_1 , \ldots ,x\_n /t\_n)\) is the wff resulting from
simultaneously replacing each \(x\_i\) in \(A\) with \(t\_i\) (for
\(1 \le i \le n\)), provided \(t\_i\) is free for \(x\_i\) in \(A\). For
example, given this principle, [\(\lambda x (R(x) \amp S(x))](j)\) is
the case iff \((R(j) \amp S(j))\) is also the case.
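As an informal illustration, lambda abstracts and the conversion principle can be mimicked in any language with first-class functions; the predicates and their extensions below are toy assumptions, not part of the formal theory:

```python
# Toy encoding (all extensions are assumptions for illustration):
# predicates are modeled as functions from things to truth values.
R = lambda x: x in {"apple", "stop sign"}    # "is red"
S = lambda x: x in {"stop sign", "window"}   # "is square"

# The lambda abstract [lambda x (R(x) & S(x))] as a one-place complex
# predicate:
red_and_square = lambda x: R(x) and S(x)

# Lambda conversion: applying the abstract to a term is equivalent to
# substituting the term into the open formula.
j = "stop sign"
assert red_and_square(j) == (R(j) and S(j))

# Likewise for [lambda x exists y L(x, y)] ("loving someone"):
domain = {"Abelard", "Heloise"}
L = lambda x, y: (x, y) in {("Abelard", "Heloise"), ("Heloise", "Abelard")}
loves_someone = lambda x: any(L(x, y) for y in domain)
assert loves_someone("Abelard") == any(L("Abelard", y) for y in domain)
```

The analogy is only partial, since Python functions are extensional, whereas properties are intensional entities; still, it makes vivid how complex predicates are built from open formulas.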
Standard second-order logic allows for predicate variables bound by
quantifiers. Hence, to the extent that these variables are taken to
range over properties, this system could be seen as a formal theory of
properties. Its expressive power is however limited, since it does not
allow for subject terms that stand for properties. Thus, for example,
one cannot even say of a property \(F\) that \(F = F\). This is a
serious limitation if one wants a formal tool for a realm of
properties whose laws one is trying to explore. Standard higher-order
logics beyond the second order obviate this limitation by allowing for
predicates in subject position, provided that the predicates that are
predicated of them belong to a higher type. This presupposes a grammar
in which predicates are assigned types of increasing levels, which can
be taken to mean that the properties themselves, for which the
predicates stand, are arranged into a hierarchy of types. Thus,
such logics appropriate one version or another of the type theory
concocted by Russell to tame his own paradox and related conundrums.
If a predicate can be predicated of another predicate only if the
former is of a type higher than the latter, then self-predication is
banished and Russell's paradox cannot even be formulated.
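How typing blocks self-predication can be sketched informally: if every predicate carries a type level and predication is defined only when the predicate's level exceeds its argument's, a predicate can never be applied to itself. The class name and extensions below are illustrative assumptions:

```python
# Illustrative sketch: each predicate carries a type level; individuals
# count as level 0, and predication is defined only "upwards".
class TypedPredicate:
    def __init__(self, level, extension):
        self.level = level
        self.extension = extension  # a function on lower-level items

    def __call__(self, arg):
        arg_level = arg.level if isinstance(arg, TypedPredicate) else 0
        if self.level <= arg_level:
            raise TypeError("ill-typed: predicate must be of higher type")
        return self.extension(arg)

red = TypedPredicate(1, lambda x: x == "apple")   # first-level predicate
is_color = TypedPredicate(2, lambda p: p is red)  # second-level predicate

print(red("apple"), is_color(red))  # well-typed predications
try:
    red(red)  # self-predication cannot even be formulated
except TypeError as err:
    print("blocked:", err)
```

In a genuinely typed language the third application would already be rejected by the compiler; here a runtime check stands in for the grammar's type constraints.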
Following this line, we can construct a type-theoretical formal
property theory. The simple theory of types, as presented, e.g., in
Copi (1971), can be seen as a prototypical version of such a property
theory (if we neglect the principle of extensionality assumed by
Copi). The type-theoretical approach continues to have supporters. For
example, it is followed in the property theories embedded in
Zalta's (1983) theory of abstract objects and more recently in
the metaphysical systems proposed by Williamson (2013) and Hale
(2013).
However, for reasons sketched in
§2.4,
type theory is hardly satisfactory. Accordingly, many type-free
versions of property theory have been developed over the years and no
consensus on what the right strategy is appears to be in sight. Of
course, without type-theoretical constraints, given \((\lambda\)-conv)
and classical logic (CL), paradoxes such as Russell's
immediately follow (to see this, consider this instance of
\((\lambda\textrm{-conv}): [\lambda x {\sim}x(x)]([\lambda x
{\sim}x(x)]) \leftrightarrow{\sim}[\lambda x {\sim}x(x)]([\lambda x
{\sim}x(x)])\)). In formal systems where abstract singular terms or
predicates may (but need not) denote properties (Swoyer 1998), formal
counterparts of (complex) predicates like "being a property that
does not exemplify itself" (formally, "[\(\lambda x
{\sim}x(x)\)]") could exist in the object language without
denoting properties; from this perspective, Russell's paradox
merely shows that such predicates do not stand for properties
(similarly, according to Schnieder 2017, it shows that it is not
possible that a certain property exists). But we would like to have
general criteria to decide when a predicate stands for a property and
when it does not. Moreover, one may wonder what gives these predicates
any significance at all if they do not stand for properties. There are
then motivations for building type-free property theories in which all
predicates stand for properties. We can distinguish two main strands
of them: those that weaken CL and those that circumscribe
\((\lambda\)-conv) (some of the proposals to be mentioned below are
formulated in relation to set theory, but can be easily translated
into proposals for property theory).
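Before turning to these strands, the untyped self-application that drives Russell's paradox can be mimicked with first-class functions: evaluating the "does not exemplify itself" predicate on itself never settles on a truth value. This is a toy analogue, not a formal semantics:

```python
import sys

# The Russellian predicate "does not exemplify itself", written as an
# untyped first-class function.
russell = lambda x: not x(x)

# Asking whether russell exemplifies itself just re-asks the question:
# evaluation never bottoms out in a truth value.
sys.setrecursionlimit(200)  # fail fast rather than exhaust the stack
try:
    russell(russell)
    outcome = "reached a truth value"
except RecursionError:
    outcome = "no truth value: the self-application regresses forever"
print(outcome)
```

Where the formal paradox yields a contradiction, the operational analogue yields non-termination; both show that unrestricted self-predication cannot be given a classical truth value.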
An early example of the former approach was offered in a 1938 paper by
the Russian logician D. A. Bochvar (Bochvar 1938 [1981]), where the
principle of excluded middle is sacrificed as a consequence of the
adoption of what is now known as Kleene's weak three-valued
scheme. An interesting recent attempt based on giving up excluded
middle is Field 2004. A rather radical alternative proposal is to
embrace a paraconsistent logic and give up the principle of
non-contradiction (Priest 1987). A different way of giving up CL is by
questioning its structural rules and turning to a substructural logic, as
in Mares and Paoli (2014). The problem with all these approaches is
whether their underlying logic is strong enough for all the intended
applications of property theory, in particular to natural language
semantics and the foundations of mathematics.
As for the second strand (based on circumscribing \((\lambda\)-conv)),
it has been proposed to read the axioms of a standard set theory such
as ZFC, minus extensionality, as if they were about properties rather
than sets (Schock 1969; Bealer 1982; Jubien 1989). The problem with
this is that these axioms, understood as talking about sets, can be
motivated by the iterative conception of sets, but they seem rather
*ad hoc* when understood as talking about properties
(Cocchiarella 1985). An alternative can be found in Cocchiarella
1986a, where \((\lambda\)-conv) is circumscribed by adapting to
properties the notion of stratification used by Quine for sets. This
approach is however subject to a version of Russell's paradox
derivable from contingent but intuitively possible facts (Orilia 1996)
and to a paradox of hyperintensionality (Bozon 2004) (see Landini 2009
and Cocchiarella 2009 for a discussion of both). Orilia (2000; Orilia
& Landini 2019) has proposed another strategy for circumscribing
\((\lambda\)-conv), based on applying to exemplification Gupta's
and Belnap's theory of circular definitions.
Independently of the paradoxes (Bealer & Mönnich 1989: 198
ff.), there is the issue of providing identity conditions for
properties, specifying when it is the case that two properties are
identical. If one thinks of properties as meanings of natural language
predicates and tries to account for intensional contexts, one will be
inclined to assume rather fine-grained identity conditions, possibly
even allowing that [\(\lambda x(R(x) \amp S(x))\)] and [\(\lambda
x(S(x) \amp R(x))\)] are distinct (see Fox & Lappin 2015 for an
approach based on operational distinctions among computable
functions). On the other hand, if one thinks of properties as causally
operative entities in the physical world, one will want to provide
rather coarse-grained identity conditions. For instance, one might
require that [\(\lambda x A\)] and [\(\lambda x B\)] are the same
property iff it is physically necessary that \(\forall x(A
\leftrightarrow B)\). Bealer (1982) tries to combine the two
approaches (see also Bealer & Mönnich
1989)[15].
### 6.2 Semantics and Logical Form
The formal study of natural language semantics started with Montague
and gave rise to a flourishing field of inquiry (see entry on
Montague semantics).
The basic idea in this field is to associate to natural language
sentences wffs of a formal language, in order to represent sentence
meanings in a logically perspicuous manner. The association reflects
the compositionality of meanings: different syntactic subcomponents of
sentences correspond systematically to syntactic subcomponents of the
wffs; subcomponents of wffs thus represent the meanings of the
subcomponents of the sentences. The formal language eschews
ambiguities and has its own formal semantics, which grants that
formulas have logical properties and relations, such as logical truth
and entailments, so that in particular certain sequences of formulas
count as logically valid arguments. The ambiguities we normally find
in natural language sentences and the entailment relations that link
them are captured by associating ambiguous sentences to different
unambiguous wffs, in such a way that when a natural language argument
is felt to be valid there is a corresponding sequence of wffs that
count as a logically valid argument. In order to achieve all this,
Montague appealed to a higher-order logic. To see why this was
necessary, one may focus on this valid argument:
(1)
every Greek is mortal;
(2)
the president of Greece is Greek;
therefore,
(3)
the president of Greece is mortal.
To grant compositionality in a way that respects the syntactic
similarity of the three sentences (they all have the same
subject-predicate form), and the validity of the argument, Montague
associates (1)-(3) to formulas such as these:
\[ \tag{1a} [\lambda F \forall x(\mathrm{G}(x) \rightarrow
F(x))](\mathrm{M}); \] \[ \tag{2a} [\lambda F \exists x(\forall
y(\mathrm{P}(y) \leftrightarrow x = y) \amp F(x))](\mathrm{G}); \] \[
\tag{3a} [\lambda F \exists x(\forall y(\mathrm{P}(y) \leftrightarrow
x = y) \amp F(x))](\mathrm{M}). \]
The three lambda abstracts in (1a)-(3a) represent, respectively,
the meanings of the three noun phrases in (1)-(3). These
lambda-abstracts occur in predicate position, as predicates of
predicates, so that (1a)-(3a) can be read, respectively, as:
*every Greek* is instantiated by *being mortal*; *the
president of Greece* is instantiated by *being* Greek;
*the president of Greece* is instantiated by *being
mortal*. Given lambda-conversion plus quantifier and propositional
logic, the argument is valid, as desired. It should be noted that
lambda abstracts such as these can be taken to stand for peculiar
properties of properties, classifiable as *denoting concepts*
(after Russell 1903; see Cocchiarella 1989). One may then say that
this approach to semantics makes a case for the postulation of
denoting concepts, in addition to the more obvious and general fact
that it grants properties as meanings of natural language predicates
(represented by symbols of the formal language).
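The reading can be illustrated by evaluating the premises and conclusion in a toy model, with the lambda abstracts encoded as properties of properties (functions from predicates to truth values). The domain and extensions are made-up assumptions:

```python
# A toy model for checking the argument (domain and extensions are
# schematic assumptions, not real-world facts).
domain = {"a", "b", "c"}
G = lambda x: x in {"a", "b"}   # "is Greek"
M = lambda x: True              # "is mortal": true of everything here
P = lambda x: x == "a"          # "is president of Greece"

# (1a): the denoting concept "every Greek", a property of properties.
every_greek = lambda F: all(F(x) for x in domain if G(x))

# (2a)/(3a): the Russellian "the president of Greece" -- there is an x
# such that the things that are P are exactly x, and x has F.
the_president = lambda F: any(
    all(P(y) == (x == y) for y in domain) and F(x) for x in domain
)

# The premises and the conclusion all come out true in the model:
assert every_greek(M)       # (1) every Greek is mortal
assert the_president(G)     # (2) the president of Greece is Greek
assert the_president(M)     # (3) the president of Greece is mortal
```

Evaluation in one model only illustrates the reading; the validity claim is that lambda conversion plus quantifier and propositional logic take one from the first two formulas to the third in every model.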
This in itself says nothing about the nature of such properties. As we
saw in
§3.1,
Montague took them to be intensions as set-theoretically
characterized in terms of possible worlds. Moreover, he took them to
be typed, since, to avoid logical paradoxes, he relied on type theory.
After Montague, these two assumptions have been typically taken for
granted in natural language semantics, though with attempts to recover
hyperintensionality somehow (Cresswell 1985) in order to capture the
semantics of propositional attitudes verbs such as
"believe", affected by the phenomena regarding mental
content hinted at in
§3.1.
However, the development of type-free property theories has suggested
the radically different road of relying on them to provide logical
forms for natural language semantics (Chierchia 1985; Chierchia &
Turner 1988; Orilia 2000b; Orilia & Landini 2019). This allows one
to capture straightforwardly natural language inferences that appear
to presuppose type-freedom, as they feature quantifiers that bind
simultaneously subject and predicate positions (recall the example of
§1.2).
Moreover, by endowing the selected type-free property theory with
fine-grained identity conditions, one also accounts for propositional
attitude verbs (Bealer 1989). Thus, we may say that this line makes a
case for properties understood as untyped and highly fine-grained.
### 6.3 Foundations of Mathematics
Since the systematization in the first half of the last century, which
gave rise to paradox-free axiomatizations of set
theory such as ZFC, sets have been
typically taken for granted in the foundations of mathematics, and it
is well known that they can do all the work that numbers can do. This
has led to the proposal of identifying numbers with sets.
Russell's type theory was an alternative that rather relied on
properties (viewed as propositional functions), in backing up a
logicist reduction of mathematics to logic. In essence, the idea was
that properties can do all the work that sets are supposed to do, thus
making the latter dispensable. Hence, Russell spoke of his approach as
a "no-classes" theory of classes (see Landini 2011: 115,
and §2.4 of the entry on
Russell's logical atomism;
cf. Bealer 1982: 111-119, and Jubien 1989 for followers of this
line). Following this line, numbers are then seen as properties rather
than sets.
The Russellian approach never achieved among mathematicians a
success comparable to that of set theory. Nevertheless, from an
ontological point of view, it appears to be more economical in its
relying on properties, to the extent that the latter are needed any
way for all sorts of explanatory jobs, reviewed above, which sets,
*qua* extensional entities, can hardly perform. As we have
seen, type theory is problematic. However, type-free property theories
can come to the rescue by replacing typed properties with untyped ones
in the foundations of mathematics. And in fact the advocates of such
theories have often proposed to recover the logicist program, in
particular by identifying natural numbers with untyped properties of
properties (Bealer 1982; Cocchiarella 1986b; Orilia 2000b). (See also
entry on
logicism and neologicism).
## 1. Introduction
The inherent problems with Plato's original theory were
recognized already by Plato himself. In his *Parmenides* Plato
famously raised several difficulties, for which he apparently did not
provide satisfactory answers. Aristotle (384-322 B.C.E.), with all
due reverence to his teacher, consistently rejected Plato's
theory, and heavily criticized it throughout his own work. (Hence the
famous saying, *amicus Plato sed magis amica
veritas*).[1]
Nevertheless, despite this explicit doctrinal conflict, Neo-Platonic
philosophers, pagans (such as Plotinus ca. 204-270, and
Porphyry, ca. 234-305) and Christians (such as Augustine,
354-430, and Boethius, ca. 480-524) alike, observed a
basic concordance between Plato's and Aristotle's
approach, crediting Aristotle with an explanation of how the human
mind acquires its universal concepts of particular things from
experience, and Plato with providing an explanation of how the
universal features of particular things are established by being
modeled after their universal
archetypes.[2]
In any case, it was this general attitude toward the problem in late
antiquity that set the stage for the ever more sophisticated medieval
discussions.[3]
In these discussions, the concepts of the human mind, therefore, were
regarded as posterior to the particular things represented by these
concepts, and hence they were referred to as *universalia post
rem* ('universals after the thing'). The universal
features of singular things, inherent in these things themselves, were
referred to as *universalia in re* ('universals in the
thing'), answering the universal exemplars in the divine mind,
the *universalia ante rem* ('universals before the
thing').[4]
All these, universal concepts, universal features of singular things,
and their exemplars, are expressed and signified by means of some
obviously universal signs, the universal (or common) terms of human
languages. For example, the term 'man', in English is a
universal term, because it is truly predicable of all men in one and
the same sense, as opposed to the singular term
'Socrates', which in the same sense, i.e., when not used
equivocally, is only predicable of one man (hence the need to add an
ordinal number to the names of kings and popes of the same name).
Depending on which of these items (universal features of singular
things, their universal concepts, or their universal names) they
regarded as the primary, really existing universals, it is customary
to classify medieval authors as being *realists*,
*conceptualists*, or *nominalists*, respectively. The
*realists* are supposed to be those who assert the existence of
real universals *in* and/or *before* particular things,
the *conceptualists* those who allow universals only, or
primarily, as concepts of the mind, whereas *nominalists* would
be those who would acknowledge only, or primarily, universal words.
But this rather crude classification does not adequately reflect the
genuine, much more subtle differences of opinion between medieval
thinkers. (No wonder one often finds in the secondary literature
distinctions between "moderate" and "extreme"
versions of these crudely defined positions.) In the first place,
nearly *all* medieval thinkers agreed on the existence of
universals *before* things in the form of divine ideas existing
in the divine
mind,[5]
but all of them denied their existence in the form of
mind-independent, real, eternal entities originally posited by Plato.
Furthermore, medieval thinkers also agreed that particular things have
certain features which the human mind is able to comprehend in a
universal fashion, and signify by means of universal terms. As we
shall see, their disagreements rather concerned the types of the
relationships that hold between the particular things, their
individual, yet universally comprehensible features, the universal
concepts of the mind, and the universal terms of our languages, as
well as the ontological status of, and distinctions between, the
individualized features of the things and the universal concepts of
the mind. Nevertheless, the distinction between "realism"
and "nominalism", especially, when it is used to refer to
the distinction between the radically different ways of doing
philosophy and theology in late-medieval times, is quite justifiable,
provided we clarify what *really* separated these ways, as I
hope to do in the later sections of this article.
In this brief summary account, I will survey the problem both from a
systematic and from a historical point of view. In the next section I
will first motivate the problem by showing how naturally the questions
concerning universals emerge if we consider how we come to know a
universal claim, i.e., one that concerns a potentially infinite number
of particulars of a given kind, in a simple geometrical demonstration.
I will also briefly indicate why a naive Platonic answer to these
questions in terms of the theory of perfect Forms, however plausible
it may seem at first, is inadequate. In the third section, I will
briefly discuss how the specific medieval questions concerning
universals emerged, especially in the context of answering
Porphyry's famous questions in his introduction to
Aristotle's *Categories*, which will naturally lead us to
a discussion of Boethius' Aristotelian answers to these
questions in his second commentary on Porphyry in the fourth section.
However, Boethius' Aristotelian answers anticipated only one
side of the medieval discussions: the mundane, philosophical theory of
universals, in terms of Aristotelian abstractionism. But the other
important, Neo-Platonic, theological side of the issue provided by
Boethius, and, most importantly, by St. Augustine, was for medieval
thinkers the theory of ontologically primary universals as the
creative archetypes of the divine mind, the Divine Ideas. Therefore,
the fifth section is going to deal with the main ontological and
epistemological problems generated by this theory, namely, the
apparent conflict between divine simplicity and the multiplicity of
divine ideas, on the one hand, and the tension between the Augustinian
theory of divine illumination and Aristotelian abstractionism, on the
other. Some details of the early medieval Boethian-Aristotelian
approach to the problem and its combination with the Neo-Platonic
Augustinian tradition *before* the influx of the newly
recovered logical, metaphysical, and physical writings of Aristotle
and their Arabic commentaries in the second half of the
12th century will be taken up in the sixth section, in
connection with Abelard's (1079-1142) discussion of
Porphyry's questions. The seventh section will discuss some
details of the characteristic metaphysical approach to the problem in
the 13th century, especially as it was shaped by the
influence of Avicenna's (980-1037) doctrine of common
nature. The eighth section outlines the most general features of the
logical conceptual framework that served as the common background for
the metaphysical disagreements among the authors of this period. I
will argue that it is precisely this common logical-semantical
framework that allows the grouping together of authors who endorse
sometimes radically different metaphysics and epistemologies (not only
in this period, but also much later, well into the early modern
period) as belonging to what in later medieval philosophy came to be
known as the "realist" *via antiqua*, the
"old way" of doing philosophy and theology. By contrast,
it was precisely the radically different logical-semantical approach
initiated by William Ockham (ca. 1280-1350), and articulated and
systematized most powerfully by Jean Buridan (ca. 1300-1358),
that distinguished the "nominalist" *via moderna*,
the "modern way" of doing philosophy and theology from the
second half of the 14th century. The general, distinctive
characteristics of this "modern way" will be discussed
in the ninth section. Finally, the concluding tenth section will
briefly indicate how the separation of the two *viae*, in
addition to a number of extrinsic social factors, contributed to the
disintegration of scholastic discourse, and thereby to the
disappearance of the characteristically medieval problem of
universals, as well as to the re-emergence of recognizably the same
problem in different guises in early modern philosophy.
## 2. The Emergence of the Problem
It is easy to see how the problem of universals emerges, if we
consider a geometrical demonstration, for example, the demonstration
of Thales' theorem. According to the theorem, any triangle
inscribed in a semicircle is a right triangle, as is shown in the
following diagram:
![Points A, B, and D are on a circle with center C. Line AB is a horizontal diameter through C. Lines AD, CD, and BD are also drawn forming 3 triangles: ABD, ACD, and DCB](image1.gif)
Figure 1. Thales' theorem
Looking at this diagram, we can see that all we need to prove is that
the angle at vertex D of triangle ABD is a right angle. The proof is
easy once we realize that since lines AC, DC, and BC are the radii of
a circle, the triangles ACD and DCB are isosceles triangles, whence
their base angles are equal. For then, if we denote the angles of ABD
by the names of their vertices, this fact entails that D = A + B; and
so, since A + B + D = 180°, it follows that 2A + 2B = 180°; therefore,
A + B = 90°, that is, D = 90°, **q.e.d.**
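The algebraic steps of this demonstration can also be checked numerically. The following sketch (the function name and coordinate setup are illustrative, not part of the article) places A and B at the ends of a diameter and verifies that the angle at an arbitrary point D on the semicircle is a right angle:

```python
import math
import random

def inscribed_angle_deg(theta, r=1.0):
    """Return the angle (in degrees) at vertex D of triangle ABD, where
    A = (-r, 0) and B = (r, 0) are the endpoints of a diameter and D lies
    on the circle at parameter angle theta."""
    ax, ay = -r, 0.0
    bx, by = r, 0.0
    dx, dy = r * math.cos(theta), r * math.sin(theta)
    ux, uy = ax - dx, ay - dy          # vector from D to A
    vx, vy = bx - dx, by - dy          # vector from D to B
    cos_d = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(cos_d))

# Wherever D falls on the open semicircle, the angle at D is a right angle.
for _ in range(1000):
    theta = random.uniform(0.01, math.pi - 0.01)
    assert abs(inscribed_angle_deg(theta) - 90.0) < 1e-6
```

Of course, as the discussion below stresses, such a check only confirms the theorem for particular approximate instances; the demonstration itself concerns the ideal figure.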
Of course, from our point of view, the important thing about this
demonstration is not so much the *truth* of its conclusion as
*the way* it proves this conclusion. For the conclusion is a
universal theorem, which has to concern all possible triangles
inscribed in any possible semicircle whatsoever, not just the one
inscribed in the semicircle in the figure above. Yet, apparently, in
the demonstration above we were talking only about that triangle. So,
how can we claim that whatever we managed to prove concerning that
particular triangle will hold for all possible triangles?
If we take a closer look at the diagram, we can easily see the appeal
of the Platonic answer to this question. For upon a closer look, it is
clear that, despite appearances to the contrary, this demonstration
*cannot* be about the triangle in this diagram. Indeed, in the
demonstration we assumed that the lines AC, DC, and BC were all
perfectly equal, straight lines. However, if we zoom in on the figure,
we can clearly see that these lines are far from being equal; in fact,
they are not even straight lines:
![Figure 1 magnified. The straight and circular lines of Figure 1 are no longer smooth; under increased magnification, they now appear as 'jagged' lines](image2.gif)
Figure 2. The result of zooming in on
Figure 1.
The demonstration was certainly not about the collection of jagged
black surfaces that we can see here. Rather, the demonstration
concerned something we did not see with our bodily eyes, but what we
had in mind all along, understanding it to be a triangle, with
perfectly straight edges, touching a perfect circle in three
unextended points, which are all perfectly equidistant from the center
of the circle. The figure we could see was only a convenient
"reminder" of what we are supposed to have in mind when we
want to prove that a certain property, namely, that it is a right
triangle, has to belong to the object in our mind in virtue of what it
is, namely, a triangle inscribed in a semicircle. Obviously, the
conclusion applies perfectly only to the perfect triangle we had in
mind, whereas it holds for the visible figure only insofar as, and to
the extent that, this figure resembles the object we had in mind. But
this figure fails to have this property precisely insofar as, and to
the extent that, it falls short of the object in our mind.
However, on the basis of this point it should also be clear that the
conclusion *does* apply to this figure, and every other visible
triangle inscribed in a semicircle as well, insofar as, and to the
extent that, it manages to imitate the properties of the perfect
object in our mind. Therefore, the Platonic answer to the question of
what this demonstration was about, namely, that it was about a
perfect, ideal triangle, which is invisible to the eyes, but is
graspable by our understanding, at once provides us with an
explanation of the possibility of universal, necessary knowledge. By
knowing the properties of the Form or Idea, we know all its
particulars, i.e., all the things that imitate it, insofar as they
imitate or participate in it. So, the Form itself is a universal
entity, a universal model of all its particulars; and since it is the
knowledge of this universal entity that can enable us to know at once
all its particulars, it is absolutely vital for us to know
*what* it is, *what* it is *like*, and exactly
*how* it is *related to* its particulars. However,
obviously, all these questions presuppose that *it* *is*
at all, namely, that such a universal entity *exists*.
But the existence of such an entity seems to be rather precarious.
Consider, for instance, the perfect triangle we were supposed to have
in mind during the demonstration of Thales' theorem. If it is a
perfect triangle, it obviously has to have three sides, since a
perfect triangle has to be a triangle, and nothing can be a triangle
unless it has three sides. But of those three sides either at least
two are equal or none, that is to say, the triangle in question has to
be either isosceles or scalene (taking 'isosceles'
broadly, including even equilateral triangles, for the sake of
simplicity). However, since it is supposed to be the universal model
of *all* triangles, and not only of isosceles triangles, this
perfect triangle cannot be an isosceles, and for the same reason it
cannot be a scalene triangle either. Therefore, such a universal
triangle would have to have inconsistent properties, namely,
*both* that it is either isosceles or scalene *and* that
it is neither isosceles nor scalene. However, obviously nothing can
have these properties at the same time, so nothing can be a universal
triangle any more than a round square. So, apparently, no universal
triangle can exist. But then, what was our demonstration about? Just a
little while ago, we concluded that it could not be directly about any
particular triangle (for it was not about the triangle in the figure,
and it was even less about any other particular triangle not in the
figure), and now we had to conclude that it could not be about a
universal triangle either. But are there any further alternatives? It
seems obvious that through this demonstration we do gain universal
knowledge concerning all particulars. Yet it is also clear that we do
not, indeed, we cannot gain this knowledge by examining all
particulars, both because they are potentially infinite and because
none of them perfectly satisfies the conditions stated in the
demonstration. So, there must have been something wrong in our
characterization of the universal, which compelled us to conclude
that, in accordance with that characterization, universals could not
exist. Therefore, we are left with a whole bundle of questions
concerning the nature and characteristics of universals, questions
that cannot be left unanswered if we want to know how universal,
necessary knowledge is possible, if at all.
## 3. The Origin of the Specifically Medieval Problem of Universals
What we may justifiably call the first formulation of "the
*medieval* problem of universals" (distinguishing it from
the both logically and historically related ancient problems of
Plato's Theory of Forms) was precisely such a bundle of
questions famously raised by Porphyry in his *Isagoge*, that
is, his *Introduction to Aristotle's Categories*. As he
wrote:
>
> (1) Since, Chrysaorius, to teach about Aristotle's
> *Categories* it is necessary to know what genus and difference
> are, as well as species, property, and accident, and since reflection
> on these things is useful for giving definitions, and in general for
> matters pertaining to division and demonstration, therefore I shall
> give you a brief account and shall try in a few words, as in the
> manner of an introduction, to go over what our elders said about these
> things. I shall abstain from deeper enquiries and aim, as appropriate,
> at the simpler ones.
>
>
> (2) For example, I shall beg off saying anything about (a) whether
> genera and species are real or are situated in bare thoughts alone,
> (b) whether as real they are bodies or incorporeals, and (c) whether
> they are separated or in sensibles and have their reality in
> connection with them. Such business is profound, and requires another,
> greater investigation. Instead I shall now try to show how the
> ancients, the Peripatetics among them most of all, interpreted genus
> and species and the other matters before us in a more logical fashion.
> [Porphyry, *Isagoge*, in Spade 1994 (henceforth, *Five
> Texts*), p. 1.]
>
>
>
Even though in this way, by relegating them to a "greater
investigation", Porphyry left these questions unanswered, they
certainly proved to be irresistible for his medieval Latin
commentators, beginning with Boethius, who produced not just one, but
two commentaries on Porphyry's text; the first based on Marius
Victorinus's (*fl*. 4th c.) translation, and
the second on his
own.[6]
In the course of his argument, Boethius makes it quite clear what sort
of entity a universal would have to be.
>
> A universal must be common to several particulars
>
> 1. in its entirety, and not only in part
> 2. simultaneously, and not in a temporal succession, and
> 3. it should constitute the substance of its
> particulars.[7]
>
>
>
However, as Boethius argues, nothing in real existence can satisfy
these conditions. The main points of his argument can be reconstructed
as follows.
Anything that is common to many things in the required manner has to
be simultaneously, and as a whole, in the substance of these many
things. But these many things are several beings precisely because
they are distinct from one another in their being, that is to say, the
act of being of one is distinct from the act of being of the other.
However, if the universal constitutes the substance of a particular,
then it has to have the same act of being as the particular, because
constituting the substance of something means precisely this, namely,
sharing the act of being of the thing in question, as the
thing's substantial part. But the universal is supposed to
constitute the substance of all of its distinct particulars, as a
whole, at the same time. Therefore, the one act of being of the
universal entity would have to be identical with all the distinct acts
of being of its several particulars at the same time, which is
impossible.[8]
This argument, therefore, establishes that no one thing can be a
universal in its being, that is to say, nothing can be both one being
and common to many beings in such a manner that it shares its act of
being with those many beings, constituting their substance.
This can easily be visualized in the following diagram, where the tiny
lightning bolts indicate the acts of being of the entities involved,
namely, a woman, a man, and their universal humanity (the larger
dotted figure).
![Consists of three 'matchstick human' characters (as on public bathroom doors), a bigger one on the top, drawn in dashed lines, representing the universal humanity, a female on the lower left and a male character on the lower right, representing individual humans. Each has a lightning bolt symbol next to it, representing their acts of being, showing that the acts of being of the individuals are not identical, while the act of being of the universal would have to be identical with these distinct acts of being, which is impossible.](image3.gif)
Figure 3. Illustration of the first part
of Boethius' argument
But then, Boethius goes on, we should perhaps say that the universal
is not one being, but rather many beings, that is, [the collection
of][9]
those constituents of the individual essences of its particulars on
account of which they all fall under the same universal predicable.
For example, on this conception, the genus 'animal' would
not be some one entity, a universal animality over and above the
individual animals, yet somehow sharing its being with them all
(since, as we have just seen, that is impossible), but rather [the
collection of] the individual animalities of all animals.
Boethius rejects this suggestion on the ground that whenever there are
several generically similar entities, they have to have a genus;
therefore, just as the individual animals had to have a genus, so too,
their individual animalities would have to have another one. However,
since the genus of animalities cannot be one entity, some
'super-animality' (for the same reason that the genus of
animals could not be one entity, on the basis of the previous
argument), it seems that the genus of animalities would have to be a
number of further 'super-animalities'. But then again, the
same line of reasoning should apply to these
'super-animalities', giving rise to a number of
'super-super-animalities', and so on to infinity, which is
absurd. Therefore, we cannot regard the genus as some real being even
in the form of [a collection of] several distinct entities. Since
similar reasonings would apply to the other Porphyrian predicables as
well, no universal can exist in this way.
Now, a universal either exists in reality independently of a mind
conceiving of it, or it only exists in the mind. If it exists in
reality, then it either has to be one being or several beings. But
since it cannot exist in reality in either of these two ways, Boethius
concludes that it can only exist in the
mind.[10]
However, to complicate matters, it appears that a universal cannot
exist in the mind either. For, as Boethius says, the universal
existing in the mind is some universal understanding of some thing
outside the mind. But then this universal understanding is either
disposed in the same way as the thing is, or differently. If it is
disposed in the same way, then the thing also must be universal, and
then we end up with the previous problem of a really existing
universal. On the other hand, if it is disposed differently, then it
is false, for "what is understood otherwise than the thing is is
false" (*Five Texts*, Spade 1994, p. 23 (21)). But then,
all universals in the understanding would have to be false
representations of their objects; therefore, no universal knowledge
would be possible, whereas our considerations started out precisely
from the existence of such knowledge, as seems to be clear, e.g., in
the case of geometrical knowledge.
## 4. Boethius' Aristotelian Solution
Boethius' solution of the problem stated in this form consists
in the rejection of this last argument, by pointing out the ambiguity
of the principle that "what is understood otherwise than the
thing is is false". For in one sense this principle states the
obvious, namely, that an act of understanding that represents a thing
*to* *be* otherwise than the thing is is false. This is
precisely the reading of this principle that renders it plausible.
However, in another sense this principle would state that an act of
understanding which represents the thing in a manner which is
different from the manner in which the thing exists is false. In this
sense, then, the principle would state that if the mode of
representation of the act of understanding is different from the mode
of being of the thing, then the act of understanding is false. But
this is far from plausible. In general, it is simply not true that a
representation can be true or faithful only if the mode of
representation matches the mode of being of the thing represented. For
example, a written sentence is a true and faithful representation of a
spoken sentence, although the written sentence is a visible, spatial
sequence of characters, whereas the spoken sentence is an audible,
temporal pattern of articulated sounds. So, what exists as an audible
pattern of sounds is represented visually, that is, the mode of
existence of the thing represented is radically different from the
mode of its representation. In the same way, when particular things
are represented by a universal act of thought, the things exist in a
particular manner, while they are represented in a universal manner,
still, this need not imply that the representation is false. But this
is precisely the sense of the principle that the objection exploited.
Therefore, since in this sense the principle can be rejected, the
objection is not
conclusive.[11]
However, it still needs to be shown that in the particular case of
universal representation the mismatch between the mode of its
representation and the mode of being of the thing represented does in
fact not entail the falsity of the representation. This can easily be
seen if we consider the fact that the falsity of an act of
understanding consists in representing something *to be* in a
way it is not. That is to say, properly speaking, it is only an act of
*judgment* that can be false, by which we think something
*to be* somehow. But a *simple* act of understanding, by
which we simply understand something without thinking it *to
be* somehow, that is, without attributing anything to it, cannot
be false. For example, I can be mistaken if I form in my mind the
judgment that a man *is* running, whereby I conceive a man
*to be* somehow, but if I simply think of a man without
attributing either running or not running to him, I certainly cannot
make a mistake as to how he
*is*.[12]
In the same way, I would be mistaken if I were to think that a
triangle is neither isosceles nor scalene, but I am certainly not in
error if I simply think of a triangle without thinking either that it
is isosceles or that it is scalene. Indeed, it is precisely this
possibility that allows me to form the universal mental
representation, that is, the universal concept of all particular
triangles, regardless of whether they are isosceles or scalene. For
when I think of a triangle in general, then I certainly do not think
of something that is a triangle and is neither isosceles nor scalene,
for that is impossible, but I simply think of a triangle, not thinking
that it is an isosceles and not thinking that it is a scalene
triangle. This is how the mind is able to separate in thought what are
inseparable in real existence. Being either isosceles or scalene is
inseparable from a triangle in real existence. For it is impossible
for something *to be* a triangle, and yet *not to be* an
isosceles and *not* *to be* a scalene triangle either.
Still, it is not impossible for something to be *thought to be*
a triangle and *not* to be *thought to be* an isosceles
and *not* to be *thought to be* a scalene triangle
either (although of course, it still has to be thought to be
either-isosceles-or-scalene). This separation in thought of those
things that cannot be separated in reality is the process of
*abstraction*.[13]
In general, by means of the process of abstraction, our mind (in
particular, the faculty of our mind Aristotle calls the *active
intellect*: *nous poietikos* in Greek, *intellectus
agens* in Latin) is able to form universal representations of
particular objects by disregarding what distinguishes them, and
conceiving of them only in terms of those of their features in respect
of which they do not differ from one another.
In this way, therefore, if universals are regarded as universal mental
representations existing in the mind, then the contradictions emerging
from the Platonic conception no longer pose a threat. On this
Aristotelian conception, universals need not be thought of as somehow
sharing their being with all their distinct particulars, for their
being simply consists in their being thought of, or rather, the
particulars' being thought of in a universal manner. This is
what Boethius expresses by saying in his final replies to
Porphyry's questions the following:
>
>
> ... genera and species subsist in one way, but are understood in
> another. They are incorporeal, but subsist in sensibles, joined to
> sensibles. They are understood, however, as subsisting by themselves,
> and as not having their being in others. [*Five Texts*, Spade
> 1994, p. 25]
>
>
>
But then, if in this way, by positing universals in the mind, the most
obvious inconsistencies of Plato's doctrine can be avoided, no
wonder that Plato's "original" universals, the
universal models which particulars try to imitate by their features,
found their place, in accordance with the long-standing Neo-Platonic
tradition, in the divine
mind.[14]
It is this tradition that explains Boethius' cautious
formulation of his conclusion concerning Aristotelianism pure and
simple, as not providing us with the whole story. As he writes:
>
>
> ... Plato thinks that genera and species and the rest are not
> only understood as universals, but also exist and subsist apart from
> bodies. Aristotle, however, thinks that they are understood as
> incorporeal and universal, but subsist in sensibles.
>
>
>
> I did not regard it as appropriate to decide between their views. For
> that belongs to a higher philosophy. But we have carefully followed
> out Aristotle's view here, not because we would recommend it the
> most, but because this book, [the *Isagoge*], is written about
> the *Categories*, of which Aristotle is the author. [*Five
> Texts*, Spade 1994, p. 25]
>
>
>
## 5. Platonic Forms as Divine Ideas
Besides Boethius, the most important mediator between the Neo-Platonic
philosophical tradition and the Christianity of the Medieval Latin
West, pointing out also its theological implications, was St.
Augustine. In a passage often quoted by medieval authors in their
discussions of divine ideas, he writes as follows:
>
>
> ... in Latin we can call the Ideas "forms" or
> "species", in order to appear to translate word for word.
> But if we call them "reasons", we depart to be sure from a
> proper translation -- for reasons are called "logoi"
> in Greek, not Ideas -- but nevertheless, whoever wants to use
> this word will not be in conflict with the fact. For Ideas are certain
> principal, stable and immutable forms or reasons of things. They are
> not themselves formed, and hence they are eternal and always stand in
> the same relations, and they are contained in the divine
> understanding. [Spade 1985, Other Internet Resources, p.
> 383][15]
>
>
>
As we could see from Boethius' solution, in this way, if
Platonic Forms are not universal beings existing in a universal
manner, but their universality is due to a universal manner of
understanding, we can avoid the contradictions arising from the
"naive" Platonic conception. Nevertheless, placing
universal ideas in the divine mind as the archetypes of creation, this
conception can still do justice to the Platonic intuition that what
accounts for the necessary, universal features of the ephemeral
particulars of the visible world is the presence of some universal
exemplars in the source of their being. It is precisely in virtue of
having some insight into these exemplars themselves that we can have
the basis of universal knowledge Plato was looking for. As St.
Augustine continues:
>
>
> And although they neither arise nor perish, nevertheless everything
> that is able to arise and perish, and everything that does arise and
> perish, is said to be formed in accordance with them. Now it is denied
> that the soul can look upon them, unless it is a rational one, [and
> even then it can do so] only by that part of itself by which it
> surpasses [other things] -- that is, by its mind and reason, as
> if by a certain "face", or by an inner and intelligible
> "eye". To be sure, not each and every rational soul in
> itself, but [only] the one that is holy and pure, that [is the one
> that] is claimed to be fit for such a vision, that is, the one that
> keeps that very eye, by which these things are seen, healthy and pure
> and fair and like the things it means to see. What devout man imbued
> with true religion, even though he is not yet able to see these
> things, nevertheless dares to deny, or for that matter fails to
> profess, that all things that exist, that is, whatever things are
> contained in their own genus with a certain nature of their own, so
> that they might exist, are begotten by God their author, and that
> by that same author everything that lives is alive, and that the
> entire safe preservation and the very order of things, by which
> changing things repeat their temporal courses according to a fixed
> regimen, are held together and governed by the laws of a supreme God?
> If this is established and granted, who dares to say that God has set
> up all things in an irrational manner? Now if it is not correct to say
> or believe this, it remains that all things are set up by reason, and
> a man not by the same reason as a horse -- for that is absurd to
> suppose. Therefore, single things are created with their own reasons.
> But where are we to think these reasons exist, if not in the mind of
> the creator? For he did not look outside himself, to anything placed
> [there], in order to set up what he set up. To think that is
> sacrilege. But if these reasons of all things to be created and
> [already] created are contained in the divine mind, and [if] there
> cannot be anything in the divine mind that is not eternal and
> unchangeable, and [if] Plato calls these principal reasons of things
> "Ideas", [then] not only are there Ideas but they are
> true, because they are eternal and [always] stay the same way, and
> [are] unchangeable. And whatever exists comes to exist, however it
> exists, by participation in them. But among the things set up by God,
> the rational soul surpasses all [others], and is closest to God when
> it is pure. And to the extent that it clings to God in charity, to
> that extent, drenched in a certain way and lit up by that intelligible
> light, it discerns these reasons, not by bodily eyes but by that
> principal [part] of it by which it surpasses [everything else], that
> is, by its intelligence. By this vision it becomes most blessed. These
> reasons, as was said, whether it is right to call them Ideas or forms
> or species or reasons, many are permitted to call [them] whatever they
> want, but [only] to a very few [is it permitted] to see what is true.
> [Spade 1985, Other Internet Resources, pp. 383-384]
>
>
>
Augustine's conception, then, saves Plato's original
intuitions, yet without their inconsistencies, while it also combines
his philosophical insights with Christianity. But a really
intriguing solution to a philosophical problem usually gives rise to a
number of further problems. This solution of the original problem with
Plato's Forms is no exception.
### 5.1 Divine Ideas and Divine Simplicity
First of all, it generates a particular ontological/theological
problem concerning the relationship between God and His Ideas. For
according to the traditional philosophical conception of divine
perfection, God's perfection demands that He be absolutely
simple, without any composition of any sort of
parts.[16]
So, God and the divine mind are not related to one another as a man
and his mind, namely as a substance to one of its several powers, but
whatever powers God *has* He *is*. Furthermore, the
Divine Ideas themselves cannot be regarded as being somehow the
eternal products of the divine mind distinct from the divine mind, and
thus from God Himself, for the only eternal being is God, and
everything else is His creature. Now, since the Ideas are not
creatures, but the archetypes of creatures in God's mind, they
cannot be distinct from God. However, as is clear from the passage
above, there are several Ideas, and there is only one God. So how can
these several Ideas possibly be one and the same God?
Augustine himself never explicitly raised the problem, but Aquinas, for
example, who (among others) did, provided the following rather
intuitive solution to it (ST1, q. 15, a. 2). The Divine Ideas are in
the Divine Mind as its objects, i.e., as the things understood. But
the diversity of the objects of an act of understanding need not
diversify the act itself (as when understanding the Pythagorean
theorem, we understand both squares and triangles). Therefore, it is
possible for the self-thinking divine essence to understand itself in
a single act of understanding so perfectly that this act of
understanding not only understands the divine essence as it is in
itself, but also in respect of all possible ways in which it can be
imperfectly participated by any finite creature. The cognition of the
diversity of these diverse ways of participation accounts for the
plurality of divine ideas. But since all these diverse ways are
understood in a single eternal act of understanding, which is nothing
but the act of divine being, and which in turn is again the divine
essence itself, the multiplicity of ideas does not entail any
corresponding multiplicity of the divine essence. To be sure, this
solution may still give rise to the further questions as to what these
diverse ways are, exactly how they are related to the divine essence,
and how their diversity is compatible with the unity and simplicity of
the ultimate object of divine thought, namely, divine essence itself.
In fact, these are questions that were raised and discussed in detail
by authors such as Henry of Ghent (c. 1217-1293), Thomas of
Sutton (ca. 1250-1315), Duns Scotus (c. 1266-1308) and
others.[17]
### 5.2 Illuminationism vs. Abstractionism
Another major issue connected to the doctrine of divine ideas, as
should also be clear from the previously quoted passage, was the
bundle of epistemological questions involved in Augustine's
doctrine of divine illumination. The doctrine -- according to
which the human soul, especially "one that is holy and
pure", obtains a specific supernatural aid in its acts of
understanding, by gaining a direct insight into the Divine Ideas
themselves -- received philosophical support in terms of a
typically Platonic argument in Augustine's *De Libero
Arbitrio*.[18]
The argument can be reconstructed as follows.
>
> **The Augustinian Argument for Illumination**.
>
> 1. I can come to know from experience only something that can be
> found in experience [self-evident]
> 2. Absolute unity cannot be found in experience [assumed]
> 3. Therefore, I cannot come to know absolute unity from experience.
> [1,2]
> 4. Whatever I know, but I cannot come to know from experience, I came
> to know from a source that is not in this world of experiences.
> [self-evident]
> 5. I know absolute unity. [assumed]
> 6. Therefore, I came to know absolute unity from a source that is not
> in this world of experiences. [3,4,5]
>
>
>
> *Proof of 2*. Whatever can be found in experience is some
> material being, extended in space, and so it has to have a multitude
> of spatially distinct parts. Therefore, it is many in respect of those
> parts. But what is many in some respect is not one in that respect,
> and what is not one in some respect is not absolutely one. Therefore,
> nothing can be found in experience that is absolutely one, that is,
> nothing in experience is an absolute unity.
>
>
>
> *Proof of 5*. I know that whatever is given in experience has
> many parts (even if I may not be able to discern those parts by my
> senses), and so I know that it is not an absolute unity. But I can
> have this knowledge only if I know absolute unity, namely, something
> that is not many in any respect, not even in respect of its parts,
> for, in general, I can know that something is F in a certain respect,
> and not an F in some other respect, only if I know what it is for
> something to be an F without any qualification. (For example, I know
> that the two halves of a body, taken together, are not absolutely two,
> for taken one by one, they are not absolutely one, since they are also
> divisible into two halves, etc. But I can know this only because I
> know that for obtaining absolutely two things [and not just two
> multitudes of further things], I would have to have two things that in
> themselves are absolutely one.) Therefore, I know absolute unity.
>
>
>
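The deductive core of this argument can be checked mechanically. The following sketch in Lean is purely illustrative: the predicate names are invented labels, and premises 1, 2, 4, and 5 are simply assumed as hypotheses, so the sketch confirms only that the conclusion follows from the premises, not that the premises are true (step 3 is derived inline).

```lean
-- Illustrative formalization; predicate names are invented labels.
variable (Obj : Type) (unity : Obj)
variable (FoundInExp KnowableFromExp Known FromBeyondExp : Obj → Prop)

example
    (p1 : ∀ x, KnowableFromExp x → FoundInExp x)      -- premise 1
    (p2 : ¬ FoundInExp unity)                          -- premise 2
    (p4 : ∀ x, Known x → ¬ KnowableFromExp x → FromBeyondExp x) -- premise 4
    (p5 : Known unity) :                               -- premise 5
    FromBeyondExp unity :=                             -- conclusion 6
  -- step 3 (¬ KnowableFromExp unity) is the inner lambda
  p4 unity p5 (fun h => p2 (p1 unity h))
```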
It is important to notice here that this argument (crucially) assumes
that the intellect is passive in acquiring its concepts. According to
this assumption, the intellect merely receives the cognition of its
objects as it finds them. By contrast, on the Aristotelian conception,
the human mind actively processes the information it receives from
experience through the senses. So by means of its faculty
appropriately called the active or agent intellect, it is able to
produce from a limited number of experiences a universal concept
equally representing all possible particulars falling under that
concept. In his commentary on Aristotle's *De Anima*
Aquinas insightfully remarks:
>
>
> The reason why Aristotle came to postulate an active intellect was his
> rejection of Plato's theory that the essences of sensible things
> existed apart from matter, in a state of actual intelligibility. For
> Plato there was clearly no need to posit an active intellect. But
> Aristotle, who regarded the essences of sensible things as existing in
> matter with only a potential intelligibility, had to invoke some
> abstractive principle in the mind itself to render these essences
> actually intelligible. [*In De Anima*, bk. 3, lc. 10]
>
>
>
On the basis of these and similar considerations, therefore, one may
construct a rather plausible Aristotelian counterargument, which is
designed to show that we need not necessarily gain our concept of
absolute unity from a supernatural source, for it is possible for us
to obtain it from experience by means of the active intellect. Of
course, similar considerations should apply to other concepts as
well.
>
> **An Aristotelian-Thomistic counterargument from
> abstraction**.
>
> 1. I know from experience everything whose concept my active
> intellect is able to abstract from experience. [self-evident]
> 2. But my active intellect is able to abstract from experience the
> concept of unity, since we all experience each singular thing as being
> one, distinct from another. [self-evident, common
> experience][19]
> 3. Therefore, I know unity from experience by abstraction. [1,2]
> 4. Whenever I know something from experience by abstraction, I know
> both the thing whose concept is abstracted and its limiting conditions
> from which its concept is abstracted. [self-evident]
> 5. Therefore, I know both unity and its limiting conditions from
> which its concept is abstracted. [3,4]
> 6. But whenever I know something and its limiting conditions, and I
> can conceive of it without its limiting conditions (and this is
> precisely what happens in abstraction), I can conceive of its
> absolute, unlimited realization. [self-evident]
> 7. Therefore, I can conceive of the absolute, unlimited realization
> of unity, based on the concept of unity I acquired from experience by
> abstraction. [5,6]
> 8. Therefore, it is not necessary for me to have a preliminary
> knowledge of absolute unity before all experience, from a source other
> than this world of experiences. [7]
>
>
>
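Like the Augustinian argument, this counterargument is deductively valid given its premises. A Lean sketch in the same illustrative spirit (invented predicate names, premises taken on trust) derives the conclusion of step 7, from which step 8 is then drawn informally in the text:

```lean
-- Illustrative formalization of steps 1-7; predicate names are invented.
variable (Obj : Type) (unity : Obj)
variable (Abstractable KnownFromExp KnowsLimits ConceivableAbsolutely : Obj → Prop)

example
    (p1 : ∀ x, Abstractable x → KnownFromExp x)        -- premise 1
    (p2 : Abstractable unity)                           -- premise 2
    (p4 : ∀ x, KnownFromExp x → KnowsLimits x)          -- premise 4
    (p6 : ∀ x, KnownFromExp x → KnowsLimits x → ConceivableAbsolutely x) -- premise 6
    : ConceivableAbsolutely unity :=                    -- conclusion 7
  -- step 3 (KnownFromExp unity) is `p1 unity p2`; step 5 follows by p4
  p6 unity (p1 unity p2) (p4 unity (p1 unity p2))
```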
To be sure, we should notice here that this argument *does not
falsify the doctrine* of illumination.
Provided it works, it only *invalidates* the
Augustinian-Platonic *argument* for illumination. Furthermore,
this is obviously not a sweeping, knock-down refutation of the idea
that at least some of our concepts perhaps could not so simply be
derived from experience by abstraction; in fact, in the particular
case of unity, and in general, in connection with our transcendental
notions (i.e., notions that apply in each Aristotelian category, so
they *transcend* the limits of each one of them, such as the
notions of *being*, *unity*, *goodness*,
*truth*, etc.), even the otherwise consistently Aristotelian
Aquinas would have a more complicated story to tell (see Klima 2000b).
Nevertheless, although Aquinas would still leave some room for
illumination in his epistemology, he would provide for illumination an
entirely naturalistic interpretation, as far as the acquisition of our
intellectual concepts of material things is concerned, by simply
identifying it with the "intellectual light in us", that
is, the active intellect, which enables us to acquire these concepts
from experience by
abstraction.[20]
Duns Scotus, who opposed Aquinas on so many other points, takes
basically the same stance on this issue. Other medieval theologians,
especially such prominent "Augustinians" as Bonaventure,
Matthew of Aquasparta, or Henry of Ghent, would provide greater room
for illumination in the form of a direct, specific, supernatural
influence needed for human intellectual cognition in this life besides
the general divine cooperation needed for the workings of our natural
powers, in particular, the abstractive function of the active
intellect.[21]
But they would not regard illumination as supplanting, but rather as
supplementing intellectual abstraction.
As we have seen, Augustine makes recognition of truth dependent on
divine illumination, a sort of irradiation of the intelligible light
of divine ideas, which is accessible only to the few who are
"holy and pure".
But this seems to go against at least
>
>
> 1. the experience that there are knowledgeable non-believers or
> pagans
>
>
>
> 2. the Aristotelian insight that we can have infallible comprehension
> of the first principles of scientific demonstrations for which we only
> need the intellectual concepts that we can acquire naturally, from
> experience by
> abstraction,[22]
>
>
>
and
>
>
> 3. the philosophical-theological consideration that if human reason,
> man's natural faculty for acquiring truth were not sufficient
> for performing its natural function, then human nature would be
> naturally defective in its noblest part, precisely in which it was
> created after the image of God.
>
>
>
In fact, these are only some of the problems explicitly raised and
considered by medieval Augustinians, which prompted their ever more
refined accounts of the role of illumination in human cognition.
For example, Matthew of Aquasparta, recapitulating St. Bonaventure,
writes as follows:
>
>
> Plato and his followers stated that the entire essence of cognition
> comes forth from the archetypal or intelligible world, and from the
> ideal reasons; and they stated that the eternal light contributes to
> certain cognition in its evidentness as the entire and sole reason for
> cognition, as Augustine in many places recites, in particular in bk.
> viii. c. 7 of *The City of God*: 'The light of minds for
> the cognition of everything is God himself, who created
> everything'.
>
>
>
> But this position is entirely mistaken. For although it appears to
> secure the way of wisdom, it destroys the way of knowledge.
> Furthermore, if that light were the entire and sole reason for
> cognition, then the cognition of things in the Word would not differ
> from their cognition in their proper kind, neither would the cognition
> of reason differ from the cognition of revelation, nor philosophical
> cognition from prophetic cognition, nor cognition by nature from
> cognition by grace.
>
>
>
> The other position was apparently that of Aristotle, who claimed that
> the entire essence of cognition is caused and comes from below,
> through the senses, memory, and experience, [working together] with
> the natural light of our active intellect, which abstracts the species
> from phantasms and makes them actually understood. And for this reason
> he did not claim that the eternal light is necessary for cognition,
> indeed, he never spoke about it. And this opinion of his is obvious in
> bk. 2 of the *Posterior Analytics*. [...]
>
>
>
> But this position seems to be very deficient. For although it builds
> the way of knowledge, it totally destroys the way of wisdom.
> [...]
>
>
>
> Therefore, I take it that one should maintain an intermediate position
> without prejudice, by stating that our cognition is caused both from
> below and from above, from external things as well as the ideal
> reasons.
>
>
>
> [...] God has provided our mind with some intellectual light, by
> means of which it would abstract the species of objects from the
> sensibles, by purifying them and extracting their quiddities, which
> are the per se objects of the intellect. [...] But this light is
> not sufficient, for it is defective, and is mixed with obscurity,
> unless it is joined and connected to the eternal light, which is the
> perfect and sufficient reason for cognition, and the intellect attains
> and somehow touches it by its upper part.
>
>
>
> However the intellect attains that light or those eternal reasons as
> the reason for cognition not as sole reason, for then, as has been
> said, cognition in the Word would not differ from cognition in proper
> kind, nor the cognition of wisdom would differ from the cognition of
> knowledge. Nor does it attain them as the entire reason, for then it
> would not need the species and similitudes of things; but this is
> false, for the Philosopher says, and experience teaches, that if
> someone loses a sense, then he loses that knowledge of things which
> comes from that sense. [DHCR, pp. 94-96]
>
>
>
In this way, taking the intermediate position between Platonism and
Aristotelianism pure and simple, Matthew interprets Augustine's
Platonism as being compatible with the Aristotelian view, crediting
the Aristotelian position with accounting for the specific empirical
content of our intellectual concepts, while crediting the Platonic
view with accounting for their certainty in grasping the natures of
things. Still, it may not appear quite clear exactly what the
contribution of the eternal light is, or indeed whether it is necessary
at all. After all, if by abstraction we manage to gain those
intellectual concepts that represent the natures of things, what else
is needed to have a grasp of those natures?
Henry of Ghent, in his detailed account of the issue, provides an
interesting answer to this question. Henry first distinguishes
cognition of a true thing from the cognition of the truth of the
thing. Since any really existing thing is truly what it is (even if it
may on occasion appear something else), any cognition of any really
existing thing is the cognition of a true thing. But cognition of a
true thing may occur without the cognition of its truth, since the
latter is the cognition that the thing adequately corresponds to its
exemplar in the human or divine mind. For example, if I draw a circle
and a cat sees it, then the cat sees the real, true thing as it is
presented to it. Yet the cat is simply unable to judge whether it is a
true circle in the sense that it really is what it is supposed to be,
namely, a locus of points equidistant from a given point. By contrast,
a human being is able to judge the truth of this thing, insofar as he
or she would be able to tell that my drawing is not really and truly a
circle, but is at best a good approximation of what a true circle
would be.
Now, in intellectual cognition, just as in the sensory cognition of
things, when the intellect simply apprehends a true thing, then it
still does not have to judge the truth of the thing, even though it
may have a true apprehension, adequately representing the thing. But
the cognition of the truth of the thing only occurs in a judgment,
when the intellect judges the adequacy of the thing to its
exemplar.
But since a thing can be compared to two sorts of exemplar, namely, to
the exemplar in the human mind, and to the exemplar in the divine
mind, the cognition of the truth of a thing is twofold, relative to
these two exemplars. The exemplar of the human mind, according to
Henry, is nothing but the Aristotelian abstract concept of the thing,
whereby the thing is simply apprehended in a universal manner, and
hence its truth is judged relative to this concept, when the intellect
judges that the thing in question falls under this concept or not. As
Henry writes:
>
>
> [...] attending to the exemplar gained from the thing as the
> reason for its cognition in the cognizer, the truth of the thing can
> indeed be recognized, by forming a concept of the thing that conforms
> to that exemplar; and it is in this way that Aristotle asserted that
> man gains knowledge and cognition of the truth from purely natural
> sources about changeable natural things, and that this exemplar is
> acquired from things by means of the senses, as from the first
> principle of art and science. [...] So, by means of the universal
> notion in us that we have acquired from the several species of animals
> we are able to realize concerning any thing that comes our way whether
> it is an animal or not, and by means of the specific notion of donkey
> we realize concerning any thing that comes our way whether it is a
> donkey or not. [HQO, a. 1, q. 2, fol. 5 E-F]
>
>
>
But this sort of cognition of the truth of a thing, although it is
intellectual, universal cognition, is far from being the infallible
knowledge we are seeking. As Henry argues further:
>
>
> But by this sort of acquired exemplar in us we do not have the
> entirely certain and infallible cognition of truth. Indeed, this is
> entirely impossible for three reasons, the first of which is taken
> from the thing from which this exemplar is abstracted, the second from
> the soul, in which this exemplar is received, and the third from the
> exemplar itself that is received in the soul about the thing.
>
>
>
> The first reason is that this exemplar, since it is abstracted from
> changeable things, has to share in the nature of changeability.
> Therefore, since physical things are more changeable than mathematical
> objects, this is why the Philosopher claimed that we have a greater
> certainty of knowledge about mathematical objects than about physical
> things by means of their universal species. And this is why Augustine,
> discussing this cause of the uncertainty of the knowledge of natural
> things in q. 9 of his *Eighty-Three Different Questions*, says
> that from the bodily senses one should not expect the pure truth
> [*syncera veritas*]
>
>
>
> ... The second reason is that the human soul, since it is
> changeable and susceptible to error, cannot be rectified to save it
> from swerving into error by anything that is just as changeable as
> itself, or even more; therefore, any exemplar that it receives from
> natural things is necessarily just as changeable as itself, or even
> more, since it is of an inferior nature, whence it cannot rectify the
> soul so that it would persist in the infallible truth.
>
>
>
> ... The third reason is that this sort of exemplar, since it is
> the intention and species of the sensible thing abstracted from the
> phantasm, is similar to the false as well as to the true [thing], so
> that on its account these cannot be distinguished. For it is by means
> of the same images of sensible things that in dreams and madness we
> judge these images to be the things, and in sane awareness we judge
> the things themselves. But the pure truth can only be perceived by
> discerning it from falsehood. Therefore, by means of such an exemplar
> it is impossible to have certain knowledge, and certain cognition of
> the truth. And so if we are to have certain knowledge of the truth,
> then we have to turn our mind away from the senses and sensible
> things, and from every intention, no matter how universal and
> abstracted from sensible things, to the unchangeable truth existing
> above the mind [...]. [*ibid.*, fol. 5. F]
>
>
>
So, Henry first distinguished between the cognition of a true thing
and the intellectual cognition of the truth of a thing, and then,
concerning the cognition of the truth of the thing, he distinguished
between the cognition of truth by means of a concept abstracted from
the thing and "the pure truth" [*veritas syncera vel
liquida*], which he says cannot be obtained by means of such
abstracted concepts.
But then the question naturally arises: what is this "pure
truth", and how can it be obtained, if at all? Since cognition
of the pure truth involves comparison of objects not to their acquired
exemplar in the human mind, but to their eternal exemplar in the
divine mind, in the ideal case it would consist in some sort of direct
insight into the divine ideas, enabling the person who has this access
to see everything in its true form, as "God meant it to
be", and also see how it fails to live up to its idea due to its
defects. So, it would be like the direct intuition of two objects, one
sensible, another intelligible, on the basis of which one could also
immediately judge how closely the former approaches the latter. But
this sort of direct intuition of the divine ideas is only the share of
angels and the souls of the blessed in beatific vision; it is
generally not granted in this life, except in rare, miraculous cases,
in rapture, or prophetic vision.
Therefore, if there is to be any non-miraculous recognition of this
pure truth in this life, then it has to occur differently. Henry
argues that even if we do not have a direct intuition of divine ideas
as the objects cognized (whereby their particulars are recognized as
more or less approximating them), we do have the cognition of the
quiddities of things as the objects cognized by reason of some
indirect cognition of their ideas. The reason for this, Henry says, is
the following:
>
>
> ...for our concept to be true by the pure truth, the soul,
> insofar as it is informed by it, has to be similar to the truth of the
> thing outside, since truth is a certain adequacy of the thing and the
> intellect. And so, as Augustine says in bk. 2 of *On Free Choice of
> the Will*, since the soul by itself is liable to slip from truth
> into falsity, whence by itself it is not informed by the truth of any
> thing, although it can be informed by it, but nothing can inform
> itself, for nothing can give what it does not have; therefore, it is
> necessary that it be informed of the pure truth of a thing by
> something else. But this cannot be done by the exemplar received from
> the thing itself, as has been shown earlier [in the previously quoted
> passage -- GK]. It is necessary, therefore, that it be informed
> by the exemplar of the unchangeable truth, as Augustine intends in the
> same place. And this is why he says in *On True Religion* that
> just as by its truth are true those that are true, so too by its
> similitude are similar those that are similar. It is necessary,
> therefore, that the unchangeable truth impress itself into our
> concept, and that it transform our concept to its own character, and
> that in this way it inform our mind with the expressed truth of the
> thing by the same similitude that the thing itself has in the first
> truth. [HQO a. 1, q. 2, fol. 7, I]
>
>
>
So, when we have the cognition of the pure truth of a thing, then we
cannot have it in terms of the concept acquired from the thing, yet,
since we cannot have it from a direct intuition of the divine exemplar
either, the way we can have it is that the acquired concept primarily
impressed on our mind will be further clarified, but no longer by a
similarity of the thing, but by the similarity of the divine exemplar
itself. Henry's point seems to be that given that the external
thing itself is already just a (more or less defective) copy of the
exemplar, the (more or less defective) copy of this copy can only be
improved by means of the original exemplar, just as a copy of a poor
repro of some original picture can only be improved by retouching the
copy not on the basis of the poor repro, but on the basis of the
original. But since the external thing is fashioned after its divine
idea, the "retouching" of the concept in terms of the
original idea does yield a better representation of the thing; indeed,
so much better that on the basis of this "retouched"
concept we are even able to judge just how well the thing realizes its
kind.
For example, when I simply have the initial simple concept of circle
abstracted from circular objects I have seen, that concept is good
enough for me to tell circular objects apart from non-circular ones.
But with this simple, unanalyzed concept in mind, I may still not be
able to say what a true circle is supposed to be, and accordingly,
exactly how and to what extent the more or less circular objects I see
fail or meet this standard. However, when I come to understand that a
circle is a locus of points equidistant from a given point, I will
realize by means of a clear and distinct concept what it was that I
originally conceived in a vague and confused manner in my original
concept of
circle.[23]
To be sure, I do not come to this definition of circle by looking up
to the heaven of Ideas; in fact, I may just be instructed about it by
my geometry teacher. But what is not given to me by my geometry
teacher is the understanding of the fact that what is expressed by the
definition is indeed what I originally rather vaguely conceived by my
concept abstracted from visible circles. This "flash" of
understanding, when I realize that it is necessary for anything that
truly matches the concept of a circle to be such as described in the
definition, would be an instance of receiving illumination without any
particular, miraculous
revelation.[24]
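Henry's contrast between the vague concept abstracted from visible circles and the clear definitional concept can be illustrated with a toy computation. The sketch below is purely illustrative (the function name and tolerance are invented for the example): it applies the definition of a circle as a locus of points equidistant from a given point, the kind of test the definitional concept supports but the merely abstracted one does not.

```python
import math

def is_true_circle(points, center, tol=1e-9):
    """Apply the definition of a circle: a locus of points
    equidistant from a given point (the center)."""
    distances = [math.dist(p, center) for p in points]
    return max(distances) - min(distances) <= tol

# Points generated from the definition itself: a true circle.
exact = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.0, 4.0)]
# A hand-drawn approximation: roughly circular, but not equidistant.
drawn = [(1.0, 0.0), (0.0, 1.02), (-0.98, 0.0), (0.0, -1.0)]

print(is_true_circle(exact, (0.0, 0.0)))  # True
print(is_true_circle(drawn, (0.0, 0.0)))  # False
```

With the unanalyzed concept one can only eyeball circularity; with the definition one can say exactly how, and by how much, a drawn figure falls short of being a true circle.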
However, even if in this light Henry's distinctions between the
two kinds of truths and the corresponding differences of concepts make
good sense, and even if we accept that the concepts primarily acquired
from sensible objects need to be further worked on in order to provide
us with true, clear understanding of the natures of things, it is not
clear that this further work cannot be done by the natural faculties
of our mind, assuming only the general influence of God in sustaining
its natural operations, but without performing any direct and specific
"retouching" of our concepts "from above".
Using our previous analogy of the acquired concept as the copy of a
poor repro of an original, we may say that if we have a number of
different poor, fuzzy repros that are defective in a number of
different ways, then in a long and complex process of collating them,
we might still be able to discern the underlying pattern of the original,
and thus produce a copy that is actually closer to the original than
any of the direct repros, without ever being allowed a glimpse of the
original.
In fact, this was precisely the way Aristotelian theologians, such as
Aquinas, interpreted Augustine's conception of illumination,
reducing God's role to providing us with the intelligible light
not by directly operating on any of our concepts in particular, but
providing the mind with "a certain likeness of the uncreated
light, obtained through participation" (ST1, q. 84, a. 5c),
namely, the agent intellect.
Matthew of Aquasparta quite faithfully describes this view,
associating it with the Aristotelian position he rejects:
>
>
> Some people engaged in "philosophizing" [*quidam
> philosophantes*] follow this position, although not entirely, when
> they assert that that light is the general cause of certain cognition,
> but is not attained, and its special influence is not necessary in
> natural cognition; but the light of the agent intellect is sufficient
> together with the species and similitudes of things abstracted and
> received from the things; for otherwise the operation of [our] nature
> would be rendered vacuous, our intellect would understand only by
> coincidence, and our cognition would not be natural, but supernatural.
> And what Augustine says, namely, that everything is seen in and
> through that light, is not to be understood as if the intellect would
> somehow attain that light, nor as if that light would have some
> specific influence on it, but in such a way that the eternal God
> naturally endowed us with intellectual light, in which we naturally
> cognize and see all cognizable things that are within the scope of
> reason. [DHCR, p. 95]
>
>
>
Although Matthew vehemently rejects this position as going against
Augustine's original intention ("which is unacceptable,
since he is a prominent teacher, whom catholic teachers and especially
theologians ought to follow" -- as Matthew says), this
view, in ever more refined versions, gained more and more ground
toward the end of the 13th century, adopted not only by Aquinas and
his followers, but also by his major opponents, namely, Scotus and his
followers.[25]
Still, illuminationism and abstractionism were never treated by
medieval thinkers as mutually exclusive alternatives. They rather
served as the two poles of a balancing act in judging the respective
roles of nature and direct divine intervention in human intellectual
cognition.[26]
Although Platonism definitely survived throughout the Middle Ages (and
beyond), in the guise of the interconnected doctrines of divine ideas,
participation, and illumination, there was a quite general
Aristotelian
consensus,[27]
especially after Abelard's time, that the mundane universals of
the species and genera of material beings exist as such *in the
human mind*, as a result of the mind's abstracting from
their individuating conditions. But consensus concerning this much by
no means entailed a unanimous agreement on exactly what the universals
thus abstracted are, what it is for them to exist in the mind, how
they are related to their particulars, what their real foundation in
those particulars is, what their role is in the constitution of our
universal knowledge, and how they contribute to the encoding and
communication of this knowledge in the various human languages. For
although the general Aristotelian stance towards universals
successfully handles the inconsistencies quite obviously generated by
a naive Platonist ontology, it gives rise precisely to these
further problems of its own.
## 6. Universals According to Abelard's Aristotelian Conception
It was Abelard who first dealt with the problem of universals
explicitly in this form. Having relatively easily disposed of putative
universal forms as real entities corresponding to Boethius'
definition, in his *Logica Ingredientibus* he concludes that
given Aristotle's definition of universals in his *On
Interpretation* as those things that can be predicated of several
things, it is only universal *words* that can be regarded as
really existing universals. However, since according to
Aristotle's account in the same work, words are meaningful in
virtue of signifying concepts in the mind, Abelard soon arrives at the
following questions:
1. What is the *common cause* in accordance with which a
common name is imposed?
2. What is the understanding's *common conception* of
the likeness of things?
3. Is a word called "common" on account of the common
cause things agree in, or on account of the common conception, or on
account of both together? [*Five Texts*, Spade 1994, p. 41
(88)]
These questions open up a new chapter in the history of the problem of
universals. For they add a new aspect to the bundle of the originally
primarily ontological, epistemological, and theological questions
constituting the problem, namely, a *semantic* aspect. On the
Aristotelian conception of universals
as universal *predicables*, there obviously *are*
universals, namely, our universal words. But the universality of our
words is clearly not dependent on the physical qualities of our
articulate sounds, or of the various written marks indicating them,
but on their representative function. So, to give an account of the
universality of our universal words, we have to be able to tell in
virtue of what they have this universal representative function, that
is to say, we have to be able to assign a *common cause* by the
recognition of which in terms of a *common concept* we can give
a *common name* to a *potential infinity of individuals*
belonging to the same kind.
But this common cause certainly cannot be a *common thing* in
the way Boethius described universal things, for, as we have seen, the
assumption of the existence of such a common thing leads to
contradictions. To be sure, Abelard also provides a number of further
arguments, dealing with several refinements of Boethius'
characterization of universals proposed by his contemporaries, such as
William of Champeaux, Bernard of Chartres, Clarembald of Arras,
Jocelin of Soissons, and Walter of Mortagne - but I cannot go
into those details
here.[28]
The point is that he refutes and rejects all these suggestions to
save real universals either as common things, having their own real
unity, or as collections of several things, having a merely collective
unity. The gist of his arguments against the former view is that the
universal thing on that view would have to have its own numerical
unity, and therefore, since it constitutes the substance of all its
singulars, all these singulars would have to be substantially one and
the same thing which would have to have all their contrary properties
at the same time, which is impossible. The main thrust of his
arguments against the collection-theory is that collections are
arbitrary integral wholes of the individuals that make them up, so
they simply do not fill the bill of the Porphyrian characterizations
of the essential predicables such as genera and
species.[29]
So, the common cause of the imposition of universal words cannot be
any one thing, or a multitude of things; yet, being a common
*cause*, it cannot be nothing. Therefore, this common cause,
which Abelard calls the
*status*[30]
of those things to which it is common, is a cause, but it is a cause
which is a non-thing. However strange this may sound, Abelard observes
that sometimes we do assign *causes* which are not
*things*. For example, when we say "The ship was wrecked
because the pilot was absent", the cause that we assign, namely,
that the pilot was absent is not some *thing*, it is rather
*how* things were, i.e., the *way* things were, which in
this case we signify by the whole proposition "The pilot was
absent".[31]
From the point of view of understanding what Abelard's
*status* are, it is significant that he assimilates the causal
role of *status* as the common cause of imposition to causes
that are signified by whole propositions. These *significata*
of whole propositions, which in English we may refer to by using the
corresponding "that-clauses" (as I did above, referring to
the cause of the ship's wreck by the phrase "that the
pilot was absent"), and in Latin by an
accusative-with-infinitive construction, are what Abelard calls the
*dicta* of propositions. These *dicta*, not being
identifiable with any single thing, yet, not being nothing, constitute
an ontological realm that is completely different from that of
ordinary things. But it is also in this realm that Abelard's
*common causes of imposition* may find their place.
Abelard says that the common cause of imposition of a universal name
has to be something in which things falling under that name agree. For
example, the name 'man' (in the sense of 'human
being', and not in the sense of 'male human being')
is imposed on all humans on account of something in which all humans,
as such, agree. But that in which all humans as such agree is that
each one of them is a man, that is, each one agrees with all others in
their *being a man*. So, it is their being human [*esse
hominem*] that is the common cause Abelard was looking for, and
this is what he calls the *status* of man. The *status*
of man is not a thing; it is not any singular man, for obviously no
singular man is common to all men, and it is not a universal man, for
there is no such thing. But *being a man* is common in the
required manner (i.e., it is something in which all humans agree), yet
it is clearly not a thing. For let us consider the singular
propositions 'Socrates is a man' [*Socrates est
homo*], 'Plato is a man' [*Plato est homo*],
etc. These signify their *dicta*, namely, Socrates's
being a man [*Socratem esse hominem*], and Plato's being
a man [*Platonem esse hominem*], etc. But then it is clear that
if we abstract from the singular subjects and retain what is common to
them all, we can get precisely the *status* in which all these
subjects agree, namely, being a man [*esse hominem*]. So, the
*status*, just like the *dicta* from which they can be
obtained, constitute an ontological realm that is entirely different
from that of ordinary things.
Still, despite the fact that it clearly has something to do with
abstraction, an activity of the mind, Abelard insists that a
*status* is not a concept of our mind. The reason for his
insistence is that the *status*, being the *common
cause* of imposition of a common name, must be something real, the
existence of which is not dependent on the activity of our minds. A
*status* is there in the nature of things, regardless of
whether we form a mental act whereby we recognize it or not. In fact,
for Abelard, a *status* is an object of the divine mind,
whereby God preconceives the state of his creation from
eternity.[32]
A concept, or mental image *of our mind*, however, exists as
the object of our mind only insofar as our mind performs the mental
act whereby it forms this object. But this object, again, is not a
thing, indeed, not any more than any other fictitious object of our
minds. However, what distinguishes the *universal concept* from
a merely *fictitious object* of our mind is that the former
corresponds to a *status* of really existing singular things,
whereas the latter does not have anything corresponding to it.
To be sure, there are a number of points left in obscurity by
Abelard's discussion concerning the relationships of the items
distinguished here. For example, Abelard says that we cannot conceive
of the *status*. However, it seems that we can only signify by
our words whatever we can conceive. Yet, Abelard insists that besides
our concepts, our words *must* signify the *status*
themselves.[33]
A solution to the problem is only hinted at in Abelard's remark
that the names can signify *status*, because "their
inventor *meant* to impose them in accordance with certain
natures or characteristics of things, even if he did not know how to
think out the nature or characteristic of the thing" (*Five
Texts*, Spade 1994, p. 46 (116)). So, we may assume that although
the inventor of the name does not know the *status*, his vague,
"senses-bound" conception, *from which* he takes
his word's signification, is directed at the *status*, as
to *that which* he *intends* to
signify.[34]
However, Abelard does not work out this suggestion in any further
detail. Again, it is unclear how the *status* is related to the
individualized natures of the things that agree in the
*status*. If the *status* is what the divine mind
conceives of the singulars in abstraction from them, why
couldn't the nature itself be conceived in the same way? -
after all, the abstract nature would not have to be a thing any more
than a *status* is, for its existence would not be
*real* *being*, but merely its *being conceived*.
Furthermore, it seems quite plausible that Abelard's
*status* could be derived by abstraction from singular
*dicta* with the same predicate, as suggested above. But
*dicta* are the quite ordinary *significata* of
*our* propositions, which Abelard never treats as
epistemologically problematic, so why would the *status*, which
we could apparently abstract from them, be accessible only to the
divine mind?
I'm not suggesting that Abelard could not provide acceptable and
coherent answers to these and similar questions and
problems.[35]
But perhaps these problems also contributed to the fact that by the
13th century his doctrine of *status* had fallen out of
currency. Another historical factor that may have contributed to
the waning of Abelard's theory was probably the influence of the
newly translated Aristotelian writings along with the Arabic
commentaries that flooded the Latin West in the second half of the
12th century.
## 7. Universal Natures in Singular Beings and in Singular Minds
The most important influence in this period from our point of view
came from Avicenna's doctrine distinguishing the absolute
consideration of a universal nature from what applies to the same
nature in the subject in which it exists. The distinction is neatly
summarized in the following passage.
>
> Horsehood, to be sure, has a definition that does not demand
> universality. Rather it is that to which universality happens. Hence
> horsehood itself is nothing but horsehood only. For in itself it is
> neither many nor one, neither is it existent in these sensibles nor in
> the soul, neither is it any of these things potentially or actually in
> such a way that this is contained under the definition of horsehood.
> Rather [in itself it consists] of what is horsehood
> only.[36]
>
In his little treatise *On Being and Essence*, Aquinas explains
the distinction in greater detail in the following words:
> A nature, however, or essence ...can be considered in two ways.
> First, we can consider it according to its proper notion, and this is
> its absolute consideration; and in this way nothing is true of it
> except what pertains to it as such; whence if anything else is
> attributed to it, that will yield a false attribution. ...In the
> other way [an essence] is considered as it exists in this or that
> [individual]; and in this way something is predicated of it *per
> accidens* [non-essentially or coincidentally], on account of that
> in which it exists, as when we say that a man is white because
> Socrates is white, although this does not pertain to man as such.
> A nature considered in this way, however, has two sorts of existence.
> It exists in singulars on the one hand, and in the soul on the other,
> and from each of these [sorts of existence] it acquires accidents. In
> the singulars, furthermore, the essence has several [acts of]
> existence according to the multiplicity of singulars. Nevertheless, if
> we consider the essence in the first, or absolute, sense, none of
> these pertain to it. For it is false to say that the essence of man,
> considered absolutely, has existence in this singular, because if
> existence in this singular pertained to man insofar as he is man, man
> would never exist, except as this singular. Similarly, if it pertained
> to man insofar as he is man not to exist in this singular, then the
> essence would never exist in the singular. But it is true to say that
> man, but not insofar as he is man, may be in this singular or in that
> one, or else in the soul. Therefore, the nature of man considered
> absolutely abstracts from every existence, though it does not exclude
> any. And the nature thus considered is what is predicated of each
> individual.[37]
So, a common nature or essence according to its absolute consideration
abstracts from all existence, both in the singulars and in the mind.
Yet, and this is the important point, it is *the same* nature
that informs both the singulars that have this nature and the minds
conceiving of them in terms of this nature. To be sure, this sameness
is not numerical sameness, and thus it does not yield numerically one
nature. On the contrary, it is the sameness of several, numerically
distinct realizations of the same information-content, just like the
sameness of a book in its several copies. Just as there is no such
thing as a universal book over and above the singular copies of the
same book, so there is no such thing as a universal nature existing
over and above the singular things of the same nature; still, just as
it is true to say that the singular copies are the copies of *the
same book*, so it is true to say that these singulars are of
*the same nature*.
Indeed, this analogy also shows why this conception should be so
appealing from the point of view of the original epistemological
problem of the possibility of universal knowledge, without entailing
the ontological problems of naive Platonism. For just as we do
not need to read all copies of the same book in order to know what we
can find on the same page in the next copy (provided it is not a
corrupt
copy),[38]
so we can know what may apply to all singulars of the same nature
without having to experience them all. Still, we need not assume that
we can have this knowledge only if we can get somehow in a mysterious
contact with the universal nature over and above the singulars; all we
need is to learn how "to read" the singulars in our
experience to discern the "common message", the universal
nature, informing them all, uniformly, yet in their distinct
singularity. (Note that "reading the singulars" is not a
mere metaphor: this is precisely what geneticists are quite literally
doing in the process of gene sequencing, for instance, in the human
genome project.) Therefore, the *same nature* is not *the
same* in the same way as the *same individual* having this
nature is the same as long as it exists. For that *same
nature*, insofar as it is regarded as *the same*, does not
even exist at all; it is said to be the same only insofar as it is
*recognizable* *as the same*, if we disregard everything
that distinguishes its instances in several singulars. (Note here that
whoever would want to deny such a *recognizable sameness* in
and across several singulars would have to deny that he is able to
recognize the same words or the same letters in various sentences; so
such a person would not be able to read, write, or even to speak, or
understand human speech. But then we shouldn't really worry
about such a person in a philosophical debate.)
However, at this point some further questions emerge. If this common
nature is *recognizably the same* on account of disregarding
its individuating conditions in the singulars, then isn't it the
result of abstraction; and if so, isn't it in the abstractive
mind as its object? But if it is, then how can Aquinas say that it
abstracts *both* from being in the singulars *and* from
being in the mind?
Here we should carefully distinguish between what we can say about
*the same nature* *as such*, and what we can say about
*the same nature* *on account of its conditions* as it
exists in this or that subject. Again, using our analogy, we can
certainly consistently say that the same book in its first edition was
200 pages, whereas in the second only 100, because it was printed on
larger pages, but the book itself, as such, is neither 200 nor 100
pages, although it can be either. In the same way, we can consistently
say that *the same nature as such* is neither in the singulars
nor in the mind, but of course it is only insofar as it is in the mind
that it can be *recognizably the same*, on account of the
mind's abstraction. Therefore, that it is abstract and is
actually recognized as the same in its many instances is something
that belongs to the same nature only on account of being conceived by
the abstractive mind. This is the reason why the nature is called a
*universal concept*, insofar as it is in the mind. Indeed, it
is only under this aspect that it is properly called a universal. So,
although *that which* *is predicable* of several
singulars is nothing but the common nature as such, considered
absolutely, still, *that it is predicable* pertains to the same
nature only on account of being conceived by the abstractive
intellect, insofar as it is a concept of the mind.
At any rate, this is how Aquinas solves the paralogism that seems to
arise from this account, according to which the true claims that
Socrates is a man and man is a species would seem to entail the
falsity that Socrates is a species. For if we say that in the
proposition 'Socrates is a man' the predicate signifies
human nature absolutely, but the same nature, on account of its
abstract character, is a species, the false conclusion seems
inevitable (Klima 1993a).
However, since the common nature is not a species in its absolute
consideration, but only insofar as it is in the mind, the conclusion
does not follow. Indeed, this reasoning would be just as invalid as
the one trying to prove that this book, pointing to the second edition
which is actually 100 pages, is 200 pages, because the same book was
200 pages in its first edition. For just as its being 200
pages belongs to the same book only in its first edition, so its being
a species belongs to human nature only as it exists in the mind.
So, to sum up, we have to distinguish here between the nature existing
in this singular (such as the individualized human nature of Socrates,
which is numerically one item, mind-independently existing in
Socrates), the universal (such as the species of human nature existing
only in the mind as its object considered in abstraction from the
individuating conditions it has in the singular humans), and the
nature according to its absolute consideration (such as human nature
considered in abstraction both from its existence in the singulars as
its subjects and in the mind as its object). What establishes the
distinction of these items is the difference of what can be truly said
of them on account of the different conditions they have in this or
that. What establishes the unity of these items, however, is that they
are somehow the same nature existing and considered under different
conditions. For the human nature in Socrates is numerically one, it is
numerically distinct from the human nature in Plato, and it has real,
mind-independent existence, which is in fact nothing but the existence
of Socrates, i.e., Socrates' life. However, although the human
nature in Socrates is a numerically distinct item from the human
nature in Plato, insofar as it is human nature, it is formally, in
fact, specifically the same nature, for it is human nature, and not
another, specifically different, say, feline or canine nature. It is
precisely this formal, specific, mind-independent sameness of these
items (for, of course, say, this cat and that cat do not differ
insofar as they are feline, regardless of whether there is anyone to
recognize this) that allows the abstractive human mind to recognize
this sameness by abstracting from those individuating conditions on
account of which this individualized nature in this individual
numerically differs from that individualized nature in that
individual. Thus, insofar as the formally same nature is actually
considered by a human mind in abstraction from these individualizing
conditions, it is a universal, a species, an abstract object of a
mental act whereby a human mind conceives of any individualized human
nature without its individuating conditions. But, as we could see
earlier, nothing can be a human nature existing without its
individuating conditions, although any individualized human nature can
be thought of without thinking of its necessarily conjoined
individuating conditions (just as triangular shape can be thought of
without thinking of its necessarily conjoined conditions of being
isosceles or being scalene). So for this universal concept to be is
nothing but to be thought of, to be an object of the abstractive human
mind. Finally, human nature in its absolute consideration is the same
nature abstracted even from this being, i.e., even from being an
object of the mind. Thus, as opposed to both in its existence in
individuals and in the mind, neither existence, nor non-existence, nor
unity, nor disunity or multiplicity belongs to it, as it is considered
without any of these; indeed, it is considered without considering its
being considered, for it is considered only in terms of what belongs
to it on account of itself, not considering anything that has to
belong to it on account of something else in which it can only be
(i.e., whether in the mind or in reality). So, the nature according to
its absolute consideration does not have numerical unity or
multiplicity, which it has as it exists in individuals, nor does it
have the formal unity that it has in the consideration of the mind
(insofar as it is one species among many), but it has that formal
unity which precedes even the recognition of this unity by the
abstractive
mind.[39]
Nevertheless, even if with these distinctions Aquinas' solution
of the paralogism works and what he says about the existence and unity
vs. multiplicity of a common nature can be given a consistent
interpretation, the emergence of the paralogism itself and the
complexities involved in explaining it away, as well as the problems
involved in providing this consistent interpretation show the inherent
difficulties of this account. The main difficulty is the trouble of
keeping track of what we are talking about when it becomes crucial to
know what pertains to what on account of what; in general, when the
conditions of identity and distinction of the items we are talking
about become variable and occasionally rather unclear.
Indeed, we can appreciate just how acute these difficulties may become
if we survey the items that needed to be distinguished in what may be
described as the common conceptual framework of the
"realist" *via antiqua*, the "old way"
of doing philosophy and theology, before the emergence of the
"modern way", the "nominalist" *via
moderna* challenging some fundamental principles of the older
framework, resulting mostly from the semantic innovations introduced
by William Ockham. The survey of these items and the problems they
generate will then allow us to see in greater detail the main
motivation for Ockham's innovations.
## 8. Universals in the *Via Antiqua*
In this framework, we have first of all the universal or common terms
of spoken and written languages, which are common on account of being
imposed upon universal concepts of the human mind. The concepts
themselves are universal on account of being obtained by the activity
of the abstractive human mind from experiences of singulars. But the
process of concept formation also involves various stages.
In the first place, the sensory information collected by the single
senses is distinguished, synthesized, and collated by the higher
sensory faculties of the common sense [*sensus communis*] and
the so-called cogitative power [*vis cogitativa*], to be stored
in sensory memory as *phantasms*, the sensory representations
of singulars in their singularity. The active intellect
[*intellectus agens*] uses this sensory information to extract
its intelligible content and produce the intelligible species
[*species intelligibiles*], the universal representations of
several individuals in their various degrees of formal unity,
disregarding their distinctive features and individuating conditions
in the process of abstraction.
The intelligible species are stored in the intellectual memory of the
potential intellect [*intellectus possibilis*], which can then
use them to form the corresponding concept in an act of thought, for
example, in forming a judgment. The intelligible species and the
concepts themselves, being formed by individual human minds, are
individual in their being, insofar as they pertain to this or that
human mind. However, since they are the result of abstraction, in
their information content they are universal.
Now insofar as this universal information content is common to all
minds that form these concepts at all, and therefore it is a common
intelligible content gained by these minds from their objects insofar
as they are conceived by these minds in a universal manner, later
scholastic thinkers refer to it as the objective concept
[*conceptus obiectivus*], distinguishing it from the formal or
subjective concepts [*conceptus formales seu subiectivi*],
which are the individual acts of individual minds carrying this
information (just as the individual copies of a book carry the
information content of the
book).[40]
It is this objective concept that is identified as the universal of
the human mind (distinguished from the universals of the divine mind),
namely, a species, a genus, a difference, a property, or an accident.
(Note that these are only the simple concepts. Complex concepts, such
as those corresponding to complex terms and propositions are the
products of the potential intellect using these concepts in its
further operations.)
These universals, then, as the objective concepts of the mind, would
be classified as beings of reason [*entia rationis*], the being
of which consists in their being conceived (cf. Klima 1993b and
Schmidt 1966). To be sure, they are not merely fictitious objects, for
they are grounded in the nature of things insofar as they carry the
universal information content abstracted from the singulars. But then
again, the universal information content of the objective concept
itself, considered not insofar as it is in the mind as its object, but
in itself, disregarding whatever may carry it, is distinguished from
its carriers both in the mind and in the ultimate objects of the mind,
the singular things, as the nature of these things in its absolute
consideration.
However, the common nature as such cannot exist on its own any more
than a book could exist without any copies of it or any minds
conceiving of it. So, this common nature has real existence only in
the singulars, informing them, and giving them their recognizably
common characteristics. However, these common characteristics can be
recognized as such only by a mind capable of abstracting the common
nature from experiencing it in its really existing singular instances.
But it is on account of the real existence of these individualized
instances in the singulars that the common nature can truly be
predicated of the singulars, as long as they are actually informed by
these individualized instances.
The items thus distinguished and their interconnections can be
represented by the following block-diagram. The dashed frames indicate
that the items enclosed by them have a certain reduced ontological
status, a "diminished" mode of being, while the boxes
partly sharing a side indicate the (possible) partial identities of
the items they
enclose.[41]
The arrows pointing from the common term to the singulars, their
individualized natures and items in the mind on this diagram represent
semantic relations, which I am going to explain later, in connection
with Ockham's innovations. The rest of the arrows indicate the
flow of information from experience of singulars through the sensory
faculties to the abstractive mind, and to the application of the
universal information abstracted by the mind to further singular
experiences in acts of judgment.
![A box labeled (1) 'common term' has arrows pointing to boxes labeled (2) 'individual natures', (3) 'singulars', (4) 'absolute nature', (5) 'objective concept', and (6) 'subjective concept'. The arrow from (1) to (2) is labeled 'signification', the arrow from (1) to (3) is dashed and labeled 'supposition', the arrow from (1) to (6) is labeled 'subordination'. Boxes (2) and (3) share a side. Boxes (4) and (5) share a side and have dashed frames. Box (3) has an arrow to box (7) 'phantasms' which has an arrow to box (8) 'intelligible species' which has an arrow to box (9) 'subjective concept' which has an arrow to box (5). Boxes (7),(8), and (9) are inside a large box (10) labeled 'MIND'.](image4.gif)
Figure 4. The *via antiqua*
conception
Obviously, this is a rather complicated picture. However, its
complexity itself should not be regarded as problematic or even
surprising, for that matter. After all, this diagram merely
summarizes, and distinguishes the main stages of, how the human mind
processes the intelligible, universal information received from a
multitude of singular experiences, and then again, how it applies this
information in classifying further experiences. This process may
reasonably be expected to be complex, and should not be expected to
involve fewer stages than, e.g., setting up, and retrieving
information from, a computer database.
What renders this picture more problematic is rather the difficulties
involved in identifying and distinguishing these stages and the
corresponding items. Further complications were also generated by the
variations in terminology among several authors, and the various
criteria of identity and distinctness applied by them in introducing
various different notions of identity and distinctness. In fact, many
of the great debates of the authors working within this framework can
be characterized precisely as disputing the identity or distinctness
of the items featured here, or the very criteria of identifying or
distinguishing them.
For example, already Abelard raised the question whether the concept
or mental image, which we may identify in the diagram as the objective
concept of later authors, should be identified with the act of
thought, which we may identify as the subjective concept, or perhaps a
further act of the mind, called *formatio*, namely, the
potential intellect's act of forming the concept, using the
intelligible species as the principle of its action. Such distinctions
were later on severely criticized by authors such as John Peter Olivi
and others, who argued for the elimination of intelligible species,
and, in general, of any intermediaries between an act of the intellect
and its ultimate objects, the singulars conceived in a universal
manner.[42]
Again, looking at the diagram on the side of the singulars, most
13th-century authors agreed that what accounts for the
specific unity of several individuals of the same species, namely,
their specific nature, should be something other than what accounts
for their numerical distinctness, namely, their principle of
individuation. However, one singular entity in a species of several
co-specific individuals has to contain both the principle of the
specific unity of these individuals and its own principle of
individuation. Therefore, this singular entity, being a composite at
least of its specific nature and its principle of individuation, has
to be distinct from its specific nature. At any rate, this is the
situation with material substances, whose principle of individuation
was held to be their matter. However, based on this reasoning,
immaterial substances, such as angels, could not be regarded as
numerically distinct on account of their matter, but only on account
of their form. But since form is the principle of specific unity,
difference in form causes specific diversity. Therefore, on this
basis, any two angels had to be regarded as different in species. This
conclusion was explicitly drawn by Aquinas and others, but it was
rejected by Augustinian theologians, and it was condemned in Paris in
1277.[43]
So, no wonder authors such as Henry of Ghent and Duns Scotus worked
out alternative accounts of individuation, introducing not only
different principles of individuation, such as the Scotists'
famous (or infamous) *haecceity*, but also different criteria
of distinctness and identity, such as those grounding Henry of
Ghent's *intentional distinction*, or Scotus's
*formal
distinction*,[44]
or even later Suarez' *modal
distinction*.[45]
But even further problems arose from considering the identity or
distinctness of the individualized natures signified by several common
terms in one and the same individual. The metaphysical debate over the
real distinction of essence and existence from this point of view is
nothing but the issue whether the individualized common nature
signified by the definition of a thing is the same as the act of being
signified by the verb 'is' in the same thing. In fact, the
famous problem of the plurality vs. unity of substantial forms may
also be regarded as a dispute over whether the common natures
signified by the substantial predicates on the Porphyrian tree in the
category of substance are distinct or the same in the same individual
(cf. Callus 1967). Finally, and this appears to be the primary
motivation for Ockham's innovations, there was the question
whether one must regard all individualized common natures signified in
the same individual by several predicates in the ten Aristotelian
categories as distinct from one another. For the affirmative answer
would involve commitment to a virtually limitless multiplication of
entities.
Indeed, according to Ockham, the *via antiqua* conception would
entail that
>
> a column is to the right by to-the-rightness, God is creating by
> creation, is good by goodness, just by justice, mighty by might, an
> accident inheres by inherence, a subject is subjected by subjection,
> the apt is apt by aptitude, a chimera is nothing by nothingness,
> someone blind is blind by blindness, a body is mobile by mobility, and
> so on for other, innumerable
> cases.[46]
>
And this is nothing, but "multiplying beings according to the
multiplicity of terms... which, however, is erroneous and leads
far away from the
truth".[47]
## 9. Universals in the *Via Moderna*
To be sure, as the very debates within the *via antiqua*
framework concerning the identity or non-identity of various items
distinguished in that framework indicate, Ockham's charges are
not quite
justified.[48]
After all, several *via antiqua* authors *did* allow
the identification of the *significata* of terms belonging to
various categories, so their "multiplication of beings"
did not necessarily match the multiplicity of terms. Furthermore,
since *via antiqua* authors also distinguished between various
modes or senses of being, allowing various sorts of
"diminished" kinds of being, such as *beings of
reason*, their ontological commitments were certainly not as
unambiguous as Ockham would have us believe in this passage. However,
if we contrast the diagram of the *via antiqua* framework above
with the following schematic representation of the *via
moderna* framework introduced by Ockham, we can immediately
appreciate the point of Ockham's innovations.
![shows the simplified version of Figure 4, what we get as a result of the nominalist reductions. It consists of three main boxes, labeled 'common term' on the top, 'singulars' on the lower left, and 'mind' on the lower right. The 'mind' box contains two sub-boxes, labeled 'phantasms' and 'common concepts', with an arrow pointing from the former to the latter, indicating the flow of information, as does the arrow pointing from the box 'singulars' to the box 'phantasms'. The rest of the arrows indicate semantic relations: the full arrow pointing from the box 'common term' to 'singulars' is labeled 'signification', the dashed arrow is labeled 'supposition'. The full arrow pointing from 'common term' to 'common concept' is labeled 'subordination'. Finally, the unlabeled full arrow pointing from 'common concept' to 'singulars' represents natural signification or signification, as is clear from the text.](image5.gif)
Figure 5. The *via moderna*
conception
Without a doubt, it is the captivating simplicity of this picture,
especially as compared with the complexity of the *via antiqua*
picture, that was the major appeal of the Ockhamist approach. There
are fewer items here, equally on the same ontological footing,
distinguished from one another in terms of the same unambiguous
distinction, the numerical distinction between individual real
entities.
To be sure, there still are universals in this picture. But these
universals are neither common natures "contracted" to
individuals by some really or merely formally distinct principle of
individuation, nor some universal objects of the mind, which exist in
a "diminished" manner, as *beings of reason*.
Ockham's universals, at least in his mature
theory,[49]
are just our common terms and our common concepts. Our common terms,
which are just singular utterances or inscriptions, are common in
virtue of being subordinated to our common concepts. Our common
concepts, on the other hand, are just singular acts of our singular
minds. Their universality consists simply in the universality of their
representative function. For example, the common term
'man' is a spoken or written universal term of English,
because it is subordinated to that concept of our minds by which we
conceive of each man indifferently (see Klima 2011). It is this
indifference in its representative function that enables the singular
act of my mind to conceive of each man in a universal manner, and the
same goes for the singular act of your mind. Accordingly, there is no
need to assume that there is anything in the individual humans,
distinct from these humans themselves, a common yet individualized
nature waiting to be abstracted by the mind. All we need to assume is
that two humans are more similar to each other than either of them to
a brute animal, and all animals are more similar to each other than
any of them to a plant, etc., and that the mind, being able to
recognize this similarity, is able to represent the humans by means of
a common specific concept, the animals by means of a common generic
concept, all living things by means of a more general generic concept,
etc.[50]
In this way, then, the common terms subordinated to these concepts
need not signify some abstract common nature in the mind, and
consequently its individualized instances in the singulars, for they
directly signify the singulars themselves, just as they are directly
conceived by the universally representative acts of the mind. So, what
these common terms signify are just the singulars themselves, which
are also the things referred to by these terms when they are used in
propositions. Using the customary rendering of the medieval logical
terminology, the things ultimately signified by a common term are its
*significata*, while the things referred to by the same term
when it is used in a proposition are their (personal)
*supposita*.[51]
Now if we compare the two diagrams representing the respective
conceptions of the two *viae*, we can see just how radically
Ockham's innovations changed the character of the semantic
relations connecting terms, concepts and things. In both
*viae*, common terms are subordinated to common concepts, and
it is in virtue of this subordination that they ultimately signify
what their concepts represent. In the *via moderna*, a concept
is just an act of the mind representing singulars in a more or less
indifferent manner, yielding a more or less universal signification
for the term. In the *via antiqua*, however, the act of the
mind is just one item in a whole series of intermediary
representations, distinguished in terms of their different functions
in processing universal information, and connected by their common
content, ultimately representing the common, yet individualized
natures of their
singulars.[52]
Accordingly, a common term, expressing this common content, is
primarily subordinated to the objective concept of the mind. But of
course, this objective concept is only the common content of the
singular representative acts of singular minds, their subjective
concepts, formed by means of the intelligible species, abstracted by
their active intellects. On the other hand, the objective concept,
abstracting from all individuating conditions, expresses only what is
common to all singulars, namely, their nature considered absolutely.
But this absolutely considered nature is only the common content of
what informs each singular of the same nature in its actual real
existence. So, the term's ultimate *significata* will
have to be the individualized natures of the singulars. But these
ultimate *significata* may still not be the singulars
themselves, namely, when the things informed by these
*significata* are not metaphysically simple. In the *via
moderna* conception, therefore, the ultimate *significata*
of a term are nothing but those singular things that can be the
term's *supposita* in various propositions, as a matter
of semantics. By contrast, in the *via antiqua* conception, a
term's ultimate *significata* may or may not be the same
things as the term's (personal) *supposita*, depending on
the constitution of these *supposita*, as a matter of
metaphysics. The singulars will be the *supposita* of the term
when it is used as the subject term of a proposition in which
something is predicated about the things informed by these ultimate
*significata* (in the case of metaphysically simple entities,
the term's *significata* and *supposita*
coincide).[53]
Nevertheless, despite the nominalists' charges to the contrary,
the *via antiqua* framework, as far as its semantic
considerations are concerned, was no more committed to the real
distinction of the *significata* and *supposita* of its
common terms than the *via moderna* framework was. For if the
semantic theory in itself had precluded the identification of these
semantic values, then the question of possible identity of these
values could not have been meaningfully raised in the first place.
Furthermore, in that case such identifications would have been
precluded as meaningless even when talking about metaphysically simple
entities, such as angels and God, whereas the metaphysical simplicity
of these entities was expressed precisely in terms of such
identifications. But also in the mundane cases of the
*significata* and *supposita* of concrete and abstract
universal terms in the nine accidental categories, several *via
antiqua* authors argued for the identification of these semantic
values both within and across categories. First of all there was
Aristotle's authority for the claim that action and passion are
the same
motion,[54]
so the *significata* of terms in these two categories could not be
regarded as really distinct entities. But several authors also argued
for the identification of relations with their foundations, that is to
say, for the identity of the *significata* of relative terms with the
*significata* of terms in the categories quantity and quality. (For
example, on this conception, my equality in height to you would be
just my height, provided you were of the same height, and not a
distinct "equality-thing" somehow attached to my height,
caused by our equal
heights.)[55]
By contrast, what makes the *via moderna* approach simpler is
that it "automatically" achieves such identifications
already on the basis of its semantic principles. Since in this
approach the *significata* of concrete common terms are just
the singulars directly represented by the corresponding concepts, the
*significata* and (personal) *supposita* of terms are
taken to be the same singulars from the beginning. So these common
terms *signify* and *supposit* for the same things
either absolutely, provided the term is *absolute*, or in
relation to other singulars, provided the term is
*connotative.* But even in the case of connotative terms, such
as relative terms (in fact, all terms in the nine accidental
categories, except for some abstract terms in the category quality,
according to Ockham) we do not need to assume the existence of some
mysterious relational entities informing singular substances. For
example, the term 'father' need not be construed as
signifying in me an inherent relation, my fatherhood, somehow
connecting me to my son, and suppositing for me on that account in the
context of a proposition; rather, it should merely be construed as
signifying me in relation to my son, thereby suppositing for me in the
context of a proposition, while connoting my son.
## 10. The Separation of the *Viae*, and the Breakdown of Scholastic Discourse in Late-Medieval Philosophy
The appeal of the simplicity of the *via moderna* approach,
especially as it was systematically articulated in the works of John
Buridan and his students, had a tremendous impact on late-medieval
philosophy and theology. To be sure, many late-medieval scholars, who
were familiar with both ways, would have shared the sentiment
expressed by the remark of Domingo Soto (1494-1560, describing
himself as someone who was "born among nominalists and raised by
realists")[56]
to the effect that whereas the realist doctrine of the *via
antiqua* was more difficult to understand, still, the nominalist
doctrine of the *via moderna* was more difficult to
believe.[57]
Nevertheless, the overall simplicity and internal consistency of the
nominalist approach were undeniable, gathering a strong following by
the 15th century in all major universities of Europe, old
and newly established
alike.[58]
The resulting separation and the ensuing struggle of the medieval
*viae* did not end with the victory of one over the other.
Instead, because the separation was primarily *semantic* in
nature, it embroiled the parties in increasingly complicated
ways of talking past each other, generating ever growing
dissatisfaction, even contempt, in a new, lay, humanist
intelligentsia,[59]
and it ended with the demise of the characteristically medieval
conceptual frameworks of both *viae* in the late-medieval and
early modern period.
These developments, therefore, also put an end to the specifically
*medieval* problem of universals. However, the increasingly
rarified late-medieval problem eventually vanished only to give way to
several modern variants of *recognizably* *the same*
problem, which keeps recurring in one form or another in contemporary
philosophy as well. Indeed, one may safely assert that as long as
there is interest in the questions of how a human language obviously
abounding in universal terms can be meaningfully mapped onto a world
of singulars, there *is* a problem of universals, regardless of
the details of the particular conceptual framework in which the
relevant questions are articulated. Clearly, in this sense, the
problem of universals is itself a universal, the universal problem of
accounting for the relationships between mind, language, and
reality. |
consequentialism | ## 1. Classic Utilitarianism
The paradigm case of consequentialism is utilitarianism, whose classic
proponents were Jeremy Bentham (1789), John Stuart Mill (1861), and
Henry Sidgwick (1907). (For predecessors, see Schneewind 1997, 2002.)
Classic utilitarians held hedonistic act consequentialism. *Act
consequentialism* is the claim that an act is morally right if and
only if that act maximizes the good, that is, if and only if the total
amount of good for all minus the total amount of bad for all is
greater than this net amount for any incompatible act available to the
agent on that occasion. (Cf. Moore 1912, chs. 1-2.)
*Hedonism* then claims that pleasure is the only intrinsic good
and that pain is the only intrinsic bad.
These claims are often summarized in the slogan that an act is right
if and only if it causes "the greatest happiness for the
greatest number." This slogan is misleading, however. An act can
increase happiness for most (the greatest number of) people but still
fail to maximize the net good in the world if the smaller number of
people whose happiness is not increased lose much more than the
greater number gains. The principle of utility would not allow that
kind of sacrifice of the smaller number to the greater number unless
it increased the overall net good more than any alternative act
would.
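The arithmetic behind this point can be made concrete with a toy numeric illustration (not from the classic utilitarians themselves; all numbers are stipulated for the example):

```python
# Toy illustration: two available acts affecting five people, with each
# number recording the change in one person's happiness. Act A raises
# happiness for the majority (4 of 5) by 1 unit each, but costs the
# remaining person 10 units; Act B leaves everyone unchanged.
act_a = [+1, +1, +1, +1, -10]  # benefits the greatest number
act_b = [0, 0, 0, 0, 0]        # the status quo

def net_good(changes):
    """Total net good: the sum of gains minus losses across all people."""
    return sum(changes)

# Act A increases happiness for the greatest number of people...
beneficiaries_a = sum(1 for x in act_a if x > 0)
beneficiaries_b = sum(1 for x in act_b if x > 0)
assert beneficiaries_a > beneficiaries_b

# ...yet Act B yields the greater net good, so the principle of utility
# requires Act B, contrary to the "greatest number" slogan.
assert net_good(act_b) > net_good(act_a)
```

The asserts pass: helping the greatest number and maximizing the net good come apart whenever the minority's loss outweighs the majority's combined gain.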
Classic utilitarianism is consequentialist as opposed to deontological
because of what it denies. It denies that moral rightness depends
directly on anything other than consequences, such as whether the
agent promised in the past to do the act now. Of course, the fact that
the agent promised to do the act might indirectly affect the
act's consequences if breaking the promise will make other
people unhappy. Nonetheless, according to classic utilitarianism, what
makes it morally wrong to break the promise is its future effects on
those other people rather than the fact that the agent promised in the
past (Sinnott-Armstrong 2009).
Since classic utilitarianism reduces all morally relevant factors
(Kagan 1998, 17-22) to consequences, it might appear simple.
However, classic utilitarianism is actually a complex combination of
many distinct claims, including the following claims about the moral
rightness of acts:
>
>
> Consequentialism = whether an act is morally right depends only on
> *consequences* (as opposed to the circumstances or the
> intrinsic nature of the act or anything that happens before the
> act).
>
>
>
> Actual Consequentialism = whether an act is morally right depends only
> on the *actual* consequences (as opposed to foreseen,
> foreseeable, intended, or likely consequences).
>
>
>
> Direct Consequentialism = whether an act is morally right depends only
> on the consequences of *that act itself* (as opposed to the
> consequences of the agent's motive, of a rule or practice that
> covers other acts of the same kind, and so on).
>
>
>
> Evaluative Consequentialism = moral rightness depends only on the
> *value* of the consequences (as opposed to non-evaluative
> features of the consequences).
>
>
>
> Hedonism = the value of the consequences depends only on the
> *pleasures* and *pains* in the consequences (as opposed
> to other supposed goods, such as freedom, knowledge, life, and so
> on).
>
>
>
> Maximizing Consequentialism = moral rightness depends only on which
> consequences are *best* (as opposed to merely satisfactory or
> an improvement over the status quo).
>
>
>
> Aggregative Consequentialism = which consequences are best is some
> function of the values of *parts* of those consequences (as
> opposed to rankings of whole worlds or sets of consequences).
>
>
>
> Total Consequentialism = moral rightness depends only on the
> *total* net good in the consequences (as opposed to the average
> net good per person).
>
>
>
> Universal Consequentialism = moral rightness depends on the
> consequences for *all* people or sentient beings (as opposed to
> only the individual agent, members of the individual's society,
> present people, or any other limited group).
>
>
>
> Equal Consideration = in determining moral rightness, benefits to one
> person matter *just as much* as similar benefits to any other
> person (as opposed to putting more weight on the worse or worst
> off).
>
>
>
> Agent-neutrality = whether some consequences are better than others
> does not depend on whether the consequences are evaluated from the
> perspective of the agent (as opposed to an observer).
>
>
>
These claims could be clarified, supplemented, and subdivided further.
What matters here is just that most pairs of these claims are
logically independent, so a moral theorist could consistently accept
some of them without accepting others. Yet classic utilitarians
accepted them all. That fact makes classic utilitarianism a more
complex theory than it might appear at first sight.
It also makes classic utilitarianism subject to attack from many
angles. Persistent opponents posed plenty of problems for classic
utilitarianism. Each objection led some utilitarians to give up some
of the original claims of classic utilitarianism. By dropping one or
more of those claims, descendants of utilitarianism can construct a
wide variety of moral theories. Advocates of these theories often call
them consequentialism rather than utilitarianism so that their
theories will not be subject to refutation by association with the
classic utilitarian theory.
## 2. What is Consequentialism?
This array of alternatives raises the question of which moral theories
count as consequentialist (as opposed to deontological) and why. In
actual usage, the term "consequentialism" seems to be used
as a family resemblance term to refer to any descendant of classic
utilitarianism that remains close enough to its ancestor in the
important respects. Of course, different philosophers see different
respects as the important ones (Portmore 2020). Hence, there is no
agreement on which theories count as consequentialist under this
definition.
To resolve this vagueness, we need to determine which of the various
claims of classic utilitarianism are essential to consequentialism.
One claim seems clearly necessary. Any consequentialist theory must
accept the claim that I labeled "consequentialism",
namely, that certain normative properties depend only on consequences.
If that claim is dropped, the theory ceases to be
consequentialist.
It is less clear whether that claim by itself is sufficient to make a
theory consequentialist. Several philosophers assert that a moral
theory should not be classified as consequentialist unless it is
agent-neutral (McNaughton and Rawling 1991, Howard-Snyder 1994, Pettit
1997). This narrower definition is motivated by the fact that many
self-styled critics of consequentialism argue against
agent-neutrality.
Other philosophers prefer a broader definition that does not require a
moral theory to be agent-neutral in order to be consequentialist
(Bennett 1989; Broome 1991, 5-6; and Skorupski 1995). Criticisms
of agent-neutrality can then be understood as directed against one
part of classic utilitarianism that need not be adopted by every moral
theory that is consequentialist. Moreover, according to those who
prefer a broader definition of consequentialism, the narrower
definition conflates independent claims and obscures a crucial
commonality between agent-neutral consequentialism and other moral
theories that focus exclusively on consequences, such as moral egoism
and recent self-styled consequentialists who allow agent-relativity
into their theories of value (Sen 1982, Broome 1991, Portmore 2001,
2003, 2011).
A definition solely in terms of consequences might seem too broad,
because it includes absurd theories such as the theory that an act is
morally right if it increases the number of goats in Texas. Of course,
such theories are implausible. Still, it is not implausible to call
them consequentialist, since they do look only at consequences. The
implausibility of one version of consequentialism does not make
consequentialism implausible in general, since other versions of
consequentialism still might be plausible.
Besides, anyone who wants to pick out a smaller set of moral theories
that excludes this absurd theory may talk about evaluative
consequentialism, which is the claim that moral rightness depends only
on the value of the consequences. Then those who want to talk about
the even smaller group of moral theories that accepts both evaluative
consequentialism and agent-neutrality may describe them as
agent-neutral evaluative consequentialism. If anyone still insists on
calling these smaller groups of theories by the simple name,
'consequentialism', this narrower word usage will not
affect any substantive issue.
Still, if the definition of consequentialism becomes too broad, it
might seem to lose force. Some philosophers have argued that any moral
theory, or at least any plausible moral theory, could be represented
as a version of consequentialism (Sosa 1993, Portmore 2009, Dreier
1993 and 2011; but see Brown 2011). If so, then it means little to
label a theory as consequentialist. The real content comes only by
contrasting theories that are not consequentialist.
In the end, what matters is only that we get clear about which
theories a particular commentator counts as consequentialist or not
and which claims are supposed to make them consequentialist or not.
Only then can we know which claims are at stake when this commentator
supports or criticizes what they call "consequentialism".
Then we can ask whether each objection really refutes that particular
claim.
## 3. What is Good? Hedonistic vs. Pluralistic Consequentialisms
Some moral theorists seek a single simple basic principle because they
assume that simplicity is needed in order to decide what is right when
less basic principles or reasons conflict. This assumption seems to
make hedonism attractive. Unfortunately, however, hedonism is not as
simple as they assume, because hedonists count both pleasures and
pains. Pleasure is distinct from the absence of pain, and pain is
distinct from the absence of pleasure, since sometimes people feel
neither pleasure nor pain, and sometimes they feel both at once.
Nonetheless, hedonism was adopted partly because it seemed simpler
than competing views.
The simplicity of hedonism was also a source of opposition. From the
start, the hedonism in classic utilitarianism was treated with
contempt. Some contemporaries of Bentham and Mill argued that hedonism
lowers the value of human life to the level of animals, because it
implies that, as Bentham said, an unsophisticated game (such as
push-pin) is as good as highly intellectual poetry if the game creates
as much pleasure (Bentham 1843). *Quantitative* hedonists
sometimes respond that great poetry almost always creates more
pleasure than trivial games (or sex and drugs and rock-and-roll),
because the pleasures of poetry are more certain (or probable),
durable (or lasting), fecund (likely to lead to other pleasures), pure
(unlikely to lead to pains), and so on.
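The quantitative hedonist's weighing of these dimensions can be sketched schematically. The scoring function and all numeric weights below are invented for illustration and are not Bentham's own calculus; the point is only that certainty, duration, fecundity, and purity can make a milder pleasure outscore an intense but fleeting one:

```python
# Schematic sketch of a quantitative-hedonist comparison, in the spirit of
# the dimensions named above. All numbers are stipulated for illustration.

def expected_pleasure(intensity, probability, duration, fecundity, purity):
    """Crude score: certain, lasting pleasures count for more; fecundity
    adds expected follow-on pleasure; impurity subtracts expected pain."""
    return probability * intensity * duration + fecundity - (1 - purity) * intensity

# Poetry: moderate intensity, but certain, durable, fecund, and pure.
poetry = expected_pleasure(intensity=3, probability=0.9, duration=4,
                           fecundity=2, purity=0.95)

# Push-pin: intense but chancy, brief, barren, and somewhat impure.
push_pin = expected_pleasure(intensity=5, probability=0.6, duration=1,
                             fecundity=0, purity=0.7)

# On these stipulated numbers, the quantitative hedonist's reply goes
# through: the less intense pleasure of poetry scores higher overall.
assert poetry > push_pin
```

Whether such numbers can be assigned in a principled, non-arbitrary way is of course one of the standing objections to the quantitative approach.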
Mill used a different strategy to avoid calling push-pin as good as
poetry. He distinguished higher and lower qualities of pleasures
according to the preferences of people who have experienced both kinds
(Mill 1861, 56; compare Plato 1993 and Hutcheson 1755, 421-23).
This *qualitative* hedonism has been subjected to much
criticism, including charges that it is incoherent and does not count
as hedonism (Moore 1903, 80-81; cf. Feldman 1997,
106-24).
Even if qualitative hedonism is coherent and is a kind of hedonism, it
still might not seem plausible. Some critics argue that not
*all* pleasures are valuable, since, for example, there is no
value in the pleasures that a sadist gets from whipping a victim or
that an addict gets from drugs. Other opponents object that not
*only* pleasures are intrinsically valuable, because other
things are valuable independently of whether they lead to pleasure or
avoid pain. For example, my love for my wife does not seem to become
less valuable when I get less pleasure from her because she contracts
some horrible disease. Similarly, freedom seems valuable even when it
creates anxiety, and even when it is freedom to do something (such as
leave one's country) that one does not want to do. Again, many
people value knowledge of distant galaxies regardless of whether this
knowledge will create pleasure or avoid pain.
These points against hedonism are often supplemented with the story of
the experience machine found in Nozick 1974 (42-45; cf. De
Brigard 2010) and the movie, *The Matrix*. People on this
machine *believe* they are spending time with their friends,
winning Olympic gold medals and Nobel prizes, having sex with their
favorite lovers, or doing whatever gives them the greatest balance of
pleasure over pain. Although they have no real friends or lovers and
actually accomplish nothing, people on the experience machine get just
as much pleasure as if their beliefs were true. Moreover, they feel no
(or little) pain. Assuming that the machine is reliable, it would seem
irrational not to hook oneself up to this machine *if* pleasure
and pain were all that mattered, as hedonists claim. Since it does
*not* seem irrational to refuse to hook oneself up to this
machine, hedonism seems inadequate. The reason is that hedonism
overlooks the value of *real* friendship, knowledge, freedom,
and achievements, all of which are lacking for deluded people on the
experience machine.
Some hedonists claim that this objection rests on a misinterpretation
of hedonism. If hedonists see pleasure and pain as
*sensations*, then a machine might be able to reproduce those
sensations. However, we can also say that a mother is pleased that her
daughter gets good grades. Such *propositional* pleasure occurs
only when the state of affairs in which the person takes pleasure
exists (that is, when the daughter actually gets good grades). But the
relevant states of affairs would not really exist if one were hooked
up to the experience machine. Hence, hedonists who value propositional
pleasure rather than or in addition to sensational pleasure can deny
that more pleasure is achieved by hooking oneself up to such an
experience machine (Feldman 1997, 79-105; see also
Tannsjo 1998 and Feldman 2004 for more on hedonism).
A related position rests on the claim that what is good is desire
satisfaction or the fulfillment of preferences; and what is bad is the
frustration of desires or preferences. What is desired or preferred is
usually not a sensation but is, rather, a state of affairs, such as
having a friend or accomplishing a goal. If a person desires or
prefers to have true friends and true accomplishments and not to be
deluded, then hooking this person up to the experience machine need
not maximize desire satisfaction. Utilitarians who adopt this theory
of value can then claim that an agent morally ought to do an act if
and only if that act maximizes desire satisfaction or preference
fulfillment (that is, the degree to which the act achieves whatever is
desired or preferred). What maximizes desire satisfaction or
preference fulfillment need not maximize sensations of pleasure when
what is desired or preferred is not a sensation of pleasure. This
position is usually described as *preference
utilitarianism*.
One problem for preference utilitarianism concerns how to make
interpersonal comparisons (though this problem also arises for several
other theories of value). If we want to know what one person prefers,
we can ask what that person would choose in conflicts. We cannot,
however, use the same method to determine whether one person's
preference is stronger or weaker than another person's
preference, since these different people might choose differently in
the decisive conflicts. We need to settle which preference (or
pleasure) is stronger because we may know that Jones prefers A's
being done to A's not being done (and Jones would receive more
pleasure from A's being done than from A's not being
done), whereas Smith prefers A's not being done (and Smith would
receive more pleasure from A's not being done than from
A's being done). To determine whether it is right to do A or not
to do A, we must be able to compare the strengths of Jones's and
Smith's preferences (or the amounts of pleasure each would
receive in her preferred outcome) in order to determine whether doing
A or not doing A would be better overall. Utilitarians and
consequentialists have proposed many ways to solve this problem of
interpersonal comparison, and each attempt has received criticisms.
Debates about this problem still rage. (For a recent discussion with
references, see Coakley 2015.)
Preference utilitarianism is also often criticized on the grounds that
some preferences are misinformed, crazy, horrendous, or trivial. I
might prefer to drink the liquid in a glass because I think that it is
beer, though it really is strong acid. Or I might prefer to die merely
because I am clinically depressed. Or I might prefer to torture
children. Or I might prefer to spend my life learning to write as
small as possible. In all such cases, opponents of preference
utilitarianism can deny that what I prefer is really good. Preference
utilitarians can respond by limiting the preferences that make
something good, such as by referring to informed desires that do not
disappear after therapy (Brandt 1979). However, it is not clear that
such qualifications can solve all of the problems for a preference
theory of value without making the theory circular by depending on
substantive assumptions about which preferences are for good
things.
Many consequentialists deny that all values can be reduced to any
single ground, such as pleasure or desire satisfaction, so they
instead adopt a pluralistic theory of value. Moore's *ideal
utilitarianism*, for example, takes into account the values of
beauty and truth (or knowledge) in addition to pleasure (Moore 1903,
83-85, 194; 1912). Other consequentialists add the intrinsic
values of friendship or love, freedom or ability, justice or fairness,
desert, life, virtue, and so on.
If the recognized values all concern individual welfare, then the
theory of value can be called *welfarist* (Sen 1979). When a
welfarist theory of value is combined with the other elements of
classic utilitarianism, the resulting theory can be called
*welfarist consequentialism*.
One non-welfarist theory of value is *perfectionism*, which
claims that certain states make a person's life good without
necessarily being good for the person in any way that increases that
person's welfare (Hurka 1993, esp. 17). If this theory of value
is combined with other elements of classic utilitarianism, the
resulting theory can be called *perfectionist consequentialism*
or, in deference to its Aristotelian roots, *eudaemonistic
consequentialism*.
Similarly, some consequentialists hold that an act is right if and
only if it maximizes some function of both happiness and capabilities
(Sen 1985, Nussbaum 2000). Disabilities are then seen as bad
regardless of whether they are accompanied by pain or loss of
pleasure.
Or one could hold that an act is right if it maximizes fulfillment (or
minimizes violation) of certain specified moral rights. Such theories
are sometimes described as a *utilitarianism of rights*. This
approach could be built into total consequentialism with rights
weighed against happiness and other values or, alternatively, the
disvalue of rights violations could be lexically ranked prior to any
other kind of loss or harm (cf. Rawls 1971, 42). Such a lexical
ranking within a consequentialist moral theory would yield the result
that nobody is ever justified in violating rights for the sake of
happiness or any value other than rights, although it would still
allow some rights violations in order to avoid or prevent other rights
violations.
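The mechanics of such a lexical ranking can be sketched in a few lines. In the hypothetical comparison below (the outcome names and numbers are illustrative, not from the text), outcomes are compared first by the number of rights violations they contain, and happiness is allowed to break ties only among outcomes that are equally rights-respecting:

```python
# Illustrative sketch of a lexical (lexicographic) ranking: the disvalue
# of rights violations is ranked prior to any amount of happiness.
# Outcome names and numbers are hypothetical.

def lexical_key(outcome):
    """Fewer rights violations always wins; happiness breaks ties
    only among equally rights-respecting outcomes."""
    return (outcome["rights_violations"], -outcome["happiness"])

outcomes = [
    {"name": "A", "rights_violations": 0, "happiness": 10},
    {"name": "B", "rights_violations": 1, "happiness": 1000},
    {"name": "C", "rights_violations": 0, "happiness": 30},
]

best = min(outcomes, key=lexical_key)
# C beats A on happiness, and beats B despite B's far greater happiness,
# because on this ranking no quantity of happiness outweighs a violation.
```

Python's tuple comparison does the lexical work here: the second element is consulted only when the first elements are equal, which is exactly the structure of a lexically prior value.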
When consequentialists incorporate a variety of values, they need to
rank or weigh each value against the others. This is often difficult.
Some consequentialists even hold that certain values are
incommensurable or incomparable in that no comparison of their values
is possible (Griffin 1986 and Chang 1997). This position allows
consequentialists to recognize the possibility of irresolvable moral
dilemmas (Sinnott-Armstrong 1988, 81; Railton 2003, 249-91).
Pluralism about values also enables consequentialists to handle many
of the problems that plague hedonistic utilitarianism. For example,
opponents often charge that classical utilitarians cannot explain our
obligations to keep promises and not to lie when no pain is caused or
pleasure is lost. Whether or not hedonists can meet this challenge,
pluralists can hold that knowledge is intrinsically good and/or that
false belief is intrinsically bad. Then, if deception causes false
beliefs, deception is instrumentally bad, and agents ought not to lie
without a good reason, even when lying causes no pain or loss of
pleasure. Since lying is an attempt to deceive, to lie is to attempt
to do what is morally wrong (in the absence of defeating factors).
Similarly, if a promise to do an act is an attempt to make an audience
believe that the promiser will do the act, then to break a promise is
for a promiser to make false a belief that the promiser created or
tried to create. Although there is more to this tale, the disvalue of
false belief can be part of a consequentialist story about why it is
morally wrong to break promises.
When such pluralist versions of consequentialism are not welfarist,
some philosophers would not call them *utilitarian*. However,
this usage is not uniform, since even non-welfarist views are
sometimes called utilitarian. Whatever you call them, the important
point is that consequentialism and the other elements of classical
utilitarianism are compatible with many different theories about which
things are good or valuable.
Instead of turning pluralist, some consequentialists forswear the
aggregation of values. Classic utilitarianism added up the values
within each part of the consequences to determine which total set of
consequences has the most value in it. One could, instead, aggregate
goods for each individual but not aggregate goods of separate
individuals (Roberts 2002). Alternatively, one could give up all
aggregation, including aggregation for individuals, and instead rank
the complete worlds or sets of consequences caused by
acts without adding up the values of the parts of those worlds or
consequences. One motive for this move is Moore's principle of
organic unity (Moore 1903, 27-36), which claims that the value
of a combination or "organic unity" of two or more things
cannot be calculated simply by adding the values of the things that
are combined or unified. For example, even if punishment of a criminal
causes pain, a consequentialist can hold that a world with both the
crime and the punishment is better than a world with the crime but not
the punishment, perhaps because the former contains more justice,
without adding the value of this justice to the negative value of
the pain of the punishment. Similarly, a world might seem better
when people do not get pleasures that they do not deserve, even if
this judgment is not reached by adding the values of these pleasures
to other values to calculate any total. Cases like these lead some
consequentialists to deny that moral rightness is any aggregative
function of the values of particular effects of acts. Instead, they
compare the whole world that results from an action with the
whole world that results from not doing that action. If the former is
better, then the action is morally right (J.J.C. Smart 1973, 32;
Feldman 1997, 17-35). This approach can be called *holistic
consequentialism* or *world utilitarianism*.
Another way to incorporate relations among values is to consider
*distribution*. Compare one outcome where most people are
destitute but a few lucky people have extremely large amounts of goods
with another outcome that contains slightly less total goods but where
every person has nearly the same amount of goods. Egalitarian critics
of classical utilitarianism argue that the latter outcome is better,
so more than the total amount of good matters. Traditional hedonistic
utilitarians who prefer the latter outcome often try to justify
egalitarian distributions of goods by appealing to a principle of
diminishing marginal utility. Other consequentialists, however,
incorporate a more robust commitment to equality. Early on, Sidgwick
(1907, 417) responded to such objections by allowing distribution to
break ties between other values. More recently, some consequentialists
have added some notion of fairness (Broome 1991, 192-200) or
desert (Feldman 1997, 154-74) to their test of which outcome is
best. (See also Kagan 1998, 48-59.) Others turn to
prioritarianism, which puts more weight on people who are worse off
(Adler and Norheim 2022, Arneson 2022). Such consequentialists do not
simply add up values; they look at patterns.
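One common way to model the prioritarian weighting just mentioned is to apply a concave transform to each person's welfare before summing, so that a unit of welfare counts for more the worse off its recipient is. The sketch below uses a square root purely for illustration; the welfare numbers are hypothetical:

```python
# Illustrative sketch of prioritarian weighting via a concave transform.
# The square-root function and the welfare levels are assumptions made
# for the example, not drawn from the text.
import math

def total_utility(welfares):
    return sum(welfares)

def prioritarian_value(welfares):
    # Each extra unit of welfare counts for less the better off a person
    # already is, so gains to the worse off carry more weight.
    return sum(math.sqrt(w) for w in welfares)

equal   = [25, 25]   # two people at the same welfare level
unequal = [49, 1]    # same total welfare (50), concentrated on one person

# Plain total utility cannot distinguish the two distributions,
# but the prioritarian value favors the more equal one.
```

The pattern, not just the total, now matters: both distributions sum to 50, yet the concave aggregation ranks the equal distribution strictly higher.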
A related issue arises from population change. Imagine that a
government considers whether to provide free contraceptives to curb a
rise in population. Without free contraceptives, overcrowding will
bring hunger, disease, and pain, so each person will be worse off.
Still, each new person will have enough pleasure and other goods that
the total net utility will increase with the population. Classic
utilitarianism focuses on total utility, so it seems to imply that
this government should not provide free contraceptives. That seems
implausible to many utilitarians. To avoid this result, some
utilitarians claim that an act is morally wrong if and only if its
consequences contain more pain (or other disvalues) than an
alternative, regardless of positive values (cf. R. N. Smart 1958).
This *negative utilitarianism* implies that the government
should provide contraceptives, since that program reduces pain (and
other disvalues), even though it also decreases total net pleasure (or
good). Unfortunately, negative utilitarianism also seems to imply that
the government should painlessly kill everyone it can, since dead
people feel no pain (and have no false beliefs, diseases, or
disabilities - though killing them does cause loss of ability).
A more popular response is average utilitarianism, which says that the
best consequences are those with the highest average utility (cf.
Rawls 1971, 161-75). The average utility would be higher with
the contraceptive program than without it, so average utilitarianism
yields the more plausible result--that the government should
adopt the contraceptive program. Critics sometimes charge that the
average utility could also be increased by killing the worst off, but
this claim is not at all clear, because such killing would put
everyone in danger (since, after the worst off are killed, another
group becomes the worst off, and then they might be killed next).
Still, average utilitarianism faces problems of its own (such as
"the mere addition paradox" in Parfit 1984, chap. 19). In
any case, all maximizing consequentialists, whether or not they are
pluralists, must decide whether moral rightness depends on maximizing
total good or average good.
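The divergence between the total and average criteria in such population cases can be shown with a minimal worked example (the population sizes and welfare levels are hypothetical):

```python
# Illustrative sketch: total and average utility can rank the same
# pair of populations in opposite ways. Numbers are hypothetical.

small_population = [10] * 100   # 100 people, each at welfare 10
large_population = [2] * 600    # 600 people, each much worse off

def total(population):
    return sum(population)

def average(population):
    return sum(population) / len(population)

# Total utility favors the larger population (1200 vs. 1000),
# while average utility favors the smaller one (10 vs. 2).
```

This is the structure behind the contraceptive example above: adding people who are worth adding by the total criterion can still drag the average down, which is why maximizing consequentialists must choose between the two standards.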
A final challenge to consequentialists' accounts of value
derives from Geach 1956 and has been pressed by Thomson 2001. Thomson
argues that "A is a good X" (such as a good poison) does
not entail "A is good", so the term "good" is
an attributive adjective and cannot legitimately be used without
qualification. On this view, it is senseless to call something good
unless this means that it is good for someone or in some respect or
for some use or at some activity or as an instance of some kind.
Consequentialists are supposed to violate this restriction when they
say that the total or average consequences or the world as a whole is
good without any such qualification. However, consequentialists can
respond either that the term "good" has predicative uses
in addition to its attributive uses or that when they call a world or
total set of consequences good, they are calling it good for
consequences or for a world (Sinnott-Armstrong 2003a). If so, the fact
that "good" is often used attributively creates no problem
for consequentialists.
## 4. Which Consequences? Actual vs. Expected Consequentialisms
A second set of problems for classic utilitarianism is
epistemological. Classic utilitarianism seems to require that agents
calculate all consequences of each act for every person for all time.
That's impossible.
This objection rests on a misinterpretation. These critics assume that
the principle of utility is supposed to be used as a *decision
procedure* or *guide*, that is, as a method that agents
consciously apply to acts in advance to help them make decisions.
However, most classic and contemporary utilitarians and
consequentialists do not propose their principles as decision
procedures. (Bales 1971) Bentham wrote, "It is not to be
expected that this process [his hedonic calculus] should be strictly
pursued previously to every moral judgment." (1789, Chap. IV,
Sec. VI) Mill agreed, "it is a misapprehension of the
utilitarian mode of thought to conceive it as implying that people
should fix their minds upon so wide a generality as the world, or
society at large." (1861, Chap. II, Par. 19) Sidgwick added,
"It is not necessary that the end which gives the criterion of
rightness should always be the end at which we consciously aim."
(1907, 413)
Instead, most consequentialists claim that overall utility is the
*criterion* or *standard* of what is morally right or
morally ought to be done. Their theories are intended to spell out the
necessary and sufficient conditions for an act to be morally right,
regardless of whether the agent can tell in advance whether those
conditions are met. Just as the laws of physics govern golf ball
flight, but golfers need not calculate physical forces while planning
shots; so overall utility can determine which decisions are morally
right, even if agents need not calculate utilities while making
decisions. If the principle of utility is used as a criterion of the
right rather than as a decision procedure, then classical
utilitarianism does not require that anyone know the total
consequences of anything before making a decision.
Furthermore, a utilitarian criterion of right implies that it would
not be morally right to use the principle of utility as a decision
procedure in cases where it would not maximize utility to try to
calculate utilities before acting. Utilitarians regularly argue that
most people in most circumstances ought not to try to calculate
utilities, because they are too likely to make serious miscalculations
that will lead them to perform actions that reduce utility. It is even
possible to hold that most agents usually ought to follow their moral
intuitions, because these intuitions evolved to lead us to perform
acts that maximize utility, at least in likely circumstances (Hare
1981, 46-47). Some utilitarians (Sidgwick 1907, 489-90)
suggest that a utilitarian decision procedure may be adopted as an
esoteric morality by an elite group that is better at calculating
utilities, but utilitarians can, instead, hold that nobody should use
the principle of utility as a decision procedure.
This move is supposed to make consequentialism self-refuting,
according to some opponents. However, there is nothing incoherent
about proposing a decision procedure that is separate from one's
criterion of the right. Similar distinctions apply in other normative
realms. The criterion of a good stock investment is its total return,
but the best decision procedure still might be to reduce risk by
buying an index fund or blue-chip stocks. Criteria can, thus, be
self-effacing without being self-refuting (Parfit 1984, chs. 1 and
4).
Others object that this move takes the force out of consequentialism,
because it leads agents to ignore consequentialism when they make real
decisions. However, a criterion of the right can be useful at a higher
level by helping us choose among available decision procedures and
refine our decision procedures as circumstances change and we gain
more experience and knowledge. Hence, most consequentialists do not
mind giving up consequentialism as a direct decision procedure as long
as consequences remain the criterion of rightness (but see Chappell
2001).
If overall utility is the criterion of moral rightness, then it might
seem that nobody could know what is morally right. If so, classical
utilitarianism leads to moral skepticism. However, utilitarians insist
that we can have strong reasons to believe that certain acts reduce
utility, even if we have not yet inspected or predicted every
consequence of those acts. For example, in normal circumstances, if
someone were to torture and kill his children, it is possible that
this would maximize utility, but that is very unlikely. Maybe they
would have grown up to be mass murderers, but it is at least as likely
that they would grow up to cure serious diseases or do other great
things, and it is much more likely that they would have led normally
happy (or at least not destructive) lives. So observers as well as
agents have adequate reasons to believe that such acts are morally
wrong, according to act utilitarianism. In many other cases, it will
still be hard to tell whether an act will maximize utility, but that
shows only that there are severe limits to our knowledge of what is
morally right. That should be neither surprising nor problematic for
utilitarians.
If utilitarians want their theory to allow more moral knowledge, they
can make a different kind of move by turning from actual consequences
to expected or expectable consequences. Suppose that Alice finds a
runaway teenager who asks for money to get home. Alice wants to help
and reasonably believes that buying a bus ticket home for this runaway
will help, so she buys a bus ticket and puts the runaway on the bus.
Unfortunately, the bus is involved in a freak accident, and the
runaway is killed. If actual consequences are what determine moral
wrongness, then it was morally wrong for Alice to buy the bus ticket
for this runaway. Opponents claim that this result is absurd enough to
refute classic utilitarianism.
Some utilitarians bite the bullet and say that Alice's act was
morally wrong, but it was blameless wrongdoing, because her motives
were good, and she was not responsible, given that she could not have
foreseen that her act would cause harm. Since this theory makes actual
consequences determine moral rightness, it can be called *actual
consequentialism*.
Other responses claim that moral rightness depends on foreseen,
foreseeable, intended, or likely consequences, rather than actual
ones. Imagine that Bob does not in fact foresee a bad consequence that
would make his act wrong if he did foresee it, but that Bob could
easily have foreseen this bad consequence if he had been paying
attention. Maybe he does not notice the rot on the hamburger he feeds
to his kids which makes them sick. If *foreseen* consequences
are what matter, then Bob's act is not morally wrong. If
*foreseeable* consequences are what matter, then Bob's
act is morally wrong, because the bad consequences were foreseeable.
Now consider Bob's wife, Carol, who notices that the meat is
rotten but does not want to have to buy more, so she feeds it to her
children anyway, hoping that it will not make them sick; but it does.
Carol's act is morally wrong if foreseen or foreseeable
consequences are what matter, but not if what matter are
*intended* consequences, because she does not intend to make
her children sick. Finally, consider Bob and Carol's son Don,
who does not know enough about food to be able to know that eating
rotten meat can make people sick. If Don feeds the rotten meat to his
little sister, and it makes her sick, then the bad consequences are
not intended, foreseen, or even foreseeable by Don, but those bad
results are still objectively *likely* or *probable*,
unlike the case of Alice. Some philosophers deny that probability can
be fully objective, but at least the consequences here are foreseeable
by others who are more informed than Don can be at the time. For Don
to feed the rotten meat to his sister is, therefore, morally wrong if
likely consequences are what matter, but not morally wrong if what
matter are foreseen or foreseeable or intended consequences.
Consequentialist moral theories that focus on actual or objectively
probable consequences are often described as *objective
consequentialism* (Railton 1984). In contrast, consequentialist
moral theories that focus on intended or foreseen consequences are
usually described as *subjective consequentialism*.
Consequentialist moral theories that focus on reasonably foreseeable
consequences are then not subjective insofar as they do not depend on
anything inside the actual subject's mind, but they are
subjective insofar as they do depend on which consequences this
particular subject would foresee if he or she were better informed or
more rational.
One final solution to these epistemological problems deploys the legal
notion of proximate cause. If consequentialists define consequences in
terms of what is caused (unlike Sosa 1993), then which future events
count as consequences is affected by which notion of causation is used
to define consequences. Suppose I give a set of steak knives to a
friend. Unforeseeably, when she opens my present, the decorative
pattern on the knives somehow reminds her of something horrible that
her husband did. This memory makes her so angry that she voluntarily
stabs and kills him with one of the knives. She would not have killed
her husband if I had given her spoons instead of knives. Did my
decision or my act of giving her knives cause her husband's
death? Most people (and the law) would say that the cause was her act,
not mine. Why? One explanation is that her voluntary act intervened in
the causal chain between my act and her husband's death.
Moreover, even if she did not voluntarily kill him, but instead she
slipped and fell on the knives, thereby killing herself, my gift would
still not be a cause of her death, because the coincidence of her
falling intervened between my act and her death. The point is that,
when voluntary acts and coincidences intervene in certain causal
chains, then the results are not seen as caused by the acts further
back in the chain of necessary conditions (Hart and Honoré
1985). Now, if we assume that an act must be such a proximate cause of
a harm in order for that harm to be a consequence of that act, then
consequentialists can claim that the moral rightness of that act is
determined only by such proximate consequences. This position, which
might be called *proximate consequentialism*, makes it much
easier for agents and observers to justify moral judgments of acts
because it obviates the need to predict non-proximate consequences in
distant times and places. Hence, this move is worth considering, even
though it has never been developed as far as I know and deviates far
from traditional consequentialism, which counts not only proximate
consequences but all upshots -- that is, everything for which the
act is a causally necessary condition.
## 5. Consequences of What? Rights, Relativity, and Rules
Another problem for utilitarianism is that it seems to overlook
justice and rights. One common illustration is called Transplant.
Imagine that each of five patients in a hospital will die without an
organ transplant. The patient in Room 1 needs a heart, the patient in
Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on.
The person in Room 6 is in the hospital for routine tests. Luckily
(for them, not for him!), his tissue is compatible with the other five
patients, and a specialist is available to transplant his organs into
the other five. This operation would save all five of their lives,
while killing the "donor". There is no other way to save
any of the other five patients (Foot 1966, Thomson 1976; compare
related cases in Carritt 1947 and McCloskey 1965).
We need to add that the organ recipients will emerge healthy, the
source of the organs will remain secret, the doctor won't be
caught or punished for cutting up the "donor", and the
doctor knows all of this to a high degree of probability (despite the
fact that many others will help in the operation). Still, with the
right details filled in (no matter how unrealistic), it looks as if
cutting up the "donor" will maximize utility, since five
lives have more utility than one life (assuming that the five lives do
not contribute too much to overpopulation). If so, then classical
utilitarianism implies that it would not be morally wrong for the
doctor to perform the transplant and even that it would be morally
wrong for the doctor not to perform the transplant. Most people find
this result abominable. They take this example to show how bad it can
be when utilitarians overlook individual rights, such as the unwilling
donor's right to life.
Utilitarians can bite the bullet, again. They can deny that it is
morally wrong to cut up the "donor" in these
circumstances. Of course, doctors still should not cut up their
patients in anything close to normal circumstances, but this example
is so abnormal and unrealistic that we should not expect our normal
moral rules to apply, and we should not trust our moral intuitions,
which evolved to fit normal situations (Sprigge 1965). Many
utilitarians are happy to reject common moral intuitions in this case,
like many others (cf. Singer 1974, Unger 1996, Norcross 1997).
Most utilitarians lack such strong stomachs (or teeth), so they modify
utilitarianism to bring it in line with common moral intuitions,
including the intuition that doctors should not cut up innocent
patients. One attempt claims that a killing is worse than a death. The
doctor would have to kill the "donor" in order to prevent
the deaths of the five patients, but nobody is killed if the five
patients die. If one killing is worse than five deaths that do not
involve killing, then the world that results from the doctor
performing the transplant is worse than the world that results from
the doctor not performing the transplant. With this new theory of
value, consequentialists can agree with others that it is morally
wrong for the doctor to cut up the "donor" in this
example.
A modified example still seems problematic. Just suppose that the five
patients need a kidney, a lung, a heart, and so forth because they
were all victims of murder attempts. Then the world will contain the
five killings of them if they die, but not if they do not die. Thus,
even if killings are worse than deaths that are not killings, the
world will still be better overall (because it will contain fewer
killings as well as fewer deaths) if the doctor cuts up the
"donor" to save the five other patients. But most people
still think it would be morally wrong for the doctor to kill the one
to prevent the five killings. The reason is that it is not the doctor
who kills the five, and the doctor's duty seems to be to reduce
the amount of killing that she herself does. In this view, the doctor
is not required to *promote* life or decrease death or even
decrease killing by other people. The doctor is, instead, required to
*honor* the value of life by not causing loss of life (cf.
Pettit 1997).
This kind of case leads some consequentialists to introduce
agent-relativity into their theory of value (Sen 1982, Broome 1991,
Portmore 2001, 2003, 2011). To apply a consequentialist moral theory,
we need to compare the world with the transplant to the world without
the transplant. If this comparative evaluation must be agent-neutral,
then, if an observer judges that the world with the transplant is
better, the agent must make the same judgment, or else one of them is
mistaken. However, if such evaluations can be agent-relative, then it
could be legitimate for an observer to judge that the world with the
transplant is better (since it contains fewer killings by anyone),
while it is also legitimate for the doctor as agent to judge that the
world with the transplant is worse (because it includes a killing
*by him*). In other cases, such as competitions, it might
maximize the good from an agent's perspective to do an act,
while maximizing the good from an observer's perspective to stop
the agent from doing that very act. If such agent-relative value makes
sense, then it can be built into consequentialism to produce the claim
that an act is morally wrong if and only if the act's
consequences include less overall value from the perspective of the
agent. This *agent-relative consequentialism*, plus the claim
that the world with the transplant is worse from the perspective of
the doctor, could justify the doctor's judgment that it would be
morally wrong for him to perform the transplant. A key move here is to
adopt the agent's perspective in judging the agent's act.
Agent-neutral consequentialists judge all acts from the
observer's perspective, so they would judge the doctor's
act to be wrong, since the world with the transplant is better from an
observer's perspective. In contrast, an agent-relative approach
requires observers to adopt the doctor's perspective in judging
whether it would be morally wrong for the doctor to perform the
transplant. This kind of agent-relative consequentialism is then
supposed to capture commonsense moral intuitions in such cases
(Portmore 2011).
Agent-relativity is also supposed to solve other problems. W. D. Ross
(1930, 34-35) argued that, if breaking a promise created only
slightly more happiness overall than keeping the promise, then the
agent morally ought to break the promise according to classic
utilitarianism. This supposed counterexample cannot be avoided simply
by claiming that keeping promises has agent-neutral value, since
keeping one promise might prevent someone else from keeping another
promise. Still, agent-relative consequentialists can respond that
keeping a promise has great value from the perspective of the agent
who made the promise and chooses whether or not to keep it, so the
world where a promise is kept is better from the agent's
perspective than another world where the promise is not kept, unless
enough other values override the value of keeping the promise. In this
way, agent-relative consequentialists can explain why agents morally
ought not to break their promises in just the kind of case that Ross
raised.
Similarly, critics of utilitarianism often argue that utilitarians
cannot be good friends, because a good friend places more weight on
the welfare of his or her friends than on the welfare of strangers,
but utilitarianism requires impartiality among all people. However,
agent-relative consequentialists can assign more weight to the welfare
of a friend of an agent when assessing the value of the consequences
of that agent's acts. In this way, consequentialists try to
capture common moral intuitions about the duties of friendship (see
also Jackson 1991).
One final variation still causes trouble. Imagine that the doctor
herself wounded the five people who need organs. If the doctor does
not save their lives, then she will have killed them herself. In this
case, even if the doctor can disvalue killings by herself more than
killings by other people, the world still seems better from her own
perspective if she performs the transplant. Critics will object that
it is, nonetheless, morally wrong for the doctor to perform the
transplant. Many people will not find this intuition as clear as in
the other cases, but consequentialists who do find it immoral for the
doctor to perform the transplant even in this case will need to modify
consequentialism in some other way in order to yield the desired
judgment.
This problem cannot be solved by building rights or fairness or desert
into the theory of value. The five do not deserve to die, and they do
deserve their lives, just as much as the one does. Each option
violates someone's right not to be killed and is unfair to
someone. So consequentialists need more than just new values if they
want to avoid endorsing this transplant.
One option is to go indirect. A *direct* consequentialist holds
that the moral qualities of something depend only on the consequences
of that very thing. Thus, a direct consequentialist about motives
holds that the moral qualities of a motive depend on the consequences
of that motive. A direct consequentialist about virtues holds that the
moral qualities of a character trait (such as whether or not it is a
moral virtue) depend on the consequences of that trait (Driver 2001a,
Hurka 2001, Jamieson 2005, Bradley 2005). A direct consequentialist
about acts holds that the moral qualities of an act depend on the
consequences of that act. Someone who adopts direct consequentialism
about everything is a *global* direct consequentialist (Pettit
and Smith 2000, Driver 2012).
In contrast, an *indirect* consequentialist holds that the
moral qualities of something depend on the consequences of something
else. One indirect version of consequentialism is *motive
consequentialism*, which claims that the moral qualities of an act
depend on the consequences of the motive of that act (compare Adams
1976 and Sverdlik 2011). Another indirect version is *virtue
consequentialism*, which holds that whether an act is morally
right depends on whether it stems from or expresses a state of
character that maximizes good consequences and, hence, is a
virtue.
The most common indirect consequentialism is *rule
consequentialism*, which makes the moral rightness of an act
depend on the consequences of a rule (Singer 1961). Since a rule is an
abstract entity, a rule by itself strictly has no consequences. Still,
*obedience rule consequentialists* can ask what would happen if
everybody obeyed a rule or what would happen if everybody violated a
rule. They might argue, for example, that theft is morally wrong
because it would be disastrous if everybody broke a rule against
theft. Often, however, it does not seem morally wrong to break a rule
even though it would cause disaster if everybody broke it. For
example, if everybody broke the rule "Have some children",
then our species would die out, but that hardly shows it is morally
wrong not to have any children. Luckily, our species will not die out
if everyone is permitted not to have children, since enough people
want to have children. Thus, instead of asking, "What would
happen if everybody did that?", rule consequentialists should
ask, "What would happen if everybody were permitted to do
that?" People are permitted to do what violates no accepted
rule, so asking what would happen if everybody were permitted to do an
act is just the flip side of asking what would happen if people
accepted a rule that forbids that act. Such *acceptance rule
consequentialists* then claim that an act is morally wrong if and
only if it violates a rule whose acceptance has better consequences
than the acceptance of any incompatible rule. In some accounts, a rule
is accepted when it is built into individual consciences (Brandt
1992). Other rule utilitarians, however, require that moral rules be
publicly known (Gert 2005; cf. Sinnott-Armstrong 2003b) or built into
public institutions (Rawls 1955). Then they hold what can be called
*public acceptance rule consequentialism*: an act is morally
wrong if and only if it violates a rule whose public acceptance
maximizes the good.
The indirectness of such rule utilitarianism provides a way to remain
consequentialist and yet capture the common moral intuition that it is
immoral to perform the transplant in the above situation. Suppose
people generally accepted a rule that allows a doctor to transplant
organs from a healthy person without consent when the doctor believes
that this transplant will maximize utility. Widely accepting this rule
would lead to many transplants that do not maximize utility, since
doctors (like most people) are prone to errors in predicting
consequences and weighing utilities. Moreover, if the rule is publicly
known, then patients will fear that they might be used as organ
sources, so they would be less likely to go to a doctor when they need
one. The medical profession depends on trust that this public rule
would undermine. For such reasons, some rule utilitarians conclude
that it would not maximize utility for people generally to accept a
rule that allows doctors to transplant organs from unwilling donors.
If this claim is correct, then rule utilitarianism implies that it is
morally wrong for a particular doctor to use an unwilling donor, even
for a particular transplant that would have better consequences than
any alternative even from the doctor's own perspective. Common
moral intuition is thereby preserved.
Rule utilitarianism faces several potential counterexamples (such as
whether public rules allowing slavery could sometimes maximize
utility) and needs to be formulated more precisely (particularly in
order to avoid collapsing into act-utilitarianism; cf. Lyons 1965).
Such details are discussed in another entry in this encyclopedia (see
Hooker on rule-consequentialism). Here I will just point out that
direct consequentialists find it convoluted and implausible to judge a
particular act by the consequences of something else (Smart 1956). Why
should mistakes by other doctors in other cases make this
doctor's act morally wrong, when this doctor knows for sure that
he is not mistaken in this case? Rule consequentialists can respond
that we should not claim special rights or permissions that we are not
willing to grant to every other person, and that it is arrogant to
think we are less prone to mistakes than other people are. However,
this doctor can reply that he is willing to give everyone the right to
violate the usual rules in the rare cases when they do know for sure
that violating those rules really maximizes utility. Anyway, even if
rule utilitarianism accords with some common substantive moral
intuitions, it still seems counterintuitive in other ways. This makes
it worthwhile to consider how direct consequentialists can bring their
views in line with common moral intuitions, and whether they need to
do so.
## 6. Consequences for Whom? Limiting the Demands of Morality
Another popular charge is that classic utilitarianism demands too
much, because it requires us to do acts that are or should be moral
options (neither obligatory nor forbidden) (Scheffler 1982). For
example, imagine that my old shoes are serviceable but dirty, so I
want a new pair of shoes that costs $100. I could wear my old shoes
and give the $100 to a charity that will use my money to save someone
else's life. It would seem to maximize utility for me to give
the $100 to the charity. If it is morally wrong to do anything other
than what maximizes utility, then it is morally wrong for me to buy
the shoes. But buying the shoes does not seem morally wrong. It might
be morally better to give the money to charity, but such contributions
seem supererogatory, that is, above and beyond the call of duty. Of
course, there are many more cases like this. When I watch television,
I always (or almost always) could do more good by helping others, but
it does not seem morally wrong to watch television. When I choose to
teach philosophy rather than working for CARE or the Peace Corps, my
choice probably fails to maximize utility overall. If we were required
to maximize utility, then we would have to make very different choices
in many areas of our lives. The requirement to maximize utility, thus,
strikes many people as too demanding because it interferes with the
personal decisions that most of us feel should be left up to the
individual.
Some utilitarians respond by arguing that we really are morally
required to change our lives so as to do a lot more to increase
overall utility (see Kagan 1989, P. Singer 1993, and Unger 1996). Such
hard-liners claim that most of what most people do is morally wrong,
because most people rarely maximize utility. Some such wrongdoing
might be blameless when agents act from innocent or even desirable
motives, but it is still supposed to be moral wrongdoing. Opponents of
utilitarianism find this claim implausible, but it is not obvious that
their counter-utilitarian intuitions are reliable or well-grounded
(Murphy 2000, chs. 1-4; cf. Mulgan 2001, Singer 2005, Greene
2013).
Other utilitarians blunt the force of the demandingness objection by
limiting direct utilitarianism to what people morally *ought*
to do. Even if we morally ought to maximize utility, it need not be
morally wrong to fail to maximize utility. John Stuart Mill, for
example, argued that an act is morally *wrong* only when both
it fails to maximize utility and its agent is liable to punishment for
the failure (Mill 1861). It does not always maximize utility to punish
people for failing to maximize utility. Thus, on this view, it is not
always morally wrong to fail to do what one morally ought to do. If
Mill is correct about this, then utilitarians can say that we ought to
give much more to charity, but we are not required or obliged to do
so, and failing to do so is not morally wrong (cf. Sinnott-Armstrong
2005).
Many utilitarians still want to avoid the claim that we morally ought
to give so much to charity. One way around this claim uses a
rule-utilitarian theory of what we morally ought to do. If it costs
too much to internalize rules implying that we ought to give so much
to charity, then, according to such rule-utilitarianism, it is not
true that we ought to give so much to charity (Hooker 2000, ch.
8).
Another route follows an agent-relative theory of value. If there is
more value in benefiting oneself or one's family and friends
than there is disvalue in letting strangers die (without killing
them), then spending resources on oneself or one's family and
friends would maximize the good. A problem is that such
consequentialism would seem to imply that we morally ought not to
contribute those resources to charity, although such contributions
seem at least permissible.
More personal leeway could also be allowed by deploying the legal
notion of proximate causation. When a starving stranger would stay
alive if and only if one contributed to a charity, contributing to the
charity still need not be the proximate cause of the stranger's
life, and failing to contribute need not be the proximate cause of his
or her death. Thus, if an act is morally right when it includes the
most net good in its proximate consequences, then it might not be
morally wrong either to contribute to the charity or to fail to do so.
This potential position, as mentioned above, has not yet been
developed, as far as I know.
Yet another way to reach this conclusion is to give up maximization
and to hold instead that we morally ought to do what creates enough
utility. This position is often described as *satisficing
consequentialism* (Slote 1984). According to satisficing
consequentialism, it is not morally wrong to fail to contribute to a
charity if one contributes enough to other charities and if the money
or time that one could contribute does create enough good, so it is
not just wasted. (For criticisms, see Bradley 2006.) A related
position is *progressive consequentialism*, which holds that we
morally ought to improve the world or make it better than it would be
if we did nothing, but we don't have to improve it as much as we
can (Elliot and Jamieson, 2009). Both satisficing and progressive
consequentialism allow us to devote some of our time and money to
personal projects that do not maximize overall good.
A more radical set of proposals confines consequentialism to
judgments about how good an act is on a scale (Norcross 2006, 2020)
or to degrees of wrongness and rightness (Sinhababu 2018). These
positions are usually described as *scalar
consequentialism*. A scalar consequentialist can refuse to say
whether it is absolutely right or wrong to give $1000 to charity, for
example, but still say that giving $1000 to charity is better and more
right than giving only $100 and simultaneously worse and more wrong
than giving $10,000. A related *contrastivist consequentialism*
could say that one ought to give $1000 in contrast with $100 but not
in contrast with $10,000 (cf. Snedegar 2017). Such positions can
also hold that fewer or less severe negative sanctions are justified
when an agent's act is worse than a smaller set of alternatives
compared to when the agent's act is worse than a larger set of
alternatives. This approach then becomes less demanding, both because
it treats fewer negative sanctions as justified when the agent fails to
do the best act possible, and also because it avoids saying that
everyday actions are simply wrong without comparison to any set of
alternatives.
Opponents still object that all such consequentialist theories are
misdirected. When I decide to visit a friend instead of working for a
charity, I can know that my act is not immoral even if I have not
calculated that the visit will create enough overall good or that it
will improve the world. These critics hold that friendship requires us
to do certain favors for friends without weighing our friends'
welfare impartially against the welfare of strangers. Similarly, if I
need to choose between saving my drowning wife and saving a drowning
stranger, it would be "one thought too many" (Williams
1981) for me to calculate the consequences of each act. I morally
should save my wife straightaway without calculating utilities.
In response, utilitarians can remind critics that the principle of
utility is intended as only a criterion of right and not as a decision
procedure, so utilitarianism does not imply that people ought to
calculate utilities before acting (Railton 1984). Consequentialists
can also allow the special perspective of a friend or spouse to be
reflected in agent-relative value assessments (Sen 1982, Broome 1991,
Portmore 2001, 2003) or probability assessments (Jackson 1991). It
remains controversial, however, whether any form of consequentialism
can adequately incorporate common moral intuitions about
friendship.
## 7. Arguments for Consequentialism
Even if consequentialists can accommodate or explain away common moral
intuitions, that might seem only to answer objections without yet
giving any positive reason to accept consequentialism. However, most
people begin with the *presumption* that we morally ought to
make the world better when we can. The question then is only whether
any moral constraints or moral options need to be added to the basic
consequentialist factor in moral reasoning (Kagan 1989, 1998). If no
objection reveals any need for anything beyond consequences, then
consequences alone seem to determine what is morally right or wrong,
just as consequentialists claim.
This line of reasoning will not convince opponents who remain
unsatisfied by consequentialist responses to objections. Moreover,
even if consequentialists do respond adequately to every proposed
objection, that would not show that consequentialism is correct or
even defensible. It might face new problems that nobody has yet
recognized. Even if every possible objection is refuted, we might have
no reason to reject consequentialism but still no reason to accept
it.
In case a positive reason is needed, consequentialists present a wide
variety of arguments. One common move attacks opponents. If the only
plausible options in moral theory lie on a certain list (say,
Kantianism, contractarianism, virtue theory, pluralistic intuitionism,
and consequentialism), then consequentialists can argue for their own
theory by criticizing the others. This *disjunctive syllogism*
or *process of elimination* will be only as strong as the set
of objections to the alternatives, and the argument fails if even one
competitor survives. Moreover, the argument assumes that the original
list is complete. It is hard to see how that assumption could be
justified.
Consequentialism also might be supported by an *inference to the
best explanation* of our moral intuitions. This argument might
surprise those who think of consequentialism as counterintuitive, but
in fact consequentialists can explain many moral intuitions that
trouble deontological theories. Moderate deontologists, for example,
often judge that it is morally wrong to kill one person to save five
but not morally wrong to kill one person to save a million. They never
specify the line between what is morally wrong and what is not morally
wrong, and it is hard to imagine any non-arbitrary way for
deontologists to justify a cutoff point. In contrast,
consequentialists can simply say that the line belongs wherever the
benefits most outweigh the costs, including any bad side effects (cf.
Sinnott-Armstrong 2007). Similarly, when two promises conflict, it
often seems clear which one we should keep, and that intuition can
often be explained by the amount of harm that would be caused by
breaking each promise. In contrast, deontologists are hard pressed to
explain which promise is overriding if the reason to keep each promise
is simply that it was made (Sinnott-Armstrong 2009). If
consequentialists can better explain more common moral intuitions,
then consequentialism might have more explanatory coherence overall,
despite being counterintuitive in some cases. (Compare Sidgwick 1907,
Book IV, Chap. III; and Sverdlik 2011.) And even if act
consequentialists cannot argue in this way, it still might work for
rule consequentialists (such as Hooker 2000).
Consequentialism also might be supported by *deductive*
arguments from abstract moral intuitions. Sidgwick (1907, Book III,
Chap. XIII) seemed to think that the principle of utility follows from
certain very general self-evident principles, including
universalizability (if an act ought to be done, then every other act
that resembles it in all relevant respects also ought to be done),
rationality (one ought to aim at the good generally rather than at any
particular part of the good), and equality ("the good of any one
individual is of no more importance, from the point of view ... of the
Universe, than the good of any other").
Other consequentialists are more skeptical about moral intuitions, so
they seek foundations outside morality, either in non-normative facts
or in non-moral norms. Mill (1861) is infamous for his
"proof" of the principle of utility from empirical
observations about what we desire (cf. Sayre-McCord 2001). In
contrast, Hare (1963, 1981) tries to derive his version of
utilitarianism from substantively neutral accounts of morality, of
moral language, and of rationality (cf. Sinnott-Armstrong 2001).
Similarly, Gewirth (1978) tries to derive his variant of
consequentialism from metaphysical truths about actions.
Yet another argument for a kind of consequentialism is
*contractarian*. Harsanyi (1977, 1978) argues that all
informed, rational people whose impartiality is ensured because they
do not know their place in society would favor a kind of
consequentialism. Broome (1991) elaborates and extends
Harsanyi's argument.
Other forms of argument have also been invoked on behalf of
consequentialism (e.g. Cummiskey 1996, P. Singer 1993;
Sinnott-Armstrong 1992). However, each of these arguments has also
been subjected to criticisms.
Even if none of these arguments proves consequentialism, there still
might be no adequate reason to deny consequentialism. We might have no
reason either to deny consequentialism or to assert it.
Consequentialism could then remain a live option even if it is not
proven.

## 1. Precursors to the Classical Approach
Though the first systematic account of utilitarianism was developed
by Jeremy Bentham (1748-1832), the core insight motivating the theory
occurred much earlier. That insight is that morally appropriate
behavior will not harm others, but instead increase happiness or
'utility.' What is distinctive about utilitarianism
is its approach in taking that insight and developing an account of
moral evaluation and moral direction that expands on it. Early
precursors to the Classical Utilitarians include the British Moralists,
Cumberland, Shaftesbury, Hutcheson, Gay, and Hume. Of these,
Francis Hutcheson (1694-1746) is explicitly utilitarian when it comes
to action choice.
Some of the earliest utilitarian thinkers were the
'theological' utilitarians such as Richard Cumberland
(1631-1718) and John Gay (1699-1745). They believed that
promoting human happiness was incumbent on us since it was approved by
God. After enumerating the ways in which humans come under
obligations (by perceiving the "natural consequences of
things", the obligation to be virtuous, our civil obligations
that arise from laws, and obligations arising from "the authority
of God") John Gay writes: "...from the consideration
of these four sorts of obligation...it is evident that a full and
complete obligation which will extend to all cases, can only be that
arising from the authority of *God*; because God only can in all
cases make a man happy or miserable: and therefore, since we are
*always* obliged to that conformity called virtue, it is evident
that the immediate rule or criterion of it is the will of God"
(R, 412). Gay held that since God wants the happiness of mankind,
and since God's will gives us the criterion of virtue,
"...the happiness of mankind may be said to be the criterion
of virtue, but *once removed*" (R, 413). This view
was combined with a view of human motivation with egoistic
elements. A person's individual salvation, her eternal
happiness, depended on conformity to God's will, as did virtue
itself. Promoting human happiness and one's own coincided,
but, given God's design, it was not an accidental
coincidence.
This approach to utilitarianism, however, is not theoretically clean
in the sense that it isn't clear what essential work God does, at
least in terms of normative ethics. God as the source of
normativity is compatible with utilitarianism, but utilitarianism
doesn't require this.
Gay's influence on later writers, such as Hume, deserves
note. It is in Gay's essay that some of the
questions that concerned Hume on the nature of virtue are
addressed. For example, Gay was curious about how to explain our
practice of approbation and disapprobation of action and
character. When we see an act that is vicious we disapprove of
it. Further, we associate certain things with their effects, so
that we form positive associations and negative associations that also
underwrite our moral judgments. Of course, that we view happiness,
including the happiness of others as a good, is due to God's
design. This is a feature crucial to the theological approach,
which would clearly be rejected by Hume in favor of a naturalistic view
of human nature and a reliance on our sympathetic engagement with
others, an approach anticipated by Shaftesbury (below). The
theological approach to utilitarianism would be developed later by
William Paley, for example, but the lack of any theoretical necessity
in appealing to God would result in its diminishing appeal.
Anthony Ashley Cooper, the 3rd Earl of Shaftesbury
(1671-1713), is generally thought to have been one of the earliest
'moral sense' theorists, holding that we possess a kind
of "inner eye" that allows us to make moral
discriminations. This seems to have been an innate sense of right
and wrong, or moral beauty and deformity. Again, aspects of this
doctrine would be picked up by Francis Hutcheson and David Hume
(1711-1776). Hume, of course, would clearly reject any robust
realist implications. If the moral sense is like the other
perceptual senses and enables us to pick up on properties out there in
the universe around us, properties that exist independent from our
perception of them, that are objective, then Hume clearly was not a
moral sense theorist in this regard. But perception picks up on
features of our environment that one could regard as having a
contingent quality. There is one famous passage where Hume likens moral
discrimination to the perception of secondary qualities, such as
color. In modern terminology, these are response-dependent
properties, and lack objectivity in the sense that they do not exist
independent of our responses. This is radical. If an
act is vicious, its viciousness is a matter of the human
response (given a corrected perspective) to the act (or its perceived
effects) and thus has a kind of contingency that seems unsettling,
certainly unsettling to those who opted for the theological option.
So, the view that it is part of our very nature to make moral
discriminations is very much in Hume. Further -- and what is
relevant to the development of utilitarianism -- the view of
Shaftesbury that the virtuous person contributes to the good of the
whole -- would figure into Hume's writings, though
modified. It is the virtue that contributes to the good of the
whole system, in the case of Hume's artificial virtues.
Shaftesbury held that in judging someone virtuous or good in a moral
sense we need to perceive that person's impact on the systems of
which he or she is a part. Here it sometimes becomes difficult to
disentangle egoistic from utilitarian lines of thought in
Shaftesbury. He clearly states that whatever guiding force there is has
made nature such that it is "...the *private
interest* and *good* of every one, to work towards the
*general good*, which if a creature ceases to promote, he is
actually so far wanting to himself, and ceases to promote his own
happiness and welfare..." (R, 188). It is hard, sometimes,
to discern the direction of the 'because' -- if one
should act to help others because it supports a system in which
one's own happiness is more likely, then it really looks like a
form of egoism. If one should help others because that's the
right thing to do -- and, fortunately, it also ends up promoting
one's own interests, then that's more like utilitarianism,
since the promotion of self-interest is a welcome effect but not what,
all by itself, justifies one's character or actions.
Further, to be virtuous a person must have certain psychological
capacities -- they must be able to reflect on character, for example,
and represent to themselves the qualities in others that are either
approved or disapproved of.
>
> ...in this case alone it is we call any creature worthy or
> virtuous when it can have the notion of a public interest, and can
> attain the speculation or science of what is morally good or ill,
> admirable or blameable, right or wrong....we never say
> of....any mere beast, idiot, or changeling, though ever so
> good-natured, that he is worthy or virtuous. (Shaftesbury IVM; BKI,
> PII, sec. iii)
>
Thus, animals are not objects of moral appraisal on the view, since
they lack the necessary reflective capacities. Animals also lack
the capacity for moral discrimination and would therefore seem to lack
the moral sense. This raises some interesting questions. It
would seem that the moral sense is a perception *that* something
is the case. So it isn't merely a discriminatory sense that
allows us to sort perceptions. It also has a propositional
aspect, so that animals, which are not lacking in other senses, are
lacking in this one.
The virtuous person is one whose affections, motives, dispositions
are of the right sort, not one whose behavior is simply of the right
sort and who is able to reflect on goodness, and her own goodness [see
Gill]. Similarly, the vicious person is one who exemplifies the
wrong sorts of mental states, affections, and so forth. A person
who harms others through no fault of his own "...because he
has convulsive fits which make him strike and wound such as approach
him" is not vicious since he has no desire to harm anyone and his
bodily movements in this case are beyond his control.
Shaftesbury approached moral evaluation via the virtues and vices.
His utilitarian leanings are distinct from his moral sense approach,
and his overall sentimentalism. However, this approach highlights the
move away from egoistic views of human nature -- a trend picked
up by Hutcheson and Hume, and later adopted by Mill in criticism of
Bentham's version of utilitarianism. For writers like Shaftesbury and
Hutcheson the main contrast was with egoism rather than
rationalism.
Like Shaftesbury, Francis Hutcheson was very much interested in
virtue evaluation. He also adopted the moral sense
approach. However, in his writings we also see an emphasis
on action choice and the importance of moral deliberation to action
choice. Hutcheson, in *An Inquiry Concerning Moral Good and
Evil*, fairly explicitly spelled out a utilitarian principle of
action choice. (Joachim Hruschka (1991) notes, however, that it was
Leibniz who first spelled out a utilitarian decision procedure.)
>
>
> ....In comparing the moral qualities of actions...we are led
> by our moral sense of virtue to judge thus; that in *equal
> degrees* of happiness, expected to proceed from the action, the
> virtue is in proportion to the *number* of persons to whom the
> happiness shall extend (and here the *dignity*, or *moral
> importance* of persons, may compensate numbers); and, in
> equal *numbers*, the virtue is the *quantity* of the
> happiness, or natural good; or that the virtue is in a compound ratio of the
> *quantity* of good, and *number* of enjoyers....so
> that *that action* is *best*, which procures
> the *greatest happiness* for the *greatest numbers*; and
> that *worst*, which, in *like manner*, occasions
> *misery*. (R, 283-4)
>
Scarre notes that some hold the moral sense approach incompatible
with this emphasis on the use of reason to determine what we ought to
do; there is an opposition between just apprehending what's
morally significant and a model in which we need to reason to figure
out what morality demands of us. But Scarre notes these are not
actually incompatible:
>
> The picture which emerges from Hutcheson's discussion is of a
> division of labor, in which the moral sense causes us to look with
> favor on actions which benefit others and disfavor those which harm
> them, while consequentialist reasoning determines a more precise
> ranking order of practical options in given situations. (Scarre,
> 53-54)
Scarre then uses the example of telling a lie to illustrate: lying
is harmful to the person to whom one lies, and so this is viewed with
disfavor, in general. However, in a specific case, if a lie is
necessary to achieve some notable good, consequentialist reasoning will
lead us to favor the lying. But this example seems to
put all the emphasis on a consideration of consequences in
*moral* approval and disapproval. Stephen Darwall notes (1995,
216 ff.) that the moral sense is concerned
with *motives* -- we approve, for example, of the motive of
benevolence, and the wider the scope the better. It is the motives
rather than the consequences that are the objects of approval and
disapproval. But inasmuch as the morally good person cares about what
happens to others, and of course she will, she will rank order acts in
terms of their effects on others, and reason is used in calculating
effects. So there is no incompatibility at all.
Hutcheson was committed to maximization, it seems. However, he
insisted on a caveat -- that "the dignity or moral
importance of persons may compensate numbers." He added
a deontological constraint -- that we have a duty to others in
virtue of their personhood to accord them fundamental dignity
regardless of the numbers of others whose happiness is to be affected
by the action in question.
Hume was heavily influenced by Hutcheson, who was one of his
teachers. His system also incorporates insights made by
Shaftesbury, though he certainly lacks Shaftesbury's confidence
that virtue is its own reward. In terms of his place in the
history of utilitarianism we should note two distinct effects his
system had. Firstly, his account of the social utility of the
artificial virtues influenced Bentham's thought on utility.
Secondly, his account of the role sentiment played in moral judgment
and commitment to moral norms influenced Mill's thoughts about
the internal sanctions of morality. Mill would diverge from
Bentham in developing the 'altruistic' approach to
Utilitarianism (which is actually a misnomer, but more on that
later). Bentham, in contrast to Mill, represented the egoistic
branch -- his theory of human nature reflected Hobbesian
psychological egoism.
## 2. The Classical Approach
The Classical Utilitarians, Bentham and Mill, were concerned with
legal and social reform. If anything could be identified as the
fundamental motivation behind the development of Classical
Utilitarianism it would be the desire to see useless, corrupt laws and
social practices changed. Accomplishing this goal required a
normative ethical theory employed as a critical tool. What is the
truth about what makes an action or a policy a morally good one, or
morally *right*? But developing the theory itself was also
influenced by strong views about what was wrong in their society.
The conviction that, for example, some laws are bad resulted in
analysis of why they were bad. And, for Jeremy Bentham, what made
them bad was their lack of utility, their tendency to lead to
unhappiness and misery without any compensating happiness. If a
law or an action doesn't *do* any good, then it
*isn't* any good.
### 2.1 Jeremy Bentham
Jeremy Bentham (1748-1832) was influenced both by Hobbes'
account of human nature and Hume's account of social
utility. He famously held that humans were ruled by two sovereign
masters -- pleasure and pain. We seek pleasure and the avoidance
of pain, they "...govern us in all we do, in all we say, in
all we think..." (Bentham PML, 1). Yet he also promulgated the
principle of utility as the standard of right action on the part of
governments and individuals. Actions are approved when they
are such as to promote happiness, or pleasure, and disapproved of when
they have a tendency to cause unhappiness, or pain (PML). Combine
this criterion of rightness with a view that we should be actively
trying to promote overall happiness, and one has a serious
incompatibility with psychological egoism. Thus, his apparent
endorsement of Hobbesian psychological egoism created problems in
understanding his moral theory since psychological egoism rules out
acting to promote the overall well-being when doing so is incompatible
with one's own. For the psychological egoist, that is not
even a possibility. So, given 'ought implies can' it
would follow that we are not obligated to act to promote overall
well-being when that is incompatible with our own. This generates
a serious tension in Bentham's thought, one that was drawn to his
attention. He sometimes seemed to think that he could reconcile
the two commitments empirically, that is, by noting that when people
act to promote the good they are helping themselves, too. But
this claim only serves to muddy the waters, since the standard
understanding of psychological egoism -- and Bentham's own
statement of his view -- identifies motives of action which are
self-interested. Yet this seems, again, in conflict with his own
specification of the method for making moral decisions which is not to
focus on self-interest -- indeed, the addition of *extent* as a
parameter along which to measure pleasure produced distinguishes this
approach from ethical egoism. Aware of the difficulty, in later
years he seemed to pull back from a full-fledged commitment to
psychological egoism, admitting that people do sometimes act
benevolently -- with the overall good of humanity in mind.
Bentham also benefited from Hume's work, though in many ways
their approaches to moral philosophy were completely different. Hume
rejected the egoistic view of human nature. Hume also focused on
character evaluation in his system. Actions are significant as
evidence of character, but only have this derivative significance. In
moral evaluation the main concern is that of character. Yet Bentham
focused on act-evaluation. There was a tendency -- remarked on by
J. B. Schneewind (1990), for example -- to move away from focus on
character evaluation after Hume and towards act-evaluation. Recall
that Bentham was enormously interested in social reform. Indeed,
reflection on what was morally problematic about laws and policies
influenced his thinking on utility as a standard. When one legislates,
however, one is legislating in support of, or against, certain
actions. Character -- that is, a person's true
character -- is known, if known at all, only by that person. If one
finds the opacity of the will thesis plausible then character, while
theoretically very interesting, isn't a practical focus for
legislation. Further, as Schneewind notes, there was an increasing
sense that focus on character would actually be disruptive, socially,
particularly if one's view was that a person who didn't
agree with one on a moral issue was defective in terms of his or her
character, as opposed to simply making a mistake reflected in
action.
But Bentham does take from Hume the view that utility is the measure
of virtue -- that is, utility more broadly construed than
Hume's actual usage of the term. This is because Hume made
a distinction between pleasure that the perception of virtue generates
in the observer, and social utility, which consisted in a trait's
having tangible benefits for society, any instance of which may or may
not generate pleasure in the observer. But Bentham is not simply
reformulating a Humean position -- rather, he has been
influenced by Hume's arguments to see pleasure as a measure or
standard of moral value. So, why not move from pleasurable
*responses* to traits to pleasure as a kind of
*consequence* which is good, and in relation to which, actions
are morally right or wrong? Bentham, in making this move, avoids a
problem for Hume. On Hume's view it seems that the
response -- corrected, to be sure -- determines the trait's
quality as a virtue or vice. But on Bentham's view the action
(or trait) is morally good, right, virtuous in view of the
consequences it generates, the pleasure or utility it produces, which
could be completely independent of what our responses are to the
trait. So, unless Hume endorses a kind of ideal observer test for
virtue, it will be harder for him to account for how it is people make
mistakes in evaluations of virtue and vice. Bentham, on the other
hand, can say that people may not respond to the action's good
qualities -- perhaps they don't perceive the good
effects. But as long as there are these good effects which are, on
balance, better than the effects of any alternative course of action,
then the action is the right one. Rhetorically, anyway, one can see
why this is an important move for Bentham to be able to make. He was a
social reformer. He felt that people often had responses to certain
actions -- of pleasure or disgust -- that did not reflect anything
morally significant at all. Indeed, in his discussions of
homosexuality, for example, he explicitly notes that
'antipathy' is not sufficient reason to legislate against
a practice:
>
>
> The circumstances from which this antipathy may have taken its rise
> may be worth enquiring to.... One is the physical antipathy to
> the offence.... The act is to the highest degree odious and
> disgusting, that is, not to the man who does it, for he does it only
> because it gives him pleasure, but to one who thinks [?] of it. Be it
> so, but what is that to him? (Bentham *OAO*, v. 4, 94)
Bentham then notes that people are prone to use their physical
antipathy as a pretext to transition to moral antipathy, and the
attending desire to punish the persons who offend their taste.
This is illegitimate on his view for a variety of reasons, one of which
is that to punish a person for violations of taste, or on the basis of
prejudice, would result in runaway punishments, "...one
should never know where to stop..." The prejudice in
question can be dealt with by showing it "to be
ill-grounded". This reduces the antipathy to the act in
question. This demonstrates an optimism in Bentham. If a
pain can be demonstrated to be based on false beliefs then he believes
that it can be altered or at the very least 'assuaged and
reduced'. This is distinct from the view that a pain or
pleasure based on a false belief should be discounted. Bentham
does not believe the latter. Thus Bentham's hedonism is a
very straightforward hedonism. The one intrinsic good is
pleasure, the bad is pain. We are to promote pleasure and act to
reduce pain. When called upon to make a moral decision one
measures an action's value with respect to pleasure and pain
according to the following: intensity (how strong the pleasure or pain
is), duration (how long it lasts), certainty (how likely the pleasure
or pain is to be the result of the action), proximity (how close the
sensation will be to performance of the action), fecundity (how likely
it is to lead to further pleasures or pains), purity (how much
intermixture there is with the other sensation). One also
considers extent -- the number of people affected by the
action.
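Bentham never reduced these parameters to a single explicit formula, but one natural way to operationalize them is to score each pleasure or pain as intensity times duration, discounted by certainty, and then sum across everyone affected (extent). The sketch below is a hypothetical reading along those lines, with invented numeric scales; fecundity and purity are folded in simply as further follow-on episodes:

```python
# Hypothetical operationalization of Bentham's felicific parameters.
# Bentham gave no explicit formula; intensity * duration, weighted by
# certainty (treated as a probability), is one natural reading. Follow-on
# pleasures and pains (fecundity, purity) appear as extra episodes.

def hedonic_value(episodes):
    """episodes: list of (intensity, duration, certainty) tuples.
    Pains carry negative intensity; certainty is in [0, 1]."""
    return sum(i * d * c for i, d, c in episodes)

def action_value(affected_people):
    """affected_people: one episode-list per person (the 'extent')."""
    return sum(hedonic_value(eps) for eps in affected_people)

# Kicking someone: a small, certain pleasure for the kicker against a
# larger, certain pain for the victim -- the calculus comes out negative.
kick = action_value([
    [(1.0, 0.5, 1.0)],    # kicker's fleeting pleasure
    [(-5.0, 2.0, 1.0)],   # victim's pain
])
assert kick < 0
```

This also illustrates why Bentham licenses rules of thumb: once experience shows that such sums reliably come out negative for a type of action, recomputing them case by case is wasted effort.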
Keeping track of all of these parameters can be complicated and time
consuming. Bentham does not recommend that they figure into every
act of moral deliberation because of the efficiency costs which need to
be considered. Experience can guide us. We know that the
pleasure of kicking someone is generally outweighed by the pain
inflicted on that person, so such calculations when confronted with a
temptation to kick someone are unnecessary. It is reasonable to
judge it wrong on the basis of past experience or consensus. One
can use 'rules of thumb' to guide action, but these rules
are overridable when abiding by them would conflict with the promotion
of the good.
Bentham's view was surprising to many at the time at least in part
because he viewed the moral quality of an action to be determined
instrumentally. It isn't so much that there is a particular kind of
action that is intrinsically wrong; actions that are wrong are wrong
simply in virtue of their effects, thus, instrumentally wrong. This
cut against the view that there are some actions that by their very
nature are just wrong, regardless of their effects. Some may be wrong
because they are 'unnatural' -- and, again, Bentham
would dismiss this as a legitimate criterion. Some may be wrong
because they violate liberty, or autonomy. Again, Bentham would view
liberty and autonomy as good -- but good instrumentally, not
intrinsically. Thus, any action deemed wrong due to a violation of
autonomy is derivatively wrong on instrumental grounds as well. This
is interesting in moral philosophy -- as it is far removed from
the Kantian approach to moral evaluation as well as from natural law
approaches. It is also interesting in terms of political philosophy
and social policy. On Bentham's view the law is not monolithic and
immutable. Since effects of a given policy may change, the moral
quality of the policy may change as well. Nancy Rosenblum noted that
for Bentham one doesn't simply decide on good laws and leave it at
that: "Lawmaking must be recognized as a continual process in
response to diverse and changing desires that require
adjustment" (Rosenblum 1978, 9). A law that is good at one point
in time may be a bad law at some other point in time. Thus, lawmakers
have to be sensitive to changing social circumstances. To be fair to
Bentham's critics, of course, they are free to agree with him that
this is the case in many situations, just not all -- and that
there is still a subset of laws that reflect the fact that some
actions just are intrinsically wrong regardless of
consequences. Bentham is in the much more difficult position of
arguing that effects are all there are to moral evaluation of action
and policy.
### 2.2 John Stuart Mill
John Stuart Mill (1806-1873) was a follower of Bentham, and, through
most of his life, greatly admired Bentham's work even though he
disagreed with some of Bentham's claims -- particularly on
the nature of 'happiness.' Bentham, recall, had
held that there were no qualitative differences between pleasures, only
quantitative ones. This left him open to a variety of
criticisms. First, Bentham's Hedonism was too
egalitarian. Simple-minded pleasures, sensual pleasures, were
just as good, at least intrinsically, as more sophisticated and
complex pleasures. The pleasure of drinking a beer in front of
the T.V. surely doesn't rate as highly as the pleasure one gets
solving a complicated math problem, or reading a poem, or listening to
Mozart. Second, Bentham's view that there were no
qualitative differences in pleasures also left him open to the
complaint that on his view human pleasures were of no more value than
animal pleasures and, third, committed him to the corollary that the
moral status of animals, tied to their sentience, was the same as that
of humans. While harming a puppy and harming a person are both
bad, however, most people had the view that harming the person was
worse. Mill sought changes to the theory that could accommodate
those sorts of intuitions.
To this end, Mill's hedonism was influenced by perfectionist
intuitions. There are some pleasures that are more fitting than
others. Intellectual pleasures are of a higher, better, sort than
the ones that are merely sensual, and that we share with animals.
To some this seems to mean that Mill really wasn't a hedonistic
utilitarian. His view of the good did radically depart from
Bentham's view. However, like Bentham, the good still
consists in pleasure, it is still a psychological state. There is
certainly that similarity. Further, the basic structures of the
theories are the same (for more on this see Donner 1991). While it is
true that Mill is more comfortable with notions like
'rights' this does not mean that he, in actuality, rejected
utilitarianism. The rationale for all the rights he recognizes is
utilitarian.
Mill's 'proof' of the claim that intellectual
pleasures are better *in kind* than others, though, is highly
suspect. He doesn't attempt a mere appeal to raw
intuition. Instead, he argues that those persons who have experienced
both view the higher as better than the lower. Who would rather be a
happy oyster, living an enormously long life, than a person living a
normal life? Or, to use his most famous example -- it is better to
be Socrates 'dissatisfied' than a fool
'satisfied.' In this way Mill was able to solve a problem
for utilitarianism.
Mill also argued that the principle could be proven, using another
rather notorious argument:
>
> The only proof capable of being given that an object is visible is
> that people actually see it.... In like manner, I apprehend, the
> sole evidence it is possible to produce that anything is desirable is
> that people do actually desire it. If the end which the utilitarian
> doctrine proposes to itself were not, in theory and in practice,
> acknowledged to be an end, nothing could ever convince any person that
> it was so. (Mill, U, 81)
Mill then continues to argue that people desire happiness -- the
utilitarian end -- and that the general happiness is "a good
to the aggregate of all persons." (81)
G. E. Moore (1873-1958) criticized this as fallacious. He
argued that it rested on an obvious ambiguity:
>
>
> Mill has made as naive and artless a use of the naturalistic
> fallacy as anybody could desire. "Good", he tells us,
> means "desirable", and you can only find out what is
> desirable by seeking to find out what is actually desired....
> The fact is that "desirable" does not mean "able to
> be desired" as "visible" means "able to be
> seen." The desirable means simply what *ought* to be
> desired or deserves to be desired; just as the detestable means not
> what can be but what ought to be detested... (Moore, PE, 66-7)
>
It should be noted, however, that Mill was offering this as an
alternative to Bentham's view which had been itself criticized as
a 'swine morality,' locating the good in pleasure in a kind
of indiscriminate way. The distinctions he makes strike many as
intuitively plausible ones. Bentham, however, can accommodate
many of the same intuitions within his system. This is because he
notes that there are a variety of parameters along which we
quantitatively measure pleasure -- intensity and duration are just two
of those. His complete list is the following: *intensity,
duration, certainty or uncertainty, propinquity or remoteness,
fecundity, purity,* and *extent.* Thus, what Mill
calls the intellectual pleasures will score more highly than the
sensual ones along several parameters, and this could give us reason to
prefer those pleasures -- but it is a quantitative not a qualitative
reason, on Bentham's view. When a student decides to
study for an exam rather than go to a party, for example, she is making
the best decision even though she is sacrificing short term
pleasure. That's because studying for the exam, Bentham
could argue, scores higher in terms of the long term pleasures doing
well in school leads to, as well as the fecundity of the pleasure in
leading to yet other pleasures. However, Bentham will have to
concede that the very happy oyster that lives a very long time could,
in principle, have a better life than a normal human.
Mill's version of utilitarianism differed from Bentham's
also in that he placed weight on the effectiveness of internal
sanctions -- emotions like guilt and remorse which serve to
regulate our actions. This is an off-shoot of the different view
of human nature adopted by Mill. We are the sorts of beings that
have social feelings, feelings for others, not just ourselves. We
care about them, and when we perceive harms to them this causes painful
experiences in us. When one perceives oneself to be the agent of
that harm, the negative emotions are centered on the self. One
feels guilt for what one has done, not for what one sees another
doing. Like external forms of punishment, internal sanctions are
instrumentally very important to appropriate action. Mill also
held that natural features of human psychology, such as conscience and
a sense of justice, underwrite motivation. The sense of justice,
for example, results from very natural impulses. Part of this
sense involves a desire to punish those who have harmed others, and
this desire in turn "...is a spontaneous outgrowth from two
sentiments, both in the highest degree natural...; the impulse of
self-defense, and the feeling of sympathy." (Chapter 5,
*Utilitarianism*) Of course, he goes on, the justification
must be a separate issue. The feeling is there naturally, but it
is our 'enlarged' sense, our capacity to include the
welfare of others into our considerations, and make intelligent
decisions, that gives it the right normative force.
Like Bentham, Mill sought to use utilitarianism to inform law and
social policy. The aim of increasing happiness underlies his
arguments for women's suffrage and free speech. We can be
said to have certain rights, then -- but those rights are
underwritten by utility. If one can show that a purported right
or duty is harmful, then one has shown that it is not genuine.
One of Mill's most famous arguments to this effect can be found in his
writing on women's suffrage when he discusses the ideal marriage
of partners, noting that the ideal exists between individuals of
"cultivated faculties" who influence each other
equally. Improving the social status of women was important
because they were capable of these cultivated faculties, and denying
them access to education and other opportunities for development is
forgoing a significant source of happiness. Further, the men who
would deny women the opportunity for education, self-improvement, and
political expression do so out of base motives, and the resulting
pleasures are not ones that are of the best sort.
Bentham and Mill both attacked social traditions that were justified
by appeals to natural order. The correct appeal is to utility
itself. Traditions often turned out to be "relics"
of "barbarous" times, and appeals to nature as a form
of justification were just ways to try to rationalize continued deference
to those
relics.
In the latter part of the 20th century some writers criticized
utilitarianism for its failure to accommodate virtue evaluation.
However, though virtue is not the central normative concept in Mill's
theory, it is an extremely important one. In Chapter 4 of
*Utilitarianism* Mill noted
> ... does the utilitarian doctrine deny that people
> desire virtue, or maintain that virtue is not a thing to be desired?
> The very reverse. It maintains not only that virtue is to be desired,
> but also that it is to be desired disinterestedly, for
> itself. Whatever may be the opinion of utilitarian moralists as to the
> original conditions by which virtue is made virtue ... they not only
> place virtue at the very head of things which are good as a means to
> the ultimate end, but they also recognize as a psychological fact the
> possibility of its being, to the individual, a good in itself, without
> looking to any end beyond it; and hold, that the mind is not in a
> right state, not in a state conformable to Utility, not in the state
> most conducive to the general happiness, unless it does love virtue in
> this manner ...
In *Utilitarianism* Mill argues that virtue not only has
instrumental value, but is constitutive of the good life. A person
without virtue is morally lacking, is not as able to promote the good.
However, this view of virtue is somewhat complicated by rather cryptic
remarks Mill makes about virtue in his *A System of Logic* in
the section in which he discusses the "Art of Life." There he seems
to associate virtue with aesthetics, and morality is reserved for the
sphere of 'right' or 'duty'. Wendy Donner
notes that separating virtue from right allows Mill to solve another
problem for the theory: the demandingness problem (Donner 2011). This
is the problem that holds that if we ought to maximize utility, if
that is the right thing to do, then doing right requires enormous
sacrifices (under actual conditions), and that requiring such
sacrifices is too demanding. With duties, on Mill's view, it is
important that we get compliance, and that justifies coercion. In the
case of virtue, however, virtuous actions are those which it is
"...for the general interest that they remain free."
## 3. Henry Sidgwick
Henry Sidgwick's (1838-1900) *The Methods of Ethics* (1874) is
one of the most well known works in utilitarian moral philosophy, and
deservedly so. It offers a defense of utilitarianism, though some
writers (Schneewind 1977) have argued that it should not primarily be
read as a defense of utilitarianism. In *The Methods* Sidgwick
is concerned with developing an account of "...the
different methods of Ethics that I find implicit in our common moral
reasoning..." These methods are egoism, intuition based
morality, and utilitarianism. On Sidgwick's view, utilitarianism is
the more basic theory. A simple reliance on intuition, for example,
cannot resolve fundamental conflicts between values, or rules, such as
Truth and Justice that may conflict. In Sidgwick's words
"...we require some higher principle to decide the
issue..." That will be utilitarianism. Further, the rules
which seem to be a fundamental part of common sense morality are often
vague and underdescribed, and applying them will actually require
appeal to something theoretically more basic -- again,
utilitarianism. Yet further, absolute interpretations of rules seem
highly counter-intuitive, and yet we need some justification for any
exceptions -- provided, again, by utilitarianism. Sidgwick
provides a compelling case for the theoretical primacy of
utilitarianism.
Sidgwick was also a British philosopher, and his views developed out
of and in response to those of Bentham and Mill. His
*Methods* offer an engagement with the theory as it had been
presented before him, and was an exploration of it and the main
alternatives as well as a defense.
Sidgwick was also concerned with clarifying fundamental features of
the theory, and in this respect his account has been enormously
influential to later writers, not only to utilitarians and
consequentialists, generally, but to intuitionists as well.
Sidgwick's thorough and penetrating discussion of the theory
raised many of the concerns that have been developed by recent moral
philosophers.
One extremely controversial feature of Sidgwick's views
relates to his rejection of a publicity requirement for moral
theory. He writes:
>
>
> Thus, the Utilitarian conclusion, carefully stated, would seem to be
> this; that the opinion that secrecy may render an action right which
> would not otherwise be so should itself be kept comparatively secret;
> and similarly it seems expedient that the doctrine that esoteric
> morality is expedient should itself be kept esoteric. Or, if this
> concealment be difficult to maintain, it may be desirable that Common
> Sense should repudiate the doctrines which it is expedient to confine
> to an enlightened few. And thus a Utilitarian may reasonably desire,
> on Utilitarian principles, that some of his conclusions should be
> rejected by mankind generally; or even that the vulgar should keep
> aloof from his system as a whole, in so far as the inevitable
> indefiniteness and complexity of its calculations render it likely to
> lead to bad results in their hands. (490)
This accepts that utilitarianism may be self-effacing; that is, that
it may be best if people do not believe it, even though it is
true. Further, it rendered the theory subject to Bernard
Williams' (1995) criticism that the theory really simply
reflected the colonial elitism of Sidgwick's time, that it was
'Government House Utilitarianism.' The elitism in his
remarks may reflect a broader attitude, one in which the educated are
considered better policy makers than the uneducated.
One issue raised in the above remarks is relevant to practical
deliberation in general. To what extent should proponents of a
given theory, or a given rule, or a given policy -- or even
proponents of a given one-off action -- consider what they think
people will *actually* do, as opposed to what they think those
same people *ought* to do (under full and reasonable reflection,
for example)? This is an example of something that comes up in
the actualism/possibilism debate in accounts of practical
deliberation. Extrapolating from the example used above, we have
people who advocate telling the truth, or what they believe to be the
truth, even if the effects are bad because the truth is somehow misused
by others. On the other hand are those who recommend not telling
the truth when it is predicted that the truth will be misused by others
to achieve bad results. Of course it is the case that the truth
ought not be misused, that its misuse can be avoided and is not
inevitable, but the misuse is entirely predictable. Sidgwick
seems to recommend that we follow the course that we predict will
have the best outcome, given as part of our calculations the data that
others may fail in some way -- either due to having bad desires,
or simply not being able to reason effectively. The worry
Williams points to really isn't a worry specifically with
utilitarianism (Driver 2011). Sidgwick would point out
that if it is bad to hide the truth, because 'Government
House' types, for example, typically engage in self-deceptive
rationalizations of their policies (which seems entirely plausible),
then one shouldn't do it. And of course, that heavily
influences our intuitions.
Sidgwick raised issues that run much deeper to our basic
understanding of utilitarianism. For example, the way earlier
utilitarians characterized the principle of utility left open serious
indeterminacies. The major one rests on the distinction between
total and average utility. He raised the issue in the context of
population growth and increasing utility levels by increasing numbers
of people (or sentient beings):
>
>
> Assuming, then, that the average happiness of human beings is a
> positive quantity, it seems clear that, supposing the average
> happiness enjoyed remains undiminished, Utilitarianism directs us to
> make the number enjoying it as great as possible. But if we foresee as
> possible that an increase in numbers will be accompanied by a decrease
> in average happiness or *vice versa*, a point arises which has
> not only never been formally noticed, but which seems to have been
> substantially overlooked by many Utilitarians. For if we take
> Utilitarianism to prescribe, as the ultimate end of action, happiness
> on the whole, and not any individual's happiness, unless
> considered as an element of the whole, it would follow that, if the
> additional population enjoy on the whole positive happiness, we ought
> to weigh the amount of happiness gained by the extra number against
> the amount lost by the remainder. (415)
For Sidgwick, the conclusion on this issue is not to simply strive
to greater average utility, but to increase population to the point
where we maximize the product of the number of persons who are
currently alive and the amount of average happiness. So it seems
to be a hybrid, total-average view. This discussion also raised
the issue of policy with respect to population growth, and both would
be pursued in more detail by later writers, most notably Derek Parfit
(1986).
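Sidgwick's point can be made concrete: since total happiness is just population size times average happiness, the total and average standards can rank the same pair of populations differently. A minimal sketch, with purely illustrative numbers:

```python
# Total vs. average utility: total = population size * average happiness,
# so a larger but somewhat less happy population can beat a smaller,
# happier one on the total view while losing on the average view.

def total_utility(population_size, average_happiness):
    return population_size * average_happiness

small_happy = (1_000, 10.0)      # (size, average happiness)
large_less_happy = (2_000, 6.0)

# The average view prefers the small population...
assert small_happy[1] > large_less_happy[1]
# ...but the total view prefers the larger one.
assert total_utility(*large_less_happy) > total_utility(*small_happy)
```

This divergence is exactly the space in which Parfit's later population-ethics puzzles arise.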
## 4. Ideal Utilitarianism
G. E. Moore strongly disagreed with the hedonistic value theory
adopted by the Classical Utilitarians. Moore agreed that we ought
to promote the good, but believed that the good included far more than
what could be reduced to pleasure. He was a pluralist, rather
than a monist, regarding intrinsic value. For example, he
believed that 'beauty' was an intrinsic good. A
beautiful object had value independent of any pleasure it might
generate in a viewer. Thus, Moore differed from Sidgwick who
regarded the good as consisting in some consciousness. Some objective
states in the world are intrinsically good, and on Moore's view,
beauty is just such a state. He used one of his more notorious
thought experiments to make this point: he asked the reader to
compare two worlds, one was entirely beautiful, full of things which
complemented each other; the other was a hideous, ugly world, filled
with "everything that is most disgusting to us." Further, one is to
imagine that there are no human beings around to appreciate or be
disgusted by either world. The question then is, which of these worlds is
better, which one's existence would be better than the
other's? Of course, Moore believed it was clear that the
beautiful world was better, even though no one was around to appreciate
its beauty. This emphasis on beauty was one facet of
Moore's work that made him a darling of the Bloomsbury
Group. If beauty was a part of the good independent of its
effects on the psychological states of others -- independent of,
really, how it affected others, then one needn't sacrifice
morality on the altar of beauty anymore. Following beauty is not
a mere indulgence, but may even be a moral obligation. Though
Moore himself certainly never applied his view to such cases, it does
provide the resources for dealing with what the contemporary literature
has dubbed 'admirable immorality' cases, at least some of
them. Gauguin may have abandoned his wife and children, but it
was to a beautiful end.
Moore's targets in arguing against hedonism were the earlier
utilitarians who argued that the good was some state of consciousness
such as pleasure. He actually waffled on this issue a bit, but
always disagreed with Hedonism in that even when he held that beauty
all by itself was not an intrinsic good, he also held that for the
appreciation of beauty to be a good the beauty must actually be there,
in the world, and not be the result of illusion.
Moore further criticized the view that pleasure *itself* was
an intrinsic good, since it failed a kind of isolation test that he
proposed for intrinsic value. If one compared an empty universe
with a universe of sadists, the empty universe would strike one as
better. This is true even though there is a good deal of
pleasure, and no pain, in the universe of sadists. This would
seem to indicate that what is necessary for the good is at least the
absence of bad intentionality. The pleasures of sadists, in
virtue of their desires to harm others, get discounted -- they are
not good, even though they are pleasures. Note this radical
departure from Bentham who held that even malicious pleasure was
intrinsically good, and that if nothing instrumentally bad attached to the
pleasure, it was wholly good as well.
One of Moore's important contributions was to put forward an
'organic unity' or 'organic whole' view of
value. The principle of organic unity is vague, and there is some
disagreement about what Moore actually meant in presenting it.
Moore states that 'organic' is used "...to
denote the fact that a whole has an intrinsic value different in amount
from the sum of the values of its parts." (PE, 36) And, for
Moore, that is all it is supposed to denote. So, for example, one
cannot determine the value of a body by adding up the value of its
parts. Some parts of the body may have value only in relation to
the whole. An arm or a leg, for example, may have no value at all
separated from the body, but have a great deal of value attached to the
body, and may even increase the body's value. In the section of
*Principia Ethica* on the Ideal, the principle of organic unity
comes into play in noting that when persons experience pleasure through
perception of something beautiful (which involves a positive emotion in
the face of a recognition of an appropriate object -- an emotive
and cognitive set of elements), the experience of the beauty is better
when the object of the experience, the beautiful object, actually
exists. The idea was that experiencing beauty has a small positive
value, and existence of beauty has a small positive value, but
combining them has a great deal of value, more than the simple addition
of the two small values (PE, 189 ff.). Moore noted:
"A true belief in the reality of an object greatly increases the
value of many valuable wholes..." (199).
This principle in Moore -- particularly as applied to the
significance of actual existence and value, or knowledge and value
-- provided utilitarians with tools to meet some significant
challenges. For example, deluded happiness would be severely
lacking on Moore's view, especially in comparison to happiness
based on knowledge.
## 5. Conclusion
Since the early 20th Century utilitarianism has undergone a variety
of refinements. Since the middle of the 20th Century it has become
more common for philosophers to identify as 'Consequentialists',
since very few agree entirely with the view proposed by
the Classical Utilitarians, particularly with respect to the hedonistic
value theory. But the influence of the Classical Utilitarians has
been profound -- not only within moral philosophy, but within
political philosophy and social policy. The question Bentham
asked, "What use is it?," is a cornerstone of policy
formation. It is a completely secular, forward-looking
question. The articulation and systematic development of this
approach to policy formation is owed to the Classical Utilitarians.
## 1. Utilitarianism
A moral theory is a form of consequentialism if and only if it
assesses acts and/or character traits, practices, and institutions
solely in terms of the goodness of the consequences. Historically,
utilitarianism has been the best-known form of consequentialism.
Utilitarianism assesses acts and/or character traits, practices, and
institutions solely in terms of overall net benefit. Overall net
benefit is often referred to as aggregate well-being or welfare.
Aggregate welfare is calculated by counting a benefit or harm to any
one individual the same as the same size benefit or harm to any other
individual, and then adding all the benefits and harms together to
reach an aggregate sum. There is considerable dispute among
consequentialists about what the best account of welfare is.
## 2. Welfare
Classical utilitarians (i.e., Jeremy Bentham, J.S. Mill, and Henry
Sidgwick) took benefit and harm to be purely a matter of pleasure and
pain. The view that welfare is a matter of pleasure minus pain has
generally been called hedonism. It has grown in sophistication (Parfit
1984: Appendix I; Sumner 1996; Crisp 2006; de Lazari-Radek and Singer
2014: ch. 9) but remains committed to the thesis that how well
someone's life goes depends *entirely* on his or her
pleasure minus pain, albeit with pleasure and pain being construed
very broadly.
Even if pleasures and pains are construed very broadly, hedonism
encounters difficulties. The main one is that many (if not all) people
care very strongly about things other than their own pleasures and
pains. Of course these other things can be important as means to
pleasures and to the avoidance of pain. But many people care very
strongly about things over and beyond their hedonistic instrumental
value. For example, many people want to know the truth about various
matters even if this won't increase their (or anyone
else's) pleasure. Another example is that many people care about
achieving things over and beyond the pleasure such achievements might
produce. Again, many people care about the welfare of their family and
friends in a non-instrumental way. A rival account of these points,
especially the last, is that people care about many things other than
their own welfare.
On any plausible view of welfare, the satisfaction people can feel
when their desires are fulfilled constitutes an addition to their
welfare. Likewise, on any plausible view, frustration felt as a result
of unfulfilled desires constitutes a reduction in welfare. What is
controversial is whether the fulfilment of someone's desire
constitutes a benefit to that person apart from any effect that the
fulfilment of the desire has on that person's felt satisfaction
or frustration. Hedonism answers No, claiming that only effects on
felt satisfaction or felt frustration matter.
A different theory of welfare answers Yes. This theory holds that the
fulfilment of any desire of the agent's constitutes a benefit to
the agent, even if the agent never knows that desire has been
fulfilled and even if the agent derives no pleasure from its
fulfilment. This theory of human welfare is often referred to as the
*desire-fulfillment theory of welfare*.
Clearly, the desire-fulfillment theory of welfare is broader than
hedonism, in that the desire-fulfillment theory accepts that what can
constitute a benefit is wider than merely pleasure. But there are
reasons for thinking that this broader theory is too broad. For one
thing, people can have sensible desires that are simply too
disconnected from their own lives to be relevant to their own welfare
(Williams 1973a: 262; Overvold 1980, 1982; Parfit 1984: 494). I desire
that the starving in far-away countries get food. But the fulfilment
of this desire of mine does not benefit *me*.
For another thing, people can have desires for absurd things for
themselves. Suppose I desire to count all the blades of grass in the
lawns on this road. If I get satisfaction out of doing this, the felt
satisfaction constitutes a benefit to me. But the bare fulfilment of
my desire to count all the blades of grass in the lawns on this road
does not (Rawls 1971: 432; Parfit 1984: 500; Crisp 1997: 56).
On careful reflection, we might think that the fulfilment of
someone's desire constitutes an addition to that person's
welfare if and *only* if that desire has one of a certain set
of contents. We might think, for example, that the fulfilment of
someone's desire for pleasure, friendship, knowledge,
achievement, or autonomy for herself *does* constitute an
addition to her welfare, and that the fulfilment of any desires she
might have for things that do not fall into these categories does not
directly benefit her (though, again, the pleasure she derives from
their satisfaction does). If we think this, it seems we think there is
a list of things that constitute anyone's welfare (Parfit 1984:
Appendix I; Brink 1989: 221-36; Griffin 1996: ch. 2; Crisp 1997:
ch. 3; Gert 1998: 92-4; Arneson 1999a).
Insofar as the goods to be promoted are parts of welfare, the theory
remains utilitarian. There is a lot to be said for utilitarianism.
Obviously, how lives go is important. And there is something deeply
attractive (if not downright irresistible) in the idea that morality
is fundamentally impartial, i.e., the idea that, at the most
fundamental level of morality, everyone is equally important --
women and men, strong and weak, rich and poor, Blacks, Whites,
Hispanics, Asians, etc. And utilitarianism plausibly interprets this
equal importance as dictating that in the calculation of overall
welfare a benefit or harm to any one individual counts neither more
nor less than the same size benefit or harm to any other
individual.
## 3. Other Goods To Be Promoted
The nonutilitarian members of the consequentialist family are theories
that assess acts and/or character traits, practices, and institutions
solely in terms of resulting good, *where good is not restricted to
welfare*. "Nonutilitarian" here means "not
purely utilitarian", rather than "completely
unutilitarian". When writers describe themselves as
consequentialists rather than as utilitarians, they are normally
signalling that their fundamental evaluations will be in terms of not
only welfare but also some other goods.
What are these other goods? The most common answers have been justice,
fairness, and equality.
Justice, according to Plato, is "rendering to each his
due" (*Republic*, Bk. 1). We might suppose that what
people are due is a matter of what people are owed, either because
they deserve it or because they have a moral right to it. Suppose we
plug these ideas into consequentialism. Then we get the theory that
things should be assessed in terms of not only how much welfare
results but also the extent to which people get what they deserve and
the extent to which moral rights are respected.
For consequentialism to take this line, however, is for it to restrict
its explanatory ambitions. What a theory simply presupposes, it does
not explain. A consequentialist theory that presupposes both that
justice is constituted by such-and-such and that justice is one of the
things to be promoted does not explain why the components of justice
are important. It does not explain what desert is. It does not explain
the importance of moral rights, much less try to determine what the
contents of these moral rights are. These are matters too important
and contentious for a consequentialist theory to leave unexplained or
open. If consequentialism is going to refer to justice, desert, and
moral rights, it needs to analyze these concepts and justify the role
it gives them.
Similar things can be said about fairness. If a consequentialist
theory presupposes an account of fairness, and simply stipulates that
fairness is to be promoted, then this consequentialist theory is not
explaining fairness. But fairness (like justice, desert, and moral
rights) is a concept too important for consequentialism not to try to
explain.
One way for consequentialists to deal with justice and fairness is to
contend that justice and fairness are constituted by conformity with a
certain set of justified social practices, and that what justifies
these practices is that they generally promote overall welfare and
equality. Indeed, the contention might be that what people are due,
what people have a moral right to, what justice and fairness require,
is conformity to whatever practices promote overall welfare and
equality.
Whether equality needs to be included in the formula, however, is very
controversial. Many think that a purely utilitarian formula has
sufficiently egalitarian implications. They think that, even if the
goal is promotion of welfare, not the promotion of
welfare-plus-equality, there are some contingent but pervasive facts
about human beings that push in the direction of equal distribution of
material resources. According to the "law of diminishing
marginal utility of material resources", the amount of benefit a
person gets out of a certain unit of material resources is less the
more units of that material good the person already has. Suppose I go
from having no way of getting around except by foot to having a
bicycle, or, though I live in a place where one can get very cold, I
go from having no warm coat to having one. I will benefit more from
getting that first bicycle or coat than I would if I go from having
nine bicycles or coats to having ten.
There are exceptions to the law of diminishing marginal utility. In
most of these exceptions, an additional unit of material resource
pushes someone over some important threshold. For example, consider
the meal or pill or gulp of air that saves someone's life, or
the car whose acquisition pushes the competitive collector into first
place. In such cases, the unit that puts the person over the threshold
might well be as beneficial to that person as any prior unit was.
Still, as a general rule, material resources do have diminishing
marginal utility.
To the assumption that material resources have diminishing marginal
utility, let us add the assumption that different people generally get
*roughly* the same benefits from the same material resources.
Again, there are exceptions. If you live in a freezing climate and I
live in a hot climate, then you would benefit much more from a warm
coat than I would.
But suppose we live in the same place, which has freezing winters,
good paths for riding bicycles, and no public transportation. And
suppose you have ten bicycles and ten coats (though you are not vying
for some bicycle- or coat-collector prize). Meanwhile, I am so poor
that I have none. Then, redistributing one of your bicycles and one of
your coats to me will probably harm you less than it will benefit me.
This sort of phenomenon pervades societies where resources are
unequally distributed. Wherever the phenomenon occurs, a fundamentally
impartial morality is under pressure to redistribute resources from
the richer to the poorer.
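The two assumptions just made (diminishing marginal utility of resources, and roughly equal benefit from equal resources) can be put in miniature arithmetic form. A minimal sketch, using a square-root welfare function purely as an illustrative stand-in for any concave function (the function choice and the numbers are assumptions, not from the text):

```python
import math

def welfare(resources):
    # Concave: each extra unit of resources adds less welfare than
    # the unit before it (diminishing marginal utility).
    return math.sqrt(resources)

# Same welfare function for both people (the "roughly equal benefit"
# assumption); e.g., ten coats versus none.
rich, poor = 10, 0

before = welfare(rich) + welfare(poor)
after = welfare(rich - 1) + welfare(poor + 1)

# Transferring one unit from rich to poor raises the impartial sum.
assert after > before
```

With a square root, ten units yield about 3.16 welfare units, while nine plus one yield exactly 4, so the transfer raises the impartial total; any strictly concave welfare function points in the same direction.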
However, there are also contingent but pervasive facts about human
beings that pull in favor of practices that have the foreseen
consequence of material inequality. First of all, higher levels of
overall welfare can require higher levels of productivity (think of
the welfare gains resulting from improvements in agricultural
productivity). In many areas of the economy, the provision of material
rewards for greater productivity seems the most efficient acceptable
way of eliciting higher productivity. Some individuals and groups will
be more productive than others (especially if there are incentive
schemes). So the practice of providing material rewards for greater
productivity will result in material inequality.
Thus, on the one hand, the diminishing marginal utility of material
resources exerts pressure in favor of more equal distributions of
resources. On the other hand, the need to promote productivity exerts
pressure in favor of incentive schemes that have the foreseen
consequence of material inequality. Utilitarians and most other
consequentialists find themselves balancing these opposed
pressures.
Note that those pressures concern the distribution of resources. There
is a further question about how equally welfare itself should be
distributed. Many recent writers have taken utilitarianism to be
indifferent about the distribution of welfare. Imagine a choice
between an outcome where overall welfare is large but distributed
unequally and an outcome where overall welfare is smaller but
distributed equally. Utilitarians are taken to favor outcomes with
greater overall welfare even if it is also less equally
distributed.
To illustrate this, let us take an artificially simple population,
divided into just two groups.
>
> | **Alternative 1** | Welfare per person | Welfare per group | Total welfare for both groups |
> | --- | --- | --- | --- |
> | 10,000 people in group A | 1 | 10,000 | |
> | 100,000 people in group B | 10 | 1,000,000 | |
> | | | | *Impartially* calculated: 1,010,000 |
>
> | **Alternative 2** | Welfare per person | Welfare per group | Total welfare for both groups |
> | --- | --- | --- | --- |
> | 10,000 people in group A | 8 | 80,000 | |
> | 100,000 people in group B | 9 | 900,000 | |
> | | | | *Impartially* calculated: 980,000 |
>
Many people would think Alternative 2 above better than Alternative 1,
and might think that the comparison between these alternatives shows
that there is always pressure in favor of greater equality of
welfare.
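The impartial totals in the tables are just size-weighted sums: each person counts equally, so aggregate welfare is group size times welfare per person, summed over groups. A sketch of that calculation (group figures taken from the tables above):

```python
def impartial_total(groups):
    # groups: list of (number of people, welfare per person) pairs;
    # impartiality means every person's welfare counts the same.
    return sum(people * per_person for people, per_person in groups)

alt1 = [(10_000, 1), (100_000, 10)]
alt2 = [(10_000, 8), (100_000, 9)]

print(impartial_total(alt1))  # 1,010,000
print(impartial_total(alt2))  # 980,000
```

Plain utilitarianism ranks alternatives by these totals alone, which is why it is taken to favor Alternative 1 despite its greater inequality.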
As Derek Parfit (1997) in particular has argued, however, we must not
be too hasty. Consider the following choice:
>
> | **Alternative 1** | Welfare per person | Welfare per group | Total welfare for both groups |
> | --- | --- | --- | --- |
> | 10,000 people in group A | 1 | 10,000 | |
> | 100,000 people in group B | 10 | 1,000,000 | |
> | | | | *Impartially* calculated: 1,010,000 |
>
> | **Alternative 3** | Welfare per person | Welfare per group | Total welfare for both groups |
> | --- | --- | --- | --- |
> | 10,000 people in group A | 1 | 10,000 | |
> | 100,000 people in group B | 1 | 100,000 | |
> | | | | *Impartially* calculated: 110,000 |
>
Is equality of welfare so important that Alternative 3 is superior to
Alternative 1? To take an example of Parfit's, suppose the only
way to make everyone equal with respect to sight is to make everyone
totally blind. Is such "levelling down" required by
morality? Indeed, is it in any way at all morally desirable?
If we think the answer is No, then we might think that equality of
welfare as such is not really an ideal (cf. Temkin 1993). Losses to
the better off are justified only where this benefits the worse off.
What we had thought of as pressure in favor of equality of welfare was
instead pressure in favor of levelling up. We might say that additions
to welfare matter more the worse off the person is whose welfare is
affected. This view has come to be called *prioritarianism*
(Parfit 1997; Arneson 1999b). It has tremendous intuitive appeal.
For a simplistic example of how prioritarianism might work, suppose
the welfare of the worst off counts five times as much as the welfare
of the better off. Then Alternative 1 from the tables above comes out
at:
>
> \((1 \times 5 \times 10,000) + (10 \times 100,000)\),
>
which comes to 1,050,000 total units of welfare. Again with the
welfare of the worst off counting five times as much, Alternative 2
comes out at:
>
> \((8 \times 5 \times 10,000) + (9 \times 100,000)\),
>
which comes to 1,300,000 total units of welfare. This accords with the
common reaction that Alternative 2 is morally superior to Alternative
1.
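The simplistic weighting just described can be sketched as follows. The factor of five is the text's own example; applying it to whichever group has the lowest welfare is an assumption made to match how the text's figures come out:

```python
def prioritarian_total(groups, weight=5):
    # groups: list of (number of people, welfare per person) pairs.
    # The worst-off group's welfare counts `weight` times as much.
    worst = min(per_person for _, per_person in groups)
    return sum(
        (weight if per_person == worst else 1) * per_person * people
        for people, per_person in groups
    )

alt1 = [(10_000, 1), (100_000, 10)]
alt2 = [(10_000, 8), (100_000, 9)]

print(prioritarian_total(alt1))  # (1*5*10,000) + (10*100,000) = 1,050,000
print(prioritarian_total(alt2))  # (8*5*10,000) + (9*100,000) = 1,300,000
```

On this weighting Alternative 2 comes out ahead of Alternative 1, matching the common reaction the text reports, even though Alternative 1 has the larger unweighted total.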
Of course in real examples there is never only one division in
society. Rather there is a scale from the worst off, to the not quite
so badly off, and so on up to the best off. Prioritarianism is
committed to variable levels of importance of benefitting people at
different places on this scale: the worse off a person is, the greater
the importance attached to that person's gain.
This raises two serious worries about prioritarianism. The first
concerns prioritarianism's difficulty in *nonarbitrarily*
determining how much more importance to give to the welfare of the
worse off. For example, should a unit of benefit to the worst off
count 10 times more than the same size benefit to the best off and 5
times more than the same size benefit to the averagely well off? Or
should the multipliers be 20 and 10, or 4 and 2, or other amounts? The
second worry about prioritarianism is whether attaching greater
importance to increases in welfare for some than to the same size
increases in welfare for others contradicts fundamental impartiality
(Hooker 2000: 60-2).
This is not the place to go further into debates between
prioritarianism and its critics. So the rest of this article sets
aside those debates.
## 4. Full Rule-consequentialism
Consequentialists have distinguished three components of their theory:
(1) their thesis about what makes acts morally wrong, (2) their thesis
about the procedure agents should use to make their moral decisions,
and (3) their thesis about the conditions under which moral sanctions
such as blame, guilt, and praise are appropriate.
What we might call *full* rule-consequentialism consists of
rule-consequentialist criteria for all three. Thus, full
rule-consequentialism claims that an act is morally wrong if and only
if it is forbidden by rules justified by their consequences. It also
claims that agents should do their moral decision-making in terms of
rules justified by their consequences. And it claims that the
conditions under which moral sanctions should be applied are
determined by rules justified by their consequences.
Full rule-consequentialists may think that there is really only one
set of rules about these three different subject matters. Or they may
think that there are different sets that in some sense correspond to
or complement one another.
Much more important than the distinction between different kinds of
full rule-consequentialism is the distinction between full
rule-consequentialism and *partial* rule-consequentialism.
Partial rule-consequentialism might take many forms. Let us focus on
the most common form. The most common form of partial
rule-consequentialism claims that agents should make their moral
decisions about what to do by reference to rules justified by their
consequences, but does not claim that moral wrongness is determined by
rules justified by their consequences. Partial rule-consequentialists
typically subscribe to the theory that moral wrongness is determined
directly in terms of the consequences of the act compared to the
consequences of alternative possible acts. This theory of wrongness is
called act-consequentialism.
Distinguishing between full and partial rule-consequentialism
clarifies the contrast between act-consequentialism and
rule-consequentialism. Act-consequentialism is best conceived of as
maintaining merely the following:
>
> Act-consequentialist criterion of wrongness: An act is wrong if and
> only if it results in less good than would have resulted from some
> available alternative act.
>
When confronted with that criterion of moral wrongness, many people
naturally assume that the way to decide what to do is to apply the
criterion, i.e.,
>
> Act-consequentialist moral decision procedure: On each occasion, an
> agent should decide what to do by calculating which act would produce
> the most good.
>
However, consequentialists nearly never defend this
act-consequentialist decision procedure as a general and typical way
of making moral decisions (Mill 1861: ch 2; Sidgwick 1907:
405-6, 413, 489-90; Moore 1903: 162-4; Smart 1956:
346; 1973: 43, 71; Bales 1971: 257-65; Hare 1981; Parfit 1984:
24-9, 31-43; Railton 1984: 140-6, 152-3; Brink
1989: 216-7, 256-62, 274-6; Pettit and Brennan 1986;
Pettit 1991, 1994, 1997: 156-61, 2017; de Lazari-Radek and
Singer 2014: ch. 10). There are a number of compelling
consequentialist reasons why the act-consequentialist decision
procedure would be counter-productive.
First, very often the agent does not have detailed information about
what the consequences would be of various acts.
Second, obtaining such information would often involve greater costs
than are at stake in the decision to be made.
Third, even if the agent had the information needed to make
calculations, the agent might make mistakes in the calculations. (This
is especially likely when the agent's natural biases intrude, or
when the calculations are complex, or when they have to be made in a
hurry.)
Fourth, there are what we might call expectation effects. Imagine a
society in which people know that others are naturally biased towards
themselves and towards their loved ones but are trying to make their
every moral decision by calculating overall good. In such a society,
each person might well fear that others will go around breaking
promises, stealing, lying, and even assaulting whenever they convinced
themselves that such acts would produce the greatest overall good
(Woodard 2019: 149). In such a society, people would not feel they
could trust one another.
This fourth consideration is more controversial than the first three.
For example, Hodgson 1967, Hospers 1972, and Harsanyi 1982 argue that
trust would break down. Singer 1972 and Lewis 1972 argue that it would
not.
Nevertheless, most philosophers accept that, for all four of the
reasons above, using an act-consequentialist decision procedure would
not maximize the good. Hence even philosophers who espouse the
act-consequentialist criterion of moral wrongness reject the
act-consequentialist moral decision procedure. In its place, they
typically advocate the following:
>
> Rule-consequentialist decision procedure: At least normally, agents
> should decide what to do by applying rules whose acceptance will
> produce the best consequences, rules such as "Don't harm
> innocent others", "Don't steal or vandalize
> others' property", "Don't break your
> promises", "Don't lie", "Pay special
> attention to the needs of your family and friends", "Do
> good for others generally".
>
Since act-consequentialists about the criterion of wrongness typically
accept this decision procedure, act-consequentialists are in fact
partial rule-consequentialists. Often, what writers refer to as
indirect consequentialism is this combination of act-consequentialism
about wrongness and rule-consequentialism about the appropriate
decision procedure.
Standardly, the decision procedure that full rule-consequentialism
endorses is the one that it would be best for *society* to
accept. The qualification "standardly" is needed because
there are versions of rule-consequentialism that let the rules be
relativised to small groups or even individuals (D. Miller 2010; Kahn
2012). And act-consequentialism insists upon the decision procedure it
would be best for the *individual* to accept. So, according to
act-consequentialism, since Jack's and Jill's capacities
and situations may be very different, the best decision procedure for
Jack to accept may be different from the best decision procedure for
Jill to accept. However, in practice act-consequentialists typically
ignore such differences and endorse the above
rule-consequentialist decision procedure (Hare 1981, chs. 2, 3, 8, 9,
11; Levy 2000).
When act-consequentialists endorse the above rule-consequentialist
decision procedure, they acknowledge that following this decision
procedure does not guarantee that we will do the act with the best
consequences. Sometimes, for example, our following a decision
procedure that rules out harming an innocent person will prevent us
from doing that act that would produce the best consequences.
Similarly, there will be some circumstances in which stealing,
breaking our promises, etc., would produce the best consequences.
Still, our following a decision procedure that generally rules out
such acts will in the long run and on the whole probably produce far
better consequences than our trying to run consequentialist
calculations on an act-by-act basis.
Because act-consequentialists typically agree with a
rule-consequentialist decision procedure, whether to classify some
philosopher as an act-consequentialist or as a rule-consequentialist
can be problematic. For example, G.E. Moore (1903, 1912) is sometimes
classified as an act-consequentialist and sometimes as a
rule-consequentialist. Like so many others, including his teacher
Henry Sidgwick, Moore combined an act-consequentialist criterion of
moral wrongness with a rule-consequentialist procedure for deciding
what to do. Moore simply went further than most in stressing the
danger of departing from the rule-consequentialist decision procedure
(see Shaw 2000). Hare (1981) and Pettit (1991, 1994, 1997) have also
been especially influential act-consequentialists about what makes
acts right or wrong while holding that everyday decision-making should
be conducted in terms of familiar rules focused on things other than
consequences.
## 5. Global Consequentialism
Some writers propose that the purest and most consistent form of
consequentialism is the view that absolutely everything should be
assessed by its consequences, including not only acts but also rules,
motives, the imposition of sanctions, etc. Let us follow Pettit and
Smith (2000) in referring to this view as *global*
consequentialism. Kagan (2000) pictures it as multi-dimensional direct
consequentialism, in that each thing is assessed directly in terms of
whether its own consequences are as good as the consequences of
alternatives.
How does this global consequentialism differ from what we have been
calling partial rule-consequentialism? What we have been calling
partial rule-consequentialism is nothing but the combination of the
act-consequentialist criterion of moral wrongness with the
rule-consequentialist decision procedure. So defined, partial
rule-consequentialism leaves open the question of when moral sanctions
are appropriate.
Some partial rule-consequentialists say that agents should be blamed
and feel guilty whenever they fail to choose an act that would result
in the best consequences. A much more reasonable position for a
partial rule-consequentialist to take is that agents should be blamed
and feel guilty whenever they choose an act that is forbidden by the
rule-consequentialist decision procedure, whether or not that
individual act fails to result in the best consequences. Finally,
partial rule-consequentialism, as we have defined it, is compatible
with the claim that whether agents should be blamed or feel guilty
depends not on the wrongness of what they did, nor on whether the
recommended procedure for making moral decisions would have led them
to choose the act they choose, but instead solely on whether this
blame or guilt will do any good. This is precisely the view of
sanctions that global consequentialism takes.
One objection to global consequentialism is that simultaneously
applying a consequentialist criterion to acts, decision procedures,
and the imposition of sanctions leads to apparent paradoxes (Crisp
1992; Streumer 2003; Lang 2004).
Suppose, on the whole and in the long run, the best decision procedure
for you to accept is one that leads you to do act *x* now. But
suppose also that in fact the act with the best consequences in this
situation is not *x* but *y*. So global consequentialism
tells you (a) to use the best possible decision procedure but also (b)
not to do the act picked out by this decision procedure. That seems
paradoxical.
Things get worse when we consider blame and guilt. Suppose you follow
the best possible decision procedure but fail to do the act with the
best consequences. Are you to be blamed? Should you feel guilty?
Global consequentialism claims that you should be blamed if and only
if blaming you will produce the best consequences, and that you should
feel guilty if and only if this will produce the best consequences.
Suppose that for some reason the best consequences would result from
blaming you for following the prescribed decision procedure (and thus
doing *x*). But surely it is paradoxical for a moral theory to
call for you to be blamed although you followed the moral decision
procedure mandated by the theory. Or suppose that for some reason the
best consequences would result from blaming you for intentionally
choosing the act with the best consequences (*y*). Again,
surely it is paradoxical for a moral theory to call for you to be
blamed although you intentionally chose the very act required by the
theory.
So one problem with global consequentialism is that it creates
potential gaps between the acts it claims are required and the
decision procedures it tells agents to use, and between each of these
and blamelessness. (For explicit replies to this line of attack, see
Driver 2014: 175 and de Lazari-Radek and Singer 2014:
315-16.)
That is not the most familiar problem with global consequentialism.
The most familiar problem with it is instead its maximising
act-consequentialist criterion of wrongness. According to this
maximising criterion, an act is wrong if and only if it fails to
result in the greatest good. This criterion judges some acts not to
be wrong that certainly seem wrong. It also judges some acts to be
wrong that seem not wrong.
For example, consider an act of murder that results in slightly more
good than any other act would have produced. According to the most
familiar, maximising act-consequentialist criterion of wrongness, this
act of murder is not wrong. Many other kinds of act such as
assaulting, stealing, promise breaking, and lying can be wrong even
when doing them would produce slightly more good than not doing them
would. Again, the familiar, maximising form of act-consequentialism
denies this.
Or consider someone who gives to her child, or keeps for herself, some
resource of her own instead of contributing it to help some stranger
who would have gained slightly more from that resource. Such an action
hardly seems wrong. Yet the maximising act-consequentialist criterion
judges it to be wrong. Indeed, imagine how much self-sacrifice an
averagely well-off person would have to make before her further
actions satisfied the maximising act-consequentialist criterion of
wrongness. She would have to give to the point where further
sacrifices from her in order to benefit others would harm her more
than they would benefit the others. Thus, the maximising
act-consequentialist criterion of wrongness is often accused of being
unreasonably demanding.
The objections just directed at maximising act-consequentialism could
be side-stepped by a version of act-consequentialism that did not
require maximising the good. This sort of act-consequentialism is now
called satisficing consequentialism. See the entry on
consequentialism for more on such a theory.
## 6. Formulating Full Rule-consequentialism
There are a number of different ways of formulating
rule-consequentialism. For example, it can be formulated in terms of
the good that actually results from rules or in terms of the
rationally expected good of the consequences of rules. It can be
formulated in terms of the consequences of compliance with rules or in
terms of the wider consequences of acceptance of rules. It can be
formulated in terms of the consequences of absolutely everyone's
accepting the rules or in terms of the rules' acceptance by
something less than everyone. Rule-consequentialism can also be
formulated in terms of the teaching of a code to everyone in the next
generation, in the full realization that the degrees of resulting
acceptance will vary (Mulgan 2006, 2017, 2020; D. Miller 2014, 2021;
T. Miller 2016, 2021). Rule-consequentialism is more plausible if
formulated in some ways than it is if formulated in other ways. This
is explained in the following three subsections. Questions of
formulation are also relevant in later sections on objections to
rule-consequentialism.
### 6.1 Actual versus Expected Good
As indicated, full rule-consequentialism consists in
rule-consequentialist answers to three questions. The first is, what
makes acts morally wrong? The second is, what procedure should agents
use to make their moral decisions? The third is, what are the
conditions under which moral sanctions such as blame, guilt, and
praise are appropriate?
As we have seen, the answer that full rule-consequentialists give to
the question about decision procedure is much like the answer that
other kinds of consequentialist give to that question. So let us focus
on the points of contrast, i.e., the other two questions. These two
questions -- about what makes acts wrong and about when sanctions
are appropriate -- are more tightly connected than sometimes
realized.
Indeed, J.S. Mill, one of the fathers of consequentialism, affirmed
their tight connection:
>
> We do not call anything wrong, unless we mean to imply that a person
> ought to be punished in some way or other for doing it; if not by law,
> by the opinion of his fellow creatures; if not by opinion, by the
> reproaches of his own conscience. (1861: ch. 5, para. 14)
>
Let us assume that Mill took "ought to be punished, at least by
one's own conscience if not by others" to be roughly the
same as "blameworthy". With this assumption in hand, we
can interpret Mill as tying wrongness tightly to blameworthiness. In a
moment, we can consider what follows if Mill is mistaken that
wrongness is tied tightly to blameworthiness. First, let us consider
what follows if Mill is correct that wrongness is tied tightly to
blameworthiness.
Consider the following argument, whose first premise comes from
Mill:
>
> If an act is wrong, it is blameworthy.
>
Surely, an agent cannot rightly be blamed for accepting and following
rules that the agent could not foresee would have sub-optimal
consequences. From this, we get our second premise:
>
> If an act is blameworthy, the sub-optimal consequences of rules
> allowing that act must have been foreseeable.
>
From these two premises we get the conclusion:
>
> So if an act is wrong, the sub-optimal consequences of rules allowing
> that act must have been foreseeable.
>
Of course, the actual consequences of accepting a set of rules may not
be the same as the foreseeable consequences of accepting that set.
Hence, if full rule-consequentialism claims that an act is wrong if
and only if the *foreseeable* consequences of rules allowing
that act are sub-optimal, rule-consequentialism cannot also hold that
an act is wrong if and only if the *actual* consequences of
rules allowing that act will be sub-optimal.
Now suppose instead the relation between wrongness and blameworthiness
is far looser than Mill suggested (cf. Sorensen 1996). That is,
suppose that our criterion of wrongness can be quite different from
our criterion of blameworthiness. In that case, we could hold:
>
> *Actualist* rule-consequentialist criterion of
> *wrongness*: An act is wrong if and only if it is forbidden by
> rules that would *actually* result in the greatest good.
>
and
>
> *Expectablist* rule-consequentialist criterion of
> *blameworthiness*: An act is blameworthy if and only if it is
> forbidden by the rules that would *foreseeably* result in the
> greatest good.
>
Let us replace "foreseeably result in the greatest good"
with "result in the greatest expected good". Here is how
expected good of a set of rules is calculated. Suppose we can identify
the value or disvalue of each possible outcome a set of rules might
have. Multiply the value of each possible outcome by the probability
of that outcome's occurring. Take all the products of these
multiplications and add them together. The resulting number quantifies
the expected good of that set of rules.
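The calculation just described can be sketched in a few lines of code. The outcome values and probabilities below are hypothetical illustrations, not figures from the text.

```python
# A minimal sketch of the expected-good calculation described above.
# The outcome values and probabilities are invented for illustration.

def expected_good(outcomes):
    """Multiply each possible outcome's value by its probability,
    then add the products together."""
    return sum(value * probability for value, probability in outcomes)

# Hypothetical assessment of one code of rules (probabilities sum to 1):
code_outcomes = [
    (100, 0.5),   # good outcome: value 100, probability 0.5
    (40, 0.25),   # middling outcome
    (-20, 0.25),  # bad outcome
]

print(expected_good(code_outcomes))  # 0.5*100 + 0.25*40 + 0.25*-20 = 55.0
```

On the expectablist view, comparing codes is then a matter of comparing these numbers, computed with rational rather than arbitrary probability estimates.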
Expected good is not to be calculated by employing whatever crazy
estimates of probabilities people might assign to possible outcomes.
Rather, expected good is calculated by multiplying the value or
disvalue of possible outcomes by *rational* or
*justified* probability estimates.
Different agents have different evidence and thus have different
rational and justified probability estimates. Such differences are
sometimes exactly what explains disagreements about what changes to an
extant moral code would be improvements. In some cases of
disagreements, the cause of such disagreement is that at least one of
the parties is not aware of, or has not fully assimilated, evidence
that is available. Expectablist rule-consequentialists would likely
want to tie rational or justified probability estimates to the
evidence that is available at the time, even if some people are not
aware of it or have not fully appreciated its implications.
There might be considerable scepticism about how calculations of
expected good are even possible (Lenman 2000). Even where such
calculations are possible, they will often be quite impressionistic
and imprecise. Nevertheless, we can reasonably hope to make at least
*some* informed judgements about the *likely*
consequences of alternative possible rules (Burch-Brown 2014). And we
could be guided by such judgements, as legislators often say they are.
In contrast, which rules would *actually* have the *very
best* consequences will normally be inaccessible. Hence, the
expectablist rule-consequentialist criterion of blameworthiness is
appealing.
Now return to the proposal that, while the criterion of
blameworthiness is the expectablist rule-consequentialist one, the
correct criterion of moral wrongness is the actualist
rule-consequentialist one. This proposal rejects Mill's move of
tying moral wrongness to blameworthiness. There is, however, a very
strong objection to this proposal. What is the role and importance of
moral wrongness if it is disassociated from blameworthiness?
In order to retain an obvious role and importance for moral wrongness,
those committed to the expectablist rule-consequentialist criterion
*of blameworthiness* are likely to endorse:
>
> *Simple expectablist* rule-consequentialist criterion of moral
> wrongness: An act is morally wrong if and only if it is forbidden by
> the rules that would result in the greatest *expected* good.
>
Indeed, once we have before us the distinction between (a) the amount
of value that *actually* results and (b) the rationally
*expected* good, the full rule-consequentialist is likely to go
for expectablist criteria of moral wrongness, blameworthiness, and
decision procedures.
What if, as far as we can tell, no one code has greater expected value
than its rivals? We will need to amend our expectablist criteria in
order to accommodate this possibility:
>
> *Sophisticated expectablist* rule-consequentialist criterion of
> moral wrongness: An act is morally wrong if and only if it is
> forbidden either by the rules that would result in the greatest
> *expected* good, or, if two or more alternative codes of rules
> are equally best in terms of expected good, by the one of these codes
> closest to conventional morality.
>
The argument for using closeness to conventional morality to break
ties between otherwise equally promising codes begins with the
observation that social change regularly has unexpected consequences.
And these unexpected consequences usually seem to be negative.
Furthermore, the greater the difference between a new code and the one
already conventionally accepted, the greater the scope for unexpected
consequences. So, as between two codes we judge to have equally high
expected value, we should choose the one closest to the one we already
know. (For discussion of the situation where two codes have equally
high expected value and seem equally close to conventional morality,
see Hooker 2008: 83-4.)
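The sophisticated expectablist criterion's selection rule (maximize expected good, then break ties by closeness to conventional morality) can be sketched as follows. The codes, their expected values, and the closeness scores are invented for illustration.

```python
# A sketch of the tie-breaking rule described above. Each code gets a
# hypothetical expected good and a hypothetical "closeness to
# conventional morality" score (higher = closer); both are invented
# numbers, not values from the text.

def select_code(codes):
    """codes: list of (name, expected_good, closeness) tuples.
    Pick the code with the greatest expected good; among ties,
    prefer the code closest to conventional morality."""
    best = max(expected for _, expected, _ in codes)
    tied = [code for code in codes if code[1] == best]
    return max(tied, key=lambda code: code[2])[0]

codes = [
    ("reform code", 90, 0.4),
    ("conservative code", 90, 0.9),  # tied on expected good, closer to convention
    ("radical code", 80, 0.1),
]
print(select_code(codes))  # conservative code
```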
An implication is that people should make changes to the status quo
where but only where these changes have greater expected value than
sticking with the status quo. Rule-consequentialism manifestly has the
capacity to recommend change. But it does not favor change for the
sake of change.
Rule-consequentialism most definitely does need to be formulated so as
to deal with ties in expected value. However, for most of the rest of
this article, I will ignore this complication.
### 6.2 Compliance and Acceptance
There are other important issues of formulation that
rule-consequentialists face. One is the issue of whether
rule-consequentialism should be formulated in terms of compliance with
rules, in terms of acceptance of rules, or in terms of the teaching of
rules. Admittedly, the most important consequence of teaching and
accepting rules is compliance with them. And early formulations of
rule-consequentialism did indeed explicitly mention compliance. For
example, they said an act is morally wrong if and only if it is
forbidden by rules *the compliance with which* will maximize
the good (or the expected good). (See Austin 1832; Brandt 1959; M.
Singer 1955, 1961.)
However, acceptance of a rule can have consequences other than
compliance with the rule. As Kagan (2000: 139) writes, "once
embedded, rules can have an impact on results that is independent of
their impact on acts: it might be, say, that merely thinking about a
set of rules reassures people, and so contributes to happiness."
(For more on what we might call these 'beyond-compliance
consequences' of rules, see Sidgwick 1907: 405-6, 413;
Lyons 1965: 140; Williams 1973: 119-20, 122, 129-30; Adams
1976: 470; Scanlon 1998: 203-4; Kagan 1998: 227-34.)
These consequences of acceptance of rules should most definitely be
part of a cost-benefit analysis of rules. Formulating
rule-consequentialism in terms of the consequences of acceptance
allows them to be part of this analysis. Important consequences of
communal acceptance of a set of rules include assurance, incentive,
and deterrent effects. And consideration of assurance and incentive
effects has played a large role in the development of
rule-consequentialism (Harsanyi 1977, 1982: 56-61; 1993:
116-18; Brandt 1979: 271-77; 1988: 346ff [1992: 142ff.];
1996: 126, 144; Johnson 1991, especially chs. 3, 4, 9). Hence, we
should not be surprised that rule-consequentialism has gone from being
formulated in terms of compliance with rules to being formulated in
terms of acceptance of rules.
However, just as we need to move from thinking about the consequences
of compliance to thinking about the wider consequences of acceptance,
we need to go further. Focusing purely on the consequences of
acceptance of rules ignores the "transition" costs for the
teachers of teaching those rules, and the costs to the
teaching's recipients of internalizing these rules. And yet
these costs can certainly be significant (Brandt 1963: section 4; 1967
[1992: 126]; 1983: 98; 1988: 346-47, 349-50 [1992:
140-143, 144-47]; 1996: 126-28, 145, 148, 152,
223).
Suppose, for example, that, once a fairly simple and relatively
undemanding code of rules Code *A* has been accepted, the
expected value of Code *A* would be *n*. Suppose a more
complicated and demanding alternative Code *B* would have an
expected value of \(n + 5\) once Code *B* has been accepted. So
if we just consider the expected values of acceptance of the two
alternative codes, Code *B* wins.
But now let us factor into our cost/benefit analysis of rival codes
the relative costs of teaching the two codes and of getting them
internalized by new generations. Since Code *A* is fairly
simple and relatively undemanding, the cost of getting it internalized
is 1. Since Code *B* is more complicated and demanding,
the cost of getting it internalized is 7. So if our comparison
of the two codes considers the respective costs of getting them
internalized, Code *A*'s expected value is \(n-1\), and
Code *B*'s is \(n+5-7\). Once we include the respective
costs of getting the codes internalized, Code *A* wins.
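The Code *A* / Code *B* comparison can be reproduced as simple arithmetic. The baseline *n* is arbitrary (set to 100 here, an assumption of this sketch), since only the differences between the two codes matter.

```python
# The Code A / Code B comparison from the text, with internalization
# costs subtracted from acceptance value. The baseline n is an
# arbitrary choice; only the differences between the codes matter.

def net_expected_value(acceptance_value, internalization_cost):
    return acceptance_value - internalization_cost

n = 100                                  # arbitrary baseline

code_a = net_expected_value(n, 1)        # simple, undemanding code
code_b = net_expected_value(n + 5, 7)    # complicated, demanding code

print(code_a, code_b)  # 99 98: Code A wins once teaching costs count
```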
As indicated, the costs of teaching a code successfully, so that the
code is very widely internalized, are "transition costs".
But of course such transitions are always to one arrangement from
another. The arrangement we are imagining the transition being
*to* is the acceptance of a certain proposed code. The
arrangement we are imagining the transition being *from* is
... well, what?
One answer is that the arrangement from which the transition is
supposed to be starting is whatever moral code the society happens to
accept already. That might seem like the natural answer. However,
there is a strong objection to this answer, namely that
rule-consequentialism should not let the cost/benefit analysis of a
proposed code be influenced by the costs of getting people to give up
whatever rules they may have already internalised. This is for two
reasons.
Most importantly, rule-consequentialist assessment of codes needs to
avoid giving weight directly or indirectly to moral ideas that have
their source in other moral theories but not in rule-consequentialism
itself. Suppose people in a given society were brought up to believe
that women should be subservient to men. Should rule-consequentialist
evaluation of a proposed non-sexist code have to count the costs of
getting people to give up the sexist rules they have already
internalised so as to accept the new, non-sexist ones? Since the
sexist rules are unjustifiable, that they were accepted should not be
allowed to infect rule-consequentialist assessment.
Another reason for rejecting the answer we are considering is that it
threatens to underwrite an unattractive relativism. Different
societies may differ considerably in their extant moral beliefs. Thus,
a way of assessing proposed codes that considers the costs of getting
people already committed to some other code will end up having to
countenance different transition costs to get to the same code. For
example, the transition costs to a non-racist code are much higher
from an already accepted racist code than from an already accepted
non-racist one. Formulating rule-consequentialism so that it endorses
the same code for 1960s Michigan as for 1960s Mississippi is
desirable. (For opposing arguments that rule-consequentialism should
be formulated so as to countenance social relativism, see Kahn 2012; D.
Miller 2014, 2021; T. Miller 2021.)
A way to avoid ending up with the social relativism just identified is
to formulate rule-consequentialism in terms of acceptance by *new
generations* of humans. The proposal might be that we compare the
respective "teaching costs" of alternative codes, on the
assumption that these codes will be taught to new generations of
children, i.e., children who have not already been educated to accept
a moral code. We are to imagine the children start off with natural
(non-moral) inclinations to be very partial towards themselves and a
few others. We should also assume that there is a cognitive cost
associated with the learning of each rule.
These are realistic assumptions, with big implications. One is that a
cost/benefit analysis of alternative codes of rules would have reason
to favor simpler codes over more complex ones. Of course there can
also be benefits from having more, or more complicated, rules. Yet
there is probably a limit on how complicated or complex a code can be
and still have greater expected value than simpler codes, once
teaching costs are included.
Another implication concerns prospective rules about making sacrifices
to help others. Since children start off focused on their own
gratifications, getting them to internalize a kind of impartiality
that requires them to make large sacrifices repeatedly for the sake of
others would have extremely high costs. There would also, of course,
be enormous benefits from the internalization of such a rule --
predominately, benefits to others. Would the benefits be greater than
the costs? At least since Sidgwick (1907: 434), many utilitarians have
taken for granted that human nature is such that the real
possibilities are (1) that human beings care passionately about some
and less about each of the rest of humanity or (2) that human beings
care weakly but impartially about everyone. In other words, what is
not a realistic possibility, according to this view of human nature,
is human beings' caring strongly and impartially about everyone
in the world. If this view of human nature is correct, then one
enormous cost of successfully making people completely impartial is
that doing so would leave them with only weak concerns.
Even if that picture of human nature is not correct, that is, even if
making people completely impartial could be achieved without draining
them of enthusiasm and passion, the cost of successfully making people
care as much about every other individual as they do about themselves
would be prohibitive. At some point on the spectrum running from
complete partiality to complete impartiality, the costs of pushing and
inducing everyone further along the spectrum outweigh the
benefits.
A complication worth mentioning comes from the obvious point that
moral education is developmental. How many rules are to be taught to
children, how complicated these rules are, how demanding the rules
are, and how to resolve conflicts among the rules depend on the
developmental stage of the children. Hence, there can be conflict
between the simpler and less demanding rules taught to the very young
and the more elaborate, nuanced, and demanding rules taught at higher
developmental stages. Of course, rule-consequentialism is best
formulated in terms of the rules featured in the end of the
development rather than the ones figuring in earlier stages.
### 6.3 Complete Acceptance versus Incomplete Acceptance
While rule-consequentialist cost/benefit analyses of codes should
count the cost of getting those codes internalised by new generations,
such analyses should acknowledge that the internalisation will not be
achieved in every last person (D. Miller 2021: 130). No matter how
good the teaching is, the results will be imperfect. Some people will
learn rules that differ to some degree from the ones that were taught,
and some of these people will end up with very mistaken views about
what is morally required, morally optional, or morally wrong. Other
people will end up with insufficient moral motivation. Other people
will end up with no moral motivation at all (psychopaths).
Rule-consequentialism needs to have rules for coping with the
inevitably imperfect results of moral teaching.
Such rules will crucially include rules about punishment. From a
rule-consequentialist point of view, one point of punishment is to
deter certain kinds of act. Another point of punishment is to get
undeterred, dangerous people off the streets. Perhaps
rule-consequentialism can admit that another point of punishment is to
appease the primitive lust for revenge on the part of victims of such
acts and their family and friends. Finally, there is the expressive
and reinforcing power of rules about punishment.
Some ways of formulating rule-consequentialism make having rules about
punishment difficult to explain. One such way of formulating
rule-consequentialism is:
>
> An act is morally wrong if and only if it is prohibited by the code of
> rules the full acceptance of which by absolutely everyone would
> produce the greatest expected good.
>
Imagine a world in which absolutely every adult human fully accepts
rules forbidding (for example) physical attacks on the innocent,
stealing, promise breaking, and lying. Suppose these rules have been
internalized so deeply by everyone that in this world there is
complete compliance with these rules. Also assume that, if
everyone in this world always complies with these rules, this perfect
compliance becomes common knowledge. In this world, there would be
little or no need for rules about punishment and thus little or no
benefit from having such rules. But there are teaching and
internalization costs associated with each rule taught and
internalized. So there are teaching and internalization costs
associated with the inclusion of any rule about punishment. The
combination of costs without benefits is repellent. Therefore, for a
world of complete compliance, the version of rule-consequentialism
immediately above would not endorse rules about punishment.
We need a form of rule-consequentialism that includes rules for
dealing with people who are not committed to the right rules, indeed
even for people who are irredeemable. In other words,
rule-consequentialism needs to be formulated so as to conceptualize
society as containing some people insufficiently committed to the
right rules, and even some people not committed to any moral rules.
Here is a way of doing so:
>
> An act is wrong if and only if it is prohibited by a code of rules the
> acceptance of which by the overwhelming majority of people in each new
> generation would have the greatest expected value.
>
Note that rule-consequentialism neither endorses nor condones the
non-acceptance of the code by those outside the overwhelming majority.
On the contrary, rule-consequentialism claims those people are morally
mistaken. Indeed, the whole point of formulating rule-consequentialism
this way is to make room for rules about how to respond negatively to
such people.
Of course, the term "overwhelming majority" is very
imprecise. Suppose we remove the imprecision by picking a precise
percentage of society, say 90%. Picking any precise percentage has an
obvious element of arbitrariness to it. For example, if we pick 90%,
why not 89% or 91%?
Perhaps, we can argue for a number in the range of 90% as a reasonable
compromise between two pressures. On the one hand, the percentage we
pick should be close enough to 100% to retain the idea that, ideally,
moral rules would be accepted by everyone. On the other hand, the
percentage needs to be far enough short of 100% to leave considerable
scope for rules about punishment. It seems that 90% is in a defensible
range, given the need to balance those considerations. (For dissent
from this, see Ridge 2006; for a reply to Ridge, see Hooker and
Fletcher 2008. The matter receives further discussion in H. Smith
2010; Tobia 2013, 2018; T. Miller 2014, 2021; Toppinen 2016; Portmore
2017; Yeo 2017; Podgorski 2018; Perl 2021.)
Holly Smith (2010) pointed out that a cost/benefit analysis of the
acceptance of any particular code by a positive percentage of the
population less than 100% depends on what the rest of the population
accepts and is disposed to do. Consider the following contrast. One
imagined scenario is that 90% of the population accept one code, and
the other 10% accept a very similar code, such that the two codes
rarely diverge in practice. A second imagined scenario is that 90% of
the population accept one code, and the other 10% accept various codes
that frequently and dramatically conflict in practice with the code
accepted by the 90%. Conflict resolution and enforcement are less
important in the first imagined scenario than in the second. Hence, if
rule-consequentialism is formulated in terms of the acceptance of a
code by less than 100% of people, it matters what assumptions are made
about whatever percentage of the population do not accept this
code.
Some theorists propose that formulating rule-consequentialism in terms
of the code the teaching of which has the greatest expected value is
superior to formulating the theory in terms of either a fixed or
variable acceptance rate for codes (Mulgan 2006: 141, 147; 2017: 291;
2020: 12-21; T. Miller 2016, 2021; D. Miller 2021). A
"teaching formulation" of rule-consequentialism holds:
>
> An act is morally prohibited, obligatory, or optional if and only if
> and because it is prohibited, obligatory, or permitted by the code of
> rules the teaching of which to everyone has at least as much expected
> value as the teaching of any other code.
>
Two clarifications are needed immediately. First,
"everyone" here needs to be qualified so as not to include
people "with significant, cognitive, or conative deficits"
(D. Miller 2021: 10). Second, teaching a code to everyone is not
assumed to lead to everyone's internalization of this code.
Hence, even if everyone is taught a certain code, foreseeably there
will be some who internalize rules that are more or less different
from the rules that were taught. There will also be some whose moral
motivation is unreliable or even indiscernible. And there will be some
who take themselves to be sceptics about moral rules and even about
the rest of morality.
There are some definite advantages of teaching formulations of
rule-consequentialism. To be sure, we have to build into our
cost/benefit analysis that enough costs are incurred to make the
teaching at least partly successful. In other words, we must insist
that the teaching be successful enough to get enough people to
internalize rules such that a good degree of cooperation and security
results. But we do not need to be precise about what percentage of
people taught this code remain amoralists, what percentage of people
taught this code end up internalizing codes somewhat different from
the one being taught, or what percentage end up insufficiently
motivated to comply with the moral code they have internalized unless
effective enforcement is in place (D. Miller 2021; T. Miller
2021).
Another advantage of teaching formulations is that the idea of
teaching a code to everyone connects tightly to the idea that this
code should be public knowledge. The idea that a moral code must be
suitable for being public knowledge is very appealing (Baier 1958:
195-196; Rawls 1971: 133; Gert 1998: 10-13, 225,
239-240; Hill 2005; Cureton 2015; Pettit 2017: 39, 65, 102). And
rule-consequentialists have championed the idea that moral rules are
subject to this "publicity condition" (Brandt 1992: 136,
144; Hooker 2010; 2016, forthcoming; Parfit 2011, 2017a; D. Miller
2021: 131-32).
## 7. Three Ways of Arguing for Rule-consequentialism
What rules will rule-consequentialism endorse? It will endorse rules
prohibiting physically attacking innocent people, taking or harming
the property of others, breaking one's promises, and lying. It
will also endorse rules requiring one to pay special attention to the
needs of one's family and friends, but more generally to be
willing to help others with their (morally permissible) projects. A
society where such rules are prominent in a public code would be
likely to have more good in it than one lacking such rules.
The fact that these rules are endorsed by rule-consequentialism makes
rule-consequentialism attractive. For, intuitively, these rules seem
right. However, other moral theories endorse these rules as well. Most
obviously, a familiar kind of moral pluralism contends that these
intuitively attractive rules constitute the most basic level of
morality, i.e., that there is no deeper moral principle underlying and
unifying these rules. Call this view Rossian pluralism (in honor of
its champion W.D. Ross (1930, 1939)).
Rule-consequentialism may agree with Rossian pluralism in endorsing
rules against physically attacking the innocent, stealing, promise
breaking, and rules requiring various kinds of loyalty and more
generally doing good for others. But rule-consequentialism goes beyond
Rossian pluralism by specifying an underlying unifying principle that
provides impartial justification for such rules. Other moral theories
try to do this too. Such theories include some forms of Kantianism
(Audi 2001, 2004) and some forms of contractualism (Scanlon 1998;
Parfit 2011; Levy 2013). In any case, the first way of arguing for
rule-consequentialism is to argue that it specifies an underlying
principle that provides impartial justification for intuitively
plausible moral rules, and that no rival theory does this as well
(Urmson 1953; Brandt 1967; Hospers 1972; Hooker 2000). (Attacks on
this line of argument for rule-consequentialism include Stratton-Lake
1997; Thomas 2000; D. Miller 2000; Montague 2000; Arneson 2005; Moore
2007; Hills 2010; Levy 2014.)
This first way of arguing for rule-consequentialism might be seen as
drawing on the idea that a theory is better justified to us to the
extent that it increases coherence within our beliefs (Rawls 1951,
1971: 19-21, 46-51; DePaul 1987; Ebertz 1993; Sayre-McCord
1986, 1996). [See the entry on
coherentist theories of epistemic justification.]
But the approach might also be seen as moderately foundationalist in
that it begins with a set of beliefs (in various moral rules) to which
it assigns independent credibility though not infallibility (Audi
1996, 2004; Crisp 2000). [See the entry on
foundationalist theories of epistemic justification.]
Admittedly, coherence with our moral beliefs does not make a moral
theory *true*, since our moral beliefs might be mistaken.
Nevertheless, if a moral theory fails significantly to cohere with our
moral beliefs, this undermines the theory's ability to be
justified to us.
Wolf (2016) and Copp (2020) argue that the meta-ethical view that
morality is a social practice with a particular function might lead us
to rule-consequentialism. However, Hooker (forthcoming) contends that
the meta-ethical view that morality is a social practice with a
particular function stands in need of justification in terms of
whether it coheres with our considered moral principles and more
specific considered moral judgements. In other words, that
meta-ethical view about the function of morality needs to be judged in
terms of whether it helps us achieve a reflective equilibrium among
our beliefs or not.
The second way of arguing for rule-consequentialism is very different.
It begins with a commitment to consequentialist assessment. With that
commitment as a first premise, the point is then made that assessing
acts *indirectly*, e.g., by focusing on the consequences of
communal acceptance of rules, will in fact produce better consequences
than assessing acts directly in terms of their own consequences
(Austin 1832; Brandt 1963, 1979; Harsanyi 1982: 58-60; 1993;
Riley 2000). After all, making decisions about what to do is the main
point of moral assessment of acts. So if a way of morally assessing
acts is likely to lead to bad decisions, or more generally lead to bad
consequences, then, according to a consequentialist point of view, so
much the worse for that way of assessing acts.
Earlier we saw that all consequentialists now accept that assessing
each act individually by its expected value is in general a terrible
procedure for making moral decisions. Agents should decide how to act
by instead appealing to certain rules such as "don't
physically attack others", "don't steal",
"don't break your promises", "pay special
attention to the needs of your family and friends", and
"be generally helpful to others". And these are the rules
that rule-consequentialism endorses. Many consequentialists, however,
think this hardly shows that full rule-consequentialism is the best
form of consequentialism. Once a distinction is made between, on the
one hand, the best procedure for making moral decisions about what to
do and, on the other hand, the criteria of moral rightness and
wrongness, all consequentialists can admit that we need
rule-consequentialism's rules for our decision procedure. But
consequentialists who are not rule-consequentialists contend that such
rules play no role in the criterion of moral rightness. Hence these
consequentialists reject what this article has called full
rule-consequentialism.
However, whether the objection we have just been considering to the
second way of arguing for rule-consequentialism is a good objection
depends on whether it is legitimate to distinguish between procedures
appropriate for making moral decisions and the criteria of moral
rightness or wrongness. That matter remains contentious (Hooker 2010;
de Lazari-Radek and Singer 2014: ch. 10).
Yet the second way of arguing for rule-consequentialism runs into
another and quite different objection. This objection is that the
first step in this argument for rule-consequentialism is a commitment
to consequentialist assessment. This first step itself needs
justification. Why assume that assessing things in a consequentialist
way is uniquely justified?
It might be said that consequentialist assessment is justified because
promoting the impartial good has an obvious intuitive appeal. But that
won't do, since there are alternatives to consequentialist
assessment that also have obvious intuitive appeal, for example,
"act on the code that no one could reasonably reject".
What we need is a way of arguing for a moral theory that does not
start by begging the question which kind of theory is most
plausible.
A third way of arguing for rule-consequentialism is contractualist
(Harsanyi 1953, 1955, 1982, 1993; Brandt 1979, 1988, 1996; Scanlon
1982, 1998; Parfit 2011; Levy 2013). Suppose we can specify reasonable
conditions under which everyone would choose, or at least would have
sufficient reason to choose, the same code of rules. Intuitively, such
an idealized agreement would legitimate that code of rules. Now if
those rules are the ones whose internalisation would maximise the
expected good, contractualism is leading us to
rule-consequentialism's rules.
There are different views about what would be reasonable conditions
for choosing among alternative possible moral rules. One view is that
everyone's impartiality would have to be ensured by the
imposition of a hypothetical "veil of ignorance" behind
which no one knew any specific facts about himself or herself
(Harsanyi 1953, 1955). Another view is that we should imagine that
people would be choosing a moral code on the basis of (a) full
empirical information about the different effects on everyone, (b)
normal concerns (self-interested as well as altruistic), and (c)
roughly equal bargaining power (Brandt 1979; cf. Gert 1998). Parfit
(2011) proposes seeking rules that everyone has (personal or
impartial) reason to choose or will that everyone accept. If impartial
reasons are always sufficient even when opposed by personal ones, then
everyone has sufficient reason to will that everyone accept the rules
whose universal acceptance will have the best consequences impartially
considered. Similarly, Levy (2013) supposes that no one could
reasonably reject a code of rules that would impose on her burdens
that add up to less than the aggregate of burdens that every other
code would impose on others. Such arguments suggest the extensional
equivalence of contractualism and rule-consequentialism. (For
assessment of whether Parfit's contractualist arguments for
rule-consequentialism succeed, see J. Ross 2009; Nebel 2012; Hooker
2014. For similarities between rule-consequentialism and
Scanlon's contractualism, and the difficulty of deciding between
these two theories, see Suikkanen 2022.)
## 8. Must Rule-consequentialism Be Guilty of Collapse, Incoherence, or Rule-worship?
An oft-repeated line of objection to rule-consequentialism from the
mid 1960s to the mid 1990s was that this theory is fatally impaled on
one or the other horn of the following dilemma: Either
rule-consequentialism collapses into practical equivalence with the
simpler act-consequentialism, or rule-consequentialism is
incoherent.
Here is why some have thought rule-consequentialism collapses into
practical equivalence with act-consequentialism. Consider a rule that
rule-consequentialism purports to favor -- e.g.,
"don't steal". Now suppose an agent is in some
situation where stealing would produce more good than not stealing. If
rule-consequentialism selects rules on the basis of their expected
good, rule-consequentialism seems driven to admit that compliance with
the rule "don't steal except when ... or ... or
..." is better than compliance with the simpler rule
"don't steal". This point generalizes. In other
words, for every situation where compliance with some rule would not
produce the greatest expected good, rule-consequentialism seems driven
to favor instead compliance with some amended rule that does not miss
out on producing the greatest expected good in the case at hand. But
if rule-consequentialism operates this way, then in practice it will
end up requiring the very same acts that act-consequentialism
requires.
If rule-consequentialism ends up requiring the very same acts that
act-consequentialism requires, then rule-consequentialism is indeed in
terrible trouble. Rule-consequentialism is the more complicated of the
two theories. This leads to the following objection. What is the point
of rule-consequentialism with its infinitely amended rules if we can
get the same practical result much more efficiently with the simpler
act-consequentialism?
Rule-consequentialists in fact have an excellent reply to the
objection that their theory collapses into practical equivalence with
act-consequentialism. This reply relies on the point that the best
kind of rule-consequentialism ranks systems of rules *not* in
terms of the expected good of *complying* with them, but in
terms of the expected good of their *teaching* and
*acceptance*. Now if a rule forbidding stealing, for example,
has exception clause after exception clause after exception clause
tacked on to it, the rule with all these exception clauses will
provide too much opportunity for temptation to convince agents that
one of the exception clauses applies, when in fact stealing would be
advantageous to the agent. And this point about temptation will also
undermine other people's confidence that their property
won't be stolen. The same is true of most other moral rules:
incorporating too many exception clauses could undermine
people's assurance that others will behave in certain ways (such
as keeping promises and avoiding stealing).
Furthermore, when comparing alternative rules, we must also consider
the relative costs of getting them internalised by new generations.
Clearly, the costs of getting new generations to learn either an
enormous number of rules or hugely complicated rules would be
prohibitive. So rule-consequentialism will favor a code of rules
without too many rules, and without too much complication within the
rules.
There are also costs associated with getting new generations to
internalise rules that require one to make enormous sacrifices for
others with whom one has no particular connection. Of course,
following such demanding rules will produce many benefits, mainly to
others. But the costs associated with internalising such rules should
be weighed against the benefits of following them. At some level of
demandingness, the costs of getting such demanding rules internalised
will outweigh the benefits that following them will produce. Hence,
doing a careful cost/benefit analysis of internalising demanding rules
will come out opposing rules' being too demanding.
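The cost/benefit reasoning above can be given a toy numerical sketch. Everything in it is a hypothetical assumption chosen only for illustration: the particular benefit and cost functions, and the numbers, are not from the literature. The point is structural: if internalisation costs rise steeply with demandingness while compliance benefits show diminishing returns, the optimal code is neither the least nor the most demanding one.

```python
# Toy model (illustrative assumptions only, not a result from the
# rule-consequentialist literature): compliance benefits rise with
# demandingness but with diminishing returns, while the costs of
# getting the rules internalised rise steeply.

def compliance_benefits(demandingness: float) -> float:
    # Assumed shape: benefits approach a ceiling of 100.
    return 100 * (1 - 0.5 ** demandingness)

def internalisation_costs(demandingness: float) -> float:
    # Assumed shape: costs grow exponentially with the size of
    # the sacrifices the rules demand.
    return 2 ** demandingness

def net_value(demandingness: float) -> float:
    return compliance_benefits(demandingness) - internalisation_costs(demandingness)

# Scan candidate levels of demandingness from 0.0 to 10.0 in half steps.
levels = [d / 2 for d in range(0, 21)]
best = max(levels, key=net_value)
# Under these assumptions the optimum is interior: a moderately
# demanding code beats both the undemanding and the maximally
# demanding alternatives.
```

Under these stipulated curves the net value peaks at an intermediate level of demandingness, mirroring the article's claim that a careful cost/benefit analysis "will come out opposing rules' being too demanding".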
The code of rules that rule-consequentialism favours, a code comprised
of rules that are not too numerous, too complicated, or too demanding,
can sometimes lead people to do acts that do not have the greatest
expected value. For example, following the simpler rule
"Don't steal" will sometimes produce less good
consequences than following a more complicated rule "Don't
steal except when ... or ... or ... or ... or
...". Another example might be following a rule that allows
people to give some degree of priority to their own projects, even
when they could produce more good by sacrificing their own projects in
order to help others. Still, rule-consequentialism's contention
is that bringing about widespread acceptance of a simpler and less
demanding code, even if acceptance of that code does sometimes lead
people to do acts with sub-optimal consequences, has higher expected
value in the long run than bringing about widespread acceptance of a
maximally complicated and demanding code. Since rule-consequentialism
can tell people to follow this simpler and less demanding code, even
when following it will not maximise expected good,
rule-consequentialism escapes collapse into practical equivalence to
act-consequentialism.
To the extent that rule-consequentialism circumvents collapse, this
theory is accused of incoherence. Rule-consequentialism is accused of
incoherence for maintaining that an act can be morally permissible or
even required though the act fails to maximise expected good. Behind
this accusation must be the assumption that rule-consequentialism
contains an overriding commitment to maximise the good. It is
incoherent to have this overriding commitment and then to oppose an
act required by the commitment. (For developments of this line of
thought, see Arneson 2005; Card 2007; Wall 2009.)
In order to evaluate the incoherence objection to
rule-consequentialism, we need to be clearer about the supposed
location of an overriding commitment to maximize the good. Is this
commitment supposed to be part of the rule-consequentialist
agent's moral psychology? Or is it supposed to be part of the
theory rule-consequentialism?
Well, rule-consequentialists need not have maximizing the good as
their ultimate and overriding moral goal. Instead, they could have a
moral psychology as follows:
>
> Their fundamental moral motivation is to do what is impartially
> defensible.
>
> They believe acting on impartially justified rules is impartially
> defensible.
>
> They also believe that rule-consequentialism is on balance the best
> account of impartially justified rules.
>
Agents with this moral psychology -- i.e., this combination of
moral motivation and beliefs -- would be morally motivated to do
as rule-consequentialism prescribes. This moral psychology is
certainly possible. And, for agents who have it, there is nothing
incoherent about following rules when doing so will not maximize the
expected good.
So, even if rule-consequentialist *agents* need not have an
overriding commitment to maximize expected good, does their
*theory* contain such a commitment? No, rule-consequentialism
is essentially the conjunction of two claims: (1) that rules are to be
selected solely in terms of their consequences and (2) that these
rules determine which kinds of acts are morally wrong. This is really
all there is to the theory -- in particular, there is not some
third component consisting in or entailing an overriding commitment to
maximize expected good.
Without an overriding commitment to maximize the expected good, there
is nothing incoherent in rule-consequentialism's forbidding some
kinds of act, even when they maximize the expected good. Likewise,
there is nothing incoherent about rule-consequentialism's
requiring other kinds of act, even when they conflict with maximizing
the expected good. The best known objection to rule-consequentialism
dies once we realize that neither the rule-consequentialist agent nor
the theory itself contains an overriding commitment to maximize the
good.
The viability of this defense of rule-consequentialism against the
incoherence objection may depend in part on what the argument for
rule-consequentialism is supposed to be. The defense seems less viable
if the argument for rule-consequentialism starts from a commitment to
consequentialist assessment. For starting with such a commitment seems
very close to starting from an overriding commitment to maximize the
expected good. The defence against the incoherence objection seems far
more secure, however, if the argument for rule-consequentialism is
that this theory does better than any other moral theory at specifying
an impartial justification for intuitively plausible moral rules. (For
more on this, see Hooker 2005, 2007; Rajczi 2016; Wolf 2016; Copp
2020.)
Another old objection to rule-consequentialism is that
rule-consequentialists must be "rule-worshipers" --
i.e., people who will stick to the rules even when doing so will
obviously be disastrous.
The answer to this objection is that rule-consequentialism endorses a
rule requiring one to prevent disaster, even if doing so requires
breaking other rules (Brandt 1992: 87-8, 150-1,
156-7). To be sure, there are many complexities about what
counts as a disaster. Think about what counts as a disaster when the
"prevent disaster" rule is in competition with a rule
against lying. Now think about what counts as a disaster when the
"prevent disaster" rule is in competition with a rule
against stealing, or even more when in competition with a rule against
physically harming the innocent. Rule-consequentialism may need to be
clearer about such matters. But at least it cannot rightly be accused
of potentially leading to disaster.
An important confusion to avoid is to think that
rule-consequentialism's including a "prevent
disaster" rule means that rule-consequentialism collapses into
practical equivalence with maximising act-consequentialism. Maximising
act-consequentialism holds that we should lie, or steal, or harm the
innocent whenever doing so would produce even a *little* more
expected good than not doing so would. A rule requiring one to prevent
disaster does not have this implication. Rather, the "prevent
disaster" rule comes into play only when there is a very much
larger difference in the amounts of expected value at stake.
Woodard (2022) contends that the "prevent disaster" rule
needs reconceptualizing. Many rules identify kinds of reasons.
Examples are the reason to prevent harm, the reason not to steal, and
the reason to keep your promises. But, Woodard holds, we should not
think of the "prevent disaster" rule as being a rule
identifying a kind of reason, which then is claimed to have overriding
force. We should rather think that the "prevent disaster"
rule distinguishes between cases in which the reason to prevent harm
overrides opposing moral reasons and cases in which the reason to
prevent harm is weaker than opposing reasons.
Even if Woodard is correct about this, a "prevent
disaster" rule can be accused of vagueness and indeterminacy.
Indeed, the line between cases in which the moral reason to prevent
harm *overrides* opposing moral reasons and cases in which the
moral reason to prevent harm *is weaker* than opposing moral
reasons might well be indeterminate. Woollard (2022) argues that
admitting such vagueness and indeterminacy does not weaken
rule-consequentialism. And she draws on her earlier work (2015) to
explain how rule-consequentialism's "prevent
disaster" rule need not impale the theory on the dilemma of
either being overly demanding or placing a counterintuitive limit on a
requirement to come to the aid of others.
## 9. Other Objections to Rule-consequentialism
From the mid 1960s until the mid 1990s, most philosophers thought
rule-consequentialism couldn't survive the objections discussed
in the previous section. So, during those three decades, most
philosophers didn't bother with other objections to the theory.
However, if rule-consequentialism has convincing replies to all three
of the objections just discussed, then a good question is whether or
not there are other fatal objections to the theory.
Some other objections try to show that, given the theory's
criterion for selecting rules, there are conditions under which it
selects intuitively unacceptable rules. For example, Tom Carson (1991)
argued that rule-consequentialism turns out to be extremely demanding
in the real world. Mulgan (2001, esp. ch. 3) agreed with Carson about
that, and went on to argue that, even if rule-consequentialism's
implications in the actual world are fine, the theory has
counterintuitive implications in possible worlds. If Mulgan were right
about that, this would cast doubt on rule-consequentialism's
claim to explain why certain demands are appropriate in the actual
world. Debate about such matters continues (Hooker 2003; Lawlor 2004;
Parfit 2011, 2017a, 2017b; Woollard 2015: 181-205; Rajczi 2016;
Portmore 2017; Podgorski 2018; D. Miller 2021; Perl 2021, 2022). And
Mulgan has become a developer of rule-consequentialism rather than a
critic (Mulgan 2006, 2009, 2015, 2017, 2020).
A related objection to rule-consequentialism is that
rule-consequentialism makes the justification of familiar rules
contingent on various empirical facts, such as what human nature is
like, and how many people there are in need or in positions to help.
The objection to rule-consequentialism is that some familiar moral
rules are necessarily, not merely contingently, justified (McNaughton
and Rawling 1998; Gaut 1999, 2002; Montague 2000; Suikkanen 2008). A
sibling of this objection is that rule-consequentialism makes the
justification of rules depend on the wrong facts (Arneson 2005;
Portmore 2009; cf. Woollard 2015: esp. pp. 185-86,
203-205; 2022).
The mechanics of teaching new codes throws up serious questions for
forms of rule-consequentialism that count the costs of getting rules
internalised by new generations. As explained earlier, limiting the
targets of the teaching to new generations is meant to avoid having to
count the costs of getting rules internalised by existing generations
of people who have already internalised some other moral rules and
ideas. But can we come up with a coherent description of those who are
supposed to do the teaching of these new generations? If the teachers
are imagined to have already internalised the ideal code themselves,
then how is that supposed to have happened? If these teachers are
imagined not to have already internalised the ideal code, then there
will be costs associated with the conflict between the ideal code and
whatever they have already internalised. (This objection was
formulated by John Andrews, Robert Ehman, and Andrew Moore. Cf. Levy
2000; D. Miller 2021.) A related objection is that
rule-consequentialism has not yet been formulated in a way that
enables it to deal plausibly with conflicts among rules (Eggleston
2007). But see Copp 2020; D. Miller 2021; Woodard 2022.
## 1. Inquiry Resistance
If you cut one head off of a two-headed man, have you decapitated him?
What is the maximum height of a short man? When does a fertilized egg
develop into a person?
These questions are impossible to answer because they involve absolute
borderline cases. In the vast majority of cases, the unknowability of
a borderline statement is only relative to a given means of settling
the issue (Sorensen 2001, chapter 1). For instance, a boy may count as
a borderline case of 'obese' because people cannot tell
whether he is obese just by looking at him. His curious mother could
try to settle the matter by calculating her son's body mass
index. The formula is to divide his weight (in kilograms) by the
square of his height (in meters). If the value exceeds 30, this test
counts him as obese. The calculation will itself leave some borderline
cases. The mother could then use a weight-for-height chart. These
charts are not entirely decisive because they do not reflect the ratio
of fat to muscle, whether the child has large bones, and so on. The
boy will only count as an absolute borderline case of
'obese' if no possible method of inquiry could settle
whether he is obese. When we reach this stage, we start to suspect
that our uncertainty is due to the concept of obesity rather than to
our limited means of testing for obesity.
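The body-mass-index test described above is easy to state in code. The formula and the cutoff of 30 are from the text; the tolerance band is a hypothetical addition, included only to illustrate the point that "the calculation will itself leave some borderline cases" near the cutoff.

```python
# Sketch of the BMI test from the text. The cutoff of 30 is the
# article's; the tolerance band is an illustrative assumption marking
# values too close to the cutoff for the test to settle the question.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by the square of height."""
    return weight_kg / height_m ** 2

def classify(weight_kg: float, height_m: float,
             cutoff: float = 30.0, tolerance: float = 0.5) -> str:
    """Return 'obese', 'not obese', or 'borderline' for this test."""
    value = bmi(weight_kg, height_m)
    if value > cutoff + tolerance:
        return "obese"
    if value < cutoff - tolerance:
        return "not obese"
    # The test itself leaves the case unsettled: a new borderline
    # region replaces the old one.
    return "borderline"

print(classify(95.0, 1.75))   # BMI ~ 31.0: counted as obese
print(classify(60.0, 1.75))   # BMI ~ 19.6: counted as not obese
print(classify(92.0, 1.75))   # BMI ~ 30.04: the test cannot settle it
```

The third case shows why each successive test (BMI, then a weight-for-height chart, and so on) merely relocates the borderline rather than eliminating it, which is what motivates the notion of an absolute borderline case.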
Absolute borderline cases are first officially targeted by Charles
Sanders Peirce's entry for 'vague' in the 1902
*Dictionary of Philosophy and Psychology*:
>
> A proposition is vague when there are possible states of things
> concerning which it is *intrinsically uncertain* whether, had
> they been contemplated by the speaker, he would have regarded them as
> excluded or allowed by the proposition. By intrinsically uncertain we
> mean not uncertain in consequence of any ignorance of the interpreter,
> but because the speaker's habits of language were indeterminate.
> (Peirce 1902, 748)
>
Peirce connects intrinsic uncertainty with the sorites paradox:
"vagueness is an indeterminacy in the applications of an idea,
as to how many grains of sand are required to make a heap, and the
like" (1892, 167). In the case of relative borderline cases, the
question is determinate but our means for answering it are incomplete.
For absolute borderline cases, there is incompleteness in the question
itself. When a term is applied to one of its absolute borderline cases
the result is a statement that resists all attempts to settle whether
it is true or false (Hu 2021). For instance, no amount of conceptual
analysis or empirical inquiry can establish the minimum number of
grains needed for a heap. Equally futile is inquiry into the
qualitative vagueness illustrated by the decapitation riddle. An
inventive speaker could give the appearance of settling the matter by
stipulating that 'decapitate' means 'remove a
head' (as opposed to 'make headless' or
'remove the head'). But that semantic revision would
change the topic to an issue that merely sounds the same as
decapitation. The switch in meaning might not be detected if the
inventive speaker was misperceived as merely applying antecedent
knowledge of decapitation.
We should expect absolute borderline cases to arise for unfamiliar
cases. Where there is no perceived need for a decision, criteria are
left undeveloped. Since philosophical thought experiments often
involve unfamiliar cases, there is apt to be intrinsic uncertainty. W.
V. O. Quine was especially suspicious of the far-out hypotheticals
popular in the literature on personal identity. He globalizes his
doubts about determinate answers in his thesis of the indeterminacy of
translation: for any language, there are indefinitely many, equally
adequate rival translation manuals.
The indeterminacy of translation applies to one's own language.
Introspection will not settle whether '\(x\) is the same person
as \(y\)' means '\(x\) has the same body as \(y\)'
or '\(x\) has the same memories as \(y\)'. You have never
encountered a case in which these two possible rules of usage diverge.
John Locke imagines a prince and cobbler who awake with each
other's memories. Is the prince now in the cobbler's bed
or is the prince in his royal bed with errant memories? Quine sees a
deflating resemblance with the decapitation riddle. Either of the
incompatible answers fits past usage. Yet Locke answers that the
prince and cobbler have switched bodies. Materialists counter that the
prince and cobbler have instead switched psychologies. Locke and his
adversary should have instead shrugged their shoulders.
The Cambridge University philosophers Bertrand Russell (1923) and
Frank Ramsey were heavily influenced by American pragmatism (Misak
2016). Their reading of Peirce made semantic indeterminacy grow into
an international preoccupation among analytic philosophers. Although
vagueness seems holistic, Peirce's definition is reductive.
'Tall' is vague by virtue of there being cases in which
there is no fact of the matter. A man who is 1.8 meters in height is
neither clearly tall nor clearly non-tall. No amount of conceptual
analysis or empirical investigation can settle whether a 1.8 meter man
is tall. Borderline cases are inquiry resistant.
Inquiry resistance recurs. For in addition to the unclarity of the
borderline case, there is normally unclarity as to where the unclarity
begins. Twilight governs times that are borderline between day and
night. But we are equally uncertain as to when twilight begins. This
suggests there are borderline cases of borderline cases of
'day'. Consequently, 'borderline case' has
borderline cases. This higher-order vagueness seems to show that
'vague' is vague (Hu 2017). We are slow to recognize
higher-order vagueness because we are under a norm to assert only what
we know. When we discuss cases of twilight, we focus on definite cases
of this borderline between day and night, not the marginal cases
between definite day and definite twilight.
The vagueness of 'vague' would have two destructive
consequences. First, Gottlob Frege could no longer coherently
characterize vague predicates as incoherent. For this attribution of
incoherence uses 'vague'. Frege's ideal of precision
is itself vague because 'precise' is the complement of
'vague'. Second, the vagueness of 'vague'
dooms efforts to avoid a sharp line between true and false with a
buffer zone that is neither true nor false. If the line is not drawn
between the true and the false, then it will be between the true and
the intermediate state. Any finite number of intermediates just delays
the inevitable.
Several philosophers repudiate higher-order vagueness as an illusion
(Wright 2021, chapter 5). They deny that there is an open-ended
iteration of borderline status. They find it telling that speakers do
not go around talking about borderline borderline cases and borderline
borderline borderline cases and so forth (Raffman 2005, 23). Defenders
of higher-order vagueness reply that ordinary speakers avoid iterating
'borderline' for the same reason they avoid iterating
'million' or 'know'. The iterations are
confusing but perfectly meaningful. 'Borderline' behaves
just like a vague predicate. For instance, 'borderline'
can be embedded in a sorites argument. Defenders of higher-order
vagueness have also tried to clinch the case with particular specimens
such as borderline hermaphrodites (reasoning that these individuals
are borderline borderline males) (Sorensen 2010).
Second thoughts about higher-order vagueness are difficult to sustain
unless one also rethinks first-order vagueness. Perhaps Peirce
overreacted to the threat of futile inquiry! Where does the tail of a
snake begin? When posed as a rhetorical question, the speaker
intimates that there is no definite answer. But the tail can be
located by tracing down from the snake's rib cage. A false
attribution of indeterminacy will lead to the premature abandonment of
inquiry. Is heat the absence of cold or cold the absence of heat? Many
physicists ceased to inquire after Ernst Mach ridiculed the ancient
riddle as indeterminate. Israel Scheffler (1979, 77-78) draws a
moral from such hasty desertions. Never give up! In an internal
criticism of his colleague Quine, Scheffler condemns any attribution
of semantic indeterminacy as a defeatist relic of the distinction
between analytic and synthetic statements. Appeals to meaning never
conclude inquiry or preclude inquiry.
No one pursues the many lines of inquiry Scheffler refuses to close.
Yet some agree that the attribution of borderline status is never
mandatory. Crispin Wright (2021) concludes there are no
*definite* borderline cases. Diana Raffman (2014, 153) offers
some empirical support for borderline status always being optional. A
minority of fluent speakers will draw a sharp line between yellow and
orange.
Mandatory status is defended by listing paradigm cases of
'borderline'. Just as a language teacher tests for
competence by having students distinguish clear positives from clear
negatives, she also tests by their recognition of borderline cases.
Any pupil who uses '-ish' without recognizing borderline
cases of 'noonish' has not yet mastered the suffix.
'Noonish' is not a synonym of 'within ten minutes of
noon' or any other term with a precisely delimited interval.
## 2. Comparison with Ambiguity and Generality
'Tall' is relative. A 1.8 meter pygmy is tall for a pygmy
but a 1.8 meter Masai is not tall for a Masai. Although relativization
disambiguates, it does not eliminate borderline cases. There are
shorter pygmies who are borderline tall for a pygmy and taller Masai
who are borderline tall for a Masai. The direct bearers of vagueness
are a word's full disambiguations such as 'tall for an
eighteenth-century French man'. Words are only vague indirectly,
by virtue of having a sense that is vague. In contrast, an ambiguous
word bears its ambiguity directly--simply in virtue of having
multiple meanings.
This contrast between vagueness and ambiguity is obscured by the fact
that most words are both vague and ambiguous. 'Child' is
ambiguous between 'offspring' and 'immature
offspring'. The latter reading of 'child' is vague
because there are borderline cases of immature offspring. The contrast
is further complicated by the fact that most words are also general.
For instance, 'child' covers both boys and girls.
Ambiguity and vagueness also contrast with respect to the
speaker's discretion. If a word is ambiguous, the speaker can
resolve the ambiguity without departing from literal usage. For
instance, he can declare that he meant 'child' to express
the concept of an immature offspring. If a word is vague, the speaker
cannot resolve the borderline case. For instance, the speaker cannot
make 'child' literally mean anyone under eighteen just by
intending it. That concept is not, as it were, on the menu
corresponding to 'child'. He would be understood as taking
a special liberty with the term to suit a special purpose.
Acknowledging departure from ordinary usage would relieve him of the
obligation to defend the sharp cut-off.
When the movie director Alfred Hitchcock mused 'All actors are
children' he was taking liberties with clear negative cases of
'child' rather than its borderline cases. The aptness of
his generalization is not judged by its literal truth-value (because
it is obviously untrue). Likewise, we do not judge precisifications of
borderline cases by their truth-values (because they are obviously not
ascertainable as true or false). We instead judge precisifications by
their simplicity, conservativeness, and fruitfulness. A
precisification that draws the line across the borderline cases
conserves more paradigm usage than one that draws the line across
clear cases. But conservatism is just one desideratum among many.
Sometimes the best balance is achieved at the cost of turning former
positive cases into negative cases.
Once we shift from literal to figurative usage, we gain fictive
control over our entire vocabulary--not just vague words. When a
travel agent says 'France is a hexagon', we do not infer
that she has committed the geometrical error of classifying France as
a six-sided polygon. We instead interpret the travel agent as speaking
figuratively, as meaning that France is shaped like a hexagon.
Similarly, when the travel agent says 'Reno is the biggest
little city', we do not interpret her as overlooking the
vagueness of 'little city'. Just as she uses the obvious
falsehood of 'France is a hexagon' to signal a metaphor,
she uses the obvious indeterminacy of 'Reno is the biggest
little city' to signal hyperbole.
Given that speakers lack any literal discretion over vague terms, we
ought not to chide them for indecisiveness. Where there is no decision
to be made, there is no scope for vice. Speakers would have literal
discretion if statements applying a predicate to its borderline cases
were just permissible variations in linguistic usage. For the sake of
comparison, consider discretion between alternative spellings.
Professor Letterman uses 'judgment' instead of
'judgement' because he wants to promote the principle that
a silent E signals a long vowel. He still has fond memories of Tom
Lehrer's 1971 children's song "Silent E":
>
> Who can turn a can into a cane?
>
>
> Who can turn a pan into a pane?
>
>
> It's not too hard to see,
>
>
> It's Silent E.
>
>
>
>
> Who can turn a cub into a cube?
>
>
> Who can turn a tub into a tube?
>
>
> It's elementary
>
>
> For Silent E.
>
Professor Letterman disapproves of those who add the misleading E but
concedes that 'judgement' is a permissible spelling; he
does not penalize his students for misspelling when they make their
hard-hearted choice of 'judgement'. Indeed, Professor
Letterman scolds students if they fail to stick with the same spelling
throughout the composition. Choose but stick to your choice!
Professor Letterman's assertion 'The word for my favorite
mental act is spelled j-u-d-g-m-e-n-t' is robust with respect to
the news that it is also spelled j-u-d-g-e-m-e-n-t. He would continue
to assert it. He can conjoin the original assertion with information
about the alternative: 'The word for my favorite mental act is
spelled j-u-d-g-m-e-n-t and is also spelled j-u-d-g-e-m-e-n-t'.
In contrast, Professor Letterman's assertion that 'Martha
is a woman' is not robust with respect to the news that Martha
is a borderline case of 'woman' (say, Letterman learns
Martha is younger than she looks). The new information would lead
Letterman to retract his assertion in favor of a hedged remark such as
'Martha might be a woman and Martha might not be a woman'.
Professor Letterman's loss of confidence is hard to explain if
the information about her borderline status were simply news of a
different but permissible way of describing her. Discoveries of
notational variants do not warrant changes in former beliefs.
News of borderline status has an evidential character. Loss of clarity
brings loss of warrant. If you do not lower your confidence, you are
open to the charge of dogmatism. To concede that your ferry is a
borderline case of 'seaworthy' is to concede that you do
not know that your ship is seaworthy. That is why debates can be
dissolved by showing that the dispute is over an absolute borderline
case. The debaters should both become agnostic. After all, they do not
have a license to form beliefs beyond their evidence. If your ferry is
borderline seaworthy, you are not free to believe it is seaworthy.
According to W. K. Clifford, belief beyond the evidence is always
morally forbidden, not just irrational. William James countered that
belief beyond the evidence is permissible when there is a forced,
momentous, live option. Can I survive the destruction of my body,
simply by virtue of a strong psychological resemblance with a future
person? The answer affects my prospects for immortality. James says
that my belief that this is borderline survival does not forbid me
from believing that it is survival. Whether I exercise this
prerogative depends on my temperament. Since momentous borderline
cases are so prevalent in speculative matters, much philosophical
disagreement is a matter of personality differences.
The Clifford drop into agnosticism is especially quick for those who
believe that evidence supports a unique judgment. A detective who
thinks the evidence shows Gray is a murderer cannot concede that the
evidence equally permits the judgment that Gray is not a murderer. If
the detective concedes that Gray's act is borderline murder,
then he cannot shed the hedge and believe the act is murder.
In the case of relative borderline cases, the hedge can be shed by the
discovery of a novel species of evidence. Murder inquiries were
sometimes inconclusive because the suspect had an identical twin.
After detectives learned that identical twins have distinct
fingerprints, some of these borderline cases became decidable.
Presently, there are some borderline cases arising from the extreme
similarity in DNA between identical twins. Soon, geneticists will be
able to reliably discern the slight differences between twin DNA that
arise from mutations.
News of an alternative sense is like news of an alternative spelling;
there is no evidential impact (except for meta-linguistic beliefs
about the nature of words). Your assertion that 'All bachelors
are men' is robust with respect to the news that
'bachelor' has an alternative sense in which it means a
male seal. Assertions are not robust with respect to news of hidden
generality. If a South African girl says 'No elephant can be
domesticated' but learns there is another species of elephant
indigenous to Asia, she will lose some confidence; maybe Asian
elephants can be domesticated. News of hidden generality has
evidential impact. When it comes to robustness, vagueness resembles
generality more than vagueness resembles ambiguity.
Mathematical terms such as 'prime number' show that a term
can be general without being vague. A term can also be vague without
being general. Borderline cases of analytically empty predicates
illustrate this possibility.
Generality is obviously useful. Often, lessons about a particular
\(F\) can be projected to other \(F\)s in virtue of their common
\(F\)-ness. When a girl learns that *her* cat has a nictitating
membrane that protects its eyes, she rightly expects that her
neighbor's cat also has a nictitating membrane. Generality saves
labor. When the girl says that she wants a toy rather than clothes,
she narrows the range of acceptable gifts without going through the
trouble of specifying a particular gift. The girl also balances
values: a gift should be intrinsically desired and yet also be a
surprise. If uncertain about which channel is the weather channel, she
can hedge by describing the channel as 'forty-something'.
There is an inverse relationship between the contentfulness of a
proposition and its probability: the more specific a claim, the less
likely it is to be true. By gauging generality, we can make sensible
trade-offs between truth and detail.
'Vague' has a sense that is synonymous with abnormal
generality. This precipitates many equivocal explanations of
vagueness. For instance, many commentators say that vagueness exists
because broad categories ease the task of classification. If I can
describe your sweater as red, then I do not need to ascertain whether
it is scarlet. This freedom to use wide intervals obviously helps us
to learn, teach, communicate, and remember. But so what? The problem
is to explain the existence of borderline cases. Are they present
because vagueness serves a function? Or are borderline cases
side-effects of ordinary conversation--like the echoes indoor
listeners learn to ignore?
Every natural language is both vague and ambiguous. However, both
features seem eliminable. Indeed, both are eliminated in miniature
languages such as checkers notation, computer programming languages,
and mathematical descriptions. Moreover, it seems that both vagueness
and ambiguity ought to be minimized. 'Vague' and
'ambiguous' are pejorative terms. And they deserve their
bad reputations. Think of all the automotive misery that has been
prefaced by
>
> Driver: Do I turn left?
>
>
> Passenger: Right.
>
English can be lethal. Philosophers have long motivated appeals for an
ideal language by pointing out how ambiguity creates the menace of
equivocation:
>
> No child should work.
>
>
> Every person is a child of someone.
>
>
> Therefore, no one should work.
>
Happily, we know how to criticize and correct all equivocations.
Indeed, every natural language is self-disambiguating in the sense
that each has all the resources needed to uniquely specify any reading
one desires.
## 3. The Philosophical Challenge Posed by Vagueness
Vagueness, in contrast, precipitates a profound problem: the sorites
paradox. For instance,
* Base step: A one-day-old human being is a child.
* Induction step: If an \(n\)-day-old human being is a child, then
that human being is also a child when it is \(n + 1\) days old.
* Conclusion: Therefore, a 36,500-day-old human being is a
child.
The conclusion is false because a 100-year-old man is clearly a
non-child. Since the base step of the argument is also plainly true
and the argument is valid by mathematical induction, we seem to have
no choice but to reject the second premise.
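The classical predicament can be put computationally. The following minimal sketch assumes a hypothetical sharp cut-off for 'child' (6,570 days, roughly eighteen years, is an arbitrary illustrative choice): any two-valued model that honors the base step and the falsity of the conclusion must falsify the induction step at exactly one day.

```python
# A minimal sketch: any {True, False} model of 'child' that makes the
# base step true and the conclusion false must fail the induction step
# at exactly one day. The cut-off of 6570 days is an arbitrary assumption.

def is_child(days: int, cutoff: int = 6570) -> bool:
    # Hypothetical sharp threshold: roughly 18 years, for illustration only.
    return days < cutoff

assert is_child(1)            # base step holds
assert not is_child(36500)    # the 100-year-old is clearly a non-child

# The induction step fails at exactly one n: the hidden boundary.
counterexamples = [n for n in range(1, 36500)
                   if is_child(n) and not is_child(n + 1)]
print(counterexamples)        # a single counterexample, at the cut-off
```

Whatever cut-off is plugged in, the list of counterexamples has exactly one member; the sketch shows that the sharpness, not its location, is forced by classical logic.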
George Boolos (1991) observes that we have an autonomous case against
the induction step. In addition to implying plausible conditionals
such as "If a 1-day-old human being is a child, then that human
being is also a child when it is 2 days old", the induction step
also implies ludicrous conditionals such as "If a 1-day-old
human being is a child, then that human being is also a child when
36,500 days old".
Boolos is puzzled why we overlook these clear counterexamples. One
explanation is that we tend to treat the induction step as a
*generic* generalization such as "People have ten
toes" (Sorensen 2012) rather than a generalization with a
quantifier such as 'all' or 'most'. Whereas
the universal generalization "All people have ten toes" is
refuted by a single person with eleven toes, the generic
generalization tolerates exceptions.
This hypothesis is plausible for newcomers to the sorites paradox. But
it is less plausible for those being tutored by Professor Boolos. He
guides logic students to the correct, universal reading of the
induction step. When students drift to a generic reading, Boolos
reminds them that the induction step is a generalization that
tolerates no exceptions.
Guided by Boolos' firm hand, logic students drive a
*second* stake into the heart of the induction step (the
entailment of preposterous conditionals). Yet the paradox seems far
from dead. The hitch is that death for the induction step is life for
its negation. The negation of the second premise classically implies a
sharp threshold for childhood. For the falsehood of the induction step
implies the existential generalization that there is a number \(n\)
such that an \(n\)-day-old human being is a child but is no longer a
child one day later.
Epistemicists accept this astonishing consequence. They stand by
classical logic and conclude that vagueness is a form of ignorance.
For any day in the borderline region of 'child', there is
a small probability that it is the day one stopped being a child.
Normally, the probability is negligible. But if we round down to zero,
we fall into inconsistency.
This diagnosis is reminiscent of Carl Hempel's solution to the
Raven Paradox. At first blush, we deny that a white handkerchief could
confer any degree of confirmation on 'All ravens are
black'. But this is incompatible with us regarding a white
handkerchief as providing some confirmation of 'All non-black
things are non-ravens'. Since this contrapositive is logically
equivalent to 'All ravens are black', whatever confirms
one hypothesis confirms the other. Hempel advises us to regain
consistency by conceding that a white handkerchief slightly confirms
'All ravens are black'. Similarly, Hempel might advise us
to slightly raise the probability of a borderline child being a child
upon learning that she is a day younger than previously believed.
Hempel's only reservation about indoor ornithology is
inefficiency. Given how the actual world is arranged, more news about
'All ravens are black' can be gained outdoors. We do not
bother to look indoors because the return on inquiry is too low.
Bottom-up Bayesians envisage extraordinary scenarios in which the
return on inquiry could be high. However, even their ample imagination
fails to yield a possible world in which a single day has a more
discernible opportunity to make a difference to whether someone is a
child. All pressure to shift credence is exerted top down from
probability theory.
Timothy Williamson (1994) traces our ignorance of the threshold for
childhood to "margin for error" principles. If one knows
that an \(n\)-day-old human being is a child, then that human being
must also be a child when \(n + 1\) days old. Otherwise, one is right
by luck. Given that there is a threshold, we would be ignorant of its
location. Under a use theory of meaning, the threshold depends on many
speakers. Williamson characterizes this threshold as sharing the
unpredictability of long-term weather forecasts. According to the
meteorologist Edward Lorenz, a tornado in Kansas could be precipitated
by the flapping of a butterfly's wings in Tokyo.
The use theory of meaning does not apply to asocial languages. For
instance, psycholinguists postulate a language of thought that is not
used for communication. This innate language is vague.
Natural languages lexicalize only a small share of our concepts. The
primary bearers of vagueness are concepts rather than words (Bacon
2018). Aphasia does not end vague thinking. Insofar as we attribute
thinking to non-linguistic animals, we attribute vague thinking.
Peirce's emphasis on a community of inquirers encourages a
sociology of language. Ludwig Wittgenstein's attack on the
possibility of a private language was intended to remove any rival to
a community-based approach to language. Yet asocial vagueness remains
viable. This motivates some epistemicists to explain our ignorance
more metaphysically, say, as an effect of there being no truth-maker
to control the truth-value of a borderline statement (Sorensen 2001,
chapter 11).
Debate over Williamson's margin for error principle draws us
deep into epistemology and modality (Yli-Vakkuri 2016). Preferring to
explore the shallows, some commentators survey attitudes weaker than
knowledge. According to Nicholas Smith (2008, 182) we cannot even
guess that the threshold for baldness is the 400th hair. Hartry Field
(2010, 203) denies that a rational man can fear that he has just
passed the threshold into being old. Hope, speculation, and wonder do
not require evidence but they do require understanding. So it is
revealing that these attitudes have trouble getting a purchase on the
threshold of oldness (or any other vague predicate). A simple
explanation is that bare linguistic competence gives us knowledge that
there are no such thresholds. This accounts for the comical air of the
epistemicist. Just as there is no conceptual room to worry that there
is a natural number between sixty and sixty-one, there is no
conceptual room to worry that one has passed the threshold of oldness
between one's sixtieth and sixty-first birthday.
An old epistemicist might reply: My piecemeal confidence that a given
number is not the threshold for oldness does not agglomerate into
collective confidence that there is no such number. If I wager against
each number being the threshold, then I must have placed a losing bet
somewhere. For if I won each bet then there was no opportunity for me
to make the transition to oldness. My bookie could have made a
"Dutch book" against me. He would have been entitled to
payment without having to identify which bet I lost. Since
probabilities may be extracted from hypothetical betting behavior, I
must actually assign some small (normally negligible) probability to
hypotheses identifying particular thresholds. So must you.
Stephen Schiffer (2003, 204) denies that classical probability
calculations apply in vague contexts. Suppose Donald is borderline old
and borderline bald. According to Schiffer we should be just as
confident in the conjunction 'Donald is old and bald' as
in either conjunct. Adding conjuncts does not reduce confidence
because we have a "vague partial belief" rather than the
standard belief assumed by mathematicians developing probability
theory. Schiffer offers a calculus for this vagueness-related
propositional attitude. He crafts the rules for vague partial belief
to provide a *psychological* solution to the sorites
paradox.
The project is complicated by the fact that vague partial beliefs
interact with precise beliefs (MacFarlane 2010). Consider a statement
that has a mixture of vague and precise conjuncts: 'Donald is
old and bald and has an even number of hairs'. Adding the extra
precise conjunct should diminish confidence. Schiffer also needs to
accommodate the fact that some speakers are undecided about whether
the nature of the uncertainty involves vagueness. Even an idealized
speaker may be unsure because there is vagueness about the borders
between vagueness-related uncertainty and other sorts of
uncertainty.
Attitudes toward determinacy vary over time. Bernard Williams (2010,
162) says Thucydides (460-400 BC) invented historical time.
Thucydides insists that any two events occurred simultaneously or one
occurred before the other. Thucydides does not countenance a vague
"olden days"--a period in which events fail to be
chronologically ordered. Every event is situated on a single timeline.
There can be ignorance of when an event took place but there is always
a fact of the matter. With similar focus, Archimedes (287-212
BC) insists that every quantity corresponds to a specific number. No
quantity, however large, is innumerable. In "The Sand
Reckoner" Archimedes devises a special notation to express the
total number of grains of sand. The reductionism of Enlightenment
thinkers led them to further analysis of shades of gray into black and
white dots of precision.
In a holistic reaction, the Romantics portray the new boundaries as no
more real than the grid lines of a microscope. The specimen under the
slide lacks boundaries. Understanding does not require boundaries.
Organizing items along a spectrum is a natural form of classification.
We comprehend this analog grouping more readily and smoothly than we
grasp the digital groupings instilled by Descartes' algebraic
representation of space. If our metaphysics of classification is
constricted by a logic designed for mathematics, the sorites paradox
will make us see disorder where there is really order of a more fluid
sort. Spontaneous arrangements will bring what order is permitted by
the subject matter. Romantic biologists take to the field and refuse
to pen species within the axes of the Cartesian coordinate system.
Confronted with novel phenomena such as evolution, magnetism and
electricity, Romantic scientists bypassed boundaries.
A geologist views a pair of aerial photographs cross-eyed. His brain
recognizes the two images as representations of the same landscape.
The flat images fuse into a unifying stereoscopic perspective. The
Romantics portray vague language as perceptively de-focussed.
Vagueness is not the cloudy vision of an old man with cataracts.
The epistemicist fits into the Thucydidean tradition of replacing
indeterminacy with determinacy. Crispin Wright (2021, 393) compares
the epistemicist's postulation of hidden boundaries to the
mathematical realist's postulation of a hidden Platonic realm.
Both the epistemicist and the mathematical realist add metaphysical
infrastructure to ensure truth will transcend our means of proof. This
infrastructure validates proofs of generalizations that are not
constructed from examples. The impossibility of finding a witness for
an existential generalization, such as 'There is a youngest old
man', is compatible with there being a non-constructive
proof.
Despite rearguard defenses by the Romantics, Newtonian physics
appeared to show that empirical reality fits classical logic. The hard
sciences illustrated the crystalline growth of determinacy. But in a
shocking reversal, quantum mechanics suggests the possibility of
reversing from determinacy to indeterminacy. What had been formerly
regarded as determinate, the position and velocity of an object, was
regarded as indeterminate by Werner Heisenberg. Quantum logicians,
such as Hilary Putnam (1976), abandoned the distributive laws of
propositional logic.
Putnam (1983) went on to propose a solution that seems to apply L. E. J.
Brouwer's intuitionism to the sorites paradox. (In response to a
rebuttal, Putnam [1985] surprisingly denied that intuitionistic
semantics was part of the proposal.) Brouwer had achieved fame with
his fixed point theorem. But reflection on Kant's philosophy of
mathematics led Brouwer to retract his lovely proof. *Any*
non-constructive proof had to be replaced! Brouwer's recall of
established theorems yielded more specific proofs that had higher
explanatory power. Theorems that could not be retrofitted became
objects of agnosticism. This was sad news for Brouwer's fixed
point theorem. But Putnam welcomes the recall of the
epistemicist's theorem that there is a largest small natural number.
The classical logician relies on double negation to deduce a hidden
boundary from the negation of the generalization "If \(n\) is
small, then \(n + 1\) is small". Since the intuitionist does not
accept this classical rule, his refusal to acquiesce to the induction
step does not saddle him with hidden thresholds. Like the
epistemicist, the intuitionist treats vagueness as a cognitive problem
that does not require rejecting the law of bivalence. Unlike the
epistemicist, the intuitionist must not claim to know the law of
bivalence. To sustain this agnosticism about bivalence, the
intuitionist must parry the sorites monger's reformulations
(Wright 2021, chapter 14). For instance, the intuitionist's
least number principle threatens to re-ignite the sorites.
Many commentators concede to the epistemicist that it is logically
possible for vague predicates to have thresholds. They just think it
would be a miracle:
>
> It is logically possible that the words on this page will come to life
> and sort my socks. But I know enough about words to dismiss this as a
> serious possibility. So I am right to boggle at the possibility that
> our rough and ready terms such as 'red' could so
> sensitively classify objects.
>
Epistemicists counter that this bafflement rests on an over-estimate
of stipulation's role in meaning. Epistemicists say much meaning
is acquired passively by default rather than actively by decision. If
some boundaries are more eligible for reference than others, then the
environment does the work. If nothing steps in to make a proposition
true, then it is false. Or so opines Timothy Williamson.
Most doubt whether precise analytical tools fit vague arguments. The
scientific Romantic H. G. Wells was amongst the first to suggest that
we must moderate the *application* of logic:
>
> Every species is vague, every term goes cloudy at its edges, and so in
> my way of thinking, relentless logic is only another name for
> stupidity--for a sort of intellectual pigheadedness. If you push
> a philosophical or metaphysical enquiry through a series of valid
> syllogisms--never committing any generally recognised
> fallacy--you nevertheless leave behind you at each step a certain
> rubbing and marginal loss of objective truth and you get deflections
> that are difficult to trace at each phase in the process. Every
> species waggles about in its definition, every tool is a little loose
> in its handle, every scale has its individual error. (1908, 18)
>
Many more believe that the problem is with logic itself rather than
the manner in which it is applied. They favor solving the sorites
paradox by replacing standard logic with an earthier deviant
logic.
There is a desperately wide range of opinions as to how the revision
of logic should be executed. Every form of deviant logic has been
applied in the hope of resolving the sorites paradox.
## 4. Many-valued Logic
An early favorite was many-valued logic. On this approach, borderline
statements are assigned truth-values that lie between full truth and
full falsehood. Some logicians favor three truth-values, others prefer
four or five. The most popular approach is to use an infinite number
of truth-values represented by the real numbers between 0 (for full
falsehood) and 1 (for full truth). This infinite spectrum of
truth-values might be of service for a continuous sorites argument
involving 'small real number' (Weber and Colyvan 2010).
Classical logic does not enforce a sharp boundary for infinite
domains. But it still enforces unlimited sensitivity for vague
predicates. If two quantities \(A\) and \(B\) are growing but \(A\) is
given a headstart, then \(A\) will cease to equal a small real number
before \(B\). Since this transitional difference holds regardless of
the lead's magnitude, "\(A\) equals a small real
number" passes from true to false within an arbitrarily narrow
interval. If the number of truth-values is instead as numerous as the
real numbers, there is no longer any mismatch between the tiny size of
the transition and the large difference in truth-value.
There is a new mismatch, however. The number of truth-values now
dwarfs the number of truth-value bearers. Since a natural language can
express any proposition, the number of propositions equals the number
of sentences. The number of sentences is \(\aleph\_0\), while the
number of real numbers is \(2^{\aleph\_0}\), which (assuming the
continuum hypothesis) equals \(\aleph\_1\). Consequently, there are
infinitely many truth-values
without bearers.
This new mismatch seems more bearable than the old mismatch it is
designed to avoid. Yet some find even a lower infinity of truth-values
over-bearing. Instead of having just one artificially sharp line
between the true and the false, the many-valued logician has
infinitely many sharp lines such as that between statements with a
truth of .323483925 and those with a higher truth-value. Mark
Sainsbury grimaces, '... you do not improve a bad idea by
iterating it' (1996, 255).
A proponent of an infinite-valued logic might cheer up Sainsbury with
an analogy. It is a bad idea to model a circle with a straight line.
Using two lines is not much better, nor is there much improvement
using a three-sided polygon (a triangle). But as we add more straight
lines to the polygon (square, pentagon, hexagon, and so on) we make
progress--by iterating the bad idea of modeling a circle with
straight lines.
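The analogy can be made numerical. The perimeter of a regular \(n\)-gon inscribed in a unit circle is \(2n \sin(\pi/n)\); iterating the "bad idea" of straight sides converges on the circumference \(2\pi\):

```python
import math

# Perimeter of a regular n-gon inscribed in a unit circle: 2 * n * sin(pi / n).
# Each iteration of the "bad idea" of straight sides edges closer to 2 * pi.
for n in (3, 4, 6, 100, 10_000):
    print(n, 2 * n * math.sin(math.pi / n))

# By n = 10,000 the polygon's perimeter agrees with 2 * pi to six decimals.
assert abs(2 * 10_000 * math.sin(math.pi / 10_000) - 2 * math.pi) < 1e-6
```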
Indeed, it would be tempting to triumphantly conclude "The
circle has been modeled as an infinite-sided polygon". But has
the circle been revealed to be an infinite-sided polygon? Have curved
lines been replaced by straight lines? Have curved lines (and hence
circles) been proven to not exist? A model can succeed without being
clear what has been achieved.
But it is premature to dwell on the analogy "Precision is to
vagueness as straightness is to curvature". The many-valued
logician must first vindicate the comparison by providing details
about how to calculate the truth-values of vague statements from the
truth-values of their component statements.
Proponents of many-valued logic approach this obligation with great
industry. Precise new rules are introduced to calculate the
truth-values of compound statements that contain statements with
intermediate truth-values. For instance, the revised rule for
conjunctions assigns the conjunction the same truth-value as the
conjunct with the lowest truth-value. These rules are designed to
yield all standard theorems when all the truth-values are 1 and 0. In
this sense, classical logic is a limiting case of many-valued logic.
Classical logic is agreed to work fine in the area for which it was
designed--mathematics.
Most theorems of standard logic break down when intermediate
truth-values are involved. (An irregular minority, such as 'If
\(P\), then \(P\)', survives.) Even the classical contradiction
'Donald is bald and it is not the case that he is bald'
receives a truth-value of .5 when 'Donald is bald' has a
truth-value of .5. Many-valued logicians note that the error they are
imputing to classical logic is often so small that classical logic can
still be fruitfully applied. But they insist that the sorites paradox
illustrates how tiny errors accumulate into a big error.
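The rules just described (minimum for conjunction, maximum for disjunction, and \(1 - v\) for negation) can be sketched in a few lines; the value 0.5 for 'Donald is bald' is the borderline assignment used above:

```python
# A minimal sketch of the many-valued connectives described above:
# conjunction takes the minimum, disjunction the maximum, negation is
# 1 minus the value. Truth-values are reals in [0, 1].

def f_not(p): return 1 - p
def f_and(p, q): return min(p, q)
def f_or(p, q): return max(p, q)

bald = 0.5                         # 'Donald is bald' at a borderline 0.5
print(f_and(bald, f_not(bald)))    # the classical contradiction scores 0.5
print(f_or(bald, f_not(bald)))     # the classical tautology also scores 0.5

# Classical logic as the limiting case: restricted to {0, 1}, the rules
# reproduce the ordinary truth tables.
assert f_and(1, 0) == 0 and f_or(1, 0) == 1 and f_not(0) == 1
```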
Critics of the many-valued approach complain that it botches phenomena
such as hedging. If I regard you as a borderline case of 'tall
man', I cannot sincerely assert that you are tall and I cannot
sincerely assert that you are of average height. But I can assert the
hedged claim 'Either you are tall or of average height'.
The many-valued rule for disjunction is to assign the whole statement
the truth-value of its highest disjunct. Normally, the added disjunct
in a hedged claim is not more plausible than the other disjuncts. Thus
it cannot increase the degree of truth. Disappointingly, the proponent
of many-valued logic cannot trace the increase of assertibility to an
increase in the degree of truth.
Epistemicists ascribe the increase in assertibility to the increasing
probability of truth. Since the addition of disjuncts can raise
probability indefinitely, the epistemicists correctly predict that we
can hedge our way to full assertibility. However, epistemicists do not
have a monopoly on this prediction.
## 5. Supervaluationism
According to supervaluationists, borderline statements lack a
truth-value. This neatly explains why it is universally impossible to
know the truth-value of a borderline statement. Supervaluationists
offer details about the nature of absolute borderline cases. Simple
sentences about borderline cases lack a truth-value. Compounds of
these statements can have a truth-value if they come out true
regardless of how the statement is admissibly precisified. For
instance, 'Either Mr. Stoop is tall or it is not the case that
Mr. Stoop is tall' is true because it comes out true under all
ways of sharpening 'tall'. Thus the method of
supervaluations allows one to retain all the theorems of standard
logic while admitting "truth-value gaps".
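The method can be sketched in a toy model (my illustration, not the source's; the height thresholds and the borderline height 177 cm are hypothetical stand-ins for admissible precisifications of 'tall'):

```python
thresholds = [170, 175, 180, 185]  # hypothetical admissible sharpenings, in cm
stoop = 177                        # Mr. Stoop's height: a borderline case

def tall_under(threshold, height):
    # Under a given precisification, 'tall' has a sharp cut-off.
    return height >= threshold

def supertrue(sentence):
    # True under every admissible precisification.
    return all(sentence(t) for t in thresholds)

def superfalse(sentence):
    # False under every admissible precisification.
    return all(not sentence(t) for t in thresholds)

# The simple sentence 'Stoop is tall' is neither supertrue nor superfalse:
simple = lambda t: tall_under(t, stoop)
print(supertrue(simple), superfalse(simple))  # False False -> a truth-value gap

# But excluded middle comes out true under every sharpening:
lem = lambda t: tall_under(t, stoop) or not tall_under(t, stoop)
print(supertrue(lem))  # True
```

The sketch shows the non-truth-functional character of the method: the disjunction is supertrue even though neither disjunct is.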
One may wonder whether this striking result is a genuine convergence
with standard logic. Is the supervaluationist characterizing vague
statements as propositions? Or is he merely pointing out that certain
non-propositions have a structure isomorphic to logical theorems?
(Some electrical circuits are isomorphic to tautologies but this does
not make the circuits tautologies.) Kit Fine (1975, 282), and
especially David Lewis (1982), characterize vagueness as
hyper-ambiguity. Instead of there being one vague concept, there are
many precise concepts that closely resemble each other.
'Child' can mean a human being at most one day old or mean
a human being at most two days old or mean a human being at most three
days old .... Thus the logic of vagueness is a logic for
equivocators. Lewis' idea is that ambiguous statements are true
when they come out true under all disambiguations. But logicians
normally require that a statement be disambiguated *before*
logic is applied. The mere fact that an ambiguous statement comes out
true under all its disambiguations does not show that the statement
itself is true. Sentences which are *actually* disambiguated
may have truth-values. But the best that can be said of those that
merely *could* be disambiguated is that they *would*
have had a truth-value had they been disambiguated (Tye 1989).
Supervaluationism will converge with classical logic only if each word
of the supervaluated sentence is uniformly interpreted. For instance,
'Either a carbon copy of Teddy Roosevelt's signature is an
autograph or it is not the case that a carbon copy of Teddy
Roosevelt's signature is an autograph' comes out true only
if 'autograph' is interpreted the same way in both
disjuncts. Vague sentences resist mixed interpretations. However,
mixed interpretations are permissible for ambiguous sentences. As
Lewis himself notes in a criticism of relevance logic, 'Scrooge
walked along the bank on his way to the bank' can receive a
mixed disambiguation. When exterminators offer 'non-toxic ant
poison', we charitably switch relativizations within the noun
phrase: the substance is safe for human beings but deadly for
ants.
Even if one agrees that supervaluationism converges with classical
logic about theoremhood, the two clearly differ in other respects.
Supervaluationism requires rejection of inference rules such as
contraposition, conditional proof and *reductio ad absurdum*
(Williamson 1994, 151-152). In the eyes of the
supervaluationist, a demonstration that a statement is not true does
not guarantee that the statement is false.
The supervaluationist is also under pressure to reject semantic
principles which are intimately associated with the application of
logical laws. According to Alfred Tarski's Convention T, a
statement '\(S\)' is true if and only if \(S\). In other
words, truth is disquotational. Supervaluationists say that being
supertrue (being true under all precisifications) suffices for being
true. But given Convention T, supertruth would then be disquotational.
Since the supervaluationists accept the principle of excluded middle,
the combination of Convention T and '\(P \vee \neg P\)' being
supertrue would force them to say '\(P\)' is supertrue or
'\(\neg P\)' is supertrue (even if '\(P\)'
applies a predicate to a borderline case). This would imply that
either '\(P\)' is true or '\(\neg P\)' is true
(Williamson 1994, 162-163). And that would be a fatal loss of
truth-value gaps for supervaluationism.
There is a final concern about the "ontological honesty"
of the supervaluationist's existential quantifier. As part of
his solution to the sorites paradox, the supervaluationist asserts,
"There is a human being who, for some \(n\), was a child when
\(n\) days old but not when \(n + 1\) days old." For this
statement comes out true under all admissible precisifications of
'child'. However, when pressed the supervaluationist adds
a codicil: "Oh, of course I do not mean that there really is a
sharp threshold for childhood."
After the clarification, some wonder how supervaluationism differs
from drastic metaphysical skeptics. In his nihilist days, Peter Unger
(1979) admitted that it is useful to talk *as if* there are
children. But he insisted that strictly speaking, vague terms such as
'child' cannot apply to anything. Unger was free to use
supervaluationism as a theory to explain our ordinary discourse about
children. (Unger instead used other resources to explain how we
fruitfully apply empty predicates.) But once the dust had cleared and
the precise rubble came into focus, Unger had to conclude that there
are no children.
Officially, the supervaluationist rejects the induction step of the
sorites argument. Unofficially, he seems to instead reject the
*base* step of the sorites argument.
Supervaluationists encourage the view that all vagueness is a matter
of linguistic indecision: the reason why there are borderline cases is
that we have not bothered to make up our minds. The method of
supervaluation allows us to assign truth-values prior to any
decisions. Expressivists think this is a mistake akin to assigning
truth-values to normative claims (MacFarlane 2016). They model
vagueness as practical uncertainty as to whether to treat a borderline
\(F\) as an \(F\). The deliberator may accept the tautologies of
classical logic as constraints governing competing plans for drawing
lines. I can accept "Either Donald is bald or not" without
accepting either disjunct. An existentially quantified sentence can be
accepted even when no instance is. A shrug of the shoulders signals
readiness to go either way, not ignorance as to which possible world
one inhabits.
The expressivist is poised to explain how supervaluationism developed
into the most respected theory of vagueness. Frege portrayed vagueness
as negligent under-definition: One haphazardly introduces a necessary
condition here, a sufficient condition there, but fails to supply a
condition that is both necessary and sufficient. Supervaluationists
counter that indecision can be both intentional and functional.
Instead of committing ourselves prematurely, we fill in meanings as we
go along in light of new information and interests.
Psychologists may deny that we could really commit ourselves to the
complete precisifications envisaged by the supervaluationist (which
encompass an entire language). Many of these comprehensive
alternatives are too complex for memory. For instance, any
precisification admitting a long band of random verdicts will be
algorithmically complex and so not compressible in a rule. Realistic
alternatives are only modestly more precise than ordinary usage.
Nevertheless, the supervaluationist's conjecture about gradual
precisification is popular for the highly stipulative enterprise of
promulgating and enforcing laws (Endicott 2000). Judges frequently
seem to exercise and control discretion by means of vague language.
Uncertainties about the scope of discretion may arise from
higher-order vagueness (Schauer 2016).
Discretion through gap-filling pleases those who regard adjudication
as a creative process. It alarms those who think we should be judged
by laws rather than men. The doctrine of discretion through
indeterminacy has also been questioned on the grounds that the source
of the discretion is the generality of the legal terms rather than
their vagueness (Poscher 2012).
By David Lanius's (2019) reckoning, the only function for
vagueness in law is strategic. Drunk driving is discouraged by the
vagueness of 'drunk' in the way it is discouraged by
random testing of motorists. Scattershot enforcement becomes a welcome
feature of the legal system rather than a bug in the legal
programming. Motorists stay sober to avoid participation in a
punishment lottery.
Judicial variance is just what one expects if the judges are making a
forced choice on borderline cases. Judges cannot confess ignorance and
so are compelled to assert beyond their evidence. Little wonder that
what comes out of their mouth is affected by irrelevancies such as
what went in their mouths for lunch. This susceptibility to bias by
irrelevant factors (weather, order effects, anchoring) could be
eliminated by methodical use of a lottery. The lottery could be
weighted to reflect the fact that borderline crimes vary in how close they
come to being clear crimes.
Hrafn Asgeirsson (2020, chapter 3) admits that the descriptive
question 'Is this drunk driving?' cannot be more reliably
answered by a judge than anyone else when it is a borderline case. But
he thinks the normative question, 'Should this be counted as
drunk driving?' could be better answered by those with superior
knowledge of legislative intent.
## 6. Subvaluationism
Whereas the supervaluationist analyzes borderline cases in terms of
truth-value gaps, the dialetheist analyzes them in terms of truth-value
gluts. A glut is a proposition that is both true and false. The rule
for assigning gluts is the mirror image of the rule for assigning
gaps: A statement is true exactly if it comes out true on at least one
precisification. The statement is false just if it comes out false on
at least one precisification. So if the statement comes out true under
one precisification and false under another precisification, the
statement is both true and false.
To avoid triviality, the dialetheist must adopt a paraconsistent logic
that stops two contradictory statements from jointly implying
everything. The resulting "subvaluationism" is a dual of
supervaluationism.
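The duality can be made vivid with a sketch parallel to the supervaluational one (again my illustration; thresholds and the height 177 cm are hypothetical): truth (falsity) under at least one precisification now suffices for truth (falsity).

```python
thresholds = [170, 175, 180, 185]  # hypothetical admissible sharpenings, in cm
stoop = 177                        # a borderline case of 'tall'

def tall_under(t):
    return stoop >= t

# Subvaluational rules: existential rather than universal quantification
# over precisifications.
subtrue = any(tall_under(t) for t in thresholds)       # true on some sharpening
subfalse = any(not tall_under(t) for t in thresholds)  # false on some sharpening
print(subtrue and subfalse)  # True -> a glut: both true and false
```

Replacing `all` with `any` is the whole formal difference: the same borderline case that fell into a gap for the supervaluationist now yields a glut.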
The spiritual father of subvaluationism is Georg Hegel. For Hegel, the
basic kind of vagueness is conflict vagueness. The shopper in between
the wings of a revolving doorway is both inside the building and
outside the building. Degree vagueness is just a special case of the
conflict inherent in becoming. Any process requires a gradual
manifestation of a contradiction inherent in the original state. At
some stage, a metamorphosing caterpillar is not a butterfly (by
virtue of what it was) and a butterfly (by virtue of what it will be).
Hegelians believed this dialectical conception of vagueness solved the
sorites and demonstrated the inadequacy of classical logic (Weber
2010). The Russian Marxist Georgi Plekhanov (1908 [1937]) proposed a
logic of contradiction to succeed classical logic (Hyde 2008,
93-5). One of his students, Henry Mehlberg (1958), went on to
substitute gaps for gluts. The first version of supervaluationism is
thus a synthesis, reconciling the thesis of classical logic with the
anti-thesis posed by the logic of contradiction.
From a formal point of view, there seems no more reason to prefer one
departure from classical logic rather than the other (Cobreros and
Tranchini 2019). Since Western philosophers abominate contradiction,
parity with dialetheism would diminish the great popularity of
supervaluationism.
A Machiavellian epistemicist will welcome this battle between the gaps
and the gluts. He roots for the weaker side. Although he does not want
the subvaluationist to win, the Machiavellian epistemicist does want
the subvaluationist to achieve mutual annihilation with his
supervaluationist doppelganger. His political calculation is:
Gaps + Gluts = Bivalence.
Pablo Cobreros (2011) has argued that subvaluationism provides a
better treatment of higher-order vagueness than supervaluationism. But
for the most part, the subvaluationists (and their frenemies) have
merely claimed subvaluationism to be at least as attractive as
supervaluationism (Hyde and Colyvan 2008). This modest ambition seems
prudent. After all, truth-value gaps have far more independent support
from the history of philosophy (at least Western philosophy). Prior to
the explosive growth of vagueness research after 1975, ordinary
language philosophers amassed a panoramic battery of analyses
suggesting that gaps are involved in presupposition, reference
failure, fiction, future contingent propositions, performatives, and
so on and so on. Supervaluationism rigorously consolidated these
appeals to ordinary language.
Dialetheists characterize intolerance for contradiction as a shallow
phenomenon, restricted to a twentieth-century Western academic milieu
(maybe even now being eclipsed by the rise of China). Experimental
philosophers have challenged the old appeals to ordinary language with
empirical results suggesting that glutty talk is as readily stimulated
by borderline cases as gappy talk (Alxatib and Pelletier 2011, Ripley
2011).
## 7. Contextualism
Just as contextualism in epistemology runs orthogonal to the familiar
divisions amongst epistemologists (foundationalism, reliabilism,
coherentism, etc.), there are contextualists of every persuasion
amongst vagueness theorists. They develop an analogy between the
sorites paradox and indexical sophistries such as:
* Base step: The horizon is more than 1 meter away.
* Induction step: If the horizon is more than \(n\) meters away,
then it is more than \(n + 1\) meters away.
* Conclusion: The horizon is more than a billion meters away.
The horizon is where the earth meets the sky and is certainly less
than a billion meters away. (The circumference of the earth is only
forty million meters.) Yet when you travel toward the horizon to
specify the \(n\) at which the induction step fails, your trip is as
futile as the pursuit of the rainbow. You cannot reach the horizon
because it shifts with your location.
All contextualists accuse the sorites monger of equivocating. In one
sense, the meaning of 'child' is uniform; the
context-invariant rule for using the term (its
"character") is constant. However, the set of things to
which the term applies (its "content") shifts with the
context. In this respect, vague words resemble indexical terms such
as: 'I', 'you', 'here',
'now', 'today', 'tomorrow'. When a
debtor tells his creditor on Monday 'I will pay you back
tomorrow' and then repeats the sentence on Tuesday, there is a
sense in which he has said the same thing (the character is the same)
and a sense in which he has said something different (the content has
shifted because 'tomorrow' now picks out Wednesday).
According to the contextualists, the rules governing the shifts
prohibit us from interpreting any instance of the induction step as
having a true antecedent and a false consequent. The very process of
trying to refute the induction step changes the context so that the
instance will not come out false. Indeed, contextualists typically
emphasize that each instance is true. Assent is mandatory.
Consequently, direct attacks on the induction step must fail. One is
put in mind of Seneca's admonition to his student Nero:
"However many you put to death, you will never kill your
successor."
Hans Kamp (1981), the founder of contextualism, maintained that the
extension of vague words orbits the speaker's store of
conversational commitments. He notes that the commitment pressures the
speaker to adjust past judgments to current judgments. This backward
smoothing undermines the search for a sharp transition. The diagnostic
potential of this smoothing was first recognized by Diana Raffman
(1994). Switching categories (e.g., from blue to green) between two
neighboring, highly similar items in a sorites series, without
installing a boundary, is enabled by a gestalt switch that occurs
between the two judgments. Switching categories comes at a cost,
however; so once the classifier has switched to a new category, she
will stay in the new category as long as she can. In particular, if
she now reverses direction and goes back down the series, she will
continue to classify as green some items she previously classified as
blue. This pattern of judgments, hysteresis, has the effect of
smoothing out the category shift so that the seeming continuity of the
series is preserved. Instead of merely inviting psychologists to
verify her predictions about hysteresis effects in sorites series,
Raffman (2014, 146-156) collaborates with two psychologists to
run an experiment that confirms them.
Stewart Shapiro integrates Kamp's ideas with Friedrich
Waismann's concept of open texture. Shapiro thinks speakers have
discretion over borderline cases because they are judgment dependent.
They come out true in virtue of the speaker judging them to be true.
Given that the audience does not resist, borderline cases of
'child' can be correctly described as children. The
audience recognizes that other competent speakers could describe the
borderline case differently. As Waismann lyricizes, "Every
description stretches, as it were, into a horizon of open
possibilities: However far I go, I shall always carry this horizon
with me" (1945, 124).
American pragmatism colors Delia Graff Fara's contextualism.
Consider dandelion farms. Why would someone grow weeds? The answer is
that 'weed' is relative to interests. Dandelions are
unwanted by lawn caretakers but are desired by farmers for food and
wine. Fara thinks this interest relativity extends to all vague words.
For instance, 'child' means a degree of immaturity that is
significant to the speaker. Since the interests of the speaker shift,
there is an opportunity for a shift in the extension of
'child'. Fara is reluctant to describe herself as a
contextualist because the context only has an indirect effect on the
extension via the changes it makes to the speaker's
interest.
How strictly are we to take the comparison between vague words and
indexical terms? Scott Soames (2002, 445) answers that all vague words
literally are indexical.
This straightforward response is open to the objection that the
sorites monger could stabilize reference. When the sorites monger
relativizes 'horizon' to the northeast corner of the
Empire State Building's observation deck, he seems to generate a
genuine sorites paradox that exploits the vagueness of
'horizon' (not its indexicality).
All natural languages have stabilizing pronouns, ellipsis, and other
anaphoric devices. For instance, in 'Jack is tired now and Jill
is too', the 'too' forces a uniform reading of
'tired'. Jason Stanley suggests that the sorites monger
employ the premise:
>
> If that\(\_1\) is a child then that\(\_2\) is too, and if
> that\(\_2\) is too, then that\(\_3\) is too, and if
> that\(\_3\) is too, then that\(\_4\) is too, ... and
> then that\(\_i\) is too.
>
Each 'that\(\_n\)' refers to the \(n\)th element in a
sequence of worsening examples of 'child'. The meaning of
'child' is not shifting because the first occurrence of
the term governs all the subsequent clauses (thanks to
'too'). If vague terms were literally indexical, the
sorites monger would have a strong reply. If vague terms only resemble
indexicals, then the contextualist needs to develop the analogy in a
way that circumvents Stanley's counsel to the sorites
monger.
The contextualist would also need to address a second technique for
stabilizing the context. R. M. Sainsbury (2013) advises the sorites
monger to present his premises in apparently random order. No pair of
successive cases raises an alarm that similar cases are being treated
differently. Unless the hearer has extraordinary memory, he will not
feel pressured to adjust the context.
The contextualist must find enough shiftiness to block every sorites
argument. Since vagueness seeps into every syntactic category, critics
complain that contextualism exceeds the level of ambiguity
countenanced by linguists and psycholinguists.
Another concern is that some sorites arguments involve predicates that
do not give us an opportunity to equivocate. Consider a sorites with a
base step that starts from a number too large for us to think about.
Or consider an inductive predicate that is too complex for us to
reason with. One example is obtained by iterating 'mother
of' a thousand times (Sorensen 2001, 33). This predicate could
be embedded in a mind-numbing sorites that would never generate
context shifts.
Other unthinkable sorites arguments use predicates that can only be
grasped by individuals in other possible worlds or by creatures with
different types of minds than ours. More fancifully, there could be a
vague predicate, such as Saul Kripke's "killer
yellow", that instantly kills anyone who wields it. The basic
problem is that contextualism is a psychologistic theory of the
sorites. Since arguments can exist without being propounded, they
float free of attempts to moor them to arguers.
## 8. Global Indeterminacy without Local Indeterminacy?
Contextualists base their holism about vagueness on top-down
psychology. Holism can also be logical. We encountered this in the
supervaluationist's rejection of truth-functionality. Instead of
calculating the truth-values of all compound statements bottom-up from
the truth-values of their component statements, Kit Fine (1975)
assigned truth-values top-down by counting a statement as true if it
comes out true under all admissible precisifications of the entire
language.
Fine's (2020) new holism parachutes in from R. M.
Sainsbury's (1996) characterization of vagueness in terms of
boundarylessness. A precise predicate such as
'thirty-something' organizes cases into pigeonholes.
Positive cases commence at thirty and cease at thirty-nine. But
paradigms and foils of 'thirty-ish' operate as poles of a
magnet. Candidates for 'thirty-ish' resemble iron filings.
This force field prevents completion of a forced-march sorites
argument (in which we are marched from a definite instance of a
predicate to a definite non-instance). Judgment must be suspended
somewhere in the series. But it does not follow that there is any
specific case such that one must suspend judgment about it.
Borderline status has the mobility of a bubble in a level. The
precisificationist's mistake is to remove the bubble.
Peirce's mistake is to freeze the bubble into *intrinsic*
uncertainty. This is the dual of the error Peirce attributes to
Rene Descartes' method of universal doubt. Doubting
everything is impossible because doubt only gets leverage against a
fulcrum of presuppositions. (Ludwig Wittgenstein's analogy is
with the hinges of a door that hold fast in order that the door of
doubt can open and close.) These presuppositions are indubitable in
the sense that they cannot be challenged *within* the inquiry.
When the skeptic claims to doubt everything, he winds up doubting
nothing. The skeptic's sham doubts are belied by the absence of
any interruption of his habits. A genuine doubt triggers inquiry.
Inquiry requires presupposition. These background beliefs must be
taken for granted. But the necessary existence of certainty does not
entail any intrinsic certitudes. What must be presupposed for one
inquiry need not be presupposed for another. By Fine's
reckoning, Peirce should have issued a parallel rejection of intrinsic
uncertitudes. Within any inquiry, there must be some suspension of
judgment. But it does not follow that any proposition must resist all
inquiry.
Consider a literary historiographer who, weary of debate about the
identity of the first book, demonstrates that 'book' is
vague by assembling a slippery slope ranging from early non-books to
paradigm books. According to Peirce, the demonstration succeeds only
if the spectrum contains some intrinsically doubtful cases. They are
what makes 'book' vague. Fine's negative thesis is
that this reductionist explanation relies on an impossible explanans. A
booklet can no more be an intrinsically borderline book than it can be
intrinsically misshelved. Fine's positive thesis is that
slippery slopes suffice to demonstrate the vagueness of
'book'. There are borderline books only in the way there
are heavy books. In contrast to 'mass',
'heavy' requires relativization to a gravitational field.
'Borderline' requires relativization to a range of
cases.
Many-valued logicians have long contended that local indeterminacy is
incompatible with the law of excluded middle. Fine's (1975)
supervaluationism was designed to reconcile the two. But now Fine
concedes the incompatibility. Unlike the many-valued logician, Fine
blames all the inconsistency on local indeterminacy. Indeed, Fine
continues to think the law of excluded middle is undeniable (which is
not to say it must be affirmed). What Fine does deny is
*conjunctive* excluded middle: \((B \vee \neg B) \wedge (C \vee
\neg C) \wedge (D \vee \neg D) \wedge \ldots\). For this principle
implies that there is a sharp cut-off in a forced-march sorites
paradox.
Since conjunctive excluded middle is a theorem of classical logic,
Fine proposes a compatibility semantics that bears a family
resemblance to intuitionism. Fine reconstructs a pedigree by modifying
Saul Kripke's semantics for intuitionistic logic. Timothy
Williamson (2020) objects that the resulting logic condemns even more
innocent inferences than intuitionism. Others defend local
indeterminacy by saying that there are paradigm cases of intrinsically
uncertain propositions such as those opening section one. Paradigms
such as the decapitation riddle concern qualitative vagueness rather
than the quantitative vagueness that generates a forced-march sorites
argument. Kit Fine is at work on a future book that may address the
worry that his account is incomplete or disjunctive.
## 9. Is All Vagueness Linguistic?
Supervaluationists emphasize the distinction between words and
objects. Objects themselves do not seem to be the sort of thing that
can be general, ambiguous, or vague (Eklund 2011). From this
perspective, Georg Hegel commits a category mistake when he
characterizes clouds as vague. Although we sometimes speak of clouds
being ambiguous or even being general to a region, this does not
entitle us to infer that there is metaphysical ambiguity or
metaphysical generality.
Supervaluationists are seconding a fallacy attribution dating back to
Bertrand Russell's seminal article "Vagueness"
(1923). This consensus was re-affirmed by Michael Dummett (1975) and
ritualistically re-avowed by subsequent commentators.
In 1978 Gareth Evans focused opposition to vague objects with a short
proof modeled after Saul Kripke's attack on contingent identity.
If there is a vague object, then some statement of the form '\(a
= b\)' must be vague (where each of the flanking singular terms
precisely designates that object). For the vagueness is allegedly due
to the object rather than its representation. But any statement of
form '\(a = a\)' is definitely true. Consequently, \(a\)
has the property of being definitely identical to \(a\). Since \(a =
b\), then \(b\) must also have the property of being definitely
identical to \(a\). Therefore '\(a = b\)' must be
definitely true!
Evans agrees that there are vague identity statements in which one of
the flanking terms is vague (just as Kripke agrees that there are
contingent identity statements when one of the flanking terms is a
flaccid designator). But then the vagueness is due to language, not
the world.
Despite Evans' impressive assault, there was a renewal of
interest in vague objects in the 1980s. As a precedent for this
revival, Peter van Inwagen (1990, 283) recalls that in the 1960s,
there was a consensus that all necessity is linguistic. Most
philosophers now take the possibility of essential properties
seriously.
Some of the reasons are technical. Problems with Kripke's
refutation of contingent identity have structural parallels that
affect Evans' proof. Evans also relies on inferences that
deviant logicians challenge (Parsons 2000).
In the absence of a decisive *reductio ad absurdum*, many
logicians feel their role to be the liberal one of articulating the
logical space for vague objects. There should be 'vague objects
for those who want them' (Cowles and White 1991). Logic should
be ontologically neutral.
Since epistemicists try to solve the sorites with little more than a
resolute application of classical logic, they are methodologically
committed to a partisan role for logic. Instead of looking for
loopholes, we should accept the consequence (Williamson 2015).
Some non-enemies of vague objects also have an ambition to consolidate
various species of indeterminacy (Barnes and Williams 2011). Talk of
indeterminacy is found in quantum mechanics, analyses of the open
future, fictional incompleteness, and the continuum hypothesis.
Perhaps vagueness is just one face of indeterminacy.
This panoramic vision contrasts with the continuing resolution of many
to tether vagueness to the sorites paradox (Eklund 2011). They fear
that the clarity achieved by semantic ascent will be muddied by
metaphysics.
But maybe the mud is already on the mountain top. Trenton Merricks
(2001) claims that standard characterizations of linguistic vagueness
rely on metaphysical vagueness. If 'Donald is bald' lacks
a truth-value because there is no fact to make the statement true,
then the shortage appears to be ontological.
The view that vagueness is always linguistic has been attacked from
other directions. Consider the vagueness of maps (Varzi 2001). The
vagueness is pictorial rather than discursive. So one cannot conclude
that vagueness is linguistic merely from the premise that vagueness is
representational.
Or consider vague instrumental music such as Claude Debussy's
"The Clouds". Music has syntax but too little semantics to
qualify as language. There is a little diffuse reference through
devices such as musical quotation, leitmotifs, and homages. These
referential devices are not precise. Therefore, some music is vague
(Sorensen 2010). The strength and significance of this argument
depends on the relationship between music and language. Under the
musilanguage hypothesis, language and music branched off from a common
"musilanguage" with language specializing in semantics and
music specializing in the expression of emotion. This scenario makes
it plausible that purely instrumental music could have remnants of
semantic meaning.
Mental imagery also seems vague. When rising suddenly after a
prolonged crouch, I 'see stars before my eyes'. I can tell
there are more than ten of these hallucinated lights but I cannot tell
how many. Is this indeterminacy in thought to be reduced to
indeterminacy in language? Why not vice versa? Language is an
outgrowth of human psychology. Thus it seems natural to view language
as merely an accessible intermediate bearer of vagueness.
## 1. Introduction
In his (1980), Peter Unger introduced the "Problem of the
Many". A similar problem appeared simultaneously in P. T. Geach
(1980), but Unger's presentation has been the most influential
over recent years. The problem initially looks like a special kind of
puzzle about
vague predicates,
but that may be misleading. Some of the standard solutions to
Sorites paradoxes
do not obviously help here, so perhaps the Problem reveals some
deeper truths involving the metaphysics of material constitution, or
the logic of statements involving identity.
The puzzle arises as soon as there is an object without clearly
demarcated borders. Unger suggested that clouds are paradigms of this
phenomenon, and recent authors such as David Lewis (1993) and Neil
McKinnon (2002) have followed him here. Here is Lewis's
presentation of the puzzle:
>
> Think of a cloud--just one cloud, and around it a clear blue sky.
> Seen from the ground, the cloud may seem to have a sharp boundary. Not
> so. The cloud is a swarm of water droplets. At the outskirts of the
> cloud, the density of the droplets falls off. Eventually they are so
> few and far between that we may hesitate to say that the outlying
> droplets are still part of the cloud at all; perhaps we might better
> say only that they are near the cloud. But the transition is gradual.
> Many surfaces are equally good candidates to be the boundary of the
> cloud. Therefore many aggregates of droplets, some more inclusive and
> some less inclusive (and some inclusive in different ways than
> others), are equally good candidates to be the cloud. Since they have
> equal claim, how can we say that the cloud is one of these aggregates
> rather than another? But if all of them count as clouds, then we have
> many clouds rather than one. And if none of them count, each one being
> ruled out because of the competition from the others, then we have no
> cloud. How is it, then, that we have just one cloud? And yet we do.
> (Lewis 1993: 164)
>
The paradox arises because in the story as told the following eight
claims each seem to be true, but they are mutually inconsistent.
0. There are several distinct sets of water droplets
*sk* such that for each such set, it is not clear
whether the water droplets in *sk* form a
cloud.
1. There is a cloud in the sky.
2. There is at most one cloud in the sky.
3. For each set *sk*, there is an object
*ok* that the water droplets in
*sk* compose.
4. If the water droplets in *si* compose
*oi*, and the water droplets in *sj*
compose *oj*, and the sets *si*
and *sj* are not identical, then the objects
*oi* and *oj* are not
identical.
5. If *oi* is a cloud in the sky, and
*oj* is a cloud in the sky, and
*oi* is not identical with *oj*,
then there are at least two clouds in the sky.
6. If any of these sets *si* is such that its
members compose a cloud, then for any other set
*sj*, if its members compose an object
*oj*, then *oj* is a cloud.
7. Any cloud is composed of a set of water droplets.
To see the inconsistency, note that by 1 and 7 there is a cloud
composed of water droplets. Say this cloud is composed of the water
droplets in *si*, and let *sj* be
any other set whose members might, for all we can tell, form a cloud.
(Premise 0 guarantees the existence of such a set.) By 3, the water
droplets in *sj* compose an object
*oj*. By 4, *oj* is not identical
to our original cloud. By 6, *oj* is a cloud, and
since it is transparently in the sky, it is a cloud in the sky. By 5,
there are at least two clouds in the sky. But this is inconsistent
with 2. A solution to the paradox must provide a reason for rejecting
one of the premises, or a reason to reject the reasoning that led us
to the contradiction, or the means to live with the contradiction.
Since none of the motivations for believing in the existence of
dialetheia
apply here, let us ignore the last possibility. And since 0 follows
directly from the way the story is told, let us ignore that option as
well. That leaves open eight possibilities.
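The derivation can also be mirrored mechanically in a toy model. The following sketch (Python; the droplet inventory and the representation of composed objects as droplet-sets are simplifying assumptions of the sketch, not part of the puzzle) grants premises 0, 1, and 3 through 7, and checks that premise 2 cannot survive:

```python
# Toy model of Lewis's cloud. Objects composed of droplets are modeled
# simply as droplet-sets; the droplet names are invented.

core = frozenset(range(100))                 # droplets clearly in the cloud
borderline_choices = [set(), {"d1"}, {"d2"}, {"d1", "d2"}]

# Premise 0: several distinct candidate droplet-sets.
candidates = [frozenset(core | extra) for extra in borderline_choices]
assert len(set(candidates)) > 1

# Premises 3 and 4: each set composes an object, and distinct sets
# compose distinct objects.
objects = set(candidates)

# Premises 1, 6, and 7: some candidate composes a cloud, so every
# composed object is a cloud.
clouds = objects

# Premise 5: distinct clouds in the sky give at least two clouds,
# which contradicts premise 2 ("at most one cloud in the sky").
assert len(clouds) >= 2
assert not (len(clouds) <= 1)
```

The model is of course trivial; its only point is that nothing in the derivation from 0–7 to the contradiction depends on anything beyond elementary reasoning about sets and counting.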
(The classification of the solutions here is *slightly*
different from that in Chapter One of Hud Hudson's *A
Materialist Metaphysics of the Human Person*. But it has a deep
debt to Hudson's presentation of the range of solutions, which
should be clear from the discussion that follows.)
## 2. Nihilism
Unger's original solution was to reject 1. The concept of a
cloud involved, he thought, inconsistent presuppositions. Since those
presuppositions were not satisfied, there are no clouds. This is a
rather radical move, since it applies not just to clouds, but to any
kind of sortal for which a similar problem can be generated. And,
Unger pointed out, this includes most sortals. As Lewis puts it,
"Think of a rusty nail, and the gradual transition from steel
... to rust merely resting on the nail. Or think of a cathode,
and its departing electrons. Or think of anything that undergoes
evaporation or erosion or abrasion. Or think of yourself, or any
organism, with parts that gradually come loose in metabolism or
excretion or perspiration or shedding of dead skin" (Lewis 1993:
165).
Despite Lewis's presentation, the Problem of the Many is not a
problem about change. The salient feature of these examples is that,
in practice, change is a slow process. Hence whenever a cathode, or a
human, is changing, be it by shedding electrons, or shedding skin,
there are some things that are not clearly part of the object, nor
clearly not part of it. Hence there are distinct sets that each have a
good claim to being the set of parts of the cathode, or of the human,
and that is what is important.
It would be profoundly counterintuitive if there were no clouds, or no
cathodes, or no humans, and that is probably enough to reject the
position, if any of the alternatives are not also equally
counterintuitive. It also, as Unger noted, creates difficulties for
many views about singular thought and talk. Intuitively, we can pick
out one of the objects composed of water droplets by the phrase
'that cloud'. But if it is not a cloud, then possibly we
cannot. For similar reasons, we may not be able to name any such
object, if we use any kind of reference-fixing description involving
'cloud' to pick it out from other objects composed of
water droplets. If the Problem of the Many applies to humans as well
as clouds, then by similar reasoning we cannot name or demonstrate any
human, or, if you think there are no humans, any human-like object.
Unger was happy to take these results to be philosophical discoveries,
but they are so counterintuitive that most theorists hold that they
form a reductio of his theory. Bradley Rettler (2018) argues that the
nihilist has even more problems than this. Nihilism solves some
philosophical problems, such as explaining which of 0-7 is false. But,
he argues, for any problem it solves, there is a parallel problem
which it does not solve, but rival solutions do solve. For instance,
if you think of the problem here as a version of a
Sorites paradox,
nihilism does not help with versions of the paradox which concern
predicates applied to simples.
It is interesting that some other theories of vagueness have adopted
positions resembling Unger's in some respects, but without the
extreme conclusions. Matti Eklund (2002) and Roy Sorensen (2001) have
argued that all vague concepts involve inconsistent presuppositions.
Sorensen spells this out by saying that there are some inconsistent
propositions that anyone who possesses a vague concept should believe.
In the case of a vague predicate *F* that is vulnerable to a
Sorites paradox, the inconsistent propositions are that some things
are *F*s, some things are not *F*s, any object that
closely resembles (in a suitable respect) something that is *F*
is itself *F*, and that there are chains of 'suitably
resembling' objects between an *F* and a non-*F*.
Here the inconsistent propositions are that a story like Lewis's
is possible, and in it 0 through 7 are true. Neither Eklund nor
Sorensen conclude from this that nothing satisfies the predicates in
question; rather they conclude that some propositions that we find
compelling merely in virtue of possessing the concepts from which they
are constituted are false. So while they don't adopt
Unger's nihilist conclusions, two contemporary theorists agree
with him that vague concepts are in some sense incoherent.
## 3. Overpopulation
A simple solution to the puzzle is to reject premise 2. Each of the
relevant fusions of water droplets looks and acts like a cloud, so it
is a cloud. As with the first option this leads to some very
counterintuitive results. In any room with at least one person, there
are many millions of people. But this is not as bad as saying that
there are no people. And perhaps we don't even have
to *say* the striking claim. In many circumstances, we quantify
over a restricted domain. We can say, "There's no
beer," even when there is beer in some non-salient locales. With
respect to some restricted quantifier domains, it is true that there
is exactly one person in a particular room. The surprising result is
that with respect to other quantifier domains, there are many millions
of people in that room. The defender of the overpopulation theory will
hold that this shows how unusual it is to use unrestricted
quantifiers, not that there really is only one person in the room.
The overpopulation solution is not popular, but it is not without
defenders. J. Robert G. Williams (2006) endorses it, largely because
of a tension between the supervaluationist solution (that will be
discussed in section 7) and what supervaluationism says about the
Sorites paradox. James Openshaw (2021) and Alexander Sandgren
(forthcoming) argue that the overpopulation solution is true, and each
offers a theory of how singular thought about the cloud is possible
given overpopulation. Sandgren also points out that there might be
multiple sources of overpopulation. Even given a particular set of
water droplets, some metaphysical theories will say that there are
multiple objects those droplets compose, which differ in their
temporal or modal properties.
Hudson (2001: 39-44) draws out a surprising consequence of the
overpopulation solution as applied to people. Assume that there are
really millions of people just where we'd normally say there was
exactly one. Call that person Charlie. When Charlie raises her arm,
each of the millions must also raise their arms, for the millions
differ only in whether or not they contain some borderline skin cells,
not in whether their arm is raised or lowered. Normally, if two people
are such that whenever one acts a certain way, then so must the other,
we would say that at most one of them is acting freely. So it looks
like at most one of the millions of people around Charlie is free.
There are a few possible responses here, though whether a defender of
the overpopulation view will view this consequence as being more
counter-intuitive than other claims to which she is already committed,
and hence whether it needs a special response, is not clear. There are
some other striking, though not always philosophically relevant,
features of this solution. To quote Hudson:
>
> Among the most troublesome are worries about naming and singular
> reference ... how can any of us ever hope to successfully refer
> to himself without referring to his brothers as well? Or how might we
> have a little private time to tell just one of our sons of our
> affection for him without sharing the moment with uncountably many of
> his brothers? Or how might we follow through on our vow to practice
> monogamy? (Hudson 2001: 39)
>
## 4. Brutalism
As Unger originally states it, the puzzle relies on a contentious
principle of mereology. In particular, it assumes mereological
Universalism, the view that for any objects, there is an object that
is their fusion. (That is, it has each of those objects
as parts, and has no parts that do not overlap at least one of the
original objects.) Without this assumption, the Problem of the Many
may have an easy solution. The cloud in the sky is *the* object
up there that is a fusion of water droplets. There are many other sets
of water droplets, other than the set of water droplets that compose
the cloud, but since the members of those sets do not compose an
object, they do not compose a cloud.
There are two kinds of theories that imply that only one of the sets
of water droplets is such that there exists a fusion of its members.
First, there are *principled* restrictions on composition,
theories that say that the *x*s compose an object *y*
iff the *x*s are *F*, for some natural property
*F*. Secondly, there are *brutal* theories, which say
it's just a brute fact that in some cases the *x*s
compose an object, and in others they do not. It would be quite hard
to imagine a principled theory solving the Problem of the Many, since
it is hard to see what the principle could be. (For a more detailed
argument for this, set against a somewhat different backdrop, see
McKinnon 2002.) But a brutal theory could work. And such a theory has
been defended. Ned Markosian (1998) argues that not only does
brutalism, the doctrine that there are brute facts about when the
*x*s compose a *y*, solve the Problem of the Many, but the
account of composition it implies also fits more naturally with our
intuitions about composition.
It seems objectionable, in some not easy-to-pin-down way, to rely on
brute facts in just this way. Here is how Terence Horgan puts the
objection:
>
> In particular, a good metaphysical theory or scientific theory should
> avoid positing a plethora of quite specific, disconnected, *sui
> generis*, compositional facts. Such facts would be ontological
> danglers; they would be metaphysically queer. Even though explanation
> presumably must bottom out somewhere, it is just not credible--or
> even intelligible--that it should bottom out with specific
> compositional facts which themselves are utterly unexplainable and
> which do not conform to any systematic general principles. (Horgan
> 1993: 694-5)
>
On the other hand, this kind of view does provide a particularly
straightforward solution to the Problem of the Many. As Markosian
notes, if we have independent reason to view favourably the idea that
facts about when some things compose an object are brute facts, which
he thinks is provided by our intuitions about cases of composition and
non-composition, the very simplicity of this solution to the Problem
of the Many may count as an argument in favour of brutalism.
## 5. Relative Identity
Assume that the brutalist is wrong, and that for every set of water
droplets, there is an object those water droplets compose. Since that
object looks for all the world like a cloud, we will say it is a
cloud. The fourth solution accepts those claims, but denies that there
are many clouds. It is true that there are many fusions of atoms, but
these are all the same *cloud*. This view adopts a position
most commonly associated with P. T. Geach (1980), that two things can
be the same *F* but not the same *G*, even though they
are both *G*s. To see the motivation for that position, and a
discussion of its strengths and weaknesses, see the article on
relative identity.
Here is one objection that many have felt is telling against the
relative identity view: Let *w* be a water droplet that is
in *s*1 but not *s*2. The relative
identity solution says that the droplets in
*s*1 compose an object *o*1, and
the droplets in *s*2 compose an object
*o*2, and though *o*1 and
*o*2 are different fusions of water droplets, they
are the same cloud. Call this cloud *c*. If
*o*1 is the same cloud as *o*2,
then presumably they have the same properties. But
*o*1 has the property of having *w* as a
part, while *o*2 does not. Defenders of the relative
identity theory here deny the principle that if two objects are the
same *F*, they have the same properties. Many theorists find
this denial to amount to a *reductio* of the view.
## 6. Partial Identity
Even if *o*1 and *o*2 exist, and
are clouds, and are not the same cloud, it does not immediately follow
that there are two clouds. If we analyze "There are two
clouds" as "There is an *x* and a *y* such
that *x* is a cloud and *y* is a cloud, and *x*
is not the same cloud as *y*" then the conclusion will
naturally follow. But perhaps that is not the correct analysis of
"There are two clouds." Or, more cautiously, perhaps it is
not the correct analysis in all contexts. Following some suggestions
of D. M. Armstrong's (Armstrong 1978, vol. 2: 37-8), David
Lewis suggests a solution along these lines. The objects
*o*1 and *o*2 are not the same
cloud, but they are *almost* the same cloud. And in everyday
circumstances (AT) is a good-enough analysis of "There is one
cloud":

(AT)
There is a cloud, and all clouds are *almost* identical
with it.
As Lewis puts it, we 'count by almost-identity' rather
than by identity in everyday contexts. And when we do, we get the
correct result that there is one cloud in the sky. Lewis notes that
there are other contexts in which we count by some criteria other than
identity.
>
> If an infirm man wishes to know how many roads he must cross to reach
> his destination, I will count by identity-along-his-path rather than
> by identity. By crossing the Chester A. Arthur Parkway and Route 137
> at the brief stretch where they have merged, he can cross both by
> crossing only one road. (Lewis 1976: 27)
>
There are two major objections to this theory. First, as Hudson notes,
even if we normally count by almost-identity, we know how to count by
identity, and when we do it seems that there is one cloud in the sky,
not many millions. A defender of Lewis's position may say that
the only reason this seems intuitive is that it is normally intuitive
to say that there is only one cloud in the sky. And that intuition is
respected! More contentiously, it may be argued that it is a
*good* thing to predict that when we count by identity we get
the result that there are millions of clouds. After all, the only time
we'd do this is when we're doing metaphysics, and we have
noted that in the metaphysics classroom, there is some intuitive force
to the argument that there are millions of clouds in the sky. It would
be a brave philosopher to endorse this as a *virtue* of the
theory, but it may offset some of the costs.
Secondly, something like the Problem of the Many can arise even when
the possible objects are not almost identical. Lewis notes this
objection, and provides an illustrative example to back it up. A
similar kind of example can be found in W. V. O. Quine's
*Word and Object* (1960). Lewis's example is of a house
with an attached garage. It is unclear whether the garage is part of
the house or an external attachment to it. So it is unclear whether
the phrase 'Fred's house' denotes the basic house,
call it the home, or the fusion of the home and the garage. What is
clear is that there is exactly one house here. However, the home might
be quite different from the fusion of the home and the garage. It will
probably be smaller and warmer, for example. So the home and the
home-garage fusion are not even almost identical. Quine's
example is of something that looks, at first, to be a mountain with
two peaks. On closer inspection we find that the peaks are not quite
as connected as first appeared, and perhaps they could be properly
construed as two separate mountains. What we could not say is that
there are *three* mountains here: the two peaks and their
fusion. But since neither peak is almost identical to the other, or to
the fusion, that is what Lewis's solution implies.
But perhaps it is wrong to understand almost-identity in this way.
Consider another example of Lewis's, one that Dan López de
Sa (2014) argues is central to a Lewisian solution to the problem.
>
> You draw two diagonals in a square; you ask me how many triangles; I
> say there are four; you deride me for ignoring the four large
> triangles and counting only the small ones. But the joke is on you.
> For I was within my rights as a speaker of ordinary language, and you
> couldn't see it because you insisted on counting by strict
> identity. I meant that, for some *w*, *x*, *y*,
> *z*, (1) *w*, *x*, *y*, and *z* are
> triangles; (2) *w* and *x* are distinct, and ... and so
> are *y* and *z* (six clauses); (3) for any triangle
> *t*, either *t* and *w* are not distinct, or ...
> or *t* and *z* are not distinct (four clauses). And by
> 'distinct' I meant non-overlap rather than non-identity,
> so what I said was true. (Lewis 1993, fn. 9)
>
One might think this is the general way to understand counting
sentences in ordinary language. So *There is exactly one F*
gets interpreted as *There is an F, and no F is wholly distinct
from it*; and *There are exactly two Fs* gets interpreted
as *There are wholly distinct things x and y that are both F, and
no F is wholly distinct from both of them*, and so on. Lewis
writes as if this is an explication of the almost-identity proposal,
but this is at best misleading. A house with solar panels partially
overlaps the city's electrical grid, but it would be very
strange to call them almost-identical. It sounds like a similar, but
distinct, proposed solution.
However we understand the proposal, López de Sa notes that it
has a number of virtues. It seems to account for the puzzle involving
the house, what he calls the Problem of the Two. If in general
counting involves distinctness, then we have a good sense in which
there is one cloud in the sky, and Fred owns one house.
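The non-overlap reading of counting sentences can be made concrete. In the following sketch (Python; representing candidates as sets of parts is an assumption of the sketch), "there is exactly one F" is rendered as: there is an F, and every F overlaps it:

```python
# Counting by non-overlap, following Lewis's gloss: "There is exactly
# one F" means there is an F, and no F is wholly distinct from it.
# The candidate sets below are invented for illustration.

def overlaps(x, y):
    return bool(x & y)

def exactly_one(candidates):
    """True iff some candidate overlaps every candidate."""
    return any(all(overlaps(x, y) for y in candidates) for x in candidates)

# Cloud candidates share a common core and differ only at the fringe.
core = frozenset(range(100))
clouds = [core | frozenset(extra) for extra in ([], [100], [101], [100, 101])]

len(set(clouds))        # 4 clouds, counting by strict identity
exactly_one(clouds)     # True: exactly one cloud, counting by non-overlap

# The four small triangles in Lewis's square are pairwise disjoint
# (ignoring shared edges here), so they still count as more than one.
triangles = [frozenset([i]) for i in range(4)]
exactly_one(triangles)  # False
```

The sketch makes vivid why the reading handles both cases at once: the cloud candidates all share a common core, so any one of them overlaps all the rest, while the small triangles share nothing and so are not collapsed into one.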
There still remain two challenges for this view. First, one could
still follow Hudson and argue that even if we ordinarily understand
counting sentences this way, we do still know how to count by
identity. And when we do, it seems that there is just one cloud, not
millions of them. Second, it isn't that clear that we always
count by distinctness, in the way López de Sa suggests. If I
say there are three ways to get from my house to my office, I
don't mean to say that these three are completely distinct.
Indeed, they probably all start with going out my door, down my
driveway etc., and end by walking up the stairs into my office. So the
general claim about how to understand counting sentences seems
false.
C. S. Sutton (2015) argued that we can get around something like the
second of these problems if we do two things. First, the rule
isn't that we don't normally quantify over things that
overlap, just that we don't normally quantify over things that
substantially overlap. We can look at a row of townhouses and say that
there are seven houses there even if the walls of adjacent houses overlap.
Second, the notion of overlap here is not sensitive to the quantity of
material that the objects have in common, but to their functional
role. If two objects play very different functional roles, she argues
that we will naturally count them as two, even if they have a lot of
material in common. This could account for a version of the getting to
work example where the three different ways of getting to work only
differ on how to get through one small stretch in the middle. That is,
if there are three (wholly distinct) ways to get from B to C, and the
way to get from A to D is to go A-B-C-D, then there is a good sense in
which there are three ways to get from A to D. Sutton's theory
explains how this could be true even if the B-C leg is a short part of
the trip.
David Liebesman (2020) argued for a different way of implementing this
kind of theory. He argues that the kinds of constraints that Lewis,
López de Sa and Sutton have suggested don't get
incorporated into our theory of counting, but into the proper
interpretation of the nouns involved in counting sentences. It helps
to understand Liebesman's idea with an example.
There is, we'll presumably all agree, just one colour in Yves
Klein's
*Blue Monochrome*.
(It says so in the title.) But *Blue Monochrome* has blue in
it, and it has ultramarine in it, and blue doesn't equal
ultramarine. What's gone on? Well, says Liebesman, whenever a
noun occurs under a determiner, it needs an interpretation. That
interpretation will almost never be maximal. When we ask how many
animals a person has in their house, we typically don't mean to
count the insects. When we say every book is on the shelf, we
don't mean every book in the universe. And typically, the
relevant interpretation of the determiner phrase (like 'every
book', or 'one cloud') will exclude overlapping
objects. Typically but not, says Liebesman, always. We can say, for
instance, that every shade of a colour is a colour, and in that
sentence 'colour' includes both blue and ultramarine.
This offers a new solution to the Problem of the Many. He argues that
all of 0 to 7 are true, but they are true for different
interpretations of phrases including the word 'cloud'.
When we interpret it in a way that rules out overlap, then 6 is false.
When we interpret it in the maximal way, like the way we interpret
'colour' in *Every shade of a colour is a colour*,
then 2 is false. But to get the inconsistency, we have to equivocate.
It's an easy equivocation to make, since each of the meanings is
one that we frequently use.
## 7. Vagueness
Step 6 in the initial setup of the problem says that if any of the
*oi* is a cloud, then they all are. There are three
important arguments for this premise, two of them presented explicitly
by Unger, and the other by Geach. Two of the arguments seem to be
faulty, and the third can be rejected if we adopt some familiar,
though by no means universally endorsed, theories of vagueness.
### 7.1 Argument from Duplication
The first argument, due essentially to Geach, runs as follows.
Geach's presentation did not involve clouds, but the principles
are clearly stated in his version of the argument. (Since the argument
shows that *ok* is a cloud for arbitrary *k*, we can
easily generalize to the claim that for every *i*,
*oi* is a cloud.)
D1.
If all the water droplets not in *sk* did not
exist, then *ok* would be a cloud.
D2.
Whether *ok* is a cloud does not depend on
whether things distinct from it exist.
C.
*ok* is a cloud.
D2 implies that *being a cloud* is an
intrinsic
property. The idea is that by changing the world outside the cloud,
we do not change whether or not it is a cloud. There is, however,
little reason to believe this is true. And given that it leads to a
rather implausible conclusion, that there are millions of clouds where
we think there is one, there is some reason to believe it is false. We
can argue directly for the same conclusion. Assume many more water
droplets coalesce around our original cloud. There is still one cloud
in the sky, but it determinately includes more water droplets than the
original cloud. The fusion of the original droplets still exists, and
we may assume that it did not change its intrinsic properties, but it
is now a *part* of a cloud, rather than a cloud. Even if
something looks like a cloud, smells like a cloud and rains like a
cloud, it need not be a cloud, it may only be a part of a cloud.
### 7.2 Argument from Similarity
Unger's primary argument takes a quite different tack.
S1.
For some *j*, *oj* is a typical
cloud.
S2.
Anything that differs minutely from a typical cloud is a
cloud.
S3.
*ok* differs minutely from
*oj*.
C.
*ok* is a cloud.
Since we only care about the conditional *if oj is a cloud, so is ok*,
it is clearly acceptable to
assume that *oj* is a cloud for the sake of the
argument. And S3 is guaranteed to be true by the setup of the problem.
The main issue then is whether S2 is true. As Hudson notes, there
appear to be some clear counterexamples to it. The fusion of a cloud
with one of the water droplets in my bathtub is clearly not a cloud,
but by most standards it differs minutely from a cloud, since there is
only one droplet of water difference between them.
### 7.3 Argument from Meaning
The final argument is not set out as clearly, but it has perhaps the
most persuasive force. Unger says that if exactly one of the
*oi* is a cloud, then there must be a
'selection principle' that picks it over the others. But
it is not clear just what kind of selection principle that could be.
The underlying argument seems to be something like this:
M1.
For some *j*, *oj* is a cloud.
M2.
If *oj* is a cloud and the rest of the
*oi* are not, then some principle selects
*oj* to be the unique cloud.
M3.
There is no principle that selects *oj* to be
the unique cloud.
C.
At least one of the other *oi* is a cloud.
The idea behind M2 is that word meanings are not brute facts about
reality. As Jerry Fodor put it, "if aboutness is real, it must
be really something else" (Fodor 1987: 97). Something makes it
the case that *oj* is the unique thing (around here)
that satisfies our term 'cloud'. Maybe that could be
because *oj* has some unique properties that make it
suitable to be in the denotation of ordinary terms. Or maybe it is
something about our linguistic practices. Or maybe it is some
combination of these things. But something must determine it, and
whatever it is, we can (in theory) say what that is, by giving some
kind of principled explanation of why *oj* is the
unique cloud.
It is at this point that theories of
vagueness
can play a role in the debate. Two of the leading theories of
vagueness, epistemicism and supervaluationism, provide principled
reasons to reject this argument. The epistemicist says that there are
semantic facts that are beyond our possible knowledge. Arguably we can
only know where a semantic boundary lies if that boundary was fixed by
our use or by the fact that one particular property is a natural kind.
But, say the epistemicists, there are many other boundaries that are
not like this, such as the boundary between the heaps and the
non-heaps. Here we have a similar kind of situation. It is vague just
which of the *oi* is a cloud. What that means is
that there is a fact about which of them is a cloud, but we cannot
possibly know it. The epistemicist is naturally read as rejecting the
very last step in the previous paragraph. Even if something (probably
our linguistic practices) makes it the case that
*oj* is the unique cloud, that need not be something
we can know and state.
The supervaluationist response is worth spending more time on here,
both because it engages directly with the intuitions behind this
argument and because two of its leading proponents (Vann McGee and
Brian McLaughlin, in their 2001) have responded directly to this
argument using the supervaluationist framework. Roughly (and for more
detail see the section on supervaluations in the entry on
vagueness)
supervaluationists say that whenever some terms are vague, there are
ways of making them more precise consistent with our intuitions on how
the terms behave. So, to use a classic case, 'heap' is
vague, which to the supervaluationist means that there are some piles
of sand that are neither determinately heaps nor determinately
non-heaps, and a sentence saying that that object is a heap is neither
determinately true nor determinately false. However, there are many
ways to *extend* the meaning of 'heap' so it
becomes precise. Each of these ways of making it precise is called a
*precisification*. A precisification is *admissible* iff
every sentence that is determinately true (false) in English is true
(false) in the precisification. So if *a* is determinately a
heap, *b* is determinately not a heap and *c* is neither
determinately a heap nor determinately not a heap, then every
precisification must make '*a* is a heap' true and
'*b* is a heap' false, but some make
'*c* is a heap' true and others make it false. To a
first approximation, to be admissible a precisification must assign
all the determinate heaps to the extension of 'heap' and
assign none of the determinate non-heaps to its extension, but it is
free to assign or not assign things in the 'penumbra'
between these groups to the extension of 'heap'. But this
is not quite right. If *d* is a little larger than *c*,
but still not determinately a heap, then the sentence "If
*c* is a heap so is *d*" is intuitively true. As
it is often put, following Kit Fine (1975), a precisification must
respect 'penumbral connections' between the borderline
cases. If *d* has a better case for being a heap than
*c*, then a precisification cannot make *c* a heap but
not *d*. These penumbral connections play a crucial role in the
supervaluationist solution to the Problem of the Many. Finally, a
sentence is determinately true iff it is true on all admissible
precisifications, and determinately false iff it is false on all
admissible precisifications.
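The determinacy clauses just stated can be made vivid with a small toy
model (an illustrative sketch of ours, not drawn from the
supervaluationist literature; all names in it are hypothetical). An
admissible precisification is modeled as a set of objects counted as
heaps, and a sentence is determinately true iff it holds on every
admissible precisification:

```python
# Toy model of supervaluationist semantics (illustrative only).
# A precisification is modeled as the set of objects it counts as heaps.

def determinately(sentence, precisifications):
    """A sentence is determinately true iff it holds on all
    admissible precisifications."""
    return all(sentence(p) for p in precisifications)

def is_heap(x):
    """The sentence 'x is a heap', evaluated at a precisification."""
    return lambda p: x in p

# a is determinately a heap, b determinately not a heap, c borderline:
# every admissible precisification includes a and excludes b, but they
# are free to include or exclude c.
admissible = [{"a"}, {"a", "c"}]

assert determinately(is_heap("a"), admissible)            # determinately true
assert determinately(lambda p: "b" not in p, admissible)  # b: determinately not a heap
assert not determinately(is_heap("c"), admissible)        # c: borderline
assert not determinately(lambda p: "c" not in p, admissible)
```

The borderline case *c* comes out neither determinately a heap nor
determinately not one, exactly as the definition in the text requires.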
In the original example, described by Lewis, the sentence "There
is one cloud in the sky" is determinately true. None of the
sentences "*o*1 is a cloud",
"*o*2 is a cloud" and so on are
determinately true. So a precisification can make each of these either
true or false. But, if it is to preserve the fact that "There is
one cloud in the sky" is determinately true, it must make
exactly *one* of those sentences true. McGee and McLaughlin
suggest that this combination of constraints lets us preserve what is
plausible about M2, without accepting that it is true. The term
'cloud' is vague; there is no fact of the matter as to
whether its extension includes *o*1 or
*o*2 or *o*3 or .... If there
were such a fact, there would have to be something that made it the
case that it included *oj* and not
*ok*, and as M3 correctly points out, no such facts
exist. But this is consistent with saying that its extension does
contain exactly one of the *oi*. The beauty of the
supervaluationist solution is that it lets us hold these seemingly
contradictory positions simultaneously. We also get to capture some of
the plausibility of S2--it is consistent with the
supervaluationist position to say that anything similar to a cloud is
not determinately not a cloud.
Penumbral connections also let us explain some other puzzling
situations. Imagine I point cloudwards and say, "That is a
cloud." Intuitively, what I have said is true, even though
'cloud' is vague, and so is my demonstrative
'that'. (To see this, note that there's no
determinate answer as to which of the *oi* it picks
out.) On different precisifications, 'that' picks out
different *oi*. But on every precisification it
picks out the *oi* that is in the extension of
'cloud', so "That is a cloud" comes out true
as desired. Similarly, if I named the cloud 'Edgar', then
a similar trick lets it be true that 'Edgar' is vague,
while "Edgar is a cloud" is determinately true. So the
supervaluationist solution lets us preserve many of the intuitions
about the original case, including the intuitions that seemed to
underwrite M2, without conceding that there are millions of clouds.
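To see concretely how the machinery delivers these verdicts, here is a
toy model (ours, purely illustrative) in which each precisification
settles both the extension of 'cloud' and the referent of the
demonstrative 'that', with the penumbral connection between them built
in:

```python
# Toy model of the cloud case (illustrative only). Each precisification
# makes exactly one candidate o_i the cloud, and -- via the penumbral
# connection -- makes 'that' pick out that very candidate.

candidates = ["o1", "o2", "o3"]

# A precisification is a pair: (extension of 'cloud', referent of 'that').
precisifications = [({c}, c) for c in candidates]

def determinately(sentence):
    """True iff the sentence holds on all admissible precisifications."""
    return all(sentence(ext, that) for ext, that in precisifications)

# "There is exactly one cloud" is determinately true...
assert determinately(lambda ext, that: len(ext) == 1)

# ...but no instance "o_i is a cloud" is determinately true.
assert all(not determinately(lambda ext, that, c=c: c in ext)
           for c in candidates)

# The penumbral connection makes "That is a cloud" determinately true,
# even though 'that' has no determinate referent.
assert determinately(lambda ext, that: that in ext)
```

The model exhibits the combination the text describes: a determinately
true existential claim with no determinately true instance, and a
determinately true demonstrative sentence with no determinate referent.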
But there are a few objections to this package.
* *Objection*: There are many telling objections to
supervaluationist theories of vagueness.
+ *Reply*: This may be true, but it would take us well beyond the
scope of this entry to outline them all. See the entries on
vagueness
(the section on supervaluation) and the
Sorites Paradox
for more detail.
* *Objection*: The supervaluationist solution makes some
existentially quantified sentences, like "There is a cloud in
the sky" determinately true even though no instance of them is
determinately true.
+ *Reply*: As Lewis says, this is odd, but no odder than things
that we learn to live with in other contexts. Lewis compares this to
"I owe you a horse, but there is no particular horse that I owe
you."
* *Objection*: The penumbral connections here are not
precisely specified. What is the rule that says how much overlap is
required before two objects cannot both be clouds?
+ *Reply*: It is true that the connections are not precisely
specified. It would be quite hard to carefully analyze
'cloud' to work out exactly what they are. But that we
cannot say exactly what the rules are is no reason for saying no such
rules exist, any more than our inability to say exactly what knowledge
is provides a reason for saying that no one ever knows anything.
Scepticism can't be proven *that* easily.
* *Objection*: The penumbral connections appealed to here are
left unexplained. At a crucial stage in the explanation, it seems to
be just assumed that the problem can be solved, and that it is a
determinate truth that there is one cloud in the sky.
+ *Reply 1*: We have to start somewhere in philosophy. This kind
of reply can be spelled out in two ways. There is a
'Moorean' move that says that the premise that there is
one cloud in the sky is more plausible than the premises that would
have to be used in an argument against supervaluationism.
Alternatively, it might be claimed that the main argument for
supervaluationism is an inference to the best explanation. In that
case, the intuition that there is exactly one cloud in the sky, but it
is indeterminate just which object it is, is something to be
explained, not something that has to be *proven*. This is the
kind of position defended by Rosanna Keefe in her book *Theories of
Vagueness*. Although Keefe does not apply this *directly*
to the Problem of the Many, the way to apply her position to the
Problem seems clear enough.
+ *Reply 2*: The penumbral connections we find for most words are
generated by the inferential role provided by the meaning of the
terms. It is because the inference from "This pile of sand is a
heap", and "That pile of sand is slightly larger than this
one, and arranged roughly the same way," to "That pile of
sand is a heap" is generally acceptable that precisifications
which make the premises true and the conclusions false are
inadmissible. (We have to restrict this inferential rule to the case
where 'this' and 'that' are ordinary
demonstratives, and not used to pick out arbitrary fusions of grains,
or else we get bizarre results for reasons that should be familiar by
this point in the story.) And it isn't too hard to specify the
inferential rule here. The inference from
"*oj* is a cloud" and
"*oj* and *ok* massively
overlap" to "*ok* is a cloud" is
just as acceptable as the above inference involving heaps. Indeed, it
is part of the meaning of 'cloud' that this inference is
acceptable. (Much of this reply is drawn from the discussion of
'maximal' predicates in Sider 2001 and 2003, though since
Sider is no supervaluationist, he would not entirely endorse this way
of putting things.)
* *Objection*: The second reply cannot explain the existence
of penumbral connections between 'cloud' and
demonstratives like 'that' and names like
'Edgar'. It would only explain the existence of those
penumbral connections if it was part of the meaning of names and
demonstratives that they fill some kind of inferential role. But this
is inconsistent with the widely held view that demonstratives and
names are
directly referential.
(For more details on debates about the meanings of names, see the
entries on
propositional attitude reports
and
singular propositions.)
+ *Reply*: One response to this would be to deny the view that
names and demonstratives are directly referential. Another would be to
deny that inferential roles provide the *only* penumbral
constraints on precisifications. Weatherson 2003b sketches a theory
that does exactly this. The theory draws on David Lewis's
response to some quite different work on semantic indeterminacy. As
many authors (Quine 1960, Putnam 1981, Kripke 1982) showed, the
dispositions of speakers to use terms are not fine-grained enough to
make the language as precise as we ordinarily think it is. As far as
our usage dispositions go, 'rabbit' could mean undetached
rabbit part, 'vat' could mean vat image, and
'plus' could mean quus. (Quus is a function defined over
pairs of numbers that yields the sum of the two numbers when they are
both small, and 5 when one is sufficiently large.) But intuitively our
language is not that indeterminate: 'plus' determinately
does not mean quus.
Lewis (1983, 1984) suggested the way out here is to posit a notion of
'naturalness'. Sometimes a term *t* denotes the
concept *C*1 rather than *C*2 not
because we are disposed to use *t* as if it meant
*C*1 rather than *C*2, but simply
because *C*1 is a more natural concept.
'Plus' means plus rather than quus simply because plus is
more natural than quus. Something like the same story applies to names
and demonstratives. Imagine I point in the direction of Tibbles the
cat and say, "That is Edgar's favourite cat." There
is a way of systematically (mis)interpreting all my utterances so
'that' denotes the Tibbles-shaped region of space-time
exactly one metre behind Tibbles. (We have to reinterpret what
'cat' means to make this work, but the discussions of
semantic indeterminacy in Quine and Kripke make it clear how to do
this.) So there's nothing in my usage dispositions that makes
'that' mean Tibbles, rather than the region of space-time
that 'follows' him around. But because Tibbles is more
natural than that region of space-time, 'that' does pick
out Tibbles. It is the very same naturalness that makes
'cat' denote a property that Tibbles (and not the trailing
region of space-time) satisfies that makes 'that' denote
Tibbles, a fact that will become important below.
The same kind of story can be applied to the cloud. It is because the
cloud is a more natural object than the region of space-time a mile
above the cloud that our demonstrative 'that' denotes the
cloud and not the region. However, none of the *oi*
are more natural than any other, so there is still no fact of the
matter as to whether 'that' picks out
*oj* or *ok*. Lewis's theory
does not eliminate all semantic indeterminacy; when there are equally
natural candidates to be the denotation of a term, and each of them is
consistent with our dispositions to use the term, then the denotation
of the term is simply indeterminate between those candidates.
Weatherson's theory is that the role of each precisification is
to arbitrarily make one of the *oi* more natural
than the rest. Typically, it is thought that the denotation of a term
according to a precisification is determined directly. It is a fact
about a precisification *P* that, according to it,
'cloud' denotes property *c*1. On
Weatherson's theory this is not the case. What the
precisification does is provide a new, and somewhat arbitrary,
standard of naturalness, and the content of the terms according to the
precisification is then determined by Lewis's theory of content.
The denotations of 'cloud' and 'that'
according to a precisification *P* are those concepts and
objects that are the most *natural-according-to-P* of the
concepts and objects that we could be denoting by those terms, for all
one can tell from the way the terms are used. The coordination between
the two terms, the fact that on every precisification
'that' denotes an object in the extension of
'cloud' is explained by the fact that the very same thing,
naturalness-according-to-*P*, determines the denotation of
'cloud' and of 'that'.
* *Objection* (From Stephen Schiffer 1998): The
supervaluationist account cannot handle speech reports involving vague
names. Imagine that Alex points cloudwards and says, "That is a
cloud." Later Sam points towards the same cloud and says,
"Alex said that that is a cloud." Intuitively, Sam's
utterance is determinately true. But according to the
supervaluationist, it is only determinately true if it is true on
every precisification. So it must be true that Alex said that
*o*1 is a cloud, that Alex said that
*o*2 is a cloud, and so on, since these are all
precisifications of "Alex said that that is a cloud." But
Alex did not say all of those things, for if she did she would be
committed to saying that there are millions of clouds in the sky, and
of course she is not, as the supervaluationists have been arguing.
+ *Reply*: There is a little logical slip here. Let
*Pi* be a precisification of Sam's word
'that' that makes it denote *oi*. All
that the supervaluationist who holds that Sam's utterance is
determinately true is committed to is that for each *i*,
according to *Pi*, Alex said that
*oi* is a cloud. And this will be true if the
denotation of Alex's word 'that' is also
*oi* according to *Pi*. So as long
as there is a penumbral connection between Sam's word
'that' and Alex's word 'that', the
supervaluationist avoids the objection. Such a connection may seem
mysterious at first, but note that Weatherson's theory predicts
that just such a penumbral connection obtains. So if that theory is
acceptable, then Schiffer's objection misfires.
* *Objection* (From Neil McKinnon 2002 and Thomas Sattig
2013): It is part of our notion of a mountain that facts about
mountain-hood are not basic. If something is a mountain, there are
facts in virtue of which it is a mountain, and nearby things are not.
Yet on each precisification of 'mountain', this
won't be true; it will be completely arbitrary which fusion of
rocks is a mountain.
+ *Reply*: It is true that on any precisification there will be
no principled reason why this fusion of rocks is a mountain, and
another is not. And it is true that there should be such a principled
reason; mountainhood facts are not basic. But that problem can be
avoided by the theory that denies that "the supervaluationist
rule [applies] to any statement whatever, never mind that the
statement makes no sense that way" (Lewis 1993, 173).
Lewis's idea, or at least the application of Lewis's idea
to this puzzle, is that we know how to understand the idea that
mountainhood facts are non-arbitrary: we understand it as a claim that
there is some non-arbitrary explanation of which precisifications of
'mountain' are and are not admissible. If we must apply
the supervaluationist rule to every statement, including the statement
that it is not arbitrary which things are mountains, this
understanding is ruled out. Lewis's response is to deny that the
rule must always be applied. As long as there is some sensible way to
understand the claim, we don't have to insist on applying the
supervaluationist machinery to it.
That said, it does seem like this is likely to be somewhat of a
problem for everyone (even the theorist like Lewis who uses the
supervaluationist machinery only when it is helpful). Sattig himself
claims to avoid the problem by making the mountain be a maximal fusion
of candidates. But for any plausible mountain, it will be vague and
somewhat arbitrary what the boundary is between being a
mountain-candidate and not being one. The lower boundaries of
mountains are not, in practice, clearly marked. Similarly, there will
be some arbitrariness in the boundaries between the admissible and
inadmissible precisifications of 'mountain'. We may have
to live with some arbitrariness.
* *Objection* (From J. Robert G. Williams 2006): When we look
at an ordinary mountain, there definitely is (at least) one mountain
in front of us. That seems clear. The vagueness solution respects this
fact. But in many contexts, people tend to systematically confuse
"Definitely, there is an *F*" with "There is
a definite *F*." Indeed, the standard explanation of why
Sorites arguments seem, mistakenly, to be attractive is that this
confusion gets made. Yet on the vagueness account, there is no
definite mountain; all the candidates are borderline cases. So by
parity of reasoning, we should expect intuition to deny that there is
definitely a mountain. And intuition does not deny that, it loudly
confirms it. At the very least, this shows a tension between the
standard account of the Sorites paradox, and the vagueness solution to
the Problem of the Many.
+ *Reply*: This is definitely a problem for the views that many
philosophers have put forward. As Williams stresses, it isn't on
its own a problem for the vagueness solution to the Problem of the
Many, but it is a problem for the conjunction of that solution with a
widely endorsed, and independently plausible, explanation of the
Sorites paradox. In his dissertation, Nicholas K. Jones (2010) argues
that the right response is to give up the idea that speakers typically
confuse "Definitely, there is an *F*" with
"There is a definite *F*", and instead use a
different resolution of the Sorites.
Space prevents a further discussion of all possible objections to the
supervaluationist account, but interested readers are particularly
encouraged to look at Neil McKinnon's objection to the account
(see the Other Internet Resources section), which suggests that
distinctive problems arise for the supervaluationist when there really
are two or more clouds involved.
Even if the supervaluationist solution to the Problem of the Many has
responses to all of the objections that have been levelled against it,
some of those objections rely on theories that are contentious and/or
underdeveloped. So it is far from clear at this stage how well the
supervaluationist solution, or indeed any solution based on vagueness,
to the Problem of the Many will do in future years.
## 8. Vague Parthood and Constitution
Some theorists have argued that the underlying cause of the problem is
that we have the wrong theory about the relation between parts and
wholes. Peter van Inwagen (1990) argues that the problem is that we
have assumed that the parthood relation is determinate. We have
assumed that it is always determinately true or determinately false
that one object is a part of another. According to van Inwagen,
sometimes neither of these options applies. He thinks that we need to
adopt some kind of
fuzzy logic
when we are discussing parts and wholes. It can be true to degree
0.7, for example, that one object is part of another. Given these
resources, van Inwagen says, we are free to conclude that there is
exactly one cloud in the sky, and that some of the 'outer'
water droplets are part of it to a degree strictly between 0 and 1.
This lets us keep the intuition that it is indeterminate whether these
outlying water droplets are parts of the cloud without accepting
that there are millions of clouds. Note that this is not what van
Inwagen would say about *this* version of the paradox, since he
holds that simples compose an object only when that object is
alive. For van Inwagen, as for Unger, there are no clouds, only
cloud-like swarms of atoms. But van Inwagen recognises that a similar
problem arises for cats, or for people, two kinds of things that he
does believe exist, and he wields this vague constitution theory to
solve the problems that arise there.
Traditionally, many philosophers thought that such a solution was
downright incoherent. A tradition stretching back to Bertrand Russell
(1923) and Michael Dummett (1975) held that vagueness was always and
everywhere a representational phenomenon. From this perspective, it
didn't make sense to talk about it being vague or indeterminate
whether a particular droplet was part of a particular cloud. But this
traditional view has come under a lot of pressure in recent years; see
Barnes (2010) for one of the best challenges, and Sorensen (2013,
section 8) for a survey of more work. So let us assume here it is
legitimate to talk about the possibility that parthood itself, and not
just our representation of it, is vague. As Hudson (2001) notes
though, it is far from clear just how the appeal to fuzzy logic is
meant to help *here*. Originally it was clear, for each of the
*n* water droplets, whether it was a part of the cloud to
degree 1 or degree 0. So there were 2^*n* candidate
clouds, and the Problem of the Many is the problem of preserving the
intuition that there is just one cloud when faced with all these
objects. It is unclear how *increasing* the range of possible
relationships between each droplet and the cloud from 2 to
continuum-many should help here, for now it seems there are at least
continuum-many cloud-like objects to choose between, one for each
function from the *n* droplets to [0, 1], and we need a way of
saying exactly one of them is a cloud. Assume that some droplet is
part of the cloud to degree 0.7.
Now consider the object (or perhaps possible object) that is just like
the cloud, except this droplet is only part of it to degree 0.6. Does
that object exist, and is it a cloud? Van Inwagen says, in a way
reminiscent of Markosian's brutal composition solution, that
such an 'object' does not even exist.
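Hudson's counting point can be illustrated with a short sketch (ours,
purely illustrative): with classical parthood each of *n* borderline
droplets is simply in or out, giving 2^*n* candidates, while degreed
parthood assigns each droplet a value in [0, 1], so refining the degree
scale only multiplies the candidates.

```python
# Sketch (illustrative only): counting candidate clouds.
from itertools import product

def classical_candidates(n):
    """All in/out (degree 1 / degree 0) assignments to n borderline
    droplets -- the classical picture."""
    return list(product([0, 1], repeat=n))

assert len(classical_candidates(10)) == 2 ** 10  # 1024 candidates

def degreed_candidates(n, k):
    """Degree assignments using k evenly spaced degrees in [0, 1].
    The full fuzzy picture allows any real degree, i.e. continuum-many
    assignments; even this coarse discretisation gives k**n candidates."""
    degrees = [i / (k - 1) for i in range(k)]
    return list(product(degrees, repeat=n))

# Refining the degree scale multiplies, rather than reduces, the
# candidate clouds -- Hudson's worry about the fuzzy-logic move.
assert len(degreed_candidates(3, 11)) == 11 ** 3
```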
A different kind of solution is offered by Mark Johnston (1992) and E.
J. Lowe (1982, 1995). Both of them suggest that the key to solving the
Problem is to distinguish cloud-constituters from clouds. They say it
is a category mistake to identify clouds with any fusion of water
droplets, because they have different identity conditions. The cloud
could survive the transformation of half its droplets into puddles on
the footpath (or whatever kind of land it happens to be raining over);
it would just be a smaller cloud. The fusion could not survive this. As Johnston
says, "Hence Unger's insistent and ironic question
'But which of *o*1, *o*2,
*o*3, ... is our paradigm cloud
*c*?' has as its proper answer 'None'"
(1992: 100, numbering slightly altered).
Lewis (1993) listed several objections to this position, and Lowe
(1995) responds to them. (Lewis and Lowe discuss a version of the
problem using cats not clouds, and we will sometimes follow them
below.)
Lewis's first objection is that positing clouds as well as
cloud-constituting fusions of atoms is metaphysically extravagant. As
Lowe (and, for separate reasons, Johnston) point out, these extra
objects are arguably needed to solve puzzles to do with persistence.
Hence it is no objection to a solution to the Problem of the Many that
it posits such objects. Resolving these debates would take us too far
afield, so let us assume (as Lewis does) that we have reason to
believe that these objects exist.
Secondly, Lewis says that even with this move, we still have a Problem
of the Many applied to cloud-constituters, rather than to clouds. Lowe
responds that since 'cloud-constituter' is not a folk
concept, we don't really have any philosophically salient
intuitions here, so this cannot be a way in which the position is
unintuitive.
Finally, Lewis says that each of the constituters is so like the
object it is meant to merely constitute (be it a cloud, or a cat, or
whatever), it satisfies the same sortals as that object. So if we were
originally worried that there were 1001 cats (or clouds) where we
thought there was one, now we should be worried that there are 1002.
But as Lowe points out, this argument seems to assume that *being a
cat*, or *being a cloud*, is an intrinsic property. If we
assume that it is extrinsic, if it turns on the history of the object,
perhaps its future or its possible future, and on which object it is
embedded in, then the fact that a cloud-constituter looks, when
considered in isolation, to be a cloud is little reason to think it
actually is a cloud.
Johnston provides an argument that the distinction between clouds and
cloud-constituting fusions of water droplets is crucial to solving the
Problem. He thinks that the following principle is sound, and not
threatened by examples like our cloud.
>
> (9') If *y* is a paradigm *F*, and *x* is
> an entity that differs from *y* in any respect relevant to
> being an *F* only very minutely, *and x is of the right
> category, i.e. is not a mere quantity or piece of matter*, then
> *x* is an *F*. (Johnston 1992: 100)
>
The theorist who thinks that clouds are just fusions of water droplets
cannot accept this principle, or they will conclude that every
*oi* is a cloud, since for them each
*oi* is of the right category. On the other hand,
Johnston himself cannot accept it either, unless he denies there can
be another object *c'* which is in a similar position to
*c*, and is of the same category as *c*, but differs
with respect to which water droplets constitute it. It seems that what
is doing the work in Johnston's solution is not just the
distinction between constitution and identity, but a tacit restriction
on when there is a 'higher-level' object constituted by
certain 'lower-level' objects. To that extent, his theory
also resembles Markosian's brutal composition theory, though
since Johnston can accept that every set of atoms has a fusion his
theory has different costs and benefits to Markosian's
theory.
A recent version of this kind of view comes from Nicholas K. Jones
(2015), though he focusses on constitution, not composition. (Indeed,
a distinctive aspect of his view is that he takes constitution to be
metaphysically prior to composition.) Jones rejects the following
principle, which is similar to premise 4 in the original inconsistent set.
* If the water droplets in *si* constitute
*oi*, and the water droplets in *sj*
constitute *oj*, and the sets *si*
and *sj* are not identical, then the objects
*oi* and *oj* are not
identical.
He argues that some water droplets can
constitute a cloud, and some other water droplets can constitute the
very same cloud. On this view, the predicate *constitute x*
behaves a bit like the predicate *surround the building*. It
can be true that the Fs surround the building, and the Gs surround the
building, without the Fs being the Gs. And on Jones's view, it
can be true that the Fs constitute *x*, and the Gs constitute
*x*, without the Fs being the Gs. This resembles Lewis's
solution in terms of almost-identity, since both Jones and Lewis say
that there is one cloud, yet both *si* and
*sj* can be said to compose it. But for Lewis, this
is possible because he rejects the inference from *There is one
cloud* to *If a and b are clouds, they are identical*.
Jones accepts this inference, and instead rejects the inference from the
premise that *si* and *sj* are
distinct, and each compose a cloud, to the conclusion that they
compose non-identical clouds.
## 9. Rethinking Parthood
After concluding that all of these kinds of solutions face serious
difficulties, Hudson (2001: Chapter 2) outlines a new solution, one
which rejects so many of the presuppositions of the puzzle that it is
best to count him as rejecting the reasoning, rather than rejecting
any particular premise. (Hudson is somewhat tentative about
*endorsing* this view, as opposed to merely endorsing the claim
that it looks better than its many rivals, but for expository purposes
let us refer to it here as his view.) To see the motivation behind
Hudson's approach, consider a slightly different case, a variant
of one discussed in Wiggins 1968. Tibbles is born at midnight Sunday,
replete with a splendid tail, called Tail. An unfortunate accident
involving a guillotine sees Tibbles lose his tail at midday Monday,
though the tail is preserved for posterity. Then midnight Monday,
Tibbles dies. Now consider the timeless question, "Is Tail part
of Tibbles?" Intuitively, we want to say the question is
underspecified. Outside of Monday, the question does not arise, for
Tibbles does not exist. Before midday Monday, the answer is
"Yes", and after midday the answer is "No".
This suggests that there is really no proposition that Tail is part of
Tibbles. There is a proposition that Tail is part of Tibbles on Monday
morning (that's true) and that Tail is part of Tibbles on Monday
afternoon (that's false), but no proposition involving just the
parthood relation and two objects. Parthood is a three-place relation
between two objects and a time, not a two-place relation between two
objects.
Hudson suggests that this line of reasoning is potentially on the
right track, but that the conclusion is not quite right. Parthood is a
three-place relation, but the third place is not filled by a time, but
by a region of space-time. To a crude approximation, *x* is
part of *y* at *s* is true if (as we'd normally
say) *x* is a part of *y* and *s* is a region of
space-time containing no region not occupied by *y* and all
regions occupied by *x*. But this should be taken as a
heuristic guide only, not as a reductive definition, since parthood is
really a three-place relation, so the crude approximation does not
even express a proposition according to Hudson.
To see how this applies to the Problem of the Many, let's
simplify the case a little bit so there are only two water droplets,
*w*1 and *w*2, that are neither
determinately part of the cloud nor determinately not a part of it. As
well there is the core of the cloud, call it *a*. On an
orthodox theory, there are four proto-clouds here, *a*,
*a* + *w*1, *a* +
*w*2 and *a* + *w*1 +
*w*2. On Hudson's theory the largest and the
smallest proto-clouds still exist, but in the middle there is a quite
different kind of object, which we'll call *c*. Let
*r*1 be the region occupied by *a* and
*w*1, and *r*2 the region occupied
by *a* and *w*2. Then the following claims
are all true according to Hudson:
* *c* exactly occupies *r*1;
* *c* exactly occupies *r*2;
* *c* does not occupy the region consisting of the union of
*r*1 and *r*2;
* *c* has *w*1 as a part at
*r*1, but not at *r*2;
* *c* has *w*2 as a part at
*r*2, but not at *r*1;
* *c* has no parts at the region consisting of the union of
*r*1 and *r*2.
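The six claims can be captured in a small model (ours, purely
illustrative; the region names r1 and r2 follow the text, and the
lookup-table encoding is our simplification, not Hudson's own
formalism) that treats parthood as relative to a region:

```python
# Sketch (illustrative only): Hudson's region-relative parthood, modeled
# as a lookup from (object, region) to the parts the object has there.
# r1 = the region occupied by a and w1; r2 = the region occupied by a
# and w2; "r1+r2" = the union of the two regions.

parts_at = {
    ("c", "r1"): {"a", "w1"},
    ("c", "r2"): {"a", "w2"},
    ("c", "r1+r2"): set(),  # c has no parts at the union region
}

def has_part_at(x, part, region):
    """The three-place relation: part is a part of x at region."""
    return part in parts_at.get((x, region), set())

# The text's claims about c, checked against the model:
assert has_part_at("c", "w1", "r1") and not has_part_at("c", "w1", "r2")
assert has_part_at("c", "w2", "r2") and not has_part_at("c", "w2", "r1")
assert not has_part_at("c", "w1", "r1+r2")
assert not has_part_at("c", "w2", "r1+r2")
```

The oddity of *c* is plain in the model: which droplets are parts of it
varies with the region at which the question is asked, and at the union
region it has no parts at all.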
Hudson defines "*x* exactly occupies *s*" as
follows:
* *x* has a part at *s*,
* there is no region of space-time, *s*\*, such that
*s*\* has *s* as a subregion, while *x* has a part
at *s*\*, and
* for every subregion of *s*, *s'*, *x*
has a part at *s'*. (Hudson 2001: 63)
At first, it might look like not much has been accomplished here. All
that we did was turn a Problem of 4 clouds into a Problem of 3 clouds,
replacing the fusions *a* + *w*1 and
*a* + *w*2 with the new, and oddly behaved,
*c*. But that is to overlook a rather important feature of the
remaining proto-clouds. The three remaining proto-clouds can be
strictly ordered by the 'part of' relation. This was not
previously possible, since neither *a* + *w*1
nor *a* + *w*2 were part of the other. If we
adopt the principle that 'cloud' is a maximal predicate,
so no cloud can be a proper part of another cloud, we now get the
conclusion that exactly one of the proto-clouds is a cloud, as
desired.
This is a quite ingenious approach, and it deserves some attention in
the future literature. It is hard to say what will emerge as the main
costs and benefits of the view in advance of that literature, but the
following two points seem worthy of attention. First, if we are
allowed to appeal to the principle that no cloud is a proper part of
another, why not appeal to the principle that no two clouds massively
overlap, and get from 4 proto-clouds to one actual cloud that way?
Secondly, why don't we have an object that is just like the old
*a* + *w*1, that is, an object that has
*w*1 as a part at *r*1, and does
not have *w*2 (or anything else) as a part at
*r*2? If we get it back, as well as *a* +
*w*2, then all of Hudson's tinkering with
mereology will just have converted a problem of 4 clouds into a
problem of 5 clouds.
Neither of these points should be taken to be conclusive refutations.
As things stand now, Hudson's solution joins the ranks of the
many and varied proposed solutions to the Problem of the Many. For
such a young problem, the variety of these solutions is rather
impressive. Whether the next few years will see these ranks whittled
down by refutation, or swelled by imaginative theorising, remains to
be seen.
## 1. Brief Biography
Hans (born Johannes) Vaihinger's early life was in many ways
that of a typical intellectual in nineteenth-century Germany. Born on
25 September 1852 near Tübingen, the son of the pastor Johann
Georg Vaihinger, Hans was intended for the clergy. He began his
studies in theology at the University of Tübingen in 1870, but,
never especially devout, he soon turned his attentions to other
subjects, primarily philosophy and natural science. Throughout the
next four years, Vaihinger's course of study emphasized the
classics of German philosophy--Kant, Fichte, Schelling, Hegel,
and Schleiermacher--but he also devoted himself intensely
to the independent study of Schopenhauer and Darwin. In 1874,
Vaihinger completed his dissertation under the supervision of the
logician Christoph von Sigwart with a prize-winning essay,
entitled *Recent Theories of Consciousness according to their
Metaphysical Foundation and their Significance for
Psychology*.
Later in the same year, he reported to Leipzig for compulsory military
service, but was excused due to his poor eyesight. Free of his
military duties, Vaihinger had the opportunity to attend lectures at
the University from, among others, the founder of empirical psychology
Wilhelm Wundt. It was during this time that Vaihinger first
encountered the work of a figure who, next to Kant, would be his most
important and lasting philosophical influence, the Neo-Kantian
Friedrich Lange. Vaihinger describes the impact
Lange's *History of Materialism* made on him as follows:
"Now at last I had found the man whom I had sought in vain
during those four years [at Tübingen]. I found a master, a guide,
an ideal teacher... . All that I had striven for and aimed at
stood before my eyes as a finished masterpiece. From this time onwards
I called myself a disciple of F.A. Lange" (*PAO*,
xxi-xxii).
After a brief tenure in Leipzig, Vaihinger moved to Berlin, where he
continued his studies with Hermann von Helmholtz and the Neo-Kantian
Eduard Zeller. In 1876, Vaihinger published his first work, an
exposition and defense of Lange's brand of Neo-Kantianism,
entitled *Hartmann, Dühring, and Lange: Towards a
History of German Philosophy in the Nineteenth Century*. It was
here that the views which would form the basis of the *PAO*
first began to take shape. In autumn of the same year, Vaihinger again
relocated--this time to the University of Strassburg--and by
early 1877 he had habilitated with a work entitled *Logical Studies
on Fictions. Part I: The Theory of Scientific Fictions*. Though the
*PAO* would not be published for more than 30 years, the
*Logical Studies* is already, according to Vaihinger,
"exactly the same" as Part I of the former work
(*PAO* xxiii-xxiv).
The extremely long gestation period of the *PAO* seems to
have been the product of a variety of factors. First and foremost,
monetary considerations compelled Vaihinger to find permanent academic
employment. To further his candidacy, Vaihinger decided to bring out a
*Commentary on Kant's Critique of Pure Reason* in honor of
the centenary of the first *Critique*'s publication. The
first volume of the *Commentary* was released in 1881, and
helped Vaihinger secure a permanent position at the University of
Halle. The second volume followed in 1892. A sense of the scholarly
depth of Vaihinger's work is given by the fact that the
two-volume *Commentary*, which deals only with the
*KrV*'s Prefaces, Introductions, and Transcendental
Aesthetic, reaches nearly 1,100 pages. Moreover, a number of personal
and professional engagements occupied Vaihinger in the last decade of
the century. In 1889 he married Elisabeth Schweigger, the daughter of
a Berlin bookseller. In 1892 they had a son, Richard, and in 1895 a
daughter, Erna. In 1896, he founded the journal *Kant-Studien*,
and in 1901 the *Kant-Gesellschaft*.
Finally, in 1911, Vaihinger published the first edition of
*PAO*. The work made him something like a philosophical
celebrity virtually overnight. It proved so popular that it had gone
through no fewer than 10 editions by the time of his death in 1933,
attracting the attention of luminaries from a wide variety of academic
fields (including Einstein, Ostwald, and Freud). Halle even became
informally known as the "Vaihinger-Stadt" (Vaihinger
city). The *PAO*'s success enabled Vaihinger to found,
in 1918, yet another journal, *Annalen der Philosophie und
philosophischen Kritik*, which dealt specifically with the themes
broached in the
*PAO*, and included contributions from figures such as the
mathematician Moritz Pasch, the embryologist Wilhelm Roux, and the
positivist philosophers Rudolf Carnap and Hans Reichenbach. In 1930,
Carnap and Reichenbach took over editorship of the *Annalen* and
renamed it *Erkenntnis*.
Though this period of Vaihinger's life was marked by tremendous
professional success, it was also a period of personal struggle. In
1906, suffering badly from cataracts, Vaihinger was forced to
discontinue his lecturing. Eventually, he was completely blinded. In
1913, there occurred a rather bizarre incident: an anti-Semitic
publication, the *Semi-Kürschner*, "accused"
Vaihinger of being a Jew. With anti-Semitism already on the rise in
Germany, Vaihinger felt compelled to defend his reputation in court,
and sued the publication for defamation. Though the
*Semi-Kürschner* ultimately removed his name, the incident
perhaps explains in part why his reputation would continue to suffer
under the Nazis. Additionally, Vaihinger had invested significant
portions of his assets in Russia; the October Revolution, together with
the economic crisis after the First World War, left him in dire
financial straits, and he was forced to sell the bookstore he had
inherited from Elisabeth's parents. Struggle turned to tragedy
when, in 1918, Vaihinger's daughter Erna committed suicide. His
son Richard also began suffering from neuropathy brought on by the war,
and by 1929 he was completely incapacitated.
Vaihinger died in 1933, on the eve of the rise of the Third Reich. A
long-time liberal and pacifist, Vaihinger and his work were treated
with hostile silence throughout the Nazi period. A final, posthumous
tragedy would thus befall him during these years: once among
Germany's most celebrated and influential philosophers,
Vaihinger became a virtual nobody, and his works were quickly
forgotten. Though contemporary philosophical literature reveals a
renewed interest in his ideas, it is fair to say that his reputation
has never fully recovered. For a more detailed treatment of
Vaihinger's biography, see (Simon 2014). Vaihinger himself
discusses his intellectual development at length in his
autobiographical (1921b) (also included as the introduction to
C.K. Ogden's 1925 English translation of
*PAO*).
## 2. Early Views and Intellectual Context
With the hegemony of absolute idealism broken, Germany was awash with
competing philosophical systems by the time Vaihinger habilitated in
1877. Three important movements are worth emphasizing (for detailed
studies of the late nineteenth-century context, see Beiser [2013;
2014; 2015; 2017]). First, under the influence of Lange and others,
Neo-Kantianism had become a major force on the intellectual
scene. Second, due partly to Schopenhauer's rising star, and
partly to the continued influence of Hegel and Schelling, grand
metaphysical theorizing after the idealist model was experiencing a
resurgence in the work of Adolf Trendelenburg, Hermann Lotze, and
Eduard von Hartmann. Finally, traditional materialism and empiricism,
though very much on the defensive, had nevertheless gained back
considerable ground since their nadir in early decades of the
century.
Vaihinger's first intervention in these foundational debates
came in the form of his short tract *Hartmann, Dühring, and
Lange*. Its title notwithstanding, the book is less a work of
historical exegesis, and more a polemical defense of Lange's
Kantian standpoint against the dual assault of naive realism
(represented by the positivism of Eugen Dühring) and unchecked
metaphysical speculation (exemplified by Hartmann's
Neo-Schopenhauerian view, according to which fundamental reality is a
monistic vital substance). Vaihinger's chief charge against
Hartmann and Dühring is that they are *dogmatists*, that
they systematically overstep the critical strictures Kant had placed
on cognition, claiming knowledge of things in themselves (1876, 10,
17). Both commit "the original sin of *post-Kantian
philosophy*"--Hegel's
"fantastic *petitio principii*" of "the unity
of thought and being" (1876, 67). Lange's Neo-Kantianism,
by contrast, represents something like the best of both worlds, and so
promises a middle path in a seemingly intractable dispute. With
Dühring, Lange emphasizes the importance of grounding philosophy
in the results of the empirical sciences. But with Hartmann, he avoids
naively eliding the empirical objects studied by the sciences
with things in themselves.
It is not, however, only Lange's in-principle restriction of
knowledge to appearances that Vaihinger wishes to champion. More
importantly, he holds that Lange has demonstrated that the empirical
sciences *themselves* conclusively support the transcendental
idealist's position. In Chapter IV, Part 3 of the second volume
of his magnum opus *The History of Materialism and Critique of its
Contemporary Significance*, Lange draws on the work of Johannes
Müller and Helmholtz to argue that "the physiology of the
sense organs is the developed, or corrected Kantianism, and
Kant's system can, as it were, be viewed as a program for the
more recent discoveries in this field" (1873-75, 2:409).
The basic idea was that such research had shown that our perceptual
apparatus plays an active role in structuring our experience, and that
the output of such a process does not qualitatively resemble the
initial external stimulus, for which it is a mere "sign."
Since what we perceive is always the product of such unconscious
physiological processes, it supposedly follows that we have no
knowledge of the objects that ultimately cause our perceptions (for
more on these sorts of arguments, see [Hatfield 1990, 165-71,
195-208, and esp. 208-218; Beiser 2014, 199-205,
381-86], and the entries on Hermann von Helmholtz and Friedrich
Lange). Thus, if Dühring rejects transcendental idealism, he
simply has not consistently followed his own prescription that
philosophy should base itself on the empirical sciences. And if
Hartmann claims to know the nature of things in themselves, he can do
so only by flagrantly ignoring the results of those sciences.
Though perhaps rhetorically effective, this argument leads to a basic
tension in Vaihinger's early view. Vaihinger sums up
Lange's conclusion as follows: "the result of the
physiology of the senses is that we do not ... perceive external
objects, but rather ourselves first *produce the appearance of such
things*, specifically as a consequence of the affection by
transcendent objects" (1876, 56). These appearances are the
result of our "physiological organization" operating on
sensory stimuli. But he goes on to concede that this
"'organization' signifies only the unknown ...
*Y* whose collaboration with the 'thing in itself',
*X*, produces the world of inner and outer experience" (1876,
57), that "even our own body ... is only a product of our
optical apparatus" (1876, 57), and that "not merely the
external
*world*, but also the *organs* with which we grasp it are
mere *images* of what truly exists, which itself remains unknown
to us" (1876, 58).
So, according to the Lange-Vaihinger view, the objects of our
experience are the joint product of affection by things in themselves
and the operations of our physiological organization. This is supposed
to have the conclusion that things in themselves are entirely
unknowable, *a fortiori* not knowable through the methods of the
empirical sciences. The problem becomes clear, then, when we ask what
sort of thing 'physiological organization' is supposed to
signify. If it refers to a thing in itself (the cognitive apparatus in
itself), then the empirical sciences could establish nothing about it.
In this case, the Langean argument is entirely impotent. If it refers
to an appearance, then there is evidently a circle in the argument: the
existence of appearances is supposed to be explained by the causal
collaboration of things in themselves and the perceptual apparatus, but
the perceptual apparatus is itself an appearance. Finally, if the
physiological organization is both an appearance and a thing in itself,
then the empirical sciences *can* in fact yield knowledge of at
least some things in themselves. This, of course, is precisely what the
argument is meant to forestall.
Vaihinger does little to assuage these problems in (1876). Sometimes
he suggests that they are rooted in the fact that the very notion of a
thing in itself involves a contradiction, albeit one which is
"grounded in the contradictory constitution of our cognitive
faculty itself" (1876, 34). In other words, Vaihinger seeks to
view the issue of things in themselves along the lines of a Kantian
antinomy--a conflict which arises necessarily from the nature of
our reason, but which can at least be recognized as such. This,
however, is of little help. For one, it is questionable whether the
basic epistemological standpoint of (1876) can even be coherently
stated without commitment to things in themselves. For another, a
crucial feature of Kant's antinomies is that they are
*resolved* by his transcendental idealism, thus lending that
doctrine an indirect proof (*KrV*, A 506/B 534). If the
critical philosophy itself were to generate insoluble contradictions,
that would be reason to reject, not accept, it. Vaihinger would
continue to struggle with these issues in the *PAO*, which
still aims to advance the broadly Langean project of naturalizing
Kantianism. As we shall see in §6, however, Vaihinger's
conception of Kantianism undergoes a considerable shift, now taking
the form of what he calls "idealistic positivism",
"positivist idealism", or even (in what is evidently the
first use of the phrase) "logical positivism." For more on
Vaihinger's debt to Lange, see (Ceynowa 1993, 133-72;
Heidelberger 2014).
## 3. Interpretation of Kant
Before returning to Vaihinger's own positive philosophy, it is
worth examining the task that occupied him intensely between the
publication of (1876) and the *PAO*: the interpretation of
Kant. The rise of Neo-Kantianism in the second half of the nineteenth
century brought with it a renewed interest in the historical study of
the critical philosophy. The majority of the works written during this
period tended to approach Kant's texts polemically, either with
the aim of defending and "updating" them, or with the
intention of refuting them. In terms of its scholarly sobriety, depth
of textual analysis, and sensitivity to Kant's immediate
intellectual context, Vaihinger's *Commentary* was
unique. He notes that his approach will be that of those who analyze
ancient texts "with philological sobriety and strict rigor
... in order to determine their genuine meaning from
an *historical* standpoint" (1881-92, 1:iii). The
extent to which Vaihinger resists allowing his own philosophical
commitments to influence his interpretation is indeed impressive. In
fact, the very idea that understanding Kant's texts might
require a distinctively historical kind of approach seems to have been
something of a novelty and is among Vaihinger's lasting
contributions to Kant scholarship.
There are other important, and more substantive, respects in which the
*Commentary* was at the vanguard of Kant scholarship. The
dominant trend in the interpretation of Kant's theoretical
philosophy throughout most of the nineteenth century had been the
physio-psychological reading advanced by such figures as Schopenhauer,
Jakob Fries, Friedrich Beneke, Jürgen Bona Meyer, Helmholtz, and
of course Lange (Beiser 2018, 64-72, 81-84; 2014,
209-11, 460-63). According to this tradition, Kant's
main goal in the *Critique* was the discovery of the *causal
mechanisms* underlying perception. Kant's conception of the
*a priori* was accordingly understood as referring to innate or
ingrained properties of the psychological subject that played an active
role in the construction of perception and acquisition of empirical
knowledge. The first to break decisively with this tradition was
probably Hermann Cohen. Cohen (1871) highlighted the
*epistemological* side of Kant's project--his
interest in *justifying* various claims to knowledge.
Accordingly, Cohen interpreted the *a priori* as referring to
the logical, rather than psychological, conditions of knowledge (1871,
208). In this respect, Cohen's influence on the
*Commentary* was decisive. Vaihinger insists at the outset that
Kant's "philosophy is, in the first instance,
*epistemology* [*Erkenntnistheorie*]"
(1881-92, 1:8).
Vaihinger certainly does not, however, accept Cohen's
interpretation *tout court*. Though Cohen too had emphasized
the need for historical sensitivity in interpreting Kant, his aims
were ultimately still polemical: he believed more careful attention to
Kant's texts could exonerate his system from some of the
perennial charges of inconsistency. For example, Kant's doctrine
that things in themselves exist independently of us and causally
affect our minds to produce our representations had long been thought
to violate his doctrine that the categories have validity only for
objects of possible experience. Another charge, very much alive at the
time, was that Kant's key arguments for the
non-spatiotemporality of things in themselves failed on the grounds
that they showed, at best, only the subjectivity of space and time,
not their *mere* subjectivity. According to this criticism,
Kant overlooked the possibility that space and time might
be *both* subjective forms of intuition and objective
properties of things in themselves (the classic statement of this
criticism is [Trendelenburg 1867]). Cohen's response to both
charges was to argue that interpreters were wrong to read Kant's
doctrine of things in themselves as positing a class of entities
ontologically distinct from appearances. Instead, he maintained, the
Kantian thing in itself is a mere "limiting concept"
(*Grenzbegriff*), to whose actual existence Kant was never
committed (1871, 252, 268).
Vaihinger, by contrast, does not hesitate to admit that the above
objections are genuine problems for Kant. Indeed, he believes they
pinpoint serious inconsistencies in Kant's position. For
example, against Cohen's deflationary treatment of the thing in
itself, Vaihinger notes, "Kant speaks thousands of times of
affecting things in themselves; the impossibility of the existence and
causal efficacy of things in themselves is, by contrast, merely a
*consequence* that *Kant's readers* must certainly
draw from his theory of the categories, but which *Kant himself*
hints at only seldom, and even then only timidly" (1881-92,
2:47) (the actual target of this remark is Fichte, but the point
applies to Cohen's reading too). Vaihinger's goal in the
*Commentary* is not to exonerate Kant from such inconsistencies,
but to explain why he fell into them. In particular, Vaihinger argues
that careful attention to both Kant's pre-critical writings and
his unpublished notes reveals that the *Critique* is really a
"patchwork" of inconsistent views and arguments from
various periods of Kant's intellectual development. It consists
of "geological strata which differ in both time of composition
and content" (1902, 27). In the case of the Deduction, Vaihinger
even claims to have been able to trace precisely each portion of the
text to a particular series of notes from the decade between the
*Dissertation* and the *KrV* (1902). In this respect,
then, Kant's great book resembles more the texts of Homer than
the work of a single author, writing at a single time, with a single
set of views.
More specifically, Vaihinger believes the development of Kant's
thought proceeds through six stages. From 1750-1760 Kant is a
dogmatic rationalist after the Leibnizian-Wolffian model. Then, under
the influence of Locke and Hume, he endorses empiricism from
1760-1764. With the publication of *Dreams of a Spirit
Seer*, he begins to adopt the standpoint of the critical
philosophy, but by the time of the 1770 *Inaugural
Dissertation*, the influence of Leibniz (particularly of the
*Nouveaux Essais*) reasserts itself. This reversion to
dogmatism lasts until 1772 when he becomes a skeptic, again under the
influence of Hume. Finally, he arrives at the mature critical
standpoint with the publication of the *Critique of Pure
Reason* in 1781 (1881-92, 1:47-49). Unfortunately, on
Vaihinger's reading, doctrines from the first five periods
continue to figure in Kant's arguments for and even the core
tenets of the critical philosophy. It is this which explains the
numerous inconsistencies besetting Kant's system.
A clear example of Vaihinger's basic approach is his treatment
of Kant's famous argument for the subjectivity of space from
incongruent counterparts. In the *Prolegomena*, Kant argues
thus:
>
>
>
> There are no inner differences here [in the case of incongruent
> counterparts] that any understanding could merely think; and yet the
> differences are inner as far as the senses teach ... one
> hand's glove cannot be used on the other... . These objects
> are surely not representations of things as they are in themselves,
> and
> *as the pure understanding would cognize them*, rather, they are
> sensory intuitions, i.e., appearances. (4:286)
>
>
Roughly, Vaihinger reconstructs the argument as follows: (1) Some
differences in the spatial properties of objects cannot be known
through the pure understanding; (2) All differences between things in
themselves can be known through the pure understanding; (C) Space is
not a thing in itself. Premise (2), however, violates Kant's
noumenal ignorance doctrine, according to which things in themselves
cannot be known through the understanding's categories. Thus,
Vaihinger concludes, "this assumption [that things in themselves
are knowable through the pure understanding] is obviously anathema to
Kantian criticism, and an archaic reversion to dogmatism"
(1881-92, 2:528). More specifically, his claim is that the
premise is residue from Kant's view in the 1770
*Dissertation*, according to which things in themselves
*are* objects as the pure understanding knows them
(1881-92, 2:354, 453, 522).
Vaihinger's reading of the *KrV*, and especially of the
Deduction, has become known as the "patchwork thesis." The
patchwork reading has been influential in Anglophone scholarship, due
largely to its adoption by Norman Kemp Smith (1918), but it remains
controversial. Since the reading apparently requires us to assume that
Kant failed to understand that combining views he knew were
inconsistent with one another would produce a view that is itself
inconsistent, it may be objected that, "if such a conclusion is
true, we must say that Immanuel Kant was either abnormally stupid, or
else that he took his thinking in a frivolous spirit" (Paton
1930, 157). However, the thesis that the Deduction, regardless of its
precise method of composition, at least contains multiple inconsistent
lines of argument, rather than a single unified argument, has found
able defenders since Vaihinger (see e.g., Wolff 1963; Guyer 1987). For a
recent challenge to this revised patchwork reading see (Allison
2015).
Vaihinger's interpretive work during the 1880s and
90s did not extend to the other major part of the *KrV*,
the Transcendental Dialectic. The *PAO*, however, includes a
supplementary appendix in which Vaihinger lays out a novel
interpretation of this chapter (*PAO* 201-35). The
Dialectic was the part of the *KrV* that clearly had the most
significant impact on Vaihinger's own thought, and his
interpretation of it is radical and interesting. It will therefore be
worth briefly examining before turning to the main argument of the
*PAO*.
In the Dialectic, Kant claims that certain concepts (e.g., those of an
immaterial soul, freedom of the will, and God) have their root in pure
reason. He argues that such concepts are mere "Ideas"
(*Ideen*), in the sense that they are not constitutive
principles of experience, and thus yield no "cognition"
(*Erkenntnis*) of objects. Nevertheless, he suggests they are
"not arbitrarily thought up" (A 327/B 384), but have a
number of important "regulative" uses. In his works on
moral philosophy, Kant goes on to argue that we can even have a
certain kind of "practical cognition" of or
"rational faith" in the Ideas. A natural interpretation of
Kant's position regarding the Ideas holds that we lack
theoretical justification for believing that they refer to objects,
and this lack of justification is such that we cannot claim knowledge
of the existence or non-existence of such objects. Practical reason,
however, may provide justification for the belief in (say) the
existence of God, even if not of a sort sufficient to yield knowledge
of him. Vaihinger disputes this reading. Instead, he interprets the
Ideas as self-conscious fictions, which are nevertheless licensed by
their theoretical and practical utility. This utility justifies us in
"making use" of such concepts--looking, e.g., at
nature as though it were the product of an intelligent
designer--but never in *believing* they refer to objects.
The interpretation relies crucially on two factors: first, a
suggestive passage in which Kant calls Ideas "heuristic
fictions" (A 771/B 799), and contrasts them with hypotheses; and
second, Kant's frequent use of '*als ob*' and
related locutions when describing the regulative employment of the
Ideas. Both the distinction between fictions and hypotheses, and the
idea of treating reality *as if* certain concepts appropriately
applied to it are major themes that find extensive development in the
*PAO*.
Vaihinger's interpretation of the Ideas as fictions is highly
controversial, and was already subjected to withering criticism in his
own day by Erich Adickes (1927). Adickes accuses Vaihinger of ignoring
countless passages that flatly contradict his fictionalist reading,
and of misleadingly paraphrasing or eliding passages he does cite. In
one way, at least, this criticism is unfair. Vaihinger explicitly
concedes: "in Kant we also find in the same contexts many
passages which permit or even demand a contrary interpretation...
. Two tendencies are revealed, a critical and a dogmatic, a
revolutionary and a conservative" (*PAO*
212). Vaihinger's interpretation is not, unlike his work in
the *Commentary*, intended to be strictly historical; he is
rather presenting one strand in Kant's thought that he finds
especially promising. Adickes nevertheless makes a compelling case
that even those passages on which Vaihinger does rely do not speak
decisively in favor of his interpretation. Most contemporary Kant
scholars follow Adickes in rejecting the fictionalist interpretation
of the Ideas, though it continues to find some supporters (see e.g.,
Schaper 1966; Korsgaard 1996, 162-76; and esp. Rauscher
2015).
## 4. The Epistemology of the *PAO*
I turn now to Vaihinger's chief systematic work, the
*PAO*. That this work represents his attempt to continue
Lange's project of naturalizing Kantian epistemology
(Heidelberger 2014, 51-55) may appear surprising given
Vaihinger's interpretation of Kant. Confusion here may be
forestalled by attending to something Vaihinger himself says about the
aim of the work:
>
>
>
> There were two possible ways of working out the Neo-Kantianism of
> F.A. Lange. Either the Kantian standpoint could be developed, on the
> basis of a closer and more accurate study of Kant's teaching, and
> this is what Cohen has done, or one could bring Lange's
> Neo-Kantianism into relation with empiricism and positivism. This has
> been done in my philosophy of "As if," which also leads to
> a more thorough study of Kant's "As if" theory.
> (*PAO* xxii)
>
>
So Vaihinger distinguishes between the views maintained by the
*historical* Kant, and Lange's brand of
*Kantianism*, which his own systematic work defends and
develops.
But the *PAO* also builds on Lange's position in
non-trivial ways. Vaihinger's autobiographical essay "The
Origin of the Philosophy of 'As If'" points to two
further significant influences, Arthur Schopenhauer and Charles Darwin.
Vaihinger's interest in Schopenhauer was sparked by the doctrine
that the intellect is "a mere tool in the service of the
will" (Schopenhauer 1958, 2:205). This view is based in part on
Schopenhauer's metaphysics, according to which the ultimate
nature of reality is nothing other than a suprapersonal, aimless,
quasi-volitional striving or "will." Schopenhauer holds
that this "will" expresses itself empirically in the
striving of organic beings to preserve their individual existence, and
the intellect has accordingly developed as a mere tool to aid more
complex organisms in this task. Vaihinger believed that Darwin's
theory of natural selection had provided an empirical grounding for
Schopenhauer's hypothesis (*PAO* xviii). This was crucial,
since Vaihinger's own Kantianism prevented him from endorsing
Schopenhauer's speculative vitalist metaphysics (*PAO*
xvii). At the same time, Vaihinger comes to view Kant's doctrine
of the essential limits of our cognition through this
Schopenhauerian-Darwinian lens: the "limitations of human
knowledge," he says, are "a necessary and natural result of
the fact that thought and knowledge are originally only a means, to
attain the Life-purpose, so that their actual independence signifies a
breaking-away from their original purpose; indeed, by the fact of this
breaking-loose, thought is confronted by impossible problems"
(*PAO* xviii). Vaihinger will use this idea to motivate his own
fictionalism, and to provide a radical reinterpretation of
Kantianism.
These various influences are drawn together in the introductory
chapters of *PAO*. Here, Vaihinger presents a view of the
"psyche" as a purely natural faculty, akin to a bodily
organ, and bound by the same laws as the rest of the organic world. All
psychic activities, including "scientific thought," are
"to be considered from the point of view of an organic
function" (*PAO* 1). Like bodily organs, the psyche is
designed (or rather, selected) to perform a particular role within the
total economy of the organism. Its purpose is "to change and
elaborate the perceptual material into those ideas, associations of
ideas, and conceptual constructs," which produce "a world
[in which] objective happenings can be calculated and our behavior
successfully carried out in relation to phenomena" (*PAO*
2). Its purpose is therefore not "to be a copy [*Abbild*]
of reality," but rather "an *instrument for finding our
way about more easily in this world*" (*PAO* 22,
translation altered). Though broadly in the spirit of Schopenhauer and
Darwin, this instrumentalist conception of the mind is, as Ceynowa has
argued (1993, 35-132), also more specifically indebted to the
nineteenth century psychologist Adolf Horwicz.
Whatever its exact provenance, Vaihinger's basic thought is that
what is selected for in the evolution of our cognitive capacities is
the ability to aid us in carrying out activities that are useful for
the purposes of survival. Specifically, Vaihinger seems to reason that
any course of action--say, procuring food for
oneself--involves at least some minimal degree of planning, and
that in turn requires some basic ability to predict what the future
will be like, and to know how it will be different if we choose to
φ rather than to ψ. Thus, he suggests that the purpose of
thought is "*to calculate those events that occur without our
intervention* and to realize our impulses appropriately"
(*PAO* 3). For the bare purposes of carrying out those actions
necessary for survival, Vaihinger is claiming, the intellect would not
have to be a reliable guide to the way the world really is. On the
contrary, fictions may have an indispensable role to play here.
What are we to make of this sort of argument? One might suggest that
the fact that our cognitive apparatus is a product of natural selection
supports, rather than undermines, the idea that it is a reliable guide
to the way the world is. For, being mistaken about how things stand in
one's environment is surely an impediment, not an aid, to
survival. Vaihinger would likely respond that this only shows that we
have to get the *right* things about the world right. It does
not mean that our total image of the world is, or even could be, a
*copy* of reality, accurate in all its details.
Vaihinger's view is that the picture of the world that is easiest
to work with will depart from reality in certain crucial respects.
One may now wonder, however, what is to be made of Vaihinger's
near exclusive emphasis in *PAO* on fictions in the natural
sciences and mathematics. After all, the theorizing in fundamental
physics is a rather different matter than determining how to find a
meal, and even if such theories turn out to be useful down the line, it
seems that life would (and does) manage just fine without them. Here,
Vaihinger relies on another Darwinian idea, which he calls the
"Law of Preponderance of the Means over the End," according
to which "means which serve a purpose often undergo a more
complete development than is necessary for the attainment of their
purpose" (*PAO* xxix). In the language of contemporary
evolutionary biology, scientific thought is like a
"spandrel." At the same time, Vaihinger is keen to
emphasize the connection between scientific theorizing and more humdrum
cognitive processes. He adopts the broadly Kantian view that what is
given to us in experience is merely a "chaos of sensations"
(*PAO* 117), which needs to be ordered and refined by the
logical functions of thought, or categories. For example, the repeated
co-occurrence of sensations of a branching shape and sensations of
green leads the psyche to postulate a relation of substantial inherence
between a *thing* or *substance* (the tree) and its
*attribute* (green) (*PAO* 122-23). However, he
argues, in a manner evocative of Berkeley, that the notion of a substance is
incoherent since it implies the existence of a bare substrate with no
properties whatsoever (*PAO* 125-26). The fiction is
nevertheless justified, he suggests, because it facilitates the
recognition of regularities in the occurrences of our sensations, and
the communication of these regularities to others (*PAO* 130).
Given that our most immediate relation to the world is thus mediated by
the categories (which also include the relations of part-whole,
cause-effect, *inter alia*), it is natural to suppose that these
fictions will resurface in our more sophisticated theories, e.g., in the
form of the atom (a simple material substance). Vaihinger is not
suggesting that genuine knowledge that serves no particular practical
aim is impossible; but he is suggesting that our total theoretic image
of the world is ultimately limited by those fictions that have been
preserved as most adaptive for our basic practical aims (*PAO*
130).
Vaihinger's emphasis on the deep connection between cognition
and action has led many interpreters to liken his views to those of
the American pragmatists (e.g., Appiah 2017, 4; Koridze 2014; Fine
1993, 4; and esp. Ceynowa 1993). Others have disputed this connection
(Bouriau 2016). Vaihinger himself tells us that fictionalism and
pragmatism share important similarities, but nevertheless are
distinguished by the fact that pragmatism accepts, whereas
fictionalism rejects, the principle that "an idea which is found
to be useful in practice proves thereby that it is also true in
theory" (*PAO* viii). So, if one defines a pragmatist
narrowly as someone who endorses this Jamesian definition of truth,
Vaihinger is no pragmatist; his core contention is that "we may
not argue from the utility of a psychical and logical construct that
it is right" (*PAO* 51). Nevertheless, given
Vaihinger's emphasis on the inseparability of practical and
theoretical thought, his view that the use of a concept can be
justified on purely practical grounds, and so forth, it does not seem
unfair to liken him to the pragmatist tradition in a wider sense.
## 5. The Nature of Vaihinger's Fictionalism
Central to Vaihinger's project in the *PAO* is a
distinction between two sorts of fictions: (1) what he (somewhat
inaptly) terms 'semi-fictions,' which are "methods
and concepts based upon a simple deviation from reality"
(*PAO* 59); and (2) 'real' or 'genuine
fictions' which are *self-contradictory* concepts
(ibid.). In other words, semi-fictions are statements or propositions
that happen not to correspond to the world; genuine fictions are those
that claim something *impossible* about the world.
An important example of the first kind is what Vaihinger calls the
"abstractive" or "neglective" fiction
(*PAO*, 14-17, 140-41). Such fictions are roughly
what we have in mind today when we speak of 'idealized
models' (contemporary philosophers of science sometimes
distinguish further between "Aristotelian" and
"Galilean" methods of idealization. See the article on
Models in Science. Vaihinger does not clearly distinguish these). A
standard example is the model of the planetary system given in
classical mechanics:
> In physics we find such a fiction in the fact that masses of
> undeniable extension, e.g., the sun and the earth, in connection with
> the derivation of certain basic concepts of mechanics and the
> calculation of their reciprocal attraction are reduced to points or
> concentrated into points (gravitational points) in order, by means of
> this fiction, to facilitate the presentation of the more composite
> phenomena. Such a neglect of elements is especially resorted to where
> a very small factor is assumed to be zero. (*PAO*
> 16)
The idea is familiar. If what we are interested in is determining
the trajectory and velocity of the planets around the Sun, then it is
useful to ignore or iron-out a wide variety of features of the system,
e.g., the size of the planets, the presence of friction, the
gravitational pull that the planets and their moons exert on one
another. Here, then, we evidently have a clear-cut case in which
"pretending" that certain things are true of the solar
system can greatly aid our aim of prediction. As Vaihinger notes, we
get the "right" results here because the properties of the
system we are ignoring have only a negligible influence on the
phenomena we are interested in studying. The solar system behaves
*nearly* *just as it would* if the planets *were*
mass points, etc. This suggests a counterfactual understanding of
fictions: by reasoning from assumptions that are false--sometimes
radically false--at the actual world, we may still arrive at
conclusions that are true, or quite nearly true, at the actual world.
If those results are arrived at more efficiently by entertaining the
fiction, then the fiction is justified.
If this is Vaihinger's view, however, it would appear to face a
serious difficulty. For, his main interest is in the so-called
"real fictions," which, as noted above, are not merely
false, but *contradictory*. Since everything follows from a
contradiction, any counterfactual whose antecedent is a real fiction
will be trivially true, and this is to say they cannot possibly do the
work Vaihinger envisions (Appiah 2017, 11; Cohen 1923, 485). One way to
respond, here, is to note that the logic with which Vaihinger operates
is still essentially the version of Aristotelian logic found in Kant
and throughout much of the nineteenth century. Such
pre-twentieth-century logics often implicitly reject the principle
*ex contradictione quodlibet*. See, e.g., the discussion of
conditionals in the influential *Port-Royal Logic* (Arnauld and
Nicole 1996, 100).
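The triviality objection can be made concrete with a toy possible-worlds model (my illustration, not Vaihinger's or Appiah's): on the standard quantificational reading, a conditional whose antecedent is satisfied at no world comes out vacuously true whatever its consequent says.

```python
# A toy possible-worlds model of the triviality objection: a counterfactual
# is true iff every antecedent-world is a consequent-world, so an antecedent
# satisfied at *no* world makes the conditional vacuously true.
worlds = [
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": False, "q": True},
]

def counterfactual(antecedent, consequent):
    """True iff the consequent holds at every world where the antecedent holds
    (vacuously true when no world satisfies the antecedent)."""
    return all(consequent(w) for w in worlds if antecedent(w))

contradiction = lambda w: w["p"] and not w["p"]  # satisfied at no world

print(counterfactual(contradiction, lambda w: w["q"]))      # True
print(counterfactual(contradiction, lambda w: not w["q"]))  # True (trivially!)
print(counterfactual(lambda w: w["p"], lambda w: w["q"]))   # False
```

Since the contradictory antecedent supports both a consequent and its negation, such conditionals cannot discriminate between good and bad inferences, which is exactly the worry pressed against reading real fictions counterfactually.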
Relatedly, but from a more contemporary perspective, one might read
Vaihinger as anticipating some recent views on counterpossible
reasoning (Pollard 2010). Consider the following counterfactual
conditionals:
* (1)
If Fermat's last theorem were false, then
no argument for it would be sound.
* (2)
If Fermat's last theorem were false, then
Andrew Wiles would deserve eternal fame.
* (3)
If intuitionistic logic were correct, then the Law
of Excluded Middle would fail.
* (4)
If intuitionistic logic were correct, then the Law
of Excluded Middle would hold.
Intuitively, (1) and (3) are true while (2) and (4) are false.
Moreover, someone who rejects intuitionism might well wish to
*argue* against intuitionism by relying on something like (3),
even though what the antecedent states is, by her lights, impossible.
These facts have led some to suppose that (1) and (3) are not merely
true, but non-trivially true (for more on these sorts of arguments, see
the entry for Impossible Worlds). Vaihinger's fictionalism might
thus be read as adding a heap of substantive examples to the list of
alleged non-trivial counterpossible conditionals.
This sort of reading seems supported by examples that Vaihinger
himself gives, and the theory of "fictive judgments" he
bases on them. One of Vaihinger's favorite examples is:
* (5)
If the circle were a polygon, then it would be
subject to the laws of rectilinear figures. (*PAO*,
193)
Though (5)'s antecedent and consequent both state something
impossible, Vaihinger contends it forms the basis of
Archimedes's proof of the area of a circle using the method of
exhaustion (*PAO* 177). Here, then, is a case in which a proof
no one would wish to call trivial allegedly proceeds from a
contradictory assumption. Generalizing, Vaihinger claims, "in a
hypothetical connection [sc. conditional proposition], not only real
and possible but also unreal and impossible things can be introduced,
because it is merely the connection between the two presuppositions
and not their actual reality that is being expressed"
(*PAO* 193-94).
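A quick numerical gloss on the method of exhaustion (my example, not Archimedes's or Vaihinger's): treating the circle "as if" it were a regular polygon with ever more sides yields its true area in the limit.

```python
import math

# The area of a regular n-gon inscribed in a unit circle is (n/2)*sin(2*pi/n),
# which approaches pi as the number of sides grows.
def inscribed_polygon_area(n):
    """Area of a regular n-gon inscribed in the unit circle."""
    return (n / 2) * math.sin(2 * math.pi / n)

for n in (6, 96, 10_000):  # Archimedes famously pushed to 96 sides
    print(f"{n:>6} sides: {inscribed_polygon_area(n):.10f}")

# Reasoning "as if" the circle were a limiting polygon recovers its true area:
print(abs(inscribed_polygon_area(10_000) - math.pi) < 1e-6)  # True
```

The "impossible" supposition that the circle is a polygon thus does real mathematical work, which is Vaihinger's point about the fruitfulness of fictive judgments like (5).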
Anthony Appiah rejects this kind of interpretation, on the grounds
that such counterpossibles are unintelligible (2017, 11-17; cf.
Wilholt 2014, 112). Instead, he suggests Vaihinger's talk of
useful contradictions is more fruitfully understood as referring to
the use of multiple mutually incompatible models. In particular, the
point is that we may find it useful to regard certain phenomena in one
way for some purposes, but in another, incompatible way for other
purposes. Even when it turns out to be useful to combine incompatible
models for understanding one and the same phenomenon (or range of
phenomena), Appiah suggests, Vaihinger need not be committed to a
paraconsistent logic. Rather, drawing on (Cartwright 1999), he
suggests that using a model does not involve drawing out each of its
logical implications, but instead a kind of skill of "knowing
which lines of inference one should and which one should not
follow" (2017, 13). Moreover, because our main aim in using
models is not ultimately to obtain the truth about the physical world,
but to get around in it, there is no reason to avoid such
inconsistency (2017, 14-17).
Appiah's interpretation allows those who are wary of
paraconsistent logics to salvage something philosophically promising
from Vaihinger's talk of contradictions. It also perhaps has the
advantage of doing greater justice to the multifarious character of
Vaihinger's theory of fictions. For, while Vaihinger claims that
the fictive judgment has a "hypothetical element," he does
not simply reduce fictive judgments to conditionals (*PAO*
194). On the other hand, the interpretation stumbles against texts in
which Vaihinger suggests that the contradictions of real fictions
exist
*within* theories or models, not between them. For example, the
concept of a (classical) atom is supposed to be a real fiction because
it is absurd to suppose something which occupies space, but
nevertheless has no extension. This is to say that the atomistic
fiction is itself contradictory, not merely that it conflicts with
other ways of modeling matter. Another way of putting the worry is to
note that the interpretation blurs the apparently sharp distinction
between semi-fictions and real fictions: any model will be a real
fiction as long as we use it together with another model incompatible
with it.
For yet another approach, which denies that Vaihinger has or needs a
general account of how contradictory fictions can be fruitful, see
(Fine 1993, 9-11). On this reading, Vaihinger is interested only
in surveying actual scientific practice, and revealing that, as a
matter of fact, it *does* make fruitful use of real fictions;
the question of how this is possible is beside the point. There is
something to this characterization of Vaihinger's procedure,
which trades more in examples than in general pronouncements. However,
it will also be difficult for proponents of this reading to explain
passages like these: "This purpose [of thought] can only be that
of facilitating conceptual activity, of effecting a safe and rapid
connection of sensations. What we have to show, therefore, is
*how* fictional methods and constructs *render this
possible*" (*PAO* 52, emphasis added); "the
philosophy of as if shows that *and why* these false and
contradictory representations are still useful" (1921a, 532,
emphasis added). Here, Vaihinger seems to indicate that his purpose is
not just to enumerate instances of useful fictions, but to explain how
and why they are useful.
In addition to Vaihinger's specific analysis of fictions, one
might wonder about his precise brand of fictionalism. Contemporary
philosophers distinguish between "hermeneutic
fictionalism" and "revolutionary fictionalism" (see
the entry on Fictionalism). Hermeneutic fictionalism holds that
certain domains of discourse *are* fictional, in the sense that
their participants are not aiming at literal truth, but only seeming
to do so. Some textual evidence speaks in favor of reading Vaihinger
as a hermeneutic fictionalist. For example, he writes: "the
axiom and the hypothesis ... endeavor to be expressions of
reality. The fiction, on the other hand, is not such an expression nor
does it claim to be one" (*PAO* 60). And
Vaihinger's theory of the fictive judgment maintains that many
judgements of the syntactic form '*A* is *B*' are
properly analyzed as, or merely elliptical for, '*A* is to be
regarded *as if* it were *B*'--"the
'is' is a very short abbreviation for an exceedingly
complicated train of thought" (*PAO* 195). On the other
hand, in many places Vaihinger clearly indicates that he thinks we
often treat what would be better regarded as fictional discourse as
aiming at truth. Many disputes, he suggests, can be resolved by
reinterpreting such discourse fictionally. Thus, for example,
proponents of classical atomism have been wrong to think that their
theory is true, but their opponents have also been wrong to believe
that the theory should therefore be given up (*PAO* 53). He
makes an analogous point about the dispute between substantivalists
and relationalists over the nature of space (*PAO*
170-71). So, Vaihinger appears to be a revolutionary
fictionalist at least about some discourses in which he is
interested.
## 6. The As-If Philosophy and Kantianism
The most obvious aim of the *PAO* is to establish the
centrality of fictions to our cognitive life. But there is also a
broader, and in some ways more ambitious, aim to the book--to
provide the kind of "corrected Kantianism" Lange had tried
to establish in the *History of Materialism*. As we saw above,
Vaihinger views Kant's system as riddled with inconsistencies,
which arise from the alleged fact that Kant retains certain elements
of his pre-critical dogmatism even after adopting the critical
standpoint. The As-If philosophy can be seen as an attempt to retain
what Vaihinger takes to be Kant's core philosophical insights,
while removing the inconsistencies that beset the surrounding system
in which those insights were originally couched.
As was noted above, a major problem for Vaihinger's early view,
as for many of the Neo-Kantians of his day, was the status of the thing
in itself. Vaihinger now consistently rejects this notion, predictably
labelling it a fiction. Given that things in themselves are supposed to
be transcendent *substances* that *causally*
*affect* us, this rejection follows immediately from his view
that the concepts of substance and cause are themselves fictions
(*PAO* 55-56). True to form, however, Vaihinger still
holds that the concept of things in themselves has some utility: as
Cohen had argued, it serves as a "limiting concept"
(*PAO* 55), ensuring that we restrict our inquiries to what is
empirically given to us. As we have seen, Vaihinger puts an
instrumentalist spin on this Kantian idea; all knowledge is empirical
in the sense that our guiding cognitive aim is the prediction and
control of empirical phenomena, not correspondence to objective reality
(though, once again, it bears emphasizing that Vaihinger does not seem
to be claiming that such correspondence is impossible, only that it is
incidental to our main epistemic aim).
Vaihinger's rejection of the thing in itself is implicit in
the general label he gives to his own philosophical view,
"idealistic positivism." The 'positivism' here
is in part an epistemological, and in part a metaphysical thesis. The
epistemological thesis is that "what is given consists only of
sensations" (*PAO* 124; cf. xxviii). The metaphysical
thesis is that "what we usually term reality *consists* of
our sensational contents [*Empfindungsinhalte*]"
(*PAO* xxx, emphasis added); "Nothing *exists*
except sensations" (*PAO* 44, emphasis added; cf. 1921a,
532-33). This extreme view (for which Vaihinger does not argue)
has the immediate consequence that all knowledge is of appearances,
since nothing exists except appearances.
In addition to this radical kind of empiricism, the *PAO*
continues to uphold the broadly Kantian view that cognition is
necessarily *limited*, or, as Vaihinger sometimes puts it, that
there is no "identity between thought and being." However,
it is no longer limited in the sense of being cut off from some
transcendent realm of things in themselves, but in being essentially
incomplete. The point is best appreciated by attending to another
distinction central to the *PAO*--that between
*fictions* and *hypotheses*. Hypotheses (not to be
confused with so-called "hypothetical judgments") are
judgments that are "problematic" in form, i.e., which have
the form 'it is possible that *A* is *B*'
(where the relevant sense of possibility is epistemic). Thus, if the
judgment 'matter is composed of atoms' is a hypothesis,
the person making it is maintaining that it may be true, and holds out
hope that it may be confirmed or falsified in the future. By contrast,
and as we have seen already, the fictive judgment has the form
'*A* is to be regarded *as if* it were *B*
(when in fact *A* is *not B*).' Since Vaihinger
holds that such judgments can be licensed by their practical utility,
and because it is our practical aims that ultimately guide and
constrain theoretical inquiry, Vaihinger infers that every theory will
necessarily involve some fictions; not even the best theory will
consist entirely of hypotheses, much less true hypotheses.
The distinction between hypotheses and fictions is important for
understanding Vaihinger's revised brand of Kantianism in another
way as well. In the Dialectic, Kant had argued that dogmatic
metaphysics necessarily leads reason into a series of conflicts, or
"antinomies," with itself. It can produce, that is,
equally valid arguments putatively establishing both that freedom
exists and that freedom does not exist, that matter is composed of
simples and that matter is not composed of simples, and so on. Kant
claims that these conflicts are "natural" to reason, since
reason has an innate drive to know the "unconditioned." He
argues, however, that these conflicts can be avoided if (and only if)
one recognizes the non-spatiotemporality of things in themselves.
Transcendental idealism allows us to maintain that in some of the
antinomial conflicts it is possible for both the thesis and antithesis
to be false without contradiction, and that in others it is possible
for both to be true without contradiction. The critical philosophy
thus has the distinct advantage of being able to explain how certain
apparently insoluble metaphysical debates arise, and of providing the
only possible resolution of those debates. Vaihinger similarly posits
a natural human tendency to confuse, not appearances and things in
themselves, but fictions and hypotheses. Many statements that are
introduced into our discourse as fictions end up being taken for
hypotheses. And vice versa, many propositions that are originally
intended as hypotheses are retained with mere fictional significance.
Vaihinger calls this the "law of ideational shifts"
(*Gesetz der Ideenverschiebung*) (*PAO* 92-99).
Seemingly intractable debates in the sciences can be resolved, he
maintains, by systematically recognizing this distinction. Thus, for
instance, certain arguments against atomism are cogent if intended to
establish that atomism is false (since it is *contradictory*);
but they are erroneous if intended to imply that atomism should be
rejected. Likewise, arguments for atomism are sure to fail if they are
meant to establish the existence of atoms; but they are perfectly
acceptable if they only aim to show that atomistic discourse is useful
and indeed essential for scientific practice. For, the atomist's
discourse is best understood as a conscious fiction. As Vaihinger
suggests, the confusion arises from ignoring the essential practical
dimension of thought (*PAO* xviii).
How successful is Vaihinger in resolving the alleged tensions in
Kant's critical philosophy? On one level, straightforwardly
rejecting things in themselves and endorsing an ontology only of
sensations clearly avoids the supposed problem of attributing causal
efficacy to transcendent objects. On another, however, it would appear
that Vaihinger cannot so easily reject things in themselves without
causing problems for his basic epistemological view. For one, an
obvious question arises as to *what it is* that *has*
the sensational contents that serve as Vaihinger's ontological
basics. For another, we have seen that Vaihinger's arguments in
the Introduction hold that the *mind* or
"*psyche*" operates on sensory stimuli, and that it
has certain ends to which fictions are expedient means. If nothing
exists except sensations, then it seems we must read Vaihinger's
talk of 'mind' loosely, perhaps as referring to some
particular kind of collection of or relation between sensations. But,
one is tempted to say, it is simply incoherent to suggest that
sensations themselves are the bearers of sensations, that they have
aims, that they operate on and systematize other sensations, etc. At
the very least, "operation" would seem to be a causal
notion, so that Vaihinger's causal fictionalism threatens to
undermine the arguments at the foundation of his system. Vaihinger
sometimes attempts to respond to these sorts of worries. He suggests,
for example, "From our point of view the sequence of sensations
constitutes ultimate reality, and two poles are mentally added,
subject and object" (*PAO* 56); "nothing exists
except sensations which we analyze into two poles, subject and
object ... . In other words, the ego and the
'thing in itself' are fictions" (*PAO*
44). The question of course is who the *we* is here that is
doing the *mental adding*. Perhaps Vaihinger intends such talk
to itself be fictional. But he cannot hold that without undermining
the very motivation for his fictionalism. Nor does Vaihinger ever make
clear and precise exactly what "poles" of sensations are
supposed to be.
One may also worry that, throughout the *PAO*, Vaihinger has
reverted to a crudely psychologistic version of Kantianism that both
he and Cohen had earlier rejected. On this score, Vaihinger is more
easily defended. Though his fictionalism draws on certain biological
and psychological theories, his main concern is still epistemological:
"Apart from the general warning not to confuse fictions with
reality, we may also call attention to the fact that every fiction
must be able to justify itself, i.e., must *justify* itself by
what it accomplishes for the progress of science" (*PAO*
79). There would arguably be a confusion if Vaihinger held that the
fact that certain concepts (or dispositions to form them) are
psychologically innate justifies us in *believing* that there
exist objects exemplifying them. But that is precisely the view he
wishes to reject. The appeal to the theory of natural selection is
intended to suggest a *pragmatic*, rather than evidential,
justification for the *use* of those concepts. In this sense,
Vaihinger sees himself as drawing on Kant's method of justifying
the concepts of God, freedom, and immortality on practical grounds,
but extending it beyond the domain of moral philosophy.
## 1. The Nature of Logical Truth
### 1.1 Modality
As we said above, it seems to be universally accepted that, if there
are any logical truths at all, a logical truth ought to be such that
it could not be false, or equivalently, it ought to be such that it
must be true. But as we also said, there is virtually no agreement
about the specific character of the pertinent modality.
Except among those who reject the notion of logical truth altogether,
or those who, while accepting it, reject the notion of logical form,
there is wide agreement that at least part of the modal force of a
logical truth ought to be due to its being a particular case of a true
universal generalization over the possible values of the schematic
letters in "formal" schemata like \((1')-(3')\). (These
values may but need not be expressions.) On what is possibly the
oldest way of understanding the logical modality, that modal force is
entirely due to this property: thus, for example, on this view to say
that (1) must be true can only mean that (1) is a particular case of
the true universal generalization "For all suitable \(P\),
\(Q\), \(a\) and \(b\), if \(a\) is a \(P\) only if \(b\) is a \(Q\),
and \(a\) is a \(P\), then \(b\) is a \(Q\)". On one traditional
(but not uncontroversial) interpretation, Aristotle's claim that the
conclusion of a *syllogismos* must be true if the premises are
true ought to be understood in this way. In a famous passage of the
*Prior Analytics*, he says: "A *syllogismos* is
speech (*logos*) in which, certain things being supposed,
something other than the things supposed results of necessity (*ex
anankes*) because they are so" (24b18-20). Think of
(2) as a *syllogismos* in which the "things
supposed" are (2a) and (2b), and in which the thing that
"results of necessity" is (2c):
* (2a) No desire is
voluntary.
* (2b) Some beliefs are
desires.
* (2c) Some beliefs are
not voluntary.
On the interpretation we are describing, Aristotle's view is that to
say that (2c) results of necessity from (2a) and (2b) is to say that
(2) is a particular case of the true universal generalization
"For all suitable \(P\), \(Q\) and \(R\), if no \(Q\) is \(R\)
and some \(P\)s are \(Q\)s, then some \(P\)s are not \(R\)". For
this interpretation see e.g. Alexander of Aphrodisias, 208.16 (quoted
by Łukasiewicz 1957, §41), Bolzano (1837, §155) and
Łukasiewicz (1957, §5).
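The universal generalization of which (2) is claimed to be a particular case can be stated and proved formally. Here is a minimal sketch in Lean 4 (my formalization, not in the source), rendering the syllogistic form, traditionally called Ferio, with unary predicates; instantiating \(P\), \(Q\), \(R\) as "belief", "desire", "voluntary" recovers (2):

```lean
-- A minimal formalization (mine, not in the source) of the generalization:
-- "For all suitable P, Q and R, if no Q is R and some Ps are Qs,
--  then some Ps are not R."
theorem ferio {α : Type} (P Q R : α → Prop)
    (noQisR : ∀ x, Q x → ¬ R x)       -- (2a) No Q is R
    (somePisQ : ∃ x, P x ∧ Q x) :     -- (2b) Some Ps are Qs
    ∃ x, P x ∧ ¬ R x :=               -- (2c) Some Ps are not R
  somePisQ.elim (fun x h => ⟨x, h.1, noQisR x h.2⟩)
```

On the interpretation under discussion, the "necessity" with which (2c) follows is exhausted by the truth of this general theorem, with no further modal ingredient.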
In many other ancient and medieval logicians, "must"
claims are understood as universal generalizations about actual items
(even if they are not always understood as universal generalizations
on "formal" schemata). Especially prominent is Diodorus'
view that a proposition is necessary just in case it is true at all
times (see Mates 1961, III, §3). Note that this makes sense of
the idea that (2) must be true but, say, "People watch TV"
could be false, for surely this sentence was not true in Diodorus'
time. Diodorus' view appears to have been very common in the Middle
Ages, when authors like William of Sherwood and Walter Burley seem to
have understood the perceived necessity of conditionals like (2) as
truth at all times (see Knuuttila 1982, pp. 348-9). An
understanding of necessity as eternity is frequent also in later
authors; see e.g., Kant, *Critique of Pure Reason*, B 184. In
favor of the mentioned interpretation of Aristotle and of the
Diodorean view it might be pointed out that we often use modal
locutions to stress the consequents of conditionals that follow from
mere universal generalizations about the actual world, as in "If
gas prices go up, necessarily the economy slows down".
Many authors have thought that views of this sort do not account for
the full strength of the modal import of logical truths. A nowadays
very common, but (apparently) late view in the history of philosophy,
is that the necessity of a logical truth does not merely imply that
some generalization about actual items holds, but also implies that
the truth would have been true at a whole range of counterfactual
circumstances. Leibniz assigned this property to necessary truths such
as those of logic and geometry, and seems to have been one of the
first to speak of the counterfactual circumstances as "possible
universes" or worlds (see the letter to Bourguet, pp.
572-3, for a crisp statement of his views that contrasts them
with the views in the preceding paragraph; Knuuttila 1982, pp. 353 ff.
detects the earliest transparent talk of counterfactual circumstances
and of necessity understood as at least implying truth in all of these
in Duns Scotus and Buridan; see also the entry on
medieval theories of modality).
In contemporary writings the understanding of necessity as truth in
all counterfactual circumstances, and the view that logical truths are
necessary in this sense, are widespread--although many, perhaps
most, authors adopt "reductivist" views of modality that
see talk of counterfactual circumstances as no more than disguised
talk about certain actualized (possibly abstract) items, such as
linguistic descriptions. Even Leibniz seems to have thought of his
"possible universes" as ideas in the mind of God. (See
Lewis 1986 for an introduction to the contemporary polemics in this
area.)
However, even after Leibniz and up to the present, many logicians seem
to have avoided a commitment to a strong notion of necessity as truth
in all (actual and) counterfactual circumstances. Thus Bolzano, in
line with his interpretation of Aristotle mentioned above,
characterizes necessary propositions as those whose negation is
incompatible with purely general truths (see Bolzano 1837, §119).
Frege says that "the apodictic judgment [i.e., roughly, the
judgment whose content begins with a 'necessarily'
governing the rest of the content] is distinguished from the assertory
in that it suggests the existence of universal judgments from which
the proposition can be inferred, while in the case of the assertory
one such a suggestion is lacking" (Frege 1879, §4). Tarski
is even closer to the view traditionally attributed to Aristotle, for
it is pretty clear that for him to say that e.g. (2c)
"must" be true if (2a) and (2b) are true is to say that
(2) is a particular case of the "formal" generalization
"For all suitable \(P\), \(Q\) and \(R\), if no \(Q\) is \(R\)
and some \(P\)s are \(Q\)s, then some \(P\)s are not \(R\)" (see
Tarski 1936a, pp. 411, 415, 417, or the corresponding passages in
Tarski 1936b; see also Ray 1996). Quine is known for his explicit
rejection of any modality that cannot be understood in terms of
universal generalizations about the actual world (see especially Quine
1963). In some of these cases, this attitude is explained by a
distrust of notions that are thought not to have reached a fully
respectable scientific status, like the strong modal notions; it is
frequently accompanied in such authors, who are often practicing
logicians, by the proposal to characterize logical truth as a species
of validity (in the sense of 2.3 below). (See Williamson 2013, ch. 3,
and 2017, for a recent endorsement of the idea that logical truths are
to be seen as instances of true universal generalizations over their
logical forms, based on the scientific or "abductive"
fruitfulness of the idea, though not on a disdain of modal
notions.)
On a recent view developed by Beall and Restall (2000, 2006), called
by them "logical pluralism", the concept of logical truth
carries a commitment to the idea that a logical truth is true in all
of a range of items or "cases", and its necessity consists
in the truth of such a general claim (see Beall and Restall 2006, p.
24). However, the concept of logical truth does not single out a
unique range of "cases" as privileged in determining an
extension for the concept, or, what amounts to the same thing, the
universal generalization over a logical form can be interpreted via
different equally acceptable ranges of quantification; instead, there
are many such equally acceptable ranges and corresponding extensions
for "logical truth", which may be chosen as a function of
contextual interests. This means that, for the logical pluralist, many
sets have a right to be called "the set of logical truths"
(and "the set of logical necessities"), each in the
appropriate
context.[3]
(See the entry on
logical pluralism.)
On another recent understanding of logical necessity as a species of
generality, proposed by Rumfitt (2015), the necessity of a logical
truth consists just in its being usable under all sets of
subject-specific ways of drawing implications (provided these sets
satisfy certain structural rules); or, more roughly, just in its being
applicable no matter what sort of reasoning is at stake. On this view,
a more substantive understanding of the modality at stake in logical
truth is again not required. It may be noted that, although he
postulates a variety of subject-specific implication relations,
Rumfitt rejects pluralism about logical truth in the sense of Beall
and Restall (see his 2015, p. 56, n. 23), and in fact thinks that the
set of logical truths in a standard quantificational language is
characterized by the standard classical logic.
Yet another sense in which it has been thought that truths like
(1)-(3), and logical truths quite generally, "could" not
be false or "must" be true is epistemic. It is an old
observation, going at least as far back as Plato, that some truths
count as intuitively known by us even in cases where we don't seem to
have any empirical grounds for them. Truths that are knowable on
non-empirical grounds are called *a priori* (an expression that
begins to be used with this meaning around the time of Leibniz; see
e.g. his "Primae Veritates", p. 518). The axioms and
theorems of mathematics, the lexicographic and stipulative
definitions, and also the paradigmatic logical truths, have been given
as examples. If it is accepted that logical truths are *a
priori*, it is natural to think that they must be true or could
not be false at least partly in the strong sense that their negations
are incompatible with what we are able to know non-empirically.
Assuming that such *a priori* knowledge exists in some way or
other, much recent philosophy has occupied itself with the issue of
how it is possible. One traditional ("rationalist") view
is that the mind is equipped with a special capacity to perceive
truths through the examination of the relations between pure ideas or
concepts, and that the truths reached through the correct operation of
this capacity count as known *a priori*. (See, e.g., Leibniz's
"Discours de Métaphysique", §§23 ff.;
Russell 1912, p. 105; BonJour 1998 is a very recent example of a view
of this sort.) An opposing traditional ("empiricist") view
is that there is no reason to postulate that capacity, or even that
there are reasons not to postulate it, such as that it is
"mysterious". (See the entry on
rationalism vs. empiricism.)
Some philosophers, empiricists and otherwise, have attempted to
explain *a priori* knowledge as arising from some sort of
convention or tacit agreement to assent to certain sentences (such as
(1)) and use certain rules. Hobbes in his objections to Descartes'
*Meditations* ("Third Objections", IV, p. 608)
proposes a wide-ranging conventionalist view. The later Wittgenstein
(on one interpretation) and Carnap are distinguished proponents of
"tacit agreement" and conventionalist views (see e.g.
Wittgenstein 1978, I.9, I.142; Carnap 1939, §12, and 1963, p.
916, for informal exposition of Carnap's views; see also Coffa 1991,
chs. 14 and 17). Strictly speaking, Wittgenstein and Carnap think that
logical truths do not express propositions at all, and are just
vacuous sentences that for some reason or other we find useful to
manipulate; thus it is only in a somewhat diminished sense that we can
speak of (*a priori*) knowledge of them. However, in typical
recent exponents of "tacit agreement" and conventionalist
views, such as Boghossian (1997), the claim that logical truths do not
express propositions is rejected, and it is accepted that the
existence of the agreement provides full-blown *a priori*
knowledge of those propositions.
The "rational capacity" view and the
"conventionalist" view agree that, in a broad sense, the
epistemic ground of logical truths resides in our ability to analyze
the meanings of their expressions, be these understood as conventions
or as objective ideas. For this reason it can be said that they
explain the apriority of logical truths in terms of their analyticity.
(See the entry on the
analytic/synthetic distinction.)
Kant's explanation of the apriority of logical truths has seemed
harder to
extricate.[4]
A long line of commentators of Kant has noted that, if Kant's view is
that all logical truths are analytic, this would seem to be in tension
with his characterizations of analytic truths. Kant characterizes
analytic truths as those where the concept of the predicate is
contained in or identical with the concept of the subject, and, more
fundamentally, as those whose denial is contradictory. But it has
appeared to those commentators that these characterizations, while
applying to strict tautologies such as "Men are men" or
"Bearded men are men", would seem to leave out much of
what Kant himself counts as logically true, including syllogisms such
as (2) (see e.g. Mill 1843, bk. II, ch. vi, §5; Husserl 1901,
vol. II, pt. 2, §66; Kneale and Kneale 1962, pp. 357-8;
Parsons 1967; Maddy 1999). This and the apparent lack of clear
pronouncements of Kant on the issue has led at least Maddy (1999) and
Hanna (2001) to consider (though not accept) the hypothesis that Kant
viewed some logical truths as synthetic *a priori*. On an
interpretation of this sort, the apriority of many logical truths
would be explained by the fact that they would be required by the
cognitive structure of the transcendental subject, and specifically by
the forms of
judgment.[5]
But the standard interpretation is to attribute to Kant the view that
all logical truths are analytic (see e.g. Capozzi and Roncaglia 2009).
On an interpretation of this sort, Kant's forms of judgment may be
identified with logical concepts susceptible of analysis (see e.g.
Allison 1983, pp. 126ff.). An extended defense of the interpretation
that Kant viewed all logical truths as analytic, including a
vindication of Kant against the objections of the line of commentators
mentioned above, can be found in Hanna (2001), §3.1. A
substantively Kantian contemporary theory of the epistemology of logic
and its roots in cognition is developed in Hanna (2006); this theory
does not seek to explain the apriority of logic in terms of its
analyticity, and appeals instead to a specific kind of logical
intuition and a specific cognitive logic faculty. (Compare also the
anti-aprioristic and anti-analytic but broadly Kantian view of Maddy
2007, mentioned below.)
The early Wittgenstein shares with Kant the idea that the logical
expressions do not express meanings in the way that non-logical
expressions do (see 1921, 4.0312). Consistently with this view, he
claims that logical truths do not "say" anything (1921,
6.11). But he seems to reject conventionalist and "tacit
agreement" views (1921, 6.124, 6.1223). It is not that logical
truths do not say anything because they are mere instruments for some
sort of extrinsically useful manipulation; rather, they
"show" the "logical properties" that the world
has independently of our decisions (1921, 6.12, 6.13). It is unclear
how apriority is explainable in this framework. Wittgenstein calls the
logical truths analytic (1921, 6.11), and says that "one can
recognize in the symbol alone that they are true" (1921, 6.113).
He seems to have in mind the fact that one can "see" that
a logical truth of truth-functional logic must be valid by inspection
of a suitable representation of its truth-functional content (1921,
6.1203, 6.122). But the extension of the idea to quantificational
logic is problematic, despite Wittgenstein's efforts to reduce
quantificational logic to truth-functional logic; as we now know,
there is no algorithm for deciding if a quantificational sentence is
valid. What is perhaps more important, Wittgenstein gives no
discernible explanation of why in principle all the "logical
properties" of the world should be susceptible of being
reflected in an adequate notation.
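The decision procedure Wittgenstein gestures at for truth-functional logic can be made fully concrete: validity is mechanically checkable by running through every assignment of truth values. The following Python sketch (purely illustrative; the function names are ours, not drawn from the literature discussed) checks formula (1) in its propositional form, \(((p \rightarrow q) \wedge p) \rightarrow q\). By the Church-Turing undecidability results, no analogous exhaustive procedure exists for quantificational validity.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Decide truth-functional validity by brute-force inspection:
    a formula is valid iff it comes out true under every assignment
    of truth values to its propositional variables."""
    return all(formula(*vals)
               for vals in product([True, False], repeat=num_vars))

# Formula (1) in propositional form: ((p -> q) & p) -> q
modus_ponens = lambda p, q: (not ((not p or q) and p)) or q

print(is_tautology(modus_ponens, 2))               # True
print(is_tautology(lambda p, q: (not p) or q, 2))  # False: p -> q alone is not valid
```

The procedure inspects \(2^n\) rows for \(n\) variables, which is finite for any truth-functional formula; it is precisely this finiteness that fails once quantifiers over arbitrary domains enter the picture.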
Against the "rational capacity",
"conventionalist", Kantian and early Wittgensteinian
views, other philosophers, especially radical empiricists and
naturalists (not to speak of epistemological skeptics), have rejected
the claim that *a priori* knowledge exists (hence by
implication also the claim that analytic propositions exist), and they
have proposed instead that there is only an illusion of apriority.
Often this rejection has been accompanied by criticism of the other
views. J.S. Mill thought that propositions like (2) seem *a
priori* merely because they are particular cases of early and very
familiar generalizations that we derive from experience, like
"For all suitable \(P\), \(Q\) and \(R\), if no \(Q\) is \(R\)
and some \(P\)s are \(Q\)s, then some \(P\)s are not \(R\)" (see
Mill 1843, bk. II, ch. viii). Bolzano held a similar view (see Bolzano
1837, §315). Quine (1936, §III) famously criticized the
Hobbesian view, noting that since the logical truths are infinite in
number, our ground for them must not lie just in a finite number of
explicit conventions, for logical rules are presumably needed to
derive an infinite number of logical truths from a finite number of
conventions (an argument derived from Carroll 1895; see Soames 2018,
ch. 10, for exposition and Gómez-Torrente 2019 for criticism of
the argument). Later Quine (especially 1954) criticized Carnap's
conventionalist view, largely on the grounds that there seems to be no
non-vague distinction between conventional truths and truths that are
tacitly left open for refutation, and that to the extent that some
truths are the product of convention or "tacit agreement",
such agreement is characteristic of many scientific hypotheses and
other postulations that seem paradigmatically non-analytic. (See Grice
and Strawson 1956 and Carnap 1963 for reactions to these criticisms.)
Quine (especially 1951) also argued that accepted sentences in
general, including paradigmatic logical truths, can be best seen as
something like hypotheses that are used to deal with experience, any
of which can be rejected if this helps make sense of the empirical
world (see Putnam 1968 for a similar view and a purported example). On
this view there cannot be strictly *a priori* grounds for any
truth. Three recent subtle anti-aprioristic positions are Maddy's
(2002, 2007), Azzouni's (2006, 2008), and Sher's (2013). For Maddy,
logical truths are *a posteriori*, but they cannot be
disconfirmed merely by observation and experiment, since they form
part of very basic ways of thinking of ours, deeply embedded in our
conceptual machinery (a conceptual machinery that is structurally
similar to Kant's postulated transcendental organization of the
understanding). Similarly, for Azzouni logical truths are equally
*a posteriori*, though our sense that they must be true comes
from their being psychologically deeply ingrained; unlike Maddy,
however, Azzouni thinks that the logical rules by which we reason are
opaque to introspection. Sher provides an attempt at combining a
Quinean epistemology of logic with a commitment to a metaphysically
realist view of the modal ground of logical truth.
One way in which *a priori* knowledge of a logical truth such
as (1) would be possible would be if *a priori* knowledge of
the fact that (1) is a logical truth, or of the universal
generalization "For all suitable \(a\), \(P\), \(b\) and \(Q\),
if \(a\) is \(P\) only if \(b\) is \(Q\), and \(a\) is \(P\), then
\(b\) is \(Q\)" were possible. One especially noteworthy kind of
skeptical consideration in the epistemology of logic is that the
possibility of inferential *a priori* knowledge of these facts
seems to face a problem of circularity or of infinite regress. If we
are to obtain inferential *a priori* knowledge of those facts,
then we will presumably follow logical rules at some point, including
possibly the rule of *modus ponens* whose very correctness
might well depend in part on the fact that (1) is a logical truth or
on the truth of the universal generalization "For all suitable
\(a\), \(P\), \(b\) and \(Q\), if \(a\) is \(P\) only if \(b\) is
\(Q\), and \(a\) is \(P\), then \(b\) is \(Q\)". In any case, it
seems clear that not all claims of this latter kind, expressing that a
certain truth is a logical truth or that a certain logical schema is
truth-preserving, could be given an *a priori* inferential
justification without the use of some of the same logical rules whose
correctness they might be thought to codify. The point can again be
reasonably derived from Carroll (1895). Some of the recent literature
on this consideration, and on anti-skeptical rejoinders, includes
Dummett (1973, 1991) and Boghossian (2000).
### 1.2 Formality
On most views, even if it were true that logical truths are true in
all counterfactual circumstances, *a priori*, and analytic,
this would not give sufficient conditions for a truth to be a logical
truth. On most views, a logical truth also has to be in some sense
"formal", and this implies at least that all truths that
are replacement instances of its form are logical truths too (and
hence, on the assumption of the preceding sentence, true in all
counterfactual circumstances, *a priori*, and analytic). To use
a slight modification of an example of Albert of Saxony (quoted by
Bochenski 1956, §30.07), "If a widow runs, then a
female runs" should be true in all counterfactual circumstances,
*a priori*, and analytic if any truth is. However, "If a
widow runs, then a log runs" is a replacement instance of its
form, and in fact it even has the same form on any view of logical
form (something like "If a \(P\) \(Q\)s, then an \(R\)
\(Q\)s"), but it is not even true *simpliciter*. So on
most views, "If a widow runs, then a female runs" is not a
logical truth.
For philosophers who accept the idea of formality, as we said above,
the logical form of a sentence is a certain schema in which the
expressions that are not schematic letters are widely applicable
across different areas of
discourse.[6]
If the schema is the form of a logical truth, all of its replacement
instances are logical truths. The idea that logic is especially
concerned with (replacement instances of) schemata is of course
evident beginning with Aristotle and the Stoics, in all of whom the
word usually translated by "figure" is precisely
*schema*. In Aristotle a figure is actually an even more
abstract form of a group of what we would now call
"schemata", such as (2'). Our schemata are closer to
what in the Aristotelian syllogistic are the moods; but there seems to
be no word for "mood" in Aristotle (except possibly
*ptoseon* in 42b30 or *tropon* in 43a10; see Smith 1989,
pp. 148-9), and thus no general reflection on the notion of
formal schemata. There is explicit reflection on the contrast between
the formal schemata or moods and the matter (*hyle*) of
*syllogismoi* in Alexander of Aphrodisias (53.28ff., quoted by
Bochenski 1956, §24.06), and there has been ever since. The
matter consists of the values of the schematic letters.
The idea that the non-schematic expressions in logical forms, i.e. the
logical expressions, are widely applicable across different areas of
discourse is also present from the beginning of logic, and recurs in
all the great logicians. It appears indirectly in many passages from
Aristotle, such as the following: "All the sciences are related
through the common things (I call common those which they use in order
to demonstrate from them, but not those that are demonstrated in them
or those about which something is demonstrated); and logic is related
to them all, as it is a science that attempts to demonstrate
universally the common things" (*Posterior Analytics*,
77a26-9); "we don't need to take hold of the things of all
refutations, but only of those that are characteristic of logic; for
these are common to every technique and ability"
(*Sophistical Refutations*, 170a34-5). (In these texts
"logic" is an appropriate translation of
*dialektike*; see Kneale and Kneale 1962, I, §3, who
inform us that *logike* is used for the first time with its
current meaning in Alexander of Aphrodisias.) Frege says that
"the firmest proof is obviously the purely logical, which,
prescinding from the particularity of things, is based solely on the
laws on which all knowledge rests" (1879, p. 48; see also 1885,
where the universal applicability of the arithmetical concepts is
taken as a sign of their logicality). The same idea is conspicuous as
well in Tarski (1941, ch. II, §6).
That logical expressions include paradigmatic cases like
"if", "and", "some",
"all", etc., and that they must be widely applicable
across different areas of discourse is what we might call "the
minimal thesis" about logical expressions. But beyond this there
is little if any agreement about what generic feature makes an
expression logical, and hence about what determines the logical form
of a sentence. Most authors sympathetic to the idea that logic is
formal have tried to go beyond the minimal thesis. It would be
generally agreed that being widely applicable across different areas
of discourse is only a necessary, not a sufficient, property of logical
expressions; for example, presumably most prepositions are widely
applicable, but they are not logical expressions on any implicit
generic notion of a logical expression. Attempts to enrich the notion
of a logical expression have typically sought to provide further
properties that collectively amount to necessary and sufficient
conditions for an expression to be logical.
One idea that has been used in such characterizations, and that is
also present in Aristotle, is that logical expressions do not,
strictly speaking, signify anything; or, that they do not signify
anything in the way that substantives, adjectives and verbs signify
something. "Logic [*dialektike*] is not a science of
determined things, or of any one genus" (*Posterior
Analytics*, 77a32-3). We saw that the idea was still present
in Kant and the early Wittgenstein. It reemerged in the Middle Ages.
The main sense of the word "syncategorematic" as applied
to expressions was roughly this semantic sense (see Kretzmann 1982,
pp. 212 ff.). Buridan and other late medieval logicians proposed that
categorematic expressions constitute the "matter" of
sentences while the syncategorematic expressions constitute their
"form" (see the text quoted by Bochenski 1956,
§26.11). (In a somewhat different, earlier, grammatical sense of
the word, syncategorematic expressions were said to be those that
cannot be used as subjects or predicates in categorical propositions;
see Kretzmann 1982, pp. 211-2.) The idea of syncategorematicity
is somewhat imprecise, but there are serious doubts that it can serve
to characterize the idea of a logical expression, whatever this may
be. Most prepositions and adverbs are presumably syncategorematic, but
they are also presumably non-logical expressions. Conversely,
predicates such as "are identical", "is identical
with itself", "is both identical and not identical with
itself", etc., which are resolutely treated as logical in recent
logic, are presumably categorematic. (They are of course categorematic
in the grammatical sense, in which prepositions and adverbs are
equally clearly syncategorematic.)
Most other proposals have tried to delineate in some other way the
Aristotelian idea that the logical expressions have some kind of
"insubstantial" meaning, so as to use it as a necessary
and sufficient condition for logicality. One recent suggestion is that
logical expressions are those that do not allow us to distinguish
different individuals. One way in which this has been made precise is
through the characterization of logical expressions as those whose
extension or denotation over any particular domain of individuals is
invariant under permutations of that domain. (See Tarski and Givant
1987, p. 57, and Tarski 1966; for related proposals see also McCarthy
1981, Sher 1991, ch. 3, McGee 1996, Feferman 1999, Bonnay 2008, Woods
2016 and Griffiths and Paseau 2022, among others.) A permutation of a
domain is a one-to-one correspondence between the domain and itself.
For example, if \(D\) is the domain {Aristotle, Caesar, Napoleon,
Kripke}, one permutation is the correspondence that assigns each man
to himself; another is the correspondence \(P\) that assigns Caesar to
Aristotle (in mathematical notation,
\(P(\text{Aristotle})=\text{Caesar}\)), Napoleon to Caesar, Kripke to
Napoleon, and Aristotle to Kripke. That the extension of an expression
over a domain is invariant under a permutation of that domain means
that the induced image of that extension under the permutation is the
extension itself (the "induced image" of an extension
under a permutation \(Q\) is what the extension becomes when in place
of each object \(o\) one puts the object \(Q(o)\)). The extension of
"philosopher" over \(D\) is not invariant under the
permutation \(P\) above, for that extension is \(\{\text{Aristotle},
\text{Kripke}\}\), whose induced image under \(P\) is
\(\{\text{Caesar}, \text{Aristotle}\}\). This is favorable to the
proposal, for "philosopher" is certainly not widely
applicable, and so non-logical on most views. On the other hand, the
predicate "are identical" has as its extension over \(D\)
the set of pairs
>
> \(\{ \langle \text{Aristotle, Aristotle} \rangle, \langle
> \text{Caesar, Caesar} \rangle, \langle \text{Napoleon, Napoleon}
> \rangle, \langle \text{Kripke, Kripke} \rangle\};\)
>
its induced image under \(P\), and under any other permutation of
\(D\), is that very same set of pairs (as the reader may check); so
again this is favorable to the proposal. (Other paradigmatic logical
expressions receive more complicated extensions over domains, but the
extensions they receive are invariant under permutations. For example,
on one usual way of understanding the extension of "and"
over a domain, this is the function that assigns, to each pair
\(\langle S\_1, S\_2 \rangle\), where \(S\_1\) and \(S\_2\) are sets of
infinite sequences of objects drawn from \(D\), the intersection of
\(S\_1\) and \(S\_2\); and this function is permutation invariant.) One
problem with the proposal is that many expressions that seem clearly
non-logical, because they are not widely applicable, are nevertheless
invariant under permutations, and thus unable to distinguish different
individuals. The simplest examples are perhaps non-logical predicates
that have an empty extension over any domain, and hence have empty
induced images as well. "Male widow" is one example;
versions of it can be used as counterexamples to the different
versions of the idea of logicality as permutation invariance (see
Gómez-Torrente 2002), and it's unclear that the proponent of
the idea can avoid the problem in any non ad hoc way.
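For finite domains the invariance test just described is entirely mechanical, and the worked example can be checked directly. The Python sketch below (illustrative only; the function and variable names are ours) computes induced images under the permutation \(P\) of the four-element domain \(D\) and confirms that "philosopher" fails the test while "are identical" passes it, as does the empty extension of "male widow", which is the counterexample noted above.

```python
def induced_image(extension, perm):
    """Image of a set of tuples under a permutation of the domain:
    replace each object o in each tuple by perm[o]."""
    return {tuple(perm[x] for x in t) for t in extension}

def is_invariant(extension, perm):
    """An extension is invariant under perm iff its induced image
    under perm is the extension itself."""
    return induced_image(extension, perm) == extension

D = ["Aristotle", "Caesar", "Napoleon", "Kripke"]
# The permutation P from the text: P(Aristotle) = Caesar, etc.
P = {"Aristotle": "Caesar", "Caesar": "Napoleon",
     "Napoleon": "Kripke", "Kripke": "Aristotle"}

philosopher = {("Aristotle",), ("Kripke",)}   # extension of "philosopher" over D
identical = {(x, x) for x in D}               # extension of "are identical" over D
male_widow = set()                            # empty over any domain

print(is_invariant(philosopher, P))  # False: image is {Caesar, Kripke's successor...}
print(is_invariant(identical, P))    # True
print(is_invariant(male_widow, P))   # True, though "male widow" is non-logical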
Another popular recent way of delineating the Aristotelian intuition
of the semantic "insubstantiality" of logical expressions
appeals to the concept of "pure inferentiality". The idea
is that logical expressions are those whose meaning, in some sense, is
given by "purely inferential" rules. (See Kneale 1956,
Hacking 1979, Peacocke 1987, Hodes 2004, among others.) A necessary
property of purely inferential rules is that they regulate only
inferential transitions between verbal items, not between extra-verbal
assertibility conditions and verbal items, or between verbal items and
actions licensed by those items. A certain inferential rule licenses
you to say "It rains" when it rains, but it's not
"purely inferential". A rule that licenses you to say
"A is a female whose husband died before her" when someone
says "A is a widow", however, is not immediately
disqualified as purely inferential. Now, presumably in some sense the
meaning of "widow" is given by this last rule together
perhaps with the converse rule, that licenses you to say "A is a
widow" when someone says "A is a female whose husband died
before her". But "widow" is not a logical
expression, since it's not widely applicable; so one needs to
postulate more necessary properties that "purely
inferential" rules ought to satisfy. A number of such conditions
are postulated in the relevant literature (see e.g. Belnap 1962 (a
reply to Prior 1960), Hacking 1979 and Hodes 2004). However, even when
the notion of pure inferentiality is strengthened in these ways,
problems remain. Most often the proposal is that an expression is
logical just in case certain purely inferential rules give its whole
meaning, including its sense, or the set of aspects of its use that
need to be mastered in order to understand it (as in Kneale 1956,
Peacocke 1987 and Hodes 2004). However, it seems clear that some
paradigmatic logical expressions have extra sense attached to them
that is not codifiable purely inferentially. For example, inductive
reasoning involving "all" seems to be part of the sense of
this expression, but it's hard to see how it could be codified by
purely inferential rules (as noted by Sainsbury 1991, pp. 316-7;
see also Dummett 1991, ch. 12). A different version of the proposal
consists in saying that an expression is logical just in case certain
purely inferential rules that are part of its sense suffice to
determine its extension (as in Hacking 1979). But it seems clear that
if the extension of, say, "are identical" is determined by
a certain set of purely inferential rules that are part of its sense,
then the extension of "are identical and are not male
widows" is equally determined by the same rules, which arguably
form part of its sense; yet "are identical and are not male
widows" is not a logical expression (see Gómez-Torrente
2002).
In view of problems of these and other sorts, some philosophers have
proposed that the concept of a logical expression is not associated
with necessary and sufficient conditions, but only with some necessary
condition related to the condition of wide applicability, such as the
condition of "being very relevant for the systematization of
scientific reasoning" (see Warmbrod 1999 for a position of
this type). Others (Gómez-Torrente 2021) have proposed that
there may be a set of necessary and sufficient conditions, if these
are not much related to the idea of semantic
"insubstantiality" and are instead pragmatic and suitably
vague; for example, many expressions are excluded directly by the
condition of wide applicability; and prepositions are presumably
excluded by some such implicit condition as "a logical
expression must be one whose study is useful for the resolution of
significant problems and fallacies in reasoning". To be sure,
these proposals give up on the extended intuition of semantic
"insubstantiality", and may be somewhat unsatisfactory for
that reason.
Some philosophers have reacted even more radically to the problems of
usual characterizations, claiming that the distinction between logical
and non-logical expressions must be vacuous, and thus rejecting the
notion of logical form altogether. (See e.g. Orayen 1989, ch. 4,
§2.2; Etchemendy 1990, ch. 9; Read 1994; Priest 2001.) These
philosophers typically think of logical truth as a notion roughly
equivalent to that of analytic truth *simpliciter*. But they
are even more liable to the charge of giving up on extended intuitions
than the proposals of the previous paragraph.
For more thorough treatments of the ideas of formality and of a
logical expression see the entry
logical constants,
and MacFarlane 2000.
## 2. The Mathematical Characterization of Logical Truth
### 2.1 Formalization
One important reason for the successes of modern logic is its use of
what has been called "formalization". This term is usually
employed to cover several distinct (though related) phenomena, all of
them present in Frege (1879). One of these is the use of a completely
specified set of artificial symbols to which the logician assigns
unambiguous meanings, related to the meanings of corresponding natural
language expressions, but much more clearly delimited and stripped
from the notes that in those natural language expressions seem (to the
logician) irrelevant to truth-conditional content; this is especially
true of symbols meant to represent the logical expressions of natural
language. Another phenomenon is the stipulation of a completely
precise grammar for the formulae constructed out of the artificial
symbols, formulae that will be "stripped" versions of
correlate sentences in natural language; this grammar amounts to an
algorithm for producing formulae starting from the basic symbols. A
third phenomenon is the postulation of a deductive calculus with a
very clear specification of axioms and rules of inference for the
artificial formulae (see the next section); such a calculus is
intended to represent in some way deductive reasoning with the
correlates of the formulae, but unlike ordinary deductions,
derivations in the calculus contain no steps that are not definite
applications of the specified rules of inference.
Instead of attempting to characterize the logical truths of a natural
language like English, the Fregean logician attempts to characterize
the artificial formulae that are "stripped" correlates of
those logical truths in a Fregean formalized language. In first-order
Fregean formalized languages, among these formulae one finds
artificial correlates of (1), (2) and (3), things like
* \(((\text{Bad}(\textit{death}) \rightarrow \text{Good}(\textit{life}))
\ \& \ \text{Bad}(\textit{death})) \rightarrow
\text{Good}(\textit{life}).\)
* \((\forall
x(\text{Desire}(x) \rightarrow \neg \text{Voluntary}(x)) \ \&\
\exists x(\text{Belief}(x) \ \&\ \text{Desire}(x)))\)
\(\rightarrow \exists x(\text{Belief}(x) \ \&\ \neg
\text{Voluntary}(x)).\)
* \((\text{Cat}(\textit{drasha}) \ \&\ \forall x(\text{Cat}(x)
\rightarrow \text{Mysterious}(x)))\)
\(\rightarrow \text{Mysterious}(\textit{drasha}).\)
(See the entry on
logic, classical.)
Fregean formalized languages include also classical higher-order
languages. (See the entry on
logic, second-order and higher-order.)
The logical expressions in these languages are standardly taken to be
the symbols for the truth-functions, the quantifiers, identity and
other symbols definable in terms of those (but there are dissenting
views on the status of the higher-order quantifiers; see 2.4.3
below).
The restriction to artificial formulae raises a number of questions
about the exact value of the Fregean enterprise for the demarcation of
logical truths in natural language; much of this value depends on how
many and how important are perceived to be the notes stripped from the
natural language expressions that are correlates of the standard
logical expressions of formalized languages. But whatever one's view
of the exact value of formalization, there is little doubt that it has
been very illuminating for logical purposes. One reason is that it's
sometimes relatively clear that the stripped notes are irrelevant to
truth-conditional content (this is especially true of the use of
natural language logical expressions for doing mathematics). Another
reason is that the grammar and meaning of the artificial formulae are
so well delimited that it has been possible to develop proposed
characterizations of logical truth that use only concepts
of standard mathematics. This in turn has allowed the study of the
characterized notions by means of standard mathematical techniques.
The next two sections describe the two main approaches to
characterization in broad
outline.[7]
### 2.2 Derivability
We just noted that the Fregean logician's formalized grammar amounts
to an algorithm for producing formulae from the basic artificial
symbols. This is meant very literally. As was clear to mathematical
logicians from very early on, the basic symbols can be seen as (or
codified by) natural numbers, and the formation rules in the
artificial grammar can be seen as (or codified by) simple computable
arithmetical operations. The grammatical formulae can then be seen as
(or codified by) the numbers obtainable from the basic numbers after
some finite series of applications of the operations, and thus their
set is characterizable in terms of concepts of arithmetic and set
theory (in fact arithmetic suffices, with the help of some
tricks).
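The coding idea can be made concrete with a minimal sketch (ours, not part of the entry's formal apparatus): propositional variables get odd codes, and the formation rules for negation and the conditional become simple computable arithmetical operations on codes. The particular coding scheme below is an illustrative choice, not a standard one.

```python
# A minimal Goedel-style coding sketch for a propositional language.
# Basic symbols (the variables p0, p1, ...) are codified by odd numbers;
# the formation rules become computable arithmetical operations.

def code_var(i):
    # variable p_i gets code 2*i + 1 (always odd)
    return 2 * i + 1

def code_neg(a):
    # the negation of the formula coded by a: 2 * 3^a
    # (even, divisible by 2 exactly once, so uniquely decodable)
    return 2 * 3 ** a

def code_cond(a, b):
    # the conditional (A -> B): 4 * 3^a * 5^b (divisible by 4)
    return 4 * 3 ** a * 5 ** b

# The set of codes of grammatical formulae is then the closure of the
# variable codes under these operations -- a set characterizable in
# purely arithmetical terms.
p0, p1 = code_var(0), code_var(1)
f = code_cond(code_neg(p0), p1)  # code of (~p0 -> p1)
```

Derivability can be handled the same way: axioms are certain formula codes, and each rule of inference is one more computable operation on codes.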
Exactly the same is true of the set of formulae that are derivable in
a formalized deductive calculus. A formula \(F\) is derivable in a
Fregean calculus \(C\) just in case \(F\) is obtainable from the
axioms of \(C\) after some finite series of applications of the rules
of inference of \(C\). But the axioms are certain formulae built by
the process of grammatical formation, so they can be seen as (or
codified by) certain numbers; and the rules of inference can again be
seen as (or codified by) certain computable arithmetical operations.
So the derivable formulae can be seen as (or codified by) the numbers
obtainable from the axiom numbers after some finite series of
applications of the inference operations, and thus their set is again
characterizable in terms of concepts of standard mathematics (again
arithmetic suffices).
In the time following Frege's revolution, there appears to have been a
widespread belief that the set of logical truths of any Fregean
language could be characterized as the set of formulae derivable in
some suitably chosen calculus (hence, essentially, as the set of
numbers obtainable by certain arithmetical operations). Frege himself
says, speaking of the higher-order language in his
"Begriffsschrift", that through formalization (in the
third sense above) "we arrive at a small number of laws in
which, if we add those contained in the rules, the content of all the
laws is included, albeit in an undeveloped state" (Frege 1879,
§13). The idea follows straightforwardly from Russell's
conception of mathematics and logic as identical (see Russell 1903,
ch. I, §10; Russell 1920, pp. 194-5) and his thesis that
"by the help of ten principles of deduction and ten other
premises of a general logical nature (...), all mathematics can
be strictly and formally deduced" (Russell 1903, ch. I,
§4). See also Bernays (1930, p. 239): "[through
formalization] it becomes evident that all logical inference
(...) can be reduced to a limited number of logical elementary
processes that can be exactly and completely enumerated". In the
opening paragraphs of his paper on logical consequence, Tarski (1936a,
1936b) says that the belief was prevalent before the appearance of
Gödel's incompleteness theorems (see subsection 2.4.3 below for
the bearing of these theorems on this issue). In recent times,
apparently due to the influence of Tarskian arguments such as the one
mentioned towards the end of subsection 2.4.3, the belief in the
adequacy of derivability characterizations seems to have waned (see
e.g. Prawitz 1985 for a similar appraisal).
### 2.3 Model-theoretic Validity
Even on the most cautious way of understanding the modality present in
logical truths, a sentence is a logical truth only if no sentence
which is a replacement instance of its logical form is false. (This
idea is only rejected by those who reject the notion of logical form.)
It is a common observation that this property, even if it is
necessary, is not clearly sufficient for a sentence to be a logical
truth. Perhaps there is a sentence that has this property but is not
really logically true, because one could assign some *unexpressed
meanings* to the variables and the schematic letters in its
logical form, and under those meanings the form would be a false
sentence.[8]
On the other hand, it is not clearly incorrect to think that a
sentence is a logical truth if no collective assignment of meanings to
the variables and the schematic letters in its logical form would turn
this form into a false sentence. Say that a sentence is
*universally valid* when it has this property. A standard
approach to the mathematical characterization of logical truth,
alternative to the derivability approach, always uses some version of
the property of universal validity, proposing it in each case as both
necessary and sufficient for logical truth. Note that if a sentence is
universally valid then, even if it's not logically true, it will be
*true*. So all universally valid sentences are correct at least
in this sense.
Apparently, the first to use a version of universal validity and
explicitly propose it as both necessary and sufficient for logical
truth was Bolzano (see Bolzano 1837, §148; and Coffa 1991, pp.
33-4 for the claim of priority). The idea is also present in
other mathematicians of the nineteenth century (see e.g. Jané
2006), and was common in Hilbert's school. Tarski (1936a, 1936b) was
the first to indicate in a fully explicit way how the version of
universal validity used by the mathematicians could itself be given a
characterization in terms of concepts of standard mathematics, in the
case of Fregean formalized languages with an algorithmic grammar.
Essentially Tarski's characterization is widely used today in the form
of what is known as the *model-theoretic notion of validity*,
and it seems fair to say that it is usually accepted that this notion
gives a reasonably good delineation of the set of logical truths for
Fregean languages.
The notion of model-theoretic validity mimics the notion of universal
validity, but is defined just with the help of the set-theoretic
apparatus developed by Tarski (1935) for the characterization of
semantic concepts such as satisfaction, definability, and truth. (See
the entry on
Tarski's truth definitions.)
Given a Fregean language, a *structure* for the language is a
set-theoretical object composed of a set-domain taken together with an
assignment of extensions drawn from that domain to its non-logical
constants. A structure is meant by most logicians to represent an
assignment of meanings: its domain gives the range or
"meaning" of the first-order variables (and induces ranges
of the higher-order variables), and the extensions that the structure
assigns to the non-logical constants are "meanings" that
these expressions could take. Using the Tarskian apparatus, one
defines for the formulae of the Fregean language the notion of truth
in (or satisfaction by) a set-theoretic structure (with respect to an
infinite sequence assigning an object of the domain to each variable).
And finally, one defines a formula to be model-theoretically valid
just in case it is true in all structures for its language (with
respect to all infinite sequences). Let's abbreviate "\(F\) is
true in all structures" as "MTValid\((F)\)". The
model-theoretic characterization makes it clear that
"MTValid\((F)\)" is definable purely in terms of concepts
of set theory. (The notion of model-theoretic validity for Fregean
languages is explained in thorough detail in the entries on
classical logic
and
second-order and higher-order logic;
see also the entry on
model theory.)[9]
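For the propositional fragment the definition can be sketched very directly, since there a "structure" reduces to an assignment of truth values to the sentence letters, and the structures for a given formula can be exhaustively enumerated. The sketch below is ours (the names are not Tarski's), and it does not carry over to full first-order languages, where the quantification is over all set-domains and cannot be exhausted by finite enumeration.

```python
# "True in all structures" for the propositional fragment, where a
# structure is just a truth-value assignment to the sentence letters.
from itertools import product

def mt_valid(formula, letters):
    """formula maps an assignment (a dict letter -> bool) to a bool."""
    return all(formula(dict(zip(letters, vals)))
               for vals in product([True, False], repeat=len(letters)))

# A modus-ponens-shaped formula, ((p -> q) & p) -> q:
f = lambda a: (not ((not a["p"] or a["q"]) and a["p"])) or a["q"]
# A non-valid formula, refuted by the structure assigning False to both:
g = lambda a: a["p"] or a["q"]

assert mt_valid(f, ["p", "q"])       # MTValid(f)
assert not mt_valid(g, ["p", "q"])   # a structure makes g false
```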
(If \(F\) is a formula of a first-order language without identity,
then if no replacement instance of the form of \(F\) is false, this is
a sufficient condition for \(F\)'s being model-theoretically valid. As
it turns out, if \(F\) is not model-theoretically valid, then some
replacement instance of its form whose variables range over the
natural numbers and whose non-logical constants are arithmetical
expressions will be false. This can be justified by means of a
refinement of the Löwenheim–Skolem theorem. See the entry on
logic, classical,
and Quine 1970, ch. 4, for discussion and references. No similar
results hold for higher-order languages.)
The "MT" in "MTValid\((F)\)" stresses the fact
that model-theoretic validity is different from universal validity.
The notion of a meaning assignment which appears in the description of
universal validity is a very imprecise and intuitive notion, while the
notion of a structure appearing in a characterization of
model-theoretic validity is a fairly precise and technical one. It
seems clear that the notion of a structure for Fregean formalized
languages is minimally reasonable, in the sense that a structure
models the power of one or several meaning assignments to make false
(the logical form of) some sentence. As we will mention later, the
converse property, that each meaning assignment's validity-refuting
power is modeled by some structure, is also a natural but more
demanding requirement on a notion of structure.
### 2.4 The Problem of Adequacy
The fact that the notions of derivability and model-theoretic validity
are definable in standard mathematics seems to have been a very
attractive feature of them among practicing logicians. But this
attractive feature of course does not justify by itself taking either
notion as an adequate characterization of logical truth. On most
views, with a mathematical characterization of logical truth we
attempt to delineate a set of formulae possessing a number of
non-mathematical properties. Which properties these are varies
depending on our pretheoretic conception of, for example, the features
of modality and formality. (By "pretheoretic" it's not
meant "previous to any theoretical activity"; there could
hardly be a "pretheoretic" conception of logical truth in
this sense. In this context what's meant is "previous to the
theoretical activity of mathematical characterization".) But on
any such conception there will be external, non-mathematical criteria
that can be applied to evaluate the question whether a mathematical
characterization is adequate. In this last section we will outline
some of the basic issues and results on the question whether
derivability and model-theoretic validity are adequate in this
sense.
#### 2.4.1 Analysis and Modality
One frequent objection to the adequacy of model-theoretic validity is
that it does not provide a conceptual analysis of the notion of
logical truth, even for sentences of Fregean formalized languages (see
e.g. Pap 1958, p. 159; Kneale and Kneale 1962, p. 642; Field 1989, pp.
33-4; Etchemendy 1990, ch. 7). This complaint is especially
common among authors who feel inclined to identify logical truth and
analyticity *simpliciter* (see e.g. Kneale and Kneale,
*ibid*., Etchemendy 1990, p. 126). If one thinks of the concept
of logical truth simply as the concept of analytic truth, it is
especially reasonable to accept that the concept of logical truth does
not have much to do with the concept of model-theoretic validity, for
presumably this concept does not have much to do with the concept of
analyticity. To say that a formula is model-theoretically valid means
that there are no set-theoretic structures in which it is false;
hence, to say that a formula is not model-theoretically valid means
that there are set-theoretic structures in which it is false. But to
say that a sentence is or is not analytic presumably does not mean
anything about the existence or non-existence of set-theoretic
structures. Note that we could object to derivability on the same
grounds, for to say that a sentence is or is not analytic presumably
does not mean anything about its being or not being the product of a
certain algorithm (compare Etchemendy 1990, p. 3). (One further
peculiar, much debated claim in Etchemendy 1990 is that true claims of
the form "\(F\) is logically true" or "\(F\) is not
logically true" should themselves be logical truths (while the
corresponding claims "MTValid\((F)\)" and "Not
MTValid\((F)\)" are not logical truths). Etchemendy's claim is
perhaps defensible under a conception of logical truth as analyticity
*simpliciter*, but certainly doubtful on more traditional
conceptions of logical truth, on which the predicate "is a
logical truth" is not even a logical expression. See
Gómez-Torrente 1998/9 and Soames 1999, ch. 4 for
discussion.)
Analogous "no conceptual analysis" objections can be made
if we accept that the concept of logical truth has some other strong
modal notes unrelated to analyticity; for example, if we accept that
it is part of the concept of logical truth that logical truths are
true in all counterfactual circumstances, or necessary in some other
strong sense. Sher (1996) accepts something like the requirement that
a good characterization of logical truth should be given in terms of a
modally rich concept. However, she argues that the notion of
model-theoretic validity is strongly modal, and so the "no
conceptual analysis" objection is actually wrong: to say that a
formula is or is not model-theoretically valid is to make a
mathematical existence or non-existence claim, and according to Sher
these claims are best read as claims about the possibility and
necessity of structures. (Shalkowski 2004 argues that Sher's defense
of model-theoretic validity is insufficient, on the basis of a certain
metaphysical conception of logical necessity. Etchemendy 2008
relatedly argues that Sher's defense is based on inadequate
restrictions on the modality relevant to logical truth. See also the
critical discussion of Sher in Hanson 1997.) García-Carpintero
(1993) offers a view related to Sher's: model-theoretic validity
provides a (correct) conceptual analysis of logical truth for Fregean
languages, because the notion of a set-theoretical structure is in
fact a subtle refinement of the modal notion of a possible meaning
assignment. Azzouni (2006), ch. 9, also defends the view that
model-theoretic validity provides a correct conceptual analysis of
logical truth (though restricted to first-order languages), on the
basis of a certain deflationist conception of the (strong) modality
involved in logical truth.
The standard view of set-theoretic claims, however, does not see them
as strong modal claims--at best, some of them are modal in the
minimal sense that they are universal generalizations or particular
cases of these. But it is at any rate unclear that this is the basis
for a powerful objection to model-theoretic validity or to
derivability, for, even if we accept that the concept of logical truth
is strongly modal, it is unclear that a good characterization of
logical truth ought to be a conceptual analysis. An analogy might
help. It is widely agreed that the characterizations of the notion of
computability in standard mathematics, e.g. as recursiveness, are in
some sense good characterizations. Note that the concept of
comput*ability* is modal, in a moderately strong sense; it
seems to be about what beings like us could do with certain symbols if
they were free from certain limitations--not about, say, what
existing beings have done or will do. However, to say that a certain
function is recursive is not to make a modal claim about it, but a
certain purely arithmetical claim. So recursiveness is widely agreed
to provide a good characterization of computability, but it clearly
does not provide a conceptual analysis. Perhaps it could be argued
that the situation with model-theoretic validity, or derivability, or
both, is the same.
A number of philosophers explicitly reject the requirement that a good
characterization of logical truth should provide a conceptual
analysis, and (at least for the sake of argument) do not question the
usual view of set-theoretic claims as non-modal, but have argued that
the universe of set-theoretic structures somehow models the universe
of possible structures (or at least the universe of possible
set-theoretic structures; see McGee 1992, Shapiro 1998, Sagi 2014). In
this indirect sense, the characterization in terms of model-theoretic
validity would grasp part of the strong modal force that logical
truths are often perceived to possess. McGee (1992) gives an elegant
argument for this idea: it is reasonable to think that given any
set-theoretic structure, even one construed out of non-mathematical
individuals, actualized or not, there is a set-theoretic structure
isomorphic to it but construed exclusively out of pure sets; but any
such pure set-theoretic structure is, on the usual view, an actualized
existent; so every possible set-theoretic structure is modeled by a
set-theoretic structure, as desired. (The significance of this relies
on the fact that in Fregean languages a formula is true in a structure
if and only if it is true in all the structures isomorphic to it.)
But model-theoretic validity (or derivability) might be theoretically
adequate in some way even if some possible meaning-assignments are not
modeled straightforwardly by (actual) set-theoretic structures. For
model-theoretic validity to be theoretically adequate, it might be
held, it is enough if we have other reasons to think that it is
extensionally adequate, i.e. that it coincides in extension with our
preferred pretheoretic notion of logical truth. In subsections 2.4.2
and 2.4.3 we will examine some existing arguments for and against the
plain extensional adequacy of derivability and model-theoretic
validity for Fregean languages.
#### 2.4.2 Extensional Adequacy: A General Argument
If one builds one's deductive calculus with care, one will be able to
convince oneself that all the formulae derivable in the calculus are
logical truths. The reason is that one can have used one's intuition
very systematically to obtain that conviction: one can have included
in one's calculus only axioms of which one is convinced that they are
logical truths; and one can have included as rules of inference rules
of which one is convinced that they produce logical truths when
applied to logical truths. Using another terminology, this means that,
if one builds one's calculus with care, one will be convinced that the
derivability characterization of logical truth for formulae of the
formalized language will be *sound* with respect to logical
truth.
It is equally obvious that if one has at hand a notion of
model-theoretic validity for a formalized language which is based on a
minimally reasonable notion of structure, then all logical truths (of
that language) will be model-theoretically valid. The reason is
simple: if a formula is not model-theoretically valid then there is a
structure in which it is false; but this structure must then model a
meaning assignment (or assignments) on which the formula (or its
logical form) is false; so it will be possible to construct a formula
with the same logical form, whose non-logical expressions have, by
stipulation, the particular meanings drawn from that collective
meaning assignment, and which is therefore false. But then the idea of
formality and the weakest conception of the modal force of logical
truths uncontroversially imply that the original formula is not
logically true. Using another terminology, we can conclude that
model-theoretic validity is *complete* with respect to logical
truth.
Let's abbreviate "\(F\) is derivable in calculus \(C\)" by
"DC\((F)\)" and "\(F\) is a logical truth (in our
preferred pretheoretical sense)" by "LT\((F)\)".
Then, if \(C\) is a calculus built to suit our pretheoretic conception
of logical truth, the situation can be summarized thus:
* (4) \(\text{DC}(F)
\Rightarrow \text{LT}(F) \Rightarrow \text{MTValid}(F).\)
The first implication is the soundness of derivability; the second is
the completeness of model-theoretic validity.
In order to convince ourselves that the characterizations of logical
truth in terms of DC\((F)\) and MTValid\((F)\) are extensionally
adequate we should convince ourselves that the converse implications
hold too:
* (5)
\(\text{MTValid}(F) \Rightarrow \text{LT}(F) \Rightarrow
\text{DC}(F).\)
Obtaining this conviction, or the conviction that these implications
don't in fact hold, turns out to be difficult in general. But a remark
of Kreisel (1967) establishes that a conviction that they hold can be
obtained sometimes. In some cases it is possible to give a
mathematical proof that derivability (in some specified calculus
\(C\)) is complete with respect to model-theoretic validity, i.e. a
proof of
* (6)
\(\text{MTValid}(F) \Rightarrow \text{DC}(F).\)
Kreisel called attention to the fact that (6) together with (4)
implies that model-theoretic validity is sound with respect to logical
truth, i.e., that the first implication of (5) holds. (Strictly
speaking, this is a strong generalization of Kreisel's remark, which
in place of "\(\text{LT}(F)\)" had something like
"\(F\) is true in all *class* structures"
(structures with a class, possibly proper, as domain of the individual
variables).) This means that when (6) holds the notion of
model-theoretic validity offers an extensionally correct
characterization of logical truth. (See Etchemendy 1990, ch. 11,
Hanson 1997, Gómez-Torrente 1998/9, and Field 2008, ch. 2, for
versions of this observation, and Smith 2011 and Griffiths 2014 for
objections.) Also, (6), together with (4), implies that the notion of
derivability is complete with respect to logical truth (the second
implication in (5)) and hence offers an extensionally correct
characterization of this notion. Note that this reasoning is very
general and independent of what our particular pretheoretic conception
of logical truth is.
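The set-theoretic core of this squeezing reasoning can even be checked exhaustively on a toy universe of "formulae" (a sketch of ours, not Kreisel's): whenever the inclusions corresponding to (4) and (6) hold, the three sets must coincide, whatever the middle set LT is taken to be.

```python
# Kreisel's squeeze as set inclusions: if DC <= LT <= MT (4) and
# MT <= DC (6), then DC, LT and MT have the same extension.
from itertools import product

def squeezed(dc, lt, mt):
    """Vacuously true unless (4) and (6) hold; then equality must follow."""
    return (not (dc <= lt <= mt and mt <= dc)) or (dc == lt == mt)

# Exhaustive check over all subsets of a toy universe of three formulae:
formulas = ["f1", "f2", "f3"]
subsets = [set(c for c, keep in zip(formulas, bits) if keep)
           for bits in product([0, 1], repeat=3)]
assert all(squeezed(dc, lt, mt)
           for dc in subsets for lt in subsets for mt in subsets)
```

This is of course only the trivial set-theoretic skeleton; the substantive work lies in convincing oneself of (4) and proving (6) for a particular calculus.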
An especially significant case in which this reasoning can be applied
is the case of standard Fregean first-order quantificational
languages, under a wide array of pretheoretic conceptions of logical
truth. It is typical to accept that all formulae derivable in a
typical first-order calculus are universally valid, true in all
counterfactual circumstances, *a priori* and analytic if any
formula
is.[10]
So (4) holds under a wide array of pretheoretic conceptions in this
case. (6) holds too for the typical calculi in question, in virtue of
Gödel's completeness theorem, so (5) holds. This means that one
can convince oneself that both derivability and model-theoretic
validity are extensionally correct characterizations of our favorite
pretheoretic notion of logical truth for first-order languages, if our
pretheoretic conception is not too eccentric. The situation is not so
clear in other languages of special importance for the Fregean
tradition, the higher-order quantificational languages.
#### 2.4.3 Extensional Adequacy: Higher-order Languages
It follows from Gödel's first incompleteness theorem that already
for a second-order language there is no calculus \(C\) where
derivability is sound with respect to model-theoretic validity and
which makes true (6) (for the notion of model-theoretic validity as
usually defined for such a language). We may call this result *the
incompleteness of second-order calculi with respect to model-theoretic
validity*. Said another way: for every second-order calculus \(C\)
sound with respect to model-theoretic validity there will be a formula
\(F\) such that \(\text{MTValid}(F)\) but it is not the case that
\(\text{DC}(F)\).
In this situation it's not possible to apply Kreisel's argument for
(5). In fact, the incompleteness of second-order calculi shows that,
given any calculus \(C\) satisfying (4), one of the implications of
(5) is false (or *both* are): *either* derivability in
\(C\) is incomplete with respect to logical truth *or*
model-theoretic validity is unsound with respect to logical truth.
Different authors have extracted opposed lessons from incompleteness.
A common reaction is to think that model-theoretic validity must be
unsound with respect to logical truth. This is especially frequent in
philosophers on whose conception logical truths must be *a
priori* or analytic. One idea is that the results of *a
priori* reasoning or of analytic thinking ought to be codifiable
in a calculus. (See e.g. Wagner 1987, p. 8.) But even if we grant this
idea, it's doubtful that the desired conclusion follows. Suppose that
(i) every *a priori* or analytic reasoning must be reproducible
in a calculus. We accept also, of course, that (ii) *for every*
calculus \(C\) sound with respect to model-theoretic validity
*there is* a model-theoretically valid formula that is not
derivable in \(C\). From all this it doesn't follow that (iii)
*there is* a model-theoretically valid formula \(F\) such that
*for every* calculus \(C\) sound for model-theoretic validity
\(F\) is not derivable in \(C\). From (iii) and (i) it follows of
course that there are model-theoretically valid formulae that are not
obtainable by *a priori* or analytic reasoning. But the step
from (ii) to (iii) is a typical quantificational fallacy. From (i) and
(ii) it doesn't follow that there is any model-theoretically valid
formula which is not obtainable by *a priori* or analytic
reasoning. The only thing that follows (from (ii) alone under the
assumptions that model-theoretic validity is sound with respect to
logical truth and that logical truths are *a priori* and
analytic) is that no calculus sound with respect to model-theoretic
validity can by itself model *all* the *a priori* or
analytic reasonings that people are able to make. But it's not
sufficiently clear that this should be intrinsically problematic.
After all, *a priori* and analytic reasonings must start from
basic axioms and rules, and for all we know a reflective mind may have
an inexhaustible ability to find new truths and truth-preserving rules
by *a priori* or analytic consideration of even a meager stock
of concepts. The claim that all analytic truths ought to be derivable
in one single calculus is perhaps plausible on the view that
analyticity is to be explained by conventions or "tacit
agreements", for these agreements are presumably finite in
number, and their implications are presumably at most effectively
enumerable. But this view is just one problematic idea about how
apriority and analyticity should be explicated. (See also Etchemendy
1990, chs. 8, 9, for an argument for the unsoundness of higher-order
model-theoretic validity based on the conception of logical truth as
analyticity *simpliciter*, and Gómez-Torrente 1998/9,
Soames 1999, ch. 4, Paseau 2014, Florio and Incurvati 2019, 2021, and
Griffiths and Paseau 2022, ch. 9, for critical reactions.)
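The quantifier-scope gap between (ii) and (iii) can be made vivid with a toy model (ours, not drawn from the literature): read `derives(c, f)` as "formula f is derivable in calculus c", and let calculus n derive every formula except formula n. Then for every calculus there is an underivable formula, yet no single formula is underivable in all calculi.

```python
# Toy model of the step from (ii) to (iii) being a quantificational fallacy.
N = 5  # a finite sample standing in for the calculi and formulae
derives = lambda c, f: f != c   # calculus n derives everything but formula n

# (ii) For every calculus there is some formula not derivable in it: TRUE.
assert all(any(not derives(c, f) for f in range(N)) for c in range(N))

# (iii) There is one formula not derivable in any calculus: FALSE here,
# since formula n is derivable in every calculus m with m != n.
assert not any(all(not derives(c, f) for c in range(N)) for f in range(N))
```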
Another type of unsoundness argument attempts to show that there is
some higher-order formula that is model-theoretically valid but is
intuitively false in a structure whose domain is a proper class. (The
"intended interpretation" of set theory, if it exists at
all, might be one such structure, for it is certainly not a set; see
the entry on
set theory.)
These arguments thus question the claim that each meaning
assignment's validity-refuting power is modeled by some set-theoretic
structure, a claim which is surely a corollary of the first
implication in (5). (In McGee 1992 there is a good example; there is
critical discussion in Gómez-Torrente 1998/9.) The most
widespread view among set theorists seems to be that there are no
formulae with that property in Fregean languages, but it's certainly
not an absolutely firm belief of theirs. Note that these arguments
offer a challenge only to the idea that universal validity (as defined
in section 2.3) is adequately modeled by set-theoretic validity, not
to the soundness of a characterization of logical truth in terms of
universal validity itself, or in terms of a species of validity based
on some notion of "meaning assignment" different from the
usual notion of a set-theoretic structure. (The arguments we mentioned
in the preceding paragraph and in 2.4.1 would have deeper implications
if correct, for they easily imply challenges to all characterizations
in terms of species of validity as well.) In fact, worries of this
kind have prompted the proposal of a different kind of notions of
validity (for Fregean languages), in which set-theoretic structures
are replaced with suitable values of higher-order variables in a
higher-order language for set theory, e.g. with "plural
interpretations" (see Boolos 1985, Rayo and Uzquiano 1999,
Williamson 2003, Florio and Incurvati 2021; see also Florio and
Linnebo 2021 and the entry on
plural quantification).
Both set-theoretic and proper class structures are modeled by such
values, so these particular worries of unsoundness do not affect this
kind of proposals.
In general, there are no fully satisfactory philosophical arguments
for the thesis that model-theoretic validity is unsound with respect
to logical truth in higher-order languages. Are there then any good
reasons to think that derivability (in any calculus sound for
model-theoretic validity) must be incomplete with respect to logical
truth? There don't seem to be any absolutely convincing reasons for
this view either. The main argument (the first version of which was
perhaps first made explicit in Tarski 1936a, 1936b) seems to be this.
As noted above, Gödel's first incompleteness theorem implies that
for any calculus for a higher-order language there will be a
model-theoretically valid formula that will not be derivable in the
calculus. As it turns out, the formula obtained by the Gödel
construction is also always intuitively true in all domains
(set-theoretical or not), and it's reasonable to think of it as
universally valid. (It's certainly not a formula false in a proper
class structure.) The argument concludes that for any calculus there
are logically true formulae that are not derivable in it.
From this it has been concluded that derivability (in any calculus)
must be incomplete with respect to logical truth. But a fundamental
problem is that this conclusion is based on two assumptions that will
not necessarily be granted by the champion of derivability: first, the
assumption that the expressions typically cataloged as logical in
higher-order languages, and in particular the quantifiers in
quantifications of the form \(\forall X\) (where \(X\) is a
higher-order variable), are in fact logical expressions; and second,
the assumption that being universally valid is a sufficient condition
for logical truth. On these assumptions it is certainly very
reasonable to think that derivability, in any calculus satisfying (4),
must be incomplete with respect to logical truth. But in the absence
of additional considerations, a critic may question the assumptions,
and deny relevance to the argument. The second assumption would
probably be questioned e.g. from the point of view that logical truths
must be analytic, for there is no conclusive reason to think that
universally valid formulae must be analytic. The first assumption
actually underlies any conviction one may have that (4) holds for any
one particular higher-order calculus. (Note that if we denied that the
higher-order quantifiers are logical expressions we could equally deny
that the arguments presented above against the soundness of
model-theoretic validity with respect to *logical truth* are
relevant at all.) That the higher-order quantifiers are logical has
often been denied on the grounds that they are semantically too
"substantive". It is often pointed out in this connection
that higher-order quantifications can be used to define sophisticated
set-theoretic properties that one cannot define just with the help of
first-order quantifiers. (Defenders of the logical status of
higher-order quantifications, on the other hand, point to the wide
applicability of the higher-order quantifiers, to the fact that they
are analogous to the first-order quantifiers, to the fact that they
are typically needed to provide categorical axiomatizations of
mathematical structures, etc. See Quine (1970), ch. 5, for the
standard exponent of the restrictive view, and Boolos (1975) and
Shapiro (1991) for standard exponents of the liberal view.)

## 1. Life and Works
Valla did not have an easy life. Equipped with a sharp and polemical
mind, an even sharper pen and a sense of self-importance verging on
the pathological, he made many enemies throughout his life. Born in
Rome in (most likely) 1406 to a family with ties to the papal curia,
Valla as a young man was already in close contact with some major
humanists working as papal secretaries such as Leonardo Bruni
(1370-1444) and Poggio Bracciolini (1380-1459). Another
was his uncle Melchior Scrivani, whom Valla had hoped to succeed after
his death; but opposition from Poggio and Antonio Loschi
(1365/8-1441) must have led the pope to refuse to employ him.
Valla had criticized an elegy by Loschi, and had also boldly favored
Quintilian over Cicero in a treatise which was long considered to be
lost; but a lengthy anonymous prefatory letter has recently been found
and ascribed, convincingly, to the young Valla (Pagliaroli 2008; it
turns out not to be a comparison between Cicero's theory of
rhetoric and Quintilian's handbook, as scholars have always
assumed, but a comparison between a declamation of Ps-Quintilian,
*Gladiator*--the declamations were considered to be the
work of Quintilian in Valla's time--and one of
Cicero's orations, *Pro Ligario*.) Valla's Roman
experience of humanist conversations found an outlet in his dialogue
*De voluptate* (*On Pleasure*), in which the Christian
concepts of charity and beatitude are identified with hedonist
pleasure, while the "Stoic" concept of virtue is rejected
(see below). Valla would later revise the dialogue and change the
names of the interlocutors, but his Epicurean-Christian position
remained the same.
Meanwhile he had moved to Pavia in 1431, stimulated by his friend
Panormita (Antonio Beccadelli, 1394-1471)--with whom he was
soon to quarrel--and had begun to teach rhetoric. He had to flee
Pavia, however, in 1433 after having aroused the anger of the jurists.
In a letter to a humanist jurist friend of his, Catone Sacco
(1394-1463), Valla had attacked the language of one of the
lawyers' main authorities, Bartolus of Sassoferrato
(1313-1357). After some travelling, in 1435 Valla found
employment at the court of Alfonso of Aragon (1396-1458), who
was trying to capture Naples. Though complaining about the lack of
time, books and fellow humanists, Valla was immensely productive in
this phase of his career. In 1439 he finished the first version of his
critique of scholastic philosophy. Two years later he finished his
*Elegantiae linguae Latinae*, a manual for the correct use of
Latin syntax and vocabulary, which became a bestseller throughout
Europe. As a humanist in the court of a king who was fighting against
the pope, Valla demonstrated that the Donation of Constantine, which
had served the papacy to claim worldly power, was a forgery. In the
same years he composed a dialogue on free will and began working on
his annotations to the standard Latin translation of the Bible,
comparing it with the Greek text of the New Testament. He also wrote a
dialogue *On the Profession of the Religious* (*De
professione religiosorum*), in which he attacked the vow of
obedience and asceticism taken by members of religious orders.
Further, he worked on the text of Livy, wrote a history of the deeds
of Alfonso's father, and began re-reading and annotating
Quintilian's *Institutio oratoria* (*Education of the
Orator*) in a manuscript which is still extant (Paris,
Bibliothèque Nationale de France, lat. 7723). But Valla's
philological approach and his penchant for quarreling made him enemies
at the Aragonese court. After his patron Alfonso had made peace with
the pope in 1443, Valla's role as an anti-papal polemicist may
have shrunk. His enemies took advantage of the situation. The
immediate occasion was Valla's denial of the apostolic origin of
the *Symbolum Apostolicum* (Apostle's Creed). In a
letter, now lost, to the Neapolitan lawyers he argued that a passage
from Gratian's *Decretum* that formed the basis of the
belief in the apostolic origin of the Creed was corrupt and needed
emendation. Powerful men at the court staged an inquisitorial trial to
determine whether Valla's works contained heretical and
heterodox opinions. In preparation for the trial Valla wrote a
self-defense (and later an "Apology"), but was rescued
from this perilous situation by the intervention of the king. The
whole affair must have fuelled Valla's wish to return to Rome.
In 1447 he made peace with the pope and became an apostolic
*scriptor* (scribe), and later, in 1455, a papal secretary. In
these years he revised some of his earlier works such as the
*Repastinatio* and his notes on the New Testament, and
translated Thucydides and Herodotus into Latin; his work on
Thucydides, in particular, was to have an important impact on the
study of this difficult Greek author. Always an irascible man, he
continued to engage in quarrels and exchanged a series of invectives
with his arch-enemy Poggio. He died in 1457, still working on a second
revision of his *Dialectics* (that is, the third version). He
was buried in the Lateran, where his grave was removed in the
sixteenth century, probably between the 1560s and 1580s at a time when
his works were put on the Index; the present memorial is not Valla's
tombstone.
Valla's impact on the humanist movement was long-lasting and
varied. His philological approach was developed by subsequent
generations of humanists, and found, arguably, its first systematic
expression in the work of Angelo Poliziano (1454-1494). His
*Elegantiae* was printed many times, either in the original or
in one of its many adaptations and abridgments made by later scholars.
A copy of his annotations on the New Testament was found near Louvain
by Erasmus (1467?-1534), who published it. Though many
theologians complained about his ignorance of theology, Luther and
Leibniz, each for his own reasons, found occasion to refer to
Valla's dialogue on free will. Humanists found Valla's
criticisms of Aristotelian-scholastic thought congenial, though not a
few regarded his style as too aggressive and polemical. And though
some humanists such as Juan Luis Vives (1492-1540) and Johann
Eck (1486-1543) complained about Valla's lack of
philosophical acumen, his critique of scholasticism has a place in the
long transformation from medieval to modern thought--in which
humanism played an important though by no means exclusive role.
## 2. Valla's Critique of Scholastic Philosophy
In his *Repastinatio dialectice et philosophie*, which is
extant in three versions with slightly different titles, Valla attacks
what he sees as the foundations of scholastic-Aristotelian philosophy
and sets out to transform Aristotelian dialectic. As the title of the
first version clearly indicates, he wants to "re-plough"
the ground traditionally covered by the Aristotelian scholastics. The
term *repastinatio* means not only "re-ploughing"
or "re-tilling" but also "cutting back" and
"weeding out." Valla desires to weed out everything he
regards as barren and infertile in scholastic thought and to
re-cultivate the ground by sprinkling it with the fertile waters of
rhetoric and grammar. His use of *repastinatio* indicates that
he is setting out a program of reform rather than of destruction, in
spite of his often aggressive and polemical tone. Book I is devoted
mainly to metaphysics, but also contains chapters on natural
philosophy and moral philosophy, as well as a controversial chapter on
the Trinity. In Books II and III Valla discusses propositions and
forms of argumentation such as the syllogism.
His main concern in the first book is to simplify the
Aristotelian-scholastic apparatus. For Valla, the world consists of
things, simply called *res*. Things have qualities and do or
undergo actions (which he likewise calls "things"). Hence,
there are three basic categories: substance, quality, and action. At
the back of Valla's mind are the grammatical categories of noun,
adjective, and verb; but in many places he points out that we cannot
assume that, for instance, an adjective always refers to a quality or a
verb to an action (*Repastinatio*, 1:134-156;
425-442; *DD* 1:240-80). These three categories are the
only ones Valla admits; the other Aristotelian categories of accidents
such as place, time, relation and quantity can all be reduced to
quality or action. Here, too, grammar plays a leading role in
Valla's thought. From a grammatical point of view, qualities
such as being a father, being in the classroom, or being six-feet tall
all tell us something about how a particular man is qualified; and
there is, consequently, no need to preserve the other Aristotelian
categories.
In reducing the categories to his triad of substance, quality, and
action, Valla does not seem to have in mind "realist"
philosophers who accepted the independent existence of entities such
as relations and quantities over and above individual things. Instead,
his aim is to show that many terms traditionally placed in other
categories, in fact, point to qualities or actions: linguistic usage
(*loquendi* *consuetudo*) teaches us, for example, that
quality is the overarching category. Thus, to the question "of
what kind", we often give answers containing quantitative
expressions. Take, for instance, the question: what sort of horse
should I buy? It may be answered: erect, tall, with a broad chest, and
so on. Valla's reduction of the categories often assumes the
form of grammatical surveys of certain groups of words associated with
a particular category: thus, in his discussion of time, he studies
words like "day," "year," and a host of
others; and his discussion of quantitative words treats mathematical
terms such as line, point and circle. Of course, Valla does not deny
that we can speak of quantity or time or place. But the rich array of
Latin terms signifies, in the final analysis, the qualities or actions
of things, and nothing exists apart from concrete things. (Substance
is a shady category for Valla, who says that he cannot give an example
of it, much as Locke was later to maintain that "a substance is
something I know not what").
It is tempting to connect this lean ontology to that of William of
Ockham (c. 1287-1347). The interests, approach and arguments of
the two thinkers, however, differ considerably. Unlike Valla, Ockham
does not want to get rid of the system of categories. As long as one
realizes, Ockham says, that categories do not describe things in the
world but categorize *terms* by which we signify real
substances or real inhering qualities in different ways, the
categories can be maintained and the specific features of, for
example, relational or quantitative terms can be explored. Thus,
Ockham's rejection of a realist interpretation of the categories
is accompanied by a wish to *defend* the categories as distinct
groups of terms. Valla, on the other hand, sees the categories as
summing up the real aspects of things: hence, there are only
substances, qualities, and actions, and his reductive program consists
in showing that we have a vast and rich vocabulary in Latin that we
can use for referring to these things. His questions about words and
classes of words are not unlike those of Priscian (fl. 500-530)
and other grammarians. (Priscian, for instance, had stated that a noun
signifies substance plus quality, and pronouns substance without
quality.)
Another good example of Valla's reduction of scholastic
terminology, distinctions, and concepts is his critique of the
transcendental terms (*Repastinatio* 1:11-21; *DD*
1:18-36). According to Valla, the traditional six
terms--"being," "thing,"
"something," "one," "true," and
"good"--should be reduced to "thing"
(Latin *res*) since everything that exists, including a quality
or an action of a thing (also called a "thing") is a
thing. A good thing, for example, is a thing, and so, too, is a true
thing. From a grammatical point of view, such terms as
"good" and "true" are adjectives, and when
used as substantives (the good, the one), they refer to qualities;
hence, there is nothing "transcendental" about them.
Valla's grammatical approach is evident in his rejection of the
scholastic term *ens*. Just as "running"
(*currens*) can be resolved into "he who runs"
(*is qui currit*), so "being" (*ens*) can be
resolved into "that which is" (*id quod est*).
"That," however, is nothing other than "that
thing" (*ea* *res*); so we get as a result the
laborious formula: "that thing which is".
(*Repastinatio* 1:14; *DD* 1:23-25.) We do not, however,
need the phrase "that which is" (*ea que est*):
"a stone is a being" (*lapis est ens*), or the
equivalent phrase into which it can be resolved, "a stone is a
thing which is" (*lapis est res que est*), are unclear,
awkward, and absurd ways of saying simply that "a stone is a
thing" (*lapis est res*). Valla also rejects other
scholastic terms such as *entitas* ("entity"),
*hecceitas* ("this-ness") and *quidditas*
("quidity") for grammatical reasons: these terms do not
conform to the rules of word formation in Latin. While he is not
against the introduction of new words for things unknown in antiquity
(e.g., *bombarda* for "cannonball"), the
terminology coined by the scholastics is a different matter
entirely.
Related to this analysis is Valla's repudiation of what he
presents as the scholastic view of the distinction between abstract
and concrete terms, that is, the view that abstract terms
("whiteness," "fatherhood") always refer
solely to quality, while concrete terms ("white,"
"father") refer to both substance and quality
(*Repastinatio*, 1:21-30; *DD* 1:36-54). In a
careful discussion of this distinction, taking into account the
grammatical categories of case, number, and gender, Valla rejects the
ontological commitments which such a view seems to imply, and shows,
on the basis of a host of examples drawn from classical Latin usage,
that abstract terms often have the same meaning as their concrete
counterparts (useful/utility, true/truth, honest/honesty). These terms
refer to the concrete thing itself, that is, to the substance, its
quality or action (or a combination of these three components into
which a thing can be resolved). Again, Valla's main concern is
to study the workings of language and how these relate to the world of
everyday things, the world that we see and experience.
In describing and analyzing this world of things Valla is not only
guided by grammatical considerations. He also uses "common
sense" and the limits of our imagination as yardsticks against
which to measure scholastic notions and definitions. He thus thinks
that it is ridiculous to imagine prime matter without any form or form
without any matter, or to define a line as that which has no width and
a point as an indivisible quantity that occupies no space.
Valla's idea is that notions such as divisibility and quantity
are properly at home only in the world of ordinary things. For him,
there is only the world of bodies with actual shapes and dimensions;
lines and points are parts of these things, but only, as he seems to
suggest, in a derivative sense, in other words, as places or spaces
that are filled by the body or parts of that body. If we want to
measure or sketch a (part of a) body, we can select two spots on it
and measure the length between them by drawing points and lines on
paper or in our mind, a process through which these points and lines
become visible and divisible parts of our world
(*Repastinatio*, 1:142-147; 2:427-431; *DD*
254-64). But it would be wrong to abstract from this diagramming
function and infer a world of points and lines with their own
particular quantity. They are merely aids for measuring or outlining
bodies. In modern parlance, Valla seems to be saying that ontological
questions about these entities--do they exist? how do they
exist?--amount to category mistakes, equivalent to asking the
color of virtue.
The appeal to common sense (or what Valla considers as such) informs
his critique of Aristotelian natural philosophy. He insists on
commonplace observations and experiences as criteria for testing ideas
and hypotheses. On this basis, many of Aristotle's contentions,
so Valla argues, are not true to the facts (*Repastinatio*,
1:98-112; *DD* 1:174-99). He rejects or qualifies a
number of fundamental tenets of Aristotelian physics, for instance
that movement is the cause of heat, that a movement is always caused
by another movement, that elements can be transformed into one
another, that each has its own proper qualities (heat and dryness for
fire, heat and humidity for air, etc.), that there are pure elements,
that the combination of heat and humidity is a sufficient condition
for the generation of life, and so forth. Valla often uses
*reductio ad absurdum* as an argumentative strategy: if
Aristotle's theory is true, one would expect to observe
phenomena quite different from the ones we do, in fact, observe. In
arguing, for instance, for the existence of a fiery sphere below the
moon, Aristotle had claimed that "leaden missiles shot out by
force melt in the air" (*De caelo* II.7,
289a26-28). Valla rejects this claim by appealing to ordinary
experience: we never see balls--whether leaden, iron, or stone,
shot out of a sling or a cannon--heat up in the air; nor do the
feathers of arrows catch fire. If movement is sufficient to produce
heat, the spheres would set the air beneath in motion; but no one has
ever observed this. Arguably, the importance of Valla's polemic
here is not so much the quality of his arguments as the critical
tendency they reveal, the awareness that Aristotle's conclusions
often do not conform to daily experience. While he does not develop
his critique in the direction of an alternative natural philosophy as
later Renaissance philosophers such as Bernardino Telesio
(1509-1588) and Francesco Patrizi of Cherso (1529-1597)
would do, Valla contributed to undermining faith in the exclusive
validity of the Aristotelian paradigm.
Valla also criticizes Aristotle's natural philosophy because,
according to him, it detracts from God's power. In strongly
polemical terms, he attacks Aristotle for his
"polytheistic" ideas and for what Valla sees as his
equation of God with nature (*Repastinatio*, 1:54-59;
*DD* 1:94-103). Valla wants to reinstate God as the sole
creator of heaven and earth. To think of the cosmos in terms of a
living animal or the heavens in terms of celestial orbs moved by
intelligences is anathema to Valla (as it was to many medieval
scholastics as well). The notion of God as First Mover is also
rejected, since movement and rest are terms which should not be
applied (except perhaps metaphorically) to spiritual beings such as
God, angels, and souls.
Religious considerations also led Valla to find fault with another
fundamental tenet of Aristotelian scholastic thought: the Tree of
Porphyry (*Repastinatio*, 1:46-50; 2:389-391;
*DD* 1:82-88). Valla has several problems with the Tree. First
of all, it puts substance, rather than thing, on top. For Valla,
however, pure substance does not exist, since a thing is always
already a qualified substance. He also thinks that there is no place
for a human being in the Tree of Porphyry. Since the Tree divides
substance into the corporeal and the spiritual, it is difficult to
find a place for a human being, consisting of both soul and body.
Moreover, the Tree covers both the divine and the created order, which
leads to inappropriate descriptions of God and angels, to which the
term "animal" should not be applied, since they do not
have a body. Valla, therefore, divides Porphyry's Tree into
three different trees: one for spiritual substance, one for corporeal
substance, and one for what he calls "animal," that is,
those creatures which consist of both body and soul
(*Repastinatio*, 1:49-50; 2:422-424; *DD*
1:88). One might argue that what Valla gains over Porphyry by
disentangling the supernatural from the natural order, he loses by
having to admit that he cannot place Christ in any of his three trees,
since he is not only human but also God.
The soul as an incorporeal substance is treated by Valla in a separate
chapter (*Repastinatio*, 1:59-73; 2:408-410;
418-419; *DD* 1:104-29). Rejecting the Aristotelian
hylomorphic account, he returns to an Augustinian picture of the soul
as a wholly spiritual and immaterial substance made in the image of
God, and consisting of memory, intellect, and will. He rejects without
much discussion the various functions of the soul (vegetative,
sensitive, imaginative, intellectual), which would entail, he thinks,
a plurality of souls. He briefly treats the five exterior senses but
is not inclined to deal with the physiological aspects of sensation.
The term "species" (whether sensible or intelligible) does
not occur at all. The Aristotelian *sensus
communis*--which the medieval commentary tradition on
*De* *anima* had viewed as one of the internal senses,
alongside imagination (sometimes distinguished from
*phantasia*), memory, and the *vis aestimativa*
(foresight and prudence)--is mentioned only to be rejected
without further argument (*Repastinatio*, 1:73; *DD*
1:128). Imagination and the *vis aestimativa* are absent from
Valla's account, while memory, as the soul's principal
capacity, appears to have absorbed all the functions which scholastics
had divided among separate faculties of the sensitive soul. What may
have provoked Valla's anger about the traditional picture is the
seemingly passive role allotted to the soul in perception and
knowledge: it seems to come only at the very end of a long chain of
transmission, which starts with outer objects and concludes with a
merely receptive *tabula rasa*. In his view, the soul is far
more noble than the hylomorphic account of Aristotle implies, at least
as Valla understands that account. He therefore stresses on various
occasions the soul's dignified nature, its immortality, unity,
autonomy, and superiority to both the body and the animal soul,
comparing it to the sun's central place in the cosmos.
## 3. Valla's "Reform" of Aristotelian Logic
After Valla's attack on what he calls the *fundamenta*
(foundations) of Aristotelian-scholastic metaphysics and natural
philosophy, he turns to dialectic in Books II and III of his
*Repastinatio*. For Valla, argumentation should be approached
from an oratorical rather than a logical point of view. What counts is
whether an argument *works*, which means whether it convinces
one's adversary or public. The form of the argument is less
important. Dialectic is a species of confirmation and refutation; and,
as such, it is merely a component of invention, one of the five parts
of rhetoric (*Repastinatio*, 1:175; 2:447; *DD* 2:3).
Compared to rhetoric, dialectic is an easy subject and does not
require much time to master, since it considers and uses only the
syllogism "bare", as Valla puts it, that is, in isolation
from its wider argumentative context; its sole aim is to teach. The
rhetorician, on the other hand, uses not only the syllogism, but also
the enthymeme (incomplete syllogism), epicheireme (a kind of extended
reasoning) and example. The orator has to clothe everything in
persuasive arguments, since his task is not only to teach but also to
please and to move. This leads Valla to downplay the importance of the
Aristotelian syllogism and to consider forms of argumentation that are
not easily forced into its straitjacket. Among these are captious
forms of reasoning such as the dilemma, paradox, and heap argument
(*sorites*), and Valla offers an interesting analysis of these
forms in the last book of the *Repastinatio*.
Without rejecting the syllogism *tout court*, Valla is scathing
about its usefulness. He regards it as an artificial type of
reasoning, unfit to be employed by orators since it does not
reflect the natural way of speaking and arguing. What is the use, for
example, of concluding that Socrates is an animal if one has already
stated that every man is an animal and that Socrates is a man? It is a
simple, puerile, and pedantic affair, hardly amounting to a real
*ars* (art). Valla's treatment of the syllogism clearly
shows his oratorical perspective. Following Quintilian, he stresses
that the nature of syllogistic reasoning is to establish proof. One of
the two premises contains what is to be proven (*que
probatur*), and the other offers the proof (*que probat*),
while the conclusion gives the result of the proof--into which
the proof "goes down" (*in quam probatio
descendit*). It is not always necessary, therefore, to have a
fixed order (major, minor, conclusion). If it suits the occasion
better, we can just as well begin with the minor, or even with the
conclusion. The order is merely a matter of convention and custom
(*Repastinatio*, 1:282-286; 2:531-534; *DD*
2:216-39).
These complaints about the artificiality of the syllogism inform
Valla's discussion of the three figures of the syllogism.
Aristotle had proven the validity of the moods of Figure 2 and 3 by
converting them to four moods of Figure 1; and this was taught, for
example, by Peter of Spain (thirteenth century) in his widely read
handbook on logic, the *Summulae logicales*, certainly
consulted by Valla here. Valla regards this whole business of
converting terms and transposing propositions in order to reduce a
particular syllogism to one of these four moods of Figure 1 as useless
and absurd. While he does not question the validity of these four
moods, he believes that there are many deviant syllogisms that are
also valid, for instance: God is in every place; Tartarus is a place;
therefore, God is in Tartarus. Here the "every" or
"all" sign is added to the predicate in the major
proposition. He says, moreover, that an entirely singular syllogism
can be valid: Homer is the greatest of poets; this man is the greatest
of poets; therefore, this man is Homer. And he gives many other
examples of such deviant schemes. Valla thus deliberately ignores the
criteria employed by Aristotle and his commentators--that at
least one premise must be universal, and at least one premise must be
affirmative, and that if the conclusion is to be negative, one premise
must be negative--or, at any rate, he thinks that they
unnecessarily restrict the number of possible valid figures.
In his discussion of the syllogism Valla does not refer to an
important principle employed by Aristotle and his commentators:
*dici de omni et nullo* (to be said/predicated about all and
about none). To quote Peter of Spain's *Summulae
logicales*: "*To be said of every* [*dici de
*omni*] is when there is nothing to be taken under the subject
[*nichil est sumere sub subiecto*] of which the predicate may
not be said, like 'every man runs': here, running is said
of every man, and under man there is nothing to be taken of which
running is not said. *To be said of none* [*de nullo*]
is when there is nothing to be taken under the subject from which the
predicate may not be eliminated, like 'no man runs': here,
running is eliminated from any man at all [*quolibet
homine*]" (ed. L. M. de Rijk 1972, 43; ed. Copenhaver et
al., 2014, 170). This principle led Aristotle to conclude that only
four moods of the first figure were immediately valid. That Valla does
not make use of this fundamental condition is understandable from his
oratorical point of view, since it would be an uninteresting or even
irrelevant criterion of validity. It is therefore not surprising that
he thinks that we might as well "reduce" the first figure
to the second rather than vice versa. Likewise, from his oratorical
point of view, he can only treat the third figure of the syllogism
with contempt: it is a "completely foolish" form of
reasoning; no one reasons as follows: every man is a substance; every
man is an animal; therefore, some animal is a substance. In a similar
vein, he rejects the use of letters in the study of syllogisms
(*Repastinatio*, 1:297-300; 2:546-548; *DD*
2:264-71).
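The *dici de omni et nullo* principle and the validity questions Valla brushes aside can be recast in modern set-theoretic terms: a categorical mood is valid just in case no assignment of extensions to its terms makes the premises true and the conclusion false. The brute-force checker below is a sketch in that modern idiom (nothing Valla or the scholastics had; the encoding and names are purely illustrative, and exhausting only small universes is a heuristic for validity, though it suffices to expose standard syllogistic fallacies):

```python
from itertools import combinations, product

def subsets(universe):
    # every subset of a small finite universe
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def holds(stmt, val):
    kind, a, b = stmt
    if kind == "all":   # "every a is b": a's extension is contained in b's
        return val[a] <= val[b]
    if kind == "some":  # "some a is b": the two extensions overlap
        return bool(val[a] & val[b])
    raise ValueError(kind)

def entails(premises, conclusion, n=3):
    # valid (over n-element universes) iff no countermodel exists
    terms = sorted({t for s in premises + [conclusion] for t in s[1:]})
    for ext in product(subsets(range(n)), repeat=len(terms)):
        val = dict(zip(terms, ext))
        if all(holds(p, val) for p in premises) and not holds(conclusion, val):
            return False  # countermodel: premises true, conclusion false
    return True

# Barbara (first figure): every M is P, every S is M; so every S is P
print(entails([("all", "M", "P"), ("all", "S", "M")], ("all", "S", "P")))  # True
# Undistributed middle: every P is M, every S is M; so every S is P
print(entails([("all", "P", "M"), ("all", "S", "M")], ("all", "S", "P")))  # False
```

On this reading, Darapti (the third-figure mood Valla ridicules) comes out invalid without the existential import that Aristotle assumed, since an empty middle term makes both premises vacuously true.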
Valla's insistence on examining and assessing arguments in terms
of persuasion and usefulness leads him to criticize not only the
syllogism but also other less formal modes of argumentation. These
modes usually involve interrogation, resulting in an unexpected or
unwanted conclusion or an aporetic situation. Some scholars have
regarded Valla's interest in these less formal or non-formal
arguments as an expression of a skeptical attitude towards the
possibility of certainty in knowledge in general. Others have raised
serious doubts about this interpretation. What we can say for sure,
however, is that Valla was one of the first to study and analyze the
heap argument, dilemma, and suchlike. The heap argument is supposed to
induce doubts about the possibility of determining precise limits,
especially to quantities. If I subtract one grain from a heap, is it
still a heap? Of course. What if I subtract two grains? And so forth,
until the heap consists of just one grain, which, to be sure, is an
unacceptable conclusion. It seems impossible to determine the exact
moment when the heap ceases to be a heap, and any attempt to determine
this moment seems to involve an ad hoc decision, for a heap does not
cease to be a heap due to the subtraction of just one single grain.
Valla discusses a number of similar cases, and comments on their
fallacious nature.
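The structure of the heap argument can be made concrete in a short sketch (a modern illustration, not Valla's own analysis): any precise cutoff one stipulates for "heap" recreates exactly the ad hoc boundary the argument exploits, with a single grain separating heap from non-heap.

```python
def is_heap(grains, threshold=100):
    # A deliberately stipulated sharp cutoff; the threshold value is
    # arbitrary, which is precisely the point of the sorites.
    return grains >= threshold

# Subtract one grain at a time from a clear heap, as in the argument.
n = 10_000
while is_heap(n):
    n -= 1

# Any such stipulation makes one grain the difference between heap
# and non-heap -- the "ad hoc decision" the argument targets.
print(n, is_heap(n), is_heap(n + 1))  # 99 False True
```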
Dilemma, too, receives extensive treatment from him. This type of
argument had been widely studied in antiquity. The basic structure is
a disjunction of propositions, usually in the form of a double
question in an interrogation, which sets a trap for the respondent,
since whichever horn of the dilemma he chooses, he seems to be caught
up in a contradiction and will lose the debate ("If he is
modest, why should you accuse someone who is a good man? If he is bad,
why should you accuse someone who is unconcerned by such a
charge?", Cicero, *De inventione* 1.45.83, cited by Valla
*Repastinatio*, 1:321; *DD* 2:332). It was also
recognized that the respondent could often counter the dilemma by
duplicating the original argument and "turning it back"
(*convertere*) on the interrogator, using it as a kind of
boomerang ("if he is modest, you should accuse him because he
will be concerned by such a charge; if he is bad, you should also
accuse him because he is not a good man"). Alternatively, he
could escape the dilemma by questioning the disjunction and showing
that there is a third possibility. There were many variations of this
simple scheme, and it was studied from a logical as well as a
rhetorical point of view with considerable overlap between these two
perspectives.
In medieval times, dilemma does not seem to have attracted much
theoretical reflection, though there was an extensive literature on
related genres such as *insolubilia* and paradoxes, which were
generally treated in a logical manner. It is, therefore, interesting
to see Valla discussing a whole range of examples of dilemma. The
rhetoric textbook by the Byzantine émigré George of
Trebizond (1396-1486), composed about 1433, was probably an
important source for him. This work might also have led Valla to
explore the relevant places in Quintilian, Cicero, the Greek text of
Aristotle's *Rhetoric* and perhaps other Greek sources.
The example Valla discusses most extensively comes from Aulus Gellius
and concerns a lawsuit between Protagoras and his pupil Euathlus
(*Repastinatio*, 1:312-319; 2:562-568; *DD*
2:312-28. Aulus Gellius, *Noctes Atticae,* 5.10.5-16).
The pupil has promised to pay the second installment of the fees as
soon as he has won his first case. He refuses to undertake any cases,
however, and Protagoras takes him to court, putting the case in the
form of a dilemma. If Euathlus loses the case, he will have to pay the
rest of the fee, on account of the verdict of the judges; but if
Euathlus wins, he will also have to pay, this time on account of his
agreement with Protagoras. Euathlus, however, cleverly converts the
argument: in neither case will he have to pay, on account of the
court's decision (if he wins), or on account of the agreement
with Protagoras (if he loses). Aulus Gellius thinks that the judges
should have refrained from passing judgment because any decision would
be inconsistent with itself. But Valla rejects such a rebuttal
(*antistrephon* or *conversio*) of the dilemma and
thinks that a response may be formulated as long as one concentrates
on the relevant aspects of the case. He carefully considers the
perspectives of all parties, the words employed, the feelings of the
judges, and all the circumstances of the case. In a highly rhetorical
analysis of the case in which Valla makes speeches on behalf of
Protagoras, he says in the end that Euathlus cannot have it both
ways and must choose one or the other alternative, not both: he must
comply either with his agreement with Protagoras (and pay the rest of
the fees) or with the verdict passed by the judges. If he loses the
case, a refusal to obey the sentence of the judges shows contempt; if
he wins the case, he has to pay the rest of the fee to Protagoras. At
any rate, there is no reason for the judges to despair; Aulus Gellius
is, therefore, wrong in thinking that they should have refrained
from passing a judgment. In all such cases, Valla argues, the
conversion is not a rebuttal, but at best a correction of the initial
argument (a correction, however, is not a refutation), and at worst a
simple repetition or illegitimate shift of the initial position.
An important way of seeing through deceptive arguments is to consider
the weight of words carefully, and Valla gives some further examples
of fallacies which can easily be refuted by examining the meaning and
usage of words and the contexts in which they occur
(*Repastinatio*, 1:320-334; 2:568-578; *DD*
2:328-69). He considers the fallacies "collected by Aristotle in
his *Sophistical Refutations* as for the most part a puerile
art," quoting the *Rhetorica ad Herennium* (2.11.16) to
the effect that "knowledge of ambiguities as taught by
dialecticians is of no help at all but rather a most serious
hindrance." Valla is not interested in providing a comprehensive
list of deceptive arguments and errors or in studying rules for
resolving them. He does not mention, for example, the basic division
between linguistic (*in dictione*) and extra-linguistic
(*extra dictionem*) fallacies. In a letter to a friend, Valla
lists some "dialecticians"--Albert of Saxony (c.
1316-1390), Albert the Great (c. 1220-1280), Ralph Strode
(fl. 1350-1400), William of Ockham, and Paul of Venice (c.
1369-1429)--but there is no sign that he had done more
than leaf through their works on *sophismata* and
*insolubilia* (Valla 1984, 201). The few examples he gives seem
to come from Peter of Spain's *Summulae logicales*. Nor
was it necessary to have more than a superficial acquaintance with the
works of these dialecticians in order to realize that their approach
differed vastly from his. As he repeatedly states, what is required in
order to disambiguate fallacies is not a deeper knowledge of the rules
of logic but a recognition that arguments need to be evaluated within
their wider linguistic and argumentative context. Such an examination
of how words and arguments function will easily lay bare the
artificial and sophistical nature of these forms of argumentation.
This approach is also evident in Valla's analysis of the
proposition of which a syllogism or argument consists. Propositions
are traditionally divided according to quantity (universal or
particular) and quality (affirmative or negative). Quantity and
quality are indicated by words that are called *signa*
(markers, signs): "all," "any,"
"not," "no one," and so forth. In Book II of
the *Repastinatio*, Valla considers a much wider range of words
than the medieval logicians, who had mainly worked with
"all," "some," "none," and
"no one". To some extent, his aim is not unlike that of
the dialecticians whom he so frequently attacks, that is, to study
signs of quality and negation and how they determine the scope of a
proposition. But for him there is only one proper method of carrying
out such a study: examining carefully the multifarious ways in which
these words are used in refined and grammatically correct Latin. Not
surprisingly, Valla criticizes the square of contraries--the
fourfold classification of statements in which the distinction between
universal and particular and that between affirmative and negative are
combined. A similar critique of the rather arbitrary restriction to a
limited set of words is applied to the scholastic notion of modality
(Valla *Repastinatio*, 1:237-244; 2:491-497;
*DD* 2:126-43). Scholastics usually treat only the following
six terms as modals: "possible," "impossible,"
"true," "false," "necessary," and
"contingent." Latin, however, is much more resourceful in
expressing modality. Using criteria such as refinement and utility,
Valla considers terms such as "likely/unlikely,"
"difficult/easy," "certain/uncertain,"
"useful/useless," "becoming/unbecoming," and
"honorable/dishonorable." This amounts to introducing a
wholly new concept of modality, which comes close to an adverbial
qualification of a given action.
Valla's principal *bêtes noires* are Aristotle,
Boethius, Porphyry, and Peter of Spain, but he also speaks in general
terms of the entire *natio peripatetica* (nation of
Peripatetics). He frequently refers to *isti* (those), a
suitably vague label for the scholastic followers of
Aristotle--including both dialecticians and theologians. It has
often been claimed that Valla is attacking late medieval
scholasticism; but it must be said that he does not quote any late
medieval scholastic philosophers or theologians. He generally steers
clear of their questions, arguments, and terminology. If we compare
Valla's *Repastinatio* with, for example, Paul of
Venice's *Logica* *parva* (*The Small
Logic*), we quickly perceive the immense difference in attitude,
argument, and approach. Moreover, as noted above, Valla's
grammatical and oratorical approach is fundamentally different from
Ockham's terminism. He probably thought that he did not need to
engage with the technical details of scholastic works. It was
sufficient for him to establish that there was a huge distance between
his own approach and that of the scholastics. Once he had shown that
the scholastic-Aristotelian edifice was built on shaky foundations, he
did not care to attack the superstructure, so to speak. And Valla
proved to his own satisfaction that these foundations were shaky by
showing that the terminology and vocabulary of the scholastics rested
on a misunderstanding of Latin and of the workings of language in
general. But even though his criticisms are mainly addressed to
Aristotle, Boethius, and Porphyry, his more general opinion of
Aristotelianism was, of course, formed by what he saw in his own time.
He loathed the philosophical establishment at the universities, their
methods, genres, and, above all, their style and terminology. He
thought that they were slavishly following Aristotle. In his view,
however, a true philosopher does not hesitate to re-assess any opinion
from whatever source; refusing to align himself with a sect or school,
he presents himself as a critical, independent thinker
(*Repastinatio*, 1:1-8; 2:359-363; *DD*
1:2-12). Whether scholasticism had, indeed, ossified by the time Valla
came on the scene is a matter of debate; but there is no question that
this is how he and the other humanists saw it.
## 4. Moral Philosophy
The same critical spirit also infuses Valla's work on moral
philosophy. In his dialogue, published as *De voluptate* in
1431, when he was still in his mid-twenties, and revised two years
later under the title *De vero bono* (*On the True
Good*), Valla presents a discussion between an
"Epicurean," a "Stoic," and a
"Christian" on an age-old question: what is the highest
ethical good? The result of this confrontation between pagan and
Christian moral thought is a combination of Pauline fideism and
Epicurean hedonism, in which the Christian concepts of charity and
beatitude are identified with hedonist pleasure, and the
"Stoic" concept of virtue is rejected (Valla, *De vero
falsoque bono*). Valla thus treats Epicureanism as a
stepping-stone to the development of a Christian morality based on the
concept of pleasure, and repudiates the traditional synthesis of
Stoicism and Christianity, popular among scholastics and humanists
alike. The substance of the dialogue is repeated in a long chapter in
his *Repastinatio* (*Repastinatio*, 1:73-98;
2:411-418; *DD* 1:130-75).
Valla's strategy is to reduce the traditional four
virtues--prudence, justice, fortitude, and propriety (or
temperance)--to fortitude, and then to equate fortitude with
charity and love. For Valla, fortitude is the essential virtue, since
it shows that we do not allow ourselves to be conquered by the wrong
emotions, but instead act for the good. As a true virtue of action,
it is closely connected to justice and is defined as "a certain
resistance against both the harsh and the pleasant things which
prudence has declared to be evils." It is the power to tolerate
and suffer adversity and bad luck, but also to resist the
blandishments of a fortune which can be all too good, thus weakening
the spirit. Fortitude is the only true virtue, because virtue resides
in the will, since our actions, to which we assign moral
qualifications, proceed from the will.
Valla's reductive strategy has a clear aim: to equate this
essential virtue of action, fortitude, with the biblical concept of
love and charity. This step requires some hermeneutic manipulation,
but the Stoic overtones of Cicero's account in *De
officiis* have prepared the way for it--ironically, perhaps,
in view of Valla's professed hostility towards
Stoicism--since enduring hardship with Stoic patience is easily
linked to the Pauline message that we become strong by being tested
(II Cor. 12:10, quoted by Valla). The labor, sweat, and trouble we
must bear, though bad in themselves, "are called good because
they lead to that victory," Valla writes, echoing St Paul
(*Repastinatio*, 1:88-89; 2:415; *DD* 1:156). We
do not, then, strive to attain virtue for its own sake, since it is
full of toil and hardship, but rather because it leads us to our goal.
This is one of Valla's major claims against the Stoics and the
Peripatetics, who--at least in Valla's
interpretation--regarded virtue as the end of life, that is, the
goal which is sought for its own sake. Because virtuous behavior is
difficult, requiring us to put up with harsh and bitter afflictions,
no one naturally and voluntarily seeks virtue as an end in itself.
What we seek is pleasure or delectation, both in this life
and--far more importantly--in the life to come.
By equating pleasure with love, Valla can argue that it is love or
pleasure that is our ultimate end. This entails the striking notion
that God is not loved for his own sake, but for the sake of love:
"For nothing is loved for its own sake or for the sake of
something else as another end, but the love itself is the end"
(*Repastinatio*, 2:417). This is a daring move. Traditionally,
God was said to be loved for his own sake, not for his usefulness in
gaining something else. Many thinkers agreed with Augustine that
concupiscent love was to be distinguished from friendship, and, with
respect to heavenly beatitude, use from fruition. We can love
something as a means to an end (use), and we can love something for
its own sake (fruition). But because Valla has maintained that
pleasure is our highest good, God can only be loved as a means to that
end.
It is therefore a moot point whether Valla successfully integrated
Epicurean hedonism with Christian morality. He seems to argue that the
Epicurean position is valid only for the period before the coming of
Christ. In our unredeemed state, we are rightly regarded as
pleasure-seeking animals, governed by self-interest and utilitarian
motives. After Christ's coming, however, we have a different
picture: repudiating Epicurean pleasure, we should choose the harsh
and difficult life of Christian *honestas* (virtue) as a step
towards heavenly beatitude. Yet, the two views of human beings are not
so readily combined. On the one hand, there is the positive evaluation
of pleasure as the fundamental principle in human
psychology--which is confirmed and underscored by the
terminological equation of *voluptas* (pleasure),
*beatitudo* (beatitude), *fruitio* (fruition),
*delectatio* (delectation), and *amor* (love). On the
other hand, Valla states apodictically that there are two pleasures:
an earthly one, which is the mother of vice, and a heavenly one, which
is the mother of virtue; that we should abstain from the former if we
want to enjoy the latter; and that the natural, pre-Christian life is
"empty and worthy of punishment" if not put in the wider
perspective of human destiny. In other words, we are commanded to live
the arduous and difficult life of Christian *honestas*, ruled
by restraint, self-denial, and propriety (temperance), and, at the
same time, to live a hedonist life, which consists of the joyful,
free, and natural gratification of the senses.
Another of his targets is the Aristotelian account of virtue as a mean
between two extremes. According to Valla, each individual vice is
instead the opposite of an individual virtue. He makes this point by
distinguishing between two different senses of the same virtue,
showing that they have different opposites. So, while Aristotle
regards fortitude as the mean between the vices of rashness and
cowardice, Valla argues that there are two aspects to fortitude:
fighting bravely and being cautious (for instance, in yielding to the
victorious enemy), with cowardice and rashness as their respective
opposites. Likewise, generosity is not the mean between avarice and
prodigality, but has two aspects: giving and not giving. Prodigality
is the opposite of the first aspect, avarice of the second, for which
we should use the term frugality or thrift rather than generosity.
More generally, the terminology of vices as defects and excesses and
virtue as a mean is misleading; virtues and vices should not be ranked
"according to whether they are at the bottom, or halfway up, or
at the top." Interestingly, a similar critique of
Aristotle's notion of virtue as a mean between two extremes has
been raised in modern scholarship (for example, by W. D. Ross).
Valla regards the Aristotelian notion of virtue as too static and
inflexible, so that it does not do justice to the impulsive nature of
our moral behavior. For him, virtue is not a habit, as Aristotle
believed, but rather an affect, an emotion or feeling that can be
acquired and lost in a moment's time. Virtue is the domain
solely of the will. The greatest virtue or the worst vice may arise
out of one single act. And because virtue is painful and vice
tempting, one may easily slide from the one into the other, unlike
knowledge, which does not turn into ignorance all of a sudden. Valla
therefore frequently removes the notions of knowledge, truth, and
prudence from the sphere of moral action. Virtues as affects are
located in the rear part of the soul, the will, while the domains of
knowledge, truth, and opinion reside in the other two faculties,
memory and reason (*Repastinatio*, 1:73-74; *DD*
1:130). This is not to say that the will is independent from the
intellectual capacities. The affects need reason as their guide, and
the lack of such guidance can result in vice. But Valla is not
entirely clear as to what element we should assign the moral
qualification "good" or "bad." He identifies
virtues with affects, and says that only these merit praise and blame
(*Repastinatio*, 1:74; *DD* 1:130-32); however, he also
writes that the virtues, as affects, cannot be called good or bad in
themselves, but that these judgments apply only to the will, that is,
to the will's choice. This is underlined by his remark that
virtue resides in the will rather than in an action
(*Repastinatio*, 1:77; *DD* 1:136). In his discussion of
the soul, however, he frequently calls reason the will's guide
(e.g., Valla *Repastinatio*, 1:75; *DD* 1:132), and also
says that the affects should follow reason, so that it, too, may be
held responsible, in the final analysis, for moral behavior (even
though he also explicitly denies that the will is determined by
reason). Finally, pleasure, delectation, or beatitude are also called
virtue by the equation of virtue with love and with charity
(*Repastinatio*, 1:85; *DD* 1:150-52).
Moreover, Valla's insistence on the will as the locus of moral
behavior seems compromised by the predestinarianism advocated by the
interlocutor "Lorenzo" in his dialogue *De libero
arbitrio* (*On Free Will*). In this highly rhetorical work,
Valla--if we can assume that Lorenzo represents Valla's own
position--stresses that in his inscrutable wisdom God hardens the
hearts of some, while saving those of others. We do not know why, and
it is presumptuous and vain to inquire into the matter. Yet, somehow
we do have free will, "Now, indeed, He brings no necessity, and
His hardening one and showing another mercy does not deprive us of
free will, since He does this most wisely and in full holiness"
(Valla 1948, 177). So, we are free, after all, and God's
foreknowledge does not necessitate the future; but how exactly Valla
thinks his views settle these issues is not clear. His fideistic point
is that we humans cannot settle the issue; 'the cause of the
divine will which hardens one and shows mercy to another is known
neither to man nor to angels'. Yet, in spite of his fideism and
anti-intellectualism, his own discussion of God and free will, as well
as his account of the Trinity in the *Repastinatio*, makes
substantive claims about what God is.
In conclusion, Valla's moral thought can be described--with
some justice--as hedonistic, voluntarist, and perhaps also
empirical (in the sense of taking account of how people actually
behave). On closer inspection, however, his account seems to contain
the seeds of several ideas that are not so easily reconciled with one
another. This is doubtless due, in no small measure, to his
eclecticism, his attempt to bring into one picture Aristotelian
ethics, the Stoic virtues of Cicero, the biblical concepts of charity
and beatitude, and the Epicurean notion of hedonist pleasure--each
with its own distinctive terminology, definitions, and
philosophical context.
## 5. Evaluation
Valla's contributions to historical, classical, and biblical
scholarship are beyond doubt, and helped to pave the way for the
critical textual philology of Poliziano, Erasmus, and later
generations of humanists. Valla grasped the important
insight--which was not unknown to medieval philosophers and
theologians--that the meaning of a text can be understood only
when it is seen as the product of its original historical and cultural
context. Yet his attempt to reform or transform the scholastic study
of language and argumentation--and, indeed, their entire mode of
doing philosophy--is likely to be met with skepticism or even
hostility by the historian of medieval philosophy who is dedicated to
the argumentative rigor and conceptual analysis which are the
hallmarks of scholastic thought. Nevertheless, while it may be true
that Valla's individual arguments are sometimes weak,
superficial, and unfair, his critique as a whole does have an
important philosophical and historical significance. The following two
points, in particular, should be mentioned.
First, the humanist study of Stoicism, Epicureanism, skepticism, and
Neoplatonism widened the philosophical horizon and eroded faith in the
universal truth of Aristotelian philosophy--an essential
preparatory stage for the rise of early modern thought. In
Valla's day, Aristotle was still "the Philosopher,"
and scholastics put considerable effort into explaining his words.
Valla attacks what he sees as the *ipse dixit* attitude of the
scholastics. For him, a true philosopher does not follow a single
master but instead says whatever he thinks. Referring to
Pythagoras' modest claim that he was a not a wise man but a
lover of wisdom (*Repastinatio*, 1:1; *DD* 1:2), Valla
maintains that he does not belong to any sect (including that of the
skeptics) and wants to retain his independence as critical thinker.
What the scholastics forget, he thinks, is that there were many
alternatives in antiquity to the supposedly great master, many sects,
and many other types of philosopher. In criticizing Aristotle's
natural philosophy, for instance, he gave vent to a sentiment which
ultimately eroded faith in the Aristotelian system. This does not,
however, mean that he was developing a non-Aristotelian natural
philosophy; his rejection of Aristotle's account of nature was
primarily motivated by religious and linguistic considerations.
Indeed, Valla's insistence on common linguistic usage, combined
with his appeal to common sense and his religious fervor, seems at
times to foster a fideism that is at odds with an exploratory attitude
towards the natural world. But with hindsight we can say that
*any* undermining of the faith in the exclusiveness of the
scholastic-Aristotelian worldview contributed to its demise and,
ultimately, to its replacement by a different, mechanistic one. And
although the humanist polemic was only one factor among many others,
its role in this process was by no means negligible. Likewise, in
attacking Peripatetic moral philosophy, Valla showed that there were
alternatives to the Aristotelian paradigm, even though his use of
Epicureanism and Stoicism was rhetorical rather than historical.
The second point relates to the previous one. In placing himself in
opposition to what he regarded as the Aristotelian paradigm, Valla
often interprets certain doctrines--the syllogism, hypothetical
syllogism, modal propositions, and the square of contraries--in
ways they were not designed for. In such cases, we can see Valla
starting, as it were, from the inside of the Aristotelian paradigm,
from some basic assumptions and ideas of his opponents, in order to
refute them by using a kind of *reductio ad absurdum* or
submitting them to his own criteria, which are external to the
paradigm. This moving inside and outside of the Aristotelian paradigm
can explain (and perhaps excuse) Valla's inconsistency, for it
is an inconsistency which is closely tied to his tactics and his
agenda. He does not want to be consistent if this means merely obeying
the rules of the scholastics, which in his view amounted to rigorously
defining one's terms and pressing these into the straightjacket
of a syllogistic argument, no matter what common sense and linguistic
custom teach us. Behind this inconsistency, therefore, lies a
consistent program of replacing philosophical speculation and
theorizing with an approach based on common linguistic practice and
common sense. But arguably it has also philosophical relevance; for
throughout the history of philosophy a warning can be heard against
abstraction, speculation, and formalization. One need not endorse this
cautionary note in order to see that philosophy thrives on the
creative tension between, on the one hand, a tendency to abstract,
speculate, and formalize, and, on the other, a concern that the object
of philosophical analysis should not be lost from sight, that
philosophy should not become a game of its own--an abstract and
theoretical affair that leaves the world it purports to analyze and
explain far behind, using a language that can be understood only by
its own practitioners.
## 1. Value Incommensurability
Incommensurability between values must be distinguished from the kind
of incommensurability associated with Paul Feyerabend (1978, 1981,
1993) and Thomas Kuhn (1977, 1983, 1996) in epistemology and the
philosophy of science. Feyerabend and Kuhn were concerned with
incommensurability between rival theories or paradigms -- that
is, the inability to express or comprehend one conceptual scheme, such
as Aristotelian physics, in terms of another, such as Newtonian
physics.
In contrast, contemporary inquiry into value incommensurability
concerns comparisons among abstract values (such as liberty or
equality) or particular bearers of value (such as a certain
institution or its effects on liberty or equality). The term
"bearer of value" is to be understood broadly. Bearers of
value can be objects of potential choice (such as a career) or states
of affairs that cannot be chosen (such as a beautiful sunset). Such
bearers of value are valuable in virtue of the abstract value or
values they instantiate or display (so, for example, an institution
might be valuable in virtue of the liberty or equality that it
engenders or embodies).
### 1.1 Measurement and Comparison
The term "incommensurable" suggests the lack of a common
measure. This idea has its historical roots in mathematics. For the
ancient Greeks, who had not recognized irrational numbers, the
dimensions of certain mathematical objects were found to lack a common
unit of measurement. Consider the side and the diagonal of a unit
square. These can be compared or ranked ordinally, since the diagonal
is longer. However, without the use of irrational numbers, there is no
way to specify with cardinal numbers exactly how much longer the
diagonal is than the side of a unit square. The significance of this
kind of incommensurability, especially for the Pythagoreans, is a
matter of some debate (Burkert 1972, 455-465). Hippasus of Metapontum,
who was thought by many to have demonstrated this kind of
incommensurability, is held by legend to have been drowned by the gods
for revealing his discovery (Heath 1921, 154; von Fritz 1970,
407).
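The classical argument behind this example can be reconstructed briefly (a standard textbook reconstruction, not part of the ancient sources cited above):

```latex
% Let s be the side and d the diagonal of a unit square.
% By the Pythagorean theorem:
\[ d^2 = s^2 + s^2 = 2s^2 \quad\Longrightarrow\quad \frac{d}{s} = \sqrt{2}. \]
% A common unit u, with s = m u and d = n u for integers m and n,
% would give \sqrt{2} = n/m. But \sqrt{2} is irrational, so no such
% common unit exists: the two lengths are incommensurable, even though
% they remain ordinally comparable (d > s).
```

The last observation is what the next section exploits: lacking a cardinal measure is compatible with an ordinal ranking.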
Given these historical roots, some authors reserve the term
"incommensurable" for comparisons that can be made, but
not cardinally (Stocker 1980, 176; Stocker 1997, 203; Chang 1997b, 2).
Others interpret the idea of a common measure more broadly. On this
broader interpretation, for there to be a common measure, all that is
required is that ordinal comparisons or rankings are possible. Values
or bearers of value are then incommensurable only when not even an
ordinal comparison or ranking is possible (e.g., Raz 1986; Rabinowicz
2021a). On this interpretation, incommensurability is defined as the
relation that holds between two items when neither is better than the
other nor are they equally as good.
Others
have not given an exact
definition of the term, but use it as an inclusive umbrella term for
comparability problems in general. This inclusive interpretation has
the advantage of encompassing a field of diverse philosophical
discussions which engage with problems concerning value comparisons.
This entry encourages the use of "incommensurable" in the
etymologically correct way, i.e., to refer to the lack of a cardinal
scale.
### 1.2 Incommensurable or Incomparable?
Because the idea of comparison is closely tied to the topic of value
incommensurability, this has led to use of the term
"incomparable" alongside "incommensurable" in
the literature. Some authors use the terms interchangeably (e.g., Raz
1986). Others use them to refer to distinct concepts (e.g., Chang
1997b). This entry will use the term "incomparable" to
refer to the possibility that no positive value relation holds between
two value bearers (Chang 1997b). Positive value relations specify how
two items compare (e.g., "better than") rather than how
two items do not compare (e.g., "not better than").
Incomparability is often taken to be a three-place relation: *A* is
incomparable with *B* with respect to *V*. "*V*" is here some
specific consideration for which the items are being compared. One
career may be incomparable to another with respect to the sense of
purpose it will bring to your life, yet they may be comparable with
respect to financial stability since one is clearly better than the
other in this regard. Some argue that without specifying this
"covering consideration" the comparative claim makes no
sense (Chang 1997; Thomson 1997; Andersson 2016b). Often, however, the
covering consideration is not explicitly expressed, but implicitly
assumed by the context of utterance.
Specifying the covering consideration also allows us to identify
"noncomparability". If no single covering consideration is
applicable to the things we are comparing, then the things are
noncomparable. For example, the number four is noncomparable with the
color blue with respect to their tastiness. This form of comparative
failure is considered to be of no interest from the perspective of
practical reasoning and is consequently not given much consideration
in the literature.
From the fact that two things are incommensurable, i.e., lack a common
unit of measurement, it does not follow that they are incomparable. It
is possible to correctly judge that one thing is better than the other
even though it is impossible to measure how much better it is. However,
when there is incomparability there is also incommensurability. If no
positive value relation holds between two things, then they cannot be
placed on the same cardinal scale.
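Schematically, writing "≻" for "better than" and "∼" for "equally good" (notation introduced here for illustration, not drawn from the literature cited), the entailment runs one way only:

```latex
% Incomparability of A and B: no positive value relation holds.
\[ \neg(A \succ B) \;\wedge\; \neg(B \succ A) \;\wedge\; \neg(A \sim B) \]
% This entails incommensurability: with no positive relation, there is
% no cardinal scale on which both items can be placed. The converse
% fails: A \succ B may hold while no cardinal measure of how much
% better A is exists.
```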
While both incommensurability and incomparability are important
concepts, much of the recent research has focused on the more specific
possibility of incomparable value bearers. Different competing
accounts have been given for examples that seem to point to the
possibility of things being incomparable. These examples often take
the form similar to that of Joseph Raz's example in which a
person faces the choice between two successful careers: one as a
lawyer and one as a clarinetist. Neither career seems better than the
other, and they also do not appear to be equally good. If they were of
equal value, then a slightly improved version of the legal career
would be better than the musical career, but this judgment appears
incorrect (Raz 1986, 332).
The focus on incomparability can be motivated by its possible
significance for rational choice. Comparing alternatives is central
to rational choice: we want to know which alternative is best, and
thus examples of incomparability are taken to be a threat to rational
decision-making. Incommensurability, on the other hand, is mostly of
theoretical interest. The possibility of incommensurable values
introduces restrictions for normative theories, in the sense that they
should not assume that all values can be represented on the same
cardinal scale. The focus on incomparability can thus be justified by
referring to its relation to practical reason and rational choice.
The remainder of section 1 considers proposed ways in which to
conceive of value incommensurability.
### 1.3 Conceptions of Value Incommensurability
This section outlines three conceptions of value incommensurability,
each one capturing some sense in which incommensurability is due to
the lack of a common measure.
The first conception characterizes value incommensurability in terms
of restrictions on how the further realization of one value outranks
realization of another value. James Griffin has proposed forms of
value incommensurability of this sort. One form involves what he calls
"trumping." In a conflict between values *A* and
*B*, *A* is said to trump *B* if
"*any* amount of *A*, no matter how small, is more
valuable than *any* amount of *B*, no matter how
large" (Griffin 1986, 83). A weaker form of value
incommensurability involves what Griffin calls
"discontinuity." Two values, *A* and *B*,
are incommensurable in this sense if "so long as we have enough
of *B* any amount of *A* outranks any further amount of
*B*; or that enough of *A* outranks any amount of
*B*" (Griffin 1986, 85).
If values are incommensurable in this first sense, there is no
ambiguity as to whether the realization of one value outranks
realization of the other. Ambiguity as to whether the realization of one value
outranks realization of the other, however, is thought by many
theorists to be a central feature of incommensurable values. The
second and third conceptions of value incommensurability aim to
capture this feature.
According to the second conception, values are incommensurable if and
only if there is no true general overall ranking of the realization of
one value against the realization of the other value. David Wiggins,
for example, puts this forward as one conception of value
incommensurability. He writes that two values are incommensurable if
"there is no general way in which A and B trade off in the whole
range of situations of choice and comparison in which they
figure" (1997, 59).
This second conception of value incommensurability denies what Henry
Richardson calls "strong commensurability" (1994,
104-105). Strong commensurability is the thesis that there is a true
ranking of the realization of one value against the realization of the
other value in terms of one common value across all conflicts of
value. A denial of such a singular common value, however, does not
rule out what Richardson calls "weak commensurability"
(1994, 105). Weak commensurability is the thesis that in any given
conflict of values, there is a true ranking of the realization of one
value against the realization of the other value in terms of some
value. This value may be one of the values in question or some
independent value, and it may differ across value conflicts.
The denial of strong commensurability does not entail a denial of weak
commensurability. Even if there is no systematic or general way to
resolve any given conflict of values, there may be some value in
virtue of which the realization of one value ranks against realization
of the other. Donald Regan defends the thesis of strong
commensurability (Regan 1997).
The third conception of value incommensurability denies both strong
and weak commensurability (Richardson 1994, Wiggins 1997, Williams
1981). This conception claims that in some conflicts of values, there
is no true ranking of values.
This third conception of value incommensurability is sometimes said to
be necessary to explain why, in conflicts of value, a gain in one
value does not always cancel the loss in another value. This view
assumes that, whenever there is a true ranking between the realization
of one value and the realization of another value, the gain in one of
the values cancels the loss in the other. Many commentators question
that assumption. This entry leaves open the possibility that when
there is a true ranking between the realization of one value and the
realization of another value, the gain in one of the two values need
not cancel the loss in the other.
If we accept this possibility, a number of questions arise. One
question is what a ranking of realizations of values means if a gain
in one value does not cancel the loss in the other. A second question
is whether the first and second conceptions of value
incommensurability each admit of two versions: one version in which
the gain in one value cancels the loss of the other and one version in
which it does not. A third question concerns the relation between
value incommensurability and tragedy. It may be thought that what
makes a choice tragic is that, no matter which alternative is chosen,
the gain in one value cannot cancel the loss in the other. Not all
authors, however, regard all value conflicts involving incommensurable
values as tragic (Richardson 1994, 117). Wiggins, for example,
reserves the second conception of value incommensurability for what he
calls "common or garden variety incommensurable" choices
and the third conception for what he calls "circumstantially cum
tragically incommensurable" choices (1997, 64).
## 2. Incomparability
Rather than focus on commensurability between abstract values, recent
research focuses on the incomparability between concrete bearers of
value, often in the context of choice (Broome 1997, 2000; Chang 1997,
2002; Griffin 1986; Raz 1986). Bearers of value sometimes appear
incomparable in cases like Joseph Raz's example of the choice
between careers (described above in subsection 1.2).
The case for incomparability in such examples relies in part on what
Ruth Chang calls the "Small Improvement Argument" (Chang
2002b, 667). As noted in the initial discussion of the example, if the
legal and musical careers were of equal value, then a slightly
improved version of the legal career would be better than the musical
career, but this judgment appears incorrect. The Small Improvement
Argument takes the following general form: "if (1) *A* is
neither better nor worse than *B* (with respect to *V*),
(2) *A*+ is better than *A* (with respect to
*V*), (3) *A*+ is not better than *B* (with
respect to *V*), then (4) *A* and *B* are not
related by any of the standard trichotomy of relations (relativized to
*V*)" where *V* represents the relevant set of
considerations for purposes of the comparison (Chang 2002b,
667-668). In addition to Raz, Derek Parfit and Walter
Sinnott-Armstrong are among those who have advanced the Small
Improvement Argument (Parfit 1984; Sinnott-Armstrong 1985). A similar
argument for preference relations, however, has a longer history:
Leonard J. Savage hints at the possibility as early as 1954, R. Duncan
Luce presents a version of the argument in 1958 that he attributes to
Howard Raiffa, and Ronald de Sousa discusses it in 1974 (Savage 1954;
Luce 1958; de Sousa 1974).
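The logical form of the Small Improvement Argument can be checked mechanically. The sketch below is an illustration of that form, not a formalization from the literature: the item names are placeholders, and it assumes only that "better than" is transitive and that equally good items are interchangeable in betterness claims. It closes each trichotomous hypothesis about *A* and *B* under those two rules and confirms that every hypothesis contradicts one of premises (1)-(3).

```python
# Illustrative model check of the Small Improvement Argument's structure.
# Items: A (e.g., the legal career), APLUS (a slightly improved A),
# B (e.g., the musical career). Premises:
#   (1) not A > B and not B > A;  (2) APLUS > A;  (3) not APLUS > B.
from itertools import product

def close(better, equal):
    """Close a set of strict 'better than' pairs under transitivity and
    under substitution of equally good items."""
    better, equal = set(better), set(equal)
    changed = True
    while changed:
        changed = False
        # transitivity: x > y and y > z imply x > z
        for (x, y), (y2, z) in product(list(better), repeat=2):
            if y == y2 and (x, z) not in better:
                better.add((x, z)); changed = True
        # substitution: if p and q are equally good, they are
        # interchangeable in any betterness claim
        for (x, y) in list(better):
            for (p, q) in list(equal):
                candidates = []
                if x == p: candidates.append((q, y))
                if x == q: candidates.append((p, y))
                if y == p: candidates.append((x, q))
                if y == q: candidates.append((x, p))
                for pair in candidates:
                    if pair not in better:
                        better.add(pair); changed = True
    return better

def consistent_with_premises(better):
    return (("A", "B") not in better and ("B", "A") not in better  # (1)
            and ("APLUS", "A") in better                           # (2)
            and ("APLUS", "B") not in better)                      # (3)

base = {("APLUS", "A")}  # premise (2)
results = {}
for hypothesis in ["A > B", "B > A", "A = B"]:
    if hypothesis == "A > B":
        closed = close(base | {("A", "B")}, set())
    elif hypothesis == "B > A":
        closed = close(base | {("B", "A")}, set())
    else:
        closed = close(base, {("A", "B")})
    results[hypothesis] = consistent_with_premises(closed)

print(results)  # each trichotomous hypothesis is inconsistent with the premises
```

Under these assumptions the first two hypotheses contradict premise (1) directly, and "A = B" lets premise (2) propagate to "APLUS > B", contradicting premise (3), which is exactly the reasoning the argument trades on.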
Focusing on the incomparability of bearers of value has given rise to
two lines of inquiry in the literature. The first concerns the
relation between incomparability and vagueness; some argue that
alleged examples of incomparability are better understood in terms of
vagueness. The second concerns the range of comparative relations that
can hold between two items; some argue that the Small Improvement
Argument does not establish incomparability but rather that there are
more value relations than previously believed. This section summarizes
debate within each line of inquiry.
### 2.1 Vagueness
Raz distinguishes incomparability from what he calls the
"indeterminacy" of value. Recall that Raz defines two
bearers of value as incomparable if and only if it is not true that
"either one is better than the other or they are of equal
value." The indeterminacy of value is a case of vagueness: it is
neither true nor false of two items that "either one is better
than the other or they are of equal value." Raz regards the
indeterminacy of value to result from the "general indeterminacy
of language" (1986, 324).
In contrast, other philosophers argue for interpreting alleged
examples of incomparability as vagueness (Griffin 1986, 96; Broome
1997, 2000; Andersson 2017; Elson 2017; Dos Santos 2019). Some have
argued that the Small Improvement Argument fails to rule out the
possibility that it is indeterminate how the value bearers relate;
they may be related by one of the standard trichotomous relations, but
it could be indeterminate which (Wasserman 2004; Klocksiem 2010;
Gustafsson 2013).
There are also more positive arguments in favor of the vagueness
interpretation. For example, Broome introduces what he calls a
"standard configuration" (1997, 96; 2000, 23). Imagine the
musical and legal careers from Raz's example. Fix the musical
career as the "standard." Now imagine variations in the
legal career arranged in a line such that in one direction, the
variations are increasingly better than the standard and in the other
direction, the standard is increasingly better than the variations.
There is an intermediate zone of legal careers that are not better
than the standard and such that the standard is not better than the
legal careers. If this zone contains one item, Broome defines this
legal career to be equally good with the standard. If this zone
contains more than one item, the zone is either one of "hard
indeterminacy" or one of "soft indeterminacy." A
zone cannot be both one of hard indeterminacy and one of soft
indeterminacy. In a zone of hard indeterminacy, it is false that the
legal careers are better than the standard and false that the standard
is better than the legal careers (1997, 73, 76). In a zone of soft
indeterminacy, it is neither true nor false that the legal careers are
better than the standard and neither true nor false that the standard
is better than the legal careers (1997, 76). The latter is a zone of
vagueness. Broome argues that indeterminate comparatives, including
"better than," are softly indeterminate; thus there is no
hard indeterminacy and no room for the possibility suggested by the
Small Improvement Argument.
By understanding incomparability to entail vagueness, Broome disagrees
with Raz (Broome 2000, 30). Raz defines incomparability so that it is
compatible with vagueness, but not so that it entails vagueness.
Griffin also argues that incomparability entails vagueness (1986, 96).
Where Broome disagrees with Griffin is with regard to the width and
significance of the zone of soft indeterminacy. Broome takes Griffin
to suggest that if there is a zone of soft indeterminacy, it is narrow
and unimportant. Broome argues that vagueness need not imply either
narrowness or lack of importance (2000, 30-31).
Central to Broome's argument is his controversial
"collapsing principle" for comparatives: For any *x*
and *y*, if it is false that *y* is *F*er than *x*
and not false that *x* is *F*er than *y*, then it is
true that *x* is *F*er than *y* (1997, 74). Many
counterexamples have been presented to show the implausibility of the
principle (Carlson 2004, 2013; Elson 2014b; Gustafsson 2018). While
there have been attempts to defend the collapsing principle, or
versions of it (Constantinescu 2012; Andersson & Herlitz 2018), the
counterexamples make Broome's argument less convincing. Another
line of argument has consequently been developed in defense of the
vagueness interpretation: the vagueness interpretation is
theoretically parsimonious, since it is natural to accept the
existence of evaluative vagueness, and if vagueness can effortlessly
account for alleged examples of incomparability, then there is no
reason to accept the more mysterious notion of incomparability or to
introduce value relations beyond the standard trichotomous relations
(Andersson 2017; Elson 2017).
### 2.2 "Roughly Equal" and "On a Par"
The second line of inquiry concerns the set of possible comparative
relations that can obtain between two items. The Small Improvement
Argument for the incomparability of the musical career and the legal
career in Raz's example assumes what Chang calls the
"trichotomy thesis." The trichotomy thesis holds that if
two items can be compared in terms of some value or set of values,
then the two items are related by one of the standard trichotomy of
comparative relations, "better than," "worse
than," or "equally good" (2002b, 660). A number of
authors have argued that these three comparative relations do not
exhaust the space of comparative relations. If they are correct, the
musical career and the legal career may, in fact, be comparable.
James Griffin and Derek Parfit argue that items may in fact be
"roughly equal" and hence comparable (Griffin 1986, 80-81,
96-98, and 104; 1997, 38-39; 2000, 285-289; Parfit 1987, 431). As an
illustration, Parfit imagines comparing two poets and a novelist for a
literary prize (1987, 431). Neither the First Poet nor the Novelist is
worse than the other and the Second Poet is slightly better than the
First Poet. If the First Poet and the Novelist were equally good, it
would follow that the Second Poet is better than the Novelist. This
judgment, according to Parfit, need not follow. Instead, the First
Poet and the Novelist may be roughly equal. The intuition is that even
when all three items display the same respects in virtue of which the
comparisons are made, some comparisons are inherently rough, so that
two alternatives may be neither worse than one another nor equally
good. In turn, the musical and legal careers in
Raz's example may be roughly equal. Parfit later referred to
this possibility as there being "evaluative imprecision"
(Parfit 2016, 113).
"Roughly equal," as used here, is to be distinguished from
two other ways in which the term has been used: (1) to refer to a
small difference in value between two items and (2) to refer to a
choice of little significance (Raz 1986, 333). As used here, two items
A and B are said to be roughly equal if neither is worse than the
other and C's being better than B does not imply that C is
better than A when the comparisons are all in virtue of the same set
of respects.
There is some debate as to whether "roughly equal" is in
fact a fourth comparative relation to be considered in addition to the
three standard relations, "better than," "worse
than," and "equally good." One way to conceive of
"roughly equal" is as a "roughed up" version
of "equally good." On this interpretation, the trichotomy
thesis basically holds; there are simply precise and rough versions
(Chang 2002b, 661, fn. 5). Furthermore, "roughly equal" is
a relation that can be defined only in virtue of three items, and as
such, appears to be something distinct from a standard comparative
relation. The standard comparative relations are binary; transitivity
with respect to them is a separate condition (Hsieh 2005, 195).
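The point that recognizing rough relations would put pressure on transitivity can be illustrated with a toy model. The stipulation below is purely hypothetical (it reads rough equality as "numerical merit scores within a fixed threshold", which is closer to the "small difference" reading than to the sense at issue), but even this simple version shows how a rough relation built from binary comparisons fails transitivity.

```python
# Toy model (hypothetical): rough equality as "scores within a threshold".
# The scores and the threshold are stipulated for illustration only.

THRESHOLD = 1.0

def roughly_equal(x: float, y: float) -> bool:
    """Two items count as roughly equal when their merit scores
    differ by at most THRESHOLD."""
    return abs(x - y) <= THRESHOLD

a, b, c = 5.0, 5.8, 6.6  # stipulated merit scores for three items

print(roughly_equal(a, b))  # True
print(roughly_equal(b, c))  # True
print(roughly_equal(a, c))  # False: the relation is not transitive
```

On this toy reading, a chain of pairwise rough equalities does not yield rough equality between the endpoints, which is one concrete way a "roughed up" relation behaves unlike the standard binary comparatives.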
A separate proposal for a genuine fourth relation is Ruth
Chang's argument for "on a par" (Chang 1997; Chang
2002b). Two items are said to be on a par if neither is better than
the other, their differences preclude their being equally good, and
yet they are comparable. Whereas "roughly equal" is
invoked to allow for comparability among alternatives that display the
same respects (e.g., literary merit), "on a par" is
invoked to allow for comparability between alternatives that are
different in the respects that they display. Imagine comparing Mozart
and Michelangelo in terms of creativity. According to Chang, neither
Mozart nor Michelangelo is less creative than the other. Because the
two artists display creativity in such different fields, however, it
would be mistaken to judge them to be equally creative. Nevertheless,
according to Chang, they are comparable with respect to creativity.
Something can be said about their relative merits with respect to the
same consideration. According to Chang, they are on a par.
Chang's argument relies on invoking a continuum that resembles
Broome's standard configuration. Chang asks us to imagine a
sequence of sculptors who are successively worse than Michelangelo
until we arrive at a sculptor who is clearly worse than Mozart in
terms of creativity. Chang then brings to bear the intuition that
"between two evaluatively very different items, a small
unidimensional difference cannot trigger incomparability where before
there was comparability" (2002b, 673). In the light of this
intuition, because Mozart is comparable to this bad sculptor, Mozart
is also comparable to each of the sculptors in the sequence, including
Michelangelo. The soundness of this so-called Chaining Argument has,
however, been questioned (Boot 2009; Elson 2014a; Andersson
2016a).
Whether "roughly equal" or "on a par" imply
comparability is a matter of some debate. "Roughly equal"
and "on a par," for example, are intransitive, so to
recognize them as distinct comparative relations would require us to
reconsider the transitivity of comparative relations (Hsieh 2007).
Some authors suggest that one of the three standard comparative
relations obtains between all items that are claimed to be
incomparable, roughly equal, or on a par (Regan 1997). In the case of
"on a par," Joshua Gert and Erik Carlson have argued that
it can be defined in terms of the three standard comparative relations
(Gert 2004; 2015; Carlson 2010). Gert's suggestion is developed
further by Wlodek Rabinowicz who provides a Fitting Attitudes account
of value relations (Rabinowicz 2008; 2012). The Fitting Attitudes
account typically analyzes goodness in terms of a normative component
and an attitudinal component. By acknowledging that there can be two
levels of normativity, requirement and permissibility, the account
leaves room for both parity and standard value relations. *X* is
better than *Y* if and only if it is rationally required to
prefer *X* to *Y*, and *X* and *Y* are on a par if
and only if it is rationally permissible to prefer *X* to
*Y* and also rationally permissible to prefer *Y* to
*X*. Interestingly, preferences and the lack of preferences
together with the two levels of normativity can be combined in 15
different ways. This means that the fitting attitudes analysis
provides the conceptual space for 15 possible value relations.
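The arithmetic behind the count of 15 can be sketched combinatorially. Assuming, as an illustration rather than as Rabinowicz's own presentation, four basic preferential attitudes toward a pair of items (preferring X, preferring Y, indifference, and a preferential gap), each value relation corresponds to a nonempty set of attitudes that are rationally permissible, and a four-element set has 2^4 - 1 = 15 nonempty subsets:

```python
# Combinatorial sketch (illustrative): identify a value relation between
# X and Y with the nonempty set of rationally permissible preferential
# attitudes toward the pair. The four-attitude list is an assumption
# made for illustration.
from itertools import combinations

attitudes = ["prefer X to Y", "prefer Y to X", "indifference",
             "preferential gap"]

relations = [frozenset(combo)
             for size in range(1, len(attitudes) + 1)
             for combo in combinations(attitudes, size)]

print(len(relations))  # 15 candidate value relations

# "X is better than Y" as defined in the text: preferring X is required,
# i.e., it is the only permissible attitude.
betterness = frozenset({"prefer X to Y"})
# One attitude set compatible with parity as defined in the text:
# preference in either direction is permissible.
parity_example = frozenset({"prefer X to Y", "prefer Y to X"})
print(betterness in relations, parity_example in relations)
```

The enumeration only shows where the number 15 comes from on this reading; which of the 15 sets correspond to named relations such as parity is a substantive question the fitting-attitudes literature addresses.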
Another view is that values can be "clumpy," meaning that
the values sort items into clumps. According to this view, once we
recognize the way in which the relation "equally good"
functions in the context of clumpy values, items that appear to
require "roughly equal" or "on a par" can be
judged equally good (Hsieh 2005).
## 3. Arguments for Value Incommensurability and Incomparability
While recent research has focused on value incomparability and how to
interpret examples that have the structure described by the Small
Improvement Argument, earlier research focused less on the structure
of value and more on substantive considerations that speak in favor of
incomparability. This section discusses such considerations.
First, a note on terminology. Most of the arguments discuss something
similar to the phenomenon of incomparability but refer to it as
"incommensurability". In this section, the terminology of
the authors under discussion will be respected, and thus
"incommensurability" will sometimes be used even where the
authors arguably refer to the phenomenon of incomparability.
It should be noted that value pluralism seems central to the
possibility of incomparability. After all, the most direct argument
against the possibility that values are incommensurable or
incomparable is based on value monism (e.g., Mill 1861 [1979] and Sidgwick
1874 [1981]). Donald Regan, for example, argues that "given any two
items (objects, experiences, states of affairs, whatever) sufficiently
well specified so that it is apposite to inquire into their
(intrinsic) value in the Moorean sense, then either one is better than
the other, or the two are precisely equal in value" (Regan 1997,
129).
This argument has been questioned. As Ruth Chang points out with
reference to John Stuart Mill, the fact that values may have both
qualitative and quantitative aspects could in principle allow value
bearers to be incomparable due to a difference in their qualitative
features (Chang 1997b, 16-17).
For purposes of this entry, it will not be assumed that value monism
rules out incomparability. In any case, many philosophers argue for
some form of value pluralism (Berlin 1969, Finnis 1981, Nagel 1979,
Raz 1986, Stocker 1990, Taylor 1982, Williams 1981). Although this
entry will not assume value pluralism, most philosophers who argue for
incommensurability or incomparability are value pluralists, so it will
be simpler to present their views by talking of separate values.
### 3.1 Constitutive Incommensurability
It has been argued that value incomparability is constitutive of
certain goods and values, and that we should consequently assume its
existence. Two versions of this argument are discussed here.
One version of this argument comes from Joseph Raz. Consider being
offered a significant amount of money to leave one's spouse for
a month. The indignation that is typically experienced in response to
such an offer, according to Raz, is grounded in part in the symbolic
significance of certain actions (1986, 349). In this case, "what
has symbolic significance is the very judgment that companionship is
incommensurable with money" (1986, 350). Although this form of
value incommensurability looks like trumping, Raz does not see this as
a case of trumping. He rejects the view that companionship is more
valuable than money. If such a view were correct, then those who forgo
companionship for money would be acting against reason (1986, 352).
Instead, Raz takes the view that a "belief in incommensurability
is itself a qualification for having certain relations" (1986,
351). Someone who does not regard companionship and money as
incommensurable has simply chosen a kind of life that may be
fulfilling in many ways, but the capacity for companionship is not
among them.
In Raz's account, the symbolic significance of judging money to
be incommensurable with companionship involves the existence of a
social convention according to which participation in the relevant
relationship (e.g., marriage) requires a belief in value
incommensurability.
This conventional nature of belief in value incommensurability in
Raz's account raises a question for some authors about its
robustness as an account of value incommensurability. For example,
Chang objects that incommensurability appears to become relative to
one's participation in social conventions (2001, 48). It remains
an open question how much of a problem this point raises. Raz's
account appears to illustrate a basic sense in which the values of
money and companionship can be incommensurable. Insofar as it is not
against reason to choose money over companionship, there is no general
way to resolve a conflict of values between money and companionship.
In Raz's account, the resolution depends upon which social
convention one has chosen to pursue.
Elizabeth Anderson advances a second argument for constitutive
incommensurability. Her account is grounded in a pragmatic account of
value. Anderson reduces "'x is good' roughly to
'it is rational to value x,' where to value something is
to adopt toward it a favorable attitude susceptible to rational
reflection" (1997, 95). She argues that in virtue of these
attitudes there may be no good reason to compare the overall values of
two goods. Pragmatism holds that if such a comparison serves no
practical function, then the comparative value judgment has no truth
value, meaning that the goods are incommensurable (1997, 99). Because
the favorable attitudes one adopts toward goods help to make them
good, Anderson's account can be seen as an argument for
constitutive incommensurability (Chang 2001, 49).
Anderson advances three ways in which there may be no good reason to
compare the overall values of goods. First, it may be boring or
pointless to engage in comparison. To illustrate, "the project
of comprehensively ranking all works of art in terms of their
intrinsic aesthetic value is foolish, boring, and stultifying"
(1997, 100). Second, Anderson points to instances in which "it
makes sense to leave room for the free play of nonrational motivations
like whims and moods" as in the choice of what to do on a
leisurely Sunday afternoon (1997, 91). Third, Anderson argues that the
roles that goods play in deliberation can be so different that
"attempts to compare them head to head are incoherent"
(1997, 91). Imagine that the only way to save one's dying mother
is to give up a friendship. Rather than compare their overall values,
argues Anderson, ordinary moral thinking focuses on what one owes to
one's mother and one's friends (1997, 102). This focus on
obligation recognizes mother and friend each to be intrinsically
valuable and yet valuable in different ways (1997, 103). There is no
good reason, according to Anderson, to compare their overall values
with regard to some common measure.
Chang argues against each of the three points raised by Anderson
(Chang 2001). In response to the first point, Chang notes there are
occasions in which comparisons do need to be made between goods for
which Anderson argues there is no good reason to make comparisons. In
response to the second point, Chang argues that the range of instances
for which the second argument applies is small. In response to the
third point, Chang contends that Anderson's argument assumes
that if goods are comparable then they have some value or evaluative
property in common. Chang points out that this need not be the
case.
### 3.2 Moral Dilemmas
Both value incommensurability and incomparability have been invoked to
make sense of a central feature of supposed moral
dilemmas--namely, that no matter which alternative the agent
chooses, she fails to do something she ought to do. The apparent value
conflicts involved in these choices have led some philosophers to
relate moral dilemmas to the incomparability or incommensurability of
values. Henry Richardson, for example, takes the situation confronting
Sophie in the novel *Sophie's Choice*--that one of
her two children will be spared death, but only if she chooses which
one to save--to point to the incommensurability of values (1994,
115-117).
It has been argued that the mere fact of a moral dilemma does not
imply incomparability. James Griffin, for example, argues that the
feature of "irreplaceability" in moral dilemmas often may
be mistaken as evidence for incomparability (1997, 37).
Irreplaceability is the feature that what is lost in choosing one
alternative over another cannot be replaced by what is gained in
choosing another alternative. Although a conflict of values displays
this feature, not all instances of irreplaceability need involve
plural values. Some moral dilemmas, for example, may involve not a
conflict of values, but a conflict of obligations that arises from the
same consideration. Consider forced choices in saving lives: if there
is a dilemma, it need not involve conflicting values, but rather
conflicting obligations arising from a single consideration. Walter
Sinnott-Armstrong calls such dilemmas "symmetrical" (1988,
54-58). The dilemma confronting Sophie, it may be said, does not
point to the incommensurability of values.
Richardson acknowledges that the moral considerations underlying
Sophie's dilemma are not incommensurable. Nonetheless, he takes
value incommensurability to be essential to understanding the tragedy
of the dilemma that Sophie encounters. "It is a distinguishing
feature of love, including parental love," he writes,
"that it cherishes the particular and unique features of the
beloved" (115). He concludes, "the fact that she cannot
adequately represent each child's value on a single scale is
what makes the choosing tragic" (116). By locating the
incommensurability of values at the level of what is valuable about
each of her children, Richardson argues that the tragedy of the
dilemma points to incomparability.
Another common way to argue for value incommensurability appeals to
"non-symmetrical" dilemmas. As the name
suggests, in non-symmetrical dilemmas, the alternatives are favored by
different values (Sinnott-Armstrong 1988). If these values are
incommensurable in the third sense as discussed in subsection 1.3,
there is no systematic resolution of the value conflict. Consider
Jean-Paul Sartre's well-known example of his pupil who faced the
choice between going to England to join the Free French Forces and
staying at home to help his mother live (Sartre 1975, 295-296). No
matter which alternative he chooses, certain values will remain
unrealized. An idea along these lines is considered, for example, by
Walter Sinnott-Armstrong (1988, 69-71) and Bernard Williams (1981,
74-78) in their discussions of moral dilemmas.
### 3.3 *Akrasia*
Value incommensurability also features in debates about
*akrasia* (Nussbaum 2001, 113-117). David Wiggins, for example,
invokes the idea of value incommensurability to suggest "the
heterogeneity of the psychic sources of desire satisfaction and of
evaluation" (1998, 266). This heterogeneity, according to
Wiggins, allows for a coherent account of the agent's attraction
to what is not best. It allows for a divergence between desire and
value such that the akratic individual can be attracted to a value
that should not be sought at that point in time. Wiggins invokes value
incommensurability to capture the idea that a gain in the value that
should not be sought does not reduce the loss in choosing what is not
best.
In contrast, Michael Stocker denies that value incommensurability is
required for a coherent account of *akrasia* (1990, 214-240).
For Stocker, coherent *akrasia* is possible with a single value
in just the way that it is possible to be attracted to two objects
that differ with respect to the same value (e.g., "a languorous
lesser pleasure and a piquant better pleasure" (1990, 230)).
Recall the discussion of quantitative and qualitative aspects of a
single value at the outset of section 3.
## 4. Deliberation and Choice
As suggested above, much of the inquiry into value incomparability is
motivated more generally by theories of practical reason and rational
choice. Even if a conception of justified choice can accommodate value
incomparability, questions remain about how to justify choice on the
basis of incomparability and how to reason about incomparability. This
section considers these issues as they have been discussed in the
contemporary philosophical literature.
It is worth noting the potential for drawing connections between the
philosophical literature and the literature in psychology and the
social sciences on decision-making. One area for such potential is the
psychological literature on the difficulty of making decisions (Yates,
Veinott, and Patalano 2003). Jane Beattie and Sema Barlas (2001), for
example, advance the thesis that the observed variation in the
difficulty of making trade-offs between alternatives can be explained
in part by the category to which the alternatives belong. The authors
consider three categories: commodities ("objects that are
appropriately bought and sold in markets"), currencies
("objects that act as stand-ins for commodities"), and
noncommodities ("objects that either cannot be transferred
(e.g., pain) or that lose some of their value by being traded in
markets (e.g., friendship)") (Beattie and Barlas 2001, 30). The
authors' experimental findings are consistent with participants
holding normative commitments about exchanging currencies and
noncommodities similar to those considered in the discussion of
constitutive incommensurability in subsection 3.1. For example,
Beattie and Barlas report that participants "had a significant
tendency to choose noncommodities over commodities and
currencies" and that choices between noncommodities and
currencies were the easiest for participants (2001, 50-51). The
authors interpret these results to support the view that people choose
noncommodities over currencies on the basis of a rule without engaging
in a calculation of trade-offs (2001, 51-53). It should also be noted
that philosophers have contributed empirical research on the
characterization of hard choices (Messerli and Reuter 2017a,
2017b).
### 4.1 Optimization and Maximization
At a minimum, for the choice of an alternative to qualify as
justified, there must not be an overriding reason against choosing it.
Beyond that, conceptions of justified choice differ in what is
required for the choice of an alternative to qualify as justified.
Ruth Chang defines "comparativism" as the view that
"comparative facts are what make a choice objectively correct;
they are that in virtue of which a choice is objectively rational or
what one has most or sufficient normative reason to do. So, whether you
are a consequentialist, deontologist, virtue theorist, perfectionist,
contractualist, etc., about the grounds of rational choice, you should
be, first and foremost, I suggest, a comparativist" (2016, 213).
On this view, if alternatives are incomparable, then the lack of a
comparative fact precludes the possibility of justified choice.
One response has been to argue that apparently incomparable
alternatives are, in fact, comparable. As discussed in subsection 2.2,
judgments of incomparability frequently involve comparisons that are
difficult and it may be that judgments of incomparability are mistaken
(Regan 1997) or that, due to vagueness, it is indeterminate which
value relation holds. In addition, as discussed, alternatives that
appear incomparable by way of "better than," "worse
than," or "equally good" may be comparable by way of
some fourth comparative relation, such as "roughly equal"
or "on a par." This means that there is a comparative fact
that makes a choice objectively correct in cases of apparent
incomparability. It does not, however, help us identify the
objectively correct alternative.
A common form of comparativism is *optimization*.
According to optimization, the fact that an alternative is at least as
good as the other alternatives is what justifies its choice.
Consequently, if two alternatives are incomparable no justified choice
can be made between them. Another line of response, one from within
the economics literature, has been to distinguish between
"optimization" and "maximization" as theories
of justified choice (Sen 1997, 746; Sen 2000, 486). The theory of
maximization as justified choice only requires the choice of an
alternative that is not worse than other alternatives. Because
incomparable alternatives are not worse than one another, the choice
of either is justified according to the theory of maximization as
justified choice. Maximization is thus a rejection of comparativism.
However, Hsieh (2007) argues that proponents of comparativism have no
reason to reject maximization as an account of justified choice and
consequently incomparability need not pose a problem for the
possibility of justified choice.
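The contrast between the two choice rules can be made vivid with a small sketch. This is an illustrative formalization, not drawn from the source: the function names and the three-valued comparison are assumptions. `compare(a, b)` returns `">"`, `"<"`, `"="`, or `None` for incomparable pairs.

```python
# Illustrative sketch (not from the source): optimization vs. maximization
# as choice rules over a possibly incomplete "better than" relation.

def optimal(alternatives, compare):
    """Optimization: x is eligible only if it is at least as good as
    every rival (">" or "=")."""
    return [x for x in alternatives
            if all(compare(x, y) in (">", "=")
                   for y in alternatives if y != x)]

def maximal(alternatives, compare):
    """Maximization: x is eligible if no rival is better than it;
    incomparable rivals do not disqualify x."""
    return [x for x in alternatives
            if not any(compare(y, x) == ">"
                       for y in alternatives if y != x)]

def compare(a, b):
    return None   # assumption: the two careers are incomparable

careers = ["music", "law"]
print(optimal(careers, compare))   # [] -- optimization licenses no choice
print(maximal(careers, compare))   # ['music', 'law'] -- either is eligible
```

On this toy representation, optimization leaves the agent with no justified choice between incomparable careers, while maximization renders both eligible, matching the point attributed to Sen and Hsieh above.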
### 4.2 Cyclical Choice
One objection voiced against accounts that permit justified choice
between alternatives that are roughly equal or on a par or between
incomparable alternatives is that such accounts may justify a series
of choices that leaves a person worse off. Consider Raz's
example of career choice. Suppose the person chooses the musical
career over the legal career. At a future time, she has the
opportunity to pursue a legal career that is slightly worse than the
initial legal career. Suppose this slightly worse legal career and the
musical career are judged roughly equal or on a par. If justified
choice permits her to choose either of two alternatives when they are
roughly equal or on a par, then she would be justified in choosing the
slightly worse legal career. Similarly, if justified choice does not
require the comparability of alternatives, she could be justified in
choosing the slightly worse legal career. Through a series of such
apparently justified choices, she would be left significantly worse
off in a manner analogous to a "money pump" (Chang 1997,
11).
One response is to question whether the problem posed by choices of
this sort is serious. John Broome, for example, notes that after
having chosen one kind of career, a person may change her mind and
choose the kind of career she previously rejected. According to
Broome, there would be a puzzle only if she did not repudiate her
previous choice (2000, 34).
Another line of response is that the considerations that make some
alternatives worthy of choice count against the constant switching
among alternatives envisioned in this objection. First, the constant
switching among alternatives is akin to not choosing an alternative.
If the alternatives are such that choosing either is better than
choosing neither, then the considerations that make the alternatives
worthy of choice count against constantly switching among them.
Second, to switch constantly among careers appears to misunderstand
what makes the alternatives worthy of choice. Not only is pursuing a
career the kind of activity that depends upon continued engagement for
its success, but it is also the kind of activity that is unlikely to
be judged truly successful unless one demonstrates some commitment to
it. Third, for a career to be considered successful, it may require
the chooser to adopt a favorable attitude toward the considerations
that favor it over other careers. In turn, when subsequently presented
with the choice of a legal career, the considerations favoring it may
no longer apply to her in the same way as they did before (Hsieh
2007).
Ruth Chang advances a hybrid view on rational choice to meet the
challenge of cyclical choice. She distinguishes "given
reasons" from "will-based" reasons. Roughly put,
given reasons could here be understood as those that are grounded in
normative facts while will-based reasons are "reasons in virtue
of some act of the will; they are a matter of our creation. They are
voluntarist in their normative source. In short, we create will-based
reasons and receive given ones" (Chang 2013, 177). Will-based
reasons come into play when given reasons can no longer guide our
actions. When two alternatives are on a par, given reasons fail to
guide us and at this stage, will-based reasons can provide guidance
and guarantee that the agent commits to her choice. Anders Herlitz
(2019) argues that Chang's approach and other "two-step
models" violate an intuitively plausible constraint on rational
choice called Basic Contraction Consistency. By violating this
constraint two-step models also allow for money pumps. This could,
however, possibly be avoided by introducing additional formal
conditions for the second step in the model (Herlitz 2019,
299-307).
### 4.3 Rational Eligibility
The idea expressed in the distinction between optimization and
maximization in subsection 4.1 can be expressed more generally in
terms of the idea of rational eligibility. To say that an alternative
is rationally eligible is to say that choosing it would not be an
unjustified choice. Which alternatives are judged rationally eligible
may vary with the theory of justified choice. According to
maximization as a theory of justified choice, an alternative is
rationally eligible if and only if there is no better alternative.
Isaac Levi (1986; 2004) argues for
"*v*-admissibility" as a criterion of justified
choice. For *v*-admissibility, value incomparability does not
pose a problem for the possibility of justified choice. An alternative
is *v*-admissible if and only if it is optimal according to at
least one of the relevant considerations at hand. In some conflicts of
value, maximization and *v*-admissibility specify the same
alternatives as rationally eligible. Suppose that alternative
*X* is better than alternative *Y* with respect to
consideration *A* and alternative *Y* is better than
alternative *X* with respect to consideration *B*.
According to maximization as a theory of justified choice, both
*X* and *Y* are rationally eligible; neither alternative
is better than the other with respect to *A* and *B*
taken together. Both alternatives are also rationally eligible
according to *v*-admissibility; *X* is optimal with
respect to *A* and *Y* is optimal with respect to
*B*.
In some conflicts of value, maximization and *v*-admissibility
specify different sets of alternatives as rationally eligible.
*V*-admissibility is more restrictive than maximization (Levi
2004). Add to the above example alternative *Z*. Suppose that
*X* is better than *Z*, which is better than *Y*,
all with respect to *A*. Suppose also that *Y* is better
than *Z*, which is better than *X*, all with respect to
*B*. According to maximization as a theory of justified choice,
*X*, *Y*, and *Z* are rationally eligible; no
alternative is better than any other with respect to *A* and
*B* taken together. However, *Z* is not rationally
eligible according to *v*-admissibility. *Z* is not
optimal with respect to *A*. Nor is it optimal with respect to
*B*. Only *X* and *Y* are rationally eligible
according to *v*-admissibility. According to Levi,
*v*-admissibility captures what he takes to be a plausible
judgment--namely, that it would be unjustified to choose the
alternative that is second worst in all relevant respects (Levi 2004).
The plausibility of this judgment might be questioned. Suppose that
*Z* is only slightly worse than *X* with respect to
*A* and *Z* is only slightly worse than *Y* with
respect to *B*. Does the judgment still hold?
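The divergence between the two criteria can be computed directly from the article's *X*/*Y*/*Z* example. The representation below is an assumption for illustration (not Levi's own formalism): each consideration is a ranking from best to worst.

```python
# Illustrative sketch of v-admissibility vs. maximization, using the
# article's example. Rankings are assumed lists, best first.

rankings = {
    "A": ["X", "Z", "Y"],   # X better than Z better than Y, w.r.t. A
    "B": ["Y", "Z", "X"],   # Y better than Z better than X, w.r.t. B
}

def maximal(alternatives, rankings):
    """Maximization: keep x unless some rival outranks it on every
    relevant consideration."""
    def beats(y, x):
        return all(r.index(y) < r.index(x) for r in rankings.values())
    return [x for x in alternatives
            if not any(beats(y, x) for y in alternatives if y != x)]

def v_admissible(alternatives, rankings):
    """v-admissibility: keep x only if it is optimal according to at
    least one relevant consideration."""
    return [x for x in alternatives
            if any(r[0] == x for r in rankings.values())]

alts = ["X", "Y", "Z"]
print(maximal(alts, rankings))       # ['X', 'Y', 'Z'] -- all eligible
print(v_admissible(alts, rankings))  # ['X', 'Y'] -- Z is excluded
```

Since no alternative outranks another on both considerations, maximization keeps all three, while *Z*, optimal under neither consideration, fails v-admissibility, just as Levi's criterion requires.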
For Joseph Raz, incomparability also does not pose a problem for the
possibility of justified choice (1997). If incomparable options give
us reasons to choose both alternatives, they are both rationally
eligible from the perspective of justified choice. As such, the choice
of either alternative is justified on the basis of reason.
One question that arises is this. If an agent has reason to choose
either alternative and they are not equally good, what makes her
choice of one alternative over another intelligible to her? For Raz,
what explains the choice of one alternative over the other is the
exercise of the will. By the will, Raz has in mind "the ability
to choose and perform intentional actions" and "the most
typical exercise or manifestation of the will is in choosing among
options that reason merely renders eligible" (1997, 111).
John Finnis advances a similar view in response to the question of
intelligibility. Finnis writes, "in free choice, one has reasons
for each of the alternative options, but these reasons are not
causally determinative.... No factor but the choosing itself
*settles* which alternative is chosen" (1997, 220). In a
choice between alternatives each favored by different, incomparable
values, even though there are reasons to choose both alternatives,
because reasons are not causally determinative, the lack of a reason
to choose one alternative over another need not render the choice
unintelligible.
Donald Regan challenges this view. According to Regan, unless grounded
in an adequate reason, "a decision to go one way rather than
another will be something that happened to the agent rather than
something she did" and hence be unintelligible to the agent
herself (1997, 144). Suppose the agent has no more reason to choose
one alternative over another and the choice, as suggested above, is
settled by her wants. On Regan's view, if the agent's
choice is to be intelligible to her, her wants must be grounded in
reasons. Because she has no more reason to choose one alternative over
another initially, the reasons grounding her wants must be available
to her only after the initially relevant reasons are exhausted. This
strikes Regan as implausible (1997, 150). Regan concludes that no
choice between incomparable bearers of value is intelligible in the
ways suggested by Raz or Finnis.
### 4.4 External Resources
A number of authors have argued that practical reason has available to
it the resources to accommodate incomparability in ways that would
appear to address the concern raised by Regan.
Charles Taylor outlines two such sets of resources. First, we are able
to appeal to "constitutive goods that lie behind the life
goods" our sense of which is "fleshed out, and passed on,
in a whole range of media: stories, legends, portraits of exemplary
figures and their actions and passions, as well as in artistic works,
music, dance, ritual, modes of worship, and so on" (1997, 179).
Second, we cannot escape the need to live an integrated life or to
have this, at a minimum, as an aspiration (1997, 180). Given that a
life is finite, leading a life involves an articulation of how
different goods fit into it relative to one another. More generally,
our lives take on a certain "shape" and this shape
provides guidance in making choices in the face of incomparability
(1997, 183). Michael Stocker similarly points out that alternatives
are rarely considered in the abstract (1997). Instead, they are
usually considered in concrete ways in which they are
valuable--for example, as part of one's life. Once
considered in such concrete contexts, there are considerations
according to which alternatives can be evaluated for purposes of
justified choice. In his analysis of incommensurability, Fred
D'Agostino discusses the role of social institutions in
resolving value conflicts (2003).
Another resource that has been invoked is morality. Recall Elizabeth
Anderson's discussion of the example of the choice between
saving the life of one's mother and maintaining a close
friendship. Anderson regards the attempt to invoke a comparison of
values as incoherent from the perspective of practical reason.
Instead, ordinary moral thinking focuses on the obligations one has to
one's mother and one's friends. In turn, Anderson suggests
that the obligations themselves will provide guidance (1997, 106). The
availability of such resources to reason is independent of her
theories of value and rationality. John Finnis also points to
principles of morality as guiding reason in a way that avoids problems
of comparability (1997). It remains an open question how wide a range
of cases there is in which morality can provide such guidance.
One worry that may arise in appealing to such external resources is
that they address the problem of incomparability by simply denying it.
The shape of one's life or the "constitutive goods that
lie behind the life goods," for example, would appear to be
sources of value. Insofar as they provide a value against which to
resolve the initial value conflict, it might be said that there was no
problem of incomparability in the first place.
Two responses to this worry can be given. First, even if these
external resources can ground comparisons, it does not follow that
they provide a systematic resolution to value conflicts. Insofar as
the shapes of people's lives differ, the way in which two
individuals resolve the same value conflict may be different. The fact
that it may be consistent with reason to resolve the same value
conflict in different ways points to the possibility of
incomparability.
Second, in the case of moral considerations, there may not be any
denial of incomparability. Moral considerations may provide guidance
without comparing alternative courses of action (Finnis 1997,
225-226). Furthermore, some philosophers argue that the moral
wrongness of certain actions is intelligible only in virtue of
incomparability. For example, Alan Strudler argues that to deliberate
about the permissibility of lying in terms of comparable options is to
misunderstand the wrong in lying (1998, 1561-1564).
### 4.5 Non-Maximizing Choice
It would appear that conceptions of justified choice that do not rely
on comparisons avoid the problems posed by incomparability. Stocker
aims to provide one such account of justified choice (1990; 1997). Two
features help to distinguish his account. First, he argues for
evaluating alternatives as "the best" in an absolute,
rather than a relative, sense. Optimization and maximization rely on
choosing "the best" in a relative sense: given the
relevant comparison class, there is no better alternative. To be
"the best" in an absolute sense, an alternative
"must be, of its kind, excellent--satisfying, or coming
close to, ideals and standards" (1997, 206). This is the sense,
for example, in which a person can be the best of friends even if we
have other, even better friends. More generally, an alternative can be
absolutely best even if there are better alternatives, and even if an
alternative is relatively best, it may not be absolutely best. The
second distinctive feature of Stocker's account is his appeal to
a good life, or to some part of a good life, such as a project. This
appeal does not require the alternative to be the best for that life
or project. Instead, for the choice to be justified, the alternative
need only be good enough for that life or project.
Stocker's account of justified choice differs from the concept
of "satisficing" as used in the economics and rational
choice literature. Introduced into the economics literature by Herbert
Simon (1955), satisficing is the process of choosing in which it is
rational to stop seeking alternatives when the agent finds one that is
"good enough" (Byron 2004). Where satisficing differs from
Stocker's account is that satisficing is instrumentally rational
in virtue of a more general maximizing account of choice. In
situations in which finding the best alternative is prohibitively
costly or impossible, for example, satisficing becomes rational. In
contrast, on Stocker's account, choosing the alternative that is
"good enough" is non-instrumentally rational (Stocker
2004).
Objections discussed in previous sections may be raised with respect
to Stocker's account. Suppose two alternatives are both good in
an absolute sense and incomparable. According to Stocker's
account, insofar as both alternatives are good enough for an
agent's life, the choice of either alternative appears
justified. As discussed in subsection 4.2, Regan and others may object
that the choice of either alternative is not intelligible to the
agent. Also, the appeal to a good life may raise the worry discussed
in subsection 4.4 that Stocker's account addresses the problem
of incomparability by simply denying it.
### 4.6 Deliberation about Ends
The topics of value incommensurability and incomparability also arise
in accounts that concern deliberation about ends. This section
discusses two accounts.
The first account is by Henry Richardson (1994). Richardson argues for
the possibility of rational deliberation about ends. According to
Richardson, if comparability is a prerequisite for rational choice,
choices involving conflicting values could be made rationally if each
of the values were expressed in terms of their contribution to
furthering some common end. This conception of choice treats this
common end as the one final end relevant for the choice. According to
Richardson, the idea that comparability is a prerequisite for rational
choice seems to rule out rational deliberation about ends (1994,
15).
Richardson defends an account of rational deliberation about ends that
does not depend upon comparability. In his account, rational
deliberation involves achieving coherence among one's ends.
Briefly, coherence is "a matter of finding or constructing
intelligible connections or links or mutual support among them and of
removing relations of opposition or conflict" (Richardson 1994,
144). For Richardson, coherence among ends need not result in their
being comparable (1994, 180).
The second account is by Elijah Millgram (1997). Like Richardson,
Millgram argues that comparability is not a prerequisite for
deliberation. However, in contrast to Richardson, Millgram argues that
practical deliberation results in the comparability of ends (1997,
151). By this he means that through deliberation, one develops a
coherent conception of what matters and a "background against
which one can judge not only that one consideration is more important
than another, but *how much* more important" (1997, 163).
For Millgram, comparing one's ends is a "central part of
attaining unified agency" (1997, 162).
Millgram proposes two ways in which deliberation results in the
comparability of ends. The first proposal is that one can learn what
is important and how it is important through experience (1997, 161).
The second proposal is that deliberation about ends is
"something like the construction by the agent of a conception of
what matters ... out of raw materials such as desires, ends,
preferences, and reflexes" (1997, 168). Millgram identifies an
objection to each proposal and briefly responds to each.
The objection to the first proposal is this. If the incomparability of
ends can be resolved by experience, this suggests that the judgment of
incomparability reflects incomplete knowledge about the values
expressed in these ends. In other words, experience seems to help only
if the values expressed in these ends are themselves comparable (1997,
168). Millgram responds as follows. The agent who uses her experience
to develop a coherent conception of what matters is like the painter
who uses her experience to paint a picture that is not a copy of an
existing image. Even if the ends are comparable in her conception of
what matters, the values they reflect need not be comparable just as
the painting need not be an exact copy of an existing image.
The objection to the second proposal is this. The second proposal
suggests that deliberation is not driven by experience, which places
it in tension with the first proposal. Millgram responds by continuing
the analogy with the painter. Imagine a painter who paints a picture
that is not a copy of an existing image. Just because the painting is
not a copy of an existing image does not mean there is no source of
correction and constraint on it. Similarly, deliberation may involve
the construction by the agent of a conception of what matters, but
this does not imply that it is not informed or constrained by
experience (1997, 168-169).
### 4.7 Risky Actions
Caspar Hare (2010) introduces and discusses a problem relating to how
rational choice theory can encompass risky actions involving outcomes
that are on a par. If we assume that two outcomes, *A* and
*B*, are on a par, it follows that a small improvement, or mild
sweetening as Hare phrases it, to one of the outcomes would not break
the parity relation. We can imagine two actions, *X* and
*Y*. If we opt for *X* then we will either end up with a
sweetened *A* in state 1 or a sweetened *B* in state 2, and
the outcomes are equiprobable. If we opt for *Y*, we will end up
with *B* in state 1 or *A* in state 2, and the outcomes are
equiprobable. On the one hand, it seems as if we ought to opt for
*X* since we will end up with a sweetened alternative, *A*+
or *B*+. On the other hand, it is difficult to say why *X*
is better than *Y*. The outcome in state 1 is not better if we do
*X* rather than *Y* since the outcomes, *A*+ and
*B*, are on a par, and the outcome in state 2 is not better if we
do *Y* rather than *X* since the outcomes, *B*+ and
*A*, are on a par. The problem can be construed in terms of
lotteries: one lottery is deemed comparable with another, yet their
outcomes are incomparable (see Rabinowicz 2021b for this axiological
formulation of Hare's problem). Hare argues that we should
"take the sugar", i.e., we ought to opt for *X* (see also
Bader 2019). Inspired by a fitting-attitudes analysis of value, Wlodek
Rabinowicz argues that even though outcomes may be on a par, it is
possible for one action to be better than another (2021b). Others such
as Schoenfield (2014), Bales, Cohen & Handfield (2014), and Doody
(2019) reach the opposite conclusion.
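Hare's decision matrix can be laid out in a short sketch. The representation below is an assumption for illustration (not Hare's own formalism): outcomes *A* and *B* are on a par, and a mild sweetening, marked `"+"`, is assumed not to break parity.

```python
# Illustrative sketch of Hare's sweetening problem. Each action yields
# one outcome per equiprobable state.

acts = {
    "X": {"state 1": "A+", "state 2": "B+"},
    "Y": {"state 1": "B",  "state 2": "A"},
}

def on_a_par(o1, o2):
    # Assumption: any A-outcome is on a par with any B-outcome,
    # sweetened or not.
    return o1.rstrip("+") != o2.rstrip("+")

# Statewise, X is never better than Y: in each state the two outcomes
# are on a par, so the comparison yields no verdict.
assert all(on_a_par(acts["X"][s], acts["Y"][s])
           for s in ("state 1", "state 2"))

# Yet X guarantees a sweetened outcome and Y an unsweetened one --
# the intuition behind Hare's advice to "take the sugar".
assert all(o.endswith("+") for o in acts["X"].values())
assert not any(o.endswith("+") for o in acts["Y"].values())
print("X guarantees sweetening; statewise comparison is silent")
```

The two assertions express the tension the paragraph describes: dominance reasoning over states cannot favor *X*, yet *X* alone guarantees the sweetened outcome.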
## 5. Social Choices and Institutions
As discussed above, social practices and institutions play a role in
the inquiry into value incommensurability and incomparability. They
are claimed by some philosophers to ground the possibility of these
phenomena as in the case of constitutive incommensurability
(subsection 3.2), and to help resolve value conflicts, as in the case
of providing external resources for practical reason (subsection 4.4).
This section discusses additional areas in which considerations about
social institutions intersect with the inquiry at hand.
To begin, some philosophers have pointed to a structural similarity
between a single individual trying to choose in the face of
incomparability and the process of incorporating the varied interests
and preferences of the members of society into a single decision. The
preferences of different individuals, for example, may be taken to
reflect differing evaluative judgments about alternatives so that
combining these different preferences into a single decision becomes
analogous to resolving value conflicts in the individual case. Given
this analogy, Fred D'Agostino has applied methods of
decision-making at the social level from social choice theory and
political theory to consider the resolution of value conflicts at the
individual level (2003).
At the same time, Richard Pildes and Elizabeth Anderson caution
against wholeheartedly adopting this analogy. The analogy, according
to them, assumes that individuals already have rationally ordered
preferences. Given incomparability, however, there is no reason to
make such an assumption. In turn, Pildes and Anderson argue that
"individuals need to participate actively in democratic
institutions to enable them to achieve a rational ordering of their
preferences for collective choices" (1990, 2177).
Value incommensurability and incomparability also have been considered
with respect to the law. Matthew Adler discusses the variety of ways
in which legal scholars have engaged the topic of value
incommensurability (1998). One question is whether the possibility of
value incommensurability poses a problem for evaluating government
policy options and laws, more generally. Some authors respond that it
does not. Cass Sunstein, for example, argues that recognition of value
incommensurability helps "to reveal what is at stake in many
areas of the law" (1994, 780). According to Sunstein, important
commitments of a well-functioning legal system are reflected in
recognizing value incommensurability.
More generally, a number of scholars have focused on the relation
between incomparability and the structure of social and political
institutions. John Finnis, for example, takes the open-endedness of
social life to render it impossible to treat legal or policy choices
as involving comparable alternatives (1997, 221-222). Michael
Walzer's account of distributive justice also relates
incomparability to the structure of social and political institutions.
According to Walzer (1983), different social goods occupy different
"spheres," each one governed by a distinct set of
distributive norms. What is unjust is to convert the accumulation of
goods in one sphere into the accumulation of goods in another sphere
without regard for that second sphere's distributive norms.
Underlying Walzer's account, it seems, is a commitment to a kind
of constitutive incommensurability. Given its connection to the
possibility of plural and incompatible ways of life, the concepts of
value incommensurability and incomparability also play a role in many
accounts of political liberalism, including Joseph Raz's account
(1986) and Isaiah Berlin's account (1969). The latter's
inquiry into the relation between incommensurable values and political
institutions motivated much of the further inquiry into the topic.

value-intrinsic-extrinsic

## 1. What Has Intrinsic Value?
The question "What is intrinsic value?" is more
fundamental than the question "What has intrinsic value?,"
but historically these have been treated in reverse order. For a long
time, philosophers appear to have thought that the notion of intrinsic
value is itself sufficiently clear to allow them to go straight to the
question of what should be said to have intrinsic value. Not even a
potted history of what has been said on this matter can be attempted
here, since the record is so rich. Rather, a few representative
illustrations must suffice.
In his dialogue *Protagoras*, Plato [428-347 B.C.E.]
maintains (through the character of Socrates, modeled after the real
Socrates [470-399 B.C.E.], who was Plato's teacher) that,
when people condemn pleasure, they do so, not because they take
pleasure to be bad as such, but because of the bad consequences they
find pleasure often to have. For example, at one point Socrates says
that the only reason why the pleasures of food and drink and sex seem
to be evil is that they result in pain and deprive us of future
pleasures (Plato, *Protagoras*, 353e). He concludes that
pleasure is in fact good as such and pain bad, regardless of what
their consequences may on occasion be. In the *Timaeus*, Plato
seems quite pessimistic about these consequences, for he has Timaeus
declare pleasure to be "the greatest incitement to evil"
and pain to be something that "deters from good" (Plato,
*Timaeus*, 69d). Plato does not think of pleasure as the
"highest" good, however. In the *Republic*,
Socrates states that there can be no "communion" between
"extravagant" pleasure and virtue (Plato,
*Republic*, 402e) and in the *Philebus*, where Philebus
argues that pleasure is the highest good, Socrates argues against
this, claiming that pleasure is better when accompanied by
intelligence (Plato, *Philebus*, 60e).
Many philosophers have followed Plato's lead in declaring
pleasure intrinsically good and pain intrinsically bad. Aristotle
[384-322 B.C.E.], for example, himself a student of
Plato's, says at one point that all are agreed that pain is bad
and to be avoided, either because it is bad "without
qualification" or because it is in some way an
"impediment" to us; he adds that pleasure, being the
"contrary" of that which is to be avoided, is therefore
necessarily a good (Aristotle, *Nicomachean Ethics*, 1153b).
Over the course of the more than two thousand years since this was
written, this view has been frequently endorsed. Like Plato, Aristotle
does not take pleasure and pain to be the only things that are
intrinsically good and bad, although some have maintained that this is
indeed the case. This more restrictive view, often called hedonism,
has had proponents since the time of Epicurus [341-271
B.C.E.].[1]
Perhaps the most thorough renditions of it are to be found in the
works of Jeremy Bentham [1748-1832] and Henry Sidgwick
[1838-1900] (see Bentham 1789, Sidgwick 1907); perhaps its most
famous proponent is John Stuart Mill [1806-1873] (see Mill
1863).
Most philosophers who have written on the question of what has
intrinsic value have not been hedonists; like Plato and Aristotle,
they have thought that something besides pleasure and pain has
intrinsic value. One of the most comprehensive lists of intrinsic
goods that anyone has suggested is that given by William Frankena
(Frankena 1973, pp. 87-88): life, consciousness, and activity;
health and strength; pleasures and satisfactions of all or certain
kinds; happiness, beatitude, contentment, etc.; truth; knowledge and
true opinions of various kinds, understanding, wisdom; beauty,
harmony, proportion in objects contemplated; aesthetic experience;
morally good dispositions or virtues; mutual affection, love,
friendship, cooperation; just distribution of goods and evils; harmony
and proportion in one's own life; power and experiences of
achievement; self-expression; freedom; peace, security; adventure and
novelty; and good reputation, honor, esteem, etc. (Presumably a
corresponding list of intrinsic evils could be provided.) Almost any
philosopher who has ever addressed the question of what has intrinsic
value will find his or her answer represented in some way by one or
more items on Frankena's list. (Frankena himself notes that he
does not explicitly include in his list the communion with and love
and knowledge of God that certain philosophers believe to be the
highest good, since he takes them to fall under the headings of
"knowledge" and "love.") One conspicuous
omission from the list, however, is the increasingly popular view that
certain environmental entities or qualities have intrinsic value
(although Frankena may again assert that these are implicitly
represented by one or more items already on the list). Some find
intrinsic value, for example, in certain "natural"
environments (wildernesses untouched by human hand); some find it in
certain animal species; and so on.
Suppose that you were confronted with some proposed list of intrinsic
goods. It would be natural to ask how you might assess the accuracy of
the list. How can you tell whether something has intrinsic value or
not? On one level, this is an epistemological question about which
this article will not be concerned. (See the entry in this
encyclopedia on moral epistemology.) On another level, however, this
is a conceptual question, for we cannot be sure that something has
intrinsic value unless we understand what it is for something to have
intrinsic value.
## 2. What Is Intrinsic Value?
The concept of intrinsic value has been characterized above in terms
of the value that something has "in itself," or "for
its own sake," or "as such," or "in its own
right." The custom has been not to distinguish between the
meanings of these terms, but we will see that there is reason to think
that there may in fact be more than one concept at issue here. For the
moment, though, let us ignore this complication and focus on what it
means to say that something is valuable *for its own sake* as
opposed to being valuable *for the sake of something else* to
which it is related in some way. Perhaps it is easiest to grasp this
distinction by way of illustration.
Suppose that someone were to ask you whether it is good to help others
in time of need. Unless you suspected some sort of trick, you would
answer, "Yes, of course." If this person were to go on to
ask you why acting in this way is good, you might say that it is good
to help others in time of need simply because it is good that their
needs be satisfied. If you were then asked why it is good that
people's needs be satisfied, you might be puzzled. You might be
inclined to say, "It just is." Or you might accept the
legitimacy of the question and say that it is good that people's
needs be satisfied because this brings them pleasure. But then, of
course, your interlocutor could ask once again, "What's
good about that?" Perhaps at this point you would answer,
"It just is good that people be pleased," and thus put an
end to this line of questioning. Or perhaps you would again seek to
explain the fact that it is good that people be pleased in terms of
something else that you take to be good. At some point, though, you
would have to put an end to the questions, not because you would have
grown tired of them (though that is a distinct possibility), but
because you would be forced to recognize that, if one thing derives
its goodness from some other thing, which derives its goodness from
yet a third thing, and so on, there must come a point at which you
reach something whose goodness is not derivative in this way,
something that "just is" good in its own right, something
whose goodness is the source of, and thus explains, the goodness to be
found in all the other things that precede it on the list. It is at
this point that you will have arrived at intrinsic goodness (cf.
Aristotle, *Nicomachean Ethics*, 1094a). That which is
intrinsically good is nonderivatively good; it is good for its
*own* sake. That which is not intrinsically good but
extrinsically good is derivatively good; it is good, not (insofar as
its extrinsic value is concerned) for its own sake, but for the sake
of something else that is good and to which it is related in some way.
Intrinsic value thus has a certain priority over extrinsic value. The
latter is derivative from or reflective of the former and is to be
explained in terms of the former. It is for this reason that
philosophers have tended to focus on intrinsic value in
particular.
The account just given of the distinction between intrinsic and
extrinsic value is rough, but it should do as a start. Certain
complications must be immediately acknowledged, though. First, there
is the possibility, mentioned above, that the terms traditionally used
to refer to intrinsic value in fact refer to more than one concept;
again, this will be addressed later (in this section and the next).
Another complication is that it may not in fact be accurate to say
that whatever is intrinsically good is nonderivatively good; some
intrinsic value may be derivative. This issue will be taken up (in
Section 5) when the computation of intrinsic value is discussed; it
may be safely ignored for now. Still another complication is this. It
is almost universally acknowledged among philosophers that all value
is "supervenient" or "grounded in" on certain
nonevaluative features of the thing that has value. Roughly, what this
means is that, if something has value, it will have this value in
virtue of certain nonevaluative features that it has; its value can be
attributed to these features. For example, the value of helping
others in time of need might be attributed to the fact that such
behavior has the feature of being causally related to certain pleasant
experiences induced in those who receive the help. Suppose we accept
this and accept also that the experiences in question are
intrinsically good. In saying this, we are (barring the complication
to be discussed in Section 5) taking the value of the experiences to
be nonderivative. Nonetheless, we may well take this value, like all
value, to be supervenient on, or grounded in, something. In this case,
we would probably simply attribute the value of the experiences to
their having the feature of being pleasant. This brings out the subtle
but important point that the question whether some value is derivative
is distinct from the question whether it is supervenient. Even
nonderivative value (value that something has in its own right; value
that is, in some way, not attributable *to the value* of
anything else) is usually understood to be supervenient on certain
nonevaluative features of the thing that has value (and thus to be
attributable, in a different way, *to these features*).
To repeat: whatever is intrinsically good is (barring the complication
to be discussed in Section 5) nonderivatively good. It would be a
mistake, however, to affirm the converse of this and say that whatever
is nonderivatively good is intrinsically good. As "intrinsic
value" is traditionally understood, it refers to a
*particular way* of being nonderivatively good; there are other
ways in which something might be nonderivatively good. For example,
suppose that your interlocutor were to ask you whether it is good to
eat and drink in moderation and to exercise regularly. Again, you
would say, "Yes, of course." If asked why, you would say
that this is because such behavior promotes health. If asked what is
good about being healthy, you might cite something else whose goodness
would explain the value of health, or you might simply say,
"Being healthy just is a good way to be." If the latter
were your response, you would be indicating that you took health to be
nonderivatively good in some way. In what way, though? Well, perhaps
you would be thinking of health as intrinsically good. But perhaps
not. Suppose that what you meant was that being healthy just is
"good for" the person who is healthy (in the sense that it
is in each person's interest to be healthy), so that
John's being healthy is good for John, Jane's being
healthy is good for Jane, and so on. You would thereby be attributing
a type of nonderivative interest-value to John's being healthy,
and yet it would be perfectly consistent for you to deny that
John's being healthy is *intrinsically* good. If John
were a villain, you might well deny this. Indeed, you might want to
insist that, in light of his villainy, his being healthy is
intrinsically *bad*, even though you recognize that his being
healthy is good *for him*. If you did say this, you would be
indicating that you subscribe to the common view that intrinsic value
is nonderivative value of some peculiarly *moral*
sort.[2]
Let us now see whether this still rough account of intrinsic value can
be made more precise. One of the first writers to concern himself with
the question of what exactly is at issue when we ascribe intrinsic
value to something was G. E. Moore [1873-1958]. In his book
*Principia Ethica*, Moore asks whether the concept of intrinsic
value (or, more particularly, the concept of intrinsic goodness, upon
which he tended to focus) is analyzable. In raising this question, he
has a particular type of analysis in mind, one which consists in
"breaking down" a concept into simpler component concepts.
(One example of an analysis of this sort is the analysis of the
concept of being a vixen in terms of the concepts of being a fox and
being female.) His own answer to the question is that the concept of
intrinsic goodness is *not* amenable to such analysis (Moore
1903, ch. 1). In place of analysis, Moore proposes a certain kind of
thought-experiment in order both to come to understand the concept
better and to reach a decision about what is intrinsically good. He
advises us to consider what things are such that, if they existed by
themselves "in absolute isolation," we would judge their
existence to be good; in this way, we will be better able to see what
really accounts for the value that there is in our world. For example,
if such a thought-experiment led you to conclude that all and only
pleasure would be good in isolation, and all and only pain bad, you
would be a
hedonist.[3]
Moore himself deems it incredible that anyone, thinking clearly,
would reach this conclusion. He says that it involves our saying that
a world in which only pleasure existed--a world without any
knowledge, love, enjoyment of beauty, or moral qualities--is
better than a world that contained all these things but in which there
existed slightly less pleasure (Moore 1912, p. 102). Such a view he
finds absurd.
Regardless of the merits of this isolation test, it remains unclear
exactly why Moore finds the concept of intrinsic goodness to be
unanalyzable. At one point he attacks the view that it can be analyzed
wholly in terms of "natural" concepts--the view, that
is, that we can break down the concept of being intrinsically good
into the simpler concepts of being *A*, being *B*, being
*C*..., where these component concepts are all purely
descriptive rather than evaluative. (One candidate that Moore
discusses is this: for something to be intrinsically good is for it to
be something that we desire to desire.) He argues that any such
analysis is to be rejected, since it will always be intelligible to
ask whether (and, presumably, to deny that) it is good that something
be *A*, *B*, *C*,..., which would not be the
case if the analysis were accurate (Moore 1903, pp. 15-16). Even
if this argument is successful (a complicated matter about which there
is considerable disagreement), it of course does not establish the
more general claim that the concept of intrinsic goodness is not
analyzable at all, since it leaves open the possibility that this
concept is analyzable in terms of other concepts, some or all of which
are not "natural" but evaluative. Moore apparently thinks
that his objection works just as well where one or more of the
component concepts *A*, *B*, *C*,..., is
evaluative; but, again, many dispute the cogency of his argument.
Indeed, several philosophers have proposed analyses of just this sort.
For example, Roderick Chisholm [1916-1999] has argued that
Moore's own isolation test in fact provides the basis for an
analysis of the concept of intrinsic value. He formulates a view
according to which (to put matters roughly) to say that a state of
affairs is intrinsically good or bad is to say that it is possible
that its goodness or badness constitutes all the goodness or badness
that there is in the world (Chisholm 1978).
Eva Bodanszky and Earl Conee have attacked Chisholm's proposal,
showing that it is, in its details, unacceptable (Bodanszky and Conee
1981). However, the general idea that an intrinsically valuable state
is one that could somehow account for all the value in the world is
suggestive and promising; if it could be adequately formulated, it
would reveal an important feature of intrinsic value that would help
us better understand the concept. We will return to this point in
Section 5. Rather than pursue such a line of thought, Chisholm himself
responded (Chisholm 1981) in a different way to Bodanszky and Conee.
He shifted from what may be called an *ontological* version of
Moore's isolation test--the attempt to understand the
intrinsic value of a state in terms of the value that there would be
if it were the only valuable state *in existence*--to an
*intentional* version of that test--the attempt to
understand the intrinsic value of a state in terms of the kind of
attitude it would be fitting to have if one were to
*contemplate* the valuable state as such, without reference to
circumstances or consequences.
This new analysis in fact reflects a general idea that has a rich
history. Franz Brentano [1838-1917], C. D. Broad
[1887-1971], W. D. Ross [1877-1971], and A. C. Ewing
[1899-1973], among others, have claimed, in a more or less
qualified way, that the concept of intrinsic goodness is analyzable in
terms of the fittingness of some "pro" (i.e., positive)
attitude (Brentano 1969, p. 18; Broad 1930, p. 283; Ross 1939, pp.
275-76; Ewing 1948, p. 152). Such an analysis, which has come to
be called "the fitting attitude analysis" of value, is
supported by the mundane observation that, instead of saying that
something is good, we often say that it is *valuable*, which
itself just means that it is fitting to value the thing in question.
It would thus seem very natural to suppose that for something to be
intrinsically good is simply for it to be such that it is fitting to
value it for its own sake. ("Fitting" here is often
understood to signify a particular kind of moral fittingness, in
keeping with the idea that intrinsic value is a particular kind of
moral value. The underlying point is that those who value for its own
sake that which is intrinsically good thereby evince a kind of
*moral* sensitivity.)
Though undoubtedly attractive, this analysis can be and has been
challenged. Brand Blanshard [1892-1987], for example, argues
that the analysis is to be rejected because, if we ask *why*
something is such that it is fitting to value it for its own sake, the
answer is that this is the case precisely *because* the thing
in question is intrinsically good; this answer indicates that the
concept of intrinsic goodness is more fundamental than that of the
fittingness of some pro attitude, which is inconsistent with analyzing
the former in terms of the latter (Blanshard 1961, pp. 284-86).
Ewing and others have resisted Blanshard's argument, maintaining
that what grounds and explains something's being valuable is not
its being good but rather its having whatever non-value property it is
upon which its goodness supervenes; they claim that it is because of
this underlying property that the thing in question is
"both" good and valuable (Ewing 1948, pp. 157 and 172. Cf.
Lemos 1994, p. 19). Thomas Scanlon calls such an account of the
relation between valuableness, goodness, and underlying properties a
buck-passing account, since it "passes the buck" of
explaining why something is such that it is fitting to value it from
its goodness to some property that underlies its goodness (Scanlon
1998, pp. 95 ff.). Whether such an account is acceptable has recently
been the subject of intense debate. Many, like Scanlon, endorse
passing the buck; some, like Blanshard, object to doing so. If such an
account is acceptable, then Ewing's analysis survives
Blanshard's challenge; but otherwise not. (Note that one might
endorse passing the buck and yet reject Ewing's analysis for
some other reason. Hence a buck-passer may, but need not, accept the
analysis. Indeed, there is reason to think that Moore himself is a
buck-passer, even though he takes the concept of intrinsic goodness to
be unanalyzable; cf. Olson 2006).
Even if Blanshard's argument succeeds and intrinsic goodness is
not to be *analyzed* in terms of the fittingness of some pro
attitude, it could still be that there is a *strict
correlation* between something's being intrinsically good
and its being such that it is fitting to value it for its own sake;
that is, it could still be both that (a) it is necessarily true that
whatever is intrinsically good is such that it is fitting to value it
for its own sake, and that (b) it is necessarily true that whatever it
is fitting to value for its own sake is intrinsically good. If this
were the case, it would reveal an important feature of intrinsic
value, recognition of which would help us to improve our understanding
of the concept. However, this thesis has also been challenged.
Krister Bykvist has argued that what he calls solitary goods may
constitute a counterexample to part (a) of the thesis (Bykvist 2009,
pp. 4 ff.). Such (alleged) goods consist in states of affairs that
entail that there is no one in a position to value them. Suppose, for
example, that happiness is intrinsically good, and good in such a way
that it is fitting to welcome it. Then, more particularly, the state
of affairs of there being happy egrets is intrinsically good; so too,
presumably, is the more complex state of affairs of there being happy
egrets but no welcomers. The simpler state of affairs would appear to
pose no problem for part (a) of the thesis, but the more complex state
of affairs, which is an example of a solitary good, may pose a
problem. For if to welcome a state of affairs entails that that state
of affairs obtains, then welcoming the more complex state of affairs
is logically impossible. Furthermore, if to welcome a state of affairs
entails that one believes that that state of affairs obtains, then the
pertinent belief regarding the more complex state of affairs would be
necessarily false. In neither case would it seem plausible to say that
welcoming the state of affairs is nonetheless fitting. Thus, unless
this challenge can somehow be met, a proponent of the thesis must
restrict the thesis to pro attitudes that are neither truth- nor
belief-entailing, a restriction that might itself prove unwelcome,
since it excludes a number of favorable responses to what is good
(such as promoting what is good, or taking pleasure in what is good)
to which proponents of the thesis have often appealed.
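Bykvist's point about truth-entailing pro attitudes can be put schematically (the notation is ours, for illustration only: \(W(a,p)\) for "agent \(a\) welcomes state of affairs \(p\)", and \(p^{*}\) for the solitary good that there are happy egrets but no welcomers):

```latex
% Assumption: welcoming is truth-entailing
\[ W(a,p) \rightarrow p \quad \text{for any } a \text{ and } p \]
% The solitary good entails that no one welcomes anything
\[ p^{*} \rightarrow \neg \exists x\, \exists q\; W(x,q) \]
% Hence welcoming p* entails its own negation, so welcoming p* is impossible
\[ W(a,p^{*}) \rightarrow \neg W(a,p^{*}), \qquad \text{so} \quad \neg \Diamond\, W(a,p^{*}) \]
```

On this reconstruction, if welcoming \(p^{*}\) is impossible, it is hard to maintain that welcoming \(p^{*}\) is nonetheless fitting, which is what clause (a) of the correlation thesis would require.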
As to part (b) of the thesis: some philosophers have argued that it
can be fitting to value something for its own sake even if that thing
is not intrinsically good. A relatively early version of this argument
was again provided by Blanshard (1961, pp. 287 ff. Cf. Lemos 1994, p.
18). Recently the issue has been brought into stark relief by the
following sort of thought-experiment. Imagine that an evil demon wants
you to value him for his own sake and threatens to cause you severe
suffering unless you do. It seems that you have good reason to do what
he wants--it is appropriate or fitting to comply with his demand
and value him for his own sake--even though he is clearly not
intrinsically good (Rabinowicz and Rønnow-Rasmussen 2004, pp.
402 ff.). This issue, which has come to be known as "the wrong
kind of reason problem," has attracted a great deal of
attention. Some have been persuaded that the challenge succeeds, while
others have sought to undermine it.
One final cautionary note. It is apparent that some philosophers use
the term "intrinsic value" and similar terms to express
some concept other than the one just discussed. In particular,
Immanuel Kant [1724-1804] is famous for saying that the only
thing that is "good without qualification" is a good will,
which is good not because of what it effects or accomplishes but
"in itself" (Kant 1785, Ak. 1-3). This may seem to
suggest that Kant ascribes (positive) intrinsic value only to a good
will, declaring the value that anything else may possess merely
extrinsic, in the senses of "intrinsic value" and
"extrinsic value" discussed above. This suggestion is, if
anything, reinforced when Kant immediately adds that a good will
"is to be esteemed beyond comparison as far higher than anything
it could ever bring about," that it "shine[s] like a jewel
for its own sake," and that its "usefulness...can
neither add to, nor subtract from, [its] value." For here Kant
may seem not only to be invoking the distinction between intrinsic and
extrinsic value but also to be in agreement with Brentano *et
al.* regarding the characterization of the former in terms of the
fittingness of some attitude, namely, esteem. (The term
"respect" is often used in place of "esteem"
in such contexts.) Nonetheless, it becomes clear on further inspection
that Kant is in fact discussing a concept quite different from that
with which this article is concerned. A little later on he says that
all rational beings, even those that lack a good will, have
"absolute value"; such beings are "ends in
themselves" that have a "dignity" or
"intrinsic value" that is "above all price"
(Kant 1785, Ak. 64 and 77). Such talk indicates that Kant believes
that the sort of value that he ascribes to rational beings is one that
they possess to an infinite degree. But then, if this were understood
as a thesis about intrinsic value as we have been understanding this
concept, the implication would seem to be that, since it contains
rational beings, ours is the best of all possible
worlds.[4]
Yet this is a thesis that Kant, along with many others, explicitly
rejects elsewhere (Kant, *Lectures on Ethics*). It seems best
to understand Kant, and other philosophers who have since written in
the same vein (cf. Anderson 1993), as being concerned not with the
question of what intrinsic value rational beings have--in the
sense of "intrinsic value" discussed above--but with
the quite different question of how we ought to behave toward such
creatures (cf. Bradley 2006).
## 3. Is There Such a Thing As Intrinsic Value At All?
In the history of philosophy, relatively few seem to have entertained
doubts about the concept of intrinsic value. Much of the debate about
intrinsic value has tended to be about what things actually do have
such value. However, once questions about the concept itself were
raised, doubts about its metaphysical implications, its moral
significance, and even its very coherence began to appear.
Consider, first, the metaphysics underlying ascriptions of intrinsic
value. It seems safe to say that, before the twentieth century, most
moral philosophers presupposed that the intrinsic goodness of
something is a genuine property of that thing, one that is no less
real than the properties (of being pleasant, of satisfying a need, or
whatever) in virtue of which the thing in question is good. (Several
dissented from this view, however. Especially well known for their
dissent are Thomas Hobbes [1588-1679], who believed the goodness
or badness of something to be constituted by the desire or aversion
that one may have regarding it, and David Hume [1711-1776], who
similarly took all ascriptions of value to involve projections of
one's own sentiments onto whatever is said to have value. See
Hobbes 1651, Hume 1739.) It was not until Moore argued that this view
implies that intrinsic goodness, as a supervening property, is a very
different sort of property (one that he called
"nonnatural") from those (which he called
"natural") upon which it supervenes, that doubts about the
view proliferated.
One of the first to raise such doubts and to press for a view quite
different from the prevailing view was Axel Hägerström
[1868-1939], who developed an account according to which
ascriptions of value are neither true nor false (Hägerström
1953). This view has come to be called "noncognitivism."
The particular brand of noncognitivism proposed by
Hägerström is usually called "emotivism," since
it holds (in a manner reminiscent of Hume) that ascriptions of value
are in essence expressions of emotion. (For example, an emotivist of a
particularly simple kind might claim that to say "*A* is
good" is not to make a statement about *A* but to say
something like "Hooray for *A*!") This view was
taken up by several philosophers, including most notably A. J. Ayer
[1910-1989] and Charles L. Stevenson [1908-1979] (see Ayer
1946, Stevenson 1944). Other philosophers have since embraced other
forms of noncognitivism. R. M. Hare [1919-2002], for example,
advocated the theory of "prescriptivism" (according to
which moral judgments, including judgments about goodness and badness,
are not descriptive statements about the world but rather constitute a
kind of command as to how we are to act; see Hare 1952) and Simon
Blackburn and Allan Gibbard have since proposed yet other versions of
noncognitivism (Blackburn 1984, Gibbard 1990).
Hägerström characterized his own view as a type of
"value-nihilism," and many have followed suit in taking
noncognitivism of all kinds to constitute a rejection of the very idea
of intrinsic value. But this seems to be a mistake. We should
distinguish questions about *value* from questions about
*evaluation*. Questions about value fall into two main groups,
*conceptual* (of the sort discussed in the last section) and
*substantive* (of the sort discussed in the first section).
Questions about evaluation have to do with what precisely is going on
when *we ascribe* value to something. Cognitivists claim that
our ascriptions of value constitute statements that are either true or
false; noncognitivists deny this. But even noncognitivists must
recognize that our ascriptions of value fall into two fundamental
classes--ascriptions of intrinsic value and ascriptions of
extrinsic value--and so they too must concern themselves with the
very same conceptual and substantive questions about value as
cognitivists address. It may be that noncognitivism dictates or rules
out certain answers to these questions that cognitivism does not, but
that is of course quite a different matter from rejecting the very
idea of intrinsic value on metaphysical grounds.
Another type of metaphysical challenge to intrinsic value stems from
the theory of "pragmatism," especially in the form
advanced by John Dewey [1859-1952] (see Dewey 1922). According
to the pragmatist, the world is constantly changing in such a way that
the solution to one problem becomes the source of another, what is an
end in one context is a means in another, and thus it is a mistake to
seek or offer a timeless list of intrinsic goods and evils, of ends to
be achieved or avoided for their own sakes. This theme has been
elaborated by Monroe Beardsley, who attacks the very notion of
intrinsic value (Beardsley 1965; cf. Conee 1982). Denying that the
existence of something with extrinsic value presupposes the existence
of something else with intrinsic value, Beardsley argues that all
value is extrinsic. (In the course of his argument, Beardsley rejects
the sort of "dialectical demonstration" of intrinsic value
that was attempted in the last section, when an explanation of the
derivative value of helping others was given in terms of some
nonderivative value.) A quick response to Beardsley's misgivings
about intrinsic value would be to admit that it may well be that, the
world being as complex as it is, nothing is such that its value is
wholly intrinsic; perhaps whatever has intrinsic value also has
extrinsic value, and of course many things that have extrinsic value
will have no (or, at least, neutral) intrinsic value. Far from
repudiating the notion of intrinsic value, though, this admission
would confirm its legitimacy. But Beardsley would insist that this
quick response misses the point of his attack, and that it really is
the case, not just that whatever has value has extrinsic value, but
also that nothing has intrinsic value. His argument for this view is
based on the claim that the concept of intrinsic value is
"inapplicable," in that, even if something had such value,
we could not know this and hence its having such value could play no
role in our reasoning about value. But here Beardsley seems to be
overreaching. Even if it were the case that we cannot *know*
whether something has intrinsic value, this of course leaves open the
question whether anything *does* have such value. And even if
it could somehow be shown that nothing *does* have such value,
this would still leave open the question whether something
*could* have such value. If the answer to this last question is
"yes," then the legitimacy of the concept of intrinsic
value is in fact confirmed rather than refuted.
As has been noted, some philosophers do indeed doubt the legitimacy,
the very coherence, of the concept of intrinsic value. Before we turn
to a discussion of this issue, however, let us for the moment presume
that the concept is coherent and address a different sort of doubt:
the doubt that the concept has any great moral significance. Recall
the suggestion, mentioned in the last section, that discussions of
intrinsic value may have been compromised by a failure to distinguish
certain concepts. This suggestion is at the heart of Christine
Korsgaard's "Two Distinctions in Goodness"
(Korsgaard 1983). Korsgaard notes that "intrinsic value"
has traditionally been contrasted with "instrumental
value" (the value that something has in virtue of being a means
to an end) and claims that this approach is misleading. She contends
that "instrumental value" is to be contrasted with
"final value," that is, the value that something has as an
end or for its own sake; however, "intrinsic value" (the
value that something has in itself, that is, in virtue of its
intrinsic, nonrelational properties) is to be contrasted with
"extrinsic value" (the value that something has in virtue
of its extrinsic, relational properties). (An example of a
nonrelational property is the property of being round; an example of a
relational property is the property of being loved.) As an
illustration of final value, Korsgaard suggests that gorgeously
enameled frying pans are, in virtue of the role they play in our
lives, good for their own sakes. In like fashion, Beardsley wonders
whether a rare stamp may be good for its own sake (Beardsley 1965);
Shelly Kagan says that the pen that Abraham Lincoln used to sign the
Emancipation Proclamation may well be good for its own sake (Kagan
1998); and others have offered similar examples (cf. Rabinowicz and
Rønnow-Rasmussen 1999 and 2003). Notice that in each case the
value being attributed to the object in question is (allegedly) had in
virtue of some *extrinsic* property of the object. This puts
the moral significance of *intrinsic* value into question,
since (as is apparent from our discussion so far) it is with the
notion of something's being valuable for its own sake that
philosophers have traditionally been, and continue to be, primarily
concerned.
There is an important corollary to drawing a distinction between
intrinsic value and final value (and between extrinsic value and
nonfinal value), and that is that, contrary to what Korsgaard herself
initially says, it may be a mistake to contrast final value with
instrumental value. If it is possible, as Korsgaard claims, that final
value sometimes supervenes on extrinsic properties, then it might be
possible that it sometimes supervenes in particular on the property of
being a means to some other end. Indeed, Korsgaard herself suggests
this when she says that "certain kinds of things, such as
luxurious instruments, ... are valued for their own sakes under
the condition of their usefulness" (Korsgaard 1983, p. 185).
Kagan also tentatively endorses this idea. If the idea is coherent,
then we should in principle distinguish two kinds of instrumental
value, one final and the other
nonfinal.[5]
If something *A* is a means to something else *B* and
has instrumental value in virtue of this fact, such value will be
nonfinal if it is merely derivative from or reflective of
*B*'s value, whereas it will be final if it is
nonderivative, that is, if it is a value that *A* has in its
*own* right (due to the fact that it is a means to *B*),
irrespective of any value that *B* may or may not have in
*its* own right.
Even if it is agreed that it is final value that is central to the
concerns of moral philosophers, we should be careful in drawing the
conclusion that intrinsic value is not central to their concerns.
First, there is no necessity that the term "intrinsic
value" be reserved for the value that something has in virtue of
its intrinsic properties; presumably it has been used by many writers
simply to refer to what Korsgaard calls final value, in which case the
moral significance of (what is thus called) intrinsic value has of
course not been thrown into doubt. Nonetheless, it should probably be
conceded that "final value" is a more suitable term than
"intrinsic value" to refer to the sort of value in
question, since the latter term certainly does suggest value that
supervenes on intrinsic properties. But here a second point can be
made, and that is that, even if use of the term "intrinsic
value" is restricted accordingly, it is arguable that, contrary
to Korsgaard's contention, all final value does after all
supervene on intrinsic properties alone; if that were the case, there
would seem to be no reason not to continue to use the term
"intrinsic value" to refer to final value. Whether this is
in fact the case depends in part on just what sort of thing
*can* be valuable for its own sake--an issue to be taken
up in the next section.
In light of the matter just discussed, we must now decide what
terminology to adopt. It is clear that moral philosophers since
ancient times have been concerned with the distinction between the
value that something has for its own sake (the sort of nonderivative
value that Korsgaard calls "final value") and the value
that something has for the sake of something else to which it is
related in some way. However, given the weight of tradition, it seems
justifiable, perhaps even advisable, to continue, despite
Korsgaard's misgivings, to use the terms "intrinsic
value" and "extrinsic value" to refer to these two
types of value; if we do so, however, we should explicitly note that
this practice is not itself intended to endorse, or reject, the view
that intrinsic value supervenes on intrinsic properties alone.
Let us now turn to doubts about the very coherence of the concept of
intrinsic value, so understood. In *Principia Ethica* and
elsewhere, Moore embraces the consequentialist view, mentioned above,
that whether an action is morally right or wrong turns exclusively on
whether its consequences are intrinsically better than those of its
alternatives. Some philosophers have recently argued that ascribing
intrinsic value to consequences in this way is fundamentally
misconceived. Peter Geach, for example, argues that Moore makes a
serious mistake when comparing "good" with
"yellow."[6]
Moore says that both terms express unanalyzable concepts but are to
be distinguished in that, whereas the latter refers to a natural
property, the former refers to a nonnatural one. Geach contends that
there is a mistaken assimilation underlying Moore's remarks,
since "good" in fact operates in a way quite unlike that
of "yellow"--something that Moore wholly overlooks.
This contention would appear to be confirmed by the observation that
the phrase "*x* is a yellow bird" splits up
logically (as Geach puts it) into the phrase "*x* is a
bird and *x* is yellow," whereas the phrase
"*x* is a good singer" does not split up in the
same way. Also, from "*x* is a yellow bird" and
"a bird is an animal" we do not hesitate to infer
"*x* is a yellow animal," whereas no similar
inference seems warranted in the case of "*x* is a good
singer" and "a singer is a person." On the basis of
these observations Geach concludes that nothing can be good in the
free-standing way that Moore alleges; rather, whatever is good is good
*relative* to a certain kind.
Judith Thomson has recently elaborated on Geach's thesis
(Thomson 1997). Although she does not unqualifiedly agree that
whatever is good is good relative to a certain kind, she does claim
that whatever is good is good in some way; nothing can be "just
plain good," as she believes Moore would have it. Philippa Foot,
among others, has made a similar charge (Foot 1985). It is a charge
that has been rebutted by Michael Zimmerman, who argues that
Geach's tests are less straightforward than they may seem and
fail after all to reveal a significant distinction between the ways in
which "good" and "yellow" operate (Zimmerman
2001, ch. 2). He argues further that Thomson mischaracterizes
Moore's conception of intrinsic value. According to Moore, he
claims, what is intrinsically good is not "just plain
good"; rather, it is good in a particular way, in keeping with
Thomson's thesis that all goodness is goodness in a way. He
maintains that, for Moore and other proponents of intrinsic value,
such value is a particular kind of *moral* value. Mahrad
Almotahari and Adam Hosein have revived Geach's challenge
(Almotahari and Hosein 2015). They argue that if, contrary to Geach,
"good" could be used predicatively, we would be able to
use the term predicatively in sentences of the form '*a* is
a good *K*' but, they argue, the linguistic evidence
indicates that we cannot do so (Almotahari and Hosein 2015,
1493-4).
## 4. What Sort of Thing Can Have Intrinsic Value?
Among those who do not doubt the coherence of the concept of intrinsic
value there is considerable difference of opinion about what sort or
sorts of entity can have such value. Moore does not explicitly address
this issue, but his writings show him to have a liberal view on the
matter. There are times when he talks of individual objects (e.g.,
books) as having intrinsic value, others when he talks of the
consciousness of individual objects (or of their qualities) as having
intrinsic value, others when he talks of the existence of individual
objects as having intrinsic value, others when he talks of types of
individual objects as having intrinsic value, and still others when he
talks of states of individual objects as having intrinsic value.
Moore would thus appear to be a "pluralist" concerning the
bearers of intrinsic value. Others take a more conservative,
"monistic" approach, according to which there is just one
kind of bearer of intrinsic value. Consider, for example,
Frankena's long list of intrinsic goods, presented in Section 1
above: life, consciousness, etc. To what *kind(s)* of entity do
such terms refer? Various answers have been given. Some (such as
Panayot Butchvarov) claim that it is *properties* that are the
bearers of intrinsic value (Butchvarov 1989, pp. 14-15). On this
view, Frankena's list implies that it is the properties of being
alive, being conscious, and so on, that are intrinsically good. Others
(such as Chisholm) claim that it is *states of affairs* that
are the bearers of intrinsic value (Chisholm 1968-69, 1972,
1975). On this view, Frankena's list implies that it is the
states of affairs of someone (or something) being alive, someone being
conscious, and so on, that are intrinsically good. Still others (such
as Ross) claim that it is *facts* that are the bearers of
intrinsic value (Ross 1930, pp. 112-13; cf. Lemos 1994, ch. 2).
On this view, Frankena's list implies that it is the facts that
someone (or something) is alive, that someone is conscious, and so on,
that are intrinsically good. (The difference between Chisholm's
and Ross's views would seem to be this: whereas Chisholm would
ascribe intrinsic value even to states of affairs, such as that of
everyone being happy, that do not obtain, Ross would ascribe such
value only to states of affairs that do obtain.)
Ontologists often divide entities into two fundamental classes, those
that are abstract and those that are concrete. Unfortunately, there is
no consensus on just how this distinction is to be drawn. Most
philosophers would classify the sorts of entities just mentioned
(properties, states of affairs, and facts) as abstract. So understood,
the claim that intrinsic value is borne by such entities is to be
distinguished from the claim that it is borne by certain other closely
related entities that are often classified as concrete. For example,
it has recently been suggested that it is tropes that have intrinsic
value.[7]
Tropes are supposed to be a sort of particularized property, a kind
of property-instance (rather than simply a property). (Thus the
particular whiteness of a particular piece of paper is to be
distinguished, on this view, from the property of whiteness.) It has
also been suggested that it is states, understood as a kind of
instance of states of affairs, that have intrinsic value (cf.
Zimmerman 2001, ch. 3).
Those who make monistic proposals of the sort just mentioned are aware
that intrinsic value is sometimes ascribed to kinds of entities
different from those favored by their proposals. They claim that all
such ascriptions can be reduced to, or translated into, ascriptions of
intrinsic value of the sort they deem proper. Consider, for example,
Korsgaard's suggestion that a gorgeously enameled frying pan is
good for its own sake. Ross would say that this cannot be the case. If
there is any intrinsic value to be found here, it will, according to
Ross, not reside in the pan itself but in the fact that it plays a
certain role in our lives, or perhaps in the fact that something plays
this role, or in the fact that something that plays this role exists.
(Others would make other translations in the terms that they deem
appropriate.) On the basis of this ascription of intrinsic value to
some fact, Ross could go on to ascribe a kind of *extrinsic*
value to the pan itself, in virtue of its relation to the fact in
question.
Whether reduction of this sort is acceptable has been a matter of
considerable debate. Proponents of monism maintain that it introduces
some much-needed order into the discussion of intrinsic value,
clarifying just what is involved in the ascription of such value and
simplifying the computation of such value--on which point, see
the next section. (A corollary of some monistic approaches is that the
value that something has for its own sake supervenes on the intrinsic
properties of that thing, so that there is a perfect convergence of
the two sorts of values that Korsgaard calls "final" and
"intrinsic". On this point, see the last section;
Zimmerman 2001, ch. 3; Tucker 2016; and Tucker (forthcoming).)
Opponents argue that reduction results in distortion and
oversimplification; they maintain that, even if there is intrinsic
value to be found in such a fact as that a gorgeously enameled frying
pan plays a certain role in our lives, there may yet be
*intrinsic*, and not merely extrinsic, value to be found in the
pan itself and perhaps also in its existence (cf. Rabinowicz and
Rønnow-Rasmussen 1999 and 2003). Some propose a compromise
according to which the kind of intrinsic value that can sensibly be
ascribed to individual objects like frying pans is not the same kind
of intrinsic value that is the topic of this article and can sensibly
be ascribed to items of the sort on Frankena's list (cf. Bradley
2006). (See again the cautionary note in the final paragraph of
Section 2 above.)
## 5. How Is Intrinsic Value to Be Computed?
In our assessments of intrinsic value, we are often and understandably
concerned not only with *whether* something is good or bad but
with *how* good or bad it is. Arriving at an answer to the
latter question is not straightforward. At least three problems
threaten to undermine the computation of intrinsic value.
First, there is the possibility that the relation of intrinsic
betterness is not transitive (that is, the possibility that something
*A* is intrinsically better than something else *B*,
which is itself intrinsically better than some third thing *C*,
and yet *A* is not intrinsically better than *C*).
Despite the very natural assumption that this relation is transitive,
it has been argued that it is not (Rachels 1998; Temkin 1987, 1997,
2012). Should this in fact be the case, it would seriously complicate
comparisons, and hence assessments, of intrinsic value.
Second, there is the possibility that certain values are
incommensurate. For example, Ross at one point contends that it is
impossible to compare the goodness of pleasure with that of virtue.
Whereas he had suggested in *The Right and the Good* that
pleasure and virtue could be measured on the same scale of goodness,
in *Foundations of Ethics* he declares this to be impossible,
since (he claims) it would imply that pleasure of a certain intensity,
enjoyed by a sufficient number of people or for a sufficient time,
would counterbalance virtue possessed or manifested only by a small
number of people or only for a short time; and this he professes to be
incredible (Ross 1939, p. 275). But there is some confusion here. In
claiming that virtue and pleasure are incommensurate for the reason
given, Ross presumably means that they cannot be measured on the same
*ratio* scale. (A ratio scale is one with an arbitrary unit but
a fixed zero point. Mass and length are standardly measured on ratio
scales.) But incommensurability on a ratio scale does not imply
incommensurability on *every* scale--an ordinal scale, for
instance. (An ordinal scale is simply one that supplies an ordering
for the quantity in question, such as the measurement of arm-strength
that is provided by an arm-wrestling competition.) Ross's
remarks indicate that he in fact believes that virtue and pleasure
*are* commensurate on an ordinal scale, since he appears to
subscribe to the arch-puritanical view that any amount of virtue is
intrinsically better than any amount of pleasure. This view is just
one example of the thesis that some goods are "higher"
than others, in the sense that any amount of the former is better than
any amount of the latter. This thesis can be traced to the ancient
Greeks (Plato, *Philebus*, 21a-e; Aristotle, *Nicomachean
Ethics*, 1174a), and it has been endorsed by many philosophers
since, perhaps most famously by Mill (Mill 1863, paras. 4 ff).
Interest in the thesis has recently been revived by a set of intricate
and intriguing puzzles, posed by Derek Parfit, concerning the relative
values of low-quantity/high-quality goods and
high-quantity/low-quality goods (Parfit 1984, Part IV). One response
to these puzzles (eschewed by Parfit himself) is to adopt the thesis
of the nontransitivity of intrinsic betterness. Another is to insist
on the thesis that some goods are higher than others. Such a response
does not by itself solve the puzzles that Parfit raises, but, to the
extent that it helps, it does so at the cost of once again
complicating the computation of intrinsic value.
To repeat: contrary to what Ross says, the thesis that some goods are
higher than others implies that such goods are commensurate, and not
that they are incommensurate. Some people do hold, however, that
certain values really are incommensurate and thus cannot be compared
on any meaningful scale. (Isaiah Berlin [1909-1997], for
example, is often thought to have said this about the values of
liberty and equality. Whether he is best interpreted in this way is
debatable. See Berlin 1969.) This view constitutes a more radical
threat to the computation of intrinsic value than does the view that
intrinsic betterness is not transitive. The latter view presupposes at
least some measure of commensurability. If *A* is better than
*B* and *B* is better than *C*, then *A*
is commensurate with *B* and *B* is commensurate with
*C*; and even if it should turn out that *A* is not
better than *C*, it may still be that *A* is
commensurate with *C*, either because it is as good as
*C* or because it is worse than *C*. But if *A*
is incommensurate with *B*, then *A* is neither better
than nor as good as nor worse than *B*. (Some claim, however,
that the reverse does not hold and that, even if *A* is neither
better than nor as good as nor worse than *B*, still *A*
may be "on a par" with *B* and thus be roughly
comparable with it. Cf. Chang 1997, 2002.) If such a case can arise,
there is an obvious limit to the extent to which we can meaningfully
say how good a certain complex whole is (here, "whole" is
used to refer to whatever kind of entity may have intrinsic value);
for, if such a whole comprises incommensurate goods *A* and
*B*, then there will be no way of establishing just how good it
is overall, even if there is a way of establishing how good it is with
respect to each of *A* and *B*.
There is a third, still more radical threat to the computation of
intrinsic value. Quite apart from any concern with the
commensurability of values, Moore famously claims that there is no
easy formula for the determination of the intrinsic value of complex
wholes because of the truth of what he calls the "principle of
organic unities" (Moore 1903, p. 96). According to this
principle, the intrinsic value of a whole must not be assumed to be
the same as the sum of the intrinsic values of its parts (Moore 1903,
p. 28). As an example of an organic unity, Moore gives the case of the
consciousness of a beautiful object; he says that this has great
intrinsic value, even though the consciousness as such and the
beautiful object as such each have comparatively little, if any,
intrinsic value. If the principle of organic unities is true, then
there is scant hope of a systematic approach to the computation of
intrinsic value. Although the principle explicitly rules out only
summation as a method of computation, Moore's remarks strongly
suggest that there is no relation between the parts of a whole and the
whole itself that holds in general and in terms of which the value of
the latter can be computed by aggregating (whether by summation or by
some other means) the values of the former. Moore's position has
been endorsed by many other philosophers. For example, Ross says that
it is better that one person be good and happy and another bad and
unhappy than that the former be good and unhappy and the latter bad
and happy, and he takes this to be confirmation of Moore's
principle (Ross 1930, p. 72). Broad takes organic unities of the sort
that Moore discusses to be just one instance of a more general
phenomenon that he believes to be at work in many other situations, as
when, for example, two tunes, each pleasing in its own right, make for
a cacophonous combination (Broad 1985, p. 256). Others have furnished
still further examples of organic unities (Chisholm 1986, ch. 7; Lemos
1994, chs. 3 and 4, and 1998; Hurka 1998).
Was Moore the first to call attention to the phenomenon of organic
unities in the context of intrinsic value? This is debatable. Despite
the fact that he explicitly invoked what he called a "principle
of summation" that would appear to be inconsistent with the
principle of organic unities, Brentano appears nonetheless to have
anticipated Moore's principle in his discussion of
*Schadenfreude*, that is, of malicious pleasure; he condemns
such an attitude, even though he claims that pleasure as such is
intrinsically good (Brentano 1969, p. 23 n). Certainly Chisholm takes
Brentano to be an advocate of organic unities (Chisholm 1986, ch. 5),
ascribing to him the view that there are many kinds of organic unity
and building on what he takes to be Brentano's insights (and,
going further back in the history of philosophy, the insights of St.
Thomas Aquinas [1225-1274] and others).
Recently, a special spin has been put on the principle of organic
unities by so-called "particularists." Jonathan Dancy, for
example, has claimed (in keeping with Korsgaard and others mentioned
in Section 3 above), that something's intrinsic value need not
supervene on its intrinsic properties alone; in fact, the
supervenience-base may be so open-ended that it resists
generalization. The upshot, according to Dancy, is that the intrinsic
value of something may vary from context to context; indeed, the
variation may be so great that the thing's value changes
"polarity" from good to bad, or *vice versa* (Dancy
2000). This approach to value constitutes an endorsement of the
principle of organic unities that is even more subversive of the
computation of intrinsic value than Moore's; for Moore holds
that the intrinsic value of something is and must be constant, even if
its contribution to the value of wholes of which it forms a part is
not, whereas Dancy holds that variation can occur at both levels.
Not everyone has accepted the principle of organic unities; some have
held out hope for a more systematic approach to the computation of
intrinsic value. However, even someone who is inclined to measure
intrinsic value in terms of summation must acknowledge that there is a
sense in which the principle of organic unities is obviously true.
Consider some complex whole, *W*, that is composed of three
goods, *X*, *Y*, and *Z*, which are wholly
independent of one another. Suppose that we had a ratio scale on which
to measure these goods, and that their values on this scale were 10,
20, and 30, respectively. We would expect someone who takes intrinsic
value to be summative to declare the value of *W* to be (10 +
20 + 30 =) 60. But notice that, if *X*, *Y*, and
*Z* are parts of *W*, then so too, presumably, are the
combinations *X*-and-*Y*, *X*-and-*Z*, and
*Y*-and-*Z*; the values of these combinations, computed
in terms of summation, will be 30, 40, and 50, respectively. If the
values of these parts of *W* were also taken into consideration
when evaluating *W*, the value of *W* would balloon to
180. Clearly, this would be a distortion. Someone who wishes to
maintain that intrinsic value is summative must thus show not only how
the various alleged examples of organic unities provided by Moore and
others are to be reinterpreted, but also how, in the sort of case just
sketched, it is only the values of *X*, *Y*, and
*Z*, and not the values either of any combinations of these
components or of any parts of these components, that are to be taken
into account when evaluating *W* itself. In order to bring some
semblance of manageability to the computation of intrinsic value, this
is precisely what some writers, by appealing to the idea of
"basic" intrinsic value, have tried to do. The general
idea is this. In the sort of example just given, each of *X*,
*Y*, and *Z* is to be construed as having basic
intrinsic value; if any combinations or parts of *X*,
*Y*, and *Z* have intrinsic value, this value is not
basic; and the value of *W* is to be computed by appealing only
to those parts of *W* that have basic intrinsic value.
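The double-counting worry just sketched can be made vivid with a small computation (a hypothetical illustration only; the values 10, 20, and 30 are the ones used in the example above, and the variable names are invented for this sketch):

```python
from itertools import combinations

# Basic intrinsic values of the three independent goods composing W.
basic_values = {"X": 10, "Y": 20, "Z": 30}

# Summing only the parts with basic intrinsic value gives the
# intended total for W.
basic_total = sum(basic_values.values())  # 10 + 20 + 30 = 60

# Naively summing *every* part of W -- the basic parts plus the
# combinations X-and-Y, X-and-Z, and Y-and-Z -- double-counts.
pair_values = [sum(basic_values[p] for p in pair)
               for pair in combinations(basic_values, 2)]  # 30, 40, 50
naive_total = basic_total + sum(pair_values)  # 60 + 120 = 180

print(basic_total, naive_total)  # 60 180
```

The appeal to basic intrinsic value amounts to restricting the summation to the first total and declining to count the combinations (or any parts of the basic parts) separately.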
Gilbert Harman was one of the first explicitly to discuss basic
intrinsic value when he pointed out the apparent need to invoke such
value if we are to avoid distortions in our evaluations (Harman 1967).
However, he offers no precise account of the concept of basic
intrinsic value and ends his paper by saying that he can think of no
way to show that nonbasic intrinsic value is to be computed in terms
of the summation of basic intrinsic value. Several philosophers have
since tried to do better. Many have argued that nonbasic intrinsic
value *cannot* always be computed by summing basic intrinsic
value. Suppose that states of affairs can bear intrinsic value. Let
*X* be the state of affairs of John being pleased to a certain
degree *x*, and *Y* be the state of affairs of Jane
being displeased to a certain degree *y*, and suppose that
*X* has a basic intrinsic value of 10 and *Y* a basic
intrinsic value of −20. It seems reasonable to sum these values
and attribute an intrinsic value of −10 to the conjunctive state
of affairs *X&Y*. But what of the disjunctive state of
affairs *XvY* or the negative state of affairs *~X*? How
are *their* intrinsic values to be computed? Summation seems to
be a nonstarter in these cases. Nonetheless, attempts have been made
even in such cases to show how the intrinsic value of a complex whole
is to be computed in a nonsummative way in terms of the basic
intrinsic values of simpler states, thus preserving the idea that
basic intrinsic value is the key to the computation of all intrinsic
value (Quinn 1974, Chisholm 1975, Oldfield 1977, Carlson 1997). (These
attempts have generally been based on the assumption that states of
affairs are the *sole* bearers of intrinsic value. Matters
would be considerably more complicated if it turned out that entities
of several different ontological categories could all have intrinsic
value.)
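The contrast just drawn can be put schematically (a hypothetical sketch using the values from the text; treating conjunction as summation is the "reasonable" rule the text describes, and the point is precisely that no analogous arithmetic rule presents itself for disjunction or negation):

```python
# Basic intrinsic values of the two states of affairs from the text.
value_X = 10    # John being pleased to degree x
value_Y = -20   # Jane being displeased to degree y

# Summation handles the conjunctive state of affairs X & Y ...
value_conjunction = value_X + value_Y  # -10

# ... but supplies no answer for the disjunctive state X v Y or the
# negative state ~X: neither max, min, nor any other obvious operation
# is clearly the right rule, which is why nonsummative proposals
# (Quinn, Chisholm, Oldfield, Carlson) were developed for such cases.
print(value_conjunction)  # -10
```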
Suggestions as to how to compute nonbasic intrinsic value in terms of
basic intrinsic value of course presuppose that there is such a thing
as basic intrinsic value, but few have attempted to provide an account
of what basic intrinsic value itself consists in. Fred Feldman is one
of the few (Feldman 2000; cf. Feldman 1997, pp. 116-18).
Subscribing to the view that only states of affairs bear intrinsic
value, Feldman identifies several features that any state of affairs
that has basic intrinsic value in particular must possess. He
maintains, for example, that whatever has basic intrinsic value must
have it to a determinate degree and that this value cannot be
"defeated" by any Moorean organic unity. In this way,
Feldman seeks to preserve the idea that intrinsic value is summative
after all. He does not claim that all intrinsic value is to be
computed by summing basic intrinsic value, but he does insist that the
value of entire worlds is to be computed in this way.
Despite the detail in which Feldman characterizes the concept of basic
intrinsic value, he offers no strict analysis of it. Others have tried
to supply such an analysis. For example, by noting that, even if it is
true that only states have intrinsic value, it may yet be that not all
states have intrinsic value, Zimmerman suggests (to put matters
somewhat roughly) that basic intrinsic value is the intrinsic value
had by states none of whose proper parts have intrinsic value
(Zimmerman 2001, ch. 5). On this basis he argues that disjunctive and
negative states in fact have no intrinsic value at all, and thereby
seeks to show how all intrinsic value is to be computed in terms of
summation after all.
Two final points. First, we are now in a position to see why it was
said above (in Section 2) that perhaps not all intrinsic value is
nonderivative. If it is correct to distinguish between basic and
nonbasic intrinsic value and also to compute the latter in terms of
the former, then there is clearly a respectable sense in which
nonbasic intrinsic value is derivative. Second, if states with basic
intrinsic value account for all the value that there is in the world,
support is found for Chisholm's view (reported in Section 2)
that some ontological version of Moore's isolation test is
acceptable.
## 6. What Is Extrinsic Value?
At the beginning of this article, extrinsic value was said
simply--too simply--to be value that is not intrinsic.
Later, once intrinsic value had been characterized as nonderivative
value of a certain, perhaps moral kind, extrinsic value was said more
particularly to be derivative value *of that same kind*. That
which is extrinsically good is good, not (insofar as its extrinsic
value is concerned) for its own sake, but for the sake of something
else to which it is related in some way. For example, the goodness of
helping others in time of need is plausibly thought to be extrinsic
(at least in part), being derivative (at least in part) from the
goodness of something else, such as these people's needs being
satisfied, or their experiencing pleasure, to which helping them is
related in some causal way.
Two questions arise. The first is whether so-called extrinsic value is
really a type of value at all. There would seem to be a sense in which
it is not, for it does not add to or detract from the value in the
world. Consider some long chain of derivation. Suppose that the
extrinsic value of *A* can be traced to the intrinsic value of
*Z* by way of *B*, *C*, *D*... Thus
*A* is good (for example) because of *B*, which is good
because of *C*, and so on, until we get to *Y*'s
being good because of *Z*; when it comes to *Z*,
however, we have something that is good, not because of something
else, but "because of itself," i.e., for its own sake. In
this sort of case, the values of *A*, *B*, ...,
*Y* are all parasitic on the value of *Z*. It is
*Z*'s value that contributes to the value there is in the
world; *A*, *B*, ..., *Y* contribute no
value of their own. (As long as the value of *Z* is the only
intrinsic value at stake, no change of value would be effected in or
imparted to the world if a shorter route from *A* to *Z*
were discovered, one that bypassed some letters in the middle of the
alphabet.)
Why talk of "extrinsic value" at all, then? The answer can
only be that we just do say that certain things are good, and others
bad, not for their own sake but for the sake of something else to
which they are related in some way. To say that these things are good
and bad only in a derivative sense, that their value is merely
parasitic on or reflective of the value of something else, is one
thing; to deny that they are good or bad in any respectable sense is
quite another. The former claim is accurate; hence the latter would
appear unwarranted.
If we accept that talk of "extrinsic value" can be
appropriate, however, a second question then arises: what sort of
relation must obtain between *A* and *Z* if *A*
is to be said to be good "because of" *Z*? It is
not clear just what the answer to this question is. Philosophers have
tended to focus on just one particular causal relation, the means-end
relation. This is the relation at issue in the example given earlier:
helping others is a means to their needs being satisfied, which is
itself a means to their experiencing pleasure. The term most often
used to refer to this type of extrinsic value is "instrumental
value," although there is some dispute as to just how this term
is to be employed. (Remember also, from Section 3 above, that on some
views "instrumental value" may refer to a type of
*intrinsic*, or final, value.) Suppose that *A* is a
means to *Z*, and that *Z* is intrinsically good. Should
we therefore say that *A* is instrumentally good? What if
*A* has another consequence, *Y*, and this consequence
is intrinsically bad? What, especially, if the intrinsic badness of
*Y* is greater than the intrinsic goodness of *Z*? Some
would say that in such a case *A* is both instrumentally good
(because of *Z*) and instrumentally bad (because of
*Y*). Others would say that it is correct to say that
*A* is instrumentally good only if all of *A*'s
causal consequences that have intrinsic value are, taken as a whole,
intrinsically good. Still others would say that whether something is
instrumentally good depends not only on what it causes to happen but
also on what it prevents from happening (cf. Bradley 1998). For
example, if pain is intrinsically bad, and taking an aspirin puts a
stop to your pain but causes nothing of any positive intrinsic value,
some would say that taking the aspirin is instrumentally good despite
its having no intrinsically good consequences.
Many philosophers write as if instrumental value is the only type of
extrinsic value, but that is a mistake. Suppose, for instance, that
the results of a certain medical test indicate that the patient is in
good health, and suppose that this patient's having good health
is intrinsically good. Then we may well want to say that the results
are themselves (extrinsically) good. But notice that the results are
of course not a means to good health; they are simply indicative of
it. Or suppose that making your home available to a struggling artist
while you spend a year abroad provides him with an opportunity he
would otherwise not have to create some masterpieces, and suppose that
either the process or the product of this creation would be
intrinsically good. Then we may well want to say that your making your
home available to him is (extrinsically) good because of the
opportunity it provides him, even if he goes on to squander the
opportunity and nothing good comes of it. Or suppose that
someone's appreciating the beauty of the *Mona Lisa*
would be intrinsically good. Then we may well want to say that the
painting itself has value in light of this fact, a kind of value that
some have called "inherent value" (Lewis 1946, p. 391; cf.
Frankena 1973, p. 82). ("*Inherent* value" may not
be the most suitable term to use here, since it may well suggest
*intrinsic* value, whereas the sort of value at issue is
supposed to be a type of *extrinsic* value. The value
attributed to the painting is one that it is said to have in virtue of
its relation to something else that would supposedly be intrinsically
good if it occurred, namely, the appreciation of its beauty.) Many
other instances could be given of cases in which we are inclined to
call something good in virtue of its relation to something else that
is or would be intrinsically good, even though the relation in
question is not a means-end relation.
One final point. It is sometimes said that there can be no extrinsic
value without intrinsic value. This thesis admits of several
interpretations. First, it might mean that nothing can occur that is
extrinsically good unless something else occurs that is intrinsically
good, and that nothing can occur that is extrinsically bad unless
something else occurs that is intrinsically bad. Second, it might mean
that nothing can occur that is either extrinsically good or
extrinsically bad unless something else occurs that is either
intrinsically good or intrinsically bad. On both these
interpretations, the thesis is dubious. Suppose that no one ever
appreciates the beauty of Leonardo's masterpiece, and that
nothing else that is intrinsically either good or bad ever occurs;
still his painting may be said to be inherently good. Or suppose that
the aspirin prevents your pain from even starting, and hence inhibits
the occurrence of something intrinsically bad, but nothing else that
is intrinsically either good or bad ever occurs; still your taking the
aspirin may be said to be instrumentally good. On a third
interpretation, however, the thesis might be true. That interpretation
is this: nothing can occur that is either extrinsically good or
extrinsically neutral or extrinsically bad unless something else
occurs that is either intrinsically good or intrinsically neutral or
intrinsically bad. This would be trivially true if, as some maintain,
the nonoccurrence of something intrinsically either good or bad
entails the occurrence of something intrinsically neutral. But even if
the thesis should turn out to be false on this third interpretation,
too, it would nonetheless seem to be true on a fourth interpretation,
according to which the concept of extrinsic value, in all its
varieties, is to be understood in terms of the concept of intrinsic
value. |
knowledge-value | ## 1. Value problems
In Plato's *Meno*, Socrates raises the question of why
knowledge is more valuable than mere true belief. Call this *the
Meno problem* or, anticipating distinctions made below, *the
primary value problem*.
Initially, we might appeal to the fact that knowledge appears to be of
more practical use than true belief in order to mark this difference
in value. But, as Socrates notes, this could be questioned, because a
*true belief that this is the way to Larissa* will get you to
Larissa just as well as *knowledge that this is the way to
Larissa.* Plato's own solution was that knowledge is formed
in a special way distinguishing it from belief: knowledge, unlike
belief, must be 'tied down' to the truth, like the
mythical tethered statues of Daedalus. As a result, knowledge is
better suited to guide action. For example, if one knows, rather than
merely truly believes, that this is the way to Larissa, then one might
be less likely to be perturbed by the fact that the road initially
seems to be going in the wrong direction. Mere true belief at this
point might be lost, since one might lose all confidence that this is
the right way to go.
The primary value problem has been distinguished from the
*secondary value problem* (Pritchard 2007: §2). The
secondary value problem pertains to why knowledge is more valuable,
from an epistemic point of view, than *any* proper subset of
its parts. Put otherwise, why is knowledge better than any epistemic
standing falling short of knowing? This includes, but is not
restricted to, mere true belief. To illustrate the distinction,
consider a possible solution to the primary value problem: knowledge
is justified true belief, and justified true belief is better than
mere true belief, which explains why knowledge is better than true
belief. If correct, this hypothesis successfully answers the primary
value problem. However, it requires further development to answer the
secondary value problem. For example, it requires further development
to explain why knowledge is better than justified belief.
Of course, on many standard theories of knowledge, knowledge is not
defined as justified true belief. For instance, according to some
theorists, knowledge is undefeated justified true belief (Lehrer &
Paxson 1969); on other widely discussed accounts, knowledge is true
belief that is non-accidental (Unger 1968), sensitive (Nozick 1981),
safe (Sosa 1999), appropriately caused (Goldman 1967), or produced by
intellectual virtue (Zagzebski 1996). This puts us in a position to
appreciate what some theorists call *the tertiary value
problem*. The tertiary value problem pertains to why knowledge is
*qualitatively better* than any epistemic standing falling
short of knowledge. Consider that if knowledge were only
quantitatively better than that which falls just short--for
instance, on an envisioned continuum of epistemic value--then it
would be mysterious why epistemologists have given such attention to
this particular point on the continuum.
Why does knowledge have this "distinctive value" not
shared by that which falls just short of knowledge (Pritchard 2009:
14)?
Not all theorists accept that the value problems are genuine. For
example, in light of the literature on the Gettier problem, some
theorists deny that the secondary value problem is genuine. On this
approach, whatever is added to justified true belief to rule out
Gettier cases does not increase the value of the agent's
intellectual state: it is of no consequence whether we have
Gettier-proof justified true belief rather than mere justified true
belief (Kaplan 1985). Of course, Gettier cases are peculiar and
presumably rare, so in practice having Gettier-proof justified true
belief is almost invariably confounded with having mere justified true
belief. This could lead some theorists to mistake the value of the
latter for that of the former. Other theorists deny that the primary
value problem is genuine. For example, on one approach, knowledge just
is true belief (Sartwell 1991). If knowledge is true belief, then
knowledge cannot be better than true belief, because nothing can be
better than itself. However, the definition of knowledge as true
belief has not been widely accepted.
## 2. Reliabilism and the Meno Problem
The first contemporary wave of work on the value problem largely
concerned whether this problem raised a distinctive difficulty for
reliabilist accounts of knowledge--i.e., those views which
essentially define knowledge as reliably-formed true belief. In
particular, the claim was that reliabilism was unable to offer an
answer even to the primary value problem.
A fairly clear statement of what is at issue here is given in a number
of places by Linda Zagzebski (e.g., 2003a; cf. DePaul 1993; Zagzebski
1996; Jones 1997; Swinburne 1999, 2000; Riggs 2002a; Kvanvig 2003;
Sosa 2007: ch. 4; Carter & Jarvis 2012). To begin with, Zagzebski
argues that the reliability of the process by which something is
produced does not automatically add value to that thing, and thus that
it cannot be assumed that the reliability of the process by which a
true belief is produced will add value to that true belief. In defense
of this claim, she offers the analogy of a cup of coffee. She claims
that a good cup of coffee which is produced by a reliable coffee
machine--i.e., one that regularly produces good cups of
coffee--is of no more value than an equally good cup of coffee
that is produced by an unreliable coffee machine.
Furthermore, as this line of objection goes, true belief is in the
relevant respects like coffee: a true belief formed via a reliable
belief-forming process is no more valuable than a true belief formed
via an unreliable belief-forming process. In both cases, the value of
the reliability of the process accrues in virtue of its tendency to
produce a certain valuable effect (good coffee/true belief), but this
means that where the effect has been produced--where one has a
good cup of coffee or a true belief--then the value of the
product is no greater for having been produced in a reliable way.
Elsewhere in the literature (e.g., Kvanvig 2003), this problem has
been called the "swamping problem", on account of how the
value of true belief 'swamps' the value of the true belief
being produced in a reliable (i.e., truth-conducive) way. So
expressed, the moral of the problem seems to be that where
reliabilists go awry is by treating the value of the process as being
solely captured by the reliability of the process--i.e., its
tendency to produce the desired effect. Since the value of the effect
swamps the value of the reliability of the process by which the effect
was achieved, this means that reliabilism has no resources available
to it to explain why knowledge is more valuable than true belief.
It's actually not clear that this is a problem that is specific
to reliabilism. That is, it seems that if this is a *bona fide*
problem, then it will affect any account of the value of knowledge
which has the same relevant features as reliabilism--i.e., which
regards the greater value of knowledge over true belief as
instrumental value, where the instrumental value in question is
relative to the valuable good of true belief. In particular, it will
affect *veritist* proposals about epistemic value which treat
truth as the fundamental epistemic good. See Kvanvig (2003:
Ch. 3) for discussion of how internalist approaches to epistemic
justification interface with the swamping problem; see Pettigrew
(2018) and Pritchard (2019) for responses to the swamping argument on
behalf of the veritist.
Furthermore, as J. Adam Carter and Benjamin Jarvis (2012) have argued,
there are reasons to be suspicious of a key premise driving the
swamping argument. The premise in question, which has been referred to
as the "Swamping Thesis" (Pritchard 2011), states that if a property
possessed by an item is instrumentally valuable only relative to a
further good, and that good is already present in that item, then the
property can confer no additional value on the item. Carter and
Jarvis contend that one who embraces the Swamping Thesis should
also, by parity of reasoning, embrace a corollary thesis which they
call the Swamping Thesis Complement, according to which, if a property
possessed by an item is instrumentally valuable only relative to a
further good, and that good has already *failed* to be present in
that item, then the property can confer no additional value on the item.
However, as they argue, the Swamping Thesis and the Swamping Thesis
Complement, along with other plausible premises, jointly entail the
unpalatable conclusion that non-factive epistemic
properties--most notably, justification--are *never*
epistemically valuable properties of a belief. See Dutant (2013) and
Bjelde (2020) for critical responses to Carter and Jarvis' line
of reasoning and Sylvan (2018) for a separate challenge to the
swamping argument, which rejects its tacit commitment to epistemic
instrumentalism (cf. Bjelde 2020). For an overview of the key moves
of the argument, see Pritchard (2011).
However, even granting the main elements of the swamping argument,
there are moves that the reliabilist can make in response (see, e.g.,
Goldman & Olsson 2009; Olsson 2011; Bates 2013; Roush 2010; cf.
Brown 2012; Davis & Jäger 2012; Horvath 2009; Piller 2009).
For example, it is surely open to the reliabilist to argue that the
greater instrumental value of reliable true belief over mere true
belief does not need to be understood purely in terms of instrumental
value relative to the good of true belief. There could, for instance,
be all sorts of *practical* benefits of having a reliable true
belief which generate instrumental value. Indeed, it is worth noting
that the line of response to the *Meno* problem sketched by
Plato, which we noted above, seems to specifically appeal to the
greater practical instrumental value of knowledge over mere true
belief.
Moreover, there is reason to think that this objection will at best
have an impact on *process reliabilist*
proposals--i.e., those views that treat all reliable
belief-forming processes as conferring a positive epistemic standing
on the beliefs so formed. For example, *agent reliabilism*
(e.g., Greco 1999, 2000) might be thought to be untouched by this sort
of argument. This is because, according to agent reliabilism, it is
not any sort of reliable process that confers positive epistemic
status to belief, but only those processes that are stable features of
the agent's "cognitive character". The main
motivation for this restriction on reliable processes is that it
excludes certain kinds of reliable but nonetheless strange and
fleeting processes which notoriously cause problems for the view (such
as processes where the reliability is due to some quirk in the
subject's environment, rather than because of any cognitive
trait possessed by the agent herself). Plausibly, however, one might
argue that the reliable traits that make up an agent's cognitive
character have some value independently of the instrumental value they
possess in virtue of being reliable--i.e., that they have some
final or intrinsic value. If this is right, then this opens up the
possibility that agent-reliabilists can evade the problem noted for
pure reliabilists.
Zagzebski's diagnosis of what is motivating this problem for
reliabilism seems, however, explicitly to exclude
such a counter-response. She argues that what gives rise to this
difficulty is the fact that the reliabilist has signed up to a
"machine-product model of belief"--see especially
Zagzebski (2003a)--where the product is external to the cause. It
is not clear what exactly Zagzebski means by this point, but she
thinks it shows that even where the reliable process is independently
valuable--i.e., independently of its being reliable--it
still doesn't follow that the value of the cause will transfer
to add value to the effect. Here again the coffee analogy is appealed
to: even if a reliable coffee machine were independently valuable, it
would not thereby confer additional value on a good cup of coffee.
Perhaps the best way to evaluate the above line of argument is to
consider what *is* required in order to resolve the problem it
poses. Perhaps what is needed is an 'internal' connection
between product and cause, such as the kind of internal connection
that exists between an act and its motive which is highlighted by how
we explicitly evaluate actions in terms of the motives that led to
them (Zagzebski 2003a). On this picture, then, we are not to
understand knowledge as a state consisting of a known belief, but
rather as a state which consists of both the true belief *and*
the source from which that true belief was acquired. In short, then,
the problem with the machine-product model of belief is that it leads
us to evaluate the state of knowledge independently of the means
by which the knowledge was acquired. If, in contrast, we have a
conception of knowledge that incorporates into the very state of
knowledge the way that the knowledge was acquired, we can avoid this
problem.
Once one effects this transition away from the machine-product model
of belief, one can allow that the independent value of the reliable
process can ensure that knowledge, by being produced in this way, is
more valuable than mere true belief (Zagzebski 2003a). In particular,
if the process by which one gained the true belief is an epistemic
virtue--a character trait which is both reliable and
intrinsically valuable--then this can ensure that the knowing
state in this case is more valuable than any corresponding state
which simply consists of a true belief.
Other commentators in the virtue epistemology camp, broadly conceived,
have put forward similar suggestions. For example, Wayne Riggs (2002a)
and Greco (e.g., 2003) have argued for a 'credit' version
of virtue epistemology, according to which the agent, in virtue of
bringing about the positively valuable outcome of a true belief, is
due credit as a result. Rather than treating the extra value of
knowledge over true belief as deriving simply from the agent's
attainment of the target true belief, however, Riggs and Greco instead
argue that we should regard the agent's knowing as the state the
agent is in when she is responsible for her true belief. Only in so
doing, they claim, can we answer the value problem. Jason Baehr
(2012), by contrast with Riggs and Greco, has argued that credit
theories of knowledge do not answer the value problem but, rather,
'provide grounds for denying' (2012: 1) that knowledge has
value over and above the value of true belief.
Interestingly, however, other virtue epistemologists, most notably
Ernest Sosa (2003), have also advocated a 'credit' view,
yet seem to stay within the machine-product picture of belief. That
is, rather than analyze the state of knowing as consisting of both the
true belief and its source, they regard the state of knowing as
distinct from the process, yet treat the fact that the process is
intrinsically valuable as conferring additional value on any true
belief so produced. With Sosa's view in mind, it is interesting
to ask just why we need to analyze knowledge in the way that Zagzebski
and others suggest in order to get around the value problem.
The most direct way to approach this question is by considering
whether it is really true that a valuable cause cannot confer value on
its effect where cause and effect are kept separate in the way that
Zagzebski claims is problematic in the case of knowledge. One
commentator who has objected to Zagzebski's argument by querying
this claim on her part is Berit Brogaard (2007; cf. Percival 2003;
Pritchard 2007: §2), who claims that a valuable cause can indeed
confer value on its effect in the relevant cases. Brogaard claims that
virtue epistemologists like Zagzebski and Riggs endorse this claim
because they adhere to what she calls a "Moorean"
conception of value, on which if two things have the same intrinsic
properties, then they are equally valuable. Accordingly, if true
belief and knowledge have the same intrinsic properties (which is what
would be the case on the view of knowledge that they reject), it
follows that they must have the same value. Hence, it is crucial to
understand knowledge as having distinct intrinsic properties from true
belief before one can hope to resolve the value problem.
If one holds that there is only intrinsic and instrumental value, then
this conception of value is compelling, since objects with the same
intrinsic properties trivially have the same amount of intrinsic
value, and they plausibly have the same amount of instrumental
value as well (at least in the same sort of environment). However, the
Moorean conception of value is problematic because--as Wlodek
Rabinowicz & Toni Rønnow-Rasmussen (1999, 2003) have
pointed out--there seem to be objects which we value for their
own sake but whose value derives from their being extrinsically
related to something else that we value. That is, such objects are
*finally*--i.e., non-instrumentally--valuable without
thereby being intrinsically valuable. For criticism of this account of
final value, see Bradley (2002).
The standard example in this regard is Princess Diana's dress.
This would be regarded as more valuable than an exact replica simply
because it belonged to Diana, which is clearly an extrinsic property
of the object. Even though the extra value that accrues to the object
is due to its extrinsic properties, however, it is still the case that
this dress is (properly) valued for its own sake, and thus valued
non-instrumentally.
Given that value of this sort is possible, it follows that it
could well be the case that we value one true belief over another
because of its extrinsic features--i.e., that the one true
belief, but not the other, was produced by a reliable cognitive trait
that is independently valuable. For example, it could be that we value
forming a true belief via a reliable cognitive trait more than a mere
true belief because the former belief is produced in such a way that
it is of credit to us that we believe the truth. There is thus a
crucial lacuna in Zagzebski's argument.
A different response to the challenge that Zagzebski raises for
reliabilism is given by Michael Brady (2006). In defense of
reliabilism, Brady appeals to the idea that to be valuable is to be a
fitting or appropriate object of positive evaluative attitudes, such
as admiration or love (e.g., Brentano 1889 [1969]; Chisholm 1986;
Wiggins 1987; Gibbard 1990; Scanlon 1998). That one object is more
valuable than another is thus to be understood, on this view, in terms
of the fact that that object is more worthy of positive evaluation.
Thus, the value problem for reliabilism on this conception of value
comes down to the question why knowledge is more worthy of positive
evaluation on this view than mere true belief. Brady's
contention is that, at least within this axiological framework, it
*is* possible for the reliabilist to offer a compelling story
about why reliable true belief--and thus knowledge--is more
valuable than mere true belief.
Central to Brady's argument is his claim that there are many
ways one can positively evaluate something, and thus many different
ways something can be valuable. Moreover, Brady argues that we can
distinguish *active* from *passive* evaluative
attitudes, where the former class of attitudes involves pursuit of the
good in question. For example, one might actively value the truth,
where this involves, for instance, a striving to discover the truth.
In contrast, one might at other times merely passively value the
truth, such as simply respecting or contemplating it.
With this point in mind, Brady's central thesis is that on the
reliabilist account knowledge is more valuable than true belief
because certain active positive evaluative attitudes are fitting only
with regard to the former (i.e., reliable true belief). In particular,
given its intrinsic features, reliable true belief is worthy of active
love, whereas an active love of unreliable (i.e., accidental) true
belief because of its intrinsic features would be entirely
inappropriate because there is nothing that we can do to attain
unreliable true belief that wouldn't conflict with love of
truth.
This is an intriguing proposal, which opens up a possible avenue of
defense against the kind of machine-product objection to reliabilism
considered above. One problem that such a move faces, however, is that it is
unclear whether we can make sense of the distinction Brady draws
between active and passive evaluative attitudes, at least in the
epistemic sphere. When Brady talks of passive evaluative attitudes
towards the truth, he gives examples like contemplating, accepting,
embracing, affirming, and respecting. Some of these attitudes are not
clearly positive evaluative attitudes, however. Moreover, some of them
are not obviously passive either. For example, is to contemplate the
truth really to evaluate it *positively*, rather than simply to
consider it? Furthermore, in accepting, affirming or embracing the
truth, isn't one *actively* positively evaluating the
truth? Wouldn't such evaluative attitudes manifest themselves in
the kind of practical action that Brady thinks is the mark of active
evaluative attitudes? More needs to be said about this distinction
before it can do the philosophical work that Brady has in mind.
A further, albeit unorthodox, recent approach to the swamping problem
is due to Carter and Rupert (2020). Carter and Rupert point out that
extant approaches to the swamping problem suppose that if a solution
is to be found, it will be at the personal level of description, the
level at which states of subjects or agents, as such, appear. They
take exception to this orthodoxy, or at least to its unquestioned
status. They maintain that, given the empirically justified premise
that subpersonal states play a significant role in much epistemically
relevant cognition, such states constitute a domain in which we might
reasonably expect to locate the "missing source" of epistemic value,
beyond the value attached to mere true belief.
## 3. Virtue Epistemology and the Value Problem
So far this discussion has taken it as given that whatever problems
reliabilism faces in this regard, there are epistemological theories
available--some form of virtue epistemology, for
example--that can deal with them. But not everyone in the
contemporary debate accepts this. Perhaps the best known sceptic in
this respect is Jonathan Kvanvig (2003), who in effect argues that
while virtue epistemology (along with a form of epistemic internalism)
can resolve the primary value problem (i.e., the problem of explaining
why knowledge is more valuable than mere true belief), the real
challenge that we need to respond to is that set by the secondary
value problem (i.e., the problem of explaining why knowledge is more
valuable than that which falls short of knowledge); and Kvanvig says
that there is no solution available to *that*. That is, Kvanvig
argues that there is an epistemic standing--in essence, justified
true belief--which falls short of knowledge but which is no less
valuable than knowledge. He concludes that the focus of epistemology
should not be on knowledge at all, but rather on
*understanding*, an epistemic standing that Kvanvig maintains
is clearly of more value than knowledge *and* those epistemic
standings that fall short of knowledge, such as justified true
belief.
What Kvanvig says about understanding will be considered below. First
though, let us consider the specific challenge that he poses for
virtue epistemology. In essence, Kvanvig's argument rests on the
assumption that it is essential to any virtue-theoretic account of
knowledge--and any internalist account of knowledge as well, for
that matter (i.e., an account that makes a subjective justification
condition necessary for knowledge possession)--that it also
includes an anti-Gettier condition. If this is right, then it follows
that even if virtue epistemology has an answer to the primary value
problem--and Kvanvig concedes that it does--it will not
thereby have an answer to the secondary value problem since knowledge
is not simply virtuous true belief. Moreover, Kvanvig argues that once
we recognize what a gerrymandered notion a non-Gettierized account of
knowledge is, it becomes apparent that there is nothing valuable about
the anti-Gettier condition on knowledge that needs to be imposed. But
if that is right, then it follows by even virtue epistemic lights that
knowledge--i.e., non-Gettierized virtuous true believing--is
no more valuable than one of its proper subsets--i.e., mere
virtuous true believing.
There are at least two aspects of Kvanvig's argument that are
potentially problematic. To begin with, it isn't at all clear
why the anti-Gettier condition on knowledge fails to add value,
something that seems to be assumed here. More generally, Kvanvig seems
to be implicitly supposing that if an analysis of knowledge is ugly
and gerrymandered then that is itself reason to doubt that knowledge
is particularly valuable, at least assuming that there are epistemic
standings that fall short of knowledge which can be given an elegant
analysis. While a similar assumption about the relationship between
the elegance (or otherwise) of the analysis of knowledge and the value
of the analysandum is commonplace in the contemporary epistemological
literature--see, for example, Zagzebski (1999) and Williamson
(2000: chapter 1)--this assumption is contentious. For critical
discussion of this assumption, see DePaul (2009).
In any case, a more serious problem is that many virtue
epistemologists--among them Sosa (1988, 1991, 2007), Zagzebski
(e.g., 1996, 1999) and Greco (2003, 2007, 2008, 2009)--hereafter,
'robust virtue epistemologists'--think that their
view *can* deal with Gettier problems without needing to add an
additional anti-Gettier condition on knowledge. The way this is
achieved is by making the move noted above of treating knowledge as a
state that includes both the truly believing and the virtuous source
by which that true belief was acquired. However, crucially, for robust
virtue epistemologists, there is an important difference between (i) a
belief's being true and virtuously formed, and (ii) a
belief's being true *because* virtuously formed.
Formulating knowledge along the latter lines, they insist, ensures
that the target belief is not Gettiered. Even more, robust virtue
epistemologists think the latter kind of formulation offers the
resources to account for why knowledge is distinctively valuable.
To appreciate this point about value, consider the following
'performance normativity framework' which robust virtue
epistemologists explicitly or implicitly embrace when accounting for
the value of knowledge as a true belief because of virtue.
**Performance Normativity Framework**
*Dimensions of evaluation thesis*: Any performance with an aim
can be evaluated along three dimensions: (i) whether it is successful,
(ii) whether it is skillful, and (iii) whether the success is
because of the skill.
*Achievement thesis*: If and only if the success is because of
the skill, the performance is not merely successful, but also an
achievement.
*Value thesis*: Achievements are finally valuable (i.e.,
valuable for their own sake) in a way that mere lucky successes are
not.
Notice that, if knowledge is a cognitive performance that is an
*achievement*, then with reference to the above set of claims,
the robust virtue epistemologist can respond to not only the secondary
value problem but also the tertiary value problem (i.e., the problem
of explaining why knowledge is more valuable, in kind and not merely
in degree, than that which falls short of knowledge). This is because
knowledge, on this view, is simply the cognitive aspect of a more
general notion, that of achievement, and this is the case even if mere
successes that are produced by intellectual virtues, but not
*because* of them, are not achievements. (Though see Kim
2021 for a reversal of the idea that knowledge involves achievement;
according to Kim, all achievements, in any domain of endeavour, imply
knowledge.)
As regards the *value thesis*, one might object that some
successes that are because of ability--i.e., achievements, on
this view--are too trivial or easy or wicked to count as finally
valuable. This line of objection is far from decisive. After all, it
is open to the proponent of robust virtue epistemology to argue that
the claim is only that all achievements *qua* achievements are
finally valuable, not that the overall value of every achievement is
particularly high. It is thus consistent with the proposal that some
achievements have a very low--perhaps even negative, if that is
possible--value in virtue of their other properties (e.g., their
triviality). Indeed, a second option in this regard is to allow that
not all achievements enjoy final value whilst nevertheless maintaining
that it is in the nature of achievements to have such value (e.g.,
much in the way that one might argue that it is in the nature of
pleasure to be a good, even though some pleasures are bad). Since, as
noted above, all that is required to meet the (tertiary) value problem
is to show that knowledge is generally distinctively valuable, this
claim would almost certainly suffice for the robust virtue
epistemologist's purposes.
In any case, even if the value thesis is correct--and indeed,
even if the achievement and dimensions of evaluation theses are also
correct--the robust virtue epistemologist has not yet
satisfactorily vindicated any of the aforementioned value problems for
knowledge unless knowledge is itself a kind of achievement--and
that is the element of the proposal that is perhaps the most
controversial. There are two key problems with the claim that
knowledge involves cognitive achievement. The first is that there
sometimes seems to be more to knowledge than a cognitive achievement;
the second is that there sometimes seems to be less to knowledge than
a cognitive achievement.
As regards the first claim, notice that achievements seem to be
compatible with at least one kind of luck. Suppose that an archer hits
a target by employing her relevant archery abilities, but that the
success is 'gettierized' by luck intervening between the
archer's firing of the arrow and the hitting of the target. For
example, suppose that a freak gust of wind blows the arrow off-course,
but then a second freak gust of wind happens to blow it back on course
again. The archer's success is thus lucky in the sense that it
could very easily have been a failure. When it comes to
'intervening' luck of this sort, Greco's account of
achievements is able to offer a good explanation of why the success in
question does not constitute an achievement. After all, we would not
say that the success was because of the archer's ability in this
case.
Notice, however, that not all forms of luck are of this intervening
sort. Consider the following case offered by Pritchard (2010a: ch. 2).
Suppose that nothing intervenes between the archer's firing of
the arrow and the hitting of the target. However, the success is still
lucky in the relevant sense because, unbeknownst to the archer, she
just happened to fire at the only target on the range that did not
contain a forcefield which would have repelled the arrow. Is the
archer's success still an achievement? Intuition would seem to
dictate that it is; it certainly seems to be a success that is because
of ability, even despite the luckiness of that success. Achievements,
then, are, it seems, compatible with luck of this
'environmental' form even though they are not compatible
with luck of the standard 'intervening' form.
The significance of this conclusion for our purposes is that knowledge
is incompatible with *both* forms of luck. In order to see
this, one only needs to note that an epistemological analogue of the
archer case just given is the famous barn facade example (e.g.,
Ginet 1975; Goldman 1976). In this example, we have an agent who forms
a true belief that there is a barn in front of him. Moreover, his
belief is not subject to the kind of 'intervening' luck
just noted and which is a standard feature of Gettier-style cases. It
is not as if, for example, he is looking at what appears to be a barn
but which is not in fact a barn, but that his belief is true
nonetheless because there is a barn behind the barn shaped object that
he is looking at. Nevertheless, his belief is subject to environmental
luck in that he is, unbeknownst to him, in barn facade county
in which every other barn-shaped object is a barn facade. Thus,
his belief is only luckily true in that he could very easily have been
mistaken in this respect. Given that this example is structurally
equivalent to the 'archer' case just given, it seems that
just as we treat the archer as exhibiting an achievement in that case,
so we should treat this agent as exhibiting a cognitive achievement
here. The problem, however, is that until quite recently many
philosophers accepted that the agent in the barn facade case
lacks knowledge. Knowledge, it seems, is incompatible with
environmental luck in a way that achievements, and thus cognitive
achievements, are not (see, e.g., Pritchard 2012).
Robust virtue epistemologists have made a number of salient points
regarding this case. For example, Greco (2010, 2012) has argued for a
conception of what counts as a cognitive ability according to which
the agent in the barn facade case would not count as exhibiting
the relevant cognitive ability (see Pritchard 2010a: ch. 2 for a
critical discussion of this claim). Others, such as Sosa (e.g., 2007,
2015) have responded by questioning whether the agent in the barn
facade case lacks knowledge, albeit, in a qualified sense.
While Sosa's distinctive virtue epistemology allows for the
compatibility of barn facade cases with *animal
knowledge* (roughly: true belief because of ability), Sosa
maintains that the subject in barn facade cases lacks
*reflective knowledge* (roughly: a true belief whose
creditability to ability or virtue is itself creditable to a
second-order ability or virtue of the agent). Other philosophers
(e.g., Hetherington 1998) have challenged the view that barn
facade protagonists in fact lack (any kind of) knowledge. In a
series of empirical studies, most people attributed knowledge in barn
facade cases and related cases (Colaco, Buckwalter, Stich &
Machery 2014; Turri, Buckwalter & Blouw 2015; Turri 2016a). In one
study, over 80% of participants attributed knowledge (Turri 2016b). In
another study, most professional philosophers attributed knowledge
(Horvath & Wiegmann 2016). At least one theory of knowledge has
been defended on the grounds that it explains why knowledge is
intuitively present in such cases (Turri 2016c).
Even setting that issue aside, however, there is a second problem on
the horizon, which is that it seems that there are some cases of
knowledge which are not cases of cognitive achievement. One such case
is offered by Jennifer Lackey (2007), albeit to illustrate a slightly
different point. Lackey asks us to imagine someone arriving at the
train station in Chicago who, wishing to obtain directions to the
Sears Tower, approaches the first adult passer-by she sees. Suppose
the person she asks is indeed knowledgeable about the area and gives
her the directions that she requires. Intuitively, any true belief
that the agent forms on this basis would ordinarily be counted as
knowledge. Indeed, if one could not gain testimonial knowledge in this
way, then it seems that we know an awful lot less than we think we
know. However, it has been argued, in such a case the agent does not
have a true belief because of her cognitive abilities but, rather,
because of her *informant's* cognitive abilities. If this
is correct, then there are cases of knowledge which are not also cases
of cognitive achievement.
It is worth being clear about the nature of this objection. Lackey
takes cases like this to demonstrate that one can possess knowledge
without it being primarily creditable to one that one's belief
is true. Note though that this is compatible, as Lackey notes, with
granting that the agent *is* employing her cognitive abilities
to some degree, and so surely deserves *some* credit for the
truth of the belief formed (she would not have asked just anyone, for
example, nor would she have simply accepted just any answer given by
her informant). The point is thus rather that whatever credit the
agent is due for having a true belief, it is not the kind of credit
that reflects a *bona fide* cognitive achievement because of
how this cognitive success involves 'piggy-backing' on the
cognitive efforts of others.
## 4. Understanding and Epistemic Value
As noted above, the main conclusion that Kvanvig (2003) draws from his
reflections on the value problem is that the real focus in
epistemology should not be on knowledge at all but on understanding,
an epistemic standing that Kvanvig does think is especially valuable
but which, he argues, is distinct from knowing--i.e., one can
have knowledge without the corresponding understanding, and one can
have understanding without the corresponding knowledge. (Pritchard
[e.g., 2010a: chs 1-4] agrees, though his reasons for taking
this line are somewhat different to Kvanvig's). It is perhaps
this aspect of Kvanvig's book that has prompted the most
critical response, so it is worth briefly dwelling on his claims here
in a little more detail.
To begin with, one needs to get clear what Kvanvig has in mind when he
talks of understanding, since many commentators have found the
conception of understanding that he targets problematic. The two
usages of the term 'understanding' in ordinary language
that Kvanvig focuses on--and which he regards as being especially
important to epistemology--are
>
>
> when understanding is claimed for some object, such as some subject
> matter, and when it involves understanding that something is the case.
> (Kvanvig 2003: 189)
>
>
>
The first kind of understanding he calls "objectual
understanding", the second kind "propositional
understanding". In both cases, understanding requires that one
successfully grasp how one's beliefs in the relevant
propositions cohere with other propositions one believes (e.g.,
Kvanvig 2003: 192, 197-8). This requirement entails that
understanding is directly factive in the case of propositional
understanding and indirectly factive in the case of objectual
understanding--i.e., the agent needs to have at least mostly true
beliefs about the target subject matter in order to be truly said to
have objectual understanding of that subject matter.
Given that understanding--propositional understanding at any
rate--is factive, Kvanvig's argument for why understanding
is distinct from knowledge does not relate to this condition (as we
will see in a moment, it is standard to argue that understanding is
distinct from knowledge precisely because only understanding is
non-factive). Instead, Kvanvig notes two key differences between
understanding and knowledge: that understanding, unlike knowledge,
admits of degrees, and that understanding, unlike knowledge, is
compatible with epistemic luck. Most commentators, however, have
tended to focus not on these two theses concerning the different
properties of knowledge and understanding, but rather on
Kvanvig's claim that understanding is (at least indirectly)
factive.
For example, Elgin (2009; cf. Elgin 1996, 2004; Janvid 2014) and Riggs
(2009) argue that it is possible for an agent to have understanding
and yet lack true beliefs in the relevant propositions. For example,
Elgin (2009) argues that it is essential to treat scientific
understanding as non-factive. She cites a number of cases in which
science has progressed from one theory to a better theory where, we
would say, understanding has increased in the process even though the
theories are, strictly speaking at least, *false*. A different
kind of case that Elgin offers concerns scientific idealizations, such
as the ideal gas law. Scientists know full well that no actual gas
behaves in this way, yet the introduction of this useful fiction
clearly improved our understanding of the behavior of actual gasses.
For a defense of Kvanvig's view in the light of these charges,
see Kvanvig (2009a, 2009b; Carter & Gordon 2014).
A very different sort of challenge to Kvanvig's treatment of
understanding comes from Brogaard (2005, Other Internet Resources).
She argues that Kvanvig's claim that understanding is of greater
value than knowledge is only achieved because he fails to give a rich
enough account of knowledge. More specifically, Brogaard claims that
we can distinguish between objectual and propositional knowledge just
as we can distinguish between objectual and propositional
understanding. Propositional understanding, argues Brogaard, no more
requires coherence in one's beliefs than propositional
knowledge, and so the difference in value between the two cannot lie
here. Moreover, while Brogaard grants that objectual understanding
does incorporate a coherence requirement, this again fails to mark a
value-relevant distinction between knowledge and understanding because
the relevant counterpart--objectual knowledge (i.e., knowledge of
a subject matter)--also incorporates a coherence requirement. So
provided that we are consistent in our comparisons of objectual and
propositional understanding on the one hand, and objectual and
propositional knowledge on the other, Kvanvig fails to make a sound
case for thinking that understanding is of greater value than
knowledge.
Finally, a further challenge to Kvanvig's treatment of knowledge
and understanding focuses on his claims regarding epistemic luck, and
in particular, his insistence that luck cases show how understanding
and propositional knowledge come apart from one another. In order to
bring the luck-based challenge into focus, we can distinguish three
kinds of views about the relationship between understanding and
epistemic luck that are found in the literature: *strong
compatibilism* (e.g., Kvanvig 2003; Rohwer 2014), *moderate
compatibilism* (e.g., Pritchard 2010a: ch. 4) and
*incompatibilism* (e.g., Grimm 2006; Sliwa 2015). Strong
compatibilism is the view that understanding is compatible with the
varieties of epistemic luck that are generally taken to undermine
propositional knowledge. In particular, strong compatibilists maintain
that understanding is undermined neither by (i) the kind of luck that
features in traditional Gettier-style cases (Gettier 1963), nor by
(ii) purely 'environmental' luck (e.g., Pritchard 2005) of the
sort that features in 'fake barn' cases (e.g., Goldman
1976), where the fact that one's belief could easily be incorrect
is a matter of being in an inhospitable epistemic environment.
Moderate compatibilism, by contrast, maintains that while
understanding is like propositional knowledge in that it is
incompatible with the kind of luck that features in traditional
Gettier cases, it is nonetheless compatible with environmental
epistemic luck. Incompatibilism rejects that either kind of epistemic
luck case demonstrates that understanding and propositional knowledge
come apart, and so maintains that understanding is incompatible with
epistemic luck to the same extent that propositional knowledge is.
## 5. The Value of Knowledge-How
The received view in mainstream epistemology, at least since Gilbert
Ryle (e.g., 1949), has been to regard knowledge-that and knowledge-how
as different epistemic standings, such that knowing how to do
something is not simply a matter of knowing propositions, viz., of
knowledge-*that*. If this view--known as
*anti-intellectualism*--is correct, then the value of
knowledge-how needn't be accounted for in terms of the value of
knowing propositions. Furthermore, if anti-intellectualism is assumed,
then--to the extent that there is any analogous 'value
problem' for knowledge-how--such a problem needn't
materialize as the philosophical problem of determining what it is
about knowledge-how that makes it more valuable than mere true
belief.
Jason Stanley & Timothy Williamson (2001) have, however,
influentially resisted the received anti-intellectualist thinking
about knowledge-how. On Stanley & Williamson's
view--*intellectualism*--knowledge-how is a kind of
propositional knowledge, i.e., knowledge-*that*, such that
(roughly) *S* knows how to φ iff there is a way *w*
such that *S* knows that *w* is a way for *S* to
φ. Accordingly, if Hannah knows how to ride a bike, then this is
in virtue of her propositional knowledge--viz., her knowing of
some way *w* that *w* is the way for her (Hannah) to
ride a bike.
By reducing knowledge-how in this manner to a kind of
knowledge-that, intellectualists such as Stanley have accepted
that knowledge-how should have properties characteristic of
propositional knowledge (see, for example, Stanley 2011: 215), of
which knowledge-how is a kind. Furthermore, the value of knowledge-how
should be able to be accounted for, on intellectualism, with reference
to the value of the propositional knowledge that the intellectualist
identifies with knowledge-how.
In recent work, Carter and Pritchard (2015) have challenged
intellectualism on this point. One such example they offer to this end
involves testimony and skilled action. For example, suppose that a
skilled guitarist tells an amateur how to play a very tricky guitar
riff. Carter and Pritchard (2015: 801) argue that though the amateur
can uncontroversially acquire testimonial knowledge from the expert
that, for some way *w*, *w* is the way to play the
riff, it might be that the expert, but not novice, knows *how*
to play the riff. Further, they suggest that whilst the amateur is
better off, with respect to the aim of playing the riff, than he was
prior to gaining the testimonial knowledge he did, he would
be better off still--viz., he would have something even
*more* valuable--if he, like the expert, had the lick down
cold (something the amateur does not have simply on the basis of his
acquired testimonial knowledge) (*Ibid*: 801).
The conclusion Carter and Pritchard draw from this and other similar
cases (e.g., 2015: §3; see also Poston 2016) is that the value of
knowledge-how cannot be accounted for with reference to the value of
the items of knowledge-that which the intellectualist identifies with
knowledge-how. If this is right, then if there is a 'value
problem' for knowledge-how, we shouldn't expect it to be
the problem of determining what it is about certain items of
propositional knowledge that makes these more valuable than
corresponding mere true beliefs. A potential area for future research
is to consider what an analogue value problem for knowledge-how might
look like, on an anti-intellectualist framework.
According to Carter and Pritchard's diagnosis, the underlying
explanation for this difference in value is that knowledge-how (like
understanding, as discussed in
§4)
essentially involves a kind of cognitive achievement, unlike
propositional knowledge, for reasons discussed in §4. If this
diagnosis is correct, then further pressure is arguably placed on the
robust virtue epistemologist's 'achievement'
solution to the value problems for knowledge-that, as surveyed in
§3.
Recall that, according to robust virtue epistemology, the distinctive
value of knowledge-that is accounted for in terms of the value of
cognitive achievement (i.e., success because of ability) which robust
virtue epistemologists take to be essential to propositional
knowledge. But, if the presence of cognitive achievement is what
accounts for why knowledge-how has a value that is not present in the
items of knowledge-that the intellectualist identifies with
knowledge-how, this result would seem to stand in tension with the
robust virtue epistemologist's insistence that what affords
propositional knowledge a value lacked by mere true belief is that the
former essentially involves cognitive achievement.
## 6. Other Accounts of the Value of Knowledge
John Hawthorne (2004; cf. Stanley 2005; Fantl & McGrath 2002) has
argued that knowledge is valuable because of the role it plays in
practical reasoning. More specifically, Hawthorne (2004: 30) argues
for the principle that one should use a proposition *p* as a
premise in one's practical reasoning only if one knows
*p*. Hawthorne primarily motivates this line of argument by
appeal to the lottery case. This concerns an agent's true belief
that she holds the losing ticket for a fair lottery with long odds and
a large cash prize, a belief that is based solely on the fact that she
has reflected on the odds involved. Intuitively, we would say that
such an agent lacks knowledge of what she believes, even though her
belief is true and even though her justification for what she
believes--assessed in terms of the likelihood, given this
justification, of her being right--is unusually strong. Moreover,
were this agent to use this belief as a premise in her practical
reasoning, and so infer that she should throw the ticket away without
checking the lottery results in the paper for example, then we would
regard her reasoning as problematic.
Lottery cases therefore seem to show that justified true belief, no
matter how strong the degree of justification, is not enough for
acceptable practical reasoning--instead, knowledge is required.
Moreover, notice that we can alter the example slightly so that the
agent does possess knowledge while at the same time having a
*weaker* justification for what she believes (where strength of
justification is again assessed in terms of the likelihood, given this
justification, that the agent's belief is true). If the agent
had formed her true belief by reading the results in a reliable
newspaper, for example, then she would count as knowing the target
proposition and can then infer that she should throw the ticket away
without criticism. It is more likely, however, that the newspaper has
printed the result wrongly than that she should win the lottery. This
sort of consideration seems to show that knowledge, even when
accompanied by a relatively weak justification, is better (at least
when it comes to practical reasoning) than a true belief that is
supported by a relatively strong justification but does not amount to
knowledge. If this is the right way to think about the connection
between knowledge possession and practical reasoning, then it seems to
offer a potential response to at least the secondary value
problem.
A second author who thinks that our understanding of the concept of
knowledge can have important ramifications for the value of knowledge
is Edward Craig (1990). Craig's project begins with a thesis
about the value of the concept of knowledge. Simplifying somewhat,
Craig hypothesises that the concept of knowledge is important to us
because it fulfills the valuable function of enabling us to identify
reliable informants. The idea is that it is clearly of immense
practical importance to be able to recognize those from whom we can
gain true beliefs, and that it was in response to this need that the
concept of knowledge arose. As with Hawthorne's theory, this
proposal, if correct, could potentially offer a resolution of at least
the secondary value problem.
Recently, there have been additional attempts to follow--broadly
speaking--Craig's project, for which the value of knowledge
is understood in terms of the functional role that
'knowledge' plays in fulfilling our practical needs. The
matter of how to identify this functional role has received increasing
recent attention. For example, David Henderson (2009), Robin McKenna
(2013), Duncan Pritchard (2012) and Michael Hannon (2015) have
defended views about the concept of knowledge (or knowledge
ascriptions) that are broadly inspired by Craig's favored
account of the function of knowledge as identifying reliable
informants. A notable rival account, defended by Klemens Kappel
(2010), Christoph Kelp (2011, 2014) and Patrick Rysiew (2012; cf.
Kvanvig 2012) identifies *closure of inquiry* as the relevant
function. For Krista Lawlor (2013) the relevant function is identified
(*à la* Austin) as that of *providing assurance*,
and for James Beebe (2012), it's expressing epistemic
approval/disapproval.
In one sense, such accounts are in competition with one another, in
that they offer different practical explications of
'knowledge'. However, these accounts all accept
(explicitly or tacitly) a more general insight, which is that
considerations about the function that the concept of knowledge plays
in fulfilling practical needs should inform our theories of the nature
and corresponding value of knowledge. This more general point remains
controversial in contemporary metaepistemology. For some arguments
against supposing that a practical explication of
'knowledge', in terms of some need-fulfilling function,
should inform our accounts of the nature of knowledge, see for example
Gerken (2015). For a more extreme form of argument in favor of
divorcing considerations to do with how and why we use
'knows' from epistemological theorizing altogether, see
Hazlett (2010; cf. Turri 2011b).
A further and more recent practically oriented approach to the value
of knowledge is defended by Grindrod (2019), who considers
specifically the ramifications of *epistemic
contextualism* for the value of knowledge. Contextualists
maintain that knowledge attributing sentences can vary in truth value
across different contexts of utterance. This kind of position about
the semantics of knowledge attributions is often motivated by
context-shifting cases, such as DeRose's (1992) bank case,
which seem to suggest that whether a knowledge attribution is
*true* depends on the epistemic standards (as fixed by practical stakes)
of the attributor of the knowledge ascription (see entry on
Epistemic Contextualism).
Grindrod maintains that if epistemic contextualism is true, then
*epistemic value* (including whatever epistemic value might
separate knowledge from mere true belief) should be
contextualised.
## 7. Weak and Strong Conceptions of Knowledge
Laurence BonJour argues that reflecting on the value of knowledge
leads us to reject a prevailing trend in epistemology over the past
several decades, namely, fallibilism, or what BonJour calls the
"weak conception" of knowledge.
BonJour outlines four traditional assumptions about knowledge,
understood as roughly justified true belief, which he
"broadly" endorses (BonJour 2010: 58-9). First,
knowledge is a "valuable and desirable cognitive state"
indicative of "full cognitive success". Any acceptable
theory of knowledge must "make sense of" knowledge's
important value. Second, knowledge is "an all or nothing matter,
not a matter of degree". There is no such thing as degrees of
knowing: either you know or you don't. Third, epistemic
justification comes in degrees, from weak to strong. Fourth, epistemic
justification is essentially tied to "likelihood or probability
of truth", such that the strength of justification covaries with
how likely it makes the truth of the belief in question.
On this traditional approach, we are invited to think of justification
as measured by *how probable* the belief is given the reasons
or evidence you have. One convenient way to measure probability is to
use the decimals in the interval [0, 1]. A probability of 0 means that
the claim is guaranteed to be false. A probability of 1 means that the
claim is guaranteed to be true. A probability of .5 means that the
claim is just as likely to be true as it is to be false. The question
then becomes, how probable must your belief be for it to be
knowledge?
Obviously it must be greater than .5. But how much greater? Suppose we
say that knowledge requires a probability of 1--that is,
knowledge requires our justification or reasons to *guarantee*
the truth of the belief. Call such reasons *conclusive
reasons*.
*The strong conception of knowledge* says knowledge requires
conclusive reasons. We can motivate the strong conception as follows.
If the aim of belief is truth, then it makes sense that knowledge
would require conclusive reasons, because conclusive reasons guarantee
that belief's aim is achieved. The three components of the
traditional view of knowledge thus fit together
"cohesively" to explain why knowledge is valued as a state
of full cognitive success.
But all is not well with the strong conception, or so philosophers
have claimed over the past several decades. The strong conception
seems to entail that we know nearly nothing at all about the material
world outside of our own minds or about the past. For we could have
had all the reasons we do in fact have, even if the world around us or
the past had been different. (Think of Descartes's evil genius.)
This conflicts with commonsense and counts against the strong
conception. But what is the alternative?
The alternative is that knowledge requires reasons that make the
belief very likely true, but needn't guarantee it. This is the
*weak conception of knowledge*. Most epistemologists accept the
weak conception of knowledge. But BonJour asks a challenging question:
what is the "magic" level of probability required by
knowledge? BonJour then argues that a satisfactory answer to this
question isn't forthcoming. For any point short of 1 would seem
*arbitrary*. Why should we pick that point exactly? The same
could be said for a vague range that includes points short of
1--why, exactly, should the vague range extend roughly *that
far* but not further? This leads to an even deeper problem for the
weak conception. It brings into doubt the value of knowledge. Can
knowledge really be valuable if it is arbitrarily defined?
A closely related problem for the weak conception presents itself.
Suppose for the sake of argument that we settle on .9 as the required
level of probability. Suppose further that you believe *Q* and
you believe *R*, that *Q* and *R* are both true,
and that you have reached the .9 threshold for each. Thus the weak
conception entails that you know *Q*, and you know *R*.
Intuitively, if you know *Q* and you also know *R*, then
you're automatically in a position to know the conjunction
*Q* & *R*. But the weak conception cannot sustain
this judgment. For the probability of the conjunction of two
independent claims, such as *Q* and *R*, equals the
product of their probabilities. (This is the special conjunction rule
from probability theory.) In this case, the probability of *Q*
= .9 and the probability of *R* = .9. So the probability of the
conjunction (*Q* & *R*) = .9 x .9 = .81, which
falls short of the required .9. So the weak conception of knowledge
along with a law of probability entail that you're automatically
not in a position to know the conjunction (*Q* &
*R*). BonJour considers this to be "an intuitively
unacceptable result", because after all,
>
>
> what is the supposed state of knowledge really worth, if even the
> simplest inference from two pieces of knowledge [might] not lead to
> further knowledge? (BonJour 2010: 63)
>
>
>
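BonJour's arithmetic can be made explicit in a short sketch. The .9 threshold is purely illustrative, stipulated for the sake of argument just as in the text, and exact fractions are used so that the comparison with the threshold involves no rounding:

```python
from fractions import Fraction

# Illustrative threshold stipulated for the weak conception of knowledge.
THRESHOLD = Fraction(9, 10)

def known(prob: Fraction) -> bool:
    """A belief meets the weak conception's probabilistic condition
    when its probability reaches the (stipulated) threshold."""
    return prob >= THRESHOLD

p_q = Fraction(9, 10)  # probability of Q
p_r = Fraction(9, 10)  # probability of R

# Special conjunction rule: for independent claims, the probability of
# the conjunction is the product of the individual probabilities.
p_q_and_r = p_q * p_r  # 81/100

print(known(p_q))        # True
print(known(p_r))        # True
print(known(p_q_and_r))  # False: 81/100 falls short of 9/10
```

Exact fractions are a design choice rather than a necessity here: with floating-point numbers the product of .9 and .9 is only approximately .81, but the conjunction would still fall below the threshold either way.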
BonJour concludes that the weak conception fails to explain the value
of knowledge, and thus that the strong conception must be true. He
recognizes that this implies that we don't know most of the
things we ordinarily say and think that we know. He explains this
away, however, partly on grounds that knowledge is the norm of
practical reasoning, which creates strong "practical
pressure" to confabulate or exaggerate in claiming to know
things, so that we can view ourselves as reasoning and acting
appropriately, even though usually the best we can do is to
*approximate* appropriate action and reasoning. (BonJour 2010:
75).
## 8. The Value of True Belief
So far, in common with most of the contemporary literature in this
regard, we have tended to focus on the value of knowledge relative to
other epistemic standings. A related debate in this respect,
however--one that has often taken place largely in tandem with
the mainstream debate on the value of knowledge--has specifically
concerned itself with the value of true belief and we will turn now to
this issue.
Few commentators treat truth or belief as being by themselves valuable
(though see Kvanvig 2003: ch. 1), but it is common to treat true
belief as valuable, at least instrumentally. True beliefs are clearly
often of great practical use to us. The crucial *caveat* here,
of course, concerns the use of the word 'often'. After
all, it is also often the case that a true belief might actually
militate against one achieving one's goals, as when one is
unable to summon the courage to jump a ravine and thereby get to
safety, because one knows that there is a serious
possibility that one might fail to reach the other side. In such cases
it seems that a false belief in one's abilities--e.g., the
false belief that one could easily jump the ravine--would be
better than a true belief, if the goal in question (jumping the
ravine) is to be achieved.
Moreover, some true beliefs are beliefs in trivial matters, and in
these cases it isn't at all clear why we should value such
beliefs at all. Imagine someone who, for no good reason, concerns
herself with measuring each grain of sand on a beach, or someone who,
even while being unable to operate a telephone, concerns herself with
remembering every entry in a foreign phone book. Such a person would
thereby gain lots of true beliefs but, crucially, one would regard
such truth-gaining activity as rather pointless. After all, these true
beliefs do not seem to serve any valuable purpose, and so do not
appear to have any instrumental value (or, at the very least, what
instrumental value these beliefs have is vanishingly small). It would,
perhaps, be better--and thus of greater value--to have fewer
true beliefs, and possibly more false ones, if this meant that the
true beliefs that one had concerned matters of real consequence.
At most, then, we can say that true beliefs often have instrumental
value. What about final (or intrinsic) value? One might think that if
the general instrumental value of true belief were moot then so too
would be the intuitively stronger thesis that true belief is generally
finally valuable. Nevertheless, many have argued for such a claim.
One consideration that seems to speak in favor of this thesis is that as
truth seekers we are naturally curious about what the truth is, even
when that truth is of no obvious practical import. Accordingly, it
could be argued that from a purely epistemic point of view, we do
regard all true belief as valuable for its own sake, regardless of
what further prudential goals we might have (e.g., Goldman 1999: 3;
Lynch 2004: 15-16; Alston 2005: 31; Pritchard 2019; cf. Baehr
2012: 5). Curiosity will only take you so far in this regard, however,
since we are only curious about certain truths, not all of them. To
return to the examples given a moment ago, no fully rational agent is
curious about the measurements of every grain of sand on a given
beach, or the name of every person in a random phone book--i.e.,
no rational person wants to know these truths independently of having
some prudential reason for knowing them.
Still, one could argue for a weaker claim and merely say that it is
*prima facie* or *pro tanto* finally good to believe the
truth (cf. David 2005; Lynch 2009), where cases of trivial truths such
as those just given are simply cases where, *all things
considered*, it is not good to believe the truth. After all, we
are familiar with the fact that something can be *prima facie*
or *pro tanto* finally good without being all-things-considered
good. For example, it may be finally good to help the poor and needy,
but not all-things-considered good given that helping the poor and
needy would prevent you from doing something else which is at present
more important (such as saving that child from drowning).
At this point one might wonder why it matters so much to (some)
epistemologists that true belief is finally valuable. Why not instead
just treat true belief as often of instrumental value and leave the
matter at that? The answer to this question lies in the fact that many
want to regard truth--and thereby true belief--as being the
fundamental epistemic goal, in the sense that ultimately it is only
truth that is epistemically valuable (so, for example, while
justification is epistemically valuable, it is only epistemically
valuable because of how it is a guide to truth). Accordingly, if true
belief is not finally valuable--and only typically instrumentally
valuable--then this seems to downplay the status of the
epistemological project.
There are a range of options here. The conservative option is to
contend that truth is the fundamental goal of epistemology and also
contend that true belief is finally valuable--at least in some
restricted fashion. Marian David (2001, 2005) falls into this
category. In contrast, one might argue that truth is the fundamental
goal while at the same time claiming that true belief is *not*
finally valuable. Sosa (see especially 2004, but also 2000a, 2003)
seems (almost) to fall into this camp, since he claims that while
truth is the fundamental epistemic value, we can accommodate this
thought without having to thereby concede that true belief is finally
valuable, a point that has been made in a similar fashion by Alan
Millar (2011: §3). Sosa often compares the epistemic domain to
other domains of evaluation where the fundamental good of that domain
is not finally valuable. So, for example, the fundamental goal of the
'coffee-production' domain may be great tasting coffee,
but no-one is going to argue that great tasting coffee is finally
valuable. Perhaps the epistemic domain is in this respect like the
coffee-production domain?
Another line of response against the thesis that true belief is
finally valuable is to suggest that this thesis leads to a
*reductio*. Michael DePaul (2001) has notably advanced such an
argument. According to DePaul, the thesis that true belief is finally
valuable implies that all true beliefs are equally epistemically
valuable. This latter claim, DePaul argues, is false, as is
illustrated by cases where two sets each containing an equal number of
true beliefs intuitively differ in epistemic value. Ahlstrom-Vij and
Grimm (2013) have criticized DePaul's claim that the thesis that
true belief is finally valuable implies that two sets each containing
an equal number of true beliefs must not differ in epistemic value.
Additionally, Nick Treanor (2014) has criticized the argument for a
different reason, which is that (*contra* DePaul) there is no
clear example of two sets which contain the same number of true
beliefs. More recently, Xingming Hu (2017) has defended the final
value of true belief against DePaul's argument, though Hu argues
further that neither Ahlstrom-Vij and Grimm's (2013) nor
Treanor's (2014) critique of DePaul's argument is
compelling.
Another axis on which the debate about the value of true belief can be
configured is in terms of whether one opts for an epistemic-value
monism or an epistemic-value pluralism--that is, whether one
thinks there is only one fundamental epistemic goal, or several.
Kvanvig (e.g., 2005) endorses epistemic-value pluralism, since he
thinks that there are a number of fundamental epistemic goals, with
each of them being of final value. Crucial to Kvanvig's argument
is that there are some epistemic goals which are not obviously
truth-related--he cites the examples of having an empirically
adequate theory, making sense of the course of one's experience,
and inquiring responsibly. More recently, Brent Madison (2017) has
argued, by appealing to a new evil demon thought experiment, that
epistemic justification itself should be included in such a list. This
is important because if the range of goals identified were all
truth-related, then it would prompt the natural response that such
goals are valuable only because of their connection to the truth, and
hence not fundamental epistemic goals at all.
Presumably, though, it ought also to be possible to make a case for an
epistemic-value pluralism where the fundamental epistemic goals were
not finally valuable (or, at least, *à la* Sosa, where
one avoided taking a stance on this issue). More precisely, if an
epistemic-value monism that does not regard the fundamental epistemic
goal as finally valuable can be made palatable, then there seems no
clear reason why a parallel view that opted for pluralism in this
regard could not similarly be given a plausible supporting story.
## 9. The Value of Extended Knowledge
In his essay, "*Meno* in a Digital World", Pascal
Engel (2016) asks whether the original value problem applies to
the kind of knowledge or pseudo-knowledge that we get from the
internet (2016: 1). One might initially think that internet and/or
digitally acquired knowledge raises no new issues for the value
problem. On this line of thought, if digitally acquired knowledge
(e.g., Googled information, information stored in iPhone apps, etc.) is
*genuine* knowledge, then whatever goes for knowledge more
generally, *vis-à-vis* the value problems surveyed in
§§1-2, thereby goes for knowledge acquired from our
gadgets.
However, recent work at the intersection of epistemology and the
philosophy of mind suggests there are potentially some new and
epistemologically interesting philosophical problems associated with
the value of technology-assisted knowledge. These problems correspond
with two ways of conceiving of knowledge as *extending* beyond
traditional, intracranial boundaries (e.g., Pritchard 2018). In
particular, the kinds of 'extended knowledge' which have
potential import for the value of knowledge debate correspond with the
*extended mind thesis* (for discussion on how this thesis
interfaces with the hypothesis of extended cognition, see Carter,
Kallestrup, Pritchard, & Palermos 2014) and cases involving what
Michael Lynch (2016) calls 'neuromedia' intelligence
augmentation.
According to the *extended mind thesis* (EMT), mental states
(e.g., beliefs) can supervene in part on extra-organismic elements of
the world, such as laptops, phones and notebooks, that are typically
regarded as 'external' to our minds. This thesis, defended
most notably by Andy Clark and David Chalmers (1998), should not be
conflated with the comparatively weaker and less controversial thesis of
*content externalism* (e.g., Putnam 1975; Burge 1986),
according to which the meaning or content of mental states can be
fixed by extra-organismic features of our physical or
social-linguistic environments.
What the proponent of EMT submits is that mental states
*themselves* can partly supervene on extracranial artifacts
(e.g., notebooks, iPhones) provided these extracranial artifacts play
the kinds of functional roles normally played by on-board, biological
cognitive processes. For example, to borrow an (adapted) case from
Clark and Chalmers (1998), suppose an Alzheimer's patient,
'Otto', begins to outsource the task of memory storage and
retrieval to his iPhone, having appreciated that his biological memory
is failing. Accordingly, when Otto acquires new information, he
automatically records it in his phone's 'memory
app', and when he needs old information, he (also, automatically
and seamlessly) opens his memory app and looks it up. The iPhone comes
to play for Otto the functionally isomorphic role that biological
memory used to play for him *vis-à-vis* the process of
memory storage and retrieval. Just as we attribute to normally
functioning agents knowledge in virtue of their (non-occurrent)
dispositional beliefs stored in biological memory (for example, five
minutes ago, you knew that Paris is the capital of France), so, with
EMT in play, we should be prepared to attribute knowledge to Otto in
virtue of the 'extended' (dispositional) beliefs which are
stored in his phone, provided Otto is as epistemically diligent in
encoding and retrieving information as he was before (e.g., Pritchard
2010b).
The import EMT has for the value of knowledge debate now takes shape:
whatever epistemically valuable properties (if any) are distinctively
possessed by knowledge, they must be properties that obtain in
Otto's case so as to add value to what would otherwise be
*mere true* (dispositional) *beliefs* that are stored,
extracranially, in Otto's iPhone. But it is initially puzzling
just why, and how, this should be. After all, even if we accept the
intuition that the epistemic value of traditional (intracranial)
knowledge exceeds the value of corresponding true opinion, it is, as
Engel (2016), Lynch (2016) and Carter (2017) have noted, at best not
clear that this comparative intuition holds in the extended case,
where knowledge is possessed simply by virtue of information
persisting in digital storage.
For example, consider again Plato's solution to the value
problem canvassed in
§1:
knowledge, unlike true belief, must be 'tied-down' to the
truth. Mere true belief is more likely to be lost, which makes it less
valuable than knowledge. One potential worry is that extended
knowledge, as per EMT--literally, oftentimes, knowledge stored
in the cloud--is by its very nature no more 'tethered',
or for that matter even tetherable, than corresponding items
of accurate information which fall short of knowledge. Nor
arguably does this sort of knowledge in the cloud clearly have the
kind of 'stability' that Olsson (2009) claims is what
distinguishes knowledge from true opinion (cf., Walker 2019). Perhaps
even less does it appear to constitute a valuable cognitive
'achievement', as per robust virtue epistemologists such
as Greco and Sosa.
EMT is of course highly controversial (see, for example, Adams &
Aizawa 2008), and so one way to sidestep the implications for the
value of knowledge debate posed by the possibility of knowledge that
is extended *via* extended beliefs is simply to resist EMT as
a thesis about the metaphysics of mind.
However, there are other ways in which technology-assisted
knowledge could have import for the traditional value problems. In
recent work, Michael P. Lynch (2016) argues that, given the increase
in cognitive offloading coupled with evermore subtle and physically
smaller intelligence-augmentation technologies (e.g., Bostrom &
Sandberg 2009), it is just a matter of time before the majority of the
gadgetry we use for cognitive tasks will be by and large seamless and
'invisible'. Lynch suggests that while coming to know via
such mechanisms can make knowledge acquisition much easier, there are
epistemic drawbacks. He offers the following thought experiment:
>
> NEUROMEDIA: Imagine a society where smartphones are miniaturized and
> hooked directly into a person's brain. With a single mental
> command, those who have this technology--let's call it
> neuromedia--can access information on any subject [...] Now
> imagine that an environmental disaster strikes our invented society
> after several generations have enjoyed the fruits of neuromedia. The
> electronic communication grid that allows neuromedia to function is
> destroyed. Suddenly no one can access the shared cloud of information
> by thought alone. [...] for the inhabitants of this society,
> losing neuromedia is an immensely unsettling experience; it's
> like a normally sighted person going blind. They have lost a way of
> accessing information on which they've come to rely [...]
> Just as overreliance on one sense can weaken the others, so
> overdependence on neuromedia might atrophy the ability to access
> information in other ways, ways that are less easy and require more
> creative effort. (Lynch 2016: 1-6)
>
One conclusion Lynch has drawn from such thought experiments is that
understanding has a value that mere knowledge lacks, a position
we've seen has been embraced for different reasons in
§4
by Kvanvig and others. A further conclusion, advanced by Pritchard
(2013) and Carter (2017), concerns the extent to which the acquisition
of knowledge involves 'epistemic dependence'--viz.,
dependence on factors outwith one's cognitive agency. They argue
that the greater the scope of epistemic dependence, the more valuable
it becomes to cultivate virtues like intellectual autonomy that
regulate the appropriate reliance and outsourcing (e.g., on other
individuals, technology, medicine, etc.) while at the same time
maintaining one's intellectual self-direction.
## 1. Some Preliminary Clarifications
### 1.1 Foundational and Non-foundational Pluralism
It is important to clarify the levels at which a moral theory might be
pluralistic. Let us distinguish between two levels of pluralism:
foundational and non-foundational. Foundational pluralism is the view
that there are plural moral values at the most basic level--that
is to say, there is no one value that subsumes all other values, no
one property of goodness, and no overarching principle of action.
Non-foundational pluralism is the view that there are plural values at
the level of choice, but these apparently plural values can be
understood in terms of their contribution to one more fundamental
value.[3]
Judith Jarvis Thomson, a foundational pluralist, argues that when we
say that something is good we are never ascribing a property of
goodness, rather we are always saying that the thing in question is
good in some way. If we say that a fountain pen is good we mean
something different from when we say that a logic book is good, or a
film is good. As Thomson puts it, all goodness is a goodness in a way.
Thomson focusses her argument on Moore, who argues that when we say
'*x* is good' we do not mean '*x* is
conducive to pleasure', or '*x* is in accordance
with a given set of rules' and nor do we mean anything else that
is purely descriptive. As Moore points out, we can always query
whether any purely descriptive property really is good--so he
concludes that goodness is simple and
unanalyzable.[4]
Moore is thus a foundational monist: he thinks that there is one
non-natural property of goodness, and that all good things are good in
virtue of having this property. Thomson finds this preposterous. In
Thomson's own words:
>
> Moore says that the question he will be addressing himself to in what
> follows is the question 'What is good?', and he rightly
> thinks that we are going to need a bit of help in seeing exactly what
> question he is expressing in those words. He proposes to help us by
> drawing attention to a possible answer to the question he is
> expressing--that is, to something that would be an answer to it,
> whether or not it is the correct answer to it. Here is what he offers
> us: "Books are good." Books are good? What would you mean
> if you said 'Books are good'? Moore, however, goes
> placidly on: "though [that would be] an answer obviously false;
> for some books are very bad indeed". Well some books are bad to
> read or to look at, some are bad for use in teaching philosophy, some
> are bad for children. What sense could be made of a person who said,
> "No. no. I meant that some books are just plain bad
> things"? (Thomson 1997, pp. 275-276)
>
According to Thomson there is a fundamental plurality of ways of being
good. We cannot reduce them to something they all have in common, or
sensibly claim that there is a disjunctive property of goodness (such
that goodness is 'goodness in one of the various ways').
Thomson argues that that could not be an interesting property, as each
disjunct is truly different from every other disjunct (Thomson 1997,
p. 277). Thomson is thus a foundational pluralist--she does not
think that there is any one property of value at the most basic
level.
W.D. Ross is a foundational pluralist in a rather complex way. Most
straightforwardly, Ross thinks that there are several prima facie
duties, and there is nothing that they all have in common: they are
irreducibly plural. This is the aspect of Ross's view that is
referred to with the phrase, 'Ross-style pluralism'.
However, Ross also thinks that there are goods in the world (justice
and pleasure, for example), and that these are good because of some
property they share. Goodness and rightness are not reducible to one
another, so Ross is a pluralist about types of value as well as about
principles.
Writers do not always make the distinction between foundational and
other forms of pluralism, but as well as Thomson and Ross, at least
Bernard Williams (1981), Charles Taylor (1982), Stuart Hampshire
(1983), Charles Larmore (1987), John Kekes (1993), Michael Stocker
(1990 and 1997), David Wiggins (1997), Christine Swanton (2001), and
Jonathan Dancy (2004) are all committed to foundational pluralism.
Non-foundational pluralism is less radical--it posits a plurality
of bearers of value. In fact, almost everyone accepts that there are
plural bearers of value. This is compatible with thinking that there
is only one ultimate value. G.E. Moore (1903), Thomson's target,
is a foundational monist, but he accepts that there are
non-foundational plural values. Moore thinks that there are many
different bearers of value, but he thinks that there is one property
of goodness, and that it is a simple non-natural property that bearers
of value possess in varying degrees. Moore is clear that comparison
between plural goods proceeds in terms of the amount of goodness they
have.
This is not to say that the amount of goodness is always a matter of
simple addition. Moore thinks that there can be organic unities, where
the amount of goodness contributed by a certain value will vary
according to the combination of values such as love and friendship.
Thus Moore's view is pluralist at the level of ordinary choices,
and that is not without interesting consequences. (We return to the
issue of how a foundational monist like Moore can account for organic
unities in section 3.)
Mill, a classic utilitarian, could be and often has been interpreted
as thinking that there are irreducibly different sorts of pleasure.
Mill argues that there are higher and lower pleasures, and that the
higher pleasures (pleasures of the intellect as opposed to the body)
are superior, in that higher pleasures can outweigh lower pleasures
regardless of the quantity of the latter. As Mill puts it: "It
is quite compatible with the principle of utility to recognize the
fact, that some kinds of pleasure are more desirable and more valuable
than others." (2002, p. 241). On the foundational pluralist
interpretation of Mill, there is not one ultimate good, but two (at
least): higher and lower pleasures. Mill goes on to give an account of
what he means:
>
> If I am asked, what I mean by difference in quality in pleasures, or
> what makes one pleasure more valuable than another, merely as a
> pleasure, except its being greater in amount, there is but one
> possible answer. Of two pleasures, if there be one to which all or
> almost all who have experience of both give a decided preference,
> irrespective of any feeling of moral obligation to prefer it, that is
> the more desirable pleasure. (2002, p. 241).
>
The passage is ambiguous: it is not clear what role the expert judges
play in the theory. On the pluralist interpretation of this passage we
must take Mill as intending the role of the expert judges as a purely
heuristic device: thinking about what such people would prefer is a
way of discovering which pleasures are higher and which are lower, but
the respective values of the pleasures are independent of the
judges' judgment. On a monist interpretation we must understand
Mill as a preference utilitarian: the preferences of the judges
determine value. On this interpretation there is one property of value
(being preferred by expert judges) and many bearers of value (whatever
the judges
prefer).[5]
Before moving on, it is worth noting that a theory might be
foundationally monist in its account of what values there are, but not
recommend that people attempt to think or make decisions on the basis
of the supervalue. A distinction between decision procedures and
criteria of right has become commonplace in moral philosophy. For
example, a certain form of consequentialism has as its criterion of
right action: act so as to maximize good consequences. This might
invite the complaint that an agent who is constantly trying to
maximize good consequences will often, in virtue of that fact, fail to
do so. Sometimes concentrating too hard on the goal will make it less
likely that the goal is achieved. A distinction between decision
procedure and right action can provide a response--the
consequentialist can say that the criterion of right action (act so
as to maximize good consequences) is not intended as a decision
procedure--the agent should use whichever decision procedure is
most likely to result in success. If, then, there is some attraction
or instrumental advantage from the point of view of a particular
theory to thinking in pluralist terms, then it is open to that theory
to have a decision procedure that deals with apparently plural values,
even if the theory is monist in every other way.
[6]
### 1.2 A Purely Verbal Dispute?
One final clarification about different understandings of pluralism
ought to be made. There is an ambiguity between the name for a group
of values and the name for one unitary value. There are really two
problems here: distinguishing between the terms that refer to groups
and the terms that refer to individuals (a merely linguistic problem)
and defending the view that there really is a candidate for a unitary
value (a metaphysical problem). The linguistic problem comes about
because in natural language we may use a singular term as
'shorthand': conceptual analysis may reveal that surface
grammar does not reflect the real nature of the concept. For example,
we use the term 'well-being' as if it refers to one single
thing, but it is not hard to see that it may not.
'Well-being' may be a term that we use to refer to a group
of things such as pleasure, health, a sense of achievement and so on.
A theory that tells us that well-being is the only value may only be
nominally monist. The metaphysical question is more difficult, and
concerns whether there are any genuinely unitary values at all.
The metaphysical question is rather different for naturalist and
non-naturalist accounts of value. On Moore's non-naturalist
account, goodness is a unitary property but it is not a natural
property: it is not empirically available to us, but is known by a
special faculty of intuition. It is very clear that Moore thinks that
goodness is a genuinely unitary property:
>
> 'Good', then, if we mean by it that quality which we
> assert to belong to a thing, when we say that the thing is good, is
> incapable of any definition, in the most important sense of that word.
> The most important sense of 'definition' is that in which
> a definition states what are the parts which invariably compose a
> certain whole; and in this sense 'good' has no definition
> because it is simple and has no parts. (Moore, 1903, p. 9)
>
The question of whether there could be such a thing is no easier or
more difficult than any question about the existence of non-natural
entities. The issue of whether the entity is genuinely unitary is not
an especially difficult part of that issue.
By contrast, naturalist views do face a particular difficulty in
giving an account of a value that is genuinely unitary. On the goods
approach, for example, the claim must be that there is one good that
is genuinely singular, not a composite of other goods. So for example,
a monist hedonist must claim that pleasure really is just one thing.
Pleasure is a concept we use to refer to something we take to be in
the natural world, and conceptual analysis may or may not confirm that
pleasure really is one thing. Perhaps, for example, we refer both to
intellectual and sensual experiences as pleasure. Or, take another
good often suggested by proponents of the goods approach to value,
friendship. It seems highly unlikely that there is one thing that we
call friendship, even if there are good reasons to use one umbrella
concept to refer to all those different things. Many of the plausible
candidates for the good seem plausible precisely because they are very
broad terms. If a theory is to be properly monist, then, it must have
an account of the good that is satisfactorily unitary.
The problem applies to the deontological approach to value too. It is
often relatively easy to determine whether a principle is really two
or more principles in disguise--the presence of a conjunction or
a disjunction, for example, is a clear giveaway. However, principles
can contain terms that are unclear. Take for example a deontological
theory that tells us to respect friendship. As mentioned previously,
it is not clear whether there is one thing that is friendship or more
than one, so it is not clear whether this is one principle about one
thing, or one principle about several things, or whether it is really
more than one principle.
Questions about what makes individuals individuals and what the
relationship is between parts and wholes have been discussed in the
context of metaphysics but these issues have not been much discussed
in the literature on pluralism and monism in moral philosophy.
However, these issues are implicit in discussions of the nature of
well-being, friendship and pleasure, and in the literature on
Kant's categorical imperative, or on Aristotelian accounts of
eudaimonia. Part of an investigation into the nature of these things
is an investigation into whether there really is one thing or not.
[7]
The upshot of this brief discussion is that monists must be able to
defend their claim that the value they cite is genuinely one value.
There may be fewer monist theories than it first appears. Further, the
monist must accept the implications of a genuinely monist view. As
Ruth Chang points out, (2015, p. 24) the simpler the monist's
account of the good is, the less likely it is that the monist will be
able to give a good account of the various complexities in choice that
seem an inevitable part of our experience of value. But on the other
hand, if the monist starts to admit that the good is complex, the view
gets closer and closer to being a pluralist view.
However, the dispute between monists and pluralists is not merely
verbal: there is no prima facie reason to think that there are no
genuinely unitary properties, goods or principles.
## 2. The Attraction of Pluralism
If values are plural, then choices between them will be complex.
Pluralists have pressed the point that choices are complex, and so we
should not shy away from the hypothesis that values are plural. In
brief, the attraction of pluralism is that it seems to allow for the
complexity and conflict that is part of our moral experience. We do
not experience our moral choices as simple additive puzzles.
Pluralists have argued that there are incommensurabilities and
discontinuities in value comparisons, value remainders (or residues)
when choices are made, and complexities in appropriate responses to
value. Recent empirical work confirms that our ethical experience is
of apparently irreducibly plural values (see Gill and Nichols
2008).
### 2.1 Discontinuities
John Stuart Mill suggested that there are higher and lower pleasures
(Mill 2002, p. 241), the idea being that the value of higher and lower
pleasures is measured on different scales. In other words, there are
discontinuities in the measurement of value. As mentioned previously,
it is unclear whether we should interpret Mill as a foundational
pluralist, but the notion of higher and lower pleasures is a very
useful one to illustrate the attraction of thinking that there are
discontinuities in value. The distinction between higher and lower
pleasures allows us to say that no amount of lower pleasures can
outweigh some amount of higher pleasures. As Mill puts it, it is
better to be an unhappy human being than a happy pig. In other words,
the distinction allows us to say that there are discontinuities in
value addition. As James Griffin (1986, p. 87) puts it: "We do
seem, when informed, to rank a certain amount of life at a very high
level above any amount of life at a very low level."
Griffin's point is that there are discontinuities in the way we
rank values, and this suggests that there are different
values.[8]
The phenomenon of discontinuities in our value rankings seems to
support pluralism: if higher pleasures are not outweighed by lower
pleasures, that suggests that they are not the same sort of thing. For
if they were just the same sort of thing, there seems to be no reason
why lower pleasures would not eventually outweigh higher pleasures.
Ruth Chang argues that values form organic unities, which is to say
that adding more of one value to a situation may not make it better
overall; the balance of values in the situation is also important. For
example, a life where extra pleasure is added, and all other value
contributions left untouched, may not be better overall, because now
the pleasure is unbalanced (Chang 2002). As Chang puts it, the Pareto
principle of improvement (if something is better along one dimension
and at least as good on all the others it will be better overall) does
not hold here.
The most extreme form of discontinuity is incommensurability or
incomparability, when two values cannot be ranked at all. Pluralists
differ on whether pluralism entails incommensurabilities, and on what
incommensurability entails for the possibility of choice. Griffin
denies that pluralism entails incommensurability (Griffin uses the
term incomparability) whereas other pluralists embrace
incommensurability, but deny that it entails that rational choice is
impossible. Some pluralists accept that there are sometimes cases
where incommensurability precludes rational choice. We shall return to
these issues in Section 4.
### 2.2 Value Conflicts and Rational Regret
Michael Stocker (1990), Bernard Williams (1973 and 1981), and others
have argued that it can be rational to regret the outcome of a correct
moral choice. That is, even when the right choice has been made, the
rejected option can reasonably be regretted, and so the choice
involves a genuine value conflict. This seems strange if the options
are being compared in terms of a supervalue. How can we regret having
chosen more rather than less of the same thing? Yet the phenomenon
seems undeniable, and pluralism can explain it. If there are plural
values, then one can rationally regret not having chosen something
which, though less good, was different.
It is worth noting that the pluralist argument is not that all cases
of value conflict point to pluralism. There may be conflicts because
of ignorance, for example, or because of irrationality, and these do
not require positing plural values. Stocker argues that there are (at
least) two sorts of value conflict that require plural values. The
first is conflict that involves choices between doing things at
different times. Stocker argues that goods become different values in
different temporal situations, and the monist cannot accommodate this
thought. The other sort of case (which Williams also points to) is
when there is a conflict between things that have different advantages
and disadvantages. The better option may be better, but it does not
'make up for' the lesser option, because it isn't
the same sort of thing. Thus there is a remainder--a moral value
that is lost in the choice, and that it is rational to regret.
Both Martha Nussbaum (1986) and David Wiggins (1980) have argued for
pluralism on the grounds that only pluralism can explain akrasia, or
weakness of will. An agent is said to suffer from weakness of will
when she knowingly chooses a less good option over a better one. On
the face of it, this is a puzzling thing to do--why would someone
knowingly do what they know to be worse? A pluralist has a plausible
answer--when the choice is between two different sorts of value,
the agent is preferring A to B, rather than preferring less of A to
more of A. Wiggins explains the akratic choice by suggesting that the
agent is 'charmed' by some aspect of the choice, and is
swayed by that to choose what she knows to be worse overall (Wiggins
1980, p. 257). However, even Michael Stocker, the arch pluralist, does
not accept that this argument works. As Stocker points out, Wiggins is
using a distinction between a cognitive and an affective element to
the choice, and this distinction can explain akrasia on a monist
account of value too. Imagine that a monist hedonist agent is faced
with a choice between something that will give her more pleasure and
something that will give her less pleasure. The cognitive aspect to
the choice is clear--the agent knows that one option is more
pleasurable than the other, and hence on her theory better. However,
to say that the agent believes that more pleasure is better is not to
say that she will always be attracted to the option that is most
pleasurable. She may, on occasion, be attracted to the option that is
more unusual or interesting. Hence she may act akratically because she
was charmed by some aspect of the less good choice--and as
Stocker says, there is no need to posit plural values to make sense of
this--being charmed is not the same as valuing (Stocker 1990,
p. 219).
### 2.3 Appropriate Responses to Value
Another argument for pluralism starts from the observation that there
are many and diverse appropriate responses to value. Christine Swanton
(2003, ch. 2) and Elizabeth Anderson (1993) both take this line. As
Swanton puts it:
>
> According to value centered monism, the rightness of moral
> responsiveness is determined entirely by degree or strength of
> value...I shall argue, on the contrary, that just how things are
> to be pursued, nurtured, respected, loved, preserved, protected, and
> so forth may often depend on further general features of those things,
> and their relations to other things, particularly the moral agent.
> (Swanton 2003, p. 41).
>
The crucial thought is that there are various bases of moral
responsiveness, and these bases are irreducibly plural. A monist could
argue that there are different appropriate responses to value, but the
monist would have to explain why there are different appropriate
responses to the same value. Swanton's point is that the only
explanation the monist has is that different degrees of value merit
different responses. According to Swanton, this does not capture what
is really going on when we appropriately honor or respect a value
rather than promoting it. Anderson and Swanton both argue that the
complexity of our responses to value can only be explained by a
pluralistic theory.
Elizabeth Anderson argues that it is a mistake to understand moral
goods on the maximising model. She uses the example of parental love
(Anderson 1997, p. 98). Parents should not see their love for their
children as being directed towards an "aggregate child
collective". Such a view would entail that trade-offs were
possible, that one child could be sacrificed for another. On
Anderson's view we can make rational choices between conflicting
values without ranking values: "...choices concerning those
goods or their continued existence do not generally require that we
rank their values on a common scale and choose the more valuable good;
they require that we give each good its due" (Anderson 1997, p.
104).
## 3. Monist Solutions
The last section began by noting that if foundational values are
plural, then choices between them will be complex. It is clear that
our choices are complex. However, it would be invalid to conclude from
this alone that values are plural--the challenge for monists is to
explain how they too can make sense of the complexity of our value
choices.
### 3.1 Different Bearers of Value
One way for monists to make sense of complexity in value choice is to
point out that there are different bearers of value, and this makes a
big difference to the experience of choice. (See Hurka, 1996; Schaber,
1999; Klocksiem 2011). Here is the challenge to monism in Michael
Stocker's words (Stocker, 1990, p. 272): "[if monism is
true] there is no ground for rational conflict because the better
option lacks nothing that would be made good by the lesser." In
other words, there are no relevant differences between the better and
worse options except that the better option is better. Thomas Hurka
objects that there can be such differences. For example, in a choice
between giving five units of pleasure to *A* and ten units to
*B*, the best option (more pleasure for *B*) involves
giving no pleasure at all to *A*. So there is something to
rationally regret, namely, that *A* had no pleasure. The
argument can be expanded to deal with all sorts of choice situation:
in each situation, a monist can say something sensible about an
unavoidable loss, a loss that really is a loss. If, of two options one
will contribute more basic value, the monist must obviously choose
that one. But the lesser of the options may contribute value via
pleasure, while the superior option contributes value via knowledge,
and so there is a loss in choosing the option with the greater value
contribution--a loss in pleasure--and it is rational for us
to regret this.
There is one difficulty with this answer. The loss described by Hurka
is not a moral loss, and so the regret is not moral regret. In
Hurka's example, the relevant loss is that *A* does not
get any pleasure. The agent doing the choosing may be rational to
regret this if she cares about *A*, or even if she just feels
sorry for *A*, but there has been no moral loss, as
'pleasure for *A*' as opposed to pleasure itself is
not a moral value. According to the view under consideration, pleasure
itself is what matters morally, and so although *A*'s
pleasure matters qua pleasure, the moral point of view takes
*B*'s pleasure into account in just the same way, and
there is nothing to regret, as there is more pleasure than there would
otherwise have been. Stocker and Williams would surely insist that the
point of their argument was not just that there is a loss, but that
there is a *moral* loss. The monist cannot accommodate that
point, as the monist can only consider the quantity of the value, not
its distribution, and so we are at an impasse.
However, the initial question was whether the monist has succeeded in
explaining the phenomenon of 'moral regret', and perhaps
Hurka has done that by positing a conflation of moral and non-moral
regret in our experience. From our point of view, there is regret, and
the monist can explain why that is without appealing to irrationality.
On the other hand the monist cannot appeal to anything other than
quantity of value in appraising the morality of the situation. So
although Hurka is clearly right in so far as he is saying that a
correct moral choice can be regretted for non-moral reasons, he can go
no further than that.
### 3.2 Diminishing Marginal Value
Another promising strategy that the monist can use in order to explain
the complexity in our value choices is the appeal to
'diminishing marginal value'. The value that is added to
the sum by a source of value will tend to diminish after a certain
point--this phenomenon is known as diminishing marginal value
(or, sometimes, diminishing marginal utility). Mill's higher and
lower pleasures, which seem to be plural values, might be accommodated
by the monist in this way. The monist makes sense of discontinuities
in value by insisting on the distinction between sources of value,
which are often ambiguously referred to as 'values', and
the super value. Using a monist utilitarian account of value, we can
distinguish between the non-evaluative description of options, the
intermediate description, and the evaluative description as
follows:
| *Non-evaluative description of option* | *Intermediate description of option* | *Evaluative description of option* |
| --- | --- | --- |
| Painting a picture | Producing *x* units of beauty | Producing *y* units of value |
| Reading a book | Producing *x* units of knowledge | Producing *y* units of value |
On this account, painting produces beauty, and beauty (which is not a
value but the intermediate source of value) produces value. Similarly,
reading a book produces knowledge, and gaining knowledge produces
value. Now it should be clear how the monist can make sense of
phenomena like higher and lower pleasures. The non-evaluative options
(e.g. eating donuts) have diminishing marginal non-basic value.
On top of that, the intermediate effect, or non-basic value, (e.g.
experiencing pleasure) can have a diminishing contribution to value.
Varying diminishing marginal value in these cases is easily explained
psychologically. It is just the way we are--we get less and less
enjoyment from donuts as we eat more and more (at least in one
sitting). However, we may well get the same amount of enjoyment from
the tenth Johnny Cash song that we did from the first. In order to
deal with the higher and lower pleasures case the monist will have to
argue that pleasures themselves can have diminishing marginal
utility--the monist can argue that gustatory pleasure gets boring
after a while, and hence contributes less and less to the super
value--well being, or whatever it
is.[9]
This picture brings us back to the distinction between foundational
and non-foundational pluralism. Notice that the monist theories being
imagined here are foundationally monist, because they claim that there
is fundamentally one value, such as pleasure, and they are pluralist
at the level of ordinary choice because they claim that there are
intermediate values, such as knowledge and beauty, which are valuable
because of the amount of pleasure they produce (or realize, or
contain--the exact relationship will vary from theory to
theory).
### 3.3 Theoretical Virtues
The main advantage of pluralism is that it seems true to our
experience of value. We experience values as plural, and pluralism
tells us that values are indeed plural. The monist can respond, as we
have seen, that there are ways to explain the apparent plurality of
values without positing fundamentally plural values. Another,
complementary strategy that the monist can pursue is to argue that
monism has theoretical virtues that pluralism lacks. In general, it
seems that theories should be as simple and coherent as possible, and
that other things being equal, we should prefer a more coherent theory
to a less coherent one. Thus so long as monism can make sense of
enough of our intuitive judgments about the nature of value, then it
is to be preferred to pluralism because it does better on the
theoretical virtue of coherence.
Another way to put this point is in terms of explanation. The monist
can point out that the pluralist picture lacks explanatory depth. It
seems that a list of values needs some further explanation: what makes
these things values? (See Bradley 2009, p. 16). The monist picture is
superior, because the monist can provide an explanation for the value
of the (non-foundational) plurality of values: these things are values
because they contribute to well-being, or pleasure, or whatever the
foundational monist value is. (See also the discussion of this in the
entry on
value theory).
Patricia Marino argues against this strategy (2015). She argues that
'systematicity' (the idea that it is better to have fewer
principles) is not a good argument in favour of monism. Marino points
out that explanation in terms of fewer fundamental principles is not
necessarily *better* explanation. If there are plural values,
then the explanation that appeals to plural values is a better one, in
the sense that it is the true one: it doesn't deny the plurality
of values. Even if we could give a monist explanation without having
to trade off against our pluralist intuitions, Marino argues, we have
no particular reason to think that explanations appealing to fewer
principles are superior.
### 3.4 Preference Satisfaction Views
There is a different account of value that we ought to consider here:
the view that value consists in preference or desire satisfaction. On
this view, knowledge and pleasure and so on are valuable when they are
desired, and if they are not desired anymore they are not valuable
anymore. There is no need to appeal to complicated accounts of
diminishing marginal utility: it is uncontroversial that we sometimes
desire something and sometimes don't. Thus complexities in
choices are explained by complexities in our desires, and it is
uncontroversial that our desires are complex.
Imagine a one person preference satisfaction account of value that
says simply that what is valuable is what *P* desires.
Apparently this view is foundationally monist: there is only one thing
that confers value (being desired by *P*), yet at the
non-foundational level there are many values (whatever *P*
desires). Let us say that *P* desires hot baths, donuts and
knowledge. The structure of *P*'s desires is such that
there is a complicated ranking of these things, which will vary from
circumstance to circumstance. The ranking is not explained by the
value of the objects; rather, her desire explains the ranking and
determines the value of the objects. So it might be that *P*
sometimes desires a hot bath and a donut equally, and cannot choose
between them; it might be that sometimes she would choose knowledge
over a hot bath and a donut, but sometimes she would choose a hot bath
over knowledge. On James Griffin's slightly more complex view,
well-being consists in the fulfillment of informed desire, and Griffin
points out that his view can explain discontinuities in value without
having to appeal to diminishing marginal utility:
>
> there may well turn out to be cases in which, when informed, I want,
> say, a certain amount of one thing more than any amount of another,
> and not because the second thing cloys, and so adding to it merely
> produces diminishing marginal values. I may want it even though the
> second thing does not, with addition, lose its value; it may be that I
> think that no increase in that kind of value, even if constant and
> positive, can overtake a certain amount of this kind of value. (1986,
> p. 76).
>
This version of foundational monism/normative pluralism escapes some
of the problems that attend the goods approach. First, this view can
account for deep complexities in choice. The plural goods that *P* is
choosing between do not seem merely instrumental. Donuts are not good
because they contribute to another value, and *P* does not desire donuts
for any reason other than their donuty nature. On this view, if it is
hard to choose between donuts and hot baths, it is because of the
intrinsic nature of the objects. The key here is that value is
conferred by desire, not by contribution to another value. Second,
this view can accommodate incomparabilities: if *P* desires a hot bath
because of its hot bathy nature, and a donut because of its donuty
nature, she may not be able to choose between them.
However, it is not entirely clear that a view like Griffin's is
genuinely monist at the foundational level: the question arises, what
is constraining the desires that qualify as value conferring? If the
answer is 'nothing', then the view seems genuinely monist,
but is probably implausible. Unconstrained desire accounts of value
seem implausible because our desires can be for all sorts of
things--we may desire things that are bad for us, or we may
desire things because of some mistake we have made. If the answer is
that there is something constraining the desires that count as value
conferring, then of course the question is, 'what?' Is it
the values of the things desired? A desire satisfaction view that
restricts the qualifying desires must give an account of what
restricts them, and obviously, the account may commit the view to
foundational pluralism.
Griffin addresses this question at the very beginning of his book on
well being (Griffin, 1986,
ch.2).[10]
As he puts it,
>
> The danger is that desire accounts get plausible only by, in effect,
> ceasing to be desire accounts. We had to qualify desire with informed,
> and that gave prominence to the features or qualities of the objects
> of desire, and not to the mere existence of desire. (1986, p. 26).
>
Griffin's account of the relationship between desire and value
is subtle, and (partly because Griffin himself does not distinguish
between foundational and normative pluralism) it is difficult to say
whether his view is foundationally pluralist or not. Griffin argues
that it is a mistake to see desire as a blind motivational
force--we desire things that we perceive in a favorable light--we
take them to have a desirability feature. When we try to explain what
is involved in seeing things in a favorable light, we cannot, according
to Griffin, separate understanding from desire:
>
> ...we cannot, even in the case of a desirability feature such as
> accomplishment, separate understanding and desire. Once we see
> something as 'accomplishment', as 'giving weight and
> substance to our lives', there is no space left for desire to
> follow along in a secondary subordinate position. Desire is not blind.
> Understanding is not bloodless. Neither is the slave of the other.
> There is no priority. (1986, p. 30)
>
This suggests that the view is indeed pluralist at the
foundation--values are not defined entirely by desire, but partly
by other features of the situation, and so at the most fundamental
level there is more than one value making feature. Griffin himself
says that "the desire account is compatible with a strong form
of pluralism about values" (p. 31).
The question whether or not Griffin is a foundational pluralist is not
pursued further here. The aim in this section is to show first, that
monist preference satisfaction accounts of value may have more
compelling ways of explaining complexities in value comparison than
monist goods approaches, but second, to point out that any constrained
desire account may well actually be foundationally pluralist. As soon
as something is introduced to constrain the desires that qualify as
value conferring, it looks as though another value is operating.
## 4. Pluralism and Rational Choice
The big question facing pluralism is whether rational choices can be
made between irreducibly plural values. Irreducible plurality appears
to imply incommensurability--that is to say, that there is no
common measure which can be used to compare two different values. (See
the entry on
incommensurable values.)
Value incommensurability seems worrying: if values are
incommensurable, then either we are forced into an ad hoc ranking, or
we cannot rank the values at all. Neither of these are very appealing
options.
However, pluralists reject this dilemma. Bernard Williams argues that
it is a mistake to think that pluralism implies that comparisons are
impossible. He says:
>
> There is one motive for reductivism that does not operate simply on
> the ethical, or on the non-ethical, but tends to reduce every
> consideration to one basic kind. This rests on an assumption about
> rationality, to the effect that two considerations cannot be
> rationally weighed against each other unless there is a common
> consideration in terms of which they can be compared. This assumption
> is at once very powerful and utterly baseless. Quite apart from the
> ethical, aesthetic considerations can be weighed against economic ones
> (for instance) without being an application of them, and without their
> both being an example of a third kind of consideration. (Williams
> 1985, p. 17)
>
Making a similar point, Ruth Chang points out that incommensurability
is often conflated with incomparability. She provides clear
definitions of each: incommensurability is the lack of a common unit
of value by which precise comparisons can be made. Two items are
incomparable, if there is no possible relation of comparison, such as
'better than', or 'as good as' (1997,
Introduction). Chang points out that incommensurability is often
thought to entail incomparability, but it does not.
Defenders of pluralism have used various strategies to show that it is
possible to make rational choices between plural values.
### 4.1 Practical Wisdom
The pluralist's most common strategy in the face of worries
about choices between incommensurable values is to appeal to practical
wisdom--the faculty described by Aristotle--a faculty of
judgment that the wise and virtuous person has, which enables him to
see the right answer. Practical wisdom is not just a question of being
able to see and collate the facts, it goes beyond that in some
way--the wise person will see things that only a wise person
could see. So plural values can be compared in that a wise person will
'just see' that one course of action rather than another
is to be taken. This strategy is used (explicitly or implicitly) by
McDowell (1979), Nagel (1979), Larmore (1987), Skorupski (1996),
Anderson (1993 and 1997), Wiggins (1997 and 1998), Chappell (1998),
and Swanton (2003). Here it is in Nagel's words:
>
> Provided one has taken the process of practical justification as far
> as it will go in the course of arriving at the conflict, one may be
> able to proceed without further justification, but without
> irrationality either. What makes this possible is
> judgment--essentially the faculty Aristotle described as
> practical wisdom, which reveals itself over time in individual
> decisions rather than in the enunciation of general principles. (1979,
> p. 135)
>
The main issue for this solution to the comparison problem is to come
up with an account of what practical wisdom is. It is not easy to
understand what sort of thing the faculty of judgment might be, or how
it might work. Obviously pluralists who appeal to this strategy do not
want to end up saying that the wise judge can see which of the options
has more goodness, as that would constitute collapsing back into
monism. So the pluralist has to maintain that the wise judge makes a
judgment about what the right thing to do is without making any
quantitative judgment. The danger is that the faculty seems entirely
mysterious: it is a kind of magical vision, unrelated to our natural
senses. As a solution to the comparison problem, the appeal to
practical wisdom looks rather like a way of shifting the problem to
another level. Thus the appeal to practical wisdom cannot be left at
that. The pluralist owes more explanation of what is involved in
practical wisdom. What follows below are various pluralists'
accounts of how choice between plural values is possible, and whether
such choice is rational.
### 4.2 Super Scales
One direction that pluralists have taken is to argue that although
values are plural, there is nonetheless an available scale on which to
rank them. This scale is not rationalized by something that the values
have in common (that would be monism), but by something over and above
the values, which is not itself a super value. Williams sometimes
writes as if this is his intention, as do Griffin (1986 and 1997),
Stocker (1990), Chang (1997 and 2004), and Taylor (1982 and 1997). James
Griffin (1986) develops this suggestion in his discussion of plural
prudential values. According to Griffin, we do not need to have a
super-value to have a super-scale. Griffin says:
>
> ...it does not follow from there being no super-value that there
> is no super-scale. To think so would be to misunderstand how the
> notion of 'quantity' of well-being enters. It enters
> through ranking; quantitative differences are defined on qualitative
> ones. The quantity we are talking about is 'prudential
> value' defined on informed rankings. All that we need for the
> all-encompassing-scale is the possibility of ranking items on the
> basis of their nature. And we can, in fact, rank them in that way. We
> can work out trade-offs between different dimensions of pleasure or
> happiness. And when we do, we rank in a strong sense: not just choose
> one rather than the other, but regard it as worth more. That is the
> ultimate scale here: worth to one's life. (Griffin 1986, p. 90)
>
This passage is slightly hard to interpret (for more on why see my
earlier discussion of Griffin in the section on preference
satisfaction accounts). On one interpretation, Griffin is in fact
espousing a sophisticated monism. The basic value is 'worth to
one's life', and though it is important to talk about
non-basic values, such as the different dimensions of pleasure and
happiness, they are ultimately judged in terms of their contribution
to the worth of lives.
The second possible interpretation takes Griffin's claim that
worth to life is not a supervalue seriously. On this interpretation,
it is hard to see what worth to life is, if not a supervalue. Perhaps
it is only a value that we should resort to when faced with
incomparabilities. However, this interpretation invites the criticism
that Griffin is introducing a non-moral value, perhaps prudential
value, to arbitrate when moral values are incommensurable. In other
words, we cannot decide between incommensurable values on moral
grounds, so we should decide on prudential grounds. This seems
reasonable when applied to incommensurabilities in aesthetic values.
One might not be able to say whether *Guernica* is better than *War and
Peace*, but one might choose to have *Guernica* displayed on the wall
because it will impress one's friends, or because it is worth
more money, or even because one just enjoys it more. In the case of
moral choices this is a less convincing strategy: it introduces a
level of frivolity into morality that seems out of place.
Stocker's main strategy is to argue that values are plural, and
comparisons are made, so it must be possible to make rational
comparisons. He suggests that a "higher level synthesizing
category" can explain how comparisons are made (1990, p. 172).
According to Stocker these comparisons are not quantitative, they are
evaluative:
>
> Suppose we are trying to choose between lying on a beach and
> discussing philosophy--or more particularly, between the pleasure
> of the former and the gain in understanding from the latter. To
> compare them we may invoke what might be called a higher-level
> synthesizing category. So, we may ask which will conduce to a more
> pleasing day, or to a day that is better spent. Once we have fixed
> upon the higher synthesizing category, we can often easily ask which
> option is better in regard to that category and judge which to choose
> on the basis of that. Even if it seems a mystery how we might
> 'directly' compare lying on the beach and discussing
> philosophy, it is a commonplace that we do compare them, e.g. in
> regard to their contribution to a pleasing day. (Stocker 1990, p. 72)
>
Stocker claims that goodness is just the highest level synthesizing
category, and that lower goods are constitutive means to the good.
Ruth Chang's approach to comparisons of plural values is very
similar (Chang 1997 (introduction) and 2004). Chang claims that
comparisons can only be made in terms of a covering value--a more
comprehensive value that has the plural values as parts.
A recent suggestion by Brian Hedden and Daniel Munoz is that *by
definition*, overall value supervenes on the dimensions of value
(Hedden and Munoz 2023). The basic point is that if overall value is
affected, then there must be a relevant and distinct way that goodness
or badness has been contributed. So, for example, in Chang's
case of adding more pleasure disrupting balance (see section 2.1),
Hedden and Munoz argue that balance is a dimension of value, and it is
because balance has been adversely affected that there is less value
overall.
There is a problem in understanding quite what a 'synthesizing
category' or 'covering value' is. How does the
covering value determine the relative weightings of the constituent
values? One possibility is that it does it by pure
stipulation--as a martini just is a certain proportion of gin and
vermouth. However, stipulation does not have the right sort of
explanatory power. On the other hand, if a view is to remain
pluralist, it must avoid conflating the super scale with a super
value. Chang argues that her covering values are sufficiently unitary
to provide a basis for comparison, and yet preserve the separateness
of the other values. Chang's argument goes as follows: the
values at stake in a situation (for example, prudence and morality)
cannot on their own determine how heavily they weigh in a particular
choice situation--the values weigh differently depending on the
circumstances of the choice. However, the values plus the
circumstances cannot determine relevant weightings
either--because (simplifying here) the internal circumstances of
the choice will affect the weighting of the values differently
depending on the external circumstances. To use Chang's own
example, when the values at stake are prudence and morality
(specifically, the duty to help an innocent victim), and the
circumstances include the fact that the victim is far away, the effect
this circumstance will have on the weighting of the values depends on
external circumstances, which fix what matters in the choice. So, as
Chang puts it, "'What matters' must therefore have
content beyond the values and the circumstances of the choice"
(2004, p. 134).
Stocker is aware of the worry that appeal to something in terms of
which comparisons can be made reduces the view to monism: Stocker
insists that the synthesizing category (such as a good life) is not a
unitary value--it is at most 'nominal monism' in my
terminology. Stocker argues that it is a philosophical prejudice to
think that rational judgment must be quantitative, and so he claims
that he does not need to give an account of how we form and use the
higher level synthesizing categories.
### 4.3 Basic Preferences
Another approach to the comparison problem appeals to basic
preferences. Joseph Raz takes the line that we can explain choice
between irreducibly plural goods by talking about basic preferences.
Raz approaches the issue of incommensurability by talking about the
nature of agency and rationality instead of about the nature of value.
He distinguishes between two conceptions of human agency: the
rationalist conception, and the classical conception. The rationalist
conception corresponds to what we have called the stronger use of the
term rational. According to the rationalist conception, reasons
require action. The classical conception, by contrast, "regards
reasons as rendering options eligible" (Raz 1999, p. 47). Raz
favors the classical conception, which regards the will as something
separate from desire:
>
> The will is the ability to choose and perform intentional actions. We
> exercise our will when we endorse the verdict of reason that we must
> perform an action, and we do so, whether willingly, reluctantly, or
> regretting the need, etc. According to the classical conception,
> however, the most typical exercise or manifestation of the will is in
> choosing among options that reason merely renders eligible. Commonly
> when we so choose, we do what we want, and we choose what we want,
> from among the eligible options. Sometimes speaking of wanting one
> option (or its consequences) in preference to the other eligible ones
> is out of place. When I choose one tin of soup from a row of identical
> tins in the shop, it would be wrong and misleading to say that I
> wanted that tin rather than, or in preference to, the others.
> Similarly, when faced with unpalatable but unavoidable and
> incommensurate options (as when financial need forces me to give up
> one or another of incommensurate goods), it would be incorrect to say
> that I want to give up the one I choose to give up. I do not want to
> do so. I have to, and I would equally have regretted the loss of
> either good. I simply choose to give up one of them. (Raz, 1999, p.
> 48)
>
Raz's view about the nature of agency is defended in great
detail over the course of many articles, and those arguments cannot
all be examined here. What is crucial in the context of
this discussion of pluralism is whether Raz gives us a satisfactory
account of the weaker sense of rational. Raz's solution to the
problem of incommensurability hangs on the claim that it can be
rational (in the weak sense) to choose A over B when there are no
further reasons favoring A over B. We shall restrict ourselves to
mentioning one objection to the view in the context of moral choices
between plural goods. Though Raz's account of choice may seem
plausible in cases where we choose between non-moral values, it seems
to do violence to the concept of morality. Consider one of Raz's
own examples, the choice between a banana and a pear. It may be that
one has to choose between them, and there is no objective reason to
choose one or the other. In this case, it seems Raz's account of
choice is plausible. If one feels like eating a banana, then in this
case desire can settle the choice. As Raz puts it, "A want can
never tip the balance of reasons in and of itself. Rather, our wants
become relevant when reasons have run their course." In the
example where we choose between a banana and a pear, this sounds fine.
However, if we apply it to a moral choice it seems a lot less
plausible. Raz admits that "If of the options available to
agents in typical situations of choice and decision, several are
incommensurate, then reason can neither determine nor completely
explain their choices or actions" (Raz, 1999, p. 48). Thus many
moral choices are not directed by reason but by a basic preference. It
is not fair to call it a desire, because on Raz's account we
desire things for reasons--we take the object of our desire to be
desirable. On Raz's picture then, when reasons have run their
course, we are choosing without reasons. It doesn't matter
hugely whether we call that 'rational' (it is not rational
in the strong sense, but it is in the weak sense). What matters is
whether this weak sense of rational is sufficient to satisfy our
concept of moral choice as being objectively defensible. The problem
is that choosing without reasons looks rather like plumping. Plumping
may be an intelligible form of choice, but it is questionable whether
it is a satisfactory account of moral choice.
### 4.4 Accepting Incomparability
One philosopher who is happy to accept that there may be situations
where we just cannot make reasoned choices between plural values is
Isaiah Berlin, who claimed that goods such as liberty and equality
conflict at the fundamental level. Berlin is primarily concerned with
political pluralism, and with defending political liberalism, but his
views about incomparability have been very influential in discussions
on moral pluralism. Bernard Williams (1981), Charles Larmore (1987),
John Kekes (1993), Michael Stocker (1990 and 1997), and David Wiggins
(1997) have all argued that there are at least some genuinely
irresolvable conflicts between values, and that to expect a rational
resolution is a mistake. (See also the entry on
moral dilemmas).
For Williams this is part of a more general mistake made by
contemporary moral philosophers--he thinks that philosophy tries
to make ethics too easy, too much like arithmetic. Williams insists
throughout his writings that ethics is a much more complex and
multi-faceted beast than its treatment at the hands of moral
philosophers would suggest, and so it is not surprising to him that
there should be situations where values conflict irresolvably. Stocker
(1990) discusses the nature of moral conflict at great length, and
although he thinks that many apparent conflicts can be dissolved or
are not serious, like Williams, he argues that much of contemporary
philosophy's demand for simplicity is mistaken. Stocker argues
that ethics need not always be action guiding, that value is much more
complex than Kantians and utilitarians would have us think, and that
as the world is complicated we will inevitably face conflicts. Several
pluralists have argued that accepting the inevitability of value
conflicts does not result in a breakdown of moral argument, but rather
the reverse. Kekes (1993), for example, claims that pluralism enables
us to see that irresolvable disagreements are not due to wickedness on
the part of our interlocutor, but may be due to the plural nature of
values.
## 5. Conclusion
The battle lines in the debate between pluralism and monism are not
always clear. This entry has outlined some of them, and discussed some
of the main arguments. Pluralists need to be clear about whether they
are foundational or non-foundational pluralists. Monists must defend
their claim that there really is a unitary value. Much of the debate
between pluralists and monists has focused on the issue of whether
the complexity of moral choice implies that values really are
plural--a pattern emerges in which the monist claims to be able
to explain the appearance of plurality away, and the pluralist insists
that the appearance reflects a pluralist reality. Finally, pluralists
must explain how comparisons between values are made, or defend the
consequence that incommensurability is widespread.
## 1. Basic Questions
The theory of value begins with a subject matter. It is hard to
specify in some general way exactly what counts, but it certainly
includes what we are talking about when we say any of the following
sorts of things (compare Ziff [1960]):
>
> "pleasure is good/bad"; "it would be good/bad if you
> did that"; "it is good/bad for him to talk to her";
> "too much cholesterol is good/bad for your health";
> "that is a good/bad knife"; "Jack is a good/bad
> thief"; "he's a good/bad man";
> "it's good/bad that you came"; "it would be
> better/worse if you didn't"; "lettuce is
> better/worse for you than Oreos"; "my new can opener is
> better/worse than my old one"; "Mack is a better/worse
> thief than Jack"; "it's better/worse for it to end
> now, than for us to get caught later"; "best/worst of all,
> would be if they won the World Series *and* kept all of their
> players for next year"; "celery is the best/worst thing
> for your health"; "Mack is the best/worst thief
> around"
>
The word "value" doesn't appear anywhere on this
list; it is full, however, of "good",
"better", and "best", and correspondingly of
"bad", "worse", and "worst". And
these words are used in a number of different kinds of constructions,
of which we may take these four to be the main exemplars:
1. Pleasure is good.
2. It is good that you came.
3. It is good for him to talk to her.
4. That is a good knife.
Sentences like 1, in which "good" is predicated of a mass
term, constitute a central part of traditional axiology, in which
philosophers have wanted to know what things (of which there can be
more or less) are good. I'll stipulatively call them value
claims, and use the word "stuff" for the kind of thing of
which they predicate value (like pleasure, knowledge, and money).
Sentences like 2 make claims about what I'll (again
stipulatively) call goodness *simpliciter*; this is the kind of
goodness appealed to by traditional utilitarianism. Sentences like 3
are *good for* sentences, and when the subject following
"for" is a person, we usually take them to be claims about
welfare or well-being. And sentences like 4 are what, following Geach
[1956], I'll call *attributive* uses of "good",
because "good" functions as a predicate modifier, rather
than as a predicate in its own right.
Many of the basic issues in the theory of value begin with questions
or assumptions about how these various kinds of claim are related to
one another. Some of these are introduced in the next two sections,
focusing in 1.1 on the relationship between our four kinds of
sentences, and focusing in 1.2 on the relationship between
"good" and "better", and between
"good" and "bad".
### 1.1 Varieties of Goodness
Claims about good *simpliciter* are those which have garnered the
most attention in moral philosophy. This is partly because as it is
usually understood, these are the "good" claims that
consequentialists hold to have a bearing on what we ought to do.
Consequentialism, so understood, is the view that you ought to do
whatever action is such that it would be best if you did it. This
leaves, however, a wide variety of possible theories about how such
claims are related to other kinds of "good" claim.
#### 1.1.1 Good *Simpliciter* and *Good For*
For example, consider a simple *point of view* theory, according
to which what is good *simpliciter* differs from what is good for
Jack, in that being good for Jack is being good from a certain point
of view -- Jack's -- whereas being good
*simpliciter* is being good from a more general point of view
-- the point of view of the universe (compare Nagel [1985]). The
point of view theory reduces both *good for* and good
*simpliciter* to *good from the point of view of*, and
understands good *simpliciter* claims as about the point of view
of the universe. One problem for this view is to make sense of what
sort of thing points of view could be, such that Jack and the universe
are both the kinds of thing to have one.
According to a different sort of theory, the *agglomerative
theory*, goodness *simpliciter* is just what you get by
"adding up" what is good for all of the various people
that there are. Rawls [1971] attributes this view to utilitarians, and
it fits with utilitarian discussions such as Smart's
contribution to Smart and Williams [1973], but much more work would
have to be done in order to make it precise. We sometimes say things
like, "wearing that outfit in the sun all day is not going to be
good for your tan line", but your tan line is not one of the
things whose good it seems plausible to "add up" in order
to get what is good *simpliciter*. Certainly it is not one of the
things whose good classical utilitarians would want to add up. So the
fact that sapient and even sentient beings are not the only kinds of
thing that things can be good or bad for sets an important constraint
both on accounts of the *good for* relation, and on theories
about how it is related to good *simpliciter*.
Rather than accounting for either of goodness *simpliciter* or
goodness-*for* in terms of the other, some philosophers have
taken one of these seriously at the expense of the other. For example,
Philippa Foot [1985] gives an important but compressed argument that
apparent talk about what is good *simpliciter* can be made
sense of as elliptical talk about what is good for some unmentioned
person, and Foot's view can be strengthened (compare Shanklin
[2011], Finlay [2014]) by allowing that apparent good
*simpliciter* claims are often *generically quantified*
statements about what is, in general, good for a person. Thomson
[2008] famously defends a similar view.
G.E. Moore [1903], in contrast, struggled to make sense of good-for
claims. In his refutation of egoism, Moore attributed to ethical
egoists the theory that what is good for Jack (or "in
Jack's good") is just what is good and in Jack's
possession, or alternatively, what it is good that Jack possesses.
Moore didn't argue against these theses directly, but he did
show that they cannot be combined with universalizable egoism. It is
now generally recognized that to avoid Moore's arguments,
egoists need only to reject these analyses of *good for*, which
are in any case unpromising (Smith [2003]).
#### 1.1.2 Attributive Good
Other kinds of views understand good *simpliciter* in terms of
attributive good. What, after all, are the kinds of things to which we
attribute goodness *simpliciter*? According to many philosophers,
it is to propositions, or states of affairs. This is supported by a
cursory study of the examples we have considered, in which what is
being said to be good appears to be picked out by complementizers like
"if", "that", and "for": "it
would be good if you did that"; "it's good that you
came"; "it's better for it to end now". If
complementizer phrases denote propositions or possible states of
affairs, then it is reasonable to conjecture, along with Foot [1985]
that being good *simpliciter* is being a good state of affairs,
and hence that it is a special case of attributive good (if it makes
sense at all -- Geach and Foot both argue that it does not, on
the ground that states of affairs are too thin of a kind to support
attributive good claims).
See the
>
> Supplement on Four Complications about Attributive Good
>
for further complications that arise when we consider the attributive
sense of "good".
Some philosophers have used the examples of attributive good and
*good for* in order to advance arguments against noncognitivist
metaethical theories (See the entry
cognitivism and non-cognitivism).
The basic outlines of such an argument go like this: noncognitivist
theories are designed to deal with good *simpliciter*, but have
some kind of difficulties accounting for attributive good or for
*good for*. Hence, there is a general problem with noncognitivist
theories, or at least a significant lacuna they leave. It has
similarly been worried that noncognitivist theories will have problems
accounting for so-called "agent-relative" value [see
section 4], again, apparently, because of its relational nature. There
is no space to consider this claim here, but note that it would be
surprising if relational uses of "good" like these were in
fact a deep or special problem for noncognitivism; Hare's
account in *The Language of Morals* (Hare [1952]) was
specifically about attributive uses of "good", and it is
not clear why relational noncognitive attitudes should be harder to
make sense of than relational beliefs.
#### 1.1.3 Relational Strategies
In an extension of the strategies just discussed, some theorists have
proposed views of "good" which aspire to treat all of good
*simpliciter*, *good for*, and attributive good as special
cases. A paradigm of this approach is the "end-relational"
theory of Paul Ziff [1960] and Stephen Finlay [2004], [2014].
According to Ziff, all claims about goodness are relative to ends or
purposes, and "good for" and attributive
"good" sentences are simply different ways of making these
purposes (more or less) explicit. Talk about what is good for Jack,
for example, makes the purpose of Jack's being happy (say)
explicit, while talk about what is a good knife makes our usual
purposes for knives (cutting things, say) explicit. The claim about
goodness is then relativized accordingly.
Views adopting this strategy need to develop in detail answers to just
what, exactly, the further, relational, parameter on
"good" is. Some hold that it is *ends*, while others
say things like "aims". A filled-out version of this view
must also be able to tell us the *mechanics* of how these ends
can be made explicit in "good for" and attributive
"good" claims, and needs to really make sense of both of
those kinds of claim as of one very general kind. And, of course, this
sort of view yields the prediction that non-explicitly relativized
"good" sentences -- including those used throughout
moral philosophy -- are really only true or false once the end
parameter is specified, perhaps by context.
This means that this view is open to the objection that it fails to
account for a central class of uses of "good" in ethics,
which by all evidence are *non*-relative, and for which the
linguistic data do not support the hypothesis that they are
context-sensitive. J.L. Mackie held a view like this one and embraced
this result -- Mackie's [1977] error theory about
"good" extended only to such putative non-relational
senses of "good". Though he grants that there are such
uses of "good", Mackie concludes that they are mistaken.
Finlay [2014], in contrast, argues that he can use ordinary pragmatic
effects in order to explain the appearances. The apparently
non-relational senses of "good", Finlay argues, really are
relational, and his theory aspires to explain why they seem
otherwise.
#### 1.1.4 What's Special About Value Claims
The sentences I have called "value claims" present special
complications. Unlike the other sorts of "good" sentences,
they do not appear to admit, in a natural way, of comparisons.
Suppose, for example, with G.E. Moore, that pleasure is good and
knowledge is good. Which, we might ask, is better? This question does
not appear to make very much sense, until we fix on some *amount*
of pleasure and some *amount* of knowledge. But if Sue is a good
dancer and Huw is a good dancer, then it makes perfect sense to ask
who is the better dancer, and without needing to fix on any particular
*amount* of dancing -- much less on any amount of Sue or
Huw. In general, just as the kinds of thing that can be tall are the
same kinds of thing as can be taller than each other, the kinds of
thing that can be good are the same kinds of thing as can be better
than one another. But the sentences that we are calling "value
claims", which predicate "good" of some stuff,
appear not to be like this.
One possible response to this observation, if it is taken seriously,
is to conclude that so-called "value claims" have a
different kind of logical form or structure. One way of implementing
this idea, the *good-first* theory, is to suppose that
"pleasure is good" means something roughly like,
"(other things equal) it is better for there to be more
pleasure", rather than, "pleasure is better than most
things (in some relevant comparison class)", on the model of
"Sue is a good dancer", which means roughly, "Sue is
a better dancer than most (in some relevant comparison class)".
According to a very different kind of theory, the *value-first*
theory, when we say that pleasure is good, we are saying that pleasure
is a value, and things are better just in case there is more of the
things which are values. These two theories offer competing orders of
explanation for the same phenomenon. The good-first theory analyzes
value claims in terms of "good" simpliciter, while the
value-first theory analyzes "good" *simpliciter* in
terms of value claims. The good-first theory corresponds to the thesis
that states of affairs are the "primary bearers" of value;
the value-first theory corresponds to the alternative thesis that it
is things like pleasure or goodness (or perhaps their instances) that
are the "primary bearers" of value.
According to a more skeptical view, sentences like "pleasure is
good" do not express a distinctive kind of claim at all, but are
merely what you get when you take a sentence like "pleasure is
good for Jill to experience", generically quantify out Jill, and
elide "to experience". Following an idea also developed
by Finlay [2014], Robert Shanklin [2011] argues that in general,
good-for sentences pattern with *experiencer* adjectives like
"fun", which admit of these very syntactic
transformations: witness "Jack is fun for Jill to talk
to", "Jack is fun to talk to", "Jack is
fun". This view debunks the issue over which the views discussed
in the last paragraph disagree, for it denies that there is any such
distinct topic for value claims to be about. (It may also explain the
failures of comparative forms, above, on the basis of differences in
the elided material.)
### 1.2 Good, Better, Bad
#### 1.2.1 *Good* and *Better*
On a natural view, the relationship between "good",
"better", and "best" would seem to be the same
as that between "tall", "taller", and
"tallest". "Tall" is a gradable adjective, and
"taller" is its comparative form. On standard views,
gradable adjectives are analyzed in terms of their comparative form.
At bottom is the relation of being *taller than*, and someone is
the tallest woman just in case she is taller than every woman.
Similarly, someone is tall, just in case she is taller than a
contextually appropriate standard (Kennedy [2005]), or taller than
sufficiently many (this may be vague) in some contextually
appropriate comparison class.
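The standard analysis sketched above can be stated schematically. This is a rough formalization of my own, not notation from the literature: \(s\_c\) labels the contextually supplied standard, \(C\) the contextual comparison class, and the entry for "good" is simply built on the same pattern as "tall":

\[
\begin{aligned}
\textit{tallest}(x) &\leftrightarrow \forall y\,(y \neq x \to \textit{taller}(x, y))\\
\textit{tall}(x) &\leftrightarrow \textit{taller}(x, s_c)
\quad\text{or}\quad
\textit{taller}(x, y) \text{ for sufficiently many } y \in C\\
\textit{good}(x) &\leftrightarrow \textit{better}(x, s_c)
\end{aligned}
\]

On this pattern the comparative is the basic notion and the positive form is defined from it, together with a contextual parameter; treating "good" as basic instead amounts to rejecting the third biconditional as an analysis.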
Much moral philosophy appears to assume that things are very different
for "good", "better", and "best".
Instead of treating "better than" as basic, and something
as being good just in case it is better than sufficiently many in some
comparison class, philosophers very often assume, or write as if they
assume, that "good" is basic. For example, many theorists
have proposed analyses of *what it is* to be good which are
incompatible with the claim that "good" is to be
understood in terms of "better". In the absence of some
reason to think that "good" is very different from
"tall", however, this may be a very peculiar kind of claim
to make, and it may distort some other issues in the theory of
value.
#### 1.2.2 Value
Moreover, it is difficult to see how one could do things the other way
around, and understand "better" in terms of
"good". Jon is a better sprinter than Jan not because it
is more the case that Jon is a good sprinter than that Jan is a good
sprinter -- they are both excellent sprinters, so neither one of
these is more the case than the other. It is, however, possible to see
how to understand both "good" and "better" in
terms of value. If good is to better as tall is to taller, then the
analogue of value should intuitively be height. One person is taller
than another just in case her height is greater; similarly, one state
of affairs is better than another just in case its value is greater.
If we postulate something called "value" to play this
role, then it is natural (though not obligatory) to identify value
with amounts of *values* -- amounts of things like pleasure
or knowledge, which "value" claims claim to be good.
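The height analogy, and the optional further identification of value with amounts of values, can be put schematically (again my own formalization; the functions \(h\) and \(v\) are illustrative labels, not notation from the sources):

\[
\begin{aligned}
\textit{taller}(a, b) &\leftrightarrow h(a) > h(b)\\
\textit{better}(a, b) &\leftrightarrow v(a) > v(b)\\
v(x) &= \text{the total amount in } x \text{ of the values (pleasure, knowledge, }\dots)
\end{aligned}
\]

Nothing in the schema fixes how amounts of distinct values are to be combined into a single quantity, which is part of why the identification in the third line is natural but not obligatory.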
But this move appears to be implausible or unnecessary when applied to
attributive "good". It is not particularly plausible that
there is such a thing as can-opener value, such that one can-opener is
better than another just in case it has more can-opener value. In
general, not all comparatives need be analyzable in terms of something
like height, of which there can be literally more or less. Take, for
example, the case of "scary". The analogy with height
would yield the prediction that if one horror film is scarier than
another, it is because it has more of something -- scariness
-- than the other. This may be right, but it is not obviously so.
If it is not, then the analogy need not hold for "good"
and its cognates, either. In this case, it may be that being better
than does not merely amount to having more value than.
#### 1.2.3 *Good* and *Bad*
These questions, moreover, are related to others. For example,
"better" would appear to be the inverse relation of
"worse". A is better than B just in case B is worse than
A. So if "good" is just "better than sufficiently
many" and "bad" is just "worse than
sufficiently many", all of the interesting facts in the
neighborhood would seem to be captured by an assessment of what stands
in the *better than* relation to what. The same point goes if to
be good is just to be better than a contextually set standard. But it
has been held by many moral philosophers that an inventory of what is
better than what would still leave something interesting and important
out: what is *good*.
If this is right, then it is one important motivation for denying that
"good" can be understood in terms of "better".
But it is important to be careful about this kind of argument.
Suppose, for example, that, as is commonly held about
"tall", the relevant comparison class or standard for
"good" is somehow supplied by the context of utterance.
Then to know whether "that is good" is true, you *do*
need to know more than all of the facts about what is better than what
-- you also need to know something about the comparison class or
standard that is supplied by the context of utterance. The assumption
that "good" is context-dependent in this way may therefore
itself be just the kind of thing to explain the intuition which drives
the preceding argument.
## 2. Traditional Questions
Traditional axiology seeks to investigate what things are good, how
good they are, and how the goodness of different things is related.
Whatever we take the "primary bearers" of value to be, one
of the central questions of traditional axiology is that of what
stuffs are good: what is of value.
### 2.1 Intrinsic Value
#### 2.1.1 What is Intrinsic Value?
Of course, the central question philosophers have been interested in,
is that of what is of *intrinsic* value, which is taken to
contrast with *instrumental* value. Paradigmatically, money is
supposed to be good, but not intrinsically good: it is supposed to be
good because it leads to other good things: HD TV's and houses
in desirable school districts and vanilla lattes, for example. These
things, in turn, may only be good for what they lead to: exciting NFL
Sundays and adequate educations and caffeine highs, for example. And
those things, in turn, may be good only for what they lead to, but
eventually, it is argued, something must be good, and not just for
what it leads to. Such things are said to be *intrinsically
good*.
Philosophers' adoption of the term "intrinsic" for
this distinction reflects a common theory, according to which whatever
is non-instrumentally good must be good in virtue of its intrinsic
properties. This idea is supported by a natural argument: if something
is good only because it is related to something else, the argument
goes, then it must be its relation to the other thing that is
non-instrumentally good, and the thing itself is good only because it
is needed in order to obtain this relation. The premise in this
argument is highly controversial (Schroeder [2005]), and in fact many
philosophers believe that something can be non-instrumentally good in
virtue of its relation to something else. Consequently, sometimes the
term "intrinsic" is reserved for what is good in virtue of
its intrinsic properties, or for the view that *goodness* itself
is an intrinsic property, and non-instrumental value is instead called
"telic" or "final" (Korsgaard [1983]).
I'll stick to "intrinsic", but keep in mind that
intrinsic goodness may not be an intrinsic property, and that what is
intrinsically good may turn out not to be so in virtue of its
intrinsic properties.
See the
>
> Supplement on Atomism/Holism about Value
>
for further discussion of the implications of the assumption that
intrinsic value supervenes on intrinsic properties.
Instrumental value is also sometimes contrasted with
"constitutive" value. The idea behind this distinction is
that instrumental values lead *causally* to intrinsic values,
while constitutive values *amount* to intrinsic values. For
example, my giving you money, or a latte, may causally result in your
experiencing pleasure, whereas your experiencing pleasure may
*constitute*, without causing, your being happy. For many
purposes this distinction is not very important and often not noted,
and constitutive values can be thought of, along with instrumental
values, as things that are ways of getting something of intrinsic
value. I'll use "instrumental" in a broad sense, to
include such values.
#### 2.1.2 What is the Intrinsic/Instrumental Distinction Among?
I have assumed, here, that the intrinsic/instrumental distinction is
among what I have been calling "value claims", such as
"pleasure is good", rather than among one of the other
kinds of uses of "good" from part 1. It does not make
sense, for example, to say that something is a good can opener, but
only instrumentally, or that Sue is a good dancer, but only
instrumentally. Perhaps it does make sense to say that vitamins are
good for Jack, but only instrumentally; if that is right, then the
instrumental/intrinsic distinction will be more general, and it may
tell us something about the structure of and relationship between the
different senses of "good", to look at which uses of
"good" allow an intrinsic/instrumental distinction.
It is sometimes said that consequentialists, since they appeal to
claims about what is good *simpliciter* in their explanatory
theories, are committed to holding that states of affairs are the
"primary" bearers of value, and hence are the only things
of intrinsic value. This is not right. First, consequentialists can
appeal in their explanatory moral theory to facts about what state of
affairs would be best, without holding that states of affairs are the
"primary" bearers of value; instead of having a
"good-first" theory, they may have a
"value-first" theory (see section 1.1.4), according to
which states of affairs are good or bad *in virtue* of there
being more things of value in them. Moreover, even those who take a
"good-first" theory are not really committed to holding
that it is states of affairs that are intrinsically valuable; states
of affairs are not, after all, something that you can collect more or
less of. So they are not really in parallel to pleasure or
knowledge.
For more discussion of intrinsic value, see the entry on
intrinsic vs. extrinsic value.
### 2.2 Monism/Pluralism
One of the oldest questions in the theory of value is that of whether
there is more than one fundamental (intrinsic) value. Monists say
"no", and pluralists say "yes". This question
only makes sense as a question about intrinsic values; clearly there
is more than one instrumental value, and monists and pluralists will
disagree, in many cases, not over whether something is of value, but
over whether its value is *intrinsic*. For example, as important
as he held the value of knowledge to be, Mill was committed to holding
that its value is instrumental, not intrinsic. G.E. Moore disagreed,
holding that knowledge is indeed a value, but an intrinsic one, and
this expanded Moore's list of basic values. Mill's theory
famously has a pluralistic element as well, in contrast with
Bentham's, but whether Mill properly counts as a pluralist about
value depends on whether his view was that there is only one value
-- happiness -- but two different kinds of pleasure which
contribute to it, one more effectively than the other, or whether his
view was that each kind of pleasure is a distinctive value. This point
will be important in what follows.
#### 2.2.1 Ontology and Explanation
At least three quite different sorts of issues are at stake in this
debate. First is an ontological/explanatory issue. Some monists have
held that a plural list of values would be explanatorily
unsatisfactory. If pleasure and knowledge are both values, they have
held, there remains a further question to be asked: why? If this
question has an answer, some have thought, it must be because there is
a further, more basic, value under which the explanation subsumes both
pleasure and knowledge. Hence, pluralist theories are either
explanatorily inadequate, or have not really located the basic
intrinsic values.
This argument relies on a highly controversial principle about how an
explanation of why something is a value must work -- a very
similar principle to that which was appealed to in the argument that
intrinsic value must be an intrinsic property (see section 2.1.1). If this
principle is false, then an explanatory theory of *why* both
pleasure and knowledge are values can be offered which does not work
by subsuming them under a further, more fundamental value. Reductive
theories of *what it is* to be a value satisfy this description,
and other kinds of theory may do so, as well (Schroeder [2005]). If
one of these kinds of theory is correct, then even pluralists can
offer an explanation of why the basic values that they appeal to are
values.
#### 2.2.2 Revisionary Commitments?
Moreover, against the monist, the pluralist can argue that the basic
posits to which her theory appeals are not *different in kind*
from those to which the monist appeals; they are only different in
*number*. This leads to the second major issue that is at stake
in the debate between monists and pluralists. Monistic theories carry
strong implications about what is of value. Given any monistic theory,
everything that is of value must be either the one intrinsic value, or
else must lead to the one intrinsic value. This means that if some
things that are intuitively of value, such as knowledge, do not, in
fact, always lead to what a theory holds to be the one intrinsic value
(for example, pleasure), then the theory is committed to denying that
these things are really always of value after all.
Confronted with these kinds of difficulties in subsuming everything
that is pre-theoretically of value under one master value, pluralists
don't fret: they simply add to their list of basic intrinsic
values, and hence can be more confident in preserving the
pre-theoretical phenomena. Monists, in contrast, have a choice. They
can change their mind about the basic intrinsic value and try all over
again, they can work on developing resourceful arguments that
knowledge really does lead to pleasure, or they can bite the bullet
and conclude that knowledge is really not, after all, always good, but
only under certain specific conditions. If the explanatory commitments
of the pluralist are not *different in kind* from those of the
monist, but only different in *number*, then it is natural for
the pluralist to think that this kind of slavish adherence to the
number one is a kind of fetish it is better to do without, if we want
to develop a theory that gets things *right*. This is a
perspective that many historical pluralists have shared.
#### 2.2.3 Incommensurability
The third important issue in the debate between monists and
pluralists, and the most central over recent decades, is that over the
relationship between pluralism and incommensurability. If one state of
affairs is better than another just in case it contains more value
than the other, and there are two or more basic intrinsic values, then
it is not clear how two states of affairs can be compared, if one
contains more of the first value, but the other contains more of the
second. Which state of affairs is better, under such a circumstance?
In contrast, if there is only one intrinsic value, then this
can't happen: the state of affairs that is better is the one
that has more of the basic intrinsic value, whatever that is.
Reasoning like this has led some philosophers to believe that
pluralism is the key to explaining the complexity of real moral
situations and the genuine tradeoffs that they involve. If some things
really *are* incomparable or incommensurable, they reason, then
pluralism about value could explain *why*. Very similar reasoning
has led other philosophers, however, to the view that monism
*has* to be right: practical wisdom requires being able to make
choices, even in complicated situations, they argue. But that would be
impossible, if the options available in some choice were incomparable
in this way. So if pluralism leads to this kind of incomparability,
then pluralism must be false.
In the next section, we'll consider the debate over the
comparability of values on which this question hinges. But even if we
grant all of the assumptions on both sides so far, monists have the
better of these two arguments. Value pluralism may be *one* way
to obtain incomparable options, but there could be other ways, even
consistently with value monism. For example, take the interpretation
of Mill on which he believes that there is only one intrinsic value
-- happiness -- but that happiness is a complicated sort of
thing, which can happen in each of two different ways -- either
through higher pleasures, or through lower pleasures. If Mill has this
view, and holds, further, that it is in some cases indeterminate
whether someone who has slightly more higher pleasures is happier than
someone who has quite a few more lower pleasures, then he can explain
why it is indeterminate whether it is better to be the first way or
the second way, without having to appeal to pluralism in his theory of
*value*. The pluralism would be within his theory of
*happiness* alone.
See a more detailed discussion in the entry on
value pluralism.
### 2.3 Incommensurability/Incomparability
We have just seen that one of the issues at stake in the debate
between monists and pluralists about value turns on the question
(vaguely put) of whether values can be incomparable or
incommensurable. This is consequently an area of active dispute in its
own right. There are, in fact, many distinct issues in this debate,
and sometimes several of them are run together.
#### 2.3.1 Is there Weak Incomparability?
One of the most important questions at stake is whether it must always
be true, for two states of affairs, that things would be better if the
first obtained than if the second did, that things would be better if
the second obtained than if the first did, or that things would be
equally good if either obtained. The claim that it can sometimes
happen that none of these is true is sometimes referred to as the
claim of *incomparability*, in this case as applied to good
*simpliciter*. Ruth Chang [2002] has argued that in addition to
"better than", "worse than", and
"equally good", there is a fourth "positive value
relation", which she calls *parity*. Chang reserves the use
of "incomparable" to apply more narrowly, to the
possibility that in addition to none of the other three relations
holding between them, it is possible that two states of affairs may
fail even to be "on a par". However, we can distinguish
between *weak* incomparability, defined as above, and
*strong* incomparability, further requiring the lack of parity,
whatever that turns out to be. Since the notion of *parity* is
itself a theoretical idea about how to account for what happens when
the other three relations fail to obtain, a question which I
won't pursue, it will be weak incomparability that interests
us in what follows.
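The contrast between the two notions can be put schematically. This is my own shorthand, not the entry's: \(\succ\) for "better than", \(\sim\) for "equally good", and \(\parallel\) for Chang's "on a par".

```latex
% Weak incomparability of states of affairs A and B:
% none of the three standard positive value relations holds.
\neg(A \succ B) \land \neg(B \succ A) \land \neg(A \sim B)

% Strong incomparability additionally requires the failure of parity:
\neg(A \succ B) \land \neg(B \succ A) \land \neg(A \sim B) \land \neg(A \parallel B)
```

Weak incomparability is simply the failure of the classical trichotomy; strong incomparability adds the failure of whatever fourth relation parity turns out to be.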
It is important to distinguish the question of whether good
*simpliciter* admits of incomparability from the question of
whether *good for* and attributive good admit of incomparability.
Many discussions of the incomparability of values proceed at a very
abstract level, and interchange examples of each of these kinds of
value claims. For example, a typical example of a purported
incomparability might compare, say, Mozart to Rodin. Is Mozart a
better artist than Rodin? Is Rodin a better artist than Mozart? Are
they equally good? If none of these is the case, then we have an
example of incomparability in attributive good, but not an example of
incomparability in good *simpliciter*. These questions may be
parallel or closely related, and investigation of each may be
instructive in consideration of the other, but they still need to be
kept separate.
For example, one important argument against the incomparability of
value was mentioned in the previous section. It is that
incomparability would rule out the possibility of practical wisdom,
because practical wisdom requires the ability to make correct choices
even in complicated choice situations. Choices are presumably between
actions, or between possible consequences of those actions. So it
could be that attributive good is sometimes incomparable, because
neither Mozart nor Rodin is a better artist than the other and they
are not equally good, but that good *simpliciter* is always
comparable, so that there is always an answer as to which of two
actions would lead to an outcome that is better.
#### 2.3.2 What Happens when there is Weak Incomparability?
Even once it is agreed that good *simpliciter* is sometimes incomparable in
this sense, many theories have been offered as to what that
incomparability involves and why it exists. One important constraint
on such theories is that they not predict more incomparabilities than
we really observe. For example, though Rodin may not be a better or
worse artist than Mozart, nor equally good, he is certainly a better
artist than Salieri -- even though Salieri, like Mozart, is a
better composer than Rodin. This is a problem for the idea that
incomparability can be explained by value pluralism. The argument from
value pluralism to incomparability suggested that it would be
impossible to compare any two states of affairs where one contained
more of one basic value and the other contained more of another. But
cases like that of Rodin and Salieri show that the explanation of what
is incomparable between Rodin and Mozart can't simply be that
since Rodin is a better sculptor and Mozart is a better composer,
there is no way of settling who is the better artist. If that were the
correct explanation, then Rodin and Salieri would also be
incomparable, but intuitively, they are not. Constraints like these
can narrow down the viable theories about what is going on in cases of
incomparability, and are evidence that incomparability is probably not
going to be straightforwardly explained by value pluralism.
There are many other kinds of theses that go under the title of the
incomparability or incommensurability of values. For example, some
theories which posit lexical orderings are said to commit to
"incomparabilities". Kant's thesis that rational
agents have a dignity and not a price is often taken to be a thesis
about a kind of incommensurability, as well. Some have interpreted
Kant to be holding simply that respect for rational agents is of
infinite value, or that it is to be lexically ordered over the value
of anything else. Another thesis in the neighborhood, however, would
be somewhat weaker. It might be that a human life is "above
price" in the sense that killing one to save one is not an
acceptable exchange, but that for some sufficiently large value of \(n\),
killing one to save \(n\) would be an acceptable exchange. On this
view, there is no single "exchange value" for a life,
because the value of a human life depends on whether you are
"buying" or "selling" -- it is higher
when you are going to take it away, but lower when you are going to
preserve it. Such a view would intelligibly count as a kind of
"incommensurability", because it sets no single value on
human lives.
A more detailed discussion of the commensurability of values can be
found in the entry on
incommensurable values.
## 3. Relation to the Deontic
One of the biggest and most important questions about value is the
matter of its relation to the deontic -- to categories like
*right*, *reason*, *rational*, *just*, and
*ought*. According to *teleological* views, of which
classical consequentialism and universalizable egoism are classic
examples, deontic categories are posterior to and to be explained in
terms of evaluative categories like *good* and *good for*.
The contrasting view, according to which deontic categories are prior
to, and explain, the evaluative categories, is one which, as Aristotle
says, has no name. But its most important genus is that of
"fitting attitude" accounts, and Scanlon's [1998]
"buck-passing" theory is another closely related
contemporary example.
### 3.1 Teleology
Teleological theories are not, strictly speaking, theories about
value. They are theories about right action, or about what one ought
to do. But they are committed to *claims* about value, because
they appeal to evaluative facts, in order to explain what is right and
wrong, and what we ought to do -- *deontic* facts. The most
obvious consequence of these theories is therefore that evaluative
facts must not then be explained in terms of deontic facts. The
evaluative, on such views, is prior to the deontic.
#### 3.1.1 Classical Consequentialism
The most familiar sort of view falling under this umbrella is
*classical consequentialism*, sometimes called (for reasons
we'll see in section 3.3) "agent-neutral
consequentialism". According to classical consequentialism,
every agent ought always to do whatever action, out of all of the
actions available to her at that time, is the one such that if she did
it, things would be best. Not all defenders of consequentialism
interpret it in such classical terms; other prominent forms of
consequentialism focus on rules or motives, and evaluate actions only
derivatively.
Classical consequentialism is sometimes supported by appeal to the
intuition that one should always do the best action, and then the
assumption that actions are only instrumentally good or bad --
for the sake of what they lead to (compare especially Moore [1903]).
The problem with this reasoning is that non-consequentialists can
agree that agents ought always to do the best action. The important
feature of this claim to recognize is that it is a claim not about
intrinsic or instrumental value, but about attributive good. And as
noted in section 2.1, "instrumental" and
"intrinsic" don't really apply to attributive good.
Just as how good of a can opener something is or how good of a
torturer someone is does not depend on how good the world is, as a
result of the fact that they exist, how good of an action something is
need not depend on how good the world is, as a result of the fact that it happens.
Indeed, if it did, then the evaluative standards governing actions
would be quite different from those governing nearly everything
else.
#### 3.1.2 Problems in Principle vs. Problems of Implementation
Classical consequentialism, and its instantiation in the form of
utilitarianism, has been well-explored, and its advantages and costs
cannot be surveyed here. Many of the issues for classical
consequentialism, however, are issues for details of its exact
formulation or implementation, and not problems *in principle*
with its appeal to the evaluative in order to explain the deontic. For
example, the worry that consequentialism is too demanding has been
addressed within the consequentialist framework, by replacing
"best" with "good enough" -- substituting
a "satisficing" conception for a "maximizing"
one (Slote [1989]). For another example, problems faced by certain
consequentialist theories, like traditional utilitarianism, about
accounting for things like justice can be solved by other
consequentialist theories, simply by adopting a more generous picture
about what sort of things contribute to how good things are (Sen
[1982]).
In section 3.3 we'll address one of the most central issues
about classical consequentialism: its inability to allow for
*agent-centered constraints*. This issue *does* pose an
in-principle general problem for the aspiration of consequentialism to
explain deontic categories in terms of the evaluative. For more, see
the entry on
consequentialism and utilitarianism.
#### 3.1.3 Other Teleological Theories
Universalizable egoism is another familiar teleological theory.
According to universalizable egoism, each agent ought always to do
whatever action has the feature that, of all available alternatives,
it is the one such that, were she to do it, things would be best
*for her*. Rather than asking agents to maximize the good, egoism
asks agents to maximize what is good *for them*. Universalizable
egoism shares many features with classical consequentialism, and
Sidgwick found both deeply attractive. Many others have joined
Sidgwick in holding that there is something deeply attractive about
what consequentialism and egoism have in common -- which
involves, at minimum, the teleological idea that the deontic is to be
explained in terms of the evaluative (Portmore [2005]).
Of course, not all teleological theories share the broad features of
consequentialism and egoism. Classical Natural Law theories (Finnis
[1980], Murphy [2001]) are teleological, in the sense that they seek
to explain what we ought to do in terms of what is good, but they do
so in a very different way from consequentialism and egoism. According
to an example of such a Natural Law theory, there are a variety of
natural values, each of which calls for a certain kind of distinctive
response or respect, and agents ought always to act in ways that
respond to the values with that kind of respect. For more on natural
law theories, see the entry on
the natural law tradition in ethics.
And Barbara Herman has prominently argued for interpreting
Kant's ethical theory in teleological terms. For more on
Herman's interpretation of Kant, see the entry on
Kant's Moral Philosophy,
especially section 13. Philip Pettit [1997] prominently
distinguishes between values that we are called to
"promote" and those which call for other responses.
As Pettit notes, classical consequentialists hold that all values are
to be promoted, and one way of thinking of some of these other kinds
of teleological theories is that like consequentialism they explain
what we ought to do in terms of what is good, but unlike
consequentialism they hold that some kinds of good call for responses
other than promotion.
### 3.2 Fitting Attitudes
In contrast to teleological theories, which seek to account for
deontic categories in terms of evaluative ones, Fitting Attitudes
accounts aspire to account for evaluative categories -- like good
*simpliciter*, *good for*, and attributive good -- in
terms of the deontic. Whereas teleology has *implications* about
value but is not itself a theory primarily *about* value, but
rather about what is right, Fitting Attitudes accounts *are*
primarily theses about value -- in accounting for it in terms of
the deontic, they tell us what it is for something to be good. Hence,
they are theories about the nature of value.
The basic idea behind all kinds of Fitting Attitudes account is that
"good" is closely linked to "desirable".
"Desirable", of course, in contrast to
"visible" and "audible", which mean
"able to be seen" and "able to be heard", does
not mean "able to be desired". It means, rather, something
like "correctly desired" or "appropriately
desired". If being good just is being desirable, and being
desirable just is being correctly or appropriately desired, it follows
that being good just is being correctly or appropriately desired. But
*correct* and *appropriate* are deontic concepts, so if
being good is just being desirable, then *goodness* can itself be
accounted for in terms of the deontic. And that is the basic idea
behind Fitting Attitudes accounts (Ewing [1947], Rabinowicz and
Ronnow-Rasmussen [2004]).
#### 3.2.1 Two Fitting Attitudes Accounts
Different Fitting Attitudes accounts, however, work by appealing to
different deontic concepts. Some of the problems facing Fitting
Attitudes views can be exhibited by considering a couple of exemplars.
According to a formula from Sidgwick, for example, the good is what
ought to be desired. But this slogan is not by itself very helpful
until we know more: desired by whom? By everyone? By at least someone?
By someone in particular? And for which of our senses of
"good" does this seek to provide an account? Is it an
account of good *simpliciter*, saying that it would be good if
\(p\) just in case \_\_\_\_ ought to desire that \(p\), where
"\_\_\_\_" is filled in by whoever it is, who is supposed to
have the desire? Or is it an account of "value" claims,
saying that pleasure is good just in case pleasure ought to be desired
by \_\_\_\_?
The former of these two accounts would fit in with the
"good-first" theory from section 1.1.4; the latter would
fit in with the "value-first" theory. We observed in
section 1.1.4 that "value" claims don't admit of
comparatives in the same way that other uses of "good" do;
this is important here because if "better" simpliciter is
prior to "good" simpliciter, then strictly speaking a
"good-first" theorist needs to offer a Fitting Attitudes
account of "better", rather than of "good".
Such a modification of the Sidgwickian slogan might say that it would
be better if \(p\) than if \(q\) just in case \_\_\_\_ ought to desire
that \(p\) more than that \(q\) (or alternatively, to prefer \(p\) to
\(q\)).
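Schematically, and in my own notation rather than the entry's, the modified slogan reads as follows, with \(O\) for the relevant "ought" operator and the blank left open, as above, for whoever is supposed to have the desire:

```latex
% Fitting Attitudes account of "better than" simpliciter, on the
% modified Sidgwickian slogan; the blank marks the deliberately
% unspecified subject of the desire.
\text{it would be better if } p \text{ than if } q
\;\leftrightarrow\;
O\big(\,\underline{\hspace{1.5em}}\ \text{desires that } p
\text{ more than that } q\,\big)
```

Filling in the blank in different ways (everyone, at least someone, some particular person) yields distinct versions of the account.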
In *What We Owe to Each Other*, T.M. Scanlon offered an
influential contemporary view with much in common with Fitting
Attitudes accounts, which he called the *Buck-Passing* theory of
value. According to Scanlon's slogan, "to call something
valuable is to say that it has other properties that provide reasons
for behaving in certain ways with respect to it." One important
difference from Sidgwick's view is that it appeals to a
different deontic concept: *reasons* instead of *ought*. But
it also aspires to be more neutral than Sidgwick's slogan on the
specific response that is called for. Sidgwick's slogan required
that it is *desire* that is always relevant, whereas
Scanlon's slogan leaves open that there may be different
"certain ways" of responding to different kinds of
values.
But despite these differences, the Scanlonian slogan shares with the
Sidgwickian slogan the feature of being massively underspecified. For
*which* sense of "good" does it aspire to provide an
account? Is it really supposed to be directly an account of
"good", or, if we respect the priority of
"better" to "good", should we really try to
understand it as, at bottom, an account of "better than"?
And crucially, which are the "certain ways" that are
involved? It can't just be that the speaker has to have some
certain ways in mind, because there are some ways of responding such
that reasons to respond in that way are evidence that the thing in
question is *bad* rather than that it is good -- for
example, the attitude of *dread*. So does the theory require that
there is some particular set of certain ways, such that a thing is
good just in case there are reasons to respond to it in any of
*those* ways? Scanlon's initial remarks suggest rather that
for each sort of thing, there are different "certain ways"
such that when we say that *that* thing is good, we are saying
that there are reasons to respond to it in those ways. This is a
matter that would need to be sorted out by any worked out view.
A further complication with the Scanlonian formula is that appealing
in the analysis to the bare existential claim that there *are*
reasons to respond to something in one of these "certain
ways" faces large difficulties. Suppose, for example, that there
is some reason to respond in one of the "certain ways",
but there are competing, and weightier, reasons not to, so that all
things considered, responding in any of the "certain ways"
would be a mistake. Plausibly, the thing under consideration should
not turn out to be good in such a case. So even a view like
Scanlon's, which appeals to reasons, may need, once it is more
fully developed, to appeal to specific claims about the *weight*
of those reasons.
#### 3.2.2 The Wrong Kind of Reason
Even once these kinds of questions are sorted out, however, other
significant questions remain. For example, one of the famous problems
facing such views is the *Wrong Kind of Reasons* problem (Crisp
[2000], Rabinowicz and Ronnow-Rasmussen [2004]). The problem
arises from the observation that intuitively, some factors can affect
what you ought to desire without affecting what is good. It may be
true that if we make something better, then other things being equal,
you ought to desire it more. But we can also create *incentives*
for you to desire it, without making it any better. For example, you
might be offered a substantial financial reward for desiring something
bad, or an evil demon might (credibly) threaten to kill your family
unless you do so. If these kinds of circumstances can affect what you
ought to desire, as is at least intuitively plausible, then they will
be counterexamples to views based on the Sidgwickian formula.
Similarly, if these kinds of circumstances can give you *reasons*
to desire the thing which is bad, then they will be counterexamples to
views based on the Scanlonian formula. It is in the context of the
Scanlonian formula that this issue has been called the "Wrong
Kind of Reasons" problem, because if these circumstances do give
you reasons to desire the thing that is bad, they are reasons of the
wrong kind to figure in a Scanlon-style account of what it is to be
good.
This issue has recently been the topic of much fruitful investigation,
and investigators have drawn parallels between the kinds of reason to
desire that are provided by these kinds of "external"
incentives and familiar issues about pragmatic reasons for belief and
the kind of reason to intend that exists in Gregory Kavka's
Toxin Puzzle (Hieronymi [2005]). Focusing on the cases of desire,
belief, and intention, which are all kinds of mental state, some have
claimed that the distinction between the "right kind" and
"wrong kind" of reason can be drawn on the basis of the
distinction between "object-given" reasons, which refer to
the object of the attitude, and "state-given" reasons,
which refer to the mental state itself, rather than to its object
(Parfit [2001], Piller [2006]). But questions have also been raised
about whether the "object-given"/"state-given"
distinction is general enough to really explain the distinction
between reasons of the right kind and reasons of the wrong kind, and
it has even been disputed whether the distinction tracks anything at
all.
One reason to think that the distinction may not be general enough, is
that situations very much like Wrong Kind of Reasons situations can
arise even where no mental states are in play. For example, games are
subject to norms of correctness. External incentives to cheat --
for example, a credible threat from an evil demon that she will kill
your family unless you do so -- can plausibly not only provide
you with reasons to cheat, but make it the case that you ought to. But
just as such external incentives don't make it appropriate or
correct to desire something bad, they don't make it a correct
move of the game to cheat (Schroeder [2010]). If this is right, and
the right kind/wrong kind distinction among reasons really does arise
in a broad spectrum of cases, including ones like this one, it is not
likely that a distinction that only applies to reasons for mental
states is going to lie at the bottom of it.
Further discussion of fitting attitudes accounts of
value and the wrong kind of reasons problem can be found in the entry
on
fitting attitude theories of value.
#### 3.2.3 Application to the Varieties of Goodness
One significant attraction to Fitting Attitudes-style accounts, is
that they offer prospects of being successfully applied to attributive
good and *good for*, as well as to good *simpliciter*
(Darwall [2002], Ronnow-Rasmussen [2009], Suikkanen [2009]). Just
as reasons to prefer one state of affairs to another can underwrite
one state of affairs being better than another, reasons to choose one
can-opener over another can underwrite its being a better can opener
than the other, and reasons to prefer some state of affairs *for
someone's sake* can underwrite its being better for that
person than another. For example, here is a quick sketch of what an
account might look like, which accepts the good-first theory from
section 1.1.4, holds as in section 1.1.2 that good *simpliciter*
is a special case of attributive good, and understands attributive
"good" in terms of attributive "better" and
"good for" in terms of "better for":
>
>
> **Attributive better:** For all kinds *K*, and things
> *A* and *B*, for *A* to be a better *K*
> than *B* is for the set of all of the right kind of reasons to
> choose *A* over *B* when selecting a *K* to be
> weightier than the set of all of the right kind of reasons to choose
> *B* over *A* when selecting a *K*.
>
>
>
> **Better for:** For all things *A*, *B*, and
> *C*, *A* is better for *C* than *B* is
> just in case the set of all of the right kind of reasons to choose
> *A* over *B* on *C*'s behalf is weightier
> than the set of all of the right kind of reasons to choose *B*
> over *A* on *C*'s behalf.
>
>
>
If being a good *K* is just being a better *K* than most (in some
comparison class), and "it would be good if \(p\)" just
means that \(p\)'s obtaining is a good state of affairs, and
value claims like "pleasure is good" just mean that other
things being equal, it is better for there to be more pleasure, then
this pair of accounts has the right structure to account for the full
range of "good" claims that we have encountered. But it
also shows how the various senses of "good" are related,
and allows that even attributive good and *good for* have, at
bottom, a common shared structure. So the prospect of being able to
offer such a unified story about what the various senses of
"good" have in common, though not the exclusive property
of the Fitting Attitudes approach, is nevertheless one of its
attractions.
### 3.3 Agent-Relative Value?
#### 3.3.1 Agent-Centered Constraints
The most central, in-principle problem for classical consequentialism
is the possibility of what are called *agent-centered
constraints* (Scheffler [1983]). It has long been a traditional
objection to utilitarian theories that because they place no intrinsic
disvalue on wrong actions like murder, they yield the prediction that
if you have a choice between murdering and allowing two people to die,
it is clear that you should murder. After all, other things being
equal, the situation is stacked 2-to-1 -- there are two deaths on
one side, but only one death on the other, and each death is equally
bad.
Consequentialists who hold that killings of innocents are
intrinsically bad can avoid this prediction. As long as a murder is at
least twice as bad as an ordinary death not by murder,
consequentialists can explain why you ought not to murder, even in
order to prevent two deaths. So there is no in-principle problem for
consequentialism posed by this sort of example; whether it is an issue
for a given consequentialist depends on her axiology: on what she
thinks is intrinsically bad, and how bad she thinks it is.
But the problem is very closely related to a genuine problem for
consequentialism. What if you could prevent two murders by murdering?
Postulating an intrinsic disvalue to murders does nothing to account
for the intuition that you still ought not to murder, even in this
case. But most people find it pre-theoretically natural to assume that
even if you should murder in order to prevent thousands of murders,
you shouldn't do it in order to prevent just two. The constraint
against murdering, on this natural intuition, goes beyond the idea
that murders are bad. It requires that the badness of your own murders
affects what you should do more than it affects what others should do
in order to prevent you from murdering. That is why it is called
"agent-centered".
#### 3.3.2 Agent-Relative Value
The problem with agent-centered constraints is that there seems to be
no single natural way of evaluating outcomes that yields all of the
right predictions. For each agent, there is some way of evaluating
outcomes that yields the right predictions about what she ought to do,
but these rankings treat that agent's murders as contributing
more to the badness of outcomes than other agents' murders. So
as a result, an *incompatible* ranking of outcomes appears to be
required in order to yield the right predictions about what some other
agent ought to do -- namely, one which rates *his* murders
as contributing more to the badness of outcomes than the first
agent's murders. (The situation is slightly more complicated
-- Oddie and Milne [1991] prove that under pretty minimal
assumptions there is always *some* agent-neutral ranking that
yields the right consequentialist predictions, but their proof does
not show that this ranking has any independent plausibility, and Nair
[2014] argues that it cannot be an independently plausible account of
what is a better outcome.)
As a result of this observation, philosophers have postulated a thing
called *agent-relative value*. The idea of agent-relative value
is that if the *better than* relation is relativized to
*agents*, then outcomes in which Franz murders can be
worse-*relative-to* Franz than outcomes in which Jens murders,
even though outcomes in which Jens murders are worse-relative-to Jens
than outcomes in which Franz murders. These contrasting rankings of
these two kinds of outcomes are not incompatible, because each is
relativized to a different agent -- the former to Franz, and the
latter to Jens.
The idea of agent-relative value is attractive to teleologists,
because it allows a view that is very similar in structure to
classical consequentialism to account for constraints. According to
this view, sometimes called *Agent-Relative Teleology* or
*Agent-Centered Consequentialism*, each agent ought always to do
what will bring about the results that are best-relative-to her. Such
a view can easily accommodate an agent-centered constraint not to
murder, on the assumption that each agent's murders are
sufficiently worse-relative-to her than other agents' murders
are (Sen [1983], Portmore [2007]).
Some philosophers have claimed that Agent-Relative Teleology is not
even a distinct theory from classical consequentialism, holding that
the word "good" in English picks out agent-relative value
in a context-dependent way, so that when consequentialists say,
"everyone ought to do what will have the best results",
what they are really saying is that "everyone ought to do what
will have the best-relative-to-her results" (Smith [2003]). And
other philosophers have suggested that Agent-Relative Teleology is
such an attractive theory that everyone is really committed to it
(Dreier [1996]). These theses are bold claims in the theory of value,
because they tell us strong and surprising things about the nature of
what we are talking about, when we use the word,
"good".
#### 3.3.3 Problems and Prospects
In fact, it is highly controversial whether there is even such a thing
as agent-relative value in the first place. Agent-Relative
Teleologists typically appeal to a distinction between agent-relative
and agent-neutral value, but others have argued that no one has
ever successfully drawn such a distinction in a theory-neutral way
(Schroeder [2007]). Moreover, even if there is such a distinction,
relativizing "good" to agents is not sufficient to deal
with all intuitive cases of constraints, because common sense allows
that you ought not to murder, even in order to prevent *yourself*
from murdering twice in the future. In order to deal with such cases,
"good" will need to be relativized not just to agents, but
to *times* (Brook [1991]). Yet a further source of difficulties
arises for views according to which "good" in English is
used to make claims about agent-relative value in a context-dependent
way; such views fail ordinary tests for context-dependence, and
don't always generate the readings of sentences which their
proponents require.
One of the motivations for thinking that there must be such a thing as
agent-relative value comes from proponents of Fitting Attitudes
accounts of value, and goes like this: if the good is what ought to be
desired, then there will be *two kinds* of good. What ought to be
desired by everyone will be the "agent-neutral" good, and
what ought to be desired by some particular person will be the good
relative-to that person. Ancestors of this idea can be found in
Sidgwick and Ewing, and it has found a number of contemporary
proponents. Whether it is right will turn not only on whether Fitting
Attitudes accounts turn out to be correct, but on what role the answer
to the questions, "who ought?" or "whose
reasons?" plays in the shape of an adequate Fitting Attitudes
account. All of these issues remain unresolved.
The questions of whether there is such a thing as agent-relative
value, and if so, what role it might play in an agent-centered variant
on classical consequentialism, are at the heart of the debate between
consequentialists and deontologists, and over the fundamental question
of the relative priority of the evaluative versus the deontic. These
are large and open questions, but as I hope I've illustrated
here, they are intimately interconnected with a very wide range of
both traditional and non-traditional questions in the theory of value,
broadly construed. |
vasubandhu | ## 1. Biography and Works
### 1.1 Biography (Disputed)
Vasubandhu (4th century C.E.) is dated to the height of the
Gupta period by the fact that, according to Paramartha, he
provided instruction for the crown prince, and queen, of King
"Vikramaditya"--a name for the great
Chandragupta II (r.
380-415).[6]
Vasubandhu lived his life at the center of controversy, and he won
fame and patronage through his acumen as an author and debater. His
writings, packed as they are with criticisms of his
contemporaries' traditions and views, are an unparalleled
resource for understanding the debates alive among Buddhist and
Orthodox (Hindu) schools of his time.
As a young Buddhist scholar monk, Vasubandhu is said to have traveled
from his home in Gandhara to Kashmir, then the heartland of the
Vaibhasika philosophical system, to study. There, students
were sworn to keep the tradition secret amongst themselves. Vasubandhu
questioned Vaibhasika orthodoxy and made enemies there, but
managed to return home an expert in the system. Immediately, he
violated Kashmiri Vaibhasika norms by giving public
teachings, going so far as to dare all comers to debate him at the end
of each day. Vasubandhu's Kashmiri colleagues were livid until
they received a copy of the text from which he was teaching, the root
verses of his *Treasury of the Abhidharma*. This brilliant
summary of their system pleased them immensely. But then, Vasubandhu
published his auto-commentary, which criticized the system in various
ways, primarily from a Sautrantika perspective. Livid again.
Years later, Vasubandhu's Vaibhasika contemporary
Sanghabhadra composed two treatises purporting to refute the
*Commentary*, and traveled to Ayodhya, where Vasubandhu
was living under royal patronage, to challenge the master to a public
debate (Vasubandhu refused). This narrative, reported in Chinese and
Tibetan sources, accounts for the unusual fact that Vasubandhu's
*Commentary* often disagrees with positions affirmed in the
*Treasury*'s root verses.
Vasubandhu's elder half-brother was Asanga, the monk who
meditated in solitude for twelve years until he was able to meet with
and receive teachings directly from the future Buddha, Maitreya.
Asanga thereby became the preeminent expounder of the
Yogacara synthesis of "Great Vehicle"
(Mahayana) Buddhism, writing some texts himself, and
transmitting the so-called "Five Treatises" revealed by
Maitreya. When he was advanced in years, Asanga became worried
about his younger brother's karmic outlook (Vasubandhu had
publicly insulted Asanga's works). So he sent his students
to teach Vasubandhu and convince him to adopt the Yogacara
system. It worked. This narrative provides a reason for
Vasubandhu's having written many texts from the
Yogacara perspective after his prior advocacy of so-called
"Sravaka" positions. It explains why the author
of the *Treasury* and its *Commentary* might have gone
on to write several commentaries on Mahayana scriptures, a
treatise that defends the legitimacy of Mahayana scriptures,
and a series of concise Yogacara syntheses.
The mythologized, hagiographic character of the traditional
biographies, together with the (disputed) belief that they contradict
one another, has led a number of modern scholars to doubt their
veracity. Many have expressed doubts that the mass of extant texts
attributed to Vasubandhu in the Tibetan and Chinese canons could have
been written by a single individual. In the 1950s Erich Frauwallner
proposed a thesis that there were two separate Vasubandhus, whom the
traditional biographies mistakenly combined into one
figure.[7]
Versions of this theory hold sway in some areas of the academy. Some
scholars have called into question the story of Vasubandhu's
conversion from Sravaka (non-Mahayana) to
Yogacara under the influence of his brother Asanga, by
noting a more gradual set of transitions and commonalities across
works previously thought to sit on opposite sides of this
"conversion."[8]
An extension of this argument, advocated today by Robert Kritzer and
colleagues, holds that Vasubandhu was secretly an advocate of
Yogacara all
along.[9]
### 1.2 Doctrinal Positions and Works
Vasubandhu composed works from the perspectives of several different
philosophical schools. In addition, his works often state doctrinal
positions and arguments with which he disagrees, in order to refute
them. For this reason, scholars have used his works, especially his
*Commentary on the Treasury of the Abhidharma* to explicate a
range of Buddhist and non-Buddhist positions prevalent during his
time.
The main Buddhist schools represented in his works are the
Vaibhasikas or Sarvastivadins of Kashmir and
Gandhara, the Vatsiputriyas, the Sautrantikas
(also called Darstantikas), and the
Yogacarins. He also argues with non-Buddhist, Orthodox
(Hindu) positions, including especially Samkhya and
Nyaya-Vaisesika views. It would appear that the only
Buddhist school with which he always expresses disagreement is that of
the Vatsiputriyas, also called the Pudgalavadins or
"Personalists." Vasubandhu's works are most often
said to have been written either from the Sautrantika perspective
or from the Yogacara perspective, depending upon the work.
Yet the verses of his *Treasury of the Abhidharma* summarize
the doctrines of the Kashmiri Vaibhasika school. It is only
in the commentary that the Sautrantika system tends to win the
day, through numerous
arguments.[10]
And, it is quite often the case, even in the *Commentary*,
that Vaibhasika views are left unchallenged or
unharmed.
A great number of texts of the Theravada Abhidharma tradition are
extant in Pali, and a great number of Sarvastivada
Abhidharma texts exist in their Chinese translations. In Sanskrit and
Tibetan, however, nearly all of the "Sravaka"
Abhidharma texts that remain are the works of Vasubandhu and the
commentarial traditions stemming from them. It is tempting to take
this as evidence of Vasubandhu's philosophical mastery, to have
so comprehensively defeated his foes that his tradition dominated from
the 9th century on. Yet we may equally take this as
evidence not of the victory of Sautrantika, but of the influence
of the rising popularity of Yogacara in India and Tibet.
Vasubandhu, a great systematizer of mainstream Abhidharma, provided
arguments and doctrines, and a life story, that paved the way to, and
justified, the later dominance of Mahayana.
Vasubandhu wrote in Sanskrit, but many of his works are known from
their Chinese and Tibetan translations alone. It is beyond the
purposes of this essay to summarize all of the works attributed to
Vasubandhu, or to take into consideration the relevant criteria for
determining his authorship. A quick mention of well-known works must
suffice.[11]
Vasubandhu's most important work of Abhidharma is his
*Treasury of the Abhidharma*
(*Abhidharmakosakarika*), with its
*Commentary*
(*Abhidharmakosabhasya*).[12]
After these, his best known works are his concise, Yogacara
syntheses, the *Twenty Verses*
(*Vimsikakarika*) with its
*Commentary* (*Vimsikavrtti*), the
*Thirty Verses* (*Trimsika*), and the
*Three Natures Exposition*
(*Trisvabhavanirdesa*).[13]
The majority of the arguments discussed below are taken from these
works.
A number of Vasubandhu's shorter independent works are also
available. The *Explanation of the Establishment of Action*
(*Karmasiddhiprakarana*) draws together, synthesizes and
advances somewhat the many discussions of *karma* from the
*Treasury of the Abhidharma*--adding a defense of the
Yogacara concept of the "storehouse
consciousness" (*alayavijnana*). The
*Explanation of the Five Aggregates*
(*Pancaskandhaprakarana*) is a summary of central
Abhidharma terminology that includes some Yogacara
cross-referencing as well. The *Rules for Debate*
(*Vadavidhi*) is lost, but numerous fragments preserved in
later texts show it to be foundational for the development of formal
Buddhist logic and
epistemology.[14]
Vasubandhu wrote a number of commentaries on Buddhist scriptures,
primarily (but not exclusively) those that are classified as
Mahayana Sutras. He also wrote a tremendously
influential work on how to interpret and teach Buddhist scriptures,
entitled the *Proper Mode of Exposition*
(*Vyakhyayukti*), in which (among other things) he
argued for the legitimacy of the Mahayana Sutras (which
he called "Vaipulya
Sutras").[15]
Also quite influential, and still regularly studied in Tibetan
educational settings, are Vasubandhu's commentaries on major
Yogacara works. These include commentaries on three texts
said to have been revealed to Asanga by the bodhisattva
Maitreya, his *Commentary on Distinguishing Elements from
Reality* (*Dharmadharmatavibhagavrtti*),
*Commentary on Distinguishing the Middle from the Extremes*
(*Madhyantavibhagabhasya*), and
*Commentary on the Ornament to the Great Vehicle Discourses*
(*Mahayanasutralamkarabhasya*).
Finally, Vasubandhu is credited with an important commentary on
Asanga's Yogacara *summa*, the
*Commentary on the Summary of the Great Vehicle*
(*Mahayanasamgrahabhasya*).
Among these works, and numerous others attributed to Vasubandhu in the
Tibetan and Chinese canons, one finds countless perspectives and
approaches represented, and many scholars are skeptical as to the
unity of the authorship even of the works listed. Many, however, still
find it sensible to assume that Vasubandhu worked in various genres
and schools, and adopted the norms of each, developing his own
thinking along the way. Yet no narrative of Vasubandhu's
intellectual development has achieved scholarly consensus.
## 2. Major Arguments from the *Treasury of the Abhidharma*
### 2.1 Disproof of the Self
The Buddhist approach to the question of personal continuity attracts
perhaps more attention from contemporary philosophy than any other
aspect of the
tradition.[16]
Vasubandhu's *Commentary on the Treasury of the
Abhidharma* includes, as its final chapter, an extended defense of
the Buddhist doctrine of no-self, entitled, "The Refutation of
the Person." The majority of the argument assumes a Buddhist
interlocutor, and is intended to prove that no Buddhist ought to
accept the reality of a so-called "self"
(*atman*) or "person" (*pudgala*) over
and above the five aggregates (*skandhas*) in which, the Buddha
said, the person consists. (The aggregates, as their name suggests,
are themselves constantly-changing collections of entities of five
categories: the physical, feelings, ideas, dispositions, and
consciousness.) Vasubandhu apparently believed that the
Vatsiputriyas, also called Pudgalavadins or
"Personalists," represented a significant, Buddhist rival.
Late in the chapter, Vasubandhu also provides some arguments that are
targeted at a Vaisesika (Hindu) view of the self.
The chapter begins with a brief statement of the soteriological
purpose of the treatise, and something of a definition of the
Buddha's teachings. Vasubandhu says that for outsiders (meaning
outsiders to Buddhism), there is no possibility of liberation, because
all other systems impose a false construct of a self upon what is
really only the continuum of aggregates (*skandha*). Since
grasping the self generates the mental afflictions
(*klesa*), liberation from suffering is impossible for one
who holds onto the false, non-Buddhist view that the self has
independent reality.
Vasubandhu does not attempt, here, to prove the karmic causality that
justifies his soteriological exclusivism. Instead, he moves directly
to prove the non-existence of the self. What is real, he says, is
known by one of two means: perception or
inference.[17]
Seven things are known directly, by perception. They include the five
objects of the senses (visual forms, sounds, smells, tastes, and
touchables), mental objects (mental images or ideas), and the mind
itself.[18]
What is not known directly can only be known indirectly, by
inference. As an example, Vasubandhu provides an argument that the
five sense *organs* (eye, ear, nose, tongue, skin) can each be
inferred from the awareness of their respective sensory objects. But,
he says, there is no such inference for the self. Since Vasubandhu
believes in universal momentariness (see the section on Momentariness
and Continuity), he would reject Descartes' deduction of
"I am," as an enduring self, from the perception of the
momentary mental event, "I think."
Vasubandhu does not list the five aggregates here, but his discussion
of perception and inference stands in for having done so. Any Buddhist
scholastic would be able to see that he has claimed to have proven the
reality of the twelve sense bases (*ayatana*), and these
twelve are easy to correlate with the five aggregates. The first
aggregate, the physical (*rupa*), includes the five
sensory organs and their five objects. The second, third, and fourth
aggregates--feelings (*vedana*), thoughts
(*samjna*), and dispositions
(*samskara*)--are kinds of mental objects. The
fifth aggregate, consciousness (*vijnana*), is
equivalent to the twelfth sense base, the
mind.[19]
Earlier in the *Treasury* Vasubandhu had included other
elements within the five aggregates (including, for instance,
*avijnapti-rupa*, which is imperceptible, within
the physical), but since he had also provided arguments
*disproving* those elements (see the section on Disproof of
Invisible Physicality), perhaps he felt no compulsion to include them
here. Focusing on the twelve sense bases may be Vasubandhu's way
of providing as clean and parsimonious a Buddhist ontology as he
can.
| **12 Sense Bases** | **5 Aggregates** |
| --- | --- |
| Eye | Physical |
| Visual Form | Physical |
| Ear | Physical |
| Sound | Physical |
| Nose | Physical |
| Smell | Physical |
| Tongue | Physical |
| Taste | Physical |
| Skin/Body | Physical |
| Touchable | Physical |
| Mind | Consciousness |
| Mental Object | Feelings, Thoughts, Dispositions |
In any event, Vasubandhu is affirming the mainstream Buddhist view
that the constantly-changing elements, which seem to be the
possessions of, or the components of, the so-called
"self," are real, while the self is unreal. Buddhists
readily admit that this view is counterintuitive (if it were intuitive
we would all be liberated), and many doubts may arise when it is
posited: If there is no self, then what is the same about me as a
child and as an adult? What is named by my name? What perceives
perceptions, or experiences experiences, or enacts actions, if not the
self? What is memory, without a self? And crucially, from a Buddhist
perspective, what transmigrates from one body to the next, and reaps
the karmic fruit of good and evil deeds, if not a self? These are all
questions of great importance in Buddhist philosophical treatments of
the doctrine of no-self, and Vasubandhu will address them each in due
course. Vasubandhu begins, however, by focusing on certain commonsense
advantages to the no-self view, and exposing the difficulties in
holding to the reality of the self. His first target, then, is
Buddhist "Personalists."
For Vasubandhu, everything that is real or substantial
(*dravya*) is causally efficient, having specifiable
cause-and-effect relations with other entities. Everything that does
not have such a causal basis is unreal, and if anything, it is merely
a conceptual construct, a mere convention (*prajnapti*).
The so-called "person," like everything else, must be one
or the other--real or unreal, causally conditioned or
conceptually constructed. Vasubandhu places it in the latter category.
Notice that, to call something a "conceptual construction"
(*parikalpita*) is, for Vasubandhu, to remove it from the flow
of causality. Abstract entities have no causal impetus and are unreal.
This is Vasubandhu's formalization, and expansion, of the
Buddhist doctrine that all conditioned things are impermanent. Some
Buddhists accept (for instance) that space and nirvana are real,
though unconditioned. Vasubandhu says, instead, that all things that
are engaged with the causal world must be, themselves, conditioned
(this is a corollary of his proof of momentariness), so he rejects the
causal capacity of eternal, unchanging entities. But even setting
aside Vasubandhu's expanded view, it is clearly out of the
question for Buddhists to say that *the self* is unconditioned
and eternal. That is a non-Buddhist view, which Vasubandhu treats
later on.
Vasubandhu thus begins his argument by posing a dilemma for the
Personalists, between saying that the "person" is
uncaused--which would be to adopt a non-Buddhist view--and
specifying the cause. Since the Buddha explained the aggregates to be
the psychophysical elements that make up the person, any posited
"self" must be in some way related to those entities. But
the Personalists will not wish to say simply that the
"person" is caused by the aggregates. For, the aggregates
are temporary and impermanent in the extreme. The whole issue at hand
is how to account for the continuity of the person as the aggregates
change. If the aggregates are the cause(s) of the person, then the
person, too, must change as they change. As Vasubandhu explains in
criticizing a parallel context, "No unchanging quality is seen
to inhere in changing substances." (AKBh
159.20)[20]
If the causal basis is accepted as changing, the "person"
is no longer continuous across the aggregates and through time. So a
"person" caused by the aggregates provides no answer to
the doubts about personal identity raised above.
Yet for Vasubandhu, if the cause cannot be specified, then the person
must be conceptually constructed. He adduces the following as an
example of conceptual construction: When we see, smell, and taste
milk, we have distinct sensory impressions, which are combined in our
awareness. The "milk," then, is a mental construct--a
concept built out of discrete sensory impressions. The sensory
impressions are real, but the milk is not. In the same way, the
"self" is made up of constantly-changing sensory organs,
sensory impressions, ideas, and mental events. These separate,
momentary elements are real, but their imagined unity--as an
enduring "I"--is a false projection.
The Personalists are willing to accept that the person is a conceptual
construction, but they do not accept that this makes it unreal, or
causally inert. They believe in a perceptible "person,"
who is ineffably neither the same as nor distinct from the aggregates,
but who comes into existence in a particular lifetime "depending
upon" the aggregates. Vasubandhu considers this a muddled
attempt to have one's cake and eat it too. He focuses in on this
term, "depending upon." If, by this, the Personalists mean
"dependent as a conceptual object," he says, then really
they are conceding that the term "person" actually just
refers to the aggregates. It is to admit that the aggregates are
understood as unified through a mere conceptual construction,
"person," just as the taste, touch, and smell of milk
generate the conceptual construction, "milk." If, on the
other hand, they are saying the "person" is
*causally* dependent upon the aggregates, then they are saying
that the "person" is caused by the aggregates, which is to
specify its causes among impermanent entities.
There is, for Vasubandhu, no acceptable explanation of conceptual
constructs as real entities. His exclusive dichotomy (real &
causal vs. unreal & conceptual) serves not only as a tool for
refusing a separate self, but also as a method of denying apparent
subject/object relations or apparent substance/quality relations, and
translating them into linear, causal series. For instance, he says
that objects of awareness do not exist as causally significant
entities distinct from consciousness; rather, consciousness is
*caused* by its apparent objects, from which it takes on a
particular shape (*akara*). This is how Vasubandhu
resolves some of the standard challenges to the no-self view, such as
the question of how experience can exist without an experiencer. The
answer is that any given awareness only *appears* to be made up
of two parts, subject and object, a "cognizer" acting upon
a cognitive object. In reality, any given moment of awareness consists
in a full, constructed appearance of subject/object duality without
there actually being any separate subject. The object is an aspect or
shape of the consciousness itself. Similarly, when asked with regard
to memory "who" remembers if not the self, Vasubandhu
rejects the subject/object structure. What is really occurring is only
a series of discrete elements in the continuum of aggregates, which
arise successively, causally linked to one another. The reason I
remember my past is because my aggregates now are the causal result of
the past aggregates whose actions I remember. My consciousness today
arises with the shape of my past aggregates imprinted upon it.
Moving on, the "dependence" relation that the Personalists
are attempting to defend resists none of Vasubandhu's arguments
(a casualty, perhaps, of its appearing as a counterargument within
his text), but the Personalists persist by stating that the relation
between the person and the aggregates is "ineffable." To
this, Vasubandhu argues, interestingly, that ineffability in regard to
an entity that is ostensibly perceived undermines the functionality of
perception as a source of knowledge. Without any way of
distinguishing, clearly, what is perceived directly and what is
perceived "ineffably," all of perception becomes
potentially ineffable. And, perhaps more importantly, not everything
is said by the Buddha to be ineffable, or at least not equally so. For
instance, when the Buddha denies the reality of the self, he does so
*in contradistinction to* the sense bases. If the
"person" also exists as an object of sensory perception,
how can this distinction be upheld?
Vasubandhu's dialogue with the Personalists takes some rather
technical twists and turns, which I will not attempt to cover here.
See the section on scripture for a summary of Vasubandhu's
methods of scriptural interpretation, which he employs decisively to
defend against the Personalist adducement of the Buddha's
silence on the nature of the person. I will here mention only one
further argument against the Personalists, before treating
Vasubandhu's arguments with non-Buddhists.
The Personalist challenges Vasubandhu to explain how it can be, if
everything is only a constantly-changing fluctuation of
causally-connected separate entities, that things do not arise in
predictable patterns. Why is there so much unpredictable change, the
Personalists are asking, if there is no independent person who enacts
those changes? This is a question that seems to be affirming a greater
degree of freedom of the will than would appear possible were there
not some outside, uncaused intervener in the causal flow. If I am
reading it correctly, it is something like an incredulous denial that
we could be "mere automatons," given our capacities for
innovation and creativity. Vasubandhu's answer is to emphasize
the possibility of change in a self-regulating
system.[21]
Vasubandhu points out that in fact, there *are* predictable
patterns to the causal fluctuations of thoughts, so it is a mistake to
call them unpredictable. Seeing a woman, he says, causes you to think
thoughts that you tend to think when you look at women. (If we needed
it, here's confirmation that the implied reader is a
heterosexual male--though probably a celibate one.) If you
practice leaving a woman alone in your thoughts (for instance, by
practicing thinking of the woman's overprotective father, as
monks are taught to do), then when you see a woman you'll reject
her in your thoughts. If you do not practice this, other thoughts
arise: "...then those [thoughts] which are most frequent
[or clearest], or most proximate arise, because they have been most
forcefully cultivated, except when there are simultaneous special
conditions external to the body." (Kapstein 2001a: 370) This is
an argument that undermines a naive trust in the independence of
the will. But of course, Vasubandhu adds, just because the mind is
subject to conditions does not mean that it will always be the same.
On the contrary, it is fundamental to the definition of conditioned
things that they are always changing.
It is with this setup that Vasubandhu turns to address the
non-Buddhist sectarians, asking them to account as elegantly as his
causal, no-self view does, for the changing flux of mental events.
For, the non-Buddhist opponent believes in an independent self who is
the agent and controller of mental actions. If that is so, Vasubandhu
challenges them, why do mental events arise in such disorder? Just
what kind of a controller is the "self"? This is an
important point, because it implies that Buddhists might use as
evidence what is introspectively obvious to any
meditator--namely, the difficulty of controlling the mind. If the
"self" is an uncaused agent, why can't I, for
instance, focus my concentration indefinitely? Non-Buddhist
(orthodox/Hindu) traditions generally affirm that the mind must be
controlled, and harnessed, by the self; this is one of the central
meanings of the term *yoga* (lit. "yoking") in
classical Hinduism. And, of course, the reason that such
"yoking" is necessary is that the mind has its own causal
impetus, which overwhelms the self unless and until the self attains
liberation. Vasubandhu asks why, if changes in experience must be
admitted to originate in the fluctuations of the mind, it should be
believed that the "yoking" and other willful, agent-driven
events originate *outside* of the mind in an unchanging
"self." The need for such an explanation is evidence that
the "self" requires a more burdensome (less simple)
theoretical structure than might appear at first blush.
The Buddhist view has an in-built explanation for the difficulty of
changing the mind's dispositions. Each temporary collection of
psychophysical entities generates, through causal regularities,
another collection in the next moment. The mind's present
dispositions are conditioned by its past, and its every experience is
only an expression of its internal transformations, triggered of
course by influences coming from outside the individual. It is widely
considered a difficulty for the Buddhist no-self view that it has
trouble accounting for our intuitive senses of continuity and control.
But here we see Vasubandhu turning a disadvantage into an advantage.
The intuitive sense of control is a mistake. This is introspectively
obvious and is implicit in the non-Buddhist's admission that a
mental series, distinct from the "self," is dependent upon
its own causal impetus. Of course, the non-Buddhists have a panoply of
theories explaining the relation between the self and the mental
events with which it is complexly entwined. This is territory over
which Indian thought contemplates free will and determinism.
Vasubandhu does not enter into these debates in any detail. His
interest is only to indicate the Buddhist position's comparative
simplicity and causal parsimony.
In the last stages of the chapter, Vasubandhu's non-Buddhist
believer in a permanent self proposes several potential
counter-arguments. Here Vasubandhu addresses a number of common
worries about the no-self view, including the questions of experience
and agency without a self. With regard to experience, Vasubandhu
simply makes reference to the argument above in which the
subject/object relation was described in a causal line, with the
object causing a consciousness to take on the "shape" or
appearance of an object. An individual, momentary consciousness does
not need to be possessed by a self in order to have a certain
appearance. With regard to agency, Vasubandhu again redescribes the
structure under consideration--in this case, the act/agent
structure--as a unitary, causal line: "From recollection
there is interest; from interest consideration; from consideration
willful effort; from willful effort vital energy; and from that,
action. So what does the self do
here?"[22]
(Kapstein 2001a: 373) The point, again, is not that there is no
action, but that action takes place as one event in a causal series,
no element of which requires an independent agent.
The last argument in the work addresses the crucial, Buddhist concern
about karmic fruits. If there is no self, how can there be karmic
results accruing to the agent of karmic acts? What can it mean to say
that, if I kill someone, I go to hell, when there is no
"I"? The way the question is phrased is in terms of a
technical question about the nature of karmic retribution. The way
that karma works in the non-Buddhist schools is, naturally, via the
self. In one way or another, karmic residue adheres to the soul/self.
If there is no soul that is affected by the karmic residue, how can
you be affected at a later time by a previous cause? How can something
I do now affect me at the time of my death, if at that time the deed
will be in the past?
Vasubandhu gives a clear, if quick, reply. A complete answer to this
question would require a fuller discussion of the question of
continuity.[23]
(See the sections Momentariness and Continuity and Disproof of
Invisible Physicality below.) The short answer is simply that the past
action inaugurates a causal series, which eventuates in the result at
a later time via a number of intermediate steps. When I act now, it
does not alter some eternal soul, but it does alter the future of my
aggregates by sparking a causal series. As he does many times
throughout his disproof of the person, Vasubandhu encourages his
opponents to recognize the term "I" as simply referring to
the continuum of aggregates. The conceptual construction
"I" is then understood to be only a manner of speaking, a
useful shorthand. He mentions that when I say "I am pale,"
I know that it is only my body, not my eternal self, that is pale. Why
not apply such figurative use to the term overall? Then, when I say
that "I" experience the result of "my"
actions, it can be seen to be both clear and accurate. Granted, it
*seems* like there is a real "self." But it only
looks that way, just as a line of ants looks like a brown stripe on
the ground. Close philosophical and introspective attention reveals
that what seemed like a solid, coherent whole is in fact a false
mental construction based upon a failure to notice its countless,
fluctuating parts.
### 2.2 Momentariness and Continuity
For centuries before Vasubandhu, Buddhist philosophers of the
Sarvastivadin tradition had believed in, and argued for, the
doctrine of universal
momentariness.[24]
This view was a Buddhist scholastic interpretation and expression of
the Buddha's doctrine that all things in the world of ordinary
beings were subject to causes and conditions, and therefore
impermanent. Buddhists rejected the notion of substances with changing
qualities, and affirmed instead that change was logically impossible.
To be, for Buddhists, is to express certain inalienable
characteristics, whereas to change must be to exhibit the nature of a
different being. Vasubandhu adopts this view, employing it in many
contexts: "As for something that becomes different: that very
thing being different is not accepted, for it is not acceptable that
it differs from itself." (AKBh 193.9-10)
One can see how the impossibility of change, coupled with the doctrine
of impermanence, served to prove that all things persisted for only a
moment. Vasubandhu certainly shared this view, and he drew upon the
premises of impermanence and the impossibility of change to establish
momentariness in his own works. Yet, as von Rospatt (1995) has shown,
Vasubandhu added a new twist to the argument. What he added was that
things *must* self-destruct, for destruction cannot be caused.
And why not? Because a cause and a result are real entities, and the
ostensible object of a destruction is a non-existent. How, he asks,
can a non-existent be a result?
Given that things must bring about their own destruction, then,
Vasubandhu needs only to recall the impossibility of change to
establish momentariness. If things have it as part of their nature to
self-destruct, they must do so immediately upon coming into being. If
they do not have it as part of their nature, it can never become
so.[25]
The great difficulty for such a position is accounting for apparent
continuities. The standard Buddhist explanation is that usually, when
things go out of existence, they are replaced in the next moment by
new elements of the same kind, and these streams of entities cause the
appearance of continuity. Modern interpreters often illustrate the
point with the example of the apparent motion on a movie screen being
caused by a quick succession of stills. This is said to be the case
with the many entities that appear to make up the continuous self, and
of course this was the main reason the Buddha affirmed his doctrine of
impermanence in the first place. Yet for some phenomena, to call their
continuity merely apparent causes philosophical problems, even for
Buddhists. Consequently, Vasubandhu, like his Sarvastivada
forebears, was repeatedly preoccupied with the need to account for
continuity.
Many of the more prominent problems in continuity have already been
mentioned in the discussion of Vasubandhu's disproof of the self
(see above), including the problem of memory, the problem of the
reference of a name, and the problem of karmic causality across
multiple lifetimes. See also the section on the disproof of invisible
physicality for Vasubandhu's rejection of the
Vaibhasika response to a specific problem of
continuity.
One issue of great importance for the Vaibhasika and
Yogacara traditions, and consequently of interest to current
scholarship,[26]
is a problem specific to Buddhist meditation theory. The problem
comes from an apparent inconsistency among well-founded early Buddhist
scriptural positions. On the one hand, there was the orthodox belief
that the body was kept alive by consciousness. Even in deep sleep, it
was believed that there was some form of subtle consciousness that was
keeping the body alive. On the other hand, there was the very old
belief, possibly articulated by the Buddha himself, that there are six
kinds of consciousness, and that each of them is associated with one
of the six senses--the five traditional senses, plus the mental
sense (which observes mental objects). The problem was that there are
some meditative states that are defined as being completely free of
*all six* sensory consciousnesses. So the question becomes,
What keeps the meditator's body alive when all consciousness is
cut off?
Related to this is the problem that, given that each element can be
caused only by a previous element of a corresponding kind (in an
immediately preceding moment), there does not seem to be any way for
the consciousness, once cut off, to *restart*. The distinctive,
early Yogacara doctrine of the "store
consciousness" (*alayavijnana*) or the
"hidden consciousness"--the consciousness that is
tucked away in the body--was first introduced to solve these
continuity problems. Equally problematic is the same issue in reverse:
Without some doctrinal shift there does not seem to be any way that
beings born into formless realms, with no bodies, could be reborn
among those with physical form. The later, Yogacara view
that everything is only appearance takes care of this, too, by
eliminating the need for real physicality. (See the section concerning
disproof of invisible physicality.) Vasubandhu sets out these problems
in the *Treasury of the Abhidharma* without resolving them;
when he writes from a Yogacara perspective, he resolves them
with recourse to the store consciousness and appearance-only.
### 2.3 Disproof of Invisible Physicality
In a complex but significant passage from the *Treasury of the
Abhidharma*, Vasubandhu provides refutations to arguments that
defend the necessity of there being a special kind of physicality,
called "unperceived physicality," or "invisible
physicality,"
(*avijnaptirupa*).[27]
As is made clear throughout these arguments, this entity
(*dharma*) was affirmed by the Kashmiri Vaibhasikas
in order to account for the karmic effects of our physical and vocal
actions.
The need for such an entity reflects a conundrum unique to Buddhist
morality, premised as it is upon the lawlike, causal regularity of
karma operating in the absence of any divine intervention. The issue
at hand is not in itself the question of karmic continuity in the
absence of a self, but rather a particular aspect of that larger
problem. (See the sections concerning disproof of the self and
momentariness and continuity.) For this argument, then, let us agree
that the continuum of consciousness carries on after death, and that
one's future rebirth is therefore determined by the shape and
character of one's consciousness at death. Given that, we can
see how a mental action--say, holding a wrong view, or slandering
the Buddha in one's thoughts--can have consequences for
one's future, by directly affecting the shape of one's
consciousness. But how do actions of speech and body--such as
insulting, or hitting someone--bear their karmic fruit in the
mental continuum? Surely forming the intention to hit someone is a
consequential mental act. We might think that Buddhists would simply
say that it is karmically equivalent, then, to *intend* to do
something (which is a mental act) and to actually *do it*.
Although there is some blurring of the line here (the Buddha does give
great moral weight to merely intending to act), intentions and the
bodily (or vocal) acts that follow from them are understood as
distinct types of actions with distinct karmic results. Furthermore,
the Buddha is said to have indicated that the karmic effect--and
so the moral significance--of *doing* something is
additionally dependent upon the *success* of the action.
Attempted murder is not punished to the same degree as murder, even
karmically.
These distinctions accord with widely held moral instincts, but the
Abhidharma philosophers needed a full, causal explanation for why the
successful results of physical actions bring about their distinctive
effects in one's consciousness. I can see how my plunging a
knife into someone affects him; but how his *dying* (which may
happen later, in the hospital) affects *me* is invisible, or
uncognized. How does what goes around, come around? The
Vaibhasika answer is that, since his death is a physical
event, there must be another physical event--the invisible
physicality--which impinges upon me.
Vasubandhu rejects this entity, and provides a compelling explanation
of the Buddha's differentiation between intentions, acts, and
success that does not need to appeal to invisible physicality.
Although here Vasubandhu answers through the mouth of another (and so
presents this position without explicitly endorsing it), he advocates
something like this position more forcefully in his later
*Explanation of the Establishment of
Action*.[28]
He admits that there must be a karmic difference between (1) merely
intending, (2) intending and acting, and (3) intending, acting and
accomplishing the deed. But the difference does not come from the
accomplishment of the result itself--that is, it does not come
from the death of the murder victim. Rather, the karmic effect is the
result of my experience and my beliefs. If I experience myself
stabbing someone, and I *know* that the person died (even if by
reading the newspaper the next day), I gain a fuller karmic result
than I would have by simply intending to act.
The Vaibhasikas present a number of other arguments that
suggest the necessity of invisible physicality, all of which follow a
similar logic. A close parallel to the example above is a murder by
proxy; why do I experience a karmic result if the hit man I've
hired does his job, but not if he doesn't? My action was the
same either way. The Vaibhasikas also point to the
enduring, karmic benefits of having taken vows to refrain from
negative actions such as killing or stealing. These vows generate a
karmic benefit in spite of their being fulfilled through not acting.
How is this non-action different from that of someone who
*just* *happens* not to be killing or stealing? Another
argument suggests that advanced practitioners in refined meditative
states cannot be said to be practicing the eight-fold path of the
Buddha's teachings, since it includes "right
speech," "right action" and "right
livelihood"--but no action at all can be taken while in
meditative equipoise. Finally, another argument points out that a
donor gains great karmic merit when the monk who receives and eats her
gift of food later uses its energy to attain a great meditative
accomplishment.
Such arguments draw upon numerous scriptures to indicate that morally
significant effects seem to take place in the world even when physical
actions are no longer possible or relevant. In each case, Vasubandhu
reinterprets the scripture, or the situation, or both, in such a way
as to make it sensible to describe the effect without appealing to
invisible physicality. In the case of the vow, Vasubandhu says that a
vow helps a vow-taker to remember not to do immoral deeds, and the
disciplinary vows of monks have the effect of transforming the whole
character of the mind. Therefore a disciplined vow-taker does not act
without the thought of the vow arising. The vow does not cause a
magical imprint on some invisible entity; it transforms the
dispositions of the vow-taker. The same is true of the meditating
practitioner; there is no lack of "right action" during
meditation, because the meditator's continuum of aggregates is
still *disposed* to act properly after the meditation session
is complete. Such arguments exhibit Vasubandhu's general pattern
of reducing apparent continuities to causal chains of momentary
events, and eliminating as many extraneous elements from the system as
possible.
### 2.4 Disproof of a Creator God
In the *Treasury*, Vasubandhu provides an argument against the
existence of Isvara, a God who is the sovereign creator of
everything.[29]
For Vasubandhu, theism is essentially the absurd notion that all of
creation might be the result of a single cause. He begins his disproof
with what has been called verbal equivocation (Katsura 2003, p. 113),
and might also be taken as a joke or a pun:
> Living beings proceed from conditions; nor is it the case that
> "the cause (*karana*) of all the world is a single
> Isvara, Purusa or Pradhana [i.e., God]."
>
> What is the reason (*hetu*) for this?
>
> If you think it is established upon the provision of a reason
> (*hetu*), this is a refusal of the statement, "the cause
> (*karana*) of all is a single Isvara."
> (AKBh 101.19-21)
The pun is in the fact that the word for "reason,"
*hetu*, also means "cause." It is as if Vasubandhu
is saying, "If you're looking for my [just] *cause*
for believing this, then clearly you don't believe God is the
*cause* of everything! If I believe it, it must be
*'cause* God made me believe it!" This is somewhat
silly, and apparently an equivocation between reasons and causes.
A more serious reading, though, rises to the surface when we notice
that Vasubandhu does not, in fact, verbally equivocate. He uses the
word *hetu* to refer to reasons, and another word,
*karana*, to refer to causes. In his own philosophy,
Vasubandhu separates these two regions of reality quite strictly:
"Causes" are real, substantial entities, whereas
"reasons" are conceptual constructs devoid of causal
capacity. (Cf. Davidson's "Anomalism of the Mental";
see the entries on
anomalous monism
and
Donald Davidson.)
This is not to say that Vasubandhu would endorse Sellars's
notion that ideas which inhabit the normative "space of
reasons" are irreducible to those within the "space of
causes." (See the discussion of the myth of the given, in the
entry on
Wilfrid Sellars.)
He would not; on the contrary, for Vasubandhu, to say that something
is a conceptual construct is to say that it is *caused*, but in
a way quite different from how it appears. This is developed
explicitly in his Yogacara theory of the Three Natures of
all entities, where one of the natures is said to be *how things
appear*, and another is *how things are caused*. (See the
discussion of three natures and non-duality.)
Yet here in the *Treasury* Vasubandhu is only gesturing in this
direction. His non-Buddhist opponents would also like to distinguish
reasons from causes, so instead of attempting to unify their causal
basis himself, Vasubandhu suggests that their commitment to the unitary
nature of divine causality commits them to conflating the two. The
serious point behind the opening joke, then, is that philosophy
requires that we make distinctions among ideas and entities, and
debate assumes differences of opinion and perspective. For Vasubandhu,
such differences are evidence that it is impossible that all things
have issued from a single cause. He begins, then, with a shot across
the bow, which expresses just how seriously he will hold the theists
to their claim that all things have one cause.
The majority of Vasubandhu's argument consists in his support
for the notion that the diversity of the world--especially its
changes through time--is inconsistent with the notion of a
single creator. An important, implicit premise in this argument is the
notion that things do not change their defining, characteristic
natures. Everything is what it is: "As for something that
becomes different: that very thing being different is not accepted,
for it is not acceptable that it differs from itself." (AKBh
193.9-10) Thus, if God is the cause of everything, God is
*always* the cause of everything--or, if God's
nature at time *t* is to be the cause of everything, then God
must cause everything at time *t*.
In response, the theist proposes a variety of predictable
alternatives. One option is that God is posited as the creator of each
specific thing at its specific time due to God's having unique,
temporally-indexed desires. One person is caused by God's desire
for a person to be born in Gandhara in the fourth century, and
another person is caused by God's desire for a person to be born
in New York City in 1969. Vasubandhu responds that if that is the
case, then things are not caused by a unitary God, but rather each
thing is caused by one of God's countless, distinct desires.
Furthermore, the theist must then account for the causes
of those specific desires.
In addition, specific temporally-indexed desires imply that something
is interfering with God's creative action--namely, the
particular temporal location. If God is not only capable of creating
something, but has it as His inherent nature to do so, what could
possibly prevent that thing's occurrence
at every moment in which God is expressing God's own nature
(which should be every moment forever)? If God has the desire for a
tree in Washington Square Park in 2010, it should always be 2010, with
a tree in Washington Square Park. Why should God wait? (This argument
relies upon a typical Buddhist penchant for particulars and moments as
the touchstone of the
real.[30])
One unwise option the theist proposes is that God's creation
changes through time because God is using other
elements--material causes, for instance--to bring things
into being, and those elements follow their own causal laws. That, of
course, makes God's actions subject to other causes, and amounts
to the Buddhist view that all things are the results of multiple
causes and conditions. Furthermore, when it is proposed that God
engages in creation for his own pleasure (*priti*),
Vasubandhu replies that this means that God is not sovereign because
He requires a means (*upaya*) to bring about an end,
namely his own pleasure. As a cap to this argument, Vasubandhu mocks
the idea that a praiseworthy God should be satisfied with the evident
suffering of sentient beings. Thus, along with his causal argument,
Vasubandhu throws in a barb from the argument from evil.
Vasubandhu's final argument in this section is to say that
theists are, or ought to be, committed to the denial of apparent
causes and conditions. If you think that God is the only cause, then
you must deny that seeds cause sprouts. This may seem like another
overly literalistic reading of the nature of God as the only cause,
but is God really deserving of the name "cause," if He has
intermediary causes do His work of creation? For, Vasubandhu points
out, ordinarily we do not require that something be an uncaused cause
in order to call it a "cause." We call a seed a cause of a
sprout, even if it is itself caused by a previous plant. So to say
that God is the "cause" of the sprout whereas the seed is
*not* amounts to denying a visible cause and replacing it with
an invisible one. To avoid this, the theist might say that to call God
the "cause" is to say that God is the original cause of
creation. But if this is what God is doing to gain the name
"cause," and God is beginningless (uncaused), then
creation too must be beginningless--which means that, once again,
God has nothing to do.
## 3. Approaches to Scriptural Interpretation
Vasubandhu was a master of Buddhist scripture. A great number of
scriptural commentaries are attributed to him, and we find Vasubandhu
citing scripture literally hundreds of times throughout his
philosophical works. (Pasadika 1989) Furthermore, Vasubandhu
is one of very few Indian or Tibetan authors to have written a
treatise dedicated to the interpretation and exposition of Buddhist
scripture, called the *Proper Mode of Exposition*
(*Vyakhyayukti*).[31]
Here is not the place for a thorough survey of this material, which
awaits future research. It is, however, worth sketching a
characteristic pattern in Vasubandhu's citation of scripture,
and noting its relation to his view of scripture in the *Proper
Mode of Exposition*.
Like other scholastic philosophers, Vasubandhu often cites scriptures
in support of an argument. Unlike many others, however, Vasubandhu
regularly presents a scriptural passage as a potential
counter-argument, only to refute the traditional interpretation of the
passage. The opponent's interpretation, moreover, is quite often
the most literal or direct reading of the passage in question. One of
Vasubandhu's characteristic philosophical moves, then, is to
argue in favor of a secondary interpretation of a given scriptural
passage that might, on its face, be thought to disprove his view.
A prominent, paradigmatic example of this use of scriptural citation
takes place in the *Twenty Verses Commentary*. As will be
discussed below (in the section concerning defense of appearance
only), this text is dedicated to proving the central
Yogacara thesis of "appearance only"
(*vijnaptimatra*)--that is, that everything in
the many worlds of living beings is only apparent, with no real
existence outside of perception. A central challenge to this
idealistic view, then, comes from the objector's simple
statement that, "If the images of physical forms, and so on,
were just consciousness, not physical things, then the Buddha would
not have spoken of the existence of the sense bases of physical form,
and so on." (*Vims* 5) Vasubandhu thus adduces
the Buddha's word as the basis of the objection. Then, he turns
to further scriptures to account for, and explain away, this extremely
well-founded Buddhist doctrine.
Vasubandhu's response to his internal objector is to explain
that the passages in question need to be understood as spoken with a
"special intention" (*abhipraya*--on
which, see Broido 1985, Collier 1998 and Tzohar 2018). Furthermore, in
addition to defending the idea of reinterpreting scripture in general,
Vasubandhu defends the act of reinterpretation in this case. In fact,
we may distinguish four separate elements of the argument: (1)
Vasubandhu explains the Buddha's general practice of speaking
words that are not technically true, but are beneficial to his
listeners. (2) He proves that such interpretations are sometimes
necessary, by providing two passages that without such a reading would
prove the Buddha to have contradicted himself. (3) He provides an
explanation of the Buddha's "special intention" in
the passage under discussion--that is, he declares what the
Buddha *actually* believes on the topic. And, (4) he suggests
why, in this case, the Buddha might have chosen to speak the words he
did, instead of simply saying what he really believed.
For (1) and (2), Vasubandhu explains that the Buddha's statement
that "there is a self-generated living being
(*satva*)" must be taken as having a "special
intention," so as to prevent its contradicting his statement
that "there is no self or living being (*satva*), only
entities with causes." (*Vims* 5) Vasubandhu
says that the Buddha spoke the first statement in the context of
explaining karma, in order to indicate that the continuity of the
mental series was not "cut off" (*uccheda*). If the
Buddha had taught about the no-self doctrine at the time he delivered
the discourse on "the self-generated living being," his
listeners would have been led to the false view that the mental
stream comes to an end at death. Such a false view is often translated
as "nihilism," and among Buddhists it is considered far
more dangerous than the false belief in a self, because it entails the
further false view that our moral actions have no afterlife
consequences.[32]
If you believe that, then you are liable to act in a way that
*will* lead you to hell.
The danger of nihilism is perhaps the primary justification for the
Buddha's having spoken words that were technically untrue.
Vasubandhu provides a similar example in the "Disproof of the
Person" section of the *Treasury*. (See the section
concerning disproof of person.) There, Vasubandhu was arguing with the
Personalists, who pointed to a scripture in which the Buddha remained
silent when asked whether or not the person existed after death. The
Personalists, of course, saw this as evidence that the Buddha affirmed
the ineffability of the person. Vasubandhu says, in response, that the
Buddha explained quite clearly (to Ananda, his disciple, after
the questioner left) why there had been no good way to answer. To
affirm a self would be to imply a false doctrine, but to deny the self
(in this case, stating the truth) would cause the confused questioner
to fall into a still greater falsehood: Namely, the thought of
formerly having had a self, but now having none. This is the view,
once again, of the self being "cut off" or
"destroyed" (*uccheda*), the view that leads to
moral nihilism.
The danger of the false view of "destruction" is an
extreme case, in which the Buddha is silent so as not to have to lead
his disciple into a dangerous view. To what degree can this paradigm
be extended to suggest other, perhaps less dire, cases in which the
Buddha may have neglected to speak frankly? For Vasubandhu,
apparently, the presence of a contradiction between two statements
from the Buddha is one key. The Buddha cannot have literally meant
*both* that there was no living being (*satva*)
*and* that living beings (*satva*) came into being in a
particular bodily form. A similar contradiction is brought to bear in
the discussion with the Personalists, during their argument over the
functionality of memory in the absence of an enduring self. There,
Vasubandhu cites a scripture in which the Buddha says that whoever
claims to remember past lives is only remembering past aggregates in
his own continuum--a quote which vitiates the Personalists'
citation of the Buddha having said of his own past life, "I was
such-and-such a person, with such-and-such a form."
To return to the case at hand from the *Twenty Verses
Commentary*, although Vasubandhu does not indicate a direct
contradiction in the Buddha's words, he does explain (3) what he
takes to be the truth behind the Buddha's statements about the
"sense bases of physical form, and so on." The literal
meaning of the twelve sense bases, recall, is simply the listing of
the six sensory organs and their six sensory objects. The truth behind
this, for Vasubandhu, is the Yogacara causal story, which
tells how conscious events arise from karmic seeds within the
mind's own continuum. Each consciousness comes into being with a
particular subject/object appearance imprinted upon it by its karmic
seed. There are no subjects and no objects, only "dual"
appearances. It is this causal story that was the "special
intention"--the real truth--behind the Buddha's
words affirming the sense bases. Of course, the Yogacara
causal story denies the functionality of the senses, and so the
"special intention" is a direct contradiction of the
literal meaning of the sense bases.
In response to this supposed explanation, the opponent asks for the
Buddha's justification for speaking in this way, and (4)
Vasubandhu provides two separate reasons, targeted at two different
levels of listeners. For students who do not yet understand the truth
of appearance-only, the Buddha taught the sense bases in order to help
them to understand the absence of self in persons. This seems to
cohere with Vasubandhu's *Treasury* argument that uses
the sense bases as a way to explain away the apparent self. For
students who *do* understand the truth of appearance-only, the
doctrine of the sense bases is useful for helping *them* to
understand the absence of self in the elements--that is, in the
sense bases
themselves.[33]
Thus, for Vasubandhu, the Buddha's teachings are still useful,
even when they are understood to have been spoken from a provisional
standpoint.
Perhaps this point can be extended still further. For Vasubandhu,
while scriptures are always valid sources of knowledge, to excavate
their meaning often requires in-depth, sophisticated, philosophical
reasoning. We find Vasubandhu mocking his opponents, in one place in
the *Treasury*, for understanding "the words"
rather than "the meaning" of a scriptural passage in
question. (204.1) Vasubandhu often argues against his opponents'
overly literal interpretations of scripture as a crucial part of his
argument against their philosophical positions. When he turns to a
formal analysis of expository method and theory, Vasubandhu defends
this broad, rationalist approach to scripture.
In the *Proper Mode of Exposition*, Vasubandhu defends the
legitimacy of the Mahayana Sutras against a
"Sravaka" opponent, and defends their
figurative, Yogacara interpretation against a
Madhyamika opponent. Cabezon (1992) usefully summarizes
Vasubandhu's arguments here in three general categories:
Arguments based on the structure of the canon; arguments based on
doctrinal contents; and arguments based upon "intercanonical
criteria for authenticity." As we have seen in his
*Treasury* and *Twenty Verses* arguments, Vasubandhu
argues that naively to require that all scriptures be interpreted
literally is to insist that the Buddha repeatedly contradicted
himself. He cites many internal references to lost or unknown texts,
and argues that this shows that no lineage or school can claim to have
a complete canon. Unlike his Madhyamika opponents, Vasubandhu
believes that the Mahayana Sutras must be read under a
"special intention," so as to prevent the danger of
nihilism. As Cabezon writes, the Sravaka and Madhyamaka
positions both share an over-emphasis on literal readings.
Vasubandhu's emphasis on "correct exegesis" (a
literal translation of *Vyakhyayukti*) shows the
Yogacara to be the school of greatest hermeneutical subtlety
and sophistication.
## 4. Major Yogacara Arguments and Positions
### 4.1 Defense of Appearance Only
The *Twenty Verses* is Vasubandhu's most readable, and
evidently analytic, philosophical text, and it has consequently drawn
a significant degree of modern scholarly
attention.[34]
It begins with a simple statement, which Vasubandhu defends
throughout the brief text and commentary: "Everything in the
three realms is only appearance
(*vijnaptimatra*)."[35]
The three realms are the three states into which Buddhists believe
living beings may be reborn. For all beings except for Buddhas and
advanced bodhisattvas, the three realms make up the universe.
Vasubandhu glosses this statement with citations from scripture that
make it clear that he means to say that there are no things
(*artha*), only minds (*citta*) and mental qualities
(*caitta*). He says that the experience of the three realms is
like the appearance of hairs in front of someone with eye disease. It
is the experience of something that does not exist as it appears.
Although the term has a history of controversy among interpreters of
Yogacara, it seems safe to say that the *Twenty
Verses* defends at least some form of idealism. (See the section
on the controversy over Vasubandhu as idealist.)
After stating his thesis that everything is "only
apparent," Vasubandhu immediately voices a potential
counter-argument, which consists of four reasons why the three
realms' being appearance only is supposedly impossible: First and second, why are things
restricted to specific places and times, respectively? Apparent
objects can appear anywhere, at any time. Third, why do beings in a
given place and time experience the same objects, and not different
objects according to their distinct continua? And fourth, why do
objects perform causal functions in the real world, when merely
apparent mental objects do not?
These objections aim to show that the world cannot be merely
apparent by arguing that the elements of ordinary experience
behave in ways that what is merely apparent does not. Vasubandhu sets
up, and meets, these objections in order to prove the possibility, and
hence the viability of the theory, that everything in experience is
appearance only. He shows, essentially, that the objector has a narrow
view of what is possible for merely apparent things; appearances are
not *necessarily* limited in the ways the objector thinks.
Notice that it is not incumbent upon Vasubandhu here to prove that
*all* instances of the merely apparent transcend the
limitations assumed by the objector. He has not set up this objection
in order to try to prove, absurdly, that *all* mental images,
for instance, are spatially restricted, or that *all* mere
appearances perform observable causal functions in the real world. On
the contrary, one upshot of the recognition that things are appearance
only will be that things do not have the cause-and-effect structure
that we ordinarily take them to have. In any event, the positive
argument that ordinary experience is, in fact, illusory, comes later
and takes the form of an argument to the best explanation. Before an
explanation can be the best, though, it must be a possibility. So
here, all Vasubandhu must do to counter these initial objections is
provide, for each, a single example of a mental event that exemplifies
the behavior that the objector claims is only available to physical
objects.[36]
To defeat the objections that mental objects are not restricted in
space and time, Vasubandhu provides the counterexample that in dreams
objects often appear to exist in one place and time, as they do in
ordinary waking reality. In a dream, I can be looking at shells on a
beach on Long Island, during the summer of my eighth year. It is only
upon waking that I come to realize that the dream objects (the shells,
the beach) were only mental fabrications, temporally dislocated, with
no spatial reality. Thus, what is merely apparent *can*
sometimes have the character of appearing in a particular place and
time. To say they do not is to misremember the
experience.[37]
Next, to defeat the objection that unlike ordinary physical objects,
merely apparent objects are not intersubjectively shared by different
beings, Vasubandhu provides the counterexample that in hell, demonic
entities appear to torment groups of hell beings. This is a case of a
shared hallucination. When the objector wonders why the demons might
not in fact be real, Vasubandhu appeals to karma theory: Any being
with sufficient merit--sufficient "good
karma"--to generate a body capable of withstanding the
painful fires of hell would never be born into hell in the first
place. Any creature in hell that is not suffering must be an
apparition generated by the negative karma of the tormented.
Finally, to defeat the objection that merely apparent objects do not
produce functional causal results, Vasubandhu provides the memorable
counterexample of a wet dream, in which an evident, physical result is
produced by an imagined sexual encounter with a nonexistent lover.
With the initial objections to the mere possibility of the "appearance
only" theory quelled, Vasubandhu turns to his main positive
proof. This consists in a systematic evaluation of every possible
account of sensory objects as physical, which ultimately leads
Vasubandhu to conclude that no account of physicality makes more
sense, or is more parsimonious, than the theory that it is appearance
only.
Before turning to Vasubandhu's treatment of physics, it is worth
stopping to note the crucial importance that Buddhist karma theory
plays in Vasubandhu's argument, overall. First, the proof of
shared hallucinations in hell depends upon the particulars of the
Buddhist belief in the hells. Of course, we might have believed in
shared hallucinations even without believing in karma. But the
tormenters in hell play an important, double role in
Vasubandhu's argument. He has the objector raise the question
again, and suggest as a last-ditch effort that perhaps, the tormenters
are physical entities generated and controlled by the karmic energies
of the tormented. At this, Vasubandhu challenges his objector: If
you're willing to admit that karma generates physical entities,
and makes them move around (pick up swords and saws, etc.), so that
they might create painful results in the mental streams of the
tormented, why not just eliminate the physical? Isn't it simpler
to say that the mind generates mental images that torment the
mind?[38]
This is a crucial question, because it resonates beyond hell beings,
across Buddhist karmic theory.
In the *Treasury of the Abhidharma*, Vasubandhu expressed the
widely-held Buddhist view that in addition to causing beings'
particular rebirths, karma also shapes the realms into which beings
are reborn and the non-sentient contents of those realms. Such a
belief provides Indian religions with answers to questions often
thought unanswerable by western theisms, such as why the mudslide took
out my neighbor's house, but not mine. But this view of karmic
causality requires that the physical causes of positive or negative
experiences are linked back to our intentional acts. (For more on the
continuity problems associated with karmic causality, see the
discussion of the disproof of invisible physicality.) Vasubandhu does
not say so explicitly, but if it is easier to imagine the causes of a
mind-only hell demon than a physical one, it should also be easier to
imagine the causes of a mind-only mudslide--assuming that both
are generated as a karmic repercussion for the beings that encounter
them. The background assumption that any physical world must be
subject to karma therefore places the realist on the defensive from
the start. Can the external realist adduce a theory intellectually
satisfying enough to counter Vasubandhu's suggestion that we
throw up our hands and admit that what appears to be out there is only
in our minds?
It is in response to *this* that the objector cites scripture,
saying that the Buddha did after all, teach of the sense bases, saying
that eyes perceive physical forms, and so on. As discussed above,
Vasubandhu reinterprets this scripture as having a "special
intention" (*abhipraya*). (See the section
concerning scriptures.) The Buddha did not intend to affirm the
ultimate reality of such entities. Yet the objector is not satisfied
with Vasubandhu's alternate reading of the Buddha's words.
In order to doubt such an explicit, repeated statement, Vasubandhu
needs to *prove* that it could not be interpreted directly.
Even if the appearance-only world is possible, and even if it accounts
better for karmic causality than the physical world, a direct reading
of scripture is still supported, unless and until its doctrines are
shown to be internally inconsistent.
### 4.2 Disproof of Sensory Objects
In order to disprove the possibility of external objects, Vasubandhu
delves into the atomic theory of the Kashmiri Vaibhasikas
as well as that of the ("orthodox"/Hindu)
Vaisesikas.[39]
The purpose here is to undermine every possible theory that might
account for perception as caused by non-mental entities.
Vasubandhu's argument at this stage is entirely based in
mereology--the study of the relations between parts and wholes.
He argues first that atomism--the view that things are ultimately
made up of parts that are themselves partless--cannot work. Then,
he argues that any reasonable explanation of objects of perception
*must* be atomic, by arguing that the alternative--an
extended, partless whole--is incoherent. Vasubandhu takes it that
together, these conclusions prove that external objects must be unreal
appearances.
I can only summarize Vasubandhu's sophisticated mereological
arguments. He begins with the assertion that anything that serves as a
sensory object must be a whole made up of basic parts, a bare
multiplicity of basic parts, or an aggregate. But none of these can
work. A whole made up of parts is rejected on the grounds that things
are not perceived over and above their parts. This is a well-developed
Buddhist Abhidharmika view, which coheres with the rejection of a
self over and above the aggregates. A bare multiplicity of partless
parts is rejected on the grounds that separate atoms are not perceived
separately. Thus the only sensible option is a grouping of
parts--an aggregate--that somehow becomes perceptible by
being joined together.
The combination of partless entities, however, is conceptually
impossible. Vasubandhu points out that if they combine on one
"side" with one atom and on another "side" with
another, then those "sides" are parts. The opponent must
account for the relation between those parts and the whole, and we are
brought back to the beginning. Furthermore, if they are infinitesimal,
they cannot be combined into larger objects.
It is proposed, instead, that perhaps a partless entity may be
extended in space, and so perceived. But perception is generated by
contact between a sensory organ and its object. This requires the
object to put up some kind of resistance. But if a thing has no parts,
then its near side is its far side, which means that to be adjacent to
it is to have passed it by. Partless atoms are therefore logically
incapable of providing the resistance that is definitive of
physicality and the basis for sensory contact (Vasubandhu says that
they cannot produce light on one side and shade on the other). This
confirms that entities must be combined into larger groupings in order
to be functional and perceptible, which has already been shown
impossible.
If atoms and extended partless entities are impossible, so also are
unitary sensory objects. Vasubandhu is well practiced in the
Abhidharma arguments reducing apparent wholes to their composite
entities. Suppose we say I see something that has both blue and yellow
in it. How, if it is one unitary thing, can I see these two colors at
different points? What makes one point in an object different from
another point, if the object is of a single nature? This is clearly
parallel to the argument against Isvara as a single cause of
all things. A similar case is movement across an extended thing. As
with a partless atom, to step on the near side of a partless extended
singular entity would be to step on its far side, too. So there can be
no gradual movement across a singular entity. As Vasubandhu concludes,
one cannot but derive the need for partless, atomic units from such
reasoning. But partless atomic units are imperceptible. So, perception
is impossible; apparently perceived objects are only apparent.
The main part of his argument settled, Vasubandhu entertains another
set of counterarguments, which are the charges of solipsism and moral
nihilism: How, the internal objector asks, can beings interact with
one another? What is the karmic benefit of helping or harming, when we
are all merely apparent beings? Vasubandhu's direct, candid
response to these challenges may be viewed as a failure of
imagination, or a surprising willingness to bite the bullet of
anti-realism. Mental streams, he says, interact in essentially the
same way as we imagine physical objects to interact. Minds affect
minds directly. When you speak to me, and I hear you, we ordinarily
think that your mind causes your mouth to produce sounds that my ears
pick up and transform into mental events in my mind. Vasubandhu takes
Occam's razor to this account and says that--given that we
have no sensible account of physicality, let alone mental causation of
a physical event and physical causation of a mental event--it
makes *more sense* if we eliminate everything but the evident
cause and the result: Your mind and mine.
Note that Vasubandhu is not saying that nothing in our appearances
exists; he is saying, on the contrary, that mere appearances bear all
the reality that we need for full intersubjectivity and moral
responsibility. It seems worth noting in this context that something
similar to Vasubandhu's option just may be the only option
available for a modern physicalist who believes that human minds are
constituted by, or out of, neurons. Human persons, for such a modern,
do not exist as we imagine them to, and are not causally constituted
in ways that we can intuit or even comprehend, except vaguely. Does
this mean that we must advert to moral nihilism or solipsism?
Vasubandhu's alternative is to affirm our pragmatic acceptance
of our own, and others' experiences and causal responsibilities,
and not to imagine that we must await an account of physicality before
trusting the effectiveness of our mental, moral, causality. Read in
this way, Vasubandhu seems much closer to Madhyamaka philosophy than
is ordinarily assumed; see Nagao (1991).
### 4.3 Three Natures and Non-Duality
Vasubandhu did the work of defending the controversial,
Yogacara thesis of appearance-only
(*vijnaptimatra*) in his *Twenty Verses*. In
his short, poetic works the *Thirty Verses* and the *Three
Natures Exposition*, he does not entertain
objections.[40]
Instead, he draws together the basic doctrinal and conceptual
vocabulary of the Yogacara tradition and forms an elegant,
coherent system. The *Thirty Verses* includes a complete
treatment of the Buddhist path from a Yogacara perspective.
In order to understand Vasubandhu's contribution in this text,
it would be necessary to place it against the wide backdrop of
previous Yogacara doctrine. This scholarly work is ongoing.
Here, let us focus instead on introducing the basics of the intricate
doctrinal structure formulated in the *Three Natures
Exposition*.
The *Three Natures Exposition* takes as its topic, and its
title, the three natures of reality from a Yogacara
perspective: These are the fabricated nature
(*parikalpita-svabhava*), the dependent nature
(*paratantra-svabhava*) and the perfected nature
(*parinispanna-svabhava*). In Yogacara
metaphysics, things no longer have only one nature, one
*svabhava*; rather, things have three different, if
interconnected, natures. We may note immediately how fundamentally
this differs from the *Treasury*, in which the apparent purpose
is, at all times, to determine the singular nature of each entity
(*dharma*). In that context, to discover that an entity must be
admitted to have more than one, distinct nature, would be to discover
that that entity is unreal. Something that changes, for instance, is
internally inconsistent because it exhibits multiple natures.
Vasubandhu employs exactly such arguments to deny the ultimate reality
of many specific entities (*dharma*) in the *Treasury*.
Such entities, he says, are unreal; like a "self," they
are only conceptual fabrications (*parikalpita*).
One of the basic distinguishing features of Great Vehicle
(Mahayana) metaphysics is the affirmation that not only the
person, but *all* entities (*dharma*) are empty of a
"self" or an inherent nature (*svabhava*).
That means, in the terms laid out in the *Treasury*, that
*all dharmas* are only conceptual fabrications. But that is not
the only way of looking at things. To say that all things have
*three* inherent natures is not to take back the Great
Vehicle's denial of inherent natures, but rather to explain it.
It is to say that what we ordinarily take to be a thing's
individual character or identity is best understood from three angles,
so as to explode its unity. The first nature is the fabricated nature,
which is the thing as it appears to be. Of course, to use this term is
to indicate the acceptance that things do not really exist the way
they appear. This is a thing's nature as it might be defined and
explained in ordinary Abhidharma philosophy, but with the added
proviso that we all know that this is not really how things work. The
second nature is the dependent nature, which Vasubandhu defines as the
causal *process* of the thing's fabrication, the causal
story that brings about the thing's apparent nature. The third
nature, finally, is the emptiness of the first nature--the
*fact* that it is unreal, that it does not exist as it
appears.
Let's take the most important example, for Buddhists, of a thing
that appears real but has no nature: the self. With this as our
example of something mistakenly thought to have a nature
(*svabhava*), we may run through its *three*
natures. The self as it appears is just my self. I seem to be here, as
a living being, typing on a keyboard, thinking thoughts. That is the
first nature. The second nature is the causal story that brings about
this seeming self, which is the cycle of dependent origination. We
might appeal to the standard Abhidharma causal story, and say that the
twelve sense bases or the five aggregates, causally conjoined, bring
about the conceptually-fabricated self. In the Yogacara,
this causal story is entirely mental, of course, so we cannot appeal
to the sense bases themselves as the real cause. Rather, the sense
bases only *appear* to be there due to karmic conditioning in
my mental stream. For this reason, the second nature should instead be
said to be the causal series according to which the mental seeds
planted by previous deeds ripen into the appearance of the sense
bases, so that I think I am perceiving things--which, in turn,
makes me think I have a self. Finally, the third nature is the
non-existence of the self, the fact that it does not exist where it
appears. Of course, if there were a real self, I could not have
provided my explanation of how it comes to appear to be there. In
Vasubandhu's telling, the three natures are all one reality
viewed from three distinct angles. They are the appearance, the
process, and the emptiness of that same apparent entity.
With this in mind, we can read the opening verses of the *Three
Natures Exposition*:
>
>
> 1. Fabricated, dependent and perfected: So the wise understand, in
> depth, the three natures.
>
>
>
> 2. What appears is the dependent. How it appears is the fabricated.
> Because of being dependent on conditions. Because of being only
> fabrication.
>
>
>
> 3. The eternal non-existence of the appearance as it appears: That
> is known to be the perfected nature, because of being always the
> same.
>
>
>
> 4. What appears there? The unreal fabrication. How does it appear? As
> a dual self. What is its nonexistence? That by which the nondual
> reality is
> there.[41]
>
>
>
These verses emphasize the crystalline internal structure of the
three natures. "What appears" is one nature (the
dependent), whereas "how it appears" is the fabricated.
"The eternal non-existence of the appearance as it
appears" is the perfected. These are ways of talking of the same
entity, or event, from different perspectives. Also evident here is
the crucial Yogacara concept of "duality." The
fabricated is said to appear as a "dual self," whereas
reality (*dharmata*) is nondual. Since, as we have seen
above, Vasubandhu believes in the selflessness of all dharmas, the
"dual self" is the false appearance of a self that is
attributed to any entity. Things and selves mistakenly appear
"dual." What does this mean?
The twosome denied by the denial of "duality" has many
interpretations across Yogacara thought, but for Vasubandhu
the most important kinds of duality are *conceptual duality*
and *perceptual
duality*.[42]
Conceptual duality is the bifurcation of the universe that appears
necessary in the formation of any concept. When we say of any given
thing that it is "physical," we are effectively saying, at
the same time, that all things are either "physical" or
"not physical." We create a "duality"
according to which we may understand all things as falling into either
one or the other category. This is what it is to define something, and
to ascribe it its characteristic nature (*svabhava*). This
is why every "self" is "dual." To say
something is what it is imposes a duality upon the world. And, given
that the world is in causal flux and does not accord with any
conceptual construct, every duality imposes a false construct upon the
world. To illuminate this with regard to the person, when I say that
"I" exist, I am dividing the universe into
"self" and "other"--me and not-me. Since,
for Buddhists, the self is unreal, this is a mistaken imposition, and
of course many ignorant, selfish actions follow from this fundamental
conceptual error.
In addition, but closely related to conceptual duality, is
*perceptual* duality. This is the distinction between sensory
organs and their objects that appears in perception. When I see a
tree, I have the immediate impression that there is a distinction
between that tree, which I see, and my eye organs, which see the tree.
I take it that my eyes are "grasping" the tree, and
furthermore I understand that the eyes (my eyes) are part of me,
whereas the tree is not part of me. The same is true of all of the
sensory organs and their objects. The Abhidharma system calls the
sensory organs the "internal" sense bases, and the sensory
objects the "external" ones. It is the combination of the
"internal" sense bases upon which I impose the false
construction of self. Thus perception provides the basis for the
conceptual distinction between self and other.
For Yogacara Buddhists, however, sense perception is just as
false a "duality" as are false conceptual constructions.
Given that the external world must be only mind, the sensory
experience that we have of a world "out there" is only a
figment of our imagination. The self/other distinction implicit in
perception is as much a false imposition as is the self/other
distinction implicit in the concept, "self." Perception,
like conceptualization, is only a matter of the mind generating
"dual" images. As counterintuitive as this sounds, it may
be clarified by analogy to the idea of a multi-player virtual reality
game. In a shared virtual reality experience, the first thing the
computer system must be able to track is where, objectively,
everything is (where the various players are, where the castle with
the hidden jewels is, where the dragon is hiding, etc.). Then, when
any new player logs in, the system can place that player somewhere in
the multi-dimensional, virtual world. At that point, the computer must
generate a sensory perspective for that individual. Immediately that
person experiences herself to exist in a world of a certain kind, with
certain abilities to move around, and fight, and so on. But this is
only a trick of the software. The world is unreal, and so is the
player's subjective perspective on that world.
For the Yogacara, our sensory experience is something like
this. We are all only mental, and so the apparent physicalities that
intervene between us are merely false constructions of our deluded
minds. Furthermore, the physical body that we take ourselves to have,
as we negotiate the world, making use of our senses, is also not
there. Both external and internal realities are mistaken to the degree
that we see them as "two" and not "nondual"
constructions arising out of a single mental stream.
Vasubandhu's clearest, most evocative, and most famous
explanation of the three natures appears later on, when they are
analogized to a magician's production of an illusion of an
elephant, using a piece of wood and a magical spell:
>
>
> 27. It is just as [something] made into a magical illusion with the
> power of an incantation (*mantra*) appears as the self of an
> elephant. A mere appearance (*akaramatra*) is
> there, but the elephant does not exist at all.
>
>
>
> 28. The fabricated nature is the elephant; the dependent is its
> appearance (*akrti*); and the nonexistence of an
> elephant there is the perfected.
>
>
>
> 29. In the same way, the fabrication of what does not exist appears as
> the self of duality from the root mind. The duality is utterly
> nonexistent. A mere appearance (*akrtimatraka*)
> is there.
>
>
>
> 30. The root-consciousness is like the incantation. Suchness
> (*tathata*) is understood to be like the piece of wood.
> Construction (*vikalpa*) should be accepted to be like the
> appearance of the elephant. The duality is like the
> elephant.[43]
>
>
>
Notice the two different elements here in the ending of Verse 30: The
"appearance of the elephant" is
"construction," that is, the ongoing process of
imagination that arises from the karmic causal stream. The real
problem, though, "duality," the dangerous, false
distinction between self and other, is analogized to the illusory
elephant itself. If I am at a magic show, I should expect to see the
"appearance of an elephant." That will make the show worth
attending. But if I am not a fool, I will not imagine myself to
*actually* *see an elephant*. That would be to construct
duality. For Vasubandhu, this is a crucial distinction. The basic
problem for living beings in his view is not that we
*experience* an illusory world; the problem is that we are
fooled into *accepting the reality* of our perceptions and our
concepts. Once we no longer believe, once we see the falsity of the
illusion, the illusion goes away and we can come to experience the
truth that lies behind the illusion--the ineffable
"suchness" (*tathata*), the piece of wood.
## 5. Controversy over Vasubandhu as "Idealist"
Let us posit that metaphysical idealism need not be the
counterintuitive view that all that exists is, simply, mental. To call
a view idealist it is sufficient merely that it hold that everything
is *dependent upon* mind--a mind, or many minds. Idealists
hold that the natures or forms of things are mind-dependent. Of
course, such dependence is, in itself, counterintuitive, given that we
generally take the physical world to have existence independent of
minds. Contemporary prevailing wisdom holds that the direction of
causal dependence goes in the opposite direction; the physical world
in fact generates and sustains minds (using neurons to do so).
Idealism says that ideas, or minds, are in one way or other behind, or
prior to, all other forms of existence.
Given this definition, is Vasubandhu, the author of the *Twenty
Verses* and the *Thirty Verses* and probably the *Three
Natures Exposition*, a metaphysical idealist? At first blush, it
seems quite sensible to say that he is. The evident purpose of the
*Twenty Verses* is to argue that the core background
assumptions of Mahayana Buddhism (especially its views on
karma) suggest the idealistic thesis that is asserted in the
text's opening: "In the Great Vehicle, the threefold world
is only
appearance."[44]
Vasubandhu argues throughout the *Twenty Verses* that whatever
aspects of the world appear to be independent of mind are better
explained as mental constructions. What that means is, apparently,
that not only is Vasubandhu an idealist in the sense that he believes
that everything in experience is dependent upon mind, but he also
believes that everything *simply is* mind. This is the radical
"mind only" (*cittamatra*) position that he
believes is necessitated by adherence to a Mahayana
worldview.
Things are not quite this simple, however. First, it is important to
note two levels to Vasubandhu's "idealism." The
first level points to an analysis of what appears in perception.
Vasubandhu says that everything that we ordinarily ascribe existence
in the world--such as a self or an object--is actually only
mental. Tables and nebulae and eyes are mental fabrications that have
no existence apart from their "shape"
(*akara*), the way they
appear.[45]
The second level points to the causal story that brings about such
appearances. Along with other Yogacarins, Vasubandhu argues
that *that causal story too* is only mental. Perception, in the
form of active sensory and mental consciousnesses, is caused by the
"storehouse consciousness"
(*alayavijnana*), which stores, in mental
form, the seeds of all of our future experiences. The storehouse is
constantly replenished, because new "seeds"
(*bija*) are created every time we enact a morally
significant action (*karma*).
It is mental causation, then, that brings about our mental
experiences. It is this second level, then, that confirms Vasubandhu
as an idealist. Not only the reality that we inhabit (which we
ordinarily think of as at least partly non-mental), but also *the
causal supports upon which such a reality depends*, are only
mental. Even if the storehouse consciousnesses of living beings
provided ongoing support to a real, physical world, we would
*still* want to call that view "idealism" for its
insistence on the causal priority of the mental. Since Vasubandhu does
not say as much, but instead says that everything that is created by
minds is itself mental, we might say that in addition to a causal
idealism, Vasubandhu's is a universal, ontological idealism.
Behind both of these levels, however, there is an important sense in
which Vasubandhu is not really an idealist. Like all
Mahayana Buddhists, Vasubandhu believes that whatever can be
stated in language is only conventional, and therefore, from an
ultimate perspective, it is mistaken. Ultimate reality is an
inconceivable "thusness" (*tathata*) that is
perceived and known only by enlightened beings. Ultimately, therefore,
the idea of "mind" is just as mistaken as are ordinary
"external objects." For this reason, we can say that
Vasubandhu is an idealist, but only in the realm of conventions.
Ultimately, he affirms ineffability.
| Level | Domain | Content | Characterization |
| --- | --- | --- | --- |
| **Level 1** | Experience | False Appearances (self, objects, concepts) | Idealism |
| **Level 2** | Causality | Storehouse Consciousness (produces consciousness) | Idealism |
| **Level 3** | Ultimate Reality | Only Buddhas Know (*tathata*) | Ineffability |

Levels of Analysis of Vasubandhu
It is extremely important to keep these levels apart, in order to
prevent Vasubandhu from committing an evident self-contradiction. At
the end of his commentary to verse 10 of the *Twenty Verses*,
Vasubandhu imagines an interlocutor challenging the appearance-only
view by saying that if no dharmas exist at all, then so also
appearance-only doesn't exist. How, then, can it be established?
This charge of self-referential incoherence is faced by anti-realists
of all stripes, and it is familiar to Buddhist philosophers from
Nagarjuna's *Vigrahavyavartani*.
(See the entry on
Nagarjuna
in this encyclopedia.) Vasubandhu's reply in the *Twenty
Verses Commentary* is that appearance-only does not mean the
nonexistence of dharmas. It only means the absence of a nature
(*svabhava*) that is falsely attributed to a constructed
self (*kalpitatmana*), and Vasubandhu is quick to say
that when one establishes appearance-only, appearance-only must
*also* be understood to be itself a construction, without a
self. Otherwise, there would be something other than an appearance,
which would by definition contradict appearance-only. All there is, is
appearance, and everything that appears is a false construction of
self.
We can take two important lessons from this. First,
representation-only cannot be the ultimate truth. If Vasubandhu were
affirming the ultimate reality of the "mind-only" causal
story that brings about appearances, or the appearances themselves
(levels 2 and 1, respectively), then he would surely have said so in
response to the charge of self-referential, nihilistic incoherence.
Instead, his argument adverts to another level for ultimate reality.
Madhyamaka critics of Yogacara have made much of the notion
that the latter have reified the mind and made it a new
"ultimate."[46]
Yet Vasubandhu's position is clearly intended to elude such a
critique.
Second, for Vasubandhu the lack of a self-nature
(*svabhava*) just is, precisely, what he means by
"appearance-only." This is also contrary to a common
understanding of Yogacara which arises from the equation of
"appearance-only" with "mind-only." This
equation is not, technically, a mistake in the case of Vasubandhu. He
says, specifically, that "mind" (*citta*) and
"appearance" (*vijnapti*) are synonyms. The
mistake is what we moderns tend to think, based on a background
assumption of mind/matter Cartesian dualism--namely, that the
expression "mind only" is used to affirm the reality of
the mental at the expense of the physical. Here, reality is certainly
not physical, but that is not Vasubandhu's point. His point is
to equate the lack of independent nature (*svabhava*) with
the product of mental construction. This is a simple statement of
idealism, as mentioned above: Everything that exists must either have
independent existence, or be dependent upon mind, a mental
construction; and since, for the Great Vehicle, nothing has
independent existence, everything is only a mental construction. What
it means to be "mind only," then, is not simply to be made
of mental stuff rather than physical stuff. It is to be a false
construction by a deluded mind. Again, we can see that there is no
possibility that Vasubandhu is talking about the
"ultimate" here.
These points, and the above table, may clarify some of the
difficulties in the ongoing, contentious debates over whether
Vasubandhu and other Yogacarins should be classified as
idealists or mystics. A quarter century ago, Thomas A. Kochumuttom
(1982) argued quite powerfully and sensibly for his time that
Vasubandhu *should not* be called an idealist, but he was
arguing primarily against authors such as Theodore Stcherbatsky and S.
N. Dasgupta, about whom he wrote (200):
>
> All of the above quoted passages clearly show that their authors
> almost unanimously accept *vijnapti-matrata*
> or *prajnapti-matrata* or
> *citta-matrata* as the Yogacarin's
> description of the absolute, undefiled, undifferentiated, non-dual,
> transcendent, pure, ultimate, permanent, unchanging, eternal,
> supra-mundane, unthinkable, Reality, which, according to them, is the
> same as *Parinispanna-svabhava*, or
> *Nirvana*, or Pure Consciousness, or
> *Dharma-dhatu*, or *Dharma-kaya*, or the
> Absolute Idea of Hegel, or the Brahman of Vedanta.
>
One can appreciate Kochumuttom's frustration. Vasubandhu is
cautious not to conflate a description of perceived entities as
reducible to only mind (Level 1) with a description of the ineffable
ultimate reality (Level 3). Vasubandhu explicitly denies that
"mind" has ultimate reality. He is not a Hegelian
idealist.
But not all idealists are Hegelian, absolute idealists. Among
idealisms, Vasubandhu's is more closely aligned with
Kant's, in that both assert that the objects of our experience
are only representations, while both also affirm the reality of
unknowable things in themselves (*Ding an
sich*).[47]
Kochumuttom was aware of this, and even in the course of his extended
argument against using the term "idealism" for Vasubandhu,
he allowed that the term might be appropriate if understood in this
way:
>
> This should not be understood to mean that there are no things other
> than consciousness. On the contrary, it means only that what falls
> within the range of experience are different forms of consciousness,
> while the things-in-themselves remain beyond the limits of experience.
> (48)
>
>
> The form in which a thing is thought to be grasped is purely imagined
> (*parikalpita*), and therefore is no sure guide to the
> thing-in-itself. It is in this sense, and only in this sense, that
> Vasubandhu's system can be called idealism. It by no means
> implies that there is nothing apart from ideas or consciousness. (53)
>
Is the term "idealism" acceptable for the view that, even
if nothing expressible in language has ultimate reality, mental events
are still *more* real than physical events, which do not have
even *conventional* reality? Perhaps "conventionalist
idealism" would be a good term for this view, but surely it sits
squarely within the range of idealisms. To say, with emphasis, as some
do, that this view is *not* idealism, may be to privilege a
narrow understanding of the term.
In addition, the issue may be more than semantic. It will strike some
as overstating the case to say, as Kochumuttom does, that the fact
that everything within perception is only mind "by no means
implies that there is nothing apart from ideas or
consciousness." Vasubandhu's arguments in the *Twenty
Verses* are intended to exclude the possibility of anything at all
that may be coherently *asserted* to be
non-mental.[48]
So the doctrine of appearance-only *does* imply that there is
nothing apart from ideas or consciousness, at least, as far as logic
and implications are allowed to go. The only exceptions would be
things about which we can say nothing. So what could allow us to say
about them that they are
exceptions?[49]
Whether or not one wants to use the word "idealism" for
Vasubandhu thus depends to a large degree upon just how strongly one
wants to lean upon the idea that idealists must affirm the
*ultimate* reality of mind. The main arguments that remain
against viewing Vasubandhu as an idealist, exemplified prominently
today by Dan Lusthaus, emphasize this idea. Given that many Buddhists
in India, Tibet and East Asia have understood the Yogacara
to be affirming the ultimate reality of the mental, the denial of such
a view on Vasubandhu's behalf is an important and useful
corrective. Lusthaus believes that Vasubandhu is not a
"metaphysical idealist" or an "ontological
idealist" but rather an "epistemological idealist,"
meaning that Vasubandhu is concerned only about awareness and its
contents, and is not in the business of affirming or denying anything
about ultimate reality. For this reason, Lusthaus refers to
Yogacara as
"phenomenology."[50]
A final complaint is that the use of the term "idealism"
is anachronistic or for some other reason inappropriate in its
application to pre-modern India. Kochumuttom's point that the
term encompasses too much is well taken. It is one thing (perhaps
already stretching the meaning) to apply the term
"idealism," as R.P. Srivastava (1973) does, to modern
thinkers whose views have been influenced by European idealists, such
as Aurobindo, Vivekananda and Radhakrishnan. But it is anachronistic
if we are to impose upon Vasubandhu the modern associations and
baggage that come along with the term, which would include an advocacy
of private, personal, sacred, "religious experience."
Vasubandhu may be an idealist, in a strict sense, but he is not what
20th century religious studies textbooks call a
"mystic philosopher." Vasubandhu does not, generally,
appeal to his own, individual personal experience as evidence or
confirmation of his positions. On the contrary, Vasubandhu repeatedly
employs reasoning to cast doubts upon the apparent realities evident
through perception, whether external or internal. This is, perhaps, a
reason to avoid describing Vasubandhu as a
"phenomenologist." His work does not take the direct
examination of "experience" as its theme, but, rather, it
draws upon scripture and rational argumentation for its critiques of
the available accounts of reality.

vegetarianism

## 1. Terminology and Overview of Positions
Moral vegetarianism is opposed by moral omnivorism, the view according
to which it is permissible to consume meat (and also animal products,
fungi, plants, etc.).
Moral veganism accepts moral vegetarianism and adds to it that
consuming animal products is wrong. Whereas in everyday life,
"vegetarianism" and "veganism" include claims
about what one *may* eat, in this entry, the claims are simply
about what one may not eat. Moral vegetarianism and moral veganism
agree that animals are among the things one may not eat.
In this entry, "animals" is used to refer to non-human
animals. For the most part, the animals discussed are the land animals
farmed for food in the West, especially cattle, chicken, and pigs.
There will be some discussion of insects and fish but none of dogs,
dolphins, or whales.
Primarily, this entry concerns itself with whether moral vegetarians
are correct that *eating* meat is wrong. Secondarily--but
at greater length--it concerns itself with whether the
*production* of meat is permissible.
Primarily, this entry concerns itself with eating in times of
abundance and abundant choices. Moral vegans need not argue that it is
wrong to eat an egg if that is the only way to save your life. Moral
vegetarians need not argue it is wrong to eat seal meat if that is the
only food for miles. Moral omnivores need not argue it is permissible
to eat the family dog. These cases raise important issues, but the
arguments in this entry are not about them.
Almost exclusively, the entry concerns itself with contemporary
arguments.[1]
Strikingly, many historical arguments and most contemporary arguments
against the permissibility of eating meat start with premises about
the wrongness of *producing* meat and move to conclusions about
the wrongness of *consuming* it. That is, they argue that
It is wrong to eat meat
By first arguing that
It is wrong to produce meat.
The claim about production is the topic of §2.
## 2. Meat Production
The vast majority of animals humans eat come from industrial animal
farms that are distinguished by their holding large numbers of animals
at high stocking density. We raise birds and mammals this way.
Increasingly, we raise fish this way, too.
### 2.1 Animal Farming
Raising large numbers of animals enables farmers to take advantage of
economies of scale but also produces huge quantities of waste,
greenhouse gas, and, generally, environmental degradation (FAO 2006;
Hamerschlag 2011; Budolfson 2016). There is no question of whether to
put so many animals on pasture--there is not enough of it. Plus,
raising animals indoors, or with limited access to the outdoors,
lowers costs and provides animals with protection from weather and
predators. Yet when large numbers of animals live indoors, they are
invariably tightly packed, and raising them close together risks the
development and quick spread of disease. To deal with this risk,
farmers intensively use prophylactic antibiotics. Tight-packing also
restricts species-typical behaviors, such as rooting (pigs) or
dust-bathing (chickens), and makes it so that animals cannot escape
each other, leading to stress and to antisocial behaviors like
tail-biting in pigs or pecking in chickens. To deal with these,
farmers typically dock tails and trim beaks, and typically (in the
U.S., at least) do so without anesthetic. Animals are bred to grow
fast on a restricted amount of antibiotics, food, and hormones, and
the speed of growth saves farmers money, but this breeding causes
health problems of its own. Chickens, for example, have been bred in
such a way that their bodies become heavier than their bones can
support. As a result, they "are in chronic pain for the last 20%
of their lives" (John Webster, quoted in Erlichman 1991).
Animals are killed young--they taste better that way--and
are killed in large-scale slaughterhouses operating at speed. Animals
that farms have no use for--e.g., male chicks on egg-laying
farms--are killed at birth or soon
after.[2]
Raising animals in this way has produced low sticker prices (BLS
2017). It enables us to feed our appetite for meat (OECD 2017).
Raising animals in this way is also, in various ways, morally
fraught.
It raises concerns about its effects on humans. Slaughterhouses,
processing this huge number of animals at high speed, threaten injury
and death to workers. Slaughterhouse work is exploitative. Its
distribution is classist, racist, and sexist with certain jobs being
segmented as paupers' work or Latinx work or women's
(Pachirat 2011).
Industrial meat production poses a threat to public health through the
creation and spread of pathogens resulting from the overcrowding of
animals with weakened immune systems and the routine use of
antibiotics and attendant creation of antibiotic-resistant bacteria.
Anomaly (2015) and Rossi & Garner (2014) argue that these risks
are wrongful because unconsented to and because they are not justified
by the benefits of assuming those risks.
Industrial meat production directly produces waste in the form of
greenhouse gas emissions from animals and staggering amounts of waste,
waste that, concentrated in those quantities, can contaminate water
supplies. The Böll Foundation (2014) estimates that farm animals
contribute between 6 and 32% of greenhouse gas emissions. The range is
due partly to different ideas about what to count as being farm
animals' contributions: simply what comes out of their bodies?
Or should we count, too, what comes from deforestation that's
done to grow crops to feed them and other indirect emissions?
Industrial animal farming raises two concerns about wastefulness. One
is that it uses too many resources and produces too much waste for the
amount of food it produces. The other is that feeding humans meat
typically requires producing crops, feeding them to animals, and then
eating the animals. So it typically requires more resources and makes
for more emissions than simply growing and feeding ourselves crops
(*PNAS* 2013).
Industrial animal farming raises concerns about the treatment of
animals. Among others, we raise cattle, chickens, and pigs. Evidence
from their behavior, their brains, and their evolutionary origins,
adduced in Allen 2004, Andrews 2016, and Tye 2016, supports the view
that they have mental lives and, importantly, are sentient creatures
with likes and dislikes. Even chickens and other
"birdbrains" have interesting mental lives. The exhaustive
Marino 2017 collects evidence that chickens can adopt others'
visual perspectives, communicate deceptively, engage in arithmetic and
simple logical reasoning, and keep track of pecking orders and short
increments of time. Their personalities vary with respect to boldness,
self-control, and vigilance.
We farm billions of these animals industrially each year (Böll
Foundation 2014: 15). We also raise a much smaller number on freerange
farms. In this entry "freerange" is not used in its
tightly-defined, misleading, legal sense according to which it applies
only to poultry and simply requires "access" to the
outdoors. Instead, in the entry, freerange farms are farms that,
ideally, let animals live natural lives while offering some protection
from predators and the elements and some healthcare. These lives are
in various ways more pleasant than lives on industrial farms but
involve less protection while still involving control and early death.
These farms are designed, in part, to make animal lives go better for
them, and their design assumes that a natural life is better, other
things equal, than a non-natural life. The animal welfare literature
converges on this and also on other components of animal well-being.
Summarizing some of that literature, David Fraser writes,
>
>
> [A]s people formulated and debated various proposals about what
> constitutes a satisfactory life for animals in human care, three main
> concerns emerged: (1) that animals should feel well by being spared
> negative affect (pain, fear, hunger etc.) as much as possible, and by
> experiencing positive affect in the form of contentment and normal
> pleasures; (2) that animals should be able to lead reasonably natural
> lives by being able to perform important types of normal behavior and
> by having some natural elements in their environment such as fresh air
> and the ability to socialize with other animals in normal ways; and
> (3) that animals should function well in the sense of good health,
> normal growth and development, and normal functioning of the body.
> (Fraser 2008: 70-71)
>
>
>
In this light, it is clear why industrial farming seems to do less for
animal welfare than freerange farming: The latter enables keeping
animals healthy. It enables happy states ("positive
affect") and puts up some safeguards against the infliction of
suffering. There is no need, for example, to dock freerange
pigs' tails or to debeak freerange chickens, if they have enough
space to stay out of each other's way. It enables animals to
socialize and to otherwise lead reasonably natural lives. A
freerange's pig's life is in those ways better than an
industrially-farmed pig's.
Yet because freerange farming involves being outdoors, it involves
various risks: predator- and weather-related risks, for example. These
go into the well-being calculus, too.
Animals in the wild are subjected to greater predator- and
weather-related risks and have no health care. Yet they score very
highly with regard to expressing natural behavior and are under no
one's control. How well they do with regard to positive and
negative affect and normal growth varies from case to case. Some meat
is produced by hunting such animals. In practice, hunting involves
making animals suffer from the pain of errant shots or the terror of
being chased or wounded, but, ideally, it involves neither pain nor
confinement. Of course, either way, it involves
death.[3]
### 2.2 The Schematic Case Against Meat Production
Moral vegetarian arguments about these practices follow a pattern.
They claim that certain actions--killing animals for food we do
not need, for example--are wrong and then add that some mode of
meat production--recreational hunting, for example--involves those
actions. It follows that the mode of meat production is wrong.
Schematically:
*X* is wrong.
*Y* involves *X*. Hence,
*Y* is wrong.
Among the candidate values of *X* are:
* Causing animals pain for the purpose of producing food when there
are readily available alternatives.
* Killing animals for the purpose of...
* Controlling animals...
* Treating animals as mere tools...
* Ontologizing animals as food...
* Harming humans....
* Harming the environment...
And among the candidate values of *Y* are:
* Industrial animal farming
* Freerange farming
* Recreational hunting
Space is limited and cranking through many instances of the schema
would be tedious. This section focuses on causing animals pain,
killing them, and harming the environment in raising them. On control,
see Francione 2009, DeGrazia 2011, and Bok 2011. On treating animals
as mere tools, see Kant's *Lectures on Ethics*, Korsgaard
2011 and 2015, and Zamir 2007. On ontologizing, see Diamond 1978,
Vialles 1987 [1994], and Gruen 2011, Chapter 3. On harming humans, see
Pachirat 2011, Anomaly 2015, and Doggett & Holmes 2018.
#### 2.2.1 Suffering
Some moral vegetarians argue:
Causing animals pain while raising them for food when there are
readily available alternatives is wrong.
Industrial animal farming involves causing animals pain while raising
them for food when there are readily available alternatives.
Hence,
Industrial animal farming is wrong.
The "while raising them for food when there are readily
available alternatives" is crucial. It is sometimes permissible
to cause animals pain: You painfully give your cat a shot, inoculating
her, or painfully tug your dog's collar, stopping him from
attacking a toddler. The first premise is asserting that causing pain
is impermissible in certain other situations. The "when there
are readily available alternatives" is getting at the point that
there are substitutes available. We could let the chickens be and eat
rice and kale. The first premise asserts it is wrong to cause animals
pain while raising them for food when there are readily available
substitutes.
It says nothing about why that is wrong. It could be that it is wrong
because it would be wrong to make *us* suffer to raise us for
food and there are no differences between us and animals that would
justify making them suffer (Singer 1975 and the enormous literature it
generated). It could, instead, be that it is wrong because impious
(Scruton 2004) or cruel (Hursthouse 2011).
So long as we accept that animals feel--for an up-to-date
philosophical defense of this, see Tye 2016--it is
uncontroversial that industrial farms do make animals suffer. No one
in the contemporary literature denies the second premise, and Norwood
and Lusk go so far as to say that
>
>
> it is impossible to raise animals for food without some form of
> temporary pain, and you must sometimes inflict this pain with your own
> hands. Animals need to be castrated, dehorned, branded, and have other
> minor surgeries. Such temporary pain is often required to produce
> longer term benefits...All of this must be done knowing that
> anesthetics would have lessened the pain but are too expensive. (2011:
> 113)
>
>
>
There is the physical suffering of tail-docking, de-beaking,
de-horning, and castrating, all without anesthetic. Also, industrial
farms make animals suffer psychologically by crowding them and by
depriving them of interesting environments. Animals are bred to grow
quickly on minimal food. Various poultry industry sources acknowledge
that this selective breeding has led to a significant percentage of
meat birds walking with painful impairments (see the extensive
citations in HSUS 2009).
This--and much more like it that is documented in Singer &
Mason 2006 and Stuart Rachels 2011--is the case for the second
premise, namely, that industrial farming causes animals pain while
raising them for food when there are readily available
alternatives.
The argument can be adapted to apply to freerange farming and hunting.
Freerange farms ideally do not hurt, but, as the Norwood and Lusk
quotation implies, they actually do: For one thing, animals typically
go to the same slaughterhouses as industrially-produced animals do.
Both slaughter and transport can be painful and stressful.
The same goes for hunting: In the ideal, there is no pain, but,
really, hunters hit animals with non-lethal and painful shots. These
animals are often--but not always--killed for pleasure or
for food hunters do not
need.[4]
Taken together, the arguments allege that all manners of meat
production in fact produce suffering for low-cost food, typically
food we do not need, and then allege that that justification for
producing suffering is insufficient. Against
the arguments, one might accept that farms hurt animals but deny that
it is even *pro tanto* wrong to do so (Carruthers 1992 and
2011; Hsiao 2015a and 2015b) on the grounds that animals lack moral
status and, because of this, it is not intrinsically wrong to hurt
them (or kill or control them or treat them like mere tools). One
challenge for such views is to explain what, if anything, is wrong
with beating the life out of a pet. Like Kant, Carruthers and Hsiao
accept that it might be wrong to hurt animals when and because doing
so leads to hurting humans. This view is discussed in Regan 1983:
Chapter 5. It faces two distinct challenges. One is that if the only
reason it is wrong to hurt animals is because of its effects on
humans, then the only reason it is wrong to hurt a pet is because of
its effects on humans. So there is nothing wrong with beating pets
when that will have no bad effects on humans. This is hard to believe.
Another challenge for such views, addressed at some length in
Carruthers 1992 and 2011, is to explain whether and why humans with
mental lives like the lives of, say, pigs have moral status and
whether and why it is wrong to make such humans suffer.
#### 2.2.2 Killing
Consider a different argument:
Killing animals while raising them for food when there are readily
available alternatives is wrong.
Most forms of animal farming and all recreational hunting involve
killing animals while raising them for food when there are readily
available alternatives. Hence,
Most forms of animal farming and all recreational hunting are
wrong.
The second premise is straightforward and uncontroversial. All forms
of meat farming and hunting require killing animals. There is no form
of farming that involves widespread harvesting of old bodies, dead
from natural causes. Except in rare farming and hunting cases, the
meat produced in the industrialized world is meat for which there are
ready alternatives.
The first premise is more controversial. Amongst those who endorse it,
there is disagreement about why it is true. If it is true, it might be
true because killing animals wrongfully violates their rights to life
(Regan 1975). It might be true because killing animals deprives them
of lives worth living (McPherson 2015). It might be true because it
treats animals as mere tools (Korsgaard 2011).
There is disagreement about whether the first premise is true. The
"readily available alternatives" condition matters:
Everyone agrees that it is sometimes all things considered permissible
to kill animals, e.g., if doing so is the only way to save your
child's life from a surprise attack by a grizzly bear or if
doing so is the only way to prevent your pet cat from a life of
unremitting agony. (Whether it is permissible to kill animals in order
to cull them or to preserve biodiversity is a tricky issue that is set
aside here. It--and its connection to the permissibility of
hunting--is discussed in Scruton 2006b.) At any rate, animal
farms are in the business of killing animals simply on the grounds
that we want to eat them and are willing to pay for them even though
we could, instead, eat plants.
The main objection to the first premise is that animals lack the
mental lives to make killing them wrong. In the moral vegetarian
literature, some argue that the wrongness of killing animals depends
on what sort of mental life they have *and* that while animals
have a mental life that suffices for hurting them being wrong, they
lack a mental life that suffices for killing them being wrong (Belshaw
2015 endorses this; McMahan 2008 and Harman 2011 accept the first and
reject the second; Velleman 1991 endorses that animal mental lives are
such that killing them does not harm them). Animals could lack a
mental life that makes killing them wrong because it is a necessary
condition for killing a creature being wrong that that creature have
long-term goals and animals don't or that it is a necessary
condition that that creature have the capacity to form such goals and
animals don't or that it is a necessary condition that the
creature's life have a narrative structure and animals'
lives don't
or...[5]
Instead, the first premise might be false and killing animals we raise
for food might be permissible because
>
>
> [t]he genesis of domestic animals is...a matter...of an
> implicit social contract--what Stephen Budiansky...calls
> 'a covenant of the wild.'...Humans could protect such
> animals as the wild ancestors of domestic cattle and swine from
> predation, shelter them from the elements, and feed them when
> otherwise they might starve. The bargain from the animal's point
> of view, would be a better life as the price of a shorter life...
> (Callicott 2015: 56-57)
>
>
>
The idea is that we have made a "bargain" with animals to
raise them, to protect them from predators and the elements, and to
tend to them, but then, in return, to kill them. Moreover, the
"bargain" renders killing animals permissible (defended in
Hurst 2009, Other Internet Resources, and described in Midgley
1983). Such an argument might render permissible hurting animals, too,
or treating them merely as tools.
Relatedly, even conceding that it is pro tanto wrong to kill animals,
it might be all things considered permissible to kill farm animals for
food even when there are ready alternatives because and when their
well-being is replaced by the well-being of a new batch of farmed
animals (Tännsjö 2016). Farms kill one batch of chickens and
then bring in a batch of chicks to raise (and then kill) next. The
total amount of well-being is fixed though the identities of the
receptacles of that well-being frequently change.
Anyone who endorses the views in the two paragraphs above needs to
explain whether and then why their reasoning applies to animals but
not humans. It would not be morally permissible to create humans on
organ farms and harvest those organs, justifying this with the claim
that these humans wouldn't exist if it weren't for the
plan to take their organs and so part of the "deal" is
that those humans are killed for their organs. Neither would it be
morally permissible to organ-farm humans, justifying it with the claim
that they will be replaced by other happy
humans.[6]
#### 2.2.3 Harming the Environment
Finally, consider:
Harming the environment while producing food when there are readily
available alternatives is wrong.
Industrial animal farming involves harming the environment while
producing food when there are readily available alternatives.
Hence,
Industrial animal farming is wrong.
A more plausible premise might be "egregiously harming the
environment..." The harms, detailed in Budolfson 2018,
Hamerschlag 2011, Rossi & Garner 2014, and Ranganathan et al.
2016, are egregious and include deforestation, greenhouse gas
emissions, soil degradation, water pollution, and water and fossil fuel
depletion.
The argument commits to it being wrong to harm the environment.
Whether this is because those harms are instrumental in harming
sentient creatures or whether it is intrinsically wrong to harm the
environment or ecosystems or species or living creatures regardless of
sentience is left
open.[7]
The argument does not commit to whether these harms to the environment
are necessary consequences of industrial animal farming. There are
important debates, discussed in *PNAS* 2013, about whether, and
how easily, these harms can be stripped off industrial animal
production.
There is an additional important debate, discussed in Budolfson 2018,
about whether something like this argument applies to freerange animal
farming.
Finally, there is a powerful objection to the first premise from the
claim that these harms are part of a package that leaves sentient
creatures better off than they would've been under any other
option.
#### 2.2.4 General Moral Theories
Nothing has been said so far about general moral theories and meat
production. There is considerable controversy about what those
theories imply about meat production. So, for example, utilitarians
agree that we are required to maximize happiness. They disagree about
which agricultural practices do so. One possibility is that because it
brings into existence many trillions of animals that, in the main,
have lives worth living and otherwise would not exist, industrial
farming maximizes happiness (Tännsjö 2016). Another is that
freerange farming maximizes happiness (Hare 1999; Crisp 1988).
Instead, it could be that no form of animal agriculture does (Singer
1975 though Singer 1999 seems to agree with Hare).
Kantians agree it is wrong to treat ends in themselves merely as
means. They disagree about which agricultural practices do so. Kant
(*Lectures on Ethics*) himself claims that no farming practice
does--animals are mere means and so treating them as mere means
is fine. Some Kantians, by contrast, claim that animals are ends in
themselves and that typically animal farming treats them as mere means
and, hence, is wrong (Korsgaard 2011 and 2015; Regan 1975 and
1983).
Contractualists agree that it is wrong to do anything that a certain
group of people would reasonably reject. (They disagree about who is
in the group.) They disagree, too, about which agricultural practice
contractualism permits. Perhaps it permits any sort of animal farming
(Carruthers 2011; Hsiao 2015a). Perhaps it permits none (Rowlands
2009). Intermediate positions are possible.
Virtue ethicists agree that it is wrong to do anything a virtuous
person would not do or would not advise. Perhaps this forbids hurting
and killing animals, so any sort of animal farming is impermissible
and so is hunting (Clark 1984; Hursthouse 2011). Instead, perhaps it
merely forbids hurting them, so freerange farming is permissible and
so is expert, pain-free hunting (Scruton 2006b).
Divine command ethicists agree that it is wrong to do anything
forbidden by God. Perhaps industrial farming, at least, would be
(Halteman 2010; Scully 2002). Lipscomb (2015) seems to endorse that
freerange farming would *not* be forbidden by God. A standard
Christian view is that no form of farming would be forbidden, that
because God gave humans dominion over animals, we may treat them in
any old way. Islamic and Jewish arguments are stricter about what may
be eaten and about how animals may be treated though neither rules out
even industrial animal farming (Regenstein, et al. 2003).
Rossian pluralists agree it is *prima facie* wrong to harm.
There is room for disagreement about which agricultural
practices--controlling, hurting, killing--do harm and so
room for disagreement about which farming practices are *prima
facie* wrong. Curnutt (1997) argues that the *prima facie*
wrongness of killing animals is not overridden by typical
justifications for doing so.
## 3. Fish and Insects
In addition to pork and beef, there are salmon and crickets. In
addition to lamb and chicken, there are mussels and shrimp. There is
little in the philosophical literature about insects and sea creatures
and their products, and this entry reflects
that.[8]
Yet the topics are important. The organization Fish Count estimates
that at least a trillion sea creatures are wild-caught or farmed each
year (Mood & Brooke 2010, 2012, in Other Internet Resources).
Globally, humans consume more
than 20 kg of fish per capita annually (FAO 2016). In the US, we
consume 1.5 lbs of honey per capita annually (Bee Culture 2016).
Estimates of insect consumption are less certain. The UN FAO estimates
that insects are part of the traditional diets of two billion humans
though whether they are eaten--whether those diets are adhered
to--and in what quantity is unclear (FAO 2013).
Seafood is produced by farming and by fishing. Fishing techniques vary
from a person using a line in a boat to large trawlers pulling nets
across the ocean floor. The arguments for and against seafood
production are much like the arguments for and against meat
production: Some worry about the effects on humans of these practices.
(Some workers, for example, are enslaved on shrimpers.) Some worry
about the effects on the environment of these practices. (Some coral
reefs, for example, are destroyed by trawlers.) Some worry about the
permissibility of killing, hurting, or controlling sea creatures or
treating them merely as tools. This last worry should not be
undersold: Again, Mood and Brooke (2010, 2012, in Other Internet
Resources) estimate that
between 970 *billion* and 2.7 *trillion* fish are
wild-caught yearly and between 37 and 120 billion farmed fish are
killed. If killing, hurting, or controlling these creatures or
treating them as mere tools is wrong, then the scale of our wrongdoing
with regard to sea creatures beggars belief.
Are these actions wrong? Complicating the question is that there is
significantly more doubt about which sea creatures have mental lives
at all and what those mental lives are like. And while whether shrimp
are sentient is clearly irrelevant to the permissibility of enslaving
workers who catch them, it does matter to the permissibility of
killing shrimp. This doubt is greater still with regard to insect
mental lives. In conversation, people sometimes say that bee mental
life is such that nothing wrong is done to bees in raising them.
Nothing wrong is done to bees in killing them. Because they are not
sentient, there is no hurting them. Because of these facts about bee
mental life, the argument goes, "taking" their honey need
be no more morally problematic than "taking" apples from
an apple tree. (There is little written on the environmental impact of honey
production or on its effects on (human) workers. So it is unclear how forceful
environment- and human-based worries about honey are.)
This argument supporting honey production hinges on some empirical
claims about bee mental life. For an up-to-date assessment of bee
mental life, see Tye 2016, which argues that bees "have a rich
perceptual consciousness" and "can feel some
emotions" and that "the most plausible hypothesis
overall...is that bees feel pain" (2016: 158-159) and
see, too, Barron & Klein 2016, which argues that insects,
generally, have a capacity for consciousness. The argument supporting
honey production might be objected to on those empirical grounds. It
might, instead, be objected to on the grounds that we are uncertain
what the mental lives of bees are like. It could be that they are much
richer than we realize. If so, killing them or taking excessive
honey--and thereby causing them significant harms--might
well be morally wrong. And, the objection continues, the costs of not
doing so, of just letting bees be, would be small. If so, caution
requires not taking any honey or killing bees or hurting them.
Arguments like this are sometimes applied to larger creatures. For
discussion of such arguments, see Guerrero 2007.
## 4. From Production to Consumption
None of the foregoing is about consumption. The moral vegetarian
arguments thus far have, at most, established that it is wrong to
produce meat in various ways. Assuming that some such argument is
sound, how do we get from the wrongness of producing meat to the
wrongness of consuming that meat?
This question is not always taken seriously. Classics of the moral
vegetarian literature like Singer 1975, Regan 1975, Engel 2000, and
DeGrazia 2009 do not give much space to it. (C. Adams 1990 is a rare
canonical vegetarian text that devotes considerable space to
consumption ethics.) James Rachels writes,
>
>
> Sometimes philosophers explain that [my argument for vegetarianism] is
> unconvincing because it contains a logical gap. We are all opposed to
> cruelty, they say, but it does not follow that we must become
> vegetarians. It only follows that we should favor less cruel methods
> of meat production. This objection is so feeble it is hard to believe
> it explains resistance to the basic argument [for vegetarianism].
> (2004: 74)
>
>
>
Yet if the objection is that it does not *follow* from the
wrongness of producing meat that consuming meat is wrong, then the
objection is not feeble and is clearly correct. In order to validly
derive the vegetarian conclusion, additional premises are needed.
Rachels, it turns out, has some, so perhaps it is best to interpret
his complaint as that it is *obvious* what the premises
are.
Maybe so. But there is quite a bit of disagreement about what those
additional premises are and plausible candidates differ greatly from
one another.
### 4.1 Bridging the Gap
Consider a *productivist* idea about the connection between
production and consumption according to which consumption of
wrongfully-produced goods is wrong because it produces more wrongful
production. The idea issues an argument that, in outline, is:
Consuming some product *P* produces production of *Q*.
Production of *Q* is wrong.
It is wrong to produce wrongdoing. Hence,
Consuming *P* is wrong.
Or never mind *actual* production. A productivist might
argue:
Consuming some product *P* *is reasonably expected to
produce* production of *Q*.
Production of *Q* is wrong.
It is wrong to do something that is reasonably expected to produce
wrongdoing. Hence,
Consuming *P* is wrong. (Singer 1975; Norcross 2004; Kagan
2011)
(The main ideas about connecting consumption and production that
follow can--but won't--be put in terms of
expectation, too.)
The moral vegetarian might then argue that meat is among the values of
both *P* and *Q*: consuming meat is reasonably expected to
produce production of meat. Or the moral vegetarian might argue that
consuming meat produces more normalization of bad attitudes towards
animals and *that* is wrong. There are various
possibilities.
Just consider the first, the one about meat consumption producing meat
production. It is most plausible with regard to *buying*. It is
buying the wrongfully-produced good that produces more of it.
*Eating* meat produces more production, if it does, by
producing more buying. When Grandma buys the wrongfully produced
delicacy, the idea goes, she produces more wrongdoing. The company she
buys from produces more goods whether you eat the delicacy or throw it
out.
These arguments hinge on an empirical claim about production and a
moral claim about the wrongfulness of producing wrongdoing. The moral
claim has far-reaching implications (DeGrazia 2009 and Warfield 2015).
Consider this rent case:
>
>
> You pay rent to a landlord. You know that he takes your rent and uses
> the money to buy wrongfully-produced meat.
>
>
>
If buying wrongfully-produced meat is wrong because it produces more
wrongfully-produced meat, is it wrong to pay rent in the rent case? Is
it wrong to buy a vegetarian meal at a restaurant that then takes your
money and uses it to buy wrongfully-produced steak? These are
questions for productivists' moral claim. There are further,
familiar questions about whether it is wrong to produce wrongdoing
when one neither intends to nor foresees it and whether it is wrong to
produce wrongdoing when one does not intend it but does foresee it and
then about whether what is wrong is producing wrongdoing or,
rather, simply producing a bad effect (see entries on the
doctrine of double effect
and
doing vs. allowing harm).
An objection to productivist arguments denies the empirical claim and,
instead, asserts that because the food system is so enormous, fed by
so many consumers, and so stuffed with money, our eating or buying
typically has *no* effect on production, neither directly nor
even, through influencing others, indirectly (Budolfson 2015; Nefsky
2018). The idea is that buying a burger at, say, McDonald's
produces no new death nor any different treatment of live animals.
McDonald's will produce the same amount of meat--and raise
its animals in exactly the same way--regardless of whether one
buys a burger there. Moreover, the idea goes, one should reasonably
expect this. Whether or not this is a good account of how food
consumption typically works, it is an account of a possible system.
Consider the Chef in Shackles case, a modification of a case in
McPherson 2015:
>
>
> Alma runs Chef in Shackles, a restaurant at which the chef is known to
> be held against his will. It's a vanity project, and Alma will
> run the restaurant regardless of how many people come. In fact, Alma
> just burns the money that comes in. The enslaved chef is superb; the
> food is delicious.
>
>
>
The productivist idea does not imply it is wrong to buy food from or
eat at Chef in Shackles. If that is wrong, a different idea needs to
explain its wrongness.
So consider instead an *extractivist* idea according to which
consumption of wrongful goods is wrong because it is a way of benefiting from
wrongdoing (Barry & Wiens 2016). This idea can explain why it is
wrong to eat at Chef in Shackles--when you enjoy a delicious meal
there, you benefit from the wrongful captivity of the chef. In
outline, the extractivist argument is:
Consuming some product *P* extracts benefit from the production
of *P*.
Production of *P* is wrong.
It is wrong to extract benefit from wrongdoing. Hence,
Consuming *P* is wrong.
Moral vegetarians would then urge that meat is among the values of
*P*. Unlike the productivist argument, this one is more plausible
with regard to *eating* than buying. It's the eating,
typically, that produces the benefit and not the buying. Unlike the
productivist argument, it does not seem to have any trouble explaining
what is wrong in the Chef in Shackles case. Unlike the productivist
argument, it doesn't seem to imply that paying a landlord who
pays for wrongfully produced food is wrong--paying a landlord is
not benefiting from wrongdoing.
Like the productivist argument, the extractivist argument hinges on an
empirical claim about consumer benefits and a moral claim about the
ethics of so benefiting.
The notion of benefiting, however, is obscure. Imagine you go to Chef
in Shackles, have a truly repulsive meal, and become violently ill
afterwards. Have you benefitted from wrongdoing? If not, the
extractivist idea cannot explain what is wrong with going to the
restaurant.
Put so plainly, the extractivist's moral claim is hard to
believe. Consider the terror-love case, a modification of a case Barry
& Wiens 2016 credits to Garrett Cullity:
>
>
> A terrorist bomb grievously injures Bob and Cece. They attend a
> support group for victims, fall in love, and live happily ever after,
> leaving them significantly better off than they were before the
> attack.
>
>
>
Bob and Cece seem to benefit from wrongdoing but seem not to be doing
anything wrong by being together. Whereas the productivist struggles
to explain why it is wrong to patronize Chef in Shackles, the
extractivist struggles to explain why it is permissible for Bob and
Cece to benefit from wrongdoing.
A *participatory* idea has no trouble with the terror-love
case. According to it, consuming wrongfully-produced goods is wrong
because it cooperates with or participates in or, in
Hursthouse's phrase, is party to wrongdoing (2011). Bob and Cece
do not participate in terror, so the idea does not imply they do
wrong. The idea issues an argument that, in outline, goes:
Consuming some product *P* is participating in the production of
*P*.
Production of *P* is wrong.
It is wrong to participate in the production of wrongful things.
Hence,
Consuming *P* is wrong. (Kutz 2000; Lepora & Goodin 2013)
Moral vegetarians would then urge that meat is among the values of
*P*. Unlike the productivist or extractivist ideas, the
participatory idea seems to cover buying and eating equally easily, for each
is plausibly a form of participating in wrongdoing. Unlike the
productivist idea, it has no trouble explaining why it is wrong to
patronize Chef in Shackles and does not imply it is wrong to pay rent
to a landlord who buys wrongfully-produced meat. Unlike the
extractivist idea, whether or not you get food poisoning at Chef in
Shackles has no moral importance to it. Unlike the extractivist idea,
the participatory idea does not falsely imply that Bob and Cece do
wrong in benefiting from wrongdoing--after all, their falling in
love is not a way of participating in wrongdoing.
Yet it is not entirely clear what it is to participate in wrongdoing.
Consider the Jains who commit themselves to lives without
*himsa* (violence). Food production causes *himsa*. So Jains try
to avoid eating many plants, uprooted to be eaten, and even drinking
untreated water, filled with microorganisms, to minimize lives taken.
Yet Jaina monastics are supported by Jaina laypersons. The monastic
can't boil his own water--that would be violent--but
the water needs boiling so he depends on a layperson to boil it. He kills
no animals but receives alms, including meat, from a layperson. Is the
monastic participating in violence? Is he participating because he is
complicit in this violence (Kutz 2000; Lepora & Goodin 2013)? Is
he part of a group that together does wrong (Parfit 1984: Chapter 3)?
When Darryl refuses to buy wrongfully-produced meat but does no
political work with regard to ending its production is he party to the
wrongful production? Does he participate in it or cooperate with its
production? Is he a member of a group that does wrong? If so, what are
the principles of group selection?
As a matter of contingent fact, failing to politically protest meat
exhibits no objectionable attitudes in contemporary US society. Yet it
might be that consuming certain foods insults or otherwise disrespects
creatures involved in that food's production (R.M. Adams 2002;
Hill 1979). Hurka (2003) argues that virtue requires exhibiting the
right attitude towards good or evil, and so *if* consuming
exhibits an attitude towards production, it is plausible that eating
wrongfully produced foods exhibits the wrong attitude towards them.
These are all *attitudinal* ideas about consumption. They might
issue in an argument like this:
Consuming some product *P* exhibits a certain attitude towards
production of *P*.
Production of *P* is wrong.
It is wrong to exhibit that attitude towards wrongdoing. Hence,
Consuming *P* is wrong.
Moral vegetarians would then urge that meat is among the values of
*P*. Like the participatory idea, the attitudinal idea explains
the wrongness of eating and buying various goods--both are ways
of exhibiting attitudes. Like the participatory idea, it has no
trouble with Chef in Shackles, the rent case, the food poisoning case,
or the terror-love case. It does hinge on an empirical claim about
exhibition--consuming certain products exhibits a certain
attitude--and then a moral claim about the impermissibility of
that exhibition. One might well wonder about both. One might well
wonder why buying meat exhibits support for that enterprise but paying
rent to someone who will buy that meat does not. One might well wonder
whether eating wrongfully-produced meat in secret exhibits support and
whether such an exhibition is wrong. Also, there are attitudes other
than attitudes towards production to consider. Failing to offer meat
to a guest might exhibit a failure of reverence (Fan 2010). In
contemporary India, in light of the "meat murders"
committed by Hindus against Muslims nominally for the latter
group's consumption of beef, refusing to eat meat might exhibit
support for religious discrimination (Doniger 2017).
The productivist, extractivist, participatory, and attitudinal ideas
are not mutually exclusive. Someone might make use of a number of
them. Driver, for example, writes,
>
>
> [E]ating [wrongfully produced] meat is *supporting* the
> industry in a situation where there were plenty of other, better,
> options open...What makes [the eater] *complicit* is that
> she is a *participant*. What makes that participation morally
> problematic...is that the eating of meat *displays* a
> *willingness to cooperate* with the producers of a product that
> is produced via huge amounts of pain and suffering. (2015: 79; all
> italics mine)
>
>
>
This seems to at least incorporate participatory and attitudinal
ideas. Lawford-Smith (2015) combines attitudinal and productivist
ideas. McPherson (2015) combines extractivist and participatory ideas.
James Rachels (2004) combines participatory and productivist ideas.
And, of course, there are ideas not discussed here, e.g., that it is
wrong to reward wrongdoers for wrongdoing and buying wrongfully
produced meat does so. The explanation of why it is wrong to consume
certain goods might be quite complex.
### 4.2 Against Bridging the Gap
Driver, Lawford-Smith, McPherson, and James Rachels argue that it is wrong to
consume wrongfully produced food and try to explain why this is. The
productivist, extractivist, participatory, and attitudinal ideas, too,
try to explain it. But it could be that there is nothing to
explain.
It could be that certain modes of production are wrong yet consuming
their products is permissible. We might assume that *if*
consumption of certain goods is wrong, then that wrongness would have
to be partly explained in terms of the wrongness of those goods'
production and then argue that there are no sound routes from a
requirement not to produce a food to a requirement not to consume it
(Frey 1983). This leaves open the possibility that consumers might be
required to do *something*--for example, work for
political changes that end the wrongful system--but permitted to
eat wrongfully-produced food.
As §4.1 discusses, Warfield raises a problem for productivist
accounts that they seem to falsely imply that morally permissible
activities like paying rent to meat-eaters or buying salad at a
restaurant serving meat are morally wrong (2015). Add the assumption
that *if* consumption is wrong, it is wrong because some
productivist view is true, and it follows that consumption of wrongful
goods need not be wrongful. (Warfield does not assume this but instead
says that "the best discussion" of the connection between
production and consumption is "broadly consequentialist"
(ibid., 154).)
Instead, we might assume that an extractivist or participatory or
attitudinal view is correct if any is and then argue no such view is
correct. We might, for example, argue that these anti-consumption
views threaten to forbid too much. If the wrongness of producing and
wrongness of consuming are connected, what *else* is connected?
If buying meat is wrong because it exhibits the wrong attitude towards
animals, is it permissible to be friends with people who buy that
meat--or does this, too, evince the wrong attitudes towards
animals? If killing animals for food is wrong, is it permissible
merely to abstain from consuming them or must one do more work to stop
their killing? The implications of various arguments against consuming
animals and animal products might be far-reaching. Some will see this
as an acknowledgment that something is wrong with moral vegetarian
arguments. As Gruen and Jones (2015) note, the lifestyle some such
arguments point to might not be enactable by creatures like us. Yet
they see this not as grounds for rejection of the argument but,
rather, as acknowledgment that the argument sets out an aspiration
that we can orient ourselves towards (cf. §4 of Curtin 1991 on
"contextual vegetarianism").
A different sort of argument in favor of the all things considered
permissibility of consuming meat comes from the idea that eating and
buying animals actually makes for a great cultural good (Lomasky
2013). Even if we accept that the production of those animals is
wrong, it could be that the great good of consuming justifies doing
so. (Relatedly, it could be that the bad of *refusing to*
consume justifies consumption as in a case in which a host has labored
over barbequed chicken for hours and your refusing to eat it would
devastate him.) Yet this seems to leave open the possibility that all
sorts of awful practices might be permissible because they are
essential parts of great cultural goods. It threatens to permit too
much.
## 5. Extending Moral Vegetarian Arguments: Animal Products and Plants
Moral veganism accepts moral vegetarianism and adds to it that
consuming animal products is wrong. Mere moral vegetarians deny this
and add to moral vegetarianism that it is permissible to consume
animal products. An additional issue that divides some moral vegans
and moral vegetarians is whether animal product *production* is
wrong. This raises a general question: If it is wrong to produce meat
on the grounds adduced in
§2,
what other foods are wrongfully produced? If it is wrong to hurt
chickens for meat, isn't it wrong to hurt them for eggs? If it
is wrong to harm workers in the production of meat, isn't it
wrong to harm workers in the production of animal products? If it is
wrong to produce huge quantities of methane for meat, isn't it
wrong to produce it for milk? These are challenges posed by moral
veganism.
But various vegan diets raise moral questions. If it is wrong to hurt
chickens for meat, is it wrong to hurt mice and moles while harvesting
crops? If it is wrong to harm workers in the production of meat,
isn't it wrong to harm workers in the production of tomatoes? If
it is wrong to use huge quantities of water for meat, isn't it
wrong to use huge quantities of water for almonds?
### 5.1 Animal Products
As it might be that meat farming is wrong, it might be that animal
product farming is wrong for similar reasons. These reasons stem from
concerns about plants, animals, humans, and the environment. This
entry will focus on the first, second, and fourth and will consider
eggs and dairy.
#### 5.1.1 Eggs
Like meat birds, egg layers on industrial farms are tightly confined,
given on average a letter-sized page of space. Their beaks are seared
off. They are given a cocktail of antibiotics. Males, useless as
layers, are killed right away: crushed, dehydrated, starved,
suffocated. As they age and their laying-rate slows, females are
starved so as to force them to shed feathers and induce more laying.
They are killed within a couple years (HSUS 2009; cf. Norwood &
Lusk 2011: 113-127, which rates layer hen lives as not worth
living).
Freerange egg farming ideally avoids much of this. Yet it still
involves killing off young but spent hens and also baby roosters. It
often involves painful, stressful trips to industrial slaughterhouses.
So, as it is plausible that farming chickens for meat, whether
industrially or freerange, makes them suffer, so too is it plausible
that farming them for eggs does. The same goes
for killing.
The threat to the environment, too, arises from industrial farming
itself rather than whether it produces meat or eggs. Chickens produce
greenhouse gas and waste regardless of whether they are farmed for
meat or eggs. Land is deforested to grow food for them and resources
are depleted to care for them regardless of whether they are farmed
for meat or eggs.
In sum, arguments much like arguments against chicken production seem
to apply as forcefully to egg production. Arguments from premises
about killing, hurting, and harming the environment seem to apply to
typical egg production as they do to typical chicken production.
#### 5.1.2 Dairy
Like beef cattle, dairy cows on industrial farms are tightly confined
and bereft of much stimulation. As dairy cows, however, they are
routinely impregnated and then constantly milked. Males, useless as
milkers, are typically turned to veal within a matter of months.
Females live for maybe five years. (HSUS 2009; cf. Norwood & Lusk
2011: 145-150).
Freerange milk production does not avoid very much of this. Ideally,
it involves less pain and suffering but it typically involves forced
impregnation, separation of mother and calf, and an early death,
typically in an industrial slaughterhouse. So far as arguments against
raising cows for meat on the basis that doing so kills them and makes
them suffer are plausible, so are analogous arguments against raising
cows for dairy.
The threat to the environment is also similar regardless of whether
cattle are raised for meat or milk. So far as arguments against
raising cows for meat on the basis that doing so harms the environment
are plausible, so are analogous arguments against raising cows for
milk. Raising cows for meat and for milk produces greenhouse gas and
waste; it deforests and depletes resources. In fact, to take just one
example, the greenhouse-gas-based case against dairy is stronger than
the greenhouse-gas-based case against poultry and pork (Hamerschlag
2013).
In sum, arguments much like arguments against beef production seem to
apply as forcefully to dairy production. Arguments from premises about
killing, hurting, and harming the environment seem to apply to typical
dairy production as they do to typical beef production.
### 5.2 Plants
As it might be that animal, dairy, and egg farming are wrong, it might
be that plant farming is wrong for similar reasons. These reasons stem
from concerns about plants, animals, humans, and the environment. This
entry will focus on the first, second, and fourth.
#### 5.2.1 Plants Themselves
Ed drenches Fatima's prized cactus in pesticides without
permission. This is uncontroversially wrongful but only
uncontroversial because the cactus is Fatima's. If a cactus
grows in Ed's yard and, purely for fun, he drenches it in
pesticides, killing it, is that wrong? There is a family of unorthodox
but increasingly common ideas about the treatment of plants according
to which any killing of plants is at least *pro tanto* wrongful
and that treating them as mere tools is too (Marder 2013; Stone 1972,
Goodpaster 1978, and Varner 1998 are earlier discussions and Tinker
2015 discusses *much* earlier discussions). One natural way to
develop this thought is that it is wrong to treat plants this way
simply because of the effects on plants themselves. An alternative is
that it is wrong to treat the plants this way simply because of its effects on
the biosphere. In both cases, we can do intrinsic wrong to
non-sentient creatures.
The objection raises an important issue about interests. Singer,
following Porphyry and Bentham, assumes that all and only sentient
creatures have interests. The challenge that Marder, et al. raise is
that plants at least seem to do better or worse, to flourish or
founder, because they seem to have interests in a certain amount of
light, nutrients, and water. One way to interpret the position of
Porphyry, et al. is that things are not as they seem here and, in
fact, plants, lacking sentience, have no interests. This invites the
question of why sentience is necessary for interests (Frey 1980 and
1983). Another way to interpret the position of Porphyry, et al. is
that plants do have interests but they have no moral import. This
invites the questions of when and why it is permissible to deprive
plants of what they have interests in. Marder's view is that
plants have interests and that these interests carry significantly
more moral weight than one might think. So, for example, as killing a
dog for fun is wrong, so, too, is killing a dandelion. If killing a
chicken for food we don't need is wrong, so, too, is killing
some carrots.
If it is impermissible to kill plants to provide ourselves food we
don't need, how far does the restriction on killing extend: To
bacteria? Pressed about this by Gary Francione, Marder is open-minded:
"We should not reject the possibility of respecting communities
of bacteria without analyzing the issue seriously" (2016:
179).
#### 5.2.2 Plant Production and Animals
Marder's view rests on a controversial interpretation of plant
science and, in particular, on a controversial view that vegetal
responses to stimuli--for example that "roots...are
capable of altering their growth pattern in moving toward
resource-rich soil or away from nearby roots of other members of the
same species" (2016: 176)--suffice to show that plants have
interests and are ends in themselves, and that it is *pro tanto* wrong
to kill them and treat them as tools.
Uncontroversially, much actual plant production *does* have
various bad consequences for animals. Actual plant production in the
US is largely large scale. Large-scale plant production
involves--intentionally or otherwise--killing a great many
sentient creatures. Animals are killed by tractors and pesticides.
They are killed or left to die by loss of habitat (Davis 2003; Archer
2011). The scope of the killing is disputed in Lamey 2007 and Matheny
2003 but all agree it is vast (cf. Saja 2013 on the moral imperative
to kill large animals).
The "intentionally or otherwise" is important to some.
While these harms are foreseen consequences of farming, they are
unintended. To some, that animals are harmed but not intentionally
harmed in producing corn in Iowa helps to make those harms permissible
(see entry on
doctrine of double effect).
Pigs farmed in Iowa, by contrast, are intentionally killed. Chickens
and cows, too. (Are any intentionally hurt? Not typically. Farming is
not sadistic.)
The scale is important, too. Davis (2003) and Archer (2011) argue that
some forms of meat production kill fewer animals than plant production
and, because of that, are preferable to plant production.
The idea is that if animal farming is wrong because it kills animals
simply in the process of producing food we don't need, then some
forms of plant farming are wrong for the same reason. More weakly, if
animal farming is wrong because it kills *very large* numbers
of animals in the process of producing food we don't need, then
some forms of plant farming are wrong for the same reason.
An outstanding issue is whether these harms are necessary components
of plant production or contingent. A further issue is how easy it
would be to strip these harms off of plant production while still
producing foods humans want to eat at prices they are willing to
pay.
#### 5.2.3 Plant Production and the Environment
A final objection to the permissibility of plant production concerns
its environmental costs, which are clearly large: topsoil loss; erosion;
deforestation; run-off; resource-depletion; greenhouse gas emissions.
To take just the last two examples, Budolfson (2016: 169) estimates that
broccoli produces more kilograms of CO2 per thousand
calories than pork and that almonds use two and a half times the water
per thousand calories that chicken does.
If some forms of animal farming are wrong for those environmental
reasons, then some forms of plant farming are wrong for those reasons
(Budolfson 2018).
Again, an outstanding issue is whether these harms are necessary
components of plant production or contingent. A further issue is how
easy it would be to strip these harms off of plant production while
still producing foods people want to eat at prices they are willing to
pay.
### 5.3 Summary of Animal Product and Plant Subsections
Moral vegetarian arguments standardly oppose treating animals in
various ways while raising them for food that we do not need to eat to
survive. This standardly makes up part of the arguments that it is
wrong to *eat* animals.
These arguments against meat production can be extended *mutatis
mutandis* to animal product
production.[9]
They can be extended, too, to some forms of plant production. This
suggests:
The arguments against industrial plant production and animal product
production are as strong as the arguments against meat production.
The arguments against meat production show that meat production is
wrong. Hence,
The arguments against industrial plant production and animal product
production show that those practices are wrong.
One possibility is that the first premise is false and that some of
the arguments are stronger than others.
Another possibility is that the first premise is true and all these
arguments are equally strong. We would then have to choose between
accepting the second premise--and thereby accepting the
conclusion--or denying that meat production is wrong.
Another possibility is that the argument is sound but of limited
scope, there being few if any alternatives in the industrialized West
to industrialized plant, animal product, and meat production.
A final possibility is that the parity of these arguments and evident
unsoundness of an argument against industrial plant production show
that the ideas behind those arguments are being misexpressed. Properly
understood, they issue not in a directive about the wrongness of this
practice or that. Rather, properly understood, they just show that
various practices are bad in various ways. If so, we can then ask:
Which are worse? And in which ways? The literature typically ranks
factory farming as worse for animals than industrial plant farming if
only because the former requires the latter and produces various
harms--the suffering of billions of chickens--that the
latter does not. Or consider the debate in the literature about the
relative harmfulness to animals of free-range farming and industrial
plant farming. Which produces more animal death or more animal
suffering? Ought we minimize that suffering? Or consider the relative
harmfulness of free-range and industrial animal farming. Some argue
that the former is worse for the environment but better for animals.
If so, there is a not-easy question about which, if either, to go in
for.
## 6. Conclusion: Where the Debate About Vegetarianism Stands and Is Going
Given length requirements, this entry cannot convey the vastness of
the moral vegetarian literature. There is some excellent work in the
popular press. *Between the Species*, *Journal of
Agricultural and Environmental Ethics*, *Journal of Animal
Ethics*, *Environmental Ethics*, and *Journal of Food
Ethics* publish articles yearly. Dozens of good articles have been
omitted from discussion.
This entry has omitted quite direct arguments against consuming meat,
arguments that do not derive from premises about the wrongness of
*producing* this or that. Judeo-Islamic prohibitions on pork,
for example, derive from the uncleanliness of the product rather than
the manner of its production. Rastafari prohibitions on eating meat,
for another example, derive in part from the view that meat
consumption is unnatural. Historically, such prohibitions and
justifications for them have not been limited to prohibitions on
consuming meat. The *Laws of Manu*'s prohibition on
onion-eating derives from what consuming onion will do to the consumer
rather than the manner of onion-production (Doniger & Smith
(trans.) 1991: 102). The Koran's prohibition on alcohol-drinking
derives from what consuming alcohol will do to the consumer rather
than the manner of alcohol-production (5:90-91).
Arguments like this, arguments against consumption that start from
premises about intrinsic features of the consumed or about the
consumed's effects on consumers, largely do not appear in the
contemporary philosophical literature. What we have now are arguments
according to which certain products are wrongfully produced and
consumption of such products bears a certain relation to that
wrongdoing and, *ipso facto*, is wrong. Moral vegetarians then
argue that meat is such a product: It is typically wrongfully produced
and consuming it typically bears a certain relation to that
wrongdoing. This then leaves the moral vegetarian open to two sorts of
objections: objections to the claims about
production--*is* meat produced that way? Is such
production wrongful?--and objections to the claims connecting
consumption to production--*is* consuming meat related to
wrongful production in the relevant way? Is being so related wrong?
Explaining moral vegetarian answers to these questions was the work of
SS2
and
SS4.
There are further questions. If moral vegetarian arguments against
meat-consumption are sound, then are arguments against animal
*product* consumption also sound? Might dairy, eggs, and honey
be wrongfully produced as moral vegetarians argue meat is? Might
consuming them wrongfully relate the consumer to that production?
Explaining the case for "yes" was some of the work of
SS5.
Relatedly, some plants, fruit, nuts, and other putatively vegetarian
foods might be wrongfully produced. Some tomatoes are picked by
workers working in conditions just short of slavery (Bowe 2007);
industrial production of apples sucks up much water (Budolfson 2016);
industrial production of corn crushes numerous small animals to death
(Davis 2003). Are these foods wrongfully produced? Might consuming them
wrongfully relate the consumer to that production? Explaining the case
for "yes" here, too, was some of the work of
SS5.
Fischer (2018) suggests that the answers to some of the questions
noted in the previous two paragraphs support a requirement to
"eat unusually" and, one might add, to produce unusually.
*If* meat, for example, is usually wrongfully produced, it must
be produced unusually for that production to stand a chance of being
permissible, perhaps as faultless roadkill (Koelle 2012; Bruckner
2015) or as the corpse of an animal dead from natural causes (Foer
2009) or as a test-tube creation (Milburn 2016; Pluhar 2010; see the
essays in Donaldson & Carter (eds.) 2016 for discussion of
plant-based "meat").
If consuming meat is usually wrong because it usually bears a certain
relation to production, it must be consumed unusually to stand a
chance of being permissible. Some people eat only food they scavenge
from dumpsters, food that would otherwise go to waste. Some people eat
only food that is given to them without asking for any food in
particular. *If* consuming is wrong only because it produces
more production, neither of these modes of consumption would be
wrongful.
As some unusual consumption might, by lights of the arguments
considered in this entry, turn out to be morally unobjectionable, some
perfectly usual practices having nothing to do with consumption might
turn out, by those same lights, to be morally objectionable. Have you
done all you are required to do by moral vegetarian lights if you stop
eating, for example, factory-farmed animals? Clearly not. If it is
wrong to eat a factory-farmed cow, it is *for very similar
reasons* wrong to wear the skin of that cow. Does the wrongful
road stop at consumption, broadly construed to include buying, eating,
or otherwise using? Or need consumers do more than not consume
wrongfully-produced goods? Need they be pickier in how they spend
their money than simply not buying meat, e.g., not going to
restaurants that serve any meat? Need they protest or lobby? Need they
take more direct action against farms? Or more direct action against
the government? Need they refuse to pay rent to landlords who buy
wrongfully-produced meat? Is it permissible for moral vegetarians to
befriend--or to stay friends with--meat-eaters? As there are
questions about whether the moral road gets from production to
consumption, there are questions about whether the road stops at
consumption or gets much farther.
As discussed in
SS5,
the moral vegetarian case against killing, hurting, or raising
animals for food might well be extended to killing, hurting, or
raising animals in other circumstances. What, if anything, do those
cases show about the ethical treatment of pets (Bok 2011; Overall
(ed.) 2017; Palmer 2010 and 2011)? Of zoo creatures (DeGrazia 2011;
Gruen 2011: Chapter 5; Gruen 2014)?
What, if anything, do they show about duties regarding wild animals?
Palmer 2010 opens with two cases from 2007, one of which involved the
accidental deaths of 10,000 wildebeest in Kenya, the other involving
the mistreatment and death of 150 horses in England. As Palmer notes,
it is plausible that we are required to care for and help domesticated
animals--that's why it is plausibly wrong to let horses
under our care suffer--but permissible to let similar harms
befall wild animals--that's why it is plausibly permissible
to let wildebeest suffer and die. And yet, Palmer continues, it is
also plausible that animals with similar capacities--animals like
horses and wildebeest--should be treated similarly. So is the
toleration of 10,000 wildebeest deaths permissible? Or do we make a
moral mistake in not intervening in such cases? Relatedly, moral
vegetarians oppose chicken killing and consumption and yet some of
them aid and abet domestic cats in the killings of billions of birds
each year in the United States alone (Loss, et al. 2013; Pressler
2013). Is this permissible? If so, why (Cohen 2004; Milburn 2015;
Sittler-Adamczewski 2016)? McMahan (2015) argues that standard moral
vegetarian arguments against killing and suffering lead (eventually)
to the conclusion that we ought to reduce predation in the wild.
What, if anything, do moral vegetarian arguments show about duties
regarding fetuses? There are forceful arguments that if abortion is
wrong, then so is killing animals for food we don't need (Scully
2013). The converse is more widely discussed but less plausible
(Abbate 2014; Colb & Dorf 2016; Nobis 2016).
Finally, in the food ethics literature, questions of food justice are
among the most common questions about food consumption. Sexism,
racism, and classism are unjust. Among the issues of food justice,
then, are how, if at all, the practices of vegetarianism and
omnivorism or encouragement of them are sexist (C. Adams 1990) or
racist (Alkon & Agyeman (eds.) 2011) or classist (Guthman 2011).
Industrial animal agriculture raises a pair of questions of justice:
It degrades the environment--is this unjust to future generations
who will inherit this degraded environment? Also, what makes it so
environmentally harmful is the scale of it. That scale is driven, in
part, by demand for meat among the increasingly affluent in developing
countries (Herrero & Thornton 2013). Is refusing to meet that
demand--after catering to wealthy Western palates for a long
stretch--a form of classism or racism?
The animals we eat dominate the moral vegetarian literature and have
dominated it ever since there has been a moral vegetarian literature.
A way to think about these last few paragraphs is that questions about
what we eat lead naturally to questions about other, quite different
topics: the animals we eat but also the animals we don't; eating
those animals but also eating plants; refusing to eat those animals
but also raising pets and refusing to intervene with predators and
prey in the wild; refusing to eat but also failing to protest or
rectify various injustices. Whereas the questions about
animals--and the most popular arguments about them--are very
old, these other questions are newer, and there is much progress to be
made in answering them.
## 1. Historical Background: The Moral Point of View
The idea of the moral point of view can be traced back to David
Hume's account of the "judicious spectator." Hume
sought to explain how moral judgments of approval and disapproval are
possible given that people normally are focused on achieving their own
interests and concerns. He conjectured that in making moral judgments
individuals abstract in imagination from their own interests and adopt
an impartial point of view from which they assess the effects of their
own and others' actions on the interests of everyone. Since,
according to Hume, we all can adopt this impartial perspective in
imagination, it accounts for our agreement in moral judgments (see
Hume 1739 [1978, 581]; Rawls, LHMP 84-93, LHPP
184-187).
Subsequently, philosophers posited similar perspectives for moral
reasoning designed to yield impartial judgments once individuals
abstract from their own aims and interests and assess situations from
an impartial point of view. But rather than being mainly explanatory
of moral judgments like Hume's "judicious
spectator," the role of these impartial perspectives is to serve
as a basis from which to assess and justify moral rules and
principles. Kant's categorical imperative procedure, Adam
Smith's "impartial spectator," and Sidgwick's
"point of view of the universe" are all different versions
of the moral point of view.
An important feature of the moral point of view is that it is designed
to represent what is essential to the activity of moral reasoning. For
example, Kant's categorical imperative is envisioned as a point
of view any reasonable morally motivated person can adopt in
deliberating about what they ought morally to do (Rawls, CP 498ff;
LHMP). When joined with the common assumption that the totality of
moral reasons is final and overrides non-moral reasons, the moral point
of view might be regarded as the most fundamental perspective that we
can adopt in our reasoning about justice and what we morally ought to
do.
Rawls's idea of the original position, as initially conceived,
is his account of the moral point of view regarding matters of
justice. The original position is a hypothetical perspective that we
can adopt in our moral reasoning about the most basic principles of
social and political justice. What primarily distinguishes
Rawls's impartial perspective from its antecedents (in Hume,
Smith, Kant, etc.) is that, rather than representing the judgment of
one person, it is conceived socially, as a general agreement by
(representatives of all adult) members of an ongoing society. The
point of view of justice is then represented as a general
"social contract" or agreement by free and equal persons
on the basic terms of cooperation for their society.
## 2. The Original Position and Social Contract Doctrine
Historically the idea of a social contract had a more limited role
than Rawls assigns to it. In Thomas Hobbes and John Locke the social
contract serves as an argument for the legitimacy of political
authority. Hobbes argues that in a pre-social state of nature it would
be rational for all to agree to authorize one person to exercise the
absolute political power needed to maintain peace and enforce laws
necessary for productive social cooperation. (Hobbes, 1651) By
contrast, Locke argued against absolute monarchy by contending that no
existing political constitution is legitimate or just unless it
*could* be contracted into starting from a position of equal
right within a (relatively peaceful) state of nature, and without
violating any natural rights or duties. (Locke, 1690) For Rousseau and
perhaps Kant too, the idea of a social contract plays a different
role: It is an "idea of reason" (Kant) depicting a point
of view that lawmakers and citizens should adopt in their reasoning to
ascertain the "general will," which enables them to assess
existing laws and decide upon measures that promote justice and
citizens' common good. (Rousseau, 1762; Kant, 1793, 296-7;
Kant 1797, 480) Rawls generalizes on Locke's, Rousseau's
and Kant's natural rights theories of the social contract (TJ
vii/xviii rev.; 32/28 rev.): The purpose of his original position is
to yield principles to determine and assess the justice of political
constitutions and of economic and social arrangements and the laws
that sustain them. To do so, he seeks in the original position
"to combine into one conception the totality of conditions which
we are ready upon due reflection to recognize as reasonable in our
conduct towards one another" (TJ 587/514 rev.).
Why does Rawls represent principles of justice as originating in a
kind of social contract? Rawls says that "justice as fairness
assigns a certain primacy to the social" (CP 339). Unlike
Kant's categorical imperative procedure, the original position
is designed to represent the predominantly social bases of justice. To
say that justice is predominantly social does not mean that people do
not have "natural" moral rights and duties outside society
or in non-cooperative circumstances--Rawls clearly thinks there
are human rights (see LP, SS10) and certain "natural
duties" (TJ, SSSS19, 51) that apply to all human beings
as such. But whatever our natural or human rights and duties may be,
they do not provide an adequate basis for ascertaining the rights and
duties of justice that we owe one another as members of the same
ongoing political society. It is in large part due to "the
profoundly social nature of human relationships" (PL 259) that
Rawls sees political and economic justice as grounded in social
cooperation on terms of reciprocity and mutual respect. For this
reason Rawls eschews the idea of a state of nature where pre-social
but fully rational individuals agree to cooperative terms (as in
Hobbesian views), or where pre-political persons with antecedent
natural rights agree on the form of a political constitution (as in
Locke). Rawls regards us as social beings in the sense that in the
absence of society and social development we have but inchoate and
unrealized capacities, including our capacities for rationality,
morality, even language itself. As Rousseau says, outside society we
are but "stupid and shortsighted animals" (Rousseau, 1762,
bk.I, ch.8, par. 1). This draws into question the main point of the
idea of a state of nature in Hobbesian and Lockean views, which is to
distinguish the rights, claims, duties, powers and competencies we
have prior to membership in society from those we acquire as members
of society. Not being members of some society is not an option for us.
In so far as we are rational and reasonable beings at all, we have
developed as members of some society, within its social framework and
institutions. Accordingly Rawls says that no sense can be made of the
notion of that part of an individual's social benefits that
exceeds what would have been that person's situation in a state
of nature (PL 278). The traditional idea of pre-social or even
pre-political rational moral agents thus plays no role in
Rawls's account of justice and the social contract; for him the
state of nature is an idea without moral significance (PL
278-280). The original position is set forth largely as an
alternative to the state of nature and is regarded by Rawls as the
appropriate initial situation for a social contract. (Below we
consider a further reason behind Rawls's rejection of the state
of nature: it does not adequately allow for impartial judgment and the
equality of persons.)
Another way Rawls represents the "profoundly social" bases
of principles of justice is by focusing on "the basic structure
of society." The "first subject of justice," Rawls
says, is principles that regulate the basic social institutions that
constitute the "basic structure of society" (TJ sect.2).
These basic institutions include the political constitution, which
specifies political offices and procedures for legislating and
enforcing laws and the system of trials for adjudicating disputes; the
bases and organization of the economic system, including laws of
property, its transfer and distribution, contractual relations, etc.
which are all necessary for economic production, exchange, and
consumption; and finally norms that define and regulate permissible
forms of the family, which is necessary to reproduce and perpetuate
society from one generation to the next. It is the role of principles
of justice to specify and assess the system of rules that constitute
these basic institutions, and determine the fair distribution of
rights, duties, opportunities, powers and positions of office, and
income and wealth realized within them. What makes these basic social
institutions and their arrangement the first subject for principles of
social justice is that they are all necessary to social cooperation
and have such profound influences on our circumstances, aims,
characters, and future prospects. No stable, enduring society could
exist without certain rules of property, contract, and transfer of
goods and resources, for they make economic production, trade, and
consumption possible. Nor could a society long endure without some
political mechanism for resolving disputes and making, revising,
interpreting, and enforcing its economic and other cooperative norms;
nor without some form of the family, to reproduce, sustain, and
nurture members of its future generations. This is what distinguishes
the social institutions constituting the basic structure from other
profoundly influential social institutions, such as religion; religion
and other social institutions are not basic in Rawls's sense
because they are not generally necessary to social cooperation among
members of society. (Even if certain religions have been ideologically
necessary to sustain the norms of particular societies, many societies
can and do exist without the involvement or support of religious
institutions.)
Another reason Rawls regards the original position as the appropriate
setting for a social contract is implicit in his stated aim in *A
Theory of Justice*: it is to discover the most appropriate moral
conception of justice for a democratic society wherein persons regard
themselves as free and equal citizens (TJ viii/xviii rev.). Here he
assumes an ideal of citizens as "moral persons" who regard
themselves as free and equal, have a conception of their rational
good, and have a "sense of justice." "Moral
persons" (an 18th century term) are not all
necessarily morally good persons. Instead moral persons are persons
who are capable of being *rational* since they have the
capacities to form, revise and pursue a rational conception of their
good; moreover, moral persons also are capable of being
*reasonable* since they have a moral capacity for a sense of
justice--to cooperate with others on terms that are fair and to
understand, apply, and act upon principles of justice and their
requirements. Because people have these capacities, or "moral
powers," (as Rawls calls them, following Kant) we hold them
responsible for their actions, and they are regarded as capable of
freely pursuing their interests and engaging in social cooperation.
Rawls's idea is that, being reasonable and rational, moral
persons (like us) who regard ourselves as free and equal should be in
a position to accept and endorse as both rational and morally
justifiable the principles of justice regulating our basic social
institutions and individual conduct. Otherwise, our conduct is coerced
or manipulated for reasons we cannot (reasonably or rationally) accept
and we are not fundamentally free persons. Starting from these
assumptions, Rawls construes the moral point of view from which to
decide moral principles of justice as a *social contract* in
which (representatives of) free and equal persons are given the task
of coming to an agreement on principles of justice that are to
regulate their social and political relations in perpetuity. How
otherwise, Rawls contends, should we represent the justification of
principles of justice for free and equal persons who have different
conceptions of their good, as well as different religious,
philosophical, and moral views? There is no commonly accepted moral or
religious authority or doctrine to which they could appeal in order to
discover principles of justice that all could agree to and accept.
Rawls contends that, since his aim is to discover a conception of
justice appropriate for a democratic society, it should be justifiable
to free and equal persons in their capacity as citizens on terms which
all can endorse and accept. The role of the social contract is to
represent this idea, that the basic principles of social cooperation
are justifiable hence acceptable to all reasonable and rational
members of society, and that they are principles which all can commit
themselves to support and comply with.
How is this social contract to be conceived? It is not an historical
event that must actually take place at some point in time (TJ 120/104
rev.ed.). It is rather a hypothetical situation, a kind of
"thought experiment" (JF 17), that is designed to uncover
the most reasonable principles of justice. Rawls maintains (in LHPP,
cf. p.15) that the major advocates of social contract
doctrine--Hobbes, Locke, Rousseau, and Kant--all regarded
the social contract as a hypothetical event. Hobbes and Locke thus
posited a hypothetical state of nature in which there is no political
authority, and where people are regarded as rational and (for Locke)
also reasonable. The purpose of this hypothetical social contract is
to demonstrate what types of political constitutions and governments
are politically legitimate, and determine the nature of
individuals' political obligations (*LHPP* p.16). The
presumption is that if a constitution or form of government could be
agreed to by rational persons subject to it according to principles
and terms they all accept, then it should be acceptable to rational
persons generally, including you and me, and hence is legitimate and
is the source of our political obligations. Thus, Hobbes argues that
all rational persons in a state of nature would agree to authorize an
absolute sovereign to enforce the "laws of nature"
necessary for society; whereas Locke comes to the opposite conclusion,
contending that absolutism would be rejected in favor of
constitutional monarchy with a representative assembly. Similarly, in
Rousseau and Kant, the social contract is a way to reason about the
General Will, including the political constitution and laws that
hypothetical moral agents would all agree to in order to promote the
common good and realize the freedom and equality of citizens.
(Rousseau, 1762, I:6, p.148; II:1, p.153; II:11, p.170; Kant, 1793,
296-7; Kant 1797, 480; cf. Rawls, LHPP, 214-48).
Rawls employs the idea of a hypothetical social contract for more
general purposes than his predecessors. He aims to provide principles
of justice that can be applied to determine not only the justice of
political constitutions and the laws, but also the justice of the
institution of property and of social and economic arrangements for
the production and distribution of income and wealth, as well as the
distribution of educational and work opportunities, and of powers and
positions of office and responsibility.
Some have objected that hypothetical agreements cannot bind or
obligate people; only actual contracts or agreements can impose
obligations and commitments (Dworkin, 1977, 150ff). But the original
position is not intended to impose new obligations on us; rather it is
a device for discovery and justification: It is to be used, as Rawls
says, "to help us work out what we now think" (CP 402); it
incorporates "conditions...we do in fact accept" (TJ
587/514 rev.) and is a kind of "thought experiment for the
purpose of public- and self-clarification" (JF, p.17).
Hypothetical agreement in the original position does not then bind
anyone to duties or commitments they do not already have. Its point
rather is to help discover and explicate the requirements of our moral
concepts of justice and enable us to draw the consequences of
considered moral convictions of justice that we all presumably share.
Whether we in turn consciously accept or agree to these consequences
and the principles and duties they implicate once brought to our
awareness does not undermine their moral justification. The point
rather of conjecturing the outcome of a hypothetical agreement is
that, if the premises underlying the original position correctly
represent our most deeply held considered moral convictions and
concepts of justice, then we are morally and rationally committed to
endorsing the resulting principles and duties whether or not we
actually accept or agree to them. Not to do so implies a failure to
accept and live up to the consequences of our own moral convictions
about justice.
Here critics may deny that the original position incorporates
*all* the relevant reasons and considered moral convictions for
justifying principles of justice (e.g. it omits *beneficence*,
or the parties' knowledge of their final ends), and/or that some
reasons it incorporates are not relevant to moral justification to
begin with (such as the *publicity* of fundamental principles,
as utilitarians argue, Sidgwick, 1907, or the *separateness of
persons*, *temporal neutrality*, and *rationality of the
parties* in promoting their own conception of the good). (Parfit,
1985, 163, 336; Cohen, G.A., 2009; Cohen, J., 2015.) Or they may
argue that the state of nature, not the original position, is the
appropriate perspective from which to ascertain fundamental principles
of justice, since individuals' moral and property rights are pre-social
and not dependent upon social cooperation. (Nozick, 1974,
183-231).
## 3. The Veil of Ignorance
Rawls calls his conception "justice as fairness." His aim
in designing the original position is to describe an agreement
situation that is fair among all the parties to the hypothetical
social contract. He assumes that if the parties are fairly situated
and take all relevant information into account, then the principles
they agree to are also fair. The fairness of the original agreement
situation transfers to the principles everyone agrees to; furthermore,
whatever laws or institutions are required by the principles of
justice are also fair. The principles of justice chosen in the
original position are in this way the result of a choice procedure
designed to "incorporate pure procedural justice at the highest
level" (CP, 310, cf. TJ 120/104). This feature of Rawls's
original position is closely related to his constructivism, and his
subsequent understanding of the original position as a
"procedure of construction"; see the supplementary
section:
>
> Constructivism, Objectivity, Autonomy, and the Original Position,
>
in the supplementary document
Further Topics on the Original Position.
There are different ways to define a fair agreement situation
depending on the purpose of the agreement and the description of the
parties to it. For example, certain facts are relevant to entering a
fair employment contract - knowledge of a prospective
employee's talents, skills, prior training, experience,
motivation, and reliability - that may not be relevant to other
fair agreements. What is a fair agreement situation among free and
equal persons when the purpose of the agreement is fundamental
principles of justice for the basic structure of society? What sort of
facts should the parties to such a fundamental social contract know,
and what sort of facts are irrelevant or even prejudicial to a fair
agreement? Here it is helpful to compare Rawls's and
Locke's social contracts. A feature of Locke's social
contract is that it transpires in a state of nature among free and
equal persons who know everything about themselves that you and I know
about ourselves and each other. Thus, Locke's parties know their
natural talents, skills, education, and other personal
characteristics; their racial and ethnic group, gender, social class,
and occupations; their level of wealth and income, their religious and
moral beliefs, and so on. Given this knowledge, Locke assumes that,
while starting from a position of equal political right, the great
majority of free and equal persons in a state of nature -
including all women and racial minorities, and all other men who do
not meet a rigid property qualification - could and most likely
would rationally agree to alienate their natural rights of equal
political jurisdiction in order to gain the benefits of political
society. Thus, Locke envisions as legitimate a constitutional monarchy
that is in effect a gender-and-racially biased class state wherein a
small restricted class of amply propertied white males exercise
political rights to vote, hold office, exercise political and social
influence, and enjoy other important benefits and responsibilities to
the exclusion of everyone else (see Rawls, LHPP, 138-139).
The problem with this arrangement, of course, is that gender and
racial classifications, social class, and wealth or the lack thereof
are, like the absence of religious belief, not good reasons for depriving free
and equal persons of their equal political rights or opportunities to
occupy social and political positions. Knowledge of these and other
facts is not then morally relevant for deciding who should qualify to
vote, hold office, and actively participate in governing and
administering society. Rawls suggests that the reason Locke's
social contract results in this unjust outcome is that it transpires
(hypothetically) under unfair conditions of a state of nature, where
the parties have complete knowledge of their circumstances,
characteristics and social situations. Socially powerful and wealthy
parties then have access to and can unfairly benefit from their
knowledge of their "favorable position and exercise their threat
advantage" to extract favorable terms of cooperation for
themselves from those in less favorable positions (JF 16).
Consequently, the parties' judgments regarding constitutional
provisions are biased by their knowledge of their particular
circumstances and their decisions are insufficiently impartial.
The remedy for such biases of judgment is to redefine the initial
situation within which the social contract transpires. Rather than a
state of nature Rawls situates the parties to the social contract so
that they do not have access to factual knowledge that can distort
their judgments and result in unfair principles. Rawls's
original position is an initial agreement situation wherein the
parties are without information enabling them to tailor principles of
justice favorable to their personal circumstances and interests. Among
the essential features of the original position are that no one knows
their place in society, class position, wealth, or social status, nor
does anyone know their race, gender, fortune or misfortune in the
distribution of natural assets and abilities, level of intelligence,
strength, education, and the like. Rawls even assumes that the parties
do not know their values or "conceptions of the good,"
their religious or philosophical convictions, or their special
psychological propensities. The principles of justice are chosen
behind a "veil of ignorance" (TJ 12/11). This veil of
ignorance deprives the parties of all knowledge of particular facts
about themselves, about one another, and even about their society and
its history.
The parties are not however completely ignorant of facts. They know
all kinds of general facts about persons and societies, including
knowledge of relatively uncontroversial scientific laws and
generalizations accepted within the natural and social sciences
- economics, psychology, political science, biology, and other
natural sciences (including applications of Darwinian evolutionary
theory that are generally accepted by scientists, however
controversial they may be among religious fundamentalists). They know
then about the general tendencies of human behavior and psychological
development, about neuropsychology and biological evolution, and about
how economic markets work, including neo-classical price theory of
supply and demand. As discussed below, they also know about the
circumstances of justice--moderate scarcity and limited
altruism--as well as the desirability of the "primary
social goods" that are needed by anyone in modern society to
live a good life and to develop their "moral powers" and
other capacities. What the parties lack however is knowledge of any
particular facts about their own and other persons' lives, as
well as knowledge of any historical facts about their society and its
population, its level of wealth and resources, religious institutions,
etc. Rawls thinks that since the parties are required to come to an
agreement on objective principles that supply universal standards of
justice applying across all societies, knowledge of particular and
historical facts about any person or society is morally irrelevant and
potentially prejudicial to their decision.
Another reason Rawls gives for such a "thick" veil of
ignorance is that it is designed to be a strict "position of
equality" (TJ 12/11) that represents persons purely in their
capacity as free and equal moral persons. The parties in the original
position do not know any particular facts about themselves or society;
they all have the same general information. They are then situated
equally in a very strong way, "symmetrically" (JF 18) and
purely as free and equal moral persons. They know only characteristics
and interests they share in their capacity as free and equal moral
persons--their "higher-order interests" in developing
the moral powers of justice and rationality, their need for the
primary social goods, and so on. The moral powers, Rawls contends, are
the "basis of equality, the features of human beings in virtue
of which they are to be treated in accordance with the principles of
justice" (TJ, 504/441). Knowledge of the moral powers and their
essential role in social cooperation, along with knowledge of other
general facts, is all that is morally relevant, Rawls believes, to a
decision on principles of justice that are to reflect people's
status as free and equal moral persons. A thick veil of ignorance thus
is designed to represent the equality of persons purely as moral
persons, and not in any other contingent capacity or social role. In
this regard the veil of ignorance interprets the Kantian idea of
equality as equal respect for moral persons (cf. CP 255).
Many criticisms have been leveled against Rawls's veil of
ignorance. Among the more common criticisms are that the
parties' choice in the original position is indeterminate (Sen,
2009, 11-12, 56-58), or would result in choice of the
principle of (average) utility (Harsanyi, 1975), or a principle of
relative prioritarianism that gives greater weight to but does not
maximize the least advantaged position (Buchak, 2017). (The argument
for the choice of the principle of average utility is discussed
below.) Among reasons given for the indeterminacy of decision in the
original position are that the parties are deprived of so much
information about themselves that they are psychologically incapable
of making a choice; or they cannot decide between a plurality of
reasonable principles (Sen, 2009, 56-58). Or they are incapable
of making a *rational* choice, since we cannot decide upon
ethical principles without knowing our primary purposes in life, the
values of community, or certain other final ends and commitments.
(MacIntyre, 1981; Sandel, 1982.)
One answer to the criticism of inability to make a rational choice
due to ignorance of our final ends is that we do not need to know
everything about ourselves, including these primary purposes, to make
rational decisions about the background social conditions needed to
pursue these primary purposes. For example, whatever our ends, we know
that personal security and an absence of social chaos are conditions
of most anyone's living a good life (as Hobbes contends).
Similarly, though Rawls's parties do not know their own values
and commitments, they do know that as free and equal persons they
require an adequate share of primary social goods (rights and
liberties, powers and opportunities, income and wealth, and the social
bases of self-respect) to effectively pursue their purposes, whatever
they may be. They also know they have a "higher-order
interest" in adequately developing and exercising their
"moral powers" - the capacities to be rational and
reasonable - which are conditions of responsible agency,
effectively pursuing one's purposes, and engaging in social
cooperation. Rawls contends that knowledge of these "essential
goods" is sufficient for a rational choice on principles of
justice by the parties in the original position.
To the objection that choice behind the veil of ignorance is
psychologically impossible, Rawls says that it is important not to get
too caught up in the theoretical fiction of the original position, as
if it were some historical event among real people who are being asked
to do something impossible. The original position is not supposed to
be realistic but is a "device of representation" (PL 27),
or a "thought experiment" (JF, 83), that is designed to
organize our considered convictions of justice and clarify their
implications. The parties in it are not real but are "artificial
persons" who have a role to play in this thought experiment.
They represent an ideal of free and equal reasonable and rational
moral persons that Rawls assumes is implicit in our reasoning about
justice. The veil of ignorance is a representation of the kinds of
reasons and information that are relevant to a decision on principles
of justice for the basic structure of a society of free and equal
moral persons (TJ 17/16). Many kinds of reasons and facts are not
morally relevant to that kind of decision (e.g., information about
people's race, gender, religious affiliation, wealth, and even,
Rawls says more controversially, their conceptions of their good),
just as many different kinds of reasons and facts are irrelevant to
mathematicians' ability to work out the formal proof of a
theorem. Just as a mathematician, scientist, or musician exercises their
expertise by ignoring knowledge of particular facts about themselves,
presumably we can do so too in reasoning about principles of justice
for the basic structure of society. Rawls says we can "enter the
original position at any time simply by reasoning in accordance with
the enumerated restrictions on information," (PL 27) and
considering general facts about persons, their needs, and social and
economic cooperation that are provided to the parties (TJ 120/104,
587/514).
A related criticism of Rawls's "thick" veil of
ignorance is that even if the parties can make certain rational
decisions in their interest without knowledge of their final ends,
still they cannot come to a decision about principles of justice
without knowing the desires and interests of people. For justice
consists of the measures that most effectively promote good
consequences, and these ultimately reflect facts about
individuals' utility or welfare. This criticism is mirrored in
utilitarian versions of the moral point of view, which incorporate a
"thin" veil of ignorance that represents a different idea
of impartiality. The impartial sympathetic spectator found in David
Hume and Adam Smith, or the self-interested rational chooser in John
Harsanyi's average utilitarian account, each has complete
knowledge of everyone's desires, interests and purposes as well
as knowledge of particular facts about people and their historical
situations. Impartiality is achieved by depriving the impartial
observer or rational chooser of any knowledge of its own identity.
This leads it to give equal consideration to everyone's desires
and interests, and impartially take everyone's desires and
interests into account. Since rationality is presumed to involve
maximizing something - or taking the most effective means to
promote the greatest realization of one's ends - the
impartial observer/chooser rationally chooses the rule or course of
action that maximizes the satisfaction of desires, or utility
(aggregate or average), summed across all persons. (See TJ, §30.)
Rawls's original position with its "thick" veil of
ignorance represents a different conception of impartiality than the
utilitarian requirement that equal consideration be given to
everyone's desires, preferences, or interests. The original
position abstracts from all information about current circumstances
and the status quo, including everyone's desires and particular
interests. Utilitarians assume people's desires and interests
are given by their circumstances and seek to maximize their
satisfaction; in so doing utilitarians suspend judgment regarding the
moral permissibility of people's desires, preferences, and ends
and of the social circumstances and institutions within which these
are shaped and cultivated. For Rawls, a primary reason for a thick
veil of ignorance is to enable an unbiased assessment of the justice
of existing social and political institutions and of existing desires,
preferences, and conceptions of the good that they sustain.
People's desires and purposes are not then assumed to be given,
whatever they are, and then promoted and fulfilled. On Rawls's
Kantian view, principles of right and justice are designed to put
limits on what satisfactions and purposes have value and impose
restrictions on what are reasonable conceptions of persons'
good. This basically is what Rawls means by "*the priority of
right over the good*." People's desires and
aspirations are constrained from the outset by principles of justice,
which specify the criteria for determining permissible ends and
conceptions of the good. (TJ 31-32/27-28) If the parties
to Rawls's original position had knowledge of people's
beliefs and desires, as well as knowledge of the laws, institutions
and circumstances of their society, then this knowledge would
influence their decisions on principles of justice. The principles
agreed to would then not be sufficiently detached from the very
desires, circumstances, and institutions these principles are to
critically assess. Since utilitarians take people's desires,
preferences, and/or ends as given under existing circumstances, any
principles, laws, or institutions chosen behind their thin veil of
ignorance will reflect and be biased by the status quo. To take an
obvious counterexample, there is little if any justice in laws
approved from a utilitarian impartial perspective when these laws take
into account racially prejudiced preferences which are cultivated by
grossly unequal, racially discriminatory and segregated social
conditions. To impartially give equal consideration to
everyone's desires formed under such unjust conditions is
hardly sufficient to meet requirements of justice. This illustrates
some of the reasons for a "thick" as opposed to a
"thin" veil of ignorance.
## 4. Description of the Parties: Rationality and the Primary Social Goods
Rawls says that in the original position, "the Reasonable frames
the Rational" (CP 319). He means that the original position is a situation where
rational choice of the parties is made subject to reasonable (or
moral) constraints. In what sense are the parties and their choice and
agreement rational? Philosophers have different understandings of
practical rationality. Rawls seeks to incorporate a relatively
uncontroversial account of rationality into the original position, one
that he thinks most any account of practical rationality would endorse
as at least necessary for rational decision. The parties are then
described as rational in a formal or "thin" sense that is
characteristic of the theories of rational and social choice. They are
resourceful, take effective means to their ends, and seek to make
their preferences consistent. They also take the course of action that
is more likely to achieve their ends (other things being equal). And
they choose courses of action that satisfy more rather than fewer of
their purposes. Rawls calls these principles of rational choice the
"counting principles" (TJ §§25, 63; JF 87).
More generally, for Rawls rational persons upon reflection can
formulate a *conception of their good*, or of their primary
values and purposes and the best way of life for themselves to live
given their purposes. This conception incorporates their primary aims,
ambitions, and commitments to others, and is informed by the
conscientious moral, religious, and philosophical convictions that
give meaning for them to their lives. Ideally, rational persons have
carefully thought about these things and their relative importance,
and they can coherently order their purposes and commitments into a
"*rational plan of life*," which extends over their
lifetimes (TJ §§63-64). For Rawls, rational persons
regard life as a whole, and do not give preference to any particular
period of it. Rather in drawing up their rational plans, they are
equally concerned with their (future) good at each part of their
lives. In this regard, rational persons are
*prudent*--they care for their future good, and while they
may discount the importance of future purposes based on probability
assessments, they do not discount the achievement of their future
purposes simply because they are in the future (TJ, §45). (For a
different view, see Parfit, 1984.)
These primary aims, convictions, ambitions, and commitments are among
the primary motivations of the parties in the original position. The
parties want to provide favorable conditions for the pursuit of the
various elements of the rational plan of life that defines a good life
for them. This is ultimately what the parties are trying to accomplish
in their choice of principles of justice. In this sense they are
rational.
Rawls says the parties in the original position are "mutually
disinterested," in the sense that "they take no interest
in each other's interests" (TJ 110/[omitted in rev. ed.]).
This does not mean that they are self-interested or selfish persons,
indifferent to the welfare of others. The interests advanced by the
parties' life plans, Rawls says, "are not assumed to be
interests in the self, they are interests of a self that regards its
conception of the good as worthy of satisfaction..." (TJ
127/110) Most people are concerned, not just with their own happiness
or welfare, but with others as well, and have all kinds of
commitments, including other-regarding, beneficent, and moral
purposes, that are part of their conceptions of the good. But in the
original position itself the parties are not altruistically motivated
to benefit *each other, in their capacity as contracting
parties*. They try to do as best as they can for themselves and
for those persons and causes that they care for. Their situation is
comparable, Rawls says, to that of "trustees or
guardians" acting to promote the interests of the beneficiaries
they represent. (JF, 84-85) Trustees cannot sacrifice the
well-being of the beneficiary they represent to benefit other trustees
or individuals. If they did, they would be derelict in their duties.
It is perhaps to address the common criticism that the parties to the
original position are self-interested that Rawls in the revised
edition (TJ 110 rev.) omitted the phrase from the 1st
edition, cited above, that "the parties take no interest in each
other's interest." Moreover in later writings increasingly
he says that we should imagine that the parties are
"representatives" of free and equal citizens and their
interests and "act as guardians or trustees," seeking to
do as best as they can for the particular individuals that each
trustee represents (PL §4, JF §24). In either case, Rawls
believes this account of the parties' motivations promotes
greater clarity, and that to attribute to the parties moral
motivations or benevolence towards each other would not result in
definite choice of a conception of justice (TJ,
148-9/128-9; 584/512). (For example, how much benevolence
should the parties have towards one another or towards people in
general? Surely not impartial benevolence towards everyone, for then
we might as well dispense with the social contract and rely on a
disinterested impartial spectator point of view. It is one of the
"circumstances of justice" that people have different and
conflicting values, and they value their own purposes and special
commitments to others more than they value others' purposes and
special commitments. This is a good thing, not to be discouraged or
undermined by justice, but rather regulated by it, since special
obligations and commitments to specific others give meaning to
people's lives. (cf. Scheffler, 2001, chs.3, 4, 6) But if not
equal concern for other parties and/or persons including themselves
(and perhaps other animals), then how much care and concern should the
parties in the original position exhibit towards others generally, as
compared with concern for themselves and their own good? (Half as much
concern for others' good as for their own? One-fifth as much?
There is no clear answer.) Rawls's thought is that, so far as
justice is concerned, fair regard for others' interests is best
represented by each party's rational choice behind a thick veil
of ignorance; for each party has to be equally concerned with the
consequences of their choice of principles for each position in
society, since they could end up in that same position.
Mutual disinterest of the parties also means they are not moved by
envy or rancor towards each other or others generally. This implies
that the parties do not strive to be wealthier or better off than
others for its own sake, and thus do not sacrifice advantages to
prevent others from having more than they do. Instead, each party in
the original position is motivated to do as well as they can in
promoting the optimal achievement of the many purposes that constitute
their rational conception of the good, without regard to how much or
how little others may have. For this reason they strive to guarantee
themselves a share of primary social goods that is at least sufficient
to enable them each to effectively pursue their (unknown) conception
of the good.
Another feature of the parties is that they represent not just
themselves, but also family lines, including their descendants, or at
least their own children. This assumption is needed, Rawls says, to
include representation of "the interests of all,"
including children and future generations. In the first edition of
*Theory* Rawls says, "For example, we may think of the
parties as heads of families and therefore as having a desire to
further the welfare of their nearest descendants" (Rawls 1971,
128). Because of criticisms of the heads of families assumption (by
English, 1977, and others), Rawls said in the revised edition that the
problem of future generations can be addressed by the parties assuming
that all preceding generations have followed the same principles that
the parties choose to apply to future generations. (Rawls 1999a, 111
rev.). The "heads of families" assumption is discussed
further in connection with feminist criticisms of Rawls in the
supplementary section:
>
> A Liberal Feminist Critique of the Original Position and Justice within the Family
>
in the supplementary document
Further Topics on the Original Position.
Though the parties are not motivated by beneficence or even a concern
for justice, still they have a moral capacity for
*reasonableness* and a *sense of justice* (TJ, 145/125
rev.). Rawls distinguishes between the requirements of rationality and
reasonableness; both are part of practical reasoning about what we
ought to do (JF 6-7; 81-2). The concept of "the
Rational" concerns a person's *good*--hence
Rawls refers to his account of the good as "goodness as
rationality." A person's good for Rawls is the rational
plan of life they would choose under hypothetical conditions of
"deliberative rationality," where there is full knowledge
of one's circumstances, capacities, and interests, as well as
knowledge of the likelihood of succeeding at alternative life plans
one may be drawn to (TJ, §64). "The Reasonable" on
the other hand addresses the concept and principles of *right*,
including individual moral duties and obligations as well as moral
requirements of right and justice that apply to institutions and
society. Both rationality and reasonableness are independent aspects
of practical reason for Rawls. They are independent in that Rawls,
unlike Hobbes and proponents of other interest-based social contract views, does not
regard justice and the reasonable as simply principles of prudence
that are beneficial for a person to comply with in order to
successfully pursue their purposes in social contexts (cf. Gauthier,
1984). Unlike Hobbes, Rawls does not argue that an immoral or unjust
person is irrational, or that morality is necessarily required by
rationality in the narrow sense of maximizing individual utility or
taking effective means to realize one's purposes. But
rational persons who violate demands of justice are unreasonable in so
far as they infringe upon moral principles and requirements of
practical reasoning. Being reasonable, even if not required by
rationality, is still an independent aspect of practical reason. Rawls
resembles Kant in this regard (PL 25n); his distinction between the
reasonable and rational parallels Kant's distinction between
categorical and hypothetical imperatives.
Essential to being reasonable is having a *sense of justice*
with the capacities to understand and reason about and act upon what
justice requires. The sense of justice is a normally effective desire
to comply with duties and obligations required by justice; it includes
a willingness to cooperate with others on terms that are fair and that
reasonable persons can accept and endorse. Rawls sees a sense of
justice as an attribute people normally have; it "would appear
to be a condition for human sociability" (TJ, 495/433 rev.). He
rejects the idea that people are motivated *only* by
self-interest in all that they do; he also rejects the Hobbesian
assumption that a willingness to do justice must be grounded in
enlightened self-interest. It is essential to Rawls's argument
for the feasibility and stability of justice as fairness that the
parties upon entering society have an effective sense of justice, and
that they are capable of doing what justice requires of them for its
own sake, or at least because they believe this is what morality
requires of them. An amoralist, Rawls believes, is largely a
philosophical construct; amoralists who actually exist Rawls regards
as sociopaths.
Subsequent to *A Theory of Justice*, beginning in
'Kantian Constructivism in Moral Theory' (1980) (CP
303ff.) Rawls says that the parties to the original position have a
"highest-order interest" in the development and full and
informed exercise of their two "moral powers": their
capacity for a sense of justice as well as in their capacity for a
rational conception of the good. Fulfilling these interests in the
moral powers is one of the main aims behind their agreement on
principles of justice. Subsequently in *Political Liberalism*
(1993) Rawls changed this to the parties' "higher-order
interests" in the development and exercise of the two moral powers
(to avoid giving the appearance that the moral powers were final ends
for free and equal moral persons, as was argued in TJ). The
parties' interest in developing these two moral powers is a
substantive feature of Rawls's account of the rationality of
free and equal persons in the original position itself. (In this
regard, his account of goodness as rationality is not as
"thin" as in social theory; cf. TJ 143/124 rev.) Here
Rawls is still not attributing specifically moral motives--a
desire to be reasonable and do what is right and just for their own
sake--to the *parties* in the original position. The idea
behind the parties' rationality in cultivating their sense of
justice is that, since being reasonable and exercising one's
sense of justice by complying with fair terms is a condition of human
sociability and social cooperation, then it is in people's
*rational interest*--part of their good--that they
normally develop their capacities for justice under social conditions.
Otherwise they will not be in a position to cooperate with others and
benefit from social life. A person who is without a sense of justice
is wholly unreasonable and as a result is normally shunned by others,
for they are not trustworthy or reliable or even safe to interact
with. Since having a sense of justice is a condition of taking part in
social cooperation, the parties have a "higher-order
interest" in establishing conditions for the development and
full exercise of their capacity for a sense of justice. The
parties' interest in developing their capacity for a sense of
justice is then a rational interest in being reasonable; justice is
then regarded *by the parties* as instrumental to their
realizing their conception of the good. (Here again, it is important
to distinguish the purely rational motivation of the parties or their
trustees in the original position from that of free and equal citizens
in a well-ordered society, who are normally morally motivated by their
sense of justice to do what is right and just for its own sake.)
Three factors then play a role in motivating the parties in the
original position: (1) they aim to advance their determinate
conception of the good, or rational plan of life, even though they do
not know what that conception is. Moreover, they also seek conditions
that enable them to exercise and develop their "moral
powers," namely (2) their rational capacities to form, revise
and rationally pursue a conception of their good, and (3) their
capacity to be reasonable and to have an effective sense of justice.
These are the three "higher-order interests" the parties
to Rawls's original position aim to promote in their agreement
on principles of justice.
The three higher-order interests provide the basis for Rawls's
account of *primary social goods* (TJ §15; PL
178-190). The primary goods are the all-purpose social means that
are necessary to the exercise and development of the moral powers and
to pursue a wide variety of conceptions of the good. Rawls describes
them initially in *Theory* as goods that any rational person
should want, whatever their rational plan of life. The primary social
goods are basically: rights and liberties; powers and diverse
opportunities; income and wealth; and the social bases of
self-respect. 'Powers' refer not (simply) to a capacity to
effect outcomes or influence others' behavior. Rawls rather uses
the term 'powers' to refer to the legal and other
institutional abilities and prerogatives that attend offices and
social position. Hence, he sometimes refers to the primary goods of
"powers and prerogatives of offices and positions of authority
and responsibility" (JF 58). Members of various professions and
trades have institutional powers and prerogatives that are
characteristic of their position and which are necessary if they are
to carry out their respective roles and responsibilities. By income
and wealth Rawls says he intends "all-purpose means" that
have an exchange value, which are generally needed to achieve a wide
range of ends (JF 58-59). Finally, "the social bases of
self-respect" are features of institutions that are needed to
enable people to have the confidence that they and their position in
society are respected and that their conception of the good is worth
pursuing and achievable by themselves. These features depend upon
history and culture. Primary among these social bases of self-respect
in a democratic society, Rawls will contend, are equal recognition of
persons as citizens, and hence the institutional conditions needed for
equal citizenship, including equality of basic rights and liberties
with equal political rights; fair equality of opportunities; and
personal independence guaranteed by adequate material means for
achieving it. The social bases of self-respect are crucial to
Rawls's argument for *equal* basic liberties, especially
political equality and equal rights of political participation.
The parties to the original position are motivated to achieve a fully
adequate share of primary goods so they can achieve their higher-order
interests in pursuing their rational plans of life and exercising
their moral powers. "They assume that they normally prefer more
primary social goods rather than less" (TJ, 142/123 rev.). This
too is part of being rational. Because they are not envious, their
concern is with the absolute level of primary goods, not their share
relative to other persons.
To sum up, the parties in the original position are formally rational
in that they are assumed to have and to effectively pursue a rational
plan of life with a schedule of coherent purposes and commitments that
they find valuable and give their lives meaning. As part of their
rational plans, they have a substantive interest in the adequate
development and full exercise of their capacities to be rational and
to be reasonable. These "higher-order interests" together
with their rational life plans provide them with sufficient reason to
procure for themselves in their choice of principles of justice an
adequate share of the primary social goods that enable them to achieve
these higher-order ends and effectively pursue their conceptions of
the good.
A final feature of Rawls's account of rationality is a normal
human tendency he calls "the Aristotelian principle" (TJ
§65). This "deep psychological fact" says that, other
things being equal, people normally find activities that call upon the
exercise of their developed capacities to be more interesting and
preferable to engaging in simpler tasks, and their enjoyment increases
the more the capacity is developed and realized and the greater the
complexity of activities (TJ, 426/374). Humans enjoy doing something
as they become more proficient at it, and of two activities they
perform equally well, they normally prefer the one that calls upon a
larger repertoire of more intricate and subtler discriminations.
Rawls's examples: someone who does both activities well
generally prefers playing chess to checkers, and studying algebra to
arithmetic (TJ 426/374). Moreover Rawls, citing J.S. Mill, believes
that the development of at least some of our "higher capacities"
(Mill's term) is normally important to our sense of
self-respect. These general facts imply that rational people should
incorporate into their life plans activities that call upon the
exercise and development of their talents and skills and distinctly
human capacities (TJ 432/379). This motivation becomes especially
relevant to Rawls's argument for the stability of justice as
fairness, the good of social union, and the good of justice (TJ
§79, §86; see below, §5.3). The important point here is
that the Aristotelian principle is taken into account by the parties
in their decision on principles of justice. They want to choose
principles that maintain their sense of self-respect and enable them
to freely develop their human capacities and pursue a wide range of
activities, as well as engage their capacities for a sense of
justice.
## 5. Other Conditions on Choice in the Original Position
The veil of ignorance is the primary moral constraint upon the
rational choice of the parties in the original position. There are
several other conditions imposed on their agreement.
### 5.1 The Circumstances of Justice (TJ §22)
Among the general facts the parties know are "the circumstances
of justice." Rawls says these are "conditions under which
human cooperation is both possible and necessary" (TJ 126/109
rev.). Following Hume, Rawls distinguishes two general kinds: the
objective and subjective circumstances of justice. The former include
physical facts about human beings, such as their rough similarity in
mental and physical faculties, and vulnerability to the united force
of others. Objective circumstances also include conditions of moderate
scarcity of resources: there are not enough resources to satisfy
everyone's desires, but there are enough to provide all with
adequate satisfaction of their basic needs; unlike conditions of
extreme scarcity (e.g. famine), cooperation then seems productive and
worthwhile for people.
Among the subjective circumstances of justice is the parties'
mutual disinterestedness, which reflects the "limited
altruism" (TJ 146/127) of persons in society. Free and equal
persons have their own plans of life and special commitments to
others, as well as different philosophical and religious beliefs and
moral doctrines (TJ 127/110). Hume says that if humans were
impartially benevolent, equally concerned with everyone's
welfare, then justice would be unnecessary. People then would
willingly sacrifice their interests for the greater advantage of
others. They would not be concerned about their personal rights or
possessions, and property would be unnecessary (Hume 1777 [1970,
185-186]). But we are more concerned with our own aims and
interests--which include our interests in the interests of those
nearer and dearer to us--than we are with the interests of
strangers with whom we have few if any interactions. This implies a
potential conflict of human interests. Rawls adds that concern for our
interests and plans of life does not mean we are selfish or have
interests only in ourselves--again, interests *of* a self
should not be confused with interests *in* oneself; we have
interests in others and in all kinds of causes, ends, and commitments
to other persons (TJ 127/110). But, as history shows, our benevolent
interests in others and in religious and philosophical doctrines are
at least as often the cause of social and international conflict as is
self-interest.
The subjective circumstances of justice also include limitations on
human knowledge, thought, and judgment, as well as emotional
influences and great diversity of experiences. These lead to biases
and inevitable disagreements in factual and other judgments, as well
as to differences in religious, philosophical, and moral convictions.
In *Political Liberalism*, Rawls highlights these subjective
circumstances, calling them "the burdens of judgment" (PL
54-58). They imply, significantly, that regardless of how impartial
and altruistic people are, they still will disagree in their factual
judgments and in religious, philosophical and moral doctrines.
Disagreements in these matters are inevitable even among fully
rational and reasonable people. This is "the fact of reasonable
pluralism" (PL 36), which is another general fact known to the
parties in the original position. Reasonable pluralism of doctrines
lends significant support to Rawls's arguments for the first
principle of justice, especially to equal basic liberties of
conscience, expression, and association.
### 5.2 Publicity and other Formal Constraints of Right (TJ §23)
There are five "formal constraints" associated with the
concept of right that Rawls says the parties must take into account in
coming to agreement on principles of justice. The more a conception of
justice satisfies these formal constraints of right, the more reason
the parties have to choose that conception. The formal constraints of
right are: generality, universality in application, ordering of
conflicting claims, publicity, and finality. The *ordering
condition* says that a conception of justice should aspire to
completeness: it should be able to resolve conflicting claims and
order their priority. Ordering implies a systematicity requirement:
principles of justice should provide a determinate resolution to
problems of justice that arise under them; and to the degree that a
conception of justice is not able to order conflicting claims and
resolve problems of justice, that gives greater reason against
choosing it in the original position compared with those that do. The
ordering condition is important in Rawls's argument against
pluralist moral doctrines he calls "Intuitionism."
Sidgwick attaches a great deal of importance to the ordering
condition, and contends that "Universal Hedonism" is the
only reasonable moral doctrine that can satisfy it (Sidgwick 1907
[1981], 406). Rawls would have to concede that justice as fairness
does not possess, at least theoretically, the same degree of
systematic ordering of claims as does hedonistic utilitarianism which
has cardinal measures of utility. For example, Rawls's priority
principles can resolve conflicting claims regarding the priority of
basic liberties over fair equality of opportunity, fair opportunity
over the difference principle, the difference principle over the
principle of efficiency and the general welfare, as well as many
disputes arising within the difference principle itself regarding
measures that maximally promote the position of the least advantaged.
But there is no priority principle or algorithm to resolve many
conflicts between basic liberties themselves (e.g. conflicts between
freedom of speech vs. rights of security and integrity of persons in
hate speech cases; or the conflict between free speech and the fair
value of equal political liberties in restrictions on campaign finance
contributions). Often in such conflicts we have to weigh competing
considerations and come to a decision about where the greater balance
of reasons lies, much like intuitionist views (see Hart 1973).
(Rawls, in 'Basic Liberties and their Priority' (PL ch. VIII),
addresses this problem to some degree with the idea of the
significance of a basic liberty to the development and full and
informed exercise of the moral powers.) The lack of a priority or
algorithmic ordering principle does not mean the balance of reasons in
such conflicts regarding basic liberties is indeterminate but rather
that reasonable individuals will often disagree, and that final
decisions practically will have to be made through the appropriate
democratic, judicial, or other procedures (which of course can be
mistaken). But for Rawls a moral conception's capacity to
clearly order conflicting claims is not dispositive, but one among
several formal and substantive moral conditions that a conception of
justice should satisfy (ultimately in reflective equilibrium).
The *publicity condition* says that the parties are to
assume that the principles of justice they choose will be *publicly
known* to members of society and recognized by them as the bases
for their social cooperation. This implies that people will not be
uninformed, manipulated, or otherwise have false beliefs about the
bases of their social and political relations. There are to be no
"noble lies", false ideologies, or "fake news"
obscuring a society's principles of justice and the moral bases
for its basic social institutions. The publicity of principles of
justice is ultimately for Rawls a condition of respect for persons as
free and equal moral persons. Rawls believes that individuals in a
democratic society should know the bases of their social and political
relations and not have to be deceived about them in order to cooperate
and live together peaceably and on fair terms. Publicity plays an
important role in Rawls's arguments against utilitarianism and
other consequentialist conceptions. The idea of publicity is further
developed in *Political Liberalism* through the ideas of public
justification and the role of public reason in political
deliberation.
Related to publicity is that principles should be *universal in
application*. This implies not simply that "they hold for
everyone in virtue of their being moral persons" (TJ 132/114
rev.). It also means that everyone can understand the principles of
justice and use them in their deliberations about justice and its
requirements. Universality in application then imposes a limit on how
complex principles of justice can be--they must be understandable
to common moral sense, and not so complicated that only experts can
apply them in deliberations. For among other things, these principles
are to guide democratic citizens in their judgments and shared
deliberations about just laws and policies.
Both publicity and universality in application (as Rawls defines it)
are controversial conditions. Utilitarians, for example, have argued
that the truth about morality and justice is so complicated and
controversial that it might be necessary to keep fundamental moral
principles (the principle of utility) hidden from most
individuals' awareness. For morality and justice often require
much that is contrary to people's beliefs and personal
interests. Also sometimes it's just too complicated for people
to understand the reasons for their moral duties. So long as they
understand their individual duties, it may be better if they do not
understand the principles and reasons behind them. So Sidgwick argues
that the aims of utilitarianism might better be achieved if it remains
an "esoteric morality," knowledge of which is confined to
"an enlightened few" (Sidgwick 1907 [1981], 489-90).
The reason Rawls sees publicity and universality as necessary relates
to the conception of the person implicit in justice as fairness. If we
conceive of persons as free and equal moral persons capable of
political and moral autonomy, then they should not be under any
illusions about the bases of their social relations, but should be
able to understand, accept, and apply these principles in their
deliberations about justice. These are important conditions Rawls
contends for the freedom, equality, and autonomy (moral and political)
of democratic citizens.
Finally, the *generality* condition is straightforward in that
it requires that principles of justice not contain any proper names or
rigged definite descriptions; this condition, Rawls says, together
with the ordering condition, rules out free-rider and other forms of
egoism. The
*finality* condition says that moral principles of justice
provide conclusive reasons for action, providing "the final
court of appeal in practical reasoning." They override demands
of law and custom, social rules, and reasons of personal prudence and
self-interest. (TJ 135-36/116-17). Finality is one of
several Kantian conditions Rawls imposes that have been questioned by
critics on grounds that it underestimates inevitable and sometimes
irresolvable conflicts of moral reasons with other values. For
example, should reasons of justice *always* be given priority
over special obligations owed to specific persons or associations?
Should moral reasons always be given priority over reasons of love,
prudence, or even self-interest? (See Williams 1981, chs.1, 5; Wolf,
2014, chs.2, 3, 9)
### 5.3 The Stability Requirement
Rawls says, "An important feature of a conception of justice is
that it should generate its own support. Its principles should be such
that when they are embodied in the basic structure of society, people
tend to acquire the corresponding sense of justice and develop a
desire to act in accordance with its principles. In this case a
conception of justice is stable" (TJ, 138/119). The parties in
the original position are to take into account the "relative
stability" of a conception of justice and the society that
institutes it. The stability of a just society does not mean that it
must be unchanging. It means rather that in the face of inevitable
change members of a society should be able to maintain their
allegiance to principles of justice and the institutions they support.
When disruptions to society do occur (via economic crises, war,
natural catastrophes, etc.) and/or society departs from justice,
citizens' commitments to principles of justice are sufficiently
robust that just institutions are eventually restored. The role of the
stability requirement for Rawls is twofold: first, to test whether
potential principles of justice are compatible with human natural
propensities, or our moral psychology and general facts about social
and economic institutions; and second, to determine whether acting on
and from principles of justice are conducive and even essential to
realizing the human good.
To be stable principles of justice should be realizable in a
*feasible and enduring social world*, the ideal of which Rawls
calls a "well-ordered society." (See below, §6.3.)
They need to be practicably possible given the limitations of the
human condition. Moreover, this feasible social world must be one that
can endure over time, not by just any means, but by gaining the
willing support of people who live in it. People should knowingly want
to uphold and maintain society's just institutions not just
because they benefit from them, but on grounds of their *sense of
justice*. In choosing principles of justice, the parties in the
original position must take into account their "relative
stability" (TJ §76). They have to consider the degree to
which a conception (in comparison with other conceptions) describes an
achievable and sustainable system of social cooperation, and whether
the institutions and demands of such a society will attract
people's willing compliance and generally engage their sense of
justice.
For example, suppose principles of justice were to impose a duty to
practice impartial benevolence towards all people, and thus a duty to
show no greater concern for the welfare of ourselves and loved ones
than we do towards billions of others. This principle demands too much
of human nature and would not be sustainable or even
feasible--people simply would reject its onerous demands. But
Rawls's stability requirement implies more than just
'ought implies can.' It says that principles of justice
and the scheme of social cooperation they describe should evince
"stability for the right reasons" (as Rawls later says in
PL xli, 143f., 459f.). Recall here the higher-order interests of the
parties in development and exercise of their capacities for justice. A
just society should be able to endure not simply as a *modus
vivendi*, or compromise among conflicting interests; nor simply
endure by promoting the majority of people's interests and/or
coercive enforcement of its provisions. Stability "for the right
reasons," as conceived in *Theory*, requires that people
support society for *moral reasons of justice*. Society's
basic principles must respond to reasonable persons' capacities
for justice and engage their sense of justice. Rawls regards our moral
capacities for justice as an integral part of our nature as sociable
beings. He believes that one role of a conception of justice is to
accommodate human capacities for sociability, the capacities for
justice that enable us to be cooperative social beings. So not only
should a conception of justice advance human interests, but it should
also answer to our moral psychology by enabling us to knowingly and
willingly exercise our moral capacities and sensibilities, which are
among the moral powers to be reasonable. This is one way that
Rawls's conception of justice is "ideal-based" (CP
400-401 n.): it is based in an ideal of human beings as free and
equal moral persons and an ideal of their social relations as
generally acceptable and justifiable to all reasonable persons
whatever their circumstances (the ideal of a well-ordered
society).
This relates to the second ground for the stability condition, which
can only be mentioned here: it is that the correct principles of
justice should be compatible with, and even integral to realizing the
*human good*. It speaks strongly in favor of a conception of
justice that it is compatible with and promotes the human good. First,
if a conception of justice requires of many reasonable people that
they change their conscientious philosophical or religious convictions
for the sake of satisfying a majority's beliefs, or abandon
their pursuit of the important interests that constitute their plan of
life, this conception could not gain their willing support and would
not be stable over sustained periods of time. Moreover, Rawls contends
that a conception of justice should enable citizens to fully exercise
and adequately develop their moral powers, including their capacities
for justice. It must then engage their sense of justice in such a way
that they do not regard justice as a burden but should come to
experience that acting on and from principles of justice is worth
doing for its own sake. For Rawls, it speaks strongly in favor of a
conception of justice that acting for the sake of its principles is
experienced as an activity that is good in itself (as Rawls contends
in *Theory of Justice*); or at least that willing compliance
with requirements of justice is an essential part of the reasonable
comprehensive philosophical, religious, or moral doctrines that
reasonable persons affirm (as Rawls contends later in *Political
Liberalism*). For then justice and the full and informed exercise
of the sense of justice are for reasonable and rational persons
essential goods, preconditions for their living a good life, as that
is defined by their rational conception of the good.
## 6. The Arguments for the Principles of Justice from the Original Position
The original position is not a bargaining situation where the parties
make proposals and counterproposals and negotiate over different
principles of justice. Nor is it a wide-ranging discussion where the
parties debate, deliberate, and design their own conception of justice
(unlike, for example, Habermas's discourse ethics; see Habermas,
1995). Instead, the parties' deliberations are much more
constrained and regulated. They are presented with a list of
conceptions of justice taken from the tradition of western political
philosophy. These include different versions of utilitarianism,
perfectionism, and intuitionism (or pluralist views), rational egoism,
justice as fairness, and a group of "mixed conceptions"
that combine elements of these. (For Rawls's initial list see TJ
124/107.) Rawls later says libertarian entitlement principles should
also be added to the list, and contends the principles of justice are
still preferable (JF 83). (Nozick agrees, and says the original
position is incapable of yielding historical entitlement principles,
but only patterned end-state principles instead (Nozick 1974,
198-204). Rawls replies that the difference principle does not
conform to any observable pattern but grounds fair distributions in a
fair social process that must actually be carried out (PL 282-83).)
The parties' deliberations are confined to discussing and
agreeing upon the conception that each finds most rational, given
their specified interests. In a series of pairwise comparisons, they
consider all the conceptions of justice made available to them and
ultimately agree unanimously to accept the conception that survives
this winnowing process. In this regard, the original position is best
seen as a kind of selection process wherein the parties'
deliberations are constrained by the background conditions imposed by
the original position as well as the list of conceptions of justice
provided to them. They are assigned the task of agreeing on principles
for designing the basic structure of a self-contained society under
the circumstances of justice.
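The winnowing procedure just described has a simple formal shape: a running survivor meets each remaining candidate in a pairwise comparison, and the winner goes on to face the next. The sketch below is only illustrative, not Rawls's own formalism; the candidate list and the toy worst-case scores are invented, and the comparison rule loosely anticipates the maximin reasoning discussed in §6.2.

```python
# Illustrative sketch (names and scores invented): the original position
# as a winnowing process over a fixed list of conceptions of justice.

def winnow(conceptions, preferred):
    """Return the conception surviving successive pairwise comparisons.
    `preferred(x, y)` encodes which of two conceptions the parties find
    more rational, given their specified interests."""
    survivor = conceptions[0]
    for challenger in conceptions[1:]:
        survivor = preferred(survivor, challenger)
    return survivor

# Toy stand-in for the parties' shared criterion: prefer the conception
# whose (hypothetical) worst-case outcome for a representative person
# is higher.
worst_case = {"justice as fairness": 7, "average utilitarianism": 3,
              "perfectionism": 2, "rational egoism": 1}
prefer = lambda x, y: x if worst_case[x] >= worst_case[y] else y

result = winnow(list(worst_case), prefer)  # "justice as fairness"
```

Because the parties all share the same information and interests, every pairwise comparison comes out the same way for each of them, which is why the agreement is unanimous.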
In making their decision, the parties are motivated only by their own
rational interests. They do not take moral considerations of justice
into account except in so far as these considerations bear on their
achieving their interests within society. Their interests again are
defined in terms of their each acquiring an adequate share of primary
social goods (rights and liberties, powers and opportunities, income
and wealth, etc.) and achieving the background social conditions
enabling them to effectively pursue their conception of the good and
realize their higher-order interests in developing and exercising
their moral powers. Since the parties are ignorant of their particular
conceptions of the good and of all other particular facts about their
society, they are not in a position to engage in bargaining. In effect
they all have the same general information and are motivated by the
same interests.
Rawls makes four arguments in *Theory*, Part I for the
principles of justice. The main argument for the difference principle
is made later in TJ §49, and is amended and clarified in
*Justice as Fairness: A Restatement*. The common theme
throughout the original position arguments is that it is more rational
for the parties to choose the principles of justice over any other
alternative. Rawls devotes most of his attention to the comparison of
justice as fairness with classical and average utilitarianism, with
briefer discussions of perfectionism (TJ §50) and intuitionism
(TJ 278-81). Here I'll focus discussion primarily on
Rawls's comparison between justice as fairness and
utilitarianism.
### 6.1 The Principles of Justice
Before turning to Rawls's arguments from the original position,
it is helpful to have available the principles of justice and other
principles that constitute Justice as Fairness.
*First Principle*: "Each person has an equal right to the
most extensive total system of equal basic liberties compatible with a
similar system of liberty for all." (TJ 266)
The first principle was revised in 1982 to say "Each person has
an equal right to a fully adequate scheme of equal basic liberties
..." (PL, 291), replacing "... the most extensive
scheme of equal basic liberties." Notably, Rawls also
introduces in *Political Liberalism*, almost in passing, *a
principle of basic needs* that precedes the first principle and
requires that citizens' basic needs be met at least to the
extent that they can understand and fruitfully exercise their basic
rights and liberties. (PL 7; JF 79n.) This social minimum is also said
in *Political Liberalism* to be a "constitutional
essential" for any reasonable liberal conception of justice. (PL
166, 228ff.; JF 47, n.7)
The basic rights and liberties protected by the first principle are
specified by a list (see TJ 53f., PL 291): liberty of conscience and
freedom of association (TJ §§33-34); freedom of
thought and freedom of speech and expression (PL, pp.340-363);
the integrity and freedom of the person and the right to hold personal
property; equal rights of political participation and their fair value
(TJ §§36-37); and the rights and liberties protected
by the rule of law (due process, freedom from arbitrary arrest, etc.
TJ §38). (Rawls says the right to ownership of means of
production and laissez-faire freedom of contract are not included
among the basic liberties (TJ, 54 rev.). Also freedom of movement and
free choice of occupation are said to be primary goods protected by
the fair equality of opportunity principle (PL 76; JF 58f.).)
*Second Principle*: "Social and economic inequalities are
to satisfy two conditions. First they must attach to offices and
positions open to all under conditions of fair equality of
opportunity; and second they must be to the greatest advantage of the
least advantaged members of society [the difference principle]"
consistent with the just savings principle. (PL 281, JF 42-43,
TJ 301/266 rev.)
*Just Savings Principle*: Each generation should save for
future generations at a savings rate that they could rationally expect
past generations to have saved for them. (TJ §44; JF
159-160)
*Principles for individuals* include (a) *the natural
duties* to uphold justice, mutual respect, mutual aid, and not to
injure or harm the innocent (TJ §§19, 51); and (b) *the
principle of fairness*, to do one's fair share in just or
nearly just practices and institutions from which one accepts
benefits (which grounds the principle of fidelity, to keep
one's promises and commitments) (TJ §§18, 52).
*The Priority Principles*: the principles of justice are ranked
in lexical order. (a) The priority of liberty requires that basic
liberties can only be restricted to strengthen the system of liberties
shared by all. (b) Fair equality of opportunity is lexically prior to
the difference principle. (c) The second principle is prior to the
principle of efficiency and maximizing the sum of advantages. (TJ
302/266 rev.)
*The General Conception of Justice:* "All social
goods--liberty and opportunity, income and wealth, and the bases
of self-respect, are to be distributed equally unless an unequal
distribution of any or all of these goods is to the advantage of the
least favored." (TJ 1971, 302) Note: The general conception is
the difference principle generalized to all primary goods (TJ 1971,
83); it applies in non-ideal conditions where the priority of liberty
and opportunity is not sustainable.
### 6.2 The Argument from the Maximin Criterion (TJ, §§26-28)
Describing the parties' choice as a rational choice subject to
the reasonable constraints imposed by the original position allows
Rawls to invoke the theory of rational choice and decision under
conditions of uncertainty. In rational choice theory there are a
number of potential "strategies" or rules of choice that
are more or less reliably used depending on the circumstances. One
rule of choice--called "maximin"--directs that
we play it as safe as possible by choosing the alternative whose worst
outcome leaves us better off than the worst outcome of all other
alternatives. The aim is to "maximize the minimum"--to make the
worst outcome for one's position as good as possible (measured in
terms of welfare or, for Rawls, one's share of primary social
goods). To follow this
strategy, Rawls says you should choose as if your enemy were to assign
your social position in whatever kind of society you end up in. By
contrast another strategy leads us to focus on the most advantaged
position and says we should "maximize the maximum"
potential gain--"maximax"--and choose the
alternative whose best outcome leaves us better off than the best
outcome of all other alternatives. Which, if either, of these
strategies is more sensible
to use depends on the circumstances and many other factors.
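The contrast between maximin and maximax can be made concrete in a
short sketch. The payoff numbers below are hypothetical, chosen only
to illustrate how the two rules come apart; they are not Rawls's.

```python
# Each alternative lists the outcomes (shares of primary social goods,
# say) for the different positions one might end up occupying.
# The numbers are illustrative, not drawn from Rawls.
alternatives = {
    "justice_as_fairness": [40, 55, 70],   # worst-off position guaranteed 40
    "utility_principle":   [5, 60, 100],   # higher top outcome, worst-off gets 5
}

def maximin(options):
    """Choose the alternative whose WORST outcome is best."""
    return max(options, key=lambda name: min(options[name]))

def maximax(options):
    """Choose the alternative whose BEST outcome is best."""
    return max(options, key=lambda name: max(options[name]))

print(maximin(alternatives))  # -> justice_as_fairness (min 40 beats min 5)
print(maximax(alternatives))  # -> utility_principle (max 100 beats max 70)
```

Maximin ranks alternatives by their floors, maximax by their ceilings;
which ranking is sensible depends, as the text says, on the
circumstances.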
A third strategy advocated by orthodox Bayesian decision theory, says
we should always choose to *directly maximize expected
utility*. To do so under conditions of uncertainty of outcomes,
the degree of uncertainty should be factored into one's utility
function, with probability estimates assigned to alternatives based on
the limited knowledge that one has. Given these subjective estimates
of probability incorporated into one's utility function, one can
always choose the alternative that maximizes *expected*
utility. Since it simplifies matters to apply the same rule of choice
to all decisions, this is a highly attractive idea, so long as one can
accept that it is normally safe to assume that the maximization
of *expected* utility leads over time to maximizing
*actual* utility.
What about those extremely rare instances where there is absolutely
*no* basis upon which to make probability estimates? Suppose
you don't even have a hunch regarding the greater likelihood of
one alternative over another. According to orthodox Bayesian decision
theory, the "principle of insufficient reason" should then
be observed; it says that when there is no reason to assign a greater
likelihood to one alternative rather than another, then an *equal
probability* is to be assigned to each potential outcome. This
makes sense on the assumption that if you have no more premonition of
the likelihood of one option rather than another, they are *for all
you know equally likely* to occur. By observing this rule of
choice consistently over time, a rational chooser presumably should
maximize expected individual utility, and hopefully actual utility as
well.
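The principle of insufficient reason can be put in the same terms:
with no probability information, assign each of the n outcomes
probability 1/n and maximize expected value. A minimal sketch with
hypothetical numbers (not Rawls's) shows how an equiprobability
chooser can prefer an alternative with a far worse worst case:

```python
# With no basis for probability estimates, the principle of insufficient
# reason assigns each of the n possible outcomes probability 1/n, and
# the chooser maximizes the resulting expected value. Numbers are
# illustrative only.
def expected_value_equiprobable(outcomes):
    return sum(outcomes) / len(outcomes)

alternatives = {
    "justice_as_fairness": [40, 55, 70],   # expected value 55.0, worst case 40
    "utility_principle":   [5, 65, 110],   # expected value 60.0, worst case 5
}

best = max(alternatives, key=lambda a: expected_value_equiprobable(alternatives[a]))
print(best)  # -> utility_principle: higher expected value despite the worse floor
```

This is the shape of Harsanyi's position discussed below: under
equiprobability, the alternative with the higher average wins even
though its worst outcome is much worse.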
What now is the appropriate decision rule to be used to choose
principles of justice under conditions of complete uncertainty of
probabilities in Rawls's original position? Rawls argues that,
given the enormous gravity of choice in the original position, plus
the fact that the choice is not repeatable (there's no
opportunity to renegotiate or revise one's decision), it is
rational for the parties to follow the maximin strategy when choosing
between the principles of justice and principles of average or
aggregate utility (or any other principles that do not guarantee basic
rights, liberties, opportunities, and a social minimum). Not
surprisingly, following the maximin rule of choice results in choice
of the principles of justice over the principles of utility (average
or aggregate); for unlike utilitarianism, justice as fairness
guarantees equal basic liberties, fair equal opportunities, and an
adequate social minimum for all citizens.
Why does Rawls think maximin is the rational choice rule? Recall what
is at stake in choice from the original position. The decision is not
an ordinary choice. It is rather a unique and irrevocable choice where
the parties decide the basic structure of their society, or the kind
of social world they will live in and the background conditions
against which they will develop and pursue their aims. It is a kind of
superchoice--an inimitable choice of the background conditions
for all one's future choices. Rawls argues that because of the
unique importance of the choice in the original
position--including the gravity of the choice, the fact that it
is not renegotiable or repeatable, and the fact that it determines all
one's future prospects--it is rational to follow the
maximin rule and choose the principles of justice. For should even the
worst transpire, the principles of justice guarantee an adequate share
of primary goods enabling one to maintain one's conscientious
convictions and sincerest affections and pursue a wide range of
permissible ends by protecting equal basic liberties and fair equal
opportunities and guaranteeing an adequate social minimum of income
and wealth. The principles of utility, by contrast, provide no
guarantee of any of these benefits.
Rawls says that in general there are three conditions that must be met
in order to make it rational to follow the maximin rule (TJ
154-55/134 rev.). First, there should be no basis or at most a
very insecure basis upon which to make estimates of probabilities.
Second, the choice singled out by observing the maximin rule is an
acceptable alternative we can live with, so that one cares relatively
little by comparison for what is to be gained above the minimum
conditions secured by the maximin choice. When this condition is
satisfied, then no matter what position one eventually ends up in, it
is at least acceptable. The third condition for applying the maximin
rule is that all the other alternatives have worse outcomes that we
could not accept and live with. Of these three conditions Rawls later
says that the first plays a minor role, and that it is the second and
third conditions that are crucial to the maximin argument for justice
as fairness (JF 99). This seems to suggest that, even if the veil of
ignorance were not as thick and parties did have some degree of
knowledge of the likelihood of ending up in one social position rather
than another, still it would be more rational to choose the principles
of justice over the principle of utility.
Rawls contends all three conditions for the maximin strategy are
satisfied in the original position when choice is made between the
principles of justice and the principle of utility (average and
aggregate). Because all one's values, commitments, and future
prospects are at stake in the original position, and there is no hope
of renegotiating the outcome, a rational person would agree to the
principles of justice instead of the principle of utility. For the
principles of justice imply that no matter what position you occupy in
society, you will have the rights and resources needed to maintain
your valued commitments and purposes, to effectively exercise your
capacities for rational and moral deliberation and action, and to
maintain your sense of self-respect as an equal citizen. With the
principle of utility there is no such guarantee; everything is
"up for grabs" (so to speak) and subject to loss if
required by the greater sum of utilities. Conditions (2) and (3) for
applying maximin are then satisfied in the comparison of justice as
fairness with the principle of (average or aggregate) utility.
It is often claimed that Rawls's parties are
"risk-averse;" otherwise they would never follow the
maximin rule but would take a chance on riskier but more rewarding
outcomes provided by the principle of utility. Thus, John Harsanyi
contends that it is more rational under conditions of complete
uncertainty *always* to choose according to the principle of
insufficient reason and assume an equal probability of occupying any
position in society. When the equiprobability assumption is made, the
parties in the original position would choose the principle of average
utility instead of the principles of justice (Harsanyi 1975).
Rawls denies that the parties have a psychological disposition to
risk-aversion. They have no knowledge of their attitudes towards risk.
He argues however that it is rational to choose *as if* one
were risk averse under the highly exceptional circumstances of the
original position. His point is that, while there is nothing rational
about a fixed disposition to risk aversion, it is nonetheless rational
in some circumstances to choose conservatively to protect certain
fundamental interests against loss or compromise. Purchasing auto
liability, health, and home insurance against accident or calamity
(assuming it is affordable) does not make one a risk-averse person;
it is normally rational. The original position is such a
situation writ large. Even if one knew in the original position that
the citizen one represents enjoys gambling and taking great risks,
this would still not be a reason to gamble with their rights,
liberties and starting position in society. For if the high risk-taker
were born into a traditional, repressive, or fundamentalist society,
they might never have an opportunity for gambling and taking other
risks they normally enjoy. It is rational then even for high
risk-takers to choose conservatively in the original position and
guarantee their future opportunities to gamble or otherwise take
risks.
Harsanyi and other orthodox Bayesians contend that maximin is an
irrational decision rule, and they provide ample examples. To take
Rawls's own example, in a lottery where the loss and gain
alternatives are either (0, n) or (1/n, 1) for all natural numbers n,
maximin says choose the latter alternative (1/n, 1). This is clearly
irrational for almost any number n except very small numbers. (TJ 136
rev.). But such examples do not suffice here; simply because maximin
is under most circumstances irrational does not mean that it is
*never* rational. For example, suppose n>1 and you must have
1/n to save your own life. Given the gravity of the circumstances, it
would be rational to choose conservatively since you are guaranteed
1/n according to the maximin strategy, and there is no guarantee you
will survive if you choose according to the principle of insufficient
reason.
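Rawls's lottery example can be checked directly. Writing each
alternative as a (worst outcome, best outcome) pair, maximin picks
(1/n, 1) for every n, since 1/n > 0; a chooser applying the principle
of insufficient reason picks (0, n) for any n >= 2, since its
equiprobable expected value n/2 exceeds (1/n + 1)/2. The sketch below
assumes, for illustration, that the two outcomes in each pair are
equally likely:

```python
# Rawls's lottery example (TJ 136 rev.): alternatives with
# (worst outcome, best outcome) pairs (0, n) and (1/n, 1).

def maximin_choice(n):
    """Maximin: pick the pair with the better worst outcome."""
    a, b = (0, n), (1 / n, 1)
    return a if min(a) > min(b) else b   # 1/n > 0, so always b

def equiprobable_choice(n):
    """Principle of insufficient reason: treat both outcomes of each
    pair as equally likely and pick the higher expected value."""
    a, b = (0, n), (1 / n, 1)
    ev = lambda pair: sum(pair) / 2
    return a if ev(a) > ev(b) else b

for n in (2, 10, 1000):
    print(n, maximin_choice(n), equiprobable_choice(n))
# Maximin always returns (1/n, 1); the equiprobability rule returns
# (0, n) for n >= 2, and the gap grows with n -- which is why maximin
# looks irrational here unless, as in the life-saving case, everything
# turns on securing the guaranteed minimum.
```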
No doubt maximin *is* an irrational strategy under most
circumstances of choice uncertainty, particularly under circumstances
where we will have future opportunities to recoup our losses and
choose again. But these are not the circumstances of the original
position. Once the principles of justice are decided, they apply in
perpetuity, and there is no opportunity to renegotiate or escape the
situation. One who relies on the equiprobability assumption in
choosing principles of justice in the original position is being
foolishly reckless given the gravity of choice at stake. It is not
being risk-averse, but rather entirely rational to refuse to gamble
with one's basic liberties, fair equal opportunities and
adequate resources needed to pursue one's most cherished ends
and commitments, simply for the unknown chance of gaining the
marginally greater social powers, income and wealth that might be
available to some in a society governed entirely by the principle of
utility.
Rawls exhibits the force of the maximin argument in discussing liberty
of conscience. He says (TJ, sect. 33) that a person who is willing to
jeopardize their right to hold and practice their conscientious
religious, philosophical and moral convictions, all for the sake of
gaining uncertain added benefits via the principle of utility, does
not know what it means to have conscientious beliefs, or at least does
not take such beliefs seriously (TJ 207-08/181-82 rev.). A
rational person with convictions about what gives life meaning is not
willing to negotiate with and gamble away the right to hold and
express those convictions and the freedom to act on them. After all
what could be the basis for negotiation, for what could matter more
than the objects of one's most sincere convictions and
commitments? Some people (e.g. some nihilists) may not have
*any* conscientious convictions (except the belief that nothing
is worthwhile) and are simply willing to act on impulse or on whatever
thoughts and desires they happen to have at the moment. But behind the
veil of ignorance no one knows whether they are such a person, and it
would be foolish to make this assumption. Knowing general facts about
human propensities and sociability, the parties must take into account
that people normally have conscientious convictions and values and
commitments they are unwilling to compromise. (Besides, even the
nihilist should want to protect the freedom to be a nihilist, to avoid
ending up in an intolerant religious society.) Thus it remains
irrational to jeopardize basic liberties by choosing the principle of
utility instead of the principles of justice.
None of this is to say that maximin is normally a rational choice
strategy. Rawls himself says it "is not, in general, a suitable
guide for choices under uncertainty" (TJ 153). It is not even a
rational strategy in the original position when the alternatives for
choice guarantee basic liberties, equal opportunities, and a social
minimum secured by the principle of average utility--see the
discussion in the supplementary section:
>
> The Argument for the Difference Principle
>
in the supplementary document
The Argument for the Difference Principle and the Four Stage Sequence.
Rawls relies upon the maximin argument mainly to argue for the first
principle of justice and fair equality of opportunity. Other arguments
are needed to support his claim that justice requires the social
minimum be determined by the difference principle.
### 6.3 The Strains of Commitment
There are three additional arguments Rawls makes to support justice as
fairness (all in TJ, sect. 29). Each of these depends upon the concept
of a "well-ordered society." The parties in the original
position are to choose principles that are to govern a well-ordered
society where everyone agrees, complies with, and wants to comply with
its principles of justice. The ideal of a *well-ordered
society* is Rawls's development of social contract doctrine.
It is a society in which (1) everyone knows and willingly accepts and
affirms the same public principles of justice and everyone knows this;
(2) these principles are successfully realized in basic social
institutions, including laws and conventions, and are generally
complied with by citizens; and (3) reasonable persons are morally
motivated to comply by their sense of justice--they want to do
what justice requires of them (TJ 4-5, §69). There are then
two sides to Rawls's social contract. The parties in the
original position have the task of agreeing to principles that all can
rationally accept behind the veil of ignorance under the conditions of
the original position. But their rational choice is partially
determined by the principles that free and equal moral persons in a
well ordered society who are motivated by their sense of justice
reasonably can accept, agree to, and comply with, as the basic
principles governing their social and political relations.
The parties are to assess principles according to the relative
stability of the well ordered societies into which they are
incorporated. Thus a well-ordered society of justice as fairness is to
be compared with a well-ordered society whose basic structure is
organized according to the average utility principle, aggregate
utility, perfectionism, intuitionism, libertarianism, and so on. They
are to consider which of these societies' basic structure is
relatively more stable and likely to endure over time from one
generation to the next, given natural and socially influenced
psychological propensities and conditions of social cooperation as
they interact with alternative principles of justice.
Now to return to Rawls's arguments for his principles of
justice. The first of Rawls's three arguments highlights the
idea that choice in the original position is an *agreement*,
and involves certain "*strains of commitment.*" It
is assumed by all the parties that all will comply with the principles
they agree to once the veil is lifted and they are members of a
well-ordered society (TJ 176f./153f. and CP 250ff). Knowing that they
will be held to their commitment and expected to comply with
principles for a well-ordered society, the parties must choose
principles that they sincerely believe they will be able to accept,
endorse and willingly observe under conditions where these principles
are generally accepted and enforced. For reasons to be discussed
shortly, Rawls says this condition favors agreement on the principles
of justice over utilitarianism and other alternatives.
But first, consider the frequent objection that there is no genuine
agreement in the original position, for the thick veil of ignorance
deprives the parties of all bases for bargaining (cf. TJ,
139-40/120-21 rev.). In the absence of bargaining, it is
said, there can be no contract. For contracts must involve a *quid
pro quo*--something given for something received (called
'consideration' at common law). The parties in the original position
cannot bargain without knowing what they have to offer or to gain in
exchange. So (the objection continues) Rawls's original position
does not involve a real social contract, unlike those that transpire,
say, in a state of nature. Rather, since the parties are all
"described in the same way," there is no need for multiple
parties but simply the rational choice of one person in the original
position (see Hampton, 1980, 334; see also Gauthier, 1974 and 1985,
203).
In response, not all contracts involve bargaining or are of the nature
of economic transactions. Some involve a mutual pledge and commitment
to shared purposes and principles. Marriage contracts, or agreements
among friends or the members of a religious, benevolent, or political
association are often of this nature. For example, the Mayflower
Compact was a "covenant" to "combine ourselves
together into a civil body politic" charged with making and
administering "just and equal laws...for the general
good." Likewise the U.S. Constitution represents itself as a
commitment wherein "We The People ... ordain and establish
this Constitution" in order "to establish justice,"
"promote the general welfare," "secure the blessings
of liberty," and so on. The agreement in Rawls's original
position is more of this nature. Even though ignorant of particular
facts about themselves, the parties in fact do give something in
exchange for something received: they all exchange their mutual
commitment to accept and abide by the principles of justice and to
uphold just institutions once they enter their well-ordered society.
Each agrees only on condition others do too, and all tie themselves
into social and political relations in perpetuity. Their agreement is
final, and they will not permit its renegotiation should circumstances
turn out to be different than some had hoped for. Their mutual
commitment to justice is reflected by the fact that once these
principles become embodied in institutions there are no legitimate
means that permit anyone to depart from the terms of their agreement.
As a result, the parties have to take seriously the moral and legal
obligations and potential social sanctions they will incur as a result
of their agreement, for there is no going back to the initial
situation. So if they do not sincerely believe that they can accept
the requirements of a conception of justice and voluntarily conform
their actions and life plans accordingly, then these are strong
reasons to avoid choosing those principles. It would not be rational
for the parties to take risks, falsely assuming that if they end up
badly, they can violate at will the terms of agreement or later regain
their initial situation and renegotiate terms of cooperation (see
Freeman, 1990; Freeman, 2007b, 180-182).
Rawls gives special poignancy to this mutual commitment of the parties
by making it a condition that the parties cannot choose and agree to
principles in bad faith; they have to be able, not simply to live with
and grudgingly accept, but instead to willingly endorse the principles
of justice as members of society. Essential to Rawls's argument
for stability is the assumption of everyone's *willing
compliance* with requirements of justice. This is a feature of a
well-ordered society. The parties are assumed to have a sense of
justice; indeed the development and exercise of it is one of their
fundamental interests. Hence they must choose principles that
they can not only accept and live with, but that are responsive to
their sense of justice and that they can unreservedly endorse. Given these
conditions on choice, the parties cannot take risks with principles
they know they will have difficulty complying with voluntarily. They
would be making an agreement in bad faith, and this is ruled out by
the conditions of the original position.
Rawls contends that these "strains of commitment" created
by the parties' agreement strongly favor the principles of
justice over the principles of utility and other teleological (and
most consequentialist) views. For everyone's freedom, basic
rights and liberties, and basic needs are met by the principles of
justice because of their egalitarian nature. Given the lack of these
guarantees under the principle of utility, it is much more difficult
for those who end up worse off in a utilitarian society to willingly
accept their situation and commit themselves to the utility principle.
It is a rare person indeed who can freely and without resentment
sacrifice their life prospects so that those who are better off can
have even greater comforts, privileges, and powers. This is too much
to demand of our capacities for human benevolence. It requires a kind
of commitment that people cannot make in good faith, for who could
willingly support laws that are so detrimental to oneself and the
people one cares about most that they must sacrifice their fundamental
interests for the sake of those more advantaged? Besides, why should
we encourage such subservient dispositions and the accompanying lack
of self-respect? The principles of justice, by contrast, conform
better with everyone's interests, their desire for self-respect
and their natural moral capacities to reciprocally recognize and
respect others' legitimate interests while freely promoting
their own good. The strains of commitment incurred by agreement in the
original position provide strong reasons for the parties to choose the
principles of justice and reject the risks involved in choosing the
principles of average or aggregate utility.
### 6.4 Stability, Publicity, and Self-Respect
Rawls's strains-of-commitment argument explicitly relies upon a
rarely noted feature of his argument: as mentioned earlier, there are
in effect *two social contracts*. First, hypothetical agents
situated equally in the original position unanimously agree to
principles of justice. This agreement has attracted the most attention
from Rawls's critics. But the parties' hypothetical
agreement in the original position is patterned on the general
acceptability of a conception of justice by free and equal persons in
a well-ordered society. Rawls says, "The reason for invoking the
concept of a contract in the original position lies in its
correspondence with the features of a well-ordered society [which]
require...that everyone accepts, and knows that the others
accept, the same principles of justice" (CP 250). In order for
the hypothetical parties in the original position to agree on
principles of justice, there must be a high likelihood that real
persons, given human nature and general facts about social and
economic cooperation, can also agree and act on the same principles,
and that a society structured by these principles is feasible and can
endure. This is the *stability* requirement referred to
earlier. One conception of justice is relatively more stable than
another the more willing people are to observe its requirements under
conditions of a well-ordered society. Assuming that each conception of
justice has a corresponding society that is as well-ordered as can be
according to its terms, the stability question raised in
*Theory* is: Which conception of justice is more likely to
engage the moral sensibilities and sense of justice of free and equal
persons as well as affirm their good? This requires an inquiry into
moral psychology and the human good, which takes up most of Part III
of *A Theory of Justice*.
Rawls makes two arguments in *Theory* from the original
position that invoke the stability requirement, the arguments (1) from
publicity and (2) from self-respect (see TJ, §29).
(1) The argument from publicity: Rawls contends that utilitarianism,
perfectionism, and other "teleological" conceptions are
unlikely to be freely acceptable to many citizens when made fully
public under the conditions of a well-ordered society. Recall the
*publicity condition* discussed earlier: A feature of a
well-ordered society is that its regulative principles of justice are
publicly known and appealed to as a basis for deciding laws and
justifying basic institutions. Because all reasonable members of
society accept the public conception of justice, there is no need for
the illusions and delusions of ideology for a society to function
properly and citizens to accept its laws and institutions willingly.
In this sense a well-ordered society lacks false consciousness about
the bases of social and political relations. (PL 68-69n.) A
conception of justice that satisfies the publicity condition but that
cannot maintain the stability of a well-ordered society is to be
rejected by the parties in the original position. Rawls contends that
under the publicity condition justice as fairness generally engages
citizens' sense of justice and remains more stable than
utilitarianism (TJ 177f./154f. rev.). For public knowledge that reasons
of maximum average (or aggregate) utility determine the distribution
of benefits and burdens would lead those worse-off to object to and
resent their situation, and reject the principle of utility as the
basic principle governing social institutions. After all, the
well-being and interests of the least advantaged, perhaps even their
basic liberties, are being sacrificed for the greater happiness of
those who are already more fortunate and have a greater share of
primary social goods. It is too much to expect of human nature that
people should freely acquiesce in and embrace such publicly known
terms of cooperation. By contrast, the principles of justice are
designed to advance reciprocally everyone's position; those who
are better off do not achieve their gains at the expense of the less
advantaged. "Since everyone's good is affirmed, all
acquire inclinations to uphold the scheme" (TJ, 177/155). It is
a feature of our moral psychology, Rawls contends, that we normally
come to form attachments to people and institutions that are concerned
with our good; moreover we tend to resent those persons and
institutions that take unfair advantage of us and act contrary to our
good. Rawls argues at length in chapter 8 of *Theory*,
§§70-75, that justice as fairness accords better than
alternative principles with the reciprocity principles of moral
psychology that are characteristic of human beings' moral
development.
In *Political Liberalism*, Rawls expands the publicity
condition to include three levels: First, the principles of justice
governing a well-ordered society are publicly known and appealed to in
political debate and deliberation; second, so too are the general
beliefs in light of which society's conception of justice is
generally accepted--including beliefs about human nature and the
way political and social institutions generally work--and
citizens generally agree on these beliefs that support society's
conception of justice. Finally, the full justification of the public
conception of justice is also publicly known (or at least publicly
available to any who are interested) and is reflected in
society's system of law, judicial decisions and other political
institutions, as well as its system of education.
(2) The argument from *the social bases of self-respect*: The
publicity condition is also crucial to Rawls's fourth argument
for the principles of justice, from the social bases of self-respect
(TJ, 178-82/155-59 rev.). These principles, when publicly
known, give greater support to citizens' *sense of
self-respect* than do utilitarian and perfectionist principles.
Rawls says self-respect is "perhaps the most important primary
good," (TJ, 440/386 rev.) since few things seem worth doing if a
person has little sense of their own worth or no confidence in their
abilities to execute a worthwhile life plan or fulfill the duties and
expectations in their role as citizens. The parties in the original
position will then aim to choose principles that best secure their
sense of self-respect. Now being regarded by others as a free and
independent person of equal status with others is crucial to the
self-respect of persons who regard themselves as free and equal
members of a democratic society. Justice as fairness, by affording and
protecting the priority of equal basic liberties and fair equal
opportunities for all, secures the status of each as free and equal
citizens. For example, because of equal political liberties, there are
no "passive citizens" who must depend on others to
politically protect their rights and interests; and with fair equal
opportunities no one has grounds to experience the resentment that
inevitably arises in societies where social positions are effectively
closed off to those less advantaged or less powerful. Moreover, the
second principle secures adequate social powers and economic resources
for all so that they find the effective exercise of their equal basic
liberties to be worthwhile. The second principle has the effect of
making citizens socially and economically independent, so that no one
need be subservient to the will of another. Citizens then can regard
and respect one another as equals, and not as masters or subordinates.
("Non-domination," an idea central to contemporary
Republicanism, is then essential to citizens' sense of
self-respect in Rawls's sense. See Pettit 1997.) Equal basic
liberties, fair equal opportunities, and political and economic
independence are primary among the social bases of self-respect in a
democratic society. The parties in the original position should then
choose the principles of justice over utilitarianism and other
teleological views both to secure their sense of self-respect, and to
procure the same for others, thereby guaranteeing greater overall
stability.
In connection with Rawls's argument for the
greater stability of principles of justice on grounds of their
publicity and the bases of self-respect, Rawls provides a Kantian
interpretation of the difference principle. He says: "[T]he
difference principle interprets the distinction between treating men
as means only and treating them as ends in themselves. To regard
persons as ends in themselves in the basic design of society is to
agree to forgo those gains that do not contribute to everyone's
expectations. By contrast to regard persons as means is to be prepared
to impose on those already less favored still lower prospects of life
for the sake of the higher expectations of others" (TJ 157
rev.). Rawls says the principle of utility does just this; it treats
the less fortunate as means since it requires them to accept even
lower life prospects for the sake of others who are more fortunate and
already better off. This exhibits a lack of respect for the less
advantaged and in turn has the effect of undermining their sense of
self-respect (TJ 158 rev.). The difference principle, by contrast,
does not treat people as means or undermine their sense of
self-respect, and this adds to the reasons the parties have for
choosing the principles of justice instead of the principle of
utility.
Rawls substantially relies on the publicity condition to argue against
utilitarianism and perfectionism. He says publicity "arises
naturally from a contractarian standpoint" (TJ, 133/115 rev.).
In *Theory* he puts great weight on publicity ultimately
because he thinks that giving people knowledge of the moral bases of
coercive laws and the principles governing society is a condition of
fully acknowledging and respecting them as free and responsible
rational moral agents. With publicity of principles of justice, people
have knowledge of the real reasons for their social and political
relations and the formative influences of the basic structure on their
characters, plans and prospects. In a well-ordered society with a
public conception of justice, there is no need for an "esoteric
morality" that must be confined "to an enlightened
few" (as Sidgwick says of utilitarianism, Sidgwick 1907 [1981],
490). Moreover, public principles of justice can serve agents in their
practical reasoning and provide democratic citizens a common basis for
political argument and justification. These considerations underlie
Rawls's later contention that having knowledge of the principles
that determine the bases of social relations is a precondition of
individuals' freedom (CP 325f.). Rawls means in part that
publicity of society's fundamental principles is a condition of
citizens' exercise of the powers and abilities that enable them
to take full responsibility for their lives. Full publicity is then a
condition of the political and (in *TJ*) moral autonomy of
persons, which are significant values according to justice as
fairness (TJ §78, PL 68, CP 325-26).
Utilitarians often regard Rawls's emphasis on the publicity of
the fundamental principles underlying social cooperation as
unwarranted. They contend that publicity of laws is of course
important if laws are to be effective, but that there is no practical
need for publicity of the fundamental principles (such as the
principles of efficiency and utility) that govern political decisions,
the economy, and society, much less for publicity of the full
justification of these principles. Most people are not interested in,
and have little understanding of, the complex, often technical details that
must go into deciding laws and social policies. Moreover, as Sidgwick
claimed, utilitarianism functions better as an "esoteric
morality" that is not generally incorporated into the public
justification of laws and institutions. Others claim that
Rawls's arguments from publicity are exaggerated. If people were
properly educated to believe that promoting greater overall happiness
or welfare is the ultimate requirement of justice and more generally
of morality, then just as they have for centuries constrained their
conduct and their self-interests and accepted political constraints on
their own liberties for the sake of their religious beliefs, so too
could they be educated to accept the promotion of social utility and
the general welfare as the fundamental bases for social and political
cooperation.
## Supplementary Documents on Other Topics
Additional topics concerning the original position are discussed
in the following supplementary documents:
* The Argument for the Difference Principle.
Explains the Difference Principle and the least advantaged class.
Comparison of the difference principle with mixed conceptions,
including restricted utility. Arguments from reciprocity, stability
and self-respect, and the strains of commitment. Rawls's reasons
why the difference principle supports property owning democracy rather
than welfare-state capitalism.
* The Four Stage Sequence.
How principles chosen in OP (first stage) apply to choice of political
constitution (second-stage), democratic legislation (third stage), and
application of laws to particular circumstances (fourth stage).
* Ideal Theory, Strict Compliance and the Well-Ordered Society.
Why strict compliance is said to be necessary to justification of
universal principles. Sen's, Mills's, and others'
criticisms of ideal theory. Rawls's contention that ideal
theory is necessary to determine injustice in non-ideal conditions.
Role of non-ideal theory.
* A Liberal Feminist Critique of the Original Position and Justice within the Family.
Criticism of "heads of families" assumption in OP and
Rawls's response to criticisms that principles do not secure
equal justice for women and children. Rawls's discussion of
justice within the family.
* The Original Position and the Law of Peoples.
Rawls's extension of OP to decisions on the Law of Peoples
governing relations among liberal and decent societies. Human rights,
the duty to assist burdened peoples, outlaw societies, and
Rawls's rejection of a global principle of distributive
justice.
* Constructivism, Objectivity, Autonomy, and the Original Position.
Kantian Interpretation of the OP and Constructivism. OP as a
procedure of construction and objective point of view. Response to
Humean argument that social agreements cannot justify. Role of OP in
reflective equilibrium.
* Is the Original Position Necessary or Relevant?
Reply to claims that OP is superfluous or irrelevant. Why Rawls
thinks rational acceptance of principles in OP and congruence of Right
and Good is essential to justice.
## 1. Some groups of ITV's and their behavior
Search verbs and desire verbs manifest all three of the behaviors
listed in the prologue as "marks" or effects of
intensionality. Thus, Lois Lane may be seeking Superman. But it does
not seem to follow that she is seeking Clark, even though Superman is
Clark, and so we have an example of the first kind of anomaly
mentioned in the prologue: substitution of one name with another for
the same person leads to a change in truth-value for the embedding
sentence (here "Lois is seeking Superman"). Similarly, a
thirsty person who believes that water quenches thirst and that
H2O is a kind of rat poison may want some water but not
some H2O. [According to some, this alleged failure of
substitution to preserve truth-value is an illusion; but for reasons
of space I do not pursue this theory -- its *locus
classicus* is (Salmon 1986).]
Second, both search verbs and desire verbs create specific-unspecific
ambiguities in their containing VPs when the syntactic object of the
verb consists in a determiner followed by a nominal (this ambiguity is
also known as the relational/notional ambiguity, following Quine 1956,
where it was first studied, at least in the modern period). For
example, 'Oedipus is seeking a member of his family' could
be true because Oedipus is seeking specifically Jocasta, who is a
member of his family, though he doesn't realize it. On such an
occasion, what is true can be more carefully stated as 'there is
a member of his family such that Oedipus is seeking that
person'. The alternative, unspecific or notional reading, is
forced by adding 'but no particular one': 'Oedipus
is seeking a member of his family, but no particular one'. Here
Oedipus is implied just to have a general intention to find some
member or other of his family. Contrast the extensional
'embrace': Oedipus cannot embrace a member of his family,
but no particular one.
Third, it is obvious that it is possible both to want, and to search
for, that which does not exist, for instance, a fountain of eternal
youth. But it isn't possible to, say, stumble across such a thing,
unless it exists.
Depiction verbs, such as 'draw', 'sculpt', and
'imagine', resist substitution in their syntactic objects,
at least if the clausal 'imagine' does: if imagining that
Superman rescues you is not the same thing as imagining that Clark
rescues you, it is hard to see why imagining Superman would be the
same thing as imagining Clark. A specific/unspecific ambiguity is also
possible, as is attested by the wall label for Guercino's *The
Aldrovandi Dog* (*ca.* 1625) in the Norton Simon Museum,
which states 'this must be the portrait of a specific
dog', thereby implicating an alternative, that 'Guercino
drew a dog' *could* be taken to mean that he drew a dog,
but no particular dog -- he just made one up. And we can clearly
draw or imagine that which does not exist (as opposed to, say,
photographing it). Braque's *Little Harbor in Normandy* (1909)
is an example, according to the curators of the Art Institute of
Chicago: 'it appears this work was painted from imagination,
since the landscape depicted cannot be identified.'
However, whether or not a notional reading of a depiction VP is
possible depends on which quantificational determiner occurs in the
noun phrase complement. If we say 'Guercino drew every
dog', 'Guercino drew most dogs', or 'Guercino
drew the dog' (non-anaphoric 'the dog'), we seem to
advert to some antecedent domain with respect to which 'drew
every/most/the dog(s)' are to be evaluated. So specific readings
are
required.[1]
This resistance to unspecific construal is robust across languages
and is typical of those quantificational determiners which do not
occur naturally in existential contexts such as 'there
is': contrast 'there is a dog in the garden' with
'there is every dog in the garden', 'there are most
dogs in the garden', or 'there is the dog in the
garden'. An account of what is wrong with 'there is every
dog in the garden' (see Keenan 2003) might well contain the
materials for explaining the lack of unspecific readings of depiction
VPs with determiners like 'every', 'most' and
'the'; see further (Forbes 2006:142-150).
It should be emphasized that depiction verbs are special in this
respect, for there is no problem getting unspecific readings with
'every', 'most' and 'the' using
desire verbs or search verbs. Guercino might be looking for every dog
on Aldrovandi's estate, though there are no particular dogs he is
looking for; the reader might be driving around an unfamiliar airport
rental car lot, looking for the exit, and in this case there is no
exit such that *it* is being sought. ('look' isn't
really a transitive verb, but when the following preposition is
'for', a search activity is denoted, so it is customary to
count the likes of 'looking for' as a "transitive
verb" in contexts where intensionality is under discussion.)
Mixed behavior is also manifested by evaluative verbs, for example
'respect', 'admire', 'disdain',
'worship', including emotion verbs such as 'lust
(after)' and 'fear'. Lex Luthor might fear Superman,
but not Clark, and Lois might disdain Clark, but not Superman.
However, unspecific readings of VPs with quantified complements are
harder to hear, at least when the quantifier is existential.
'Lois admires an extraterrestrial' can be heard in two
ways: there is the 'admires a particular extraterrestrial'
reading, and there is a *generic* reading, which means that
among the kinds of thing she admires are extraterrestrials in general.
Generic readings of evaluative VPs attribute dispositions, and are
not the same as unspecific or notional readings (see Cohen 1999, 2008,
and the entry on
generics).
There does not seem to be a sensible non-generic construal of
'Lois admires an extraterrestrial, but no particular
one'.
The verb 'need' is an interesting case. A sports team
might need a better coach, though no specific better coach, and might
need a better coach even if there are none to be had. So two out of
three marks of intensionality are present. However, 'need'
contrasts with 'want' as regards substitution: our
dehydrated subject who does not want H2O because he
believes it to be a kind of rat poison, nevertheless *needs*
H2O. It seems that co-denoting terms may be interchanged in
the complement of 'need'. But merely accidentally
co-extensive ones cannot be: Larson (2001, 232) gives the example of
Max the theater impresario, who needs more singers but not more
dancers, even though all who sing dance, and vice-versa. The property
*singer* and the property *dancer* are different
properties, so expressions for them cannot be exchanged in the
complement of 'need'. Similar restricted substitutivity is
observed with transaction verbs such as 'wager',
'owe', 'buy', 'sell',
'reserve', and perhaps the transaction resultative
'own'. One may reserve a table at a restaurant, though
there need not be a specific table one has reserved in making the
reservation, since the restaurant may be expecting a slow night. But
these verbs do allow interchange of co-referential expressions (a
purchase of water-rights is a purchase of H2O-rights)
though not (Zimmerman 1993, 151) of accidentally co-extensive ones.
For 'own' see (Zimmerman 2001, *passim*).
Indeed, it is even arguable that some marks of intensionality are
present with verbs that allow interchange of *accidentally*
co-extensive expressions. A case in point is verbs of absence, such as
'omit' and 'lack'. If it so happens that all
and only the physicists on the faculty are the faculty's Nobel prize
winners, then a faculty committee that lacks a physicist lacks a Nobel
prize winner. However, not too much weight can be put on this case,
since it may be that at some level, 'lack' should be
analyzed (possibly in a complicated way) in terms of *not
having*, in which case it would not really be an intensional verb
at all; though so far as the author knows, no convincing analysis of
this type has been formulated. See (Zimmerman 2001, 516-20) for
further discussion of relationships among the marks of
intensionality.
## 2. How many mechanisms for how many marks?
We have distinguished three "marks" or effects of
intensionality: substitution-resistance, the availability of
unspecific readings, and existence-neutrality. A natural question is
whether one and the same semantic mechanism underlies all three
effects, whether they are entirely independent, or whether two have a
common source distinct from the third's.
In the context of a discussion of propositional attitude verbs, that
is, verbs which take clausal or clause-embedding noun-phrase
complements rather than simple noun-phrase (NP) objects, one
hypothesis which keeps explanatory apparatus to the minimum is that
all three effects of intensionality arise from the possibility of the
complement having *narrow scope* with respect to the attitude
verb. Thus we might distinguish two readings of
(1)
Lex Luthor fears that Superman is nearby,
namely
(2a)
Lex Luthor fears-true the proposition that Superman is
nearby[2]
and
(2b)
Superman is someone such that Lex Luthor fears(-true the
proposition) that he is nearby.
In (2a) we make the clause 'that Superman is nearby' the
complement of 'proposition' to guarantee that
'Superman' is within the scope of 'fears' (the
resulting NP 'the proposition...' is a "scope
island"). And in (2b) we use a form of words that encourages an
audience to process 'Superman' ahead of
'fears'. We can associate substitution-resistance with
(2a), while allowing substitution in (2b). (2b) ascribes a complex
property to Superman, and therefore also to Clark; namely, *being
an x such that Lex fears x to be nearby*. (2a), on the other hand,
puts Lex in the fearing-true attitude relation to a certain
proposition, with essentially no implications about what other
propositions he may fear-true. So, provided the proposition that
Superman is nearby is distinct from the proposition that Clark Kent is
nearby, substitution-failure in (1) construed as (2a) is
explicable.
As for taking these two propositions to be distinct, there are many
serviceable accounts, mostly involving some variation of the original
proposal in modern philosophical semantics, found in (Frege 1892,
1970). According to Frege, every meaningful expression or phrase has
both a *customary reference* that it denotes, and a
*customary sense* that it expresses. In the case of clauses,
the customary reference would be a truth-value which is
compositionally derived from the denotations of the words of the
clause, and the customary sense would be a way of thinking of that
truth-value, compositionally derived from the senses of the words of
the clause. Therefore, provided that 'Superman' and
'Clark Kent' have different though coreferential senses
(i.e., provided they express different ways of thinking of the same
individual) we will get different propositions. (However, it's highly
non-trivial to find an adequate account of the senses of names, given
the critique of the most straightforward accounts in (Kripke
1972).)
However, on the face of it, this only has (1), intended in the (2a)
manner, *expressing a different proposition* from
(2c)
Lex Luthor fears that Clark is nearby.
Since truth-value is on the level of reference, and the corresponding
words of (2a) and (2c) all have the same referents, the resulting
truth-values for (2a) and (2c) will be the same; but they are supposed
to be different. So Frege makes the ingenious suggestion that it is an
effect of embedding in intensional contexts (he only considered
clausal verbs) that expressions in such contexts no longer denote
their customary references, but rather their customary senses. Using
'iff' henceforth to abbreviate 'if and only
if', then (1) intended as (2a) is true iff the reference of
'Lex' stands in the fearing-true relation to
the *switched* reference of 'Superman is nearby',
namely, its customary sense. Now we have our explanation of why merely
interchanging
*customarily* co-referential expressions in (1) can produce
falsehood from a truth: the substitution does not preserve reference,
since the names are now denoting their customary senses. However, if
(1) is intended as (2b), there will be no truth-value switch, since
there is no name-reference switch: in (2b) 'Superman' is
not in the scope of 'fears', therefore it denotes its
customary reference, and interchange with any other expression
denoting that same referent must of necessity be truth-value
preserving. (Readers in search of more detailed discussion of Frege's
notion of sense might consult the section on
Frege's philosophy of language
in the entry on Frege. See also the entry on
propositional attitude reports,
and for other uses of Frege-style "switcher semantics",
Glüer and Pagin 2012.)
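The reference-switching idea can be sketched as a toy program (an illustration only, not part of the entry; the "references" and "senses" below are invented stand-ins for Fregean referents and ways of thinking):

```python
# Toy model of Frege-style reference switching. The lexicon is
# invented for illustration; real senses are not strings.

CUSTOMARY_REFERENCE = {
    "Superman": "kal-el",      # both names pick out the same individual
    "Clark Kent": "kal-el",
}

CUSTOMARY_SENSE = {
    "Superman": "the caped hero from Krypton",
    "Clark Kent": "the mild-mannered reporter",
}

def denotation(name, in_intensional_context=False):
    """Outside an intensional context a name denotes its customary
    reference; inside one, it denotes its customary sense instead."""
    if in_intensional_context:
        return CUSTOMARY_SENSE[name]
    return CUSTOMARY_REFERENCE[name]

# Outside 'fears': the two names have the same denotation, so
# substitution preserves reference, hence truth-value.
assert denotation("Superman") == denotation("Clark Kent")

# Inside 'fears': the names denote different senses, so substitution
# need not preserve the truth-value of the embedding sentence.
assert denotation("Superman", True) != denotation("Clark Kent", True)
```

The point of the sketch is only that one and the same lexical item contributes different semantic values depending on whether it is embedded under an intensional verb, which is exactly what blocks the substitution inference.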
A view like this has the capacity to explain the other intensionality
effects. The specific-unspecific ambiguity in 'Lex fears that an
extraterrestrial is nearby' is explained in terms of scope
ambiguity, the notional or unspecific reading corresponding to
(3a)
Lex Luthor fears-true the proposition that an extraterrestrial is
nearby
and the relational or specific reading to
(3b)
An extraterrestrial is such that Lex Luthor fears that it is
nearby.
Existence-neutrality is also explained, since the *proposition*
that an extraterrestrial is nearby is available for its truth to be
feared, believed, doubted or denied, whether or not there are
extraterrestrials.
There are other accounts of substitution failure, but details are
incidental at this point. For there are real issues about (A) whether
a single mechanism *could* be responsible for all three
effects, and (B) whether an account of any effect in terms of a scope
mechanism is workable for transitive, as opposed to clausal,
verbs.
(A) The behaviors cited in the previous section suggest that
substitution-resistance and the availability of an unspecific reading
have different explanations. For we saw that the verb
'need' contrasts with the verb 'want' as
regards substitution-resistance, but is similar as regards the
availability of unspecific readings of embedding VPs. So it seems
that there is a mechanism that blocks substitution, perhaps the
Fregean reference-switch one, perhaps something else more compatible
with what Davidson calls "semantic innocence" (Davidson
1969, 172--a semantically innocent account of
substitution-failure is one that does not alter the semantics of the
substitution-resisting expressions themselves for the special case in
which they occur in intensional contexts). And this mechanism cannot
occur with 'need', but can with 'want'
('can' rather than 'does' because it is
optional; this is to allow for "transparent" or
substitution-permitting readings of the likes of 'Lex fears
Superman' analogous to (3b)). On the other hand, whatever
accounts for notional readings is evidently available to both verbs,
and therefore it is not the same mechanism as underlies the
substitution-resistance of 'wants'. However, this
reasoning is not conclusive, since the substitution-resistance
mechanism may be present with 'needs' (and transaction
verbs) but somehow rendered ineffective (see Parsons 1997, 370). One
would need to hear how the ineffectiveness comes about.
Evaluative verbs present the converse challenge:
substitution-resistance but apparently no unspecific readings of
embedding VPs, certainly not existential ones. It is less clear how a
defender of a 'single explanation' theory would handle
this, at least if the single explanation is a scope mechanism, since
it appears from the other cases that occurrence within the scope of
the intensional verb immediately produces an unspecific reading.
Suspension of existential commitment may group with availability of
unspecific readings for explanatory purposes. There appear to be no
cases of intensional transitives which allow notional readings of
embedded VPs, but where those VPs have the same existential
consequences as ones which differ just by the substitution of an
extensional for the intensional verb.
(B) The scope account is the only real contender for a single
explanation of the intensionality effects. But there is a major
question about whether it can be transferred at all from clausal to
transitive verbs. For the intensionality effects would all be
associated with narrow-scope occurrences of noun phrases (NPs), and
with a transitive verb such syntactic configuration is problematic
when the NP is quantified. This is because, in standard first-order
syntax, a quantifier must have a sentence within its scope (an open
sentence with a free variable the NP binds, if redundant
quantification is ruled out in the syntax). We can provide this for
relational or wide-scope readings, for example
(4)
An extraterrestrial is such that Lois is looking for it
in which 'Lois is looking for it' is the scope of
'an extraterrestrial'. But if 'an
extraterrestrial' is supposed to be within the scope of
'looking for' there is no clause to be *its* scope;
it has to be an argument of the relation, which is not allowed in
first-order language. As Kaplan says, 'without an inner
sentential context...distinctions of scope disappear'
(Kaplan 1986, 266). (Though readers who have taught symbolic logic
will be very familiar with the student who, having symbolized
'Jack hit Bill' as '*Hjb*', then offers
something like '*Hj*(∃*x*)' as the
symbolization of 'Jack hit someone.')
The description of the problem suggests two forms of solution. One is
to preserve first-order syntax by uncovering hidden material to be the
scope of a quantified NP even when the latter is within the scope of
the intensional verb. The other is to drop first-order syntax in favor
of a formalism which permits the meanings of quantified NPs to be
arguments of intensional relations such as fearing and seeking. We
consider these options in turn in the following two sections.
## 3. Propositionalism
The idea of uncovering hidden material to provide NPs in notional
readings of intensional VPs with sentential scope was prominently
endorsed in (Quine 1956), where the proposal is to paraphrase search
verbs with 'endeavor to find'. So for (5a) we would have
(5b):
(5a)
Lois is looking for an extraterrestrial
(5b)
Lois is endeavoring to find an extraterrestrial
Partee (1974, 97) objects that this cannot be the whole story, since
search verbs are not all synonyms ('groping for' doesn't
mean exactly the same as 'rummaging about for'), but den
Dikken, Larson, & Ludlow (1996) and Parsons (1997, 381) suggest
that the search verb itself be used in place of
'endeavor'. So we get
(6a)
Lois is looking to find an extraterrestrial
or in somewhat non-Quinean lingo,
(6b)
Lois is looking in order to make true the proposition that an
extraterrestrial is such that she herself finds
it.[3]
Here 'an extraterrestrial' is within the scope of
'looking' but has the open sentence 'she herself
finds it' as its own scope.
It may or may not be meaning-preserving to replace (5a)'s
prepositional phrase with (6a)'s purpose clause, but even if it
is meaning-preserving, that is insufficient to show that (6a) or (6b)
articulates the semantics of (5a); it may merely be a synonym.
However, with 'need' and desire verbs, evidence for the
presence of a hidden clause is strong. For example, in
(7)
Physics needs some new computers soon
it makes little sense to construe 'soon' as modifying
'needs'; it seems rather to modify a hidden
'get' or 'have', as is explicit in
'Physics needs to get some new computers soon', i.e.,
'Physics needs it to be the case that, for some new computers,
it gets them soon'. (For 'have' versus
'get', see (Harley 2004).)
Secondly, there is the phenomenon of propositional anaphor (den
Dikken, Larson, & Ludlow 2018, 52-3), illustrated in
(8)
Physics needs some new computers, but its budget won't allow it.
What is not allowed is the truth of the proposition that Physics gets
some new computers.
Third, attachment ambiguities suggest there is more than one verb
present for modifiers to attach to (den Dikken, Larson, & Ludlow 1996,
332):
(9)
Physics will need some new computers next year
could mean that a need for new computers will arise in the department
next year, but could also mean that next year is when Physics should
*get* new computers, if its need (which may arise later this
year) is to be met.
Finally, ellipsis generates similar ambiguities:
(10)
Physics will need some new computers before Chemistry
could mean that the need will arise in Physics before it does in
Chemistry, but could also mean that Physics will need to get some new
computers before Chemistry gets any.
However, the strength of the case for a hidden 'get' with
'need' or 'want' contrasts with the case for
propositionalism about search verbs. As observed by Partee (1974, 99),
for the latter there are no attachment ambiguities like those in (9).
For example,
(11)
Physics will shop for some new computers next year
can only mean that the shopping will occur next year. There is no
second reading, corresponding to the other reading of (9), in which
'next year' attaches to a hidden 'find/buy'.
The phenomena in (8) and (10) also lack parallels with search verbs;
for example, 'Physics will shop for some new computers before
Chemistry' lacks a reading that has Physics shopping with the
following goal: to find/buy new computers before Chemistry finds/buys
any. And although the propositionalist might offer something like
(12)
Physics will seek more office space by noon
as an analog of (7), it is not easy to decide if (12) genuinely has a
reading of the 'seek to find more office space by noon'
sort, or whether the hint of such a reading is just an echo of
(7).
Other groups of intensional transitives, such as depiction verbs and
evaluative verbs, raise the problem that there is no evident
propositional paraphrase in the first place. For psychological
depiction verbs such as 'fantasize' and
'imagine', Parsons (1997, 376) proposes what he calls
"Hamlet ellipsis": for 'Mary imagined a
unicorn' we would have the clausal 'Mary imagined a
unicorn to be'. Larson (2001, 233) suggests that the complement
is a "small" or "verbless" clause, and for
'Max visualizes a unicorn' proposes 'Max visualizes
a unicorn in front of him'. This is too specific, for we can
understand 'Max visualizes a unicorn' without knowing
whether he visualizes it in front of him, above him or below him, but
even if we change the paraphrase to 'Max visualizes a unicorn
spatially related to him', this proposal, as well as Parsons',
has problems with negation: 'Mary didn't imagine a
unicorn' is not synonymous with either 'Mary didn't
imagine a unicorn to be' or with 'Mary didn't imagine a
unicorn spatially related to her', since the first of these
allows for her to imagine a unicorn but not imagine it to be, the
second, for her to imagine a unicorn but not as spatially related to
her. There may be philosophical arguments that exclude these
options,[4]
but the very fact that a philosophical argument is needed makes the
proposals unsatisfactory as semantics.
Clausal paraphrases for verbs like 'fear' are even less
likely, since the extra material in the paraphrase can be read as the
focus of the fear, making the paraphrase insufficient. For example,
fearing *x* is not the same as fearing encountering *x*,
since it may be the encounter that is feared, say if *x* is an
unfearsome individual with a dangerous communicable disease. In the
same vein, fearing *x* is not the same as fearing that
*x* will hurt you; for instance, you may fear that your
accident-prone dentist will hurt you, without fearing the dentist.
We conclude that if any single approach to intensional transitives is
to cover all the ground, it will have to be non-propositionalist. But
it is also possible, perhaps likely, that intensional transitives are
not a unitary class, and that propositionalism is correct for some of
these verbs but not for others (see further Schwarz 2006, Montague
2007).
## 4. Montague's semantics
The main non-propositionalist approaches to ITVs begin from the work
of Richard Montague, especially his paper "The Proper Treatment
of Quantification in Ordinary English" (Montague 1973), usually
referred to as PTQ in the literature (Montague's condition (9) (1974,
264) defines 'seek' as 'try to find', but this
is optional). Montague developed a systematic semantics of natural
language based on *higher-order intensional type-theory*. We
explain this term right-to-left.
Type theory embodies a specific model of semantic compositionality in
terms of *functional application*. According to this model, if
two expressions *x* and *y* can concatenate into a
meaningful expression *xy*, then (i), the meaning of one of
these expressions is taken to be a function, (ii) the meaning of the
other is taken to be an item of the kind that the function in question
is defined for, and (iii) the meaning of *xy* is the output of
the function when applied to the input. The type-theoretic
representation of this meaning is written **x(y)** or
**y(x)**, depending on which expression is taken to be
the function and which the input or argument. A functional application
such as **x(y)** is said to be *well-typed* iff
the input that **y** denotes is the type of input for
which the function **x** is defined.
For example, in *the simple theory of types*, a common noun
such as 'sweater' is assigned a meaning of the following
type: a function from individuals to truth-values (a function of type
*ib*, for short; *b* for 'boolean'). For
'sweater' the function in question is the one which maps
all sweaters to the truth-value TRUE, and all other individuals to the
truth-value FALSE. On the other hand, an ("intersective")
adjective such as 'woollen' would be assigned a meaning of
the following type: a function from (functions from individuals to
truth-values) to (functions from individuals to truth-values), or a
function of type (*ib*)(*ib*) for short. Thus the
meaning of 'woollen' can take the meaning of
'sweater' (an *ib*) as input and produce the
meaning of 'woollen sweater' (another *ib*) as
output; this is why the meaning of 'woollen' has the type
(*ib*)(*ib*). **woollen(sweater)** is the
specific function of type *ib* that maps sweaters made of wool
to TRUE, and all other individuals to FALSE.
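This functional-application model is simple enough to mimic directly. The following Python sketch, a toy model with a hypothetical three-garment domain (nothing here is Montague's own apparatus), illustrates how an *ib* meaning and an (*ib*)(*ib*) meaning compose:

```python
# Toy extensional type theory: meanings are functions over a small domain.
# The individuals and their facts are hypothetical, for illustration only.
materials = {"aran": "wool", "kilt": "wool", "cardigan": "cotton"}
kinds = {"aran": "sweater", "kilt": "skirt", "cardigan": "sweater"}

# 'sweater' has type ib: a function from individuals to truth-values
sweater = lambda x: kinds[x] == "sweater"

# 'woollen' has type (ib)(ib): it maps one ib meaning to another ib meaning
woollen = lambda f: (lambda x: f(x) and materials[x] == "wool")

# functional application: woollen(sweater) is again of type ib
woollen_sweater = woollen(sweater)

print(woollen_sweater("aran"))      # True: a woollen sweater
print(woollen_sweater("cardigan"))  # False: a sweater, but cotton
print(woollen_sweater("kilt"))      # False: woollen, but not a sweater
```

The concatenation 'woollen sweater' is well-typed because **woollen** is defined for exactly the kind of input that **sweater** denotes.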
In this framework, a quantified NP such as 'every sweater'
has a meaning which can take the meaning of an intransitive verb
(e.g., 'unravelled'), or more generally, a Verb Phrase
(VP), as input and produce the meaning (truth-value) of a sentence
(e.g. 'every sweater unravelled') as output. Intransitive
verbs and VPs are like common nouns in being of type *ib*. For
example, the VP **quickly(unravelled)** is of type
*ib*, mapping all and only individuals that unravelled quickly
to TRUE. So a quantified NP is a function from inputs of type
*ib* to outputs of type *b*, and is thus of type
(*ib*)*b*. 'Every sweater unravelled
quickly' would be represented as
**(every(sweater))(quickly(unravelled))**, and would
denote the truth-value that is the result of applying a meaning of
type (*ib*)*b*, that of **every(sweater)**,
to a meaning of type *ib*, that of
**quickly(unravelled)** (the adverb
**quickly** itself is of type (*ib*)(*ib*),
like the adjective **woollen**). Rules specific to
**every** guarantee that **every(sweater)**
maps **quickly(unravelled)** to TRUE iff
**quickly(unravelled)** maps to TRUE everything that
**sweater** maps to TRUE.
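The quantifier clause can be sketched in the same toy style (again with hypothetical individuals and facts), showing how a meaning of type (*ib*)*b* consumes a VP meaning of type *ib*:

```python
# Toy domain (hypothetical individuals; all facts invented for illustration).
domain = ["aran", "kilt", "cardigan"]

# ib meanings
sweater = lambda x: x in ("aran", "cardigan")
unravelled = lambda x: True                            # everything unravelled
quickly = lambda f: (lambda x: f(x) and x != "kilt")   # toy (ib)(ib) adverb

# 'every' turns an ib meaning into a quantified NP of type (ib)b
every = lambda noun: (lambda vp: all(vp(x) for x in domain if noun(x)))

# (every(sweater))(quickly(unravelled)): TRUE iff quickly(unravelled)
# maps to TRUE everything that sweater maps to TRUE
print(every(sweater)(quickly(unravelled)))     # True: both sweaters unravelled quickly
print(every(unravelled)(quickly(unravelled)))  # False: the kilt did not unravel quickly
```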
So far, the apparatus is extensional, which, besides providing only
two possible sentence-meanings, TRUE and FALSE, imposes severe
limitations on the range of concepts we can express. Suppose that the
Scottish clothing company Pringle has a monopoly on the manufacture of
woollen sweaters, and makes sweaters of no other material. Then a
garment is a woollen sweater iff it is a Pringle sweater, meaning that
**woollen(sweater)** and
**pringle(sweater)** are the same function of type
*ib*, and these two terms for that function are everywhere
interchangeable in the type-theoretic language. Then modal operators
such as 'it is contingent that' cannot be in the language,
since interchanging **woollen(sweater)** and
**pringle(sweater)** within their scope should sometimes
lead to change of truth-value, but cannot if the two expressions
receive the same meaning in the semantics. For example, 'it is
contingent that every Pringle sweater is woollen' is true, but
'it is contingent that every woollen sweater is woollen'
is false. Therefore the concept of contingency has no adequate
representation in the type-theoretic (boldface) language.
Shifting to *intensional* type theory deals with this
difficulty. The intension of any expression X is a function from
possible worlds to an extension of the type which that expression has
in the extensional theory just sketched, if it has such an extension,
otherwise to something appropriate for intensional vocabulary such as
'it is contingent that'. An intension which is a function
from possible worlds to items of type ***t*** is
said to be of type *s**t***.
**sweater**, for instance, will have as its intension a
function from possible worlds to functions of type *ib*,
providing for each possible world a function that specifies the
individuals which are sweaters at that world; so
**sweater**'s intension is of type
*s*(*ib*). However, a modal sentential operator such as
'it is contingent that' will have as its intension a
function that, for each possible world, produces the very same
function, which takes as input functions of type *sb* and
produces truth-values as output. So the extension of 'it is
contingent that' at each world is the same function, of type
(*sb*)*b*. (The operator is said to be
*intensional* because its *ex*tension at each world is a
function taking intensions, such as functions of type *sb*, as
input.)
A function of type *sb* is sometimes called a
*possible-worlds proposition*, since it traces the truth-value
of a sentence across worlds. For example, with appropriate assignments
to the constituents,
(13)
**(every(woollen(sweater)))(woollen)**
should be true, that is, refer to TRUE, at every
world.[5]
So the intension of (13) is the function ***f***
of type *sb* such that for every world w,
***f***(w) = TRUE. This is a *constant*
intension. On the other hand,
(14)
**(every(pringle(sweater)))(woollen)**
is true at some worlds but false at others, those where Pringle makes
non-woollen sweaters; so its intension is non-constant.
We define the intension of **contingent** to be the
function which, for each world *w* as input, produces as output the
function ***c*** of type (*sb*)*b*
such that for any function ***p*** of type
*sb*, ***c***(***p***)
is true at w iff there are worlds *u* and *v* such that
***p***(*u*) = TRUE and
***p***(*v*) = FALSE (this is the meaning of
'contingent that' in the sense 'contingent
whether'). So the intension of 'contingent' is also
constant, since the same function ***c*** is the
output at every world.
Since **contingent** expects an input of type
*sb*, we cannot write
(15)
**contingent((every(woollen(sweater)))(woollen))**
since in evaluating this formula at a world *w* we would find ourselves
trying to apply the reference of **contingent** at w,
namely, the function ***c*** just defined, to the
reference of **(every(woollen(sweater)))(woollen)** at w,
namely, the truth-value TRUE. But ***c*** requires
an input of type *sb*, not *b*. So we introduce a new
operator, written **^**, such that if X is an expression
and ***t*** is the type of X's reference at each
w, then at each w, the reference of **^**X is of type
*s**t***. **^**X may be read as
'the intension of X', since the rule for
**^** is that at each world, **^**X refers
to that function which for each world w, outputs the reference of X at
w.
If we now evaluate
(16)
**contingent^((every(woollen(sweater)))(woollen))**
at a world w, the result will be FALSE. This is because the function
***p*** of type *sb* that is the reference
of **^((every(woollen(sweater)))(woollen))** at every
world, maps every world to TRUE. So there is no *u* such that
***p***(*u*) = FALSE. But there is such a *u* for
**^((every(pringle(sweater)))(woollen))**, and so
(17)
**contingent^((every(pringle(sweater)))(woollen))**
is TRUE at w. Note that choice of *w* doesn't matter, since the
intension of **contingent** produces the same function
***c*** at every world, and the reference of,
e.g., **^((every(pringle(sweater)))(woollen))**, is the
same function of type *sb* at every world.
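The evaluation of (16) and (17) can be mimicked in the same toy style, with intensions modelled as functions from a hypothetical two-world model to truth-values:

```python
# Two possible worlds (hypothetical model: at w2, Pringle makes a cotton
# sweater, breaking the monopoly). Sweaters listed as (material, maker).
worlds = ["w1", "w2"]
sweaters = {"w1": [("wool", "pringle"), ("wool", "pringle")],
            "w2": [("wool", "pringle"), ("cotton", "pringle")]}

# The ^-formed intensions of (13) and (14): functions from worlds to
# truth-values, i.e., of type sb
every_woollen_sweater_is_woollen = lambda w: all(
    m == "wool" for (m, _) in sweaters[w] if m == "wool")
every_pringle_sweater_is_woollen = lambda w: all(
    m == "wool" for (m, mk) in sweaters[w] if mk == "pringle")

# 'contingent': true of an sb intension p iff p is TRUE at some world u
# and FALSE at some world v
contingent = lambda p: any(p(u) for u in worlds) and any(not p(v) for v in worlds)

print(contingent(every_woollen_sweater_is_woollen))  # False: constant intension
print(contingent(every_pringle_sweater_is_woollen))  # True: varies across worlds
```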
Finally, our type-theory, intensional or extensional, is
*higher-order*, because the semantics makes available
higher-order domains of quantification and reference.
**sweater** refers to a property of individuals, a
*first-order* property. **(every(sweater))**
refers to a property of properties of individuals, a second-order
property. For just as **sweater(my favorite garment)**
attributes the property of being a sweater to a certain individual, so
we can think of **(every(sweater))(woollen)** as
attributing a property to the property of being woollen. Which
property is attributed to being woollen? The rules governing
**every** ensure that **(every(F))** is
truly predicated of **G** iff **G** is a
property of every *F*. In that case, **G** has the
property of being a property of every *F*. So
**(every(F))** stands for the property of being a
property of every *F*.
Treating quantified NPs as terms for properties of properties means
they can occur as arguments to any expression defined for properties
of properties. We can even rescue the uncomprehending student's
attempt at 'Jack hit someone', for provided
'hit' is of the right type -- which is easily
arranged -- we can have **(hit(someone))(jack)**.
Here **hit** accepts the property of being a property of
at least one person, and produces the first-order property of hitting
someone, which is then attributed to Jack. In extensional type-theory,
**hit** has the type
((*ib*)*b*)(*ib*) if
**(hit(someone))(jack)** is well-typed and
**jack** is of type
*i*.[6]
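A toy sketch of this higher-order treatment (with hypothetical individuals and facts) shows how **hit** can accept a quantifier meaning of type (*ib*)*b* and return a first-order property:

```python
# Hypothetical individuals and facts, for illustration only.
people = ["jack", "jill", "joe"]
hit_pairs = {("jack", "jill")}        # Jack hit Jill; no other hitting occurred

# 'someone' has type (ib)b: it takes a property and says some person has it
someone = lambda vp: any(vp(x) for x in people)

# 'hit' has type ((ib)b)(ib): it accepts a quantifier meaning and returns
# the first-order property of hitting someone satisfying that quantifier
hit = lambda quantifier: (lambda x: quantifier(lambda y: (x, y) in hit_pairs))

print(hit(someone)("jack"))   # True: Jack hit someone
print(hit(someone)("joe"))    # False: Joe hit no one
```

As in the text, the quantified NP is the argument to the verb, not a scope-taker over a hidden sentence.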
The significance of this for the semantics of intensional transitives
is that we now have a way of representing a reading of, say,
(18)
Jack wants a woollen sweater
in which the quantified NP is within the semantic scope of the verb
without having scope over a hidden subsentence with a free variable
for the NP to bind: the quantified NP 'a woollen sweater'
can just be the argument to the verb. To allow for the intensionality
of the transitive verb, Montague adopts the rule that if *x*
and *y* can concatenate into a meaningful expression
*xy*, the reference of the functional expression is a function
which operates on the *intension* of the argument expression.
Suppressing some irrelevant detail, this means that if
'wants' syntactically combines with 'a woollen
sweater' to produce the VP 'wants a woollen
sweater', then in its semantics, **want** applies
to the intension of **(a(woollen (sweater)))**, resulting
in the following semantics for (18):
(19)
**want(^(a(woollen(sweater))))(jack)**.[7]
In (19), **a(woollen(sweater))** is within the scope of
**want**. So if we take (19) to represent the notional
reading of (18), the idea that notional readings are readings in which
the quantified NP has narrow scope with respect to the intensional
verb is sustained.
## 5. Revisions and refinements
How does Montague's account of intensional transitives fare
*vis-à-vis* the three marks of
intensionality? The existence-neutrality of existential NPs is
clearly supported by (19), for there is nothing to prevent the
application at a world *w* of **want** to
**^(a(woollen(sweater)))** from producing a function
mapping Jack to TRUE even if, at the same w,
**(woollen(sweater))** maps every individual to FALSE
(woollen sweaters don't exist).
Substitution-failure is supported for contingently coextensive
expressions. For instance, (19) does not entail
**want(^(a(pringle(sweater))))(jack)** even if there are
worlds where all and only woollen sweaters are Pringle sweaters, so
long as there are other worlds where this is not so. Let *u* be a world
of the latter sort. Then **a(pringle(sweater))** at *u* and
**a(woollen(sweater))** at *u* are different functions of
type (*ib*)*b*, making
**^(a(pringle(sweater)))** and
**^(a(woollen(sweater)))** different at every world.
Therefore **want(^(a(pringle(sweater))))** may map Jack
to FALSE at worlds where **want(^(a(woollen(sweater))))**
maps Jack to TRUE: since **want** is applied to different
inputs here, the outputs may also be different.
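The mechanics of this substitution-failure can be mimicked as follows (a hypothetical two-world stock of sweaters; **want** is modelled as an arbitrary function on intensions, which is all the argument requires):

```python
# Hypothetical two-world model: at world u, Pringle makes a cotton sweater
# and another maker produces a woollen one, so the two restrictors come
# apart there. Items are (material, maker) pairs.
worlds = ["w", "u"]
stock = {"w": [("wool", "pringle")],
         "u": [("wool", "other"), ("cotton", "pringle")]}

woollen_sweater = lambda i: i[0] == "wool"
pringle_sweater = lambda i: i[1] == "pringle"

# ^ forms an intension: here, the tuple of world-by-world extensions
extension = lambda noun, w: frozenset(i for i in stock[w] if noun(i))
intension = lambda noun: tuple(extension(noun, w) for w in worlds)

# 'want' may be any function on intensions; modelling it as a lookup table
# makes vivid that distinct intensions are free to receive distinct outputs
jack_wants = {intension(woollen_sweater): True}

print(intension(woollen_sweater) == intension(pringle_sweater))  # False: they differ at u
print(jack_wants.get(intension(pringle_sweater), False))         # False: no entailment
```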
But this result depends on the fact that
**(pringle(sweater))** and
**(woollen(sweater))** are merely contingently
coextensive. If 'water' and 'H2O'
are *necessarily* co-extensive, then wanting a glass of water
and wanting a glass of H2O will be indistinguishable in
higher-order intensional type theory. This failure to make a
distinction, however, traces to the intensionality of the semantics
-- to its being (merely) a possible-worlds semantics -- not
to its being higher-order or type-theoretic. So possible solutions
include (i) augmenting higher-order intensional type theory with extra
apparatus, or (ii) employing a different kind of higher-order type
theory. In both cases the aim is to mark distinctions like that
between wanting a glass of water and wanting a glass of
H2O.
A solution of the first kind, following (Carnap 1947), is proposed in
(Lewis 1972:182-6); the idea is that the meaning of a complex
expression is not its intension, but rather a tree that exhibits the
expression's syntactic construction bottom-up, with each node in the
tree decorated by an appropriate syntactic category label and semantic
intension. But as Lewis says (p. 182), for non-compound lexical
constituents, sameness of intension implies sameness of meaning. So
although his approach will handle the water/H2O problem if
we assume the term 'H2O' has structure that the
term 'water' lacks, it will not without such an
assumption. For the same reason, it cannot explain
substitution-failure involving unstructured proper names, on the usual
view (deriving from Kripke 1972) that identity of extension (at any
world) for such names implies identity of intension. So we have no
account of why admiring Cicero isn't the same thing as admiring
Tully.
A solution of the second kind, employing a different kind of
higher-order type theory, is pursued in (Thomason 1980). In Thomason's
"intentional" logic, propositions are taken as a primitive
category, instead of being analyzed as intensions of type *sb*.
It turns out that a somewhat familiar higher-order type theory can be
built on this basis, in which, roughly, the type of propositions plays
a role analogous to the type of truth-values in extensional type
theory. A property such as **orator**, for example, is a
function of type *ip* (as opposed to *ib*), where
*p* is the type of propositions: given an individual as input,
**orator** will produce the proposition that that
individual is an orator as output. Proper names, however, are not
translated as terms of type *i*, for then
**cicero** and **tully** would present the
same input to **orator**, resulting in the same
proposition as output: **orator(cicero)** =
**orator(tully)**. So there would be no believing Cicero
is an orator without believing Tully is an orator. Instead, Thomason
assigns proper names the type (*ip*)*p*, functions from
properties to propositions. And merely the fact that Cicero and Tully
are the same individual does not require us to say that
**cicero** and **tully** must produce the
same propositional output given the same property input. Instead, we
can have **cicero(orator)** and
**tully(orator)** distinct (see Muskens 2005 for further
development of Thomason's approach).
Applying this to intensional transitives is just a matter of assigning
appropriate types so that the translations of, say, 'Lucia seeks
Cicero' and 'Lucia seeks Tully', are different
propositions (potentially with different truth-values). We need to
keep the verb as function, and we already have the types of
**cicero** and **tully** set to
(*ip*)*p*. The translations of 'seeks
Cicero' and 'seeks Tully' should be functions
capable of accepting inputs of type (*ip*)*p*, such as
**lucia**, and producing propositions as output.
**seeks** therefore accepts input of type
(*ip*)*p* and produces output that accepts input of type
(*ip*)*p* and produces output of type *p*. Thus
**seeks** is of type
((*ip*)*p*)(((*ip*)*p*)*p*), and we
get substitution-failure because **seeks(cicero)** and
**seeks(tully)** can be different functions of type
((*ip*)*p*)*p* so long as **cicero**
and **tully** are different functions of type
(*ip*)*p* (as we already said they should be).
**seeks(cicero)** can therefore map
**lucia** to one proposition while
**seeks(tully)** maps it (not 'her') to
another; and these propositions can have different
truth-values.[8]
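Thomason's type assignments can be sketched in the same toy style, with propositions modelled as structured tags rather than sets of worlds (the tags and facts are a hypothetical encoding; in Thomason's logic propositions are simply primitive). The point is only that **cicero** and **tully** can be distinct functions of type (*ip*)*p*:

```python
# A property of type ip maps an individual tag to a proposition, here a tag.
orator = lambda tag: ("orator", tag)

# Names have type (ip)p: functions from properties to propositions.
# 'cicero' and 'tully' are distinct as functions even if they concern
# the same individual.
cicero = lambda prop: prop("Cicero")
tully = lambda prop: prop("Tully")

print(cicero(orator) == tully(orator))   # False: substitution can fail

# 'seeks' of type ((ip)p)(((ip)p)p): from a name-type meaning to a function
# from name-type meanings to propositions
seeks = lambda obj: (lambda subj: subj(lambda s: ("seeks", s, obj(lambda o: o))))
lucia = lambda prop: prop("Lucia")

print(seeks(cicero)(lucia))                          # ('seeks', 'Lucia', 'Cicero')
print(seeks(cicero)(lucia) == seeks(tully)(lucia))   # False
```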
Finally, there is the question whether (19) shows that Montague's
semantics supports notional readings. One problem is that Montague's
semantics for extensional verbs such as 'get' is exactly
the same as for intensional verbs, and it takes an extra stipulation,
or *meaning-postulate*, for 'get', to guarantee
that the extension of **get(^(a(woollen(sweater))))** at
*w* maps Jack to TRUE only if the extension of
**(woollen(sweater))** at *w* maps some individual to TRUE
(you can want a golden fleece even if there aren't any, but you can't
*get* one if there aren't any). So apparently (19)'s pattern
embodies something in common to the notional meaning of 'want a
woollen sweater' and the meaning of 'gets a woollen
sweater', something which is neutral on the existence of woollen
sweaters. This is unintuitive, but is perhaps not a serious problem,
since it can be avoided by a different treatment of extensional
transitives.
A more pressing question is what justification we have for thinking
that (19) captures the notional, 'no particular one',
reading of
(18).[9]
On the face of it, (19) imputes to Jack the wanting attitude towards
the property of being a property of a woollen sweater. This is the
same attitude as Jack may stand in to a particular woollen sweater,
say *that* one. But it is not at all clear that we have any
grasp of what a single attitude with such diverse objects could be,
and the difficulty seems to lie mainly with the proposed semantics for
notional readings. What does it mean to have the attitude of desire
towards the property of being a property of a woollen sweater?
Two ways of dealing with this suggest themselves. First, we might
supplement the formal semantics with an elucidation of *what it
is* to stand in a common-or-garden attitude to a property of
properties. Second, we might revise the analysis to eliminate this
counterintuitive aspect of it, but without importing the
propositionalist's hidden sentential contexts.
Both (Moltmann 1997) and (Richard 2001) can be read as providing,
within Montague's general approach, an account of what it is to stand
in an attitude relation to a property of properties. Both accounts are
modal, having to do with the nature of possible situations in which
the attitude is in some sense "matched" by the situation:
an attitude-state of need or expectation is matched if the need or
expectation is met, an attitude-state of desire is matched if the
desire is satisfied, an attitude-event of seeking is matched if the search
concludes successfully, and so on. According to Moltmann's account
(1997, 22-3) one stands in the attitude relation of seeking to
**^(a(woollen(sweater)))** iff, in every *minimal
situation* s in which that search concludes successfully,
one finds a woollen sweater in s. Richard (2001, p. 116) offers a
more complex analysis that is designed to handle negative quantified
NPs as well ('no woollen sweater', 'few woollen
sweaters', etc.). On this account, a search p
*demands* **^(a(P))** iff for every relevant
success-story *m* = <w, s> for p, things in s with a
property entailing **P** are in the extension of
**^(a(P))** at w. Here s is the set of things that are
found when the search concludes successfully in w.
By contrast, (Zimmerman 1993) and (Forbes 2000, 2006) propose
revisions in (19) itself and its ilk. Zimmerman (161-2) replaces
the quantifier intension with a property intension, since he holds
that (i) unspecific readings are restricted to "broadly"
existential quantified NPs, and (ii) the property corresponding to
the nominal in the existential NP (e.g., 'woollen
sweater') can do duty for the NP itself. Of course, the proposed
restriction of unspecific readings to existentials is controversial
(*cf*. our earlier example, 'Guercino is looking for
every dog on Aldrovandi's estate'). It may also be wondered
whether there is any less of a need to explain what it is to stand in
the seeking relation to a property of objects than to a property of
properties (but for a response to this kind of objection, see
Grzankowski 2018, 146-9).
According to (Forbes 2000) the need for such an explanation already
threatens the univocality of a verb such as 'look for' as
it occurs in 'look for *that* woollen sweater' and
'look for some woollen sweater'. Observing that search
verbs are action verbs, Forbes applies Davidson's event semantics to
them (Davidson 1967). In this semantics, as developed in (Parsons
1990), search verbs become predicates of events, and in relational
(specific) readings, the object searched for is said to be in a
thematic relation to the event, one denoted by 'for'; thus
'some search *e* is for *that* woollen
sweater'. But in unspecific readings, no thematic relation is
invoked; rather, the quantified NP is used to *characterize*
the search. So we would have 'some search *e* is
characterized by **^(a(woollen(sweater)))**', i.e.,
*e* is an a-woollen-sweater search (Forbes 2000, 174-6;
2006, 77-84). What it is for a search to be characterized by a
quantifier, say **^(a(woollen(sweater)))**, is explained
in terms of 'outcome postulates'. For the current example,
as a first approximation, a search is characterized by
**^(a(woollen(sweater)))** iff any course of events in
which that search culminates successfully includes an event of finding
a woollen sweater whose agent is the agent of the search. Similar
postulates can be given for, e.g., the meeting of expectations and the
satisfaction of desires: a state of desire is characterized by
**^(a(woollen(sweater)))** iff any course of events in
which that desire is satisfied includes an event of getting a woollen
sweater whose recipient is the agent of the desire (Forbes 2006,
94-129).
There is therefore a range of different non-propositionalist
approaches to intensional transitives. As we already remarked, one
possibility is that propositionalism is correct for some verbs and
non-propositionalism correct for others. However, there is also the
option that non-propositionalism is correct for all. A
non-propositionalist who makes this claim will have to explain the
phenomena illustrated in (7)-(10), without introducing degrees
of freedom that make it unintelligible that these phenomena do not
arise for all intensional transitives.
## 6. Prior's Puzzle
Intensional transitive verbs are also involved in another puzzle of
substitution-resistance besides the one already discussed. In the
literature on propositional attitude reports, it's the received view
that the complement clauses in such reports refer to propositions. So,
for example, in 'Holmes believes that Moriarty has
returned', the clause 'that Moriarty has returned'
is taken to stand for the proposition that Moriarty has returned. The
whole ascription is then understood to have the form *Rab*,
which in terms of the example means that Holmes (*a*) stands in
the relation of belief (*R*) to the proposition that Moriarty
has returned (*b*). However, as well as being denoted by
*that*-clauses, it seems that propositions are also denoted by
proposition descriptions, noun phrases that explicitly use 'the
proposition', such as, in the previous sentence, 'the
proposition that Moriarty has returned'. So, if the clause and
the description co-denote, we have the following truth:
(20)
that Moriarty has returned is the proposition that Moriarty has
returned.
But then we should be able to substitute proposition-description for
*that*-clause, which, indeed, works well enough for
'believes': from 'Holmes believes that Moriarty has
returned' it does seem to follow that Holmes believes the
proposition that Moriarty has returned, even though a side-effect of
the substitution is to change the clausal 'believes' into
its transitive form. However, 'believes' is rather special
in this respect. Despite (20), examples (21a) and (21b) below
appear to have very different meanings:
(21a)
Holmes {fears/suspects} that Moriarty has returned.
(21b)
Holmes {fears/suspects} the proposition that Moriarty has
returned.
(21a) may well be true, but it is unlikely that Holmes fears a
proposition, or that some proposition is a thing of which he is
suspicious. (This phenomenon seems first to have been noted in print
by A. N. Prior (1963).)
That the meaning of the minor premise does not survive substitution is
the rule rather than the exception: we get a similar outcome with
'announce', 'anticipate', 'ask',
'boast', 'calculate', 'caution',
'complain', 'conclude', 'crow',
'decide', 'detect', 'discover',
'dream', 'estimate', 'forget',
'guess', 'hope', 'insinuate',
'insist', 'interrogate' (literary theory),
'judge', 'know', 'love',
'mention', 'notice', 'observe',
'prefer', 'pretend',
'question', 'realize', 'rejoice',
'require', 'see', 'suggest',
'surmise', 'suspect', 'trust',
'understand', 'vote', 'wish', and
various cognates of these. In some cases, substitution fails because
it is meaning-altering, in others because it is meaning-dissolving (a
selection constraint is violated, or the purported transitive verb
simply doesn't exist in the language). The verbs for which
substitution is acceptable are thinner on the ground: inference verbs
such as 'conclude', 'deduce',
'entail' and 'establish', along with a few
others like 'accept', 'believe',
'doubt', 'state' and 'verify' (but
for 'believe' see King 2002, 359-60; Forbes 2018,
118; and Nebel 2019, 97-9).
The puzzle doesn't depend on the credentials of (20) as an identity
sentence. Even if, for whatever reason, it isn't, it is still hard to
see how truth-value can change going from (21a) to (21b), granted
that *the proposition that Moriarty has returned* and *that
Moriarty has returned* are co-denoting. There also doesn't seem to
be a useful application of a traditional account of referential
opacity, such as Frege's (discussed in connection with (2c) above).
Substitution-failure is explained in Fregean terms by co-denoting
expressions having different senses, but it's unclear that adding or
deleting 'the proposition' is a significant enough
difference to change sense. We also expect that examples of
substitution-failure become unrealistic if the subject is attributed
explicit belief in the identity premise. However, adding that Holmes
is absolutely clear about the truth of (20) and has it at the
forefront of his consciousness doesn't make it any more likely that
(21b) is true, even though (21a) is.
Consequently, there is some appeal to solutions of Prior's Puzzle that
discern an equivocation in the inference, or propose that the crucial
terms, *the proposition that Moriarty has returned* and
*that Moriarty has returned*, aren't really coreferential as
they occur in the inference. An approach of the first kind, in (King
2002), argues that the transitive and clausal forms of the intensional
verb are *polysemous*, that is, weakly ambiguous (the two
senses are related). This has been criticized on the basis of ellipsis
examples, such as 'Bob didn't even mention the proposition that
first-order logic is undecidable, let alone that it is provable'
(Boer 2009, 552), and 'the Soviet authorities genuinely fear a
religious revival, and that the contagion of religion will
spread' (after Nebel 2019, 77). Since these examples don't
strike one as amusingly incongruous, in the way that typical examples
of zeugma do ('All over Ireland the farmers grew potatoes,
barley, and bored'), they are in tension with a postulation of
polysemy. Consequently, Boer (2009) and Nebel (2019) propose that the
problem lies in an equivocation in the terms *the proposition that
Moriarty has returned* and *that Moriarty has returned*. On
both their accounts, it is the propositional description 'the
proposition that...' which fails to denote what one might
expect. On the other hand, as evidence in favor of polysemy, there is
the fact that some cases of ellipsis seem in some way anomalous, such
as 'John heard thunder and that a storm was rolling in'.
Here the anomaly can be explained in terms of *how*
different the two senses of 'heard' are.
Event semantics (see section 5 above) provides an alternative which
may avoid the need for any kind of equivocation. An initial proposal
is made in (Pietroski 2000) for 'explain', which behaves
like 'fear' and 'suspect', as in
(22a)
Martin explained that the nothing itself nothings.
(22b)
Martin explained the proposition that the nothing itself
nothings.
(22a) may well be true but (22b) is unlikely. The idea is then that
the clausal and transitive forms of 'explain' take
different thematic complements, content versus theme: with the clausal
verb of (22a), a 'that'-clause provides a content, while
with the transitive verb of (22b), a proposition description provides
a theme, the thing that *gets* explained. Since, as already
noted, a side-effect of the substitution is to change the verb from
its clausal to its transitive form, a consequent side-effect is to
change the role ascribed to the proposition from content to theme.
This account is developed in (Pietroski 2005) and also, as an
extension of the event semantics of (Forbes 2006), in (Forbes
2018).
## 7. The logic of intensional transitives
There may be no such topic as the logic of *propositional*
attitudes: it may be doubted whether 'Mary wants to meet a man
who has read Proust and a man who has read Gide'
*logically* entails 'Mary wants to meet a man who has
read Proust'. Even if standing in the wanting-to-be-true
attitude to *I meet a man who has read Proust and a man who has
read Gide* somehow necessitates standing in the wanting-to-be-true
attitude to *I meet a man who has read Proust*, the
necessitation appears to be more psychological than logical. On the
other hand, what we might call 'objectual attitudes', the
non-propositional attitudes ascribed by intensional transitives, seem
to have a logic (for a compendium of examples, see Richard 2001,
105-7): 'Mary seeks a man who has read Proust and a man
who has read Gide' does seem to entail 'Mary seeks a man
who has read Proust'.
Yet, as Richard notes (Richard 2001, 107-8), the inferential
behavior of quantified complements of intensional transitives is still
very different from the extensional case. For example (his
'Literary Example'), even if it is true that Mary seeks a
man who has read Proust and a man who has read Gide, it may be false
both that she seeks at most one man and that she seeks at least two
men; for she may be indifferent between finding a man who has read
both, versus finding two men, one a reader of Proust but not Gide, the
other of Gide but not Proust. Contrast 'saw',
'photographed', or 'met': if she met a man who
has read Proust and a man who has read Gide, it cannot be false both
that she met at most one man and also that she met at least two
men. As Richard insists, it is a constraint on any semantics of
intensional transitives that they get this type of case right.
By contrast, in other cases, even very simple ones, it is
controversial exactly what inferences intensional transitives in
unspecific readings support. If Mary seeks a man who has read Proust,
does it follow that she seeks a man who can read? After all, it is
unlikely that a comic-book reader will satisfy her tastes in
men.[10]
If Perseus seeks a mortal gorgon, does it follow that he seeks a
gorgon?[11]
After all, if he finds an immortal gorgon, he is in trouble.
Zimmerman (1993, 173) takes it to be a requirement on accounts of
notional readings that they validate these deletion or
"weakening" inferences. But there are characterizations of
unspecific readings on which these inferences are in fact invalid, for
instance, a characterization in terms of *indifference* towards
which object of the relevant kind is found (Lewis uses such an
'any one would do' characterization in Lewis 1972, p.
199). For even if it is true that Mary seeks a man who has read
Proust, and any male Proust-reader would do, it does not follow that
Mary seeks a man who can read, and any man who can read would do. For
not every man who can read has read Proust.
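This counterexample can be made concrete with a toy model (the names and sets below are invented for illustration): under the indifference gloss, an unspecific search ascription pairs the seeker with a set of acceptable finds, and weakening fails because that set is not thereby enlarged.

```python
# Toy model of the indifference ('any one would do') gloss: an
# unspecific search ascription pairs the seeker with the set of
# acceptable finds. (Names and sets are invented for illustration.)
proust_readers = {"alan", "bo"}
literate_men = {"alan", "bo", "carl"}   # every man who can read

# 'Mary seeks a man who has read Proust, any one would do':
mary_accepts = proust_readers

# The weakened ascription, under the same gloss, would require that
# any literate man would do; but Mary's acceptance set is narrower.
weakening_holds = (mary_accepts == literate_men)
print(weakening_holds)   # False: the inference fails on this gloss
```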
Is there anything intrinsic to the indifference characterization of
unspecificity ('any one would do') that we can object to
while leaving the status of weakening inferences open? One objection
is that the characterization does not work for every verb or
quantifier: 'Guercino painted a dog, any dog would do'
makes little sense, and 'the police are looking for everyone who
was in the room, any people who were in the room would do' is
not much better. More importantly, the characterization seems to put
warranted assertibility out of reach, since the grounds which we
normally take to justify ascribing an existentially quantified
objectual attitude will rarely give reason to think that the agent has
absolutely no further preferences going beyond the characterization of
the object-kind given in the ascription (probably Mary would pass on
meeting a male psychopathic killer who has read Proust; see further
Graff Fara 2013 for analogous discussion of desire).
Still, this is not to validate weakening inferences; for that we would
need to show that the more usual gloss of the unspecific reading,
using a 'but no particular one' rider, supports the
inferences as strongly as the indifference characterization refutes
them. And it is unclear how such an argument would proceed (see
further Forbes 2006, 94-6). In addition, making a good case that
the inferences are intuitively valid is one thing, getting the
semantics to validate them is another. Both the propositionalist and
the Montagovian need to add extra principles, since there is nothing
in their bare semantics to compel these inferences. For even if Jack
stands in the wanting relation to
**^(a(woollen(sweater)))**, that by itself is silent on
whether he also stands in the wanting relation to
**^(a(sweater))**. The accounts of Moltmann and Richard
both decide the matter positively, however (e.g., if in every minimal
situation in which Jack's desire is satisfied, he gets a woollen
sweater, then in every such situation, he gets a sweater; so he wants
a sweater).
The intuitive validity of weakening can also be directly challenged.
For example, via weakening we can infer that if A is looking for a cat
and B is looking for a dog, then A and B are looking for the same
thing (an animal). For discussion of this kind of example, see
(Zimmerman 2006), and for the special use of 'the same
thing', (Moltmann 2008). Asher (1987, 171) proposes an even more
direct counterexample. Suppose you enter a competition whose prize is
a free ticket on the Concorde to New York. So presumably you *want
a free ticket* on the Concorde. But you don't *want a
ticket* on the Concorde, since you know that normally these are
very expensive, you are poor, and you strictly resist desiring the
unattainable. Asher is here assimilating notional uses of indefinites
to generics, which on his account involve quantification over
*normal* worlds. So if for some bizarre reason you want a
sloop, but one whose hull is riddled with holes, it will not be
literally true to say you want a sloop.
Undeniably there is a real phenomenon here, but perhaps it belongs to
pragmatics rather than semantics. If I say 'I want a sloop',
someone who offers to buy me any sloop floating in the harbor could
reasonably complain 'You should have said that' if I decline the
offer on the grounds that none of those sloops meets my unstated
requirement of having a hull riddled with holes. But my aspirational
benefactor's complaint might be justified because normality is a
*default implicature* or presupposition that a co-operative
speaker is under some obligation to let her audience know isn't in
force, when it isn't. It is still literally true, on this view, that
you want a sloop, despite the idiosyncrasy of the details of your
desire. However, this is far from the end of the story. Those with
doubts about weakening will find the discussion in (Sainsbury 2018,
129-133) congenial.
Another interesting logical problem concerns the "conjunctive
force" of disjoined quantified NPs in objectual ascriptions.
There is a large literature on the conjunctive force of disjunction in
many other contexts (e.g., Kamp 1973, Loewer 1976, Makinson 1984,
Jennings 1994, Zimmerman 2000, Simons 2005, Fox 2007), for instance as
exhibited in '*x* is larger than *y* or
*z*' and 'John can speak French or Italian to
you'. In these cases the conjunctive force is easily captured by
simple distribution: '*x* is larger than *y* and
larger than *z*', 'John can speak French to you and
can speak Italian to
you'.[12]
However, with intensional transitives we find the same conjunctive
force, but no distributive articulation. If we say that Jack needs a
woollen sweater or a fleece jacket, we say something to the effect
that (i) his getting a woollen sweater is one way his need could be
met, *and* (ii) his getting a fleece jacket is another way his
need could be met. But 'Jack needs a woollen sweater or a fleece
jacket' does *not* mean that Jack needs a woollen sweater
*and* needs a fleece jacket. This last conjunction ascribes two
needs, only one of which is met by getting a satisfactory woollen
sweater. But the latter acquisition by itself meets the disjunctive
need for a woollen sweater or a fleece jacket. So there is a challenge
to explain the semantics of the disjunctive ascription, while at the
same time remaining within a framework that can accommodate all cases
of conjunctive force -- comparatives, various senses of
'can', counterfactuals with disjunctive antecedents
('if Jack were to put on a woollen sweater or a fleece jacket
he'd be warmer') and so on; see further (Forbes 2006,
97-111).
A penultimate type of inference we will mention is one in which
intensional and extensional verbs both occur, and the inference seems
valid even when the intensional VPs are construed unspecifically. An
example:
(23a)
Jack wants a woollen sweater
(23b)
Whatever Jack wants, he gets
(23c)
Therefore, Jack will get a woollen sweater
Obviously, (23a,b) entail (23c) when (23a) is understood specifically.
But informants judge that the inference is also valid when (23a) is
understood unspecifically, with 'but no particular one'
explicitly appended. If we seek a validation of the inference that
hews to surface form, Montague's uniform treatment of intensional and
extensional verbs has its appeal: (23b) will say that for whatever
property of properties ***P*** Jack stands in the
wanting relation to, he stands in the getting relation to. So the
inference is portrayed as the simple *modus ponens* it seems to
be. It would then be the task of other meaning-postulates to carry us
from his standing in the getting relation to
**^(a(woollen(sweater)))** to there being a woollen
sweater such that he gets it.
The final example to be considered
here involves fictional or mythical names in the scope of intensional
transitives. An example (Zalta 1988, 128) of the puzzles these can
lead to is:
(24a)
The ancient Greeks worshipped Zeus.
(24b)
Zeus is a mythical character.
(24c)
Mythical characters do not exist.
(24d)
Therefore, the ancient Greeks worshipped something that does
(did) not exist.
Or even:
(24e)
Therefore, there is something that doesn't exist such that the
ancient Greeks worshipped it.
One thing the example shows is that specific/unspecific is not to be
confused with real/fictional. (24a) is a true specific ascription,
just as 'the ancient Greeks worshipped Ahura Mazda' is a
false one. (24b) is also true. So the ancient Greeks, who would not
have *knowingly* worshipped a mythical character, were making a
rather large mistake, if one of a familiar sort.
(24c) is also true, if we are careful about what 'do not
exist' means in this context. It is contingent that Zeus-myths
were ever formulated, and one sense in which we might mean (24c) turns
on the assumption that fictional and mythical characters exist iff
fictions and myths about them do. In this sense, (24c) is false,
though an actual fictional character such as Zeus would not have
existed if there had been no Zeus-myths. This also explains why (24a)
and (24b) can both be true: 'Zeus' refers to the mythical
character, a contingently existing abstract object.
However, by far the more likely reading of (24c) is one on which it
means that mythical characters are not real. Zeus is not flesh and
blood, not even immaterial flesh and blood. With this in mind, (24d)
and (24e) are both true. The quantifier 'something' ranges
over a domain that includes both real and fictional or mythical
entities, and there is something in that domain, the mythical
character, which was worshipped by the ancient Greeks and which is not
in the subdomain of real items.
This gets the right truth-values for the statements in (24), but might
be thought to run into trouble with the likes of 'Zeus lives on
Mt. Olympus': if 'Zeus' refers to an abstract
object, how can Zeus live anywhere? One way of dealing with this kind
of case is to suppose, evidently plausibly, that someone who says
'Zeus lives on Mt. Olympus' and knows the facts means
(25)
According to the myth, Zeus lives on Mt. Olympus.
On the other hand, if an ancient Greek believer says 'Zeus lives
on Mt. Olympus', he or she says something false, there being no
reason to posit a covert 'according to the myth' in this
case.
However, even the covert operator theory might be contested, on the
grounds that within its scope we are still unintelligibly predicating
'lived on Mt. Olympus' of an abstract object. Simply
prefixing 'according to the myth' to the unintelligible
cannot render it intelligible. But there is the evident fact that (25)
is both intelligible and true. So either prefixing 'according to
the myth' *can* render the unintelligible intelligible,
or what is going on in the embedded sentence is not to be construed as
standard predication. For further discussion of these matters, see,
for example, van Inwagen 1977, Parsons 1980, Zalta 1988, Thomasson
1998, and Salmon 2002.
## 1. The Logical Problem
Truth, perhaps even more than beauty and goodness, has been the target
of an extraordinary amount of philosophical dissection and
speculation. By comparison with truth, the more complex and much more
interesting concept of truthlikeness has only recently become the
subject of serious investigation. The logical problem of truthlikeness
is to give a consistent and materially adequate account of the
concept. But first, we have to make it plausible that there is a
coherent concept in the offing to be investigated.
### 1.1 What's the problem?
Suppose we are interested in what is the number of planets in the
solar system. With the demotion of Pluto to planetoid status, the
truth of this matter is that there are precisely 8 planets. Now, the
proposition *the number of planets in our solar system is 9*
may be false, but quite a bit closer to the truth than the proposition
that *the number of planets in our solar system is 9 billion*.
(One falsehood may be closer to the truth than another falsehood.) The
true proposition *the number of the planets is between 7 and 9
inclusive* is closer to the truth than the true proposition that
*the number of the planets is greater than or equal to 0*. (So
a truth may be closer to the truth than another truth.) Finally, the
proposition that *the number of the planets is either less than or
greater than 9* may be true but it is arguably not as close to the
*whole* truth as its highly accurate but strictly false
negation: *that there are 9 planets.*
This particular numerical example is admittedly simple, but a wide
variety of judgments of relative likeness to truth crop up both in
everyday parlance as well as in scientific discourse. While some
involve the relative accuracy of claims concerning the value of
numerical magnitudes, others involve the sharing of properties,
structural similarity, or closeness among putative laws.
Consider a non-numerical example, also highly simplified but quite
topical in the light of the recent rise in status of the concept of
*fundamentality*. Suppose you are interested in the truth about
which particles are fundamental. At the outset of your inquiry all you
know are various logical truths, like the tautology *either
electrons are fundamental or they are not*. Tautologies are pretty
much useless in helping you locate the truth about fundamental
particles. Suppose that the standard model is actually on the right
track. Then, learning that electrons are fundamental edges you a
little bit closer to your goal. It is by no means the complete truth
about fundamental particles, but it is a piece of it. If you go on to
learn that electrons, along with muons and tau particles, are a kind
of lepton and that all leptons are fundamental, you have presumably
edged closer.
If this is right, then some truths are closer to the truth about
fundamental particles than others.
The discovery that atoms are not fundamental, that they are in fact
composite objects, displaced the earlier hypothesis that *atoms are
fundamental*. For a while the proposition that *protons,
neutrons and electrons are the fundamental components of atoms*
was embraced, but unfortunately it too turned out to be false. Still,
this latter falsehood seems closer to the truth than its predecessor
(assuming, again, that the standard model is true). And even if the
standard model contains errors, as surely it does, it is presumably
closer to the truth about fundamental particles than these other
falsehoods.
So again, some falsehoods may be closer to the truth about fundamental
particles than other falsehoods.
As we have seen, a tautology is not a terrific truth locator, but if
you moved from the tautology that *electrons either are or are not
fundamental* to embrace the false proposition *that electrons
are not fundamental* you would have moved further from your
goal.
So, some truths are closer to the truth than some falsehoods.
But it is by no means obvious that all truths about fundamental
particles are closer to the whole truth than any falsehood. The false
proposition that *electrons, protons and neutrons are the
fundamental components of atoms*, for instance, may well be an
improvement over the tautology.
If this is right, certain falsehoods are closer to the truth than some
truths.
Investigations into the concept of truthlikeness only began in earnest
in the early nineteen sixties. Why was *truthlikeness* such a
latecomer to the philosophical scene? It wasn't until the latter
half of the twentieth century that mainstream philosophers gave up on
the Cartesian goal of infallible knowledge. The idea that we are quite
possibly, even probably, mistaken in our most cherished beliefs, that
they might well be just *false*, was mostly considered
tantamount to capitulation to the skeptic. By the middle of the
twentieth century, however, it was clear that many of our commonsense
beliefs, as well as previous scientific theories, are strictly
speaking, false. Further, the increasingly rapid turnover of
scientific theories suggested that, far from being certain, they are
ever vulnerable to refutation, and typically are eventually refuted
and replaced by some new theory. The history of inquiry is one of a
parade of refuted theories, replaced by other theories awaiting their
turn at the guillotine. (This is the "dismal induction",
see also the entry
on realism and theory change in science.)
*Realism* holds that the constitutive aim of inquiry is the
truth of some matter. *Optimism* holds that the history of
inquiry is one of progress with respect to its constitutive aim. But
*fallibilism* holds that our theories are false or very likely
to be false, and to be replaced by other false theories. To combine
these three ideas, we must affirm that some false propositions better
realize the goal of truth - are closer to the truth - than
others. We are thus stuck with the logical problem of
truthlikeness.
While a multitude of apparently different solutions to the problem
have been proposed, they can be classified into three main
approaches, each with its own heuristic - the *content*
approach, the *consequence* approach and the *likeness*
approach. Before exploring these possible solutions to the logical
problem, it could be useful to dispel a couple of common confusions,
since truthlikeness should not be conflated with either epistemic
probability or with vagueness. We discuss this latter notion in the
supplement Why truthlikeness is not probability or vagueness (see
also the entry on
vagueness);
as for the former, we shall discuss the difference between (expected)
truthlikeness and probability when discussing the epistemological
problem (§2).
### 1.2 The content approach
Karl Popper was the first philosopher to take the logical problem of
truthlikeness seriously enough to make an assay on it. This is not
surprising, since Popper was also the first prominent realist to
embrace a very radical fallibilism about science while also trumpeting
the epistemic superiority of the enterprise. In his early work, he
implied that the only kind of progress an inquiry can make consists in
falsification of theories. This is a little depressing, to say the
least. It is almost as depressing as the pessimistic induction. What
it lacks is a positive account of how a succession of falsehoods might
constitute positive cognitive progress. Perhaps this is why
Popper's early work received pretty short shrift from other
philosophers. If a miss is as good as a mile, and all we can ever
establish with confidence is that our inquiry has missed its target
once again, then epistemic pessimism seems inevitable. Popper
eventually realized that falsificationism is compatible with optimism
provided we have an acceptable notion of verisimilitude (or
truthlikeness). If some false hypotheses are closer to the truth than
others, then the history of inquiry may turn out to be one of progress
towards the goal of truth. Moreover, it may even be reasonable, on the
basis of our evidence, to conjecture that our theories are in fact
making such progress, even though we know they are all false or highly
likely to be false.
Popper saw clearly that the concept of truthlikeness should not be
confused with the concept of epistemic probability, and that it has
often been so confused. (See Popper 1963 for a history of the
confusion and the supplement
Why truthlikeness is not probability or vagueness
for an explanation of the difference between the two concepts.)
Popper's insight here was facilitated by his deep but largely
unjustified antipathy to epistemic probability. He thought that his
starkly falsificationist account favored bold, contentful theories.
Degree of informative content varies inversely with probability
- the greater the content the less likely a theory is to be
true. So if you are after theories which seem, on the evidence, to be
true, then you will eschew those which make bold - that is,
highly improbable - predictions. On this picture, the quest for
theories with high probability is simply misguided.
To see this distinction between truthlikeness and probability clearly,
and to articulate it, was one of Popper's most significant
contributions, not only to the debate about truthlikeness, but to
philosophy of science and logic in general. However, his deep
antagonism to probability, combined with his love of boldness, was
both a blessing and a curse. The blessing: it led him to produce not
only the first interesting and important account of truthlikeness, but
to initiate an approach to the problem in terms of content. The curse:
content alone, as Popper envisaged it, is insufficient to characterize
truthlikeness.
Popper made the first attempt to solve the problem in his famous
collection *Conjectures and Refutations*. As a great admirer of
Tarski's assay on the concept of truth, he modelled his theory
of truthlikeness on Tarski's theory. First, let a matter for
investigation be circumscribed by a formalized language \(L\) adequate
for discussing it. Tarski showed us how each possible world, or model
of the language, induces a partition of sentences of \(L\) into those
that are true and those that are false. The set of all sentences true
in the actual world is thus a complete true account of the world, as
far as that language goes. It is aptly called the Truth, \(T\). \(T\)
is the target of the investigation couched in \(L\). It is the theory
(relative to the resources in \(L)\) that we are seeking. If
truthlikeness is to make sense, theories other than \(T\), even false
theories, come more or less close to capturing \(T\).
\(T\), the Truth, is a theory only in the technical Tarskian sense,
not in the ordinary everyday sense of that term. It is a set of
sentences closed under the consequence relation: \(T\) may not be
finitely axiomatizable, or even axiomatizable at all. However, it is a
perfectly good set of sentences all the same. In general, we will
follow the Tarski-Popper usage here and call any set of sentences
closed under consequence a *theory*, and we will assume that
each proposition we deal with is identified with a theory in this
sense. (Note that theory \(A\) logically entails theory \(B\) just in
case \(B\) is a subset of \(A\).)
The complement of \(T\), the set of false sentences \(F\), is not a
theory even in this technical sense. Since falsehoods always entail
truths, \(F\) is not closed under the consequence relation. (This may
be the reason why we have no expression like *the Falsth*: the
set of false sentences does not describe a possible alternative to the
actual world.) But \(F\) too is a perfectly good set of sentences. The
consequences of any theory \(A\) that can be formulated in \(L\) will
thus divide between \(T\) and \(F\). Popper called the intersection of
\(A\) and \(T\), the *truth content* of \(A\) (\(A\_T\)), and
the intersection of \(A\) and \(F\), the *falsity content* of
\(A\) (\(A\_F\)). Any theory \(A\) is thus the union of its
non-overlapping truth content and falsity content. Note that since
every theory entails all logical truths, these will constitute a
special set, at the center of \(T\), which will be included in every
theory, whether true or false.
![Diagram 1](diagram1.png)
Diagram 1. Truth and falsity contents of
false theory \(A\)
A false theory will cover some of \(F\), but because every false
theory has true consequences, including all logical truths, it will
also overlap with \(T\) (Diagram 1).
A true theory, however, will only overlap \(T\) (Diagram 2):
![Diagram 2](diagram2.png)
Diagram 2. True theory \(A\) is
identical to its own truth content
Amongst true theories, then, it seems that the more true sentences
that are entailed, the closer we get to \(T\), hence the more
truthlike. Set theoretically that simply means that, where \(A\) and
\(B\) are both true, \(A\) will be more truthlike than \(B\) just in
case \(B\) is a proper subset of \(A\) (which for true theories means
that \(B\_T\) is a proper subset of \(A\_T\)). Call this principle:
*the value of content for truths*.
![Diagram 3](diagram3.png)
Diagram 3. True theory \(A\) has more
truth content than true theory \(B\)
This essentially syntactic account of truthlikeness has some nice
features. It induces a partial ordering of truths, with the whole
Truth \(T\) at the top of the ordering: \(T\) is closer to the Truth
than any other true theory. The set of logical truths is at the
bottom: further from the Truth than any other true theory. In between
these two extremes, true theories are ordered simply by logical
strength: the more logical content, the closer to the Truth. Since
probability varies inversely with logical strength, amongst truths the
theory with the greatest truthlikeness \((T)\) must have the smallest
probability, and the theory with the largest probability (the logical
truth) is the furthest from the Truth. Popper made a simple and
perhaps plausible generalization of this. Just as truth content
(coverage of \(T)\) counts in favor of truthlikeness, falsity content
(coverage of \(F)\) counts against. In general then, a theory \(A\) is
closer to the truth if it has more truth content without engendering
more falsity content, or has less falsity content without sacrificing
truth content (diagram 4):
![Diagram 4](diagram4.png)
Diagram 4. False theory \(A\) closer to
the Truth than false theory \(B\)
The generalization of the truth content comparison, by incorporating
falsity content comparisons, also has some nice features. It preserves
the comparisons of true theories mentioned above. The truth content
\(A\_T\) of a false theory \(A\) (itself a theory in the Tarskian
sense) will clearly be closer to the truth than \(A\) (Diagram 1). And
the whole truth \(T\) will be closer to the truth than any falsehood
\(B\) because the truth content of \(B\) must be contained within
\(T\), and the falsity content of \(T\) (the empty class) must be
properly contained within the non-empty falsity content of \(B\).
Despite its attractive features, the account has a couple of
disastrous consequences. Firstly, since a falsehood has some false
consequences, and no truth has any, it follows that no falsehood can
be as close to the truth as a logical truth - the weakest of all
truths. A logical truth leaves the location of the truth wide open, so
it is rather worthless as an approximation to the whole truth. On
Popper's account, no falsehood can ever be more worthwhile than
a worthless logical truth. (We could call this result *the absolute
worthlessness of falsehoods*).
Furthermore, it is impossible to add a true consequence to a false
theory without thereby adding additional false consequences (or
subtract a false consequence without subtracting true consequences).
So the account entails that no false theory is closer to the truth
than any other. We could call this result *the relative
worthlessness of all falsehoods*. These worthlessness results were
proved independently by Pavel Tichy and David Miller (Miller
1974, and Tichy 1974) - for a proof, see the supplement
on Why Popper's definition of truthlikeness fails: the Tichy-Miller theorem.
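In a small enough model the Tichy-Miller result can be checked by brute force. The following sketch (an illustration of ours, not part of the entry's formal apparatus) identifies a proposition with the set of possible worlds at which it is true, takes its consequences to be all weaker propositions, and searches for a pair of falsehoods one of which Popper's definition ranks above the other:

```python
from itertools import product, combinations

# Identify a proposition with the set of possible worlds at which it is
# true; its consequence class is then the set of all weaker (superset)
# propositions. Two atoms (four worlds) suffice for the check.
worlds = list(product([True, False], repeat=2))
actual = (True, True)   # the actual world

# All non-contradictory sentences, up to logical equivalence:
sentences = [frozenset(s) for r in range(1, len(worlds) + 1)
             for s in combinations(worlds, r)]

def truth_content(a):
    return {s for s in sentences if a <= s and actual in s}

def falsity_content(a):
    return {s for s in sentences if a <= s and actual not in s}

def popper_closer(a, b):
    # A is closer to the truth than B iff A's truth content contains
    # B's, B's falsity content contains A's, and one containment is proper.
    return (truth_content(b) <= truth_content(a)
            and falsity_content(a) <= falsity_content(b)
            and (truth_content(b) < truth_content(a)
                 or falsity_content(a) < falsity_content(b)))

falsehoods = [s for s in sentences if actual not in s]
# No falsehood ever comes out closer to the truth than any other:
print(any(popper_closer(a, b) for a in falsehoods for b in falsehoods))  # False
```

Among truths, by contrast, the same definition does rank the strongest true proposition above the tautology, exactly as the value of content for truths requires.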
It is tempting (and Popper was so tempted) to retreat in the face of
these results to something like the comparison of truth contents
alone. That is to say, \(A\) is as close to the truth as \(B\) if
\(B\_T\) is contained in \(A\_T\), and \(A\) is closer to the truth than
\(B\) just in case \(B\_T\) is properly contained in \(A\_T\). Call this
the *Simple Truth Content account*.
This Simple Truth Content account preserves what many consider to be
the chief virtue of Popper's account: the value of content for
truths. And while it delivers the absolute worthlessness of falsehoods
(no falsehood is closer to the truth than a tautology) it avoids the
relative worthlessness of falsehoods. If \(A\) and \(B\) are both
false, then \(A\_T\) may well properly contain \(B\_T\). But that holds
if and only if \(A\) is logically stronger than \(B\). That is to say,
a false proposition is the closer to the truth the stronger it is.
According to this principle - call it *the value of content
for falsehoods* - the false proposition that *there are
nine planets, and all of them are made of green cheese* is more
truthlike than the false proposition *there are nine planets*.
And so once one knows that a certain theory is false one can be
confident that tacking on any old arbitrary proposition, no matter how
inaccurate it is, will lead us inexorably closer to the truth. This is
sometimes called the *child's play* objection. Among
false theories, *brute logical strength* becomes the sole
criterion of a theory's likeness to truth.
Even though Popper's particular proposals were flawed, his idea
of comparing truth-content and falsity-content is nevertheless worth
exploring. Several philosophers have developed variations on the idea.
Some stay within Popper's essentially syntactic paradigm,
elucidating content in terms of consequence classes (e.g., Newton
Smith 1981; Schurz and Weingartner 1987, 2010; Cevolani and Festa
2020). Others have switched to a semantic conception of content,
construing semantic content in terms of classes of possibilities, and
searching for a plausible theory of distance between those.
One variant of this approach takes the class of models of a language as
a surrogate for possible states of affairs (Miller 1978a). Another
utilizes a semantics of incomplete possible states like those favored
by structuralist accounts of scientific theories (Kuipers 1987b,
Kuipers 2019). The idea which these accounts have in common is that
the distance between two propositions \(A\) and \(B\) is measured by
the *symmetric difference* \(A\mathbin{\Delta} B\) of the two
sets of possibilities: \((A - B)\cup(B - A)\). Roughly speaking, the
larger the symmetric difference, the greater the distance between the
two propositions. Symmetric differences might be compared
qualitatively - by means of set-theoretic inclusion - or
quantitatively, using some kind of probability measure. Both can be
shown to have the general features of a measure of distance.
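As a toy illustration of this idea (with invented world-labels), propositions can be modelled as finite sets of possibilities and compared via the symmetric differences they bear to the target proposition, either qualitatively by inclusion or quantitatively by counting:

```python
# Distance between propositions as the symmetric difference of their
# sets of possibilities (an illustrative sketch; worlds are just labels,
# and counting is a crude stand-in for a probability measure).
def sym_diff(a, b):
    return (a - b) | (b - a)

target = {"w1"}                  # the whole truth: only the actual world
weak = {"w1", "w2"}              # a weaker truth
weaker = {"w1", "w2", "w3"}      # weaker still

# Qualitative comparison by inclusion of symmetric differences:
print(sym_diff(weak, target) <= sym_diff(weaker, target))          # True
# Quantitative comparison by size:
print(len(sym_diff(weak, target)), len(sym_diff(weaker, target)))  # 1 2
```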
The fundamental problem with the content approach lies not in the way
it has been articulated, but rather in the basic underlying
assumption: that truthlikeness is a function of just two variables
- content and truth value. This assumption has several rather
problematic consequences.
Firstly, any given proposition \(A\) can have only two degrees of
verisimilitude: one in case it is false and the other in case it is
true. This is obviously wrong. A theory can be false in very many
different ways. The proposition that *there are eight planets*
is false whether there are nine planets or a thousand planets, but its
degree of truthlikeness is much higher in the first case than in the
latter. Secondly, if we combine the value of content for truths and
the value of content for falsehoods, then if we fix truth value,
verisimilitude will vary only according to amount of content. So, for
example, two equally strong false theories will have to have the same
degree of verisimilitude. That's pretty far-fetched. That
*there are ten planets* and that *there are ten billion
planets* are (roughly) equally strong, and both are false in fact,
but the latter seems much further from the truth than the former.
Finally, how might strength determine verisimilitude amongst false
theories? There seem to be just two plausible candidates: that
verisimilitude increases with increasing strength (the principle of
the value of content for falsehoods) or that it decreases with
increasing strength (the principle of the disvalue of content for
falsehoods). Both proposals are at odds with attractive judgements and
principles, which suggest that the original content approach is in
need of serious revision (see, e.g., Kuipers 2019 for a recent
proposal).
### 1.3 The Consequence Approach
Popper crafted his initial proposal in terms of the true and false
consequences of a theory. Any sentence at all that follows from a
theory is counted as a consequence that, if true, contributes to its
overall truthlikeness, and if false, detracts from that. But it has
struck many that this both involves an enormous amount of double
counting, and that it is the indiscriminate counting of arbitrary
consequences that lies behind the Tichy-Miller trivialization
result.
Consider a very simple framework with three primitive sentences: \(h\)
(for the state *hot*), \(r\) (for *rainy*) and \(w\)
(for *windy*). This framework generates a very small space of
eight possibilities. The eight maximal conjunctions (like \(h \amp r
\amp w, {\sim}h \amp r \amp w,\) etc.) of the three primitive
sentences and of their negations express those possibilities.
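The space is small enough to enumerate directly. Here is a minimal Python sketch of the framework; the encoding of worlds as triples of truth values is our own illustrative convention, not part of the formal apparatus:

```python
from itertools import product

# Each world is a triple of truth values for (hot, rainy, windy);
# the eight triples correspond to the eight maximal conjunctions.
worlds = list(product([True, False], repeat=3))

for h, r, w in worlds:
    print(" & ".join(lit if val else "~" + lit
                     for val, lit in zip((h, r, w), ("h", "r", "w"))))
```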
Suppose that in fact it is hot, rainy and windy (expressed by the
maximal conjunction \(h \amp r \amp w)\). Then the claim that it is
cold, dry and still (expressed by the sentence \({\sim}h \amp{\sim}r
\amp{\sim}w)\) is further from the truth than the claim that it is
cold, rainy and windy (expressed by the sentence \({\sim}h \amp r \amp
w)\). And the claim that it is cold, dry and windy (expressed by the
sentence \({\sim}h \amp{\sim}r \amp w)\) is somewhere between the two.
These kinds of judgements, which seem both innocent and intuitively
correct, Popper's theory cannot accommodate. And if they are to
be accommodated we cannot treat all true and false consequences alike.
For the three false claims mentioned here have exactly the same number
of true and false consequences (this is the problem we called *the
relative worthlessness of all falsehoods*).
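In this finite setting the point can be checked mechanically. The following Python sketch identifies propositions with sets of worlds and a proposition's consequences with the supersets of its range - a standard semantic reading, not Popper's own syntactic formulation - and shows that Popper's comparison leaves the two false claims incomparable:

```python
from itertools import product, combinations

worlds = list(product([True, False], repeat=3))      # (h, r, w) triples
actual = (True, True, True)                          # h & r & w

# Every nonempty set of worlds is a proposition; A entails B exactly when
# B's range includes A's, so A's consequences are the supersets of A's range.
propositions = [frozenset(c) for n in range(1, 9)
                for c in combinations(worlds, n)]

def consequences(a):
    return {p for p in propositions if p >= a}

def popper_closer(a, b):
    """A closer to the truth than B: more true and no more false consequences."""
    ta = {p for p in consequences(a) if actual in p}
    fa = consequences(a) - ta
    tb = {p for p in consequences(b) if actual in p}
    fb = consequences(b) - tb
    return ta >= tb and fa <= fb and (ta, fa) != (tb, fb)

cold_dry_still = frozenset({(False, False, False)})   # ~h & ~r & ~w
cold_rainy_windy = frozenset({(False, True, True)})   # ~h & r & w

# Intuitively the second is much closer to the truth, but Popper's
# comparison cannot rank the two falsehoods either way:
print(popper_closer(cold_rainy_windy, cold_dry_still))  # False
print(popper_closer(cold_dry_still, cold_rainy_windy))  # False
```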
Clearly, if we are going to measure closeness to truth by counting
true and false consequences, some true consequences should count more
than others. For example, \(h\) and \(r\) are both true, and
\({\sim}h\) and \({\sim}r\) are false. The former should surely count
in favor of a claim, and the latter against. But
\({\sim}h\rightarrow{\sim}r\) is true and \(h\rightarrow{\sim}r\) is
false. After we have counted the truth \(h\) in favor of a
claim's truthlikeness and the falsehood \({\sim}r\) against it,
should we also count the true consequence
\({\sim}h\rightarrow{\sim}r\) in favor, and the falsehood
\(h\rightarrow{\sim}r\) against? Surely this is both unnecessary and
misleading. And it is precisely counting sentences like these that
renders Popper's account susceptible to the Tichy-Miller
argument.
According to the consequence approach, Popper was right in thinking
that truthlikeness depends on the relative sizes of classes of true
and false consequences, but erred in thinking that all consequences of
a theory count the same. Some consequences are *relevant*, some
aren't. Let \(R\) be some criterion of relevance of
consequences; let \(A\_R\) be the set of *relevant* consequences
of \(A\). Whatever the criterion \(R\) is it has to satisfy the
constraint that \(A\) be recoverable from (and hence equivalent to)
\(A\_R\). Popper's account is the limiting one - all
consequences are relevant. (Popper's relevance criterion is the
empty one, \(P\), according to which \(A\_P\) is just \(A\) itself.)
The *relevant truth content of A* (abbreviated \(A\_R^T\)) can
be defined as \(A\_R\cap T\) (or \(A\cap T\_R\)), and similarly the
*relevant falsity content of* \(A\) can be defined as \(A\_R\cap
F\). Since \(A\_R = (A\_R\cap T)\cup(A\_R\cap F)\) it follows that the
union of true and false relevant consequences of \(A\) is equivalent
to \(A\). And where \(A\) is true \(A\_R\cap F\) is empty, so that \(A\) is
equivalent to \(A\_R\cap T\) alone.
With this restriction to relevant consequences we can basically apply
Popper's definitions: one theory is more truthlike than another
if its relevant truth content is larger and its relevant falsity
content no larger; or its relevant falsity content is smaller, and its
relevant truth content is no smaller.
This idea was first explored by Mortensen in his 1983, but he
abandoned the basic idea as unworkable. Subsequent proposals within
the broad program have been offered by Burger and Heidema 1994, Schurz
and Weingartner 1987 and 2010, and Gemes 2007. (Gerla 2007 also uses
the notion of the relevance of a "test" or factor, but his
account is best located more squarely within the likeness
approach.)
One possible relevance criterion that the \(h\)-\(r\)-\(w\) framework
might suggest is *atomicity*. This amounts to identifying
relevant consequences as *basic* ones, i.e., atomic sentences
or their negations (Cevolani, Crupi and Festa 2011; Cevolani, Festa
and Kuipers 2013). But even if we could avoid the problem of saying
what it is for a sentence to be atomic, since many distinct
propositions imply the same atomic sentences, this criterion would not
satisfy the requirement that \(A\) be equivalent to \(A\_R\). For
example, \((h\vee r)\) and \(({\sim}h\vee{\sim}r)\), like tautologies,
imply no atomic sentences at all. This latter problem can be solved by
resorting to the notion of *partial consequence*;
interestingly, the resulting account becomes virtually identical to
one version of the likeness approach (Cevolani and Festa 2020).
Burger and Heidema 1994 compare theories by positive and negative
sentences. A positive sentence is one that can be constructed out of
\(\amp,\) \(\vee\) and any true basic sentence. A negative sentence is
one that can be constructed out of \(\amp,\) \(\vee\) and any false
basic sentence. Call a sentence *pure* if it is either positive
or negative. If we take the relevance criterion to be *purity*,
and combine that with the relevant consequence schema above, we have
Burger and Heidema's proposal, which yields a reasonable set of
intuitive judgments. Unfortunately purity (like atomicity) does not
quite satisfy the constraint that \(A\) be equivalent to the class of
its relevant consequences. For example, if \(h\) and \(r\) are both
true then \(({\sim}h\vee r)\) and \((h\vee{\sim}r)\) both have the
same pure consequences (namely, none).
Schurz and Weingartner 2010 use the following notion of relevance
\(S\): being equivalent to a disjunction of atomic propositions or
their negations. With this criterion they can accommodate a range of
intuitive judgments in the simple weather framework that
Popper's account cannot.
For example, where \(\gt\_S\) is the relation of *greater
S-truthlikeness* we capture the following relations among false
claims, which, on Popper's account, are mostly
incommensurable:
\[ (h \amp{\sim}r) \gt\_S ({\sim}r) \gt\_S ({\sim}h \amp{\sim}r). \]
and
\[ (h\vee r) \gt\_S ({\sim}r) \gt\_S ({\sim}h\vee{\sim}r) \gt\_S ({\sim}h \amp{\sim}r). \]
The relevant consequence approach faces three major hurdles.
The first is an extension problem: the approach does produce some
intuitively acceptable results in a finite propositional framework,
but it needs to be extended to more realistic frameworks - for
example, first-order and higher-order frameworks (see Gemes 2007 for
an attempt along these lines).
The second is that, like Popper's original proposal, it judges
no false proposition to be closer to the truth than any truth,
including logical truths. Schurz and Weingartner (2010) have answered
this objection by quantitatively extending their qualitative account
by assigning weights to relevant consequences and summing; one problem
with this is that it assumes finite consequence classes.
The third involves the language-dependence of any adequate relevance
criterion. This problem will be outlined and discussed below in
connection with the likeness approach (§1.4.3).
### 1.4 The Likeness Approach
In the wake of the difficulties facing Popper's approach,
two philosophers, working quite independently, suggested a radically
different approach: one which takes the *likeness* in
truthlikeness seriously (Tichy 1974, Hilpinen 1976). This shift
from content to likeness was also marked by an immediate shift from
Popper's essentially syntactic approach (something it shares
with the consequence program) to a semantic approach.
Traditionally the semantic contents of sentences have been taken to be
non-linguistic, or rather non-syntactic, items -
*propositions*. What propositions are is highly contested, but
most agree that a proposition carves the class of possibilities into
two sub-classes - those in which the proposition is true and
those in which it is false. Call the class of worlds in which the
proposition is true its *range*. Some have proposed that
propositions be *identified* with their ranges (for example,
David Lewis, in his 1986). This is implausible since, for example, the
informative content of \(7+5=12\) seems distinct from the informative
content of \(12=12\), which in turn seems distinct from the
informative content of Godel's first incompleteness theorem
- and yet all three have the same range: they are all true in
all possible worlds. Clearly, if semantic content is supposed to be
sensitive to informative content, classes of possible worlds are
not discriminating enough. We need something more fine-grained for a
full theory of semantic content.
Despite this, the range of a proposition is certainly an important
aspect of informative content, and it is not immediately obvious why
truthlikeness should be sensitive to differences in the way a
proposition picks out its range. (Perhaps there are cases of logical
falsehoods some of which seem further from the truth than others. For
example \(7+5=113\) might be considered further from the truth than
\(7+5=13\) though both have the same range - namely, the empty
set of worlds; see Sorensen 2007.) But as a first approximation, we
will assume that it is not hyperintensional and that logically
equivalent propositions have the same degree of truthlikeness. The
proposition that *the number of planets is eight* for example,
should have the same degree of truthlikeness as the proposition that
*the square of the number of the planets is sixty four*.
Leaving aside the controversy over the nature of possible worlds, we
shall call the complete collection of possibilities, given some array
of features, the *logical space*, and call the array of
properties and relations which underlie that logical space, the
*framework* of the space. Familiar logical relations and
operations correspond to well-understood set-theoretic relations and
operations on ranges. The range of the conjunction of two propositions
is the intersection of the ranges of the two conjuncts. Entailment
corresponds to the subset relation on ranges. The actual world is a
single point in logical space - a complete specification of
every matter of fact (with respect to the framework of features)
- and a proposition is true if its range contains the actual
world, false otherwise. The whole Truth is a true proposition that is
also complete: it entails all true propositions. The range of the
Truth is none other than the singleton of the actual world. That
singleton is the target, the bullseye, the thing at which the most
comprehensive inquiry is aiming.
Without additional structure on the logical space we have just three
factors for a theorist of truthlikeness to work with - the size
of a proposition (content factor), whether it contains the actual
world (truth factor), and which propositions it implies (consequence
factor). The likeness approach requires some additional structure to
the logical space. For example, worlds might be more or less
*like* other worlds. There might be a betweenness relation
amongst worlds, or even a fully-fledged distance metric. If
that's the case, we can start to see how one proposition might
be closer to the Truth - the proposition whose range contains
just the actual world - than another. The core of the likeness
approach is that truthlikeness supervenes on the likeness between
worlds.
The likeness theorist has two tasks: firstly, making it plausible that
there is an appropriate likeness or distance function on worlds; and
secondly, extending likeness between individual worlds to likeness of
propositions (i.e., sets of worlds) to the actual world. Suppose, for
example, that worlds are arranged in similarity spheres nested around
the actual world, familiar from the Stalnaker-Lewis approach to
counterfactuals. Consider Diagram 5.
![Diagram 5](diagram5.png)
Diagram 5. Verisimilitude by similarity circles
The bullseye is the actual world and the small sphere which includes
it is \(T\), the Truth. The nested spheres represent likeness to the
actual world. A world is less like the actual world the larger the
first sphere of which it is a member. Propositions \(A\) and \(B\) are
false, \(C\) and \(D\) are true. \(A\) carves out a class of worlds which
are rather close to the actual world - all within spheres two to
four - whereas \(B\) carves out a class rather far from the
actual world - all within spheres five to seven. Intuitively
\(A\) is closer to the bullseye than is \(B\).
The largest sphere which does not overlap at all with a proposition is
plausibly a measure of how close the proposition is to being true.
Call that the *truth factor*. A proposition \(X\) is closer to
being true than \(Y\) if the truth factor of \(X\) is included in the
truth factor of \(Y\). The truth factor of \(A\), for example, is the
smallest non-empty sphere, \(T\) itself, whereas the truth factor of
\(B\) is the fourth sphere, of which \(T\) is a proper subset: so
\(A\) is closer to being true than \(B\).
If a proposition includes the bullseye, then of course it is true
simpliciter and it has the maximal truth factor (the empty set).
So all true propositions are equally close to being true. But
truthlikeness is not just a matter of being close to being true. The
tautology, \(D,\) \(C\) and the Truth itself are equally true, but in
that order they increase in their closeness to the whole truth.
Taking a leaf out of Popper's book, Hilpinen argued that
closeness to the whole truth is in part a matter of degree of
informativeness of a proposition. In the case of the true
propositions, this correlates roughly with the smallest sphere which
totally includes the proposition. The further out the outermost
sphere, the less informative the proposition is, because the larger
the area of the logical space which it covers. So, in a way which
echoes Popper's account, we could take truthlikeness to be a
combination of a truth factor (given by the likeness of that world in
the range of a proposition that is closest to the actual world) and a
content factor (given by the likeness of that world in the range of a
proposition that is furthest from the actual world):
>
> \(A\) is closer to the truth than \(B\) if and only if \(A\) does as
> well as \(B\) on both truth factor and content factor, and better on
> at least one of those.
>
Applying Hilpinen's definition we capture two more particular
judgements, in addition to those already mentioned, that seem
intuitively acceptable: that \(C\) is closer to the truth than \(A\),
and that \(D\) is closer than \(B\). (Note, however, that we have here
a partial ordering: \(A\) and \(D\), for example, are not ranked.) We
can derive from this various apparently desirable features of the
relation *closer to the truth*: for example, that the relation
is transitive, asymmetric and irreflexive; that the Truth is closer to
the Truth than any other theory; that the tautology is at least as far
from the Truth as any other truth; that one cannot make a true theory
worse by strengthening it by a truth (a weak version of the value of
content for truths); that a falsehood is not necessarily improved by
adding another falsehood, or even by adding another truth (a
repudiation of the value of content for falsehoods).
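Hilpinen's comparison is easy to make concrete in the little weather framework introduced earlier, using agreement-counting between worlds as a stand-in for the nested spheres (an illustrative choice of likeness ordering, not Hilpinen's own; the encodings and names are ours):

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)

def rank(world):
    """Sphere number of a world: how many basic states it gets wrong."""
    return sum(a != b for a, b in zip(world, actual))

def factors(formula):
    ranks = [rank(w) for w in worlds if formula(*w)]
    return min(ranks), max(ranks)   # (truth factor, content factor); lower is better

def closer(a, b):
    """Hilpinen: A does at least as well on both factors, and better on one."""
    (ta, ca), (tb, cb) = factors(a), factors(b)
    return ta <= tb and ca <= cb and (ta, ca) != (tb, cb)

the_truth = lambda h, r, w: h and r and w
true_strong = lambda h, r, w: h and r
tautology = lambda h, r, w: True
falsehood = lambda h, r, w: h and r and not w

print(closer(the_truth, true_strong))   # True: the Truth beats any other theory
print(closer(true_strong, tautology))   # True: a stronger truth beats the tautology
print(closer(falsehood, tautology))     # False: no falsehood beats any truth
```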
But there are also some worrying features here. While it avoids the
relative worthlessness of falsehoods, Hilpinen's account, just
like Popper's, entails the absolute worthlessness of all
falsehoods: no falsehood is closer to the truth than any truth. So,
for example, Newton's theory is deemed to be no more truthlike,
no closer to the whole truth, than the tautology.
Characterizing Hilpinen's account as a combination of a truth
factor and an information factor seems to mask its quite radical
departure from Popper's account. The incorporation of similarity
spheres signals a fundamental break with the pure content approach,
and opens up a range of possible new accounts: what such accounts have
in common is that the truthlikeness of a proposition is a
*non-trivial function of the likeness to the actual world of worlds
in the range of the proposition*.
There are three main problems for any concrete proposal within the
likeness approach. The first concerns an account of likeness between
states of affairs - in what does this consist and how can it be
analyzed or defined? The second concerns the dependence of the
truthlikeness of a proposition on the likeness of worlds in its range
to the actual world: what is the correct function? (This can be called
"the extension problem".) And finally, there is the famous
problem of "translation variance" or "framework
dependence" of judgements of likeness and of truthlikeness. This
last problem will be taken up in §1.4.3.
#### 1.4.1 Likeness of worlds in a simple propositional framework
One objection to Hilpinen's proposal (like Lewis's
proposal for counterfactuals) is that it assumes the similarity
relation on worlds as a primitive, there for the taking. At the end of
his 1974 paper, Tichy not only suggested the use of similarity
rankings on worlds, but also provided a ranking in propositional
frameworks and indicated how to generalize this to more complex
frameworks.
Examples and counterexamples in Tichy 1974 are exceedingly
simple, utilizing the little propositional framework introduced above,
with three primitives - \(h\) (for the state *hot*),
\(r\) (for *rainy*) and \(w\) (for *windy*).
Corresponding to the eight members of the logical space generated by
distributions of truth values through the three basic conditions,
there are eight maximal conjunctions (or constituents):
| | | | | |
| --- | --- | --- | --- | --- |
| \(w\_1\) | \(h \amp r \amp w\) | | \(w\_5\) | \({\sim}h \amp r \amp w\) |
| \(w\_2\) | \(h \amp r \amp{\sim}w\) | | \(w\_6\) | \({\sim}h \amp r \amp{\sim}w\) |
| \(w\_3\) | \(h \amp{\sim}r \amp w\) | | \(w\_7\) | \({\sim}h \amp{\sim}r \amp w\) |
| \(w\_4\) | \(h \amp{\sim}r \amp{\sim}w\) | | \(w\_8\) | \({\sim}h \amp{\sim}r \amp{\sim}w\) |
Worlds differ in the distributions of these traits, and a natural,
albeit simple, suggestion is to measure the likeness between two
worlds by the number of agreements on traits. This is tantamount to
taking distance to be measured by the size of the symmetric difference
of generating states - the so-called city-block measure. As is
well known, this will generate a genuine metric, in particular
satisfying the triangle inequality. If \(w\_1\) is the actual world
this immediately induces a system of nested spheres, but one in which
the spheres come with numbers attached:
![Diagram 6](diagram6.png)
Diagram 6. Similarity circles for the weather space
Those worlds orbiting on the sphere \(n\) are of distance \(n\) from
the actual world.
In fact, the structure of the space is better represented not by
similarity circles, but rather by a three-dimensional cube:
![Diagram 7](diagram7.png)
Diagram 7. The three-dimensional weather space
This way of representing the space makes a clearer connection between
distances between worlds and the role of the atomic propositions in
generating those distances through the city-block metric. It also
avoids the inaccuracies that the similarity-circle diagram suggests
about the relations between the worlds not at the center.
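The city-block distances behind Diagrams 6 and 7 can be computed directly. A short Python sketch (the tuple encoding of worlds is our own convention):

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)                       # w1: h & r & w

def distance(u, v):
    """City-block distance: the number of basic states on which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

# Group worlds into the numbered spheres of Diagram 6.
spheres = {}
for world in worlds:
    spheres.setdefault(distance(world, actual), []).append(world)

for n in sorted(spheres):
    print(n, len(spheres[n]), "world(s)")   # 1, 3, 3, 1 worlds at distances 0..3
```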
#### 1.4.2 The likeness of a proposition to the truth
Now that we have numerical distances between worlds, numerical
measures of propositional likeness to, and distance from, the truth
can be defined as some function of the distances, from the actual
world, of worlds in the range of a proposition. But which function is
the right one? This is the *extension problem*.
Suppose that \(h \amp r \amp w\) is the whole truth about the weather.
Following Hilpinen, we might consider overall distance of a
proposition from the truth to be some function of the distances from
actuality of two extreme worlds. Let *truth*\((A)\) be the
truth value of \(A\) in the actual world. Let *min*\((A)\) be
the distance from actuality of that world in \(A\) closest to the
actual world, and *max*\((A)\) be the distance from actuality
of that world in \(A\) furthest from the actual world. Table 1 displays
the values of the *min* and *max* functions for some
representative propositions.
| | | | |
| --- | --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***min*\((\boldsymbol{A})\)** | ***max*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 | 0 |
| \(h \amp r\) | true | 0 | 1 |
| \(h \amp r \amp{\sim}w\) | false | 1 | 1 |
| \(h\) | true | 0 | 2 |
| \(h \amp{\sim}r\) | false | 1 | 2 |
| \({\sim}h\) | false | 1 | 3 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2 | 2 |
| \({\sim}h \amp{\sim}r\) | false | 2 | 3 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3 | 3 |
Table 1. The *min* and
*max* functions.
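The table can be reproduced in a few lines of Python (propositions are encoded here as truth functions over our tuple-coded worlds, an illustrative convention):

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)
dist = lambda u: sum(a != b for a, b in zip(u, actual))

def min_max(formula):
    ds = [dist(w) for w in worlds if formula(*w)]
    return min(ds), max(ds)

print(min_max(lambda h, r, w: h and r))          # (0, 1)
print(min_max(lambda h, r, w: h))                # (0, 2)
print(min_max(lambda h, r, w: not h))            # (1, 3)
print(min_max(lambda h, r, w: not h and not r))  # (2, 3)
```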
The simplest proposal (made first in Niiniluoto 1977) would be to take
the average of the *min* and the *max* (call this
measure *min-max-average*). This would remedy a rather glaring
shortcoming which Hilpinen's qualitative proposal shares with
Popper's proposal, namely that no falsehood is closer to the
truth than any truth (even the worthless tautology). This numerical
equivalent of Hilpinen's proposal renders all propositions
comparable for truthlikeness, and some falsehoods it deems more
truthlike than some truths.
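A sketch of the *min-max-average* measure on the same toy encoding; note how it ranks the falsehood \(h \amp r \amp{\sim}w\) above the tautology, which no purely qualitative account so far considered could do:

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)
dist = lambda u: sum(a != b for a, b in zip(u, actual))

def min_max_average(formula):
    ds = [dist(w) for w in worlds if formula(*w)]
    return (min(ds) + max(ds)) / 2                # lower = closer to the truth

print(min_max_average(lambda h, r, w: h and r and not w))  # 1.0 (a falsehood)
print(min_max_average(lambda h, r, w: True))               # 1.5 (the tautology)
```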
But now that we have distances between all worlds, why take only the
extreme worlds in a proposition into account? Why shouldn't
every world in a proposition potentially count towards its overall
distance from the actual world?
A simple measure which does count all worlds is average distance from
the actual world. *Average* delivers all of the particular
judgements we used above to motivate Hilpinen's proposal in the
first place, and in conjunction with the simple metric on worlds it
delivers the following ordering of propositions in our simple
framework:
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***average*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 |
| \(h \amp r\) | true | 0.5 |
| \(h \amp r \amp{\sim}w\) | false | 1.0 |
| \(h\) | true | 1.3 |
| \(h \amp{\sim}r\) | false | 1.5 |
| \({\sim}h\) | false | 1.7 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2.0 |
| \({\sim}h \amp{\sim}r\) | false | 2.5 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3.0 |
Table 2. The *average*
function.
This ordering looks promising. Propositions are closer to the truth
the more they get the basic weather traits right, further away the
more mistakes they make. A false proposition may be made either worse
or better by strengthening (\(h \amp r \amp{\sim}w\) is better than
\({\sim}w\) while \({\sim}h \amp{\sim}r \amp{\sim}w\) is worse). A
false proposition (like \(h \amp r \amp{\sim}w\)) can be closer to the
truth than some true propositions (like \(h\)). And so on.
These judgments may be sufficient to show that *average* is
superior to *min-max-average*, at least on this group of
propositions, but they are clearly not sufficient to show that
averaging is the right procedure. What we need are some
straightforward and compelling general desiderata which jointly yield
a single correct function. In the absence of such a proof, we can only
resort to case by case comparisons.
Furthermore *average* has not found universal favor. Notably,
there are pairs of true propositions such that *average* deems
the stronger of the two to be the further from the truth. According to
*average*, the tautology is not the true proposition furthest
from the truth. Averaging thus violates the Popperian principle of the
value of content for truths; see Table 3 for an example.
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***average*\((\boldsymbol{A})\)** |
| \(h \vee{\sim}r \vee w\) | true | 1.4 |
| \(h \vee{\sim}r\) | true | 1.5 |
| \(h \vee{\sim}h\) | true | 1.5 |
Table 3. *average* violates the
value of content for truths.
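Reading *average* as the plain mean of the distances of the worlds in a proposition's range (the reading that fits Table 3), the violation is easy to check with the same toy encoding:

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)
dist = lambda u: sum(a != b for a, b in zip(u, actual))

def average(formula):
    ds = [dist(w) for w in worlds if formula(*w)]
    return sum(ds) / len(ds)

weaker = lambda h, r, w: h or (not r) or w    # h v ~r v w
stronger = lambda h, r, w: h or (not r)       # h v ~r, which entails the weaker claim

print(round(average(weaker), 1))    # 1.4
print(round(average(stronger), 1))  # 1.5: the stronger truth scores worse
```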
Let's then consider other measures, like the *sum*
function - the sum of the distances of worlds in the range of a
proposition from the actual world (Table 4).
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***sum*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 |
| \(h \amp r\) | true | 1 |
| \(h \amp r \amp{\sim}w\) | false | 1 |
| \(h\) | true | 4 |
| \(h \amp{\sim}r\) | false | 3 |
| \({\sim}h\) | false | 8 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2 |
| \({\sim}h \amp{\sim}r\) | false | 5 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3 |
Table 4. The *sum* function.
The *sum* function is an interesting measure in its own right.
While, like *average*, it is sensitive to the distances of
all worlds in a proposition from the actual world, it is not plausible
as a measure of distance from the truth, and indeed no one has
proposed it as such a measure. What *sum* does measure is a
certain kind of distance-weighted *logical weakness*. In
general the weaker a proposition is, the larger its *sum*
value. But adding worlds far from the actual world makes the
*sum* value larger than adding worlds closer to it. This
guarantees, for example, that of two truths the *sum* of the
logically weaker is always greater than the *sum* of the
stronger. Thus *sum* might play a role in capturing the value
of content for truths. But it also delivers the implausible value of
content for falsehoods. If you think that there is anything to the
likeness program it is hardly plausible that the falsehood \({\sim}h
\amp{\sim}r \amp{\sim}w\) is closer to the truth than its consequence
\({\sim}h\). Niiniluoto argues that *sum* is a good
likeness-based candidate for measuring Hilpinen's
"information factor". It is obviously much more sensitive
than is *max* to the proposition's informativeness about
the location of the truth.
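The *sum* function on the same toy encoding: it ranks the weaker of two truths lower (a larger sum), but it also rewards strengthening a falsehood:

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)
dist = lambda u: sum(a != b for a, b in zip(u, actual))

def total(formula):
    """The sum measure: total distance from actuality over a proposition's range."""
    return sum(dist(w) for w in worlds if formula(*w))

print(total(lambda h, r, w: h and r))        # 1: a strong truth
print(total(lambda h, r, w: h))              # 4: its weaker consequence scores worse
print(total(lambda h, r, w: not h))                  # 8
print(total(lambda h, r, w: not (h or r or w)))      # 3: the strengthened falsehood
                                                     #    scores better than ~h
```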
Niiniluoto thus proposes, as a measure of distance from the truth, the
average of this information factor and Hilpinen's truth factor:
*min-sum-average*. Averaging the more sensitive information
factor (*sum*) and the closeness-to-being-true factor
(*min*) yields some interesting results (see Table 5).
| | | |
| --- | --- | --- |
| \(\boldsymbol{A}\) | ***truth*\((\boldsymbol{A})\)** | ***min-sum-average*\((\boldsymbol{A})\)** |
| \(h \amp r \amp w\) | true | 0 |
| \(h \amp r\) | true | 0.5 |
| \(h \amp r \amp{\sim}w\) | false | 1 |
| \(h\) | true | 2 |
| \(h \amp{\sim}r\) | false | 2 |
| \({\sim}h\) | false | 4.5 |
| \({\sim}h \amp{\sim}r \amp w\) | false | 2 |
| \({\sim}h \amp{\sim}r\) | false | 3.5 |
| \({\sim}h \amp{\sim}r \amp{\sim}w\) | false | 3 |
Table 5. The *min-sum-average*
function.
For example, this measure deems \(h \amp r \amp w\) more truthlike
than \(h \amp r\), and the latter more truthlike than \(h\). And in
general *min-sum-average* delivers the value of content for
truths. For any two truths the *min* factor is the same (0),
and the *sum* factor increases as content decreases.
Furthermore, unlike the symmetric difference measures,
*min-sum-average* doesn't deliver the objectionable value
of content for falsehoods. For example, \({\sim}h \amp{\sim}r
\amp{\sim}w\) is deemed further from the truth than \({\sim}h\). But
*min-sum-average* is not quite home free, at least from an
intuitive point of view. For example, \({\sim}h \amp{\sim}r
\amp{\sim}w\) is deemed closer to the truth than \({\sim}h
\amp{\sim}r\). This is because what \({\sim}h \amp{\sim}r
\amp{\sim}w\) loses in closeness to the actual world (*min*) it
makes up for by an increase in strength (*sum*).
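Niiniluoto's *min-sum-average* on the same toy encoding reproduces both Table 5 and the anomaly just noted:

```python
from itertools import product

worlds = list(product([True, False], repeat=3))   # (h, r, w) triples
actual = (True, True, True)
dist = lambda u: sum(a != b for a, b in zip(u, actual))

def min_sum_average(formula):
    ds = [dist(w) for w in worlds if formula(*w)]
    return (min(ds) + sum(ds)) / 2                # average of truth and content factors

print(min_sum_average(lambda h, r, w: not h))              # 4.5
print(min_sum_average(lambda h, r, w: not h and not r))    # 3.5
print(min_sum_average(lambda h, r, w: not (h or r or w)))  # 3.0: the stronger
                                                           #      falsehood scores better
```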
In deciding how to proceed here we confront a methodological problem.
The methodology favored by Tichy is very much bottom-up. For
the purposes of deciding between rival accounts it takes the intuitive
data very seriously. Popper and Popperians like Miller favor
a more top-down approach. They are suspicious of folk intuitions, and
sometimes appear to be in the business of constructing a new concept
rather than explicating an existing one. They place enormous weight on
certain plausible general principles, largely those that fit in with
other principles of their overall theory of science: for example, the
principle that strength is a virtue and that the stronger of two true
theories (and maybe even of two false theories) is the closer to the
truth. A third approach, one which lies between these two extremes, is
that of reflective equilibrium. This recognizes the claims of both
intuitive judgements on low-level cases, and plausible high-level
principles, and enjoins us to bring principle and judgement into
equilibrium, possibly by tinkering with both. Neither intuitive
low-level judgements nor plausible high-level principles are given
advance priority. The protagonist in the truthlikeness debate who has
argued most consistently for this approach is Niiniluoto.
How might reflective equilibrium be employed to help resolve the
current dispute? Consider a different space of possibilities,
generated by a single magnitude like the number of the planets
\((N).\) Suppose that \(N\) is in fact 8 and that the further \(n\) is
from 8, the further the proposition that \(N=n\) is from the Truth.
Consider the three sets of propositions in Table 6. In the left-hand
column we have a sequence of false propositions which, intuitively,
decrease in truthlikeness while increasing in strength. In the middle
column we have a sequence of corresponding true propositions, in each
case the strongest true consequence of its false counterpart on the
left (Popper's "truth content"). Again members of
this sequence steadily increase in strength. Finally on the right we
have another column of falsehoods. These are also steadily increasing
in strength, and like the left-hand falsehoods, seem (intuitively) to
be decreasing in truthlikeness as well.
| | | |
| --- | --- | --- |
| **Falsehood (1)** | **Strongest True Consequence** | **Falsehood (2)** |
| \(10 \le N \le 20\) | \(N=8\) or \(10 \le N \le 20\) | \(N=9\) or \(10 \le N \le 20\) |
| \(11 \le N \le 20\) | \(N=8\) or \(11 \le N \le 20\) | \(N=9\) or \(11 \le N \le 20\) |
| ...... | ...... | ...... |
| \(19 \le N \le 20\) | \(N=8\) or \(19 \le N \le 20\) | \(N=9\) or \(19 \le N \le 20\) |
| \(N= 20\) | \(N=8\) or \(N= 20\) | \(N=9\) or \(N= 20\) |
Table 6.
Judgements about the closeness of the true propositions in the center
column to the truth may be less intuitively clear than are judgments
about their left-hand counterparts. However, it would seem highly
incongruous to judge the truths in Table 6 to be steadily increasing
in truthlikeness, while the falsehoods both to the left and the right,
each only marginally different in overall likeness to the
truth, steadily decrease in truthlikeness. This suggests that all
three are sequences of steadily increasing strength combined with
steadily *decreasing* truthlikeness. And if that's right,
it might be enough to overturn Popper's principle that amongst
true theories strength and truthlikeness must covary (even while
granting that this is not so for falsehoods).
If this argument is sound, it removes an objection to averaging
distances, but it does not settle the issue in its favor, for there
may still be other more plausible counterexamples to averaging that we
have not considered.
Schurz and Weingartner argue that this extension problem is the main
defect of the likeness approach:
>
> the problem of extending truthlikeness from possible worlds to
> propositions is intuitively underdetermined. Even if we are granted an
> ordering or a measure of distance on worlds, there are many very
> different ways of extending that to propositional distance, and
> apparently no objective way to decide between them. (Schurz and
> Weingartner 2010, 423)
>
One way of answering this objection head on is to identify principles
that, given a distance function on worlds, constrain the distances
between worlds and sets of worlds, principles perhaps powerful enough
to identify a unique extension.
Apart from the extension problem, two other issues affect the likeness
approach. The first is how to apply it beyond simple propositional
examples such as the ones considered above (Popper's content
approach, whatever else its shortcomings, can be applied in principle
to theories expressible in any language, no matter how sophisticated).
We discuss this in the supplement on
Extending the likeness approach to first-order and higher-order frameworks.
The second has to do with the fact that assessments of relative
likeness are sensitive to how the framework underlying the logical
space is defined. This "framework dependence" issue is
discussed in the next section.
#### 1.4.3 The framework dependence of likeness
The single most powerful and influential argument against the whole
likeness approach is the charge that it is "language
dependent" or "framework dependent" (Miller 1974a,
1974b, 1976, and most recently defended, vigorously as usual, in his
2006). Early formulations of the likeness approach (Tichy 1974,
1976, Niiniluoto 1976) proceeded in terms of syntactic surrogates for
their semantic correlates - sentences for propositions,
predicates for properties, constituents for partitions of the logical
space, and the like. The question naturally arises, then, whether we
obtain the same measures if all the syntactic items are translated
into an essentially equivalent language - one capable of
expressing the same propositions and properties with a different set
of primitive predicates. Newton's theory can be formulated with
a variety of different primitive concepts, but these formulations are
typically taken to be equivalent. If the degree of truthlikeness of
Newton's theory were to vary from one such formulation to
another, then while such a concept might still have useful
applications, it would hardly help to vindicate realism.
Take our simple weather-framework above. This traffics in three
primitives - *hot*, *rainy*, and *windy*.
Suppose, however, that we define the following two new weather
conditions:
>
> *minnesotan* \( =\_{df}\) *hot* if and only if
> *rainy*
>
> *arizonan* \( =\_{df}\) *hot* if and only if
> *windy*
>
Now it appears as though we can describe the same sets of weather
states in the new \(h\)-\(m\)-\(a\)-ese language based on the above
conditions. Table 7 shows the translations of four representative
theories between the two languages.
| | | |
| --- | --- | --- |
| | *\(h\)-\(r\)-\(w\)-ese* | *\(h\)-\(m\)-\(a\)-ese* |
| \(T\) | \(h \amp r \amp w\) | \(h \amp m \amp a\) |
| \(A\) | \({\sim}h \amp r \amp w\) | \({\sim}h \amp{\sim}m \amp{\sim}a\) |
| \(B\) | \({\sim}h \amp{\sim}r \amp w\) | \({\sim}h \amp m \amp{\sim}a\) |
| \(C\) | \({\sim}h \amp{\sim}r \amp{\sim}w\) | \({\sim}h \amp m \amp a\) |
Table 7.
If \(T\) is the truth about the weather then theory \(A\), in
\(h\)-\(r\)-\(w\)-ese, seems to make just one error concerning the
original weather states, while \(B\) makes two and \(C\) makes three.
However, if we express these theories in \(h\)-\(m\)-\(a\)-ese,
this ordering is reversed: \(A\) appears to make three errors, \(B\)
still makes two, and \(C\) makes only one. But that means
the account makes truthlikeness, unlike truth, radically
language-relative.
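The reversal recorded in Table 7 can be checked mechanically. The following sketch (an illustration, not part of the original argument) encodes weather states as 0/1 triples, counts atomic errors by Hamming distance, and translates states into \(h\)-\(m\)-\(a\)-ese using the biconditional definitions above:

```python
def hamming(x, y):
    """Number of atomic conditions on which two states disagree."""
    return sum(a != b for a, b in zip(x, y))

def to_hma(state):
    """Translate an (h, r, w) state into (h, m, a), where
    m = (h iff r) and a = (h iff w)."""
    h, r, w = state
    return (h, int(h == r), int(h == w))

# The four theories of Table 7, as (h, r, w) states (1 = true, 0 = false).
T, A, B, C = (1, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0)

errors_hrw = {name: hamming(s, T) for name, s in [("A", A), ("B", B), ("C", C)]}
errors_hma = {name: hamming(to_hma(s), to_hma(T))
              for name, s in [("A", A), ("B", B), ("C", C)]}

print(errors_hrw)  # {'A': 1, 'B': 2, 'C': 3}
print(errors_hma)  # {'A': 3, 'B': 2, 'C': 1}
```

The error counts in the two languages come out exactly reversed, just as the table indicates.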
There are two live responses to this criticism. But before detailing
them, note a dead one: the likeness theorist cannot object that
\(h\)-\(m\)-\(a\) is somehow logically inferior to \(h\)-\(r\)-\(w\),
on the grounds that the primitives of the latter are essentially
"biconditional" whereas the primitives of the former are
not. This is because there is a perfect symmetry between the two sets
of primitives. Starting within \(h\)-\(m\)-\(a\)-ese we can arrive at
the original primitives by exactly analogous definitions:
>
> *rainy* \( =\_{df}\) *hot* if and only if
> *minnesotan*
>
>
> *windy* \( =\_{df}\) *hot* if and only if
> *arizonan*
>
Thus if we are going to object to \(h\)-\(m\)-\(a\)-ese it will have
to be on other than purely logical grounds.
Firstly, then, the likeness theorist could maintain that certain
predicates (presumably "hot", "rainy" and
"windy") are primitive in some absolute, realist, sense.
Such predicates "carve reality at the joints" whereas
others (like "minnesotan" and "arizonan") are
gerrymandered affairs. With the demise of predicate nominalism as a
viable account of properties and relations this approach is not as
unattractive as it might have seemed in the middle of the last
century. Realism about universals is certainly on the rise. While this
version of realism presupposes a sparse theory of properties -
that is to say, it is not the case that to every definable predicate
there corresponds a genuine universal - such theories have been
championed both by those doing traditional a priori metaphysics of
properties (e.g. Bealer 1982) as well as those who favor a more
empiricist, scientifically informed approach (e.g. Armstrong 1978,
Tooley 1977). According to Armstrong, for example, which predicates
pick out genuine universals is a matter for developed science. The
primitive predicates of our best fundamental physical theory will give
us our best guess at what the genuine universals in nature are. They
might be predicates like electron or mass, or more likely something
even more abstruse and remote from the phenomena - like the
primitives of String Theory.
One apparently cogent objection to this realist solution is that it
would render the task of empirically estimating degree of
truthlikeness completely hopeless. If we know a priori which
primitives should be used in the computation of distances between
theories it will be difficult to estimate truthlikeness, but not
impossible. For example, we might compute the distance of a theory
from the various possibilities for the truth, and then make a weighted
average, weighting each possible true theory by its probability on the
evidence. That would be the credence-mean estimate of truthlikeness
(see §2). However, if we don't even know which features
should count towards the computation of similarities and distances
then it appears that we cannot get off first base.
To see this consider our simple weather frameworks. Suppose that all I
learn is that it is rainy. Do I thereby have some grounds for thinking
\(A\) is closer to the truth than \(B\)? I would if I also knew that
\(h\)-\(r\)-\(w\)-ese is the language for calculating distances. For
then, whatever the truth is, \(A\) makes one fewer mistake than \(B\)
makes. \(A\) gets it right on the rain factor, while \(B\)
doesn't, and they must score the same on the other two factors
whatever the truth of the matter. But if we switch to
\(h\)-\(m\)-\(a\)-ese then \(A\)'s epistemic superiority is no
longer guaranteed. If, for example, \(T\) is the truth then \(B\) will
be closer to the truth than \(A\). That's because in the
\(h\)-\(m\)-\(a\) framework raininess as such doesn't count in
favor or against the truthlikeness of a proposition.
This objection would fail if there were empirical indicators not just
of which atomic states obtain, but also of which are the genuine ones,
the ones that really carve reality at the joints. Obviously the
framework would have to contain more than just \(h, m\) and \(a\). It
would have to contain resources for describing the states that
indicate whether these were genuine universals. Maybe whether they
enter into genuine causal relations will be crucial, for example. Once
we can distribute probabilities over the candidates for the real
universals, then we can use those probabilities to weight the various
possible distances which a hypothesis might be from any given
theory.
The second live response is both more modest and more radical. It is
more modest in that it is not hostage to the objective priority of a
particular conceptual scheme, whether that priority is accessed a
*priori* or *a posteriori*. It is more radical in that
it denies a premise of the invariance argument that at first blush is
apparently obvious. It denies the equivalence of the two conceptual
schemes. It denies that \(h \amp r \amp w\), for example, expresses
the very same proposition as \(h \amp m \amp a\) expresses. If we deny
translatability then we can grant the invariance principle, and grant
the judgements of distance in both cases, but remain untroubled. There
is no contradiction (Tichy 1978).
At first blush this response seems somewhat desperate. Haven't
the respective conditions been *defined* in such a way that
they are simple equivalents by fiat? That would, of course, be the
case if \(m\) and \(a\) had been introduced as defined terms into
\(h\)-\(r\)-\(w\). But if that were the intention then the likeness
theorist could retort that the calculation of distances should proceed
in terms of the primitives, not the introduced terms. However, that is
not the only way the argument can be read. We are asked to contemplate
two partially overlapping sequences of conditions, and two spaces of
possibilities generated by those two sequences. We can thus think of
each possibility as a point in a simple three dimensional space. These
points are ordered triples of 0s and 1s, the \(n\)th entry being 0 if
the \(n\)th condition is satisfied and 1 if it isn't. Thinking
of possibilities in this way, we already have rudimentary geometrical
features generated simply by the selection of generating conditions.
Points are adjacent if they differ on only one dimension. A path is a
sequence of adjacent points. A point \(q\) is between two points \(p\)
and \(r\) if \(q\) lies on a shortest path from \(p\) to \(r\). A
region of possibility space is convex if it is closed under the
betweenness relation - anything between two points in the region
is also in the region (Oddie 1987, Goldstick and O'Neill
1988).
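These geometric notions are easy to make precise. Here is a minimal sketch, assuming the 0/1-triple encoding of possibilities just described; the betweenness test uses the standard fact that a point lies on a shortest path in the Hamming cube exactly when the distances add up:

```python
from itertools import product

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def adjacent(p, q):
    """Points are adjacent if they differ on exactly one dimension."""
    return hamming(p, q) == 1

def between(q, p, r):
    """q lies on a shortest path from p to r iff distances add up."""
    return hamming(p, q) + hamming(q, r) == hamming(p, r)

def convex(region):
    """A region is convex if it contains everything between its points."""
    space = list(product([0, 1], repeat=3))
    return all(q in region
               for p in region for r in region
               for q in space if between(q, p, r))

assert convex({(1, 1, 1), (0, 1, 1)})      # a region fixed by two atoms
assert not convex({(1, 1, 1), (0, 0, 1)})  # omits (0,1,1) and (1,0,1)
```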
Evidently we have two spaces of possibilities, S1 and S2, and the
question now arises whether a sentence interpreted over one of these
spaces expresses the very same thing as any sentence interpreted over
the other. Does \(h \amp r \amp w\) express the same thing as \(h \amp
m \amp a\)? \(h \amp r \amp w\) expresses (the singleton of) \(u\_1\)
(which is the entity \(\langle 1,1,1\rangle\) in S1, or \(\langle
1,1,1\rangle\_{S1}\)), and \(h \amp m \amp a\) expresses \(v\_1\) (the
entity \(\langle 1,1,1\rangle\_{S2}\)). \({\sim}h \amp r \amp w\) expresses
\(u\_2\) \((\langle 0,1,1\rangle\_{S1})\), a point adjacent to that
expressed by \(h \amp r \amp w\). However, \({\sim}h \amp{\sim}m
\amp{\sim}a\) expresses \(v\_8\) \((\langle 0,0,0\rangle\_{S2})\), which is
not adjacent to \(v\_1\) \((\langle 1,1,1\rangle\_{S2})\). So now we can
construct a simple proof that the two sentences do not express the
same thing.
* \(u\_1\) is adjacent to \(u\_2\).
* \(v\_1\) is not adjacent to \(v\_8\).
* *Therefore*, either \(u\_1\) is not identical to \(v\_1\), or
\(u\_2\) is not identical to \(v\_8\).
* *Therefore*, either \(h \amp r \amp w\) and \(h \amp m \amp
a\) do not express the same proposition, or \({\sim}h \amp r \amp w\)
and \({\sim}h \amp{\sim}m \amp{\sim}a\) do not express the same
proposition.
Thus at least one of the two required intertranslatability claims
fails, and \(h\)-\(r\)-\(w\)-ese is not intertranslatable with
\(h\)-\(m\)-\(a\)-ese. The important point here is that a space of
possibilities already comes with a structure, and the points in such a
space cannot be individuated without reference to the rest of the space
and its structure. The identity of a possibility is bound up with its
geometrical relations to other possibilities. Different relations,
different possibilities. This kind of response has also been endorsed
in the very different truth-maker proposal put forward in Fine (2021,
2022).
This kind of rebuttal to the Miller argument would have radical
implications for the comparability of actual theories that appear to
be constructed from quite different sets of primitives. Classical
mechanics can be formulated using mass and position as basic, or it
can be formulated using mass and momentum. The classical concepts of
velocity and of mass are different from their relativistic
counterparts, even if they were "intertranslatable" in the
way that the concepts of \(h\)-\(r\)-\(w\)-ese are intertranslatable
with \(h\)-\(m\)-\(a\)-ese.
This idea meshes well with recent work on conceptual spaces in
Gardenfors (2000). Gardenfors is concerned both with the
semantics and the nature of genuine properties, and his bold and
simple hypothesis is that properties carve out convex regions of an
\(n\)-dimensional quality space. He supports this hypothesis with an
impressive array of logical, linguistic and empirical data. (Looking
back at our little spaces above it is not hard to see that the convex
regions are those that correspond to the generating (or atomic)
conditions and conjunctions of those. See Burger and Heidema 1994.)
While Gardenfors is dealing with properties, it is not hard to
see that similar considerations apply to propositions, since
propositions can be regarded as 0-ary properties.
Ultimately, however, this response may seem less than entirely
satisfactory by itself. If the choice of a conceptual space is merely
a matter of taste then we may be forced to embrace a radical kind of
incommensurability. Those who talk \(h\)-\(r\)-\(w\)-ese and
conjecture \({\sim}h \amp r \amp w\) on the basis of the available
evidence will be close to the truth. Those who talk
\(h\)-\(m\)-\(a\)-ese while exposed to the "same"
circumstances would presumably conjecture \({\sim}h \amp{\sim}m
\amp{\sim}a\) on the basis of the "same" evidence (or the
corresponding evidence that they gather). If in fact \(h \amp r \amp
w\) is the truth (in \(h\)-\(r\)-\(w\)-ese) then the \(h\)-\(r\)-\(w\)
weather researchers will be close to the truth. But the
\(h\)-\(m\)-\(a\) researchers will be very far from the truth.
This may not be an explicit contradiction, but it should be worrying
all the same. Realists started out with the ambition of defending a
concept of truthlikeness which would enable them to embrace both
fallibilism and optimism. But what the likeness theorists seem to have
ended up with here is something that suggests a rather unpalatable
incommensurability of competing conceptual frameworks. To avoid this,
the realist will need to affirm that some conceptual frameworks really
are better than others. Some really do "carve reality at the
joints" and others don't. But is that something the
realist should be reluctant to affirm?
## 2. The Epistemological Problem
The quest to nail down a viable concept of truthlikeness is motivated,
at least in part, by fallibilism (§1.1). It is certainly true
that a viable notion of distance from the truth
renders progress in an inquiry through a succession of false
theories at least possible. It is also true that if there is no such
viable notion, then truth can be retained as the goal of inquiry only
at the cost of making partial progress towards it virtually
impossible. But does the mere *possibility* of making progress
towards the truth improve our epistemic lot? Some have argued that it
doesn't (see for example Laudan 1977, Cohen 1980, Newton-Smith
1981). One common argument can be recast in the form of a simple
dilemma. Either we can ascertain the truth, or we can't. If we
can ascertain the truth then we do not need a concept of truthlikeness
- it is an entirely useless addition to our intellectual
repertoire. But if we cannot ascertain the truth, then we cannot
ascertain the degree of truthlikeness of our theories either. So
again, the concept is useless for all practical purposes. (See the
entry on
scientific progress,
especially §2.4.)
Consider the second horn of this dilemma. Is it true that if we
can't know what the (whole) truth of some matter is, we also
cannot ascertain whether or not we are making progress towards it?
Suppose you are interested in the truth about the weather tomorrow.
Suppose you learn (from a highly reliable source) that it will be hot.
Even though you don't know the *whole* truth about the
weather tomorrow, you do know that you have added a truth to your
existing corpus of weather beliefs. One does not need to be able to
ascertain the whole truth to ascertain some less-encompassing truths.
And it seems to follow that you can also know you have made at least
some progress towards the whole weather truth.
This rebuttal is too swift. It presupposes that the addition of a new
truth \(A\) to an existing corpus \(K\) guarantees that your revised
belief \(K\)\*\(A\) constitutes progress towards the truth. But whether
or not \(K\)\*\(A\) is closer to the truth than \(K\) depends not only
on a theory of truthlikeness but also on a theory of belief revision.
(See also the entry on the
logic of belief revision.)
Let's consider a simple case. Suppose \(A\) is some newly
discovered truth, and that \(A\) is compatible with \(K\). Assume that
belief revision in such cases is simply a matter of so-called
*expansion* - i.e., conjoining \(A\) to \(K\). Consider
the case in which \(K\) also happens to be true. Then any account of
truthlikeness that endorses the value of content for truths (e.g.
Niiniluoto's *min-sum-average*) guarantees that
\(K\)\*\(A\) is closer to the truth than \(K\). That's a welcome
result, but it has rather limited application. Typically one
doesn't know that \(K\) is true: so even if one knows that \(A\)
is true, one cannot use this fact to celebrate progress.
The situation is more dire when it comes to falsehoods. If \(K\) is in
fact false then, without the disastrous principle of the value of
content for falsehoods, there is certainly no guarantee that
\(K\)\*\(A\) will constitute a step toward the truth. (And even if one
endorsed the disastrous principle one would hardly be better off. For
then the addition of any proposition, whether true or false, would
constitute an improvement on a false theory.) Consider again the
number of the planets, \(N\). Suppose that the truth is \(N=8\), and
that your existing corpus \(K\) is (\(N=7 \vee N=100)\). Suppose you
somehow acquire the truth \(A\): \(N\gt 7\). Then \(K\)\*\(A\) is
\(N=100\), which (on *average*, *min-max-average* and
*min-sum-average*) is further from the truth than \(K\). So
revising a false theory by adding truths by no means guarantees
progress towards the truth.
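The planets example can be verified with a simple calculation. The sketch below is an illustrative rendering, with truthlikeness measured here by negated average distance \(|n - 8|\) (the *average* proposal) and \(N\) restricted to a finite range for convenience:

```python
truth = 8

def avg_distance(prop):
    """Average distance of a proposition's worlds from the truth."""
    return sum(abs(n - truth) for n in prop) / len(prop)

K = {7, 100}            # K: N = 7 or N = 100 (a false theory)
A = set(range(8, 101))  # A: N > 7 (true; finite range is an assumption)
K_star_A = K & A        # expansion: conjoin the new truth A to K

print(K_star_A)                 # {100}
print(avg_distance(K))          # 46.5
print(avg_distance(K_star_A))   # 92.0 -- further from the truth than K
```

Revising the false \(K\) with the truth \(A\) thus yields a theory strictly further from the truth, as claimed.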
For theories that reject the value of content for truths (e.g., the
*average* proposal) the situation is worse still. Even if \(K\)
happens to be true, there is no guarantee that expanding \(K\) with
truths will constitute progress. Of course, there will be certain
general conditions under which the value of content for truths holds.
For example, on the *average* proposal, the expansion of a true
\(K\) by an atomic truth (or, more generally, by a convex truth) will
guarantee progress toward the truth.
So under very special conditions, one can know that the acquisition of
a truth will enhance the overall truthlikeness of one's
theories, but these conditions are exceptionally narrow and provide at
best a very weak defense against the dilemma. (See Niiniluoto 2011.
For rather more optimistic views of the relation between truthlikeness
and belief revision see Kuipers 2000, Lavalette, Renardel & Zwart
2011, Cevolani, Crupi and Festa 2011, and Cevolani, Festa and Kuipers
2013.)
A different tack is to deny that a concept is useless if there is no
effective empirical decision procedure for ascertaining whether it
applies. For even if we cannot know for sure what the value of a
certain unobservable magnitude is, we might well have better or worse
*estimates* of the value of the magnitude on the evidence. And
that may be all we need for the concept to be of practical value.
Consider, for example, the propensity of a certain coin-tossing set-up
to produce heads - a magnitude which, for the sake of the
example, we assume to be not directly observable. Any non-extreme
value of this magnitude is compatible with any number of heads in a
sequence of \(n\) tosses. So we can never know with certainty what the
actual propensity is, no matter how many tosses we observe. But we can
certainly make rational estimates of the propensity on the basis of
the accumulating evidence. Suppose one's initial state of
ignorance of the propensity is represented by an even distribution of
credences over the space of possibilities for the propensity (i.e.,
the unit interval). Using Bayes theorem and the Principal Principle,
after a fairly small number of tosses we can become quite confident
that the propensity lies in a small interval around the observed
relative frequency. Our *best estimate* of the value of the
magnitude is its *expected value* on the evidence.
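For illustration, here is one way such an estimate might be computed. The discretization of the unit interval and the evidence figures are assumptions of the example, not part of the text:

```python
# A uniform prior over candidate propensities, updated by Bayes' theorem.
candidates = [i / 100 for i in range(1, 100)]   # propensities 0.01 .. 0.99
prior = {p: 1 / len(candidates) for p in candidates}

def posterior(prior, heads, tails):
    """Update credences on observing h heads and t tails
    (Bernoulli likelihood, then renormalize)."""
    unnorm = {p: cr * p**heads * (1 - p)**tails for p, cr in prior.items()}
    z = sum(unnorm.values())
    return {p: v / z for p, v in unnorm.items()}

post = posterior(prior, heads=70, tails=30)
estimate = sum(p * cr for p, cr in post.items())   # expected propensity
print(round(estimate, 3))   # close to the observed frequency 0.7
```

After 100 tosses the credence is concentrated in a small interval around the relative frequency, and the expected value serves as the best estimate of the propensity.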
Similarly, suppose we don't and perhaps cannot know which
constituent is in fact true. But suppose that we do have a good
measure of distance between constituents (or the elements of some
salient partition of the space) \(C\_i\) and we have selected the right
extension function. So we have a measure \(TL(A\mid C\_i)\) of the
truthlikeness of a proposition \(A\) given that constituent \(C\_i\) is
true. Provided we also have a measure of epistemic probability \(P\)
(where \(P(C\_i\mid e)\) is the degree of rational credence in \(C\_i\)
given evidence \(e)\) we also have a measure of the *expected*
degree of truthlikeness of \(A\) on the evidence (call this
\(\mathbf{E}TL(A\mid e))\) which we can identify with the best
epistemic estimate of truthlikeness. (Niiniluoto, who first explored
this concept in his 1977, calls the epistemic estimate of degree of
truthlikeness on the evidence, or expected degree of truthlikeness,
*verisimilitude*. Since *verisimilitude* is typically
taken to be a synonym for *truthlikeness,* we will not follow
him in this, and will stick instead with *expected
truthlikeness* for the epistemic notion. See also Maher
(1993).)
\[ \mathbf{E}TL(A\mid e) = \sum\_i TL(A\mid C\_i)\times P(C\_i \mid e). \]
Clearly, the expected degree of truthlikeness of a proposition *is*
epistemically accessible, and it can serve as our best empirical
estimate of the objective degree of truthlikeness. Progress occurs in
an inquiry when actual truthlikeness increases. And apparent progress
occurs when the expected degree of truthlikeness increases. (See the
entry on
scientific progress,
§2.5.) This notion of expected truthlikeness is comparable to,
but sharply different from, that of epistemic probability: the
supplement on
Expected Truthlikeness
discusses some instructive differences between the two concepts.
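The formula for expected truthlikeness can be rendered schematically in the simple weather framework. In the sketch below the *average* extension function and the particular credence distribution are choices made for illustration only:

```python
from itertools import product

constituents = list(product([0, 1], repeat=3))   # the 8 weather worlds

def tl(A, C):
    """Truthlikeness of proposition A (a set of worlds) given that
    constituent C is true: here, 1 minus the average normalized
    Hamming distance (the 'average' measure; other extension
    functions would give different values)."""
    def dist(w):
        return sum(a != b for a, b in zip(w, C)) / 3
    return 1 - sum(dist(w) for w in A) / len(A)

def expected_tl(A, P):
    """ETL(A | e) = sum_i TL(A | C_i) * P(C_i | e)."""
    return sum(tl(A, C) * P[C] for C in constituents)

# Invented credences: evidence strongly favoring the world (1, 1, 1).
P = {C: (0.65 if C == (1, 1, 1) else 0.05) for C in constituents}
A = [(1, 1, 1), (0, 1, 1)]   # the proposition r & w, leaving h open
print(expected_tl(A, P))
```

Unlike objective truthlikeness, this quantity is computable from the credence function alone, which is what makes it a candidate empirical estimate.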
With this proposal, Niiniluoto also made a connection with the
application of decision theory to epistemology. Decision theory is an
account of what it is rational to do in light of one's beliefs
and desires. One's goal, it is assumed, is to maximize utility.
But given that one does not have perfect information about the state
of the world, one cannot know for sure how to accomplish that. Given
uncertainty, the rule to maximize utility or value cannot be applied
in normal circumstances. So under conditions of uncertainty, what it
is rational to do, according to the theory, is to maximize
*subjective expected utility*. Starting with Hempel's
classic 1960 essay, epistemologists conjectured that
decision-theoretic tools might be applied to the problem of theory
*acceptance* - which hypothesis it is rational to accept
on the basis of the total available evidence to hand. But, as Hempel
argued, the values or utilities involved in a decision to accept a
hypothesis cannot be simply regular practical values. These are
typically thought to be generated by one's desires for various
states of affairs to obtain in the world. But the fact that one would
very much like a certain favored hypothesis to be true does not
increase the cognitive value of accepting that hypothesis.
>
> This much is clear: the utilities should reflect the value or disvalue
> which the different outcomes have from the point of view of pure
> scientific research rather than the practical advantages or
> disadvantages that might result from the application of an accepted
> hypothesis, according as the latter is true or false. Let me refer to
> the kind of utilities thus vaguely characterized as purely scientific,
> or epistemic, utilities. (Hempel 1960, 465)
>
If we had a decent theory of *epistemic utility* (also known as
*cognitive utility* or *cognitive value*), perhaps what
hypotheses one ought to accept, or what experiments one ought to
perform, or how one ought to revise one's corpus of belief in
the light of new information, could be determined by the rule:
maximize expected epistemic utility (or maximize expected cognitive
value). Thus decision-theoretic epistemology was born.
Hempel went on to ask what epistemic utilities are implied in the
standard conception of scientific inquiry -
"...construing the proverbial 'pursuit of
truth' in science as aimed at the establishment of a maximal
system of true statements..." (Hempel 1960, 465).
Already we have here the germ of the idea central to the truthlikeness
program: that the goal of an inquiry is to end up accepting (or
"establishing") the theory that yields the whole truth of
some matter. It is interesting that, around the same time that Popper
was trying to articulate a content-based account of truthlikeness,
Hempel was attempting to characterize partial fulfillment of the goal
(that is, of maximally contentful truth) in terms of some combination
of truth and content. These decision-theoretic considerations lead
naturally to a brief consideration of the axiological problem of
truthlikeness.
## 3. The Axiological Problem
Our interest in the concept of truthlikeness seems grounded in the
value of highly truthlike theories. And that, in turn, seems grounded
in the value of truth.
Let's start with the putative value of truth. Truth is not, of
course, a good-making property of the objects of belief states. The
proposition \(h \amp r \amp w\) is not a better *proposition*
when it happens to be hot, rainy and windy than when it happens to be
cold, dry and still. Rather, the cognitive state of *believing*
\(h \amp r \amp w\) is often deemed to be a good state to be in if the
proposition is true, not good if it is false. So the state of
*believing truly* is better than the state of *believing
falsely*.
At least some, perhaps most, of the value of believing truly can be
accounted for instrumentally. Desires mesh with beliefs to produce
actions that will best achieve what is desired. True beliefs will
generally do a better job of this than false beliefs. If you are
thirsty and you believe the glass in front of you contains safe
drinking water rather than lethal poison then you will be motivated to
drink. And if you do drink, you will be better off if the belief you
acted on was true (you quench your thirst) rather than false (you end
up dead).
We can do more with decision theory utilizing purely practical or
non-cognitive values. For example, there is a well-known,
decision-theoretic, *value-of-learning theorem*, the
Ramsey-Good theorem, which partially vindicates the value of gathering
new information, and it does so in terms of "practical"
values, without assuming any purely cognitive values. (Good 1967.)
Suppose you have a choice to make, and you can either choose now, or
choose after doing some experiment. Suppose further that you are
rational (you always choose by expected value) and that the experiment
is cost-free. It follows that performing the experiment and then
choosing always has at least as much expected value as choosing
without further ado. Further, doing the experiment has higher expected
value if one possible outcome of the experiment would alter the
relative values of some of your options. So, you should do the experiment
just in case the outcome of the experiment could make a difference to
what you choose to do. Of course, the expected gain of doing the
experiment has to be worth the expected cost. Not all information is
worth pursuing when you have limited time and resources.
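A toy instance of the theorem, with invented payoffs and a perfectly reliable cost-free experiment, shows the inequality directly:

```python
prior = {"s1": 0.5, "s2": 0.5}
payoff = {("a1", "s1"): 1.0, ("a1", "s2"): 0.0,
          ("a2", "s1"): 0.0, ("a2", "s2"): 1.0}

def expected_value(act, credence):
    return sum(payoff[(act, s)] * credence[s] for s in credence)

# Choosing now: pick the act with the highest expected value on the prior.
ev_now = max(expected_value(a, prior) for a in ("a1", "a2"))

# Choosing after a perfectly reliable experiment: the experiment reveals
# the state, we then pick the best act for it, weighting by the prior.
ev_after = sum(prior[s] * max(payoff[(a, s)] for a in ("a1", "a2"))
               for s in prior)

print(ev_now, ev_after)   # 0.5 1.0 -- the experiment raises expected value
```

Here the outcome of the experiment would change which act is chosen, so the expected value of experimenting first strictly exceeds that of choosing without further ado.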
David Miller notes the following serious limitation of the Ramsey-Good
value-of-learning theorem:
>
> The argument as it stands simply doesn't impinge on the question
> of why evidence is collected in those innumerable cases in theoretical
> science in which no decisions will be, or anyway are envisaged to be,
> affected. (Miller 1994, 141).
>
There are some things that we think are worth knowing about, the
knowledge of which would not change the expected value of any action
that one might be contemplating performing. We spent billions of
dollars conducting an elaborate experiment to determine whether the
Higgs boson exists, and the results were promising enough that Higgs
was awarded the Nobel Prize for his prediction. The discovery may also
yield practical benefits, but it is the *cognitive* change it
induces that makes it worthwhile. It is valuable simply for what it
did to our credal state - not just our credence in the existence
of the Higgs, but our overall credal state. We may of course be wrong
about the Higgs - the results might be misleading - but
from our new epistemic viewpoint it certainly appears to us that we
have made cognitive gains. We have a little bit more evidence for the
truth, or at least the truthlikeness, of the standard model.
What we need, it seems, is an account of pure cognitive value, one
that embodies the idea that getting at the truth is itself valuable
whatever its practical benefits. As noted this possibility was first
explicitly raised by Hempel, and developed in various different ways
in the 1960s and 1970s by Levi (1967) and Hilpinen (1968), amongst
others. (For recent developments see the entry
epistemic utility arguments for probabilism,
§6.)
We could take the cognitive value of believing a single proposition
\(A\) to be positive when \(A\) is true and negative when \(A\) is
false. But acknowledging that belief comes in degrees, the idea might
be that the stronger one's belief in a truth (or of one's
disbelief in a falsehood), the greater the cognitive value. Let \(V\)
be the characteristic function of answers, both complete and partial,
to the question \(\{C\_1, C\_2 , \ldots ,C\_n , \ldots \}\). That is to
say, where \(A\) is equivalent to some disjunction of complete
answers:
* \(V\_i (A) = 1\) if \(A\) is entailed by the complete answer
\(C\_i\) (i.e., \(A\) is true according to \(C\_i\));
* \(V\_i (A) = 0\) if the negation of \(A\) is entailed by the
complete answer \(C\_i\) (i.e., \(A\) is false according to
\(C\_i\)).
Then the simple view is that the cognitive value of believing \(A\) to
degree \(P(A)\) (where \(C\_i\) is the complete true answer) is greater
the closer \(P(A)\) is to the actual value of \(V\_i (A)\) - that
is, the smaller \(|V\_i (A)-P(A)|\) is. Variants of this idea have been
endorsed, for example by Horwich (1982, 127-9), and Goldman
(1999), who calls this "veristic value".
\(|V\_i (A)-P(A)|\) is a measure of how far \(P(A)\) is from \(V\_i
(A)\), and \(-|V\_i (A)-P(A)|\) of how close it is. So here is the
simplest linear realization of this desideratum:
\[ Cv^1 \_i (A, P) = - |V\_i (A)-P(A)|. \]
But there are of course many other measures satisfying the basic idea.
For example we have the following quadratic measure:
\[ Cv^2 \_i (A, P) = -(V\_i (A)- P(A))^2. \]
Both \(Cv^1\) and \(Cv^2\) reach a maximum when \(P(A)\) is maximal
and \(A\) is true, or \(P(A)\) is minimal and \(A\) is false; and a
minimum when \(P(A)\) is maximal and \(A\) is false, or \(P(A)\) is
minimal and \(A\) is true.
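The two local measures, and the extremal behavior just described, can be stated directly in code (the probing grid of credences is chosen for illustration):

```python
def cv1(v, p):
    """Linear cognitive value: -|V_i(A) - P(A)|."""
    return -abs(v - p)

def cv2(v, p):
    """Quadratic cognitive value: -(V_i(A) - P(A))^2."""
    return -(v - p) ** 2

for cv in (cv1, cv2):
    # Maximum (zero) when credence matches the truth value exactly...
    assert cv(1, 1.0) == cv(0, 0.0) == 0
    # ...minimum when full credence is invested in the wrong value.
    grid = (0.0, 0.25, 0.5, 0.75, 1.0)
    assert cv(1, 0.0) == cv(0, 1.0) == min(cv(v, p)
                                           for v in (0, 1) for p in grid)
```

The two measures agree on these extrema but differ in how they penalize intermediate credences, which is why further desiderata (such as Joyce's) are needed to choose between them.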
These are measures of *local* value - the value of
investing a certain degree of belief in a single answer \(A\) to the
question \(Q\). But we can agglomerate these local values into the
value of a total credal state \(P\). This involves a substantive
assumption: that the value of a credal state is some additive function
of the values of the individual beliefs states it underwrites. A
credal state, relative to inquiry \(Q\), is characterized by its
distribution of credences over the elements of the partition \(Q\). An
*opinionated* credal state is one that assigns credence 1 to
just one of the complete answers. The best credal state to be in,
relative to the inquiry \(Q\), is the opinionated state that assigns
credence 1 to the complete correct answer. This will also assign
credence 1 to every true answer, and 0 to every false answer, whether
partial or complete. In other words, if the correct answer to \(Q\) is
\(C\_i\) then the best credal state to be in is the one identical to
\(V\_i\): for each partial or complete answer \(A\) to \(Q, P(A) = V\_i
(A)\). This credal state is the state of believing the truth, the
whole truth and nothing but the truth about \(Q\). Other total credal
states (whether opinionated or not) should turn out to be less good
than this perfectly accurate credal state. But there will, of course,
have to be more constraints on the accuracy and value of total
cognitive states.
Assuming additivity, the value of the credal state \(P\) can be taken
to be the (weighted) sum of the cognitive values of local belief
states.
>
> \(CV\_i(P) = \sum\_A \lambda\_A Cv\_i(A,P)\) (where \(A\) ranges over all
> the answers, both complete and incomplete, to question \(Q)\), and the
> \(\lambda\)-terms assign a fixed (non-contingent) weight to the
> contribution of each answer \(A\) to overall accuracy.
>
Plugging in either of the above local cognitive value measures, total
cognitive value is maximal for an assignment of maximal probability to
the true complete answer, and falls off as confidence in the correct
answer falls away.
Since there are many different measures that reflect the basic idea of
the value of true belief, how are we to decide between them? We have
here a familiar problem of underdetermination. Joyce (1998) lays down
a number of desiderata, most of them very plausible, for a measure of
what he calls the *accuracy* of a cognitive state. These
desiderata are satisfied by \(Cv^2\) but not by \(Cv^1\). In fact
all the members of a family of closely related measures, of which
\(Cv^2\) is a member, satisfy Joyce's desiderata:
\[ CV\_i(P) = \sum\_A -\lambda\_A [(V\_i (A) - P(A))^2]. \]
Giving equal weight to all propositions \((\lambda\_A =\) some non-zero
constant \(c\) for all \(A)\), this is equivalent to the
*Brier* measure (see the entry on
epistemic utility arguments for probabilism,
§6). Given that the \(\lambda\)-weightings can vary, there is
considerable flexibility in the family of admissible measures.
Our main interest here is whether any of these so-called *scoring
rules* (rules which score the overall accuracy of a credal state)
might constitute an acceptable measure of the cognitive value of a
credal state.
Absent specific refinements to the \(\lambda\)-weightings, the
quadratic measures seem unsatisfactory as a solution either to the
problem of accuracy or to the problem of cognitive value. Suppose you
are inquiring into the number of the planets and you end up fully
believing that the number of the planets is 9. Given that the correct
answer is 8, your credal state is not perfect. But it is pretty good,
and it is surely a much better credal state to be in than the
opinionated state that sets probability 1 on the number of planets
being 9 billion. It seems rather natural to hold that the cognitive
value of an opinionated credal state is sensitive to the degree of
*truthlikeness* of the complete answer that it fixes on, not
just to its *truth value*. But this is not endorsed by
quadratic measures.
Joyce's desiderata build in the thesis that accuracy is
insensitive to the distances of various different complete answers
from the complete true answer. The crucial principle is
*Extensionality*.
>
> Our next constraint stipulates that the "facts" which a
> person's partial beliefs must "fit" are exhausted by
> the truth-values of the propositions believed, and that the only
> aspect of her opinions that matter is their strengths. (Joyce 1998,
> 591)
>
We have already seen that this seems wrong for opinionated states that
confine all probability to a false hypothesis. Being convinced that
the number of planets in the solar system is 9 is better than being
convinced that it is 9 billion. But the same idea holds for truths.
Suppose you believe truly that the number of the planets is either 7,
8, 9 or 9 billion. This can be true in four different ways. If the
number of the planets is 8 then, intuitively, your belief is a little
bit closer to the truth than if it is 9 billion. (This judgment is
endorsed by both the *average* and the *min-sum-average*
proposals.) So even in the case of a true answer to some query, the
value of believing that truth can depend not just on its truth and the
strength of the belief, but on where the truth lies. If this is right
*Extensionality* is misguided.
Joyce considers a variant of this objection: namely, that given
Extensionality Kepler's beliefs about planetary motion would be
judged to be no more accurate than Copernicus's. His response is
that there will always be propositions which distinguish between the
accuracy of these falsehoods:
>
> I am happy to admit that Kepler held more accurate beliefs than
> Copernicus did, but I think the sense in which they were more accurate
> is best captured by an extensional notion. While Extensionality rates
> Kepler and Copernicus as equally inaccurate when their false beliefs
> about the earth's orbit are considered apart from their effects
> on other beliefs, the advantage of Kepler's belief has to do
> with the other opinions it supports. An agent who strongly believes
> that the earth's orbit is elliptical will also strongly believe
> many more truths than a person who believes that it is circular (e.g.,
> that the average distance from the earth to the sun is different in
> different seasons). This means that the overall effect of
> Kepler's inaccurate belief was to improve the extensional
> accuracy of his system of beliefs as a whole. Indeed, this is why his
> theory won the day. I suspect that most intuitions about falsehoods
> being "close to the truth" can be explained in this way,
> and that they therefore pose no real threat to Extensionality. (Joyce
> 1998, 592)
>
Unfortunately, this contention - that considerations of accuracy
over the whole set of answers which the two theories give will sort
them into the right ranking - isn't correct (Oddie
2019).
Wallace and Greaves (2005) assert that the weighted quadratic measures
"can take account of the value of verisimilitude." They
suggest this can be done "by a judicious choice of the
coefficients" (i.e., the \(\lambda\)-values). They go on
to say: "... we simply assign high \(\lambda\_A\) when \(A\)
is a set of 'close' states." (Wallace and Greaves
2005, 628). 'Close' here presumably means 'close to
the actual world'. But whether an answer \(A\) contains complete
answers that are close to the actual world - that is, whether
\(A\) is *truthlike* - is clearly a world-dependent
matter. The coefficients \(\lambda\_A\) were not intended by the
authors to be world-dependent. (But whether the coefficients are
world-dependent or world-independent, the quadratic measures cannot
capture truthlikeness. See Oddie 2019.)
One defect of the quadratic class of measures, combined with
world-independent weights, corresponds to a problem familiar from the
investigation of attempts to capture truthlikeness simply in terms of
classes of true and false consequences, or in terms of truth value and
content alone. Any two complete false answers yield precisely the same
number of true consequences and the same number of false consequences.
The local quadratic measure accords the same value to each true answer
and the same disvalue to each false answer. So if, for example, all
answers are given the same \(\lambda\)-weighting in the global
quadratic measure, any two opinionated false answers will be accorded
the same degree of accuracy by the corresponding global measure.
It may be that there are ways of tinkering with the
\(\lambda\)-terms to avoid the objection, but on the face of it
the notions of local cognitive value embodied in both the linear and
quadratic rules seem to be deficient because they omit considerations
of truthlikeness even while valuing truth. It seems that any adequate
measure of the cognitive value of investing a certain degree of belief
in \(A\) should take into account not just whether \(A\) is true or
false, but where the truth lies in relation to the worlds in \(A\).
Whatever our account of cognitive value, each credal state \(P\)
assigns an expected cognitive value to every credal state \(Q\)
(including \(P\) itself of course).
\[ \mathbf{E}CV\_P(Q) = \sum\_i P(C\_i) CV\_i(Q). \]
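In code, with the quadratic local measure standing in for \(CV\_i\) and scores restricted to the complete answers (a sketch under our own naming):

```python
def cv_at_world(q, i):
    """Quadratic cognitive value of credal state q at world C_i,
    summed over the complete answers only."""
    return -sum(((1.0 if j == i else 0.0) - qj) ** 2
                for j, qj in enumerate(q))

def expected_cv(p, q):
    """Expected cognitive value of adopting state q, computed from the
    standpoint of state p: sum_i P(C_i) * CV_i(Q)."""
    return sum(pi * cv_at_world(q, i) for i, pi in enumerate(p))

# A state's expectation for itself: e.g. [0.6, 0.4] expects value -0.48
# of staying put.
assert abs(expected_cv([0.6, 0.4], [0.6, 0.4]) + 0.48) < 1e-12
```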
Suppose we accept the injunction that one ought to maximize expected
value as calculated from the perspective of one's current credal
state. Let us say that a credal state \(P\) is *self-defeating*
if to maximize expected value from the perspective of \(P\) itself one
would have to adopt some distinct credal state \(Q\), *without the
benefit of any new information*:
>
> \(P\) is *self-defeating* = for some \(Q\) distinct from \(P\),
> *\(\mathbf{E}\)CV\(\_P\)*\((Q) \gt\)
> *\(\mathbf{E}\)CV\(\_P\)*\((P)\).
>
The requirement of *propriety* is that no credal state be
self-defeating: no credal state demands that you shift to another
credal state without new information. One feature of the quadratic
family of scoring rules is that they guarantee *propriety*,
and *propriety* is an extremely attractive
feature of cognitive value. From *propriety* alone, one can
construct arguments for the various elements of probabilism. For
example, *propriety* effectively guarantees that the credence
function must obey the standard axioms governing additivity (Leitgeb
and Pettigrew 2010); *propriety* guarantees a version of the
value-of-learning theorem in purely cognitive terms (Oddie, 1997);
*propriety* provides an argument that conditionalization is the
only method of updating compatible with the maximization of expected
cognitive value (Oddie 1997, Greaves and Wallace 2005, and Leitgeb and
Pettigrew 2010). Not only does propriety deliver these goods for
probabilism, but any account of cognitive value that doesn't
obey *propriety* entails that self-defeating probability
distributions should be eschewed *a priori*. (And that might
appear to violate a fundamental commitment of empiricism.)
There is however a powerful argument that any measure of cognitive
value that satisfies *propriety* cannot deliver a fundamental
desideratum on an adequate theory of truthlikeness (or accuracy): the
principle of *proximity*. Every serious account of
truthlikeness satisfies *proximity*. But *proximity* and
*propriety* turn out to be incompatible, given only very weak
assumptions (see Oddie 2019 and, for an extended critique, also
Schoenfield 2022). If this is right then the best account of
*genuine* accuracy cannot provide a vindication of pure
probabilism.
## 4. Conclusion
We are all fallibilists now, but we are not all skeptics, or
antirealists or nihilists. Most of us think inquiries can and do
progress even when they fall short of their goal of locating the truth
of the matter. We think that an inquiry can progress by moving from
one falsehood to another falsehood, or from one imperfect credal state
to another. To reconcile epistemic optimism with realism in the teeth
of the dismal induction we need a viable concept of truthlikeness, a
viable account of the empirical indicators of truthlikeness, and a
viable account of the role of truthlikeness in cognitive value. And
all three accounts must fit together appropriately.
There are a number of approaches to the logical problem of
truthlikeness but, unfortunately, there is as yet little consensus on
what constitutes the best or most promising approach, and prospects
for combining the best features of each approach do not at this stage
seem bright. In fact, recent work on whether the three main approaches
to the logical problem of truthlikeness presented above are compatible
seems to point to a negative answer (see the supplement on
The compatibility of the approaches).
There is, however, much work to be done on both the epistemological
and the axiological aspects of truthlikeness, and it may well be that
new constraints will emerge from those investigations that will help
facilitate a fully adequate solution to the logical problem as
well.
## 1. Vico's Life and Influence
Giovanni Battista Vico was born in Naples, Italy, on June 23, 1668, to
a bookseller and the daughter of a carriage maker. He received his formal
education at local grammar schools, from various Jesuit tutors, and at
the University of Naples from which he graduated in 1694 as Doctor of
Civil and Canon Law. As he reports in his autobiography (*Vita di
Giambattista Vico*), however, Vico considered himself as
"teacher of
himself,"[1]
a habit he developed under the direction of his father, after a fall
at the age of 7 caused a three year absence from school. Vico reports
that "as a result of this mischance he grew up with a melancholy
and irritable temperament such as belongs to men of ingenuity and
depth" (Vita, 111). Vico left Naples in 1686 for Vatolla where,
with occasional returns to his home city, he remained for the next
nine years as tutor to the sons of Domenico Rocca. Vico returned to
Naples in 1695 and four years later married Teresa Caterina Destito
with whom he had eight children. Of his surviving children (three of
them died), his younger son Gennaro was a favorite and went on to an
academic career; his daughter Luisa achieved success as a singer and
minor poet, and Ignazio is known to have been a source of
disappointment to him. Of the others (Angela Teresa and Filippo)
little is known. Vico was never rich, though he made a living, and the
legend of his poverty derives from Villarosa's embellishment of
the autobiography. He did, however, suffer bouts of ill-health, and
failed in his life-long ambition of succeeding to the chair of
Jurisprudence at the University of Naples, having to settle instead
for a lower and poorly paid professorship in Rhetoric. He retained
this position until 1741 at which time he was succeeded by his son
Gennaro. Vico died in Naples on January 22-23, 1744, aged
75.
In his own time Vico's work was largely neglected and generally
misunderstood-he describes himself living as a "stranger"
and "quite unknown" in his native city (Vita, 134)-and it
was not until the nineteenth century that his thought began to make a
significant impression on the philosophical world. However,
Vico's thought never suffered from complete obscurity and its
influence can be discerned in various traditions of European thought
from the mid-eighteenth century
onwards.[2]
In Italy, Vico's impact on aesthetic and literary criticism is
evident in the writings of Francesco De Sanctis and Benedetto Croce,
and in jurisprudence, economics, and political theory, his influence
can be traced from Antonio Genovesi (one of Vico's own pupils),
Ferdinando Galiani, and Gaetano Filangieri. In Germany, Vico's
ideas were known to Johann Georg Hamann and, via his disciple J.G.
von Herder, to Johann Wolfgang von Goethe and Friedrich Heinrich
Jacobi. Vico's ideas were sufficiently familiar to Friedrich
August Wolf to inspire his article "G.B. Vico on Homer."
In France, Vico's thought was likely known to Charles de
Secondat, baron de Montesquieu, and Jean-Jacques Rousseau, and some
have seen his influence in the writings of Denis Diderot, Étienne
Bonnot, abbé de Condillac, and Joseph Marie, comte de Maistre.
In Great Britain, although Vichian themes are intimated in the
philosophical writings of the Empiricists and thinkers of the Scottish
Enlightenment, there is no direct evidence that they knew of his
writings. The earliest known disseminator of Vico's views in the
English-speaking world is Samuel Taylor Coleridge, who was responsible
for much of the interest in Vico in the second half of the nineteenth
century.
Vico's ideas reached a wider audience with a German translation
of *The New Science* by W.E. Weber which appeared in 1822, and,
more significantly, through a French version by Jules Michelet in
1824, which was reissued in 1835. Michelet's translation was
widely read and was responsible for a new appreciation of Vico's
work in France. Subsequently, Vico's views impacted the work of
Wilhelm Dilthey, Karl Marx, R.G. Collingwood, and James Joyce, who
used *The New Science* to structure *Finnegans Wake*.
Twentieth century scholarship has established illuminating comparisons
with the tradition of Hegelian idealism, and taken up the relationship
between Vico's thought and that of philosophers in the western
tradition and beyond, including Plato, Aristotle, Ibn Khaldun, Thomas
Hobbes, Benedict de Spinoza, David Hume, Immanuel Kant, and Friedrich
Nietzsche. Comparisons and connections have also been drawn between
Vichean themes and the work of various modern and contemporary
thinkers, *inter alia* W.B. Yeats, Friedrich Froebel, Max
Horkheimer, Walter Benjamin, Martin Heidegger, Hans-Georg Gadamer,
Jürgen Habermas, Paul Ricoeur, Jean-François Lyotard, and
Alasdair MacIntyre. As a review of recent and current literature
demonstrates, an appreciation of Vico's thought has spread far
beyond philosophy, and his ideas have been taken up by scholars within
a range of contemporary disciplines, including anthropology, cultural
theory, education, hermeneutics, history, literary criticism,
psychology, and sociology. Thus despite obscure beginnings, Vico is
now widely regarded as a highly original thinker who anticipated
central currents in later philosophy and the human sciences.
## 2. Vico's Early Works
Vico's earliest publications were in poetry rather than
philosophy-the Lucretian poem "Affetti di un disperato"
("The Feelings of One in Despair") (composed in 1692) and
"Canzone in morte di Antonio Carafa" ("Ode on the
Death of Antonio Carafa") both published in 1693. After his
appointment to Professor of Rhetoric in 1699, Vico began to address
philosophical themes in the first of six *Orazioni Inaugurali*
(*Inaugural Orations*). In 1707 Vico gave a seventh
oration-intended as an augmentation of the philosophy of
Bacon-that he subsequently revised and published in 1709 under the title *De
nostri temporis studiorum ratione* (*On the Study Methods of
Our Time*). Two years later his projected but uncompleted
statement on metaphysics appeared, *De antiquissima Italorum
sapientia ex linguae latinae originibus eruenda libri tres*
(*On the Most Ancient Wisdom of the Italians Unearthed from the
Origins of the Latin Language*). The years that followed saw the
publication of various works, including two replies to critical
reviews of *De antiquissima* (1711 and 1712), a work of royal
historiography-*De rebus gestis Antonii Caraphaei libri
quattuor* (*The Life of Antonio Caraffa*) (1716)-*Il
diritto universale* (*Universal Right*) (1720-22),
and in 1725 and 1731 the two parts of his autobiography which together
compose the *Vita di Giambattista Vico scritta da se medesimo*
(*Life of Giambattista Vico Written by Himself*). Over the
course of his professional career, Vico also composed and delivered
lectures on rhetoric to students preparing to enter into the study of
jurisprudence. These survive as student transcriptions or copies of
the original lectures, and are collected under the title
*Institutiones Oratoriae* (1711-1741) (translated as
*The Art of Rhetoric*) in which Vico develops his views
on rhetoric or eloquence, "the faculty of speaking appropriate
to the purpose of persuading" through which the orator aims to
"bend the spirit by his
speech."[3]
Although these early writings are increasingly studied as significant
works in their own right, they have generally been regarded as important
for tracing the development of various themes and ideas which Vico
came to refine and express fully in his major and influential work,
*The New Science*. In *De nostri temporis studiorum
ratione*, for example, Vico takes up the theme of modernity and
raises the question of "which study method is finer or better,
ours or the Ancients?" and illustrates "by examples the
advantages and drawbacks of the respective
methods."[4]
Vico observes that the Moderns are equipped with the
"instruments" of "philosophical
'critique'" and "analytic geometry"
(especially in the form of Cartesian logic) (DN, 7-9), and that
these open realms of natural scientific inquiry (chemistry,
pharmacology, astronomy, geographical exploration, and mechanics) and
artistic production (realism in poetry, oratory, and sculpture) which
were unknown and unavailable to the Ancients (DN, 11-12).
Although these bring significant benefits, Vico argues, modern
education suffers unnecessarily from ignoring the *ars topica*
(art of topics), which encourages the use of imagination and memory in
organizing speech into eloquent persuasion. The result, Vico argues,
is an undue attention to the "geometrical method" modeled
on the discipline of physics (DN, 21ff.), and an emphasis on abstract
philosophical criticism over poetry. This undermines the importance of
exposition, persuasion, and pleasure in learning; it
"benumbs...[the] imagination and stupefies...[the] memory"
(DN, 42), both of which are central to learning, complex reasoning,
and the discovery of truth. Combining the methods of both Ancients and
Moderns, Vico argues that education should aim ideally at cultivating
the "total life of the body politic" (DN, 36): students
"should be taught the totality of the sciences and arts, and
their intellectual powers should be developed to the full" so
that they "would become exact in science, clever in practical
matters, fluent in eloquence, imaginative in understanding poetry or
painting, and strong in memorizing what they have learned in their
legal studies" (DN, 19).
This defense of humanistic education is expanded in the
*Orations*, directed "to the flower and stock of
well-born young
manhood,"[5]
where Vico makes a case for modern humanistic education and focuses
on a certain kind of "practical wisdom" or
*prudentia* which the human mind, with the appropriate
discipline and diligence, is able to attain. This theme is continued
in *De Antiquissima*, where Vico traces the consequences of his
insight that language can be treated as a source of historical
knowledge. Many words of the Latin language, Vico observes, appear to
be "derived from some inward learning rather than from the
vernacular usage of the
people."[6]
Treated as a repository of the past, Latin might be investigated as a
way of "seek[ing] out the ancient wisdom of the Italians from
the very wisdom of their words" (DA, 40).
In the course of pursuing this task, Vico also develops two central
themes of his mature philosophy: the outlines of a philosophical
system in contrast to the then dominant philosophy of Descartes
already criticized in *De nostri temporis studiorum ratione*,
and his statement of the principle that *verum et factum
convertuntur*, that "the true and the made
are...convertible," or that "the true is precisely what is
made" (*verum esse ipsum factum*). Vico emphasizes that
science should be conceived as the "genus or mode by which a
thing is made" so that human science in general is a matter of
dissecting the "anatomy of nature's works" (DA, 48),
albeit through the "vice" of human beings that they are
limited to "abstraction" as opposed to the power of
"construction" which is found in God alone (DA,
50-52). Given that "the norm of the truth is to have made
it" (DA, 52), Vico reasons, Descartes' famous first
principle that clear and distinct ideas are the source of truth must
be rejected: "For the mind does not make itself as it gets to
know itself," Vico observes, "and since it does not make
itself, it does not know the genus or mode by which it makes
itself" (DA, 52). Thus the truths of morality, natural science,
and mathematics do not require "metaphysical
justification" as the Cartesians held, but demand an analysis of
the causes-the "activity"-through which things are made
(DA, 64).
Vico's *Vita di Giambattista Vico* is of particular
interest, not only as a source of insight into the influences on his
intellectual development, but as one of the earliest and most
sophisticated examples of philosophical autobiography. Vico composed
the work in response (and indeed the only response) to a proposal
published by Count Gian Artico di Porcia to Italian scholars to
write their biographies for the edification of students. Referring to
himself in the third person, Vico records the course of his life and
the influence of various thinkers which led him to develop the
concepts central to his mature work. Vico reports on the importance of
reading Plato, Aristotle, the Hellenics, Scotus, Suarez, and the
Classical poets, and traces his growing interest in jurisprudence and
the Latin language (Vita, 116ff. passim). Vico describes how he came
to "meditate a principle of the natural law, which should be apt
for the explanation of the origins of Roman law and every other
gentile civil law in respect of history" (Vita, 119) and how he
discovered that "an ideal eternal law...should be observed in a
universal city after the idea or design of providence" (Vita,
122). According to Vico's own account, his studies culminated in
a distinction between ideas and languages. The first, he says,
"discovers new historical principles of geography and
chronology, the two ideas of history, and thence the principles of
universal history lacking hitherto" (Vita, 167), while the
latter "discovers new principles of poetry, both of song and
verse, and shows that both it and they sprang up by the same natural
necessity in all the first nations" (Vita, 168). Taken together,
these form the central doctrine of *The New Science*, namely,
that there is a "philosophy and philology of the human
race" which produces "an ideal eternal history based on
the idea of...providence...[and] traversed in time by all the
particular histories of the nations, each with its rise, development,
acme, decline and fall" (Vita, 169).
## 3. The New Science
Many themes of these early works-language, wisdom, history, truth,
causality, philology, rhetoric, philosophy, poetry, and the relative
strengths and weaknesses of ancient and modern learning-are
incorporated into and receive their fullest treatment in Vico's
major work, *The New Science*, first published in 1725 (an
edition subsequently known as the *Scienza Nuova Prima* or
*First New Science*) and again in a second and largely
rewritten version five years later. Vico published a third edition in
1744, which was later edited by Fausto Nicolini and appeared in 1928.
Nicolini is responsible for updating Vico's punctuation and
breaking the work up into its current structure of chapters, sections,
and numbered paragraphs (1112 in all), not originally supplied by Vico
himself, which have been reproduced in many though not all subsequent
editions. Nicolini referred to his edition as the *Scienza Nuova
seconda*-*Second New Science* (or simply *The New
Science*), under which title the definitive version is known
today.
The text consists of an overview ("Idea of the Work")
couched as an explication of a frontispiece depicting a female figure
(Metaphysics), standing on a globe of the Earth and contemplating a
luminous triangle containing the eye of God, or Providence. Below
stands a statue of Homer representing the origins of human society in
"poetic wisdom." This is followed by five Books and a
Conclusion, the first of which ("Establishment of
Principles") establishes the method upon which Vico constructs a
history of civil society from its earliest beginnings in the state of
nature (*stato di natura*) to its contemporary manifestation in
seventeenth century Europe.
In the work, Vico consciously develops his notion of *scienza*
(science or knowledge) in opposition to the then dominant philosophy
of Descartes with its emphasis on clear and distinct ideas, the most
simple elements of thought from which all knowledge, the Cartesians
held, could be derived a priori by way of deductive rules. As Vico had
already argued, one consequence and drawback of this
hypothetico-deductive method is that it renders phenomena which cannot
be expressed logically or mathematically as illusions of one sort or
another. This applies not only most obviously to the data of sense and
psychological experience, but also to the non-quantifiable evidence
that makes up the human sciences. Drawing on the *verum factum*
principle first described in *De Antiquissima*, Vico argues
against Cartesian philosophy that full knowledge of any thing involves
discovering *how* it came to be what it is as a product of
human action and the "principal property" of human beings,
viz., "of being social" ("Idea of the Work,"
§2,
p.3).[7]
The reduction of all facts to the ostensibly paradigmatic form of
mathematical knowledge is a form of "conceit," Vico
maintains, which arises from the fact that "man makes himself
the measure of all things" (Element I, §120, p.60) and that
"whenever men can form no idea of distant and unknown things,
they judge them by what is familiar and at hand" (Element II,
§122, p.60). Recognizing this limitation, Vico argues, is at once
to grasp that phenomena can only be known via their origins, or per
caussas (through causes). For "Doctrines must take their
beginning from that of the matters of which they treat" (Element
CVI, §314, p.92), he says, and it is one "great labor
of...Science to recover...[the] grounds of truth-truth which, with the
passage of years and the changes in language and customs, has come
down to us enveloped in falsehood" (Element XVI, §150,
pp.64-5). Unveiling this falsehood leads to
"wisdom," which is "nothing but the science of
making such use of things as their nature dictates" (Element
CXIV, §326, p.94). Given that *verum ipsum
factum*-"the true is the made," or something is true
*because* it is made-*scienza* both sets knowledge per
caussas as its task and as the method for attaining it; or, expressed
in other terms, the content of *scienza* is identical with the
development of that *scienza* itself.
The challenge, however, is to develop this science in such a way as to
understand the facts of the human world without either reducing them
to mere contingency or explaining their order by way of speculative
principles of the sort generated by traditional metaphysics. They must
be rendered intelligible, that is, without reducing them, as did the
Cartesians, to the status of ephemera. Vico satisfies this demand by
distinguishing at the outset of *The New Science* between
*il vero* and *il certo*, "the true" and
"the certain." The former is the object of knowledge
(*scienza*) since it is universal and eternal, whereas the
latter, related as it is to human consciousness (*coscienza*),
is particular and individuated. This produces two pairs of
terms-*il vero*/*scienza* and *il
certo*/*coscienza*-which constitute, in turn, the
explananda of philosophy and philology ("history" broadly
conceived), respectively. As Vico says, "philosophy contemplates
reason, whence comes knowledge of the true; philology observes that of
which human choice is author, whence comes consciousness of the
certain" (Element X, SS138, p.63). These two disciplines
combine in a method or "new critical art"
(*nuova arte critica*) where philosophy aims at
articulating the universal forms of intelligibility common to all
experience, while philology adumbrates the empirical phenomena of the
world which arise from human choice: the languages, customs, and
actions of people which make up civil society. Understood as mutually
exclusive disciplines-a tendency evident, according to Vico, in the
history of philosophy up to his time-philosophy and philology appear
as empty and abstract (as in the rational certainty of Cartesian
metaphysics) and merely empirical and contingent, respectively. Once
combined, however, they form a doctrine which yields a full knowledge
of facts where "knowledge" in the Vichean sense means to
have grasped both the necessity of human affairs (manifest in the
causal connections between otherwise random events) and the
contingency of the events which form the content of the causal chains.
Philosophy yields the universally true and philology the individually
certain.
The text of *The New Science* then constitutes Vico's
attempt to develop a method which itself comes to be in the course of
applying it to human experience, and this takes the form of a history
of civil society and its development through the progress of war and
peace, law, social order, commerce, and government. "Thus our
Science," Vico says near the beginning of the work, "comes
to be at once a history of the ideas, the customs, the deeds of
mankind. From these three we shall derive the principles of the
history of human nature, which we shall show to be the principles of
universal history, which principles it seems hitherto to have
lacked" ("Poetic Wisdom," §368, p.112).
Accomplishing this task involves tracing human society back to its
origins in order to reveal a common human nature and a genetic,
universal pattern through which all nations run. Vico sees this common
nature reflected in language, conceived as a store-house of customs,
in which the wisdom of successive ages accumulates and is presupposed
in the form of a *sensus communis* or "mental
dictionary" by subsequent generations. Vico defines this common
sense as "judgment without reflection, shared by an entire
class, an entire people, an entire nation, or the entire human
race" (Element XII, §145, pp.63-4). It is also
available to the philosopher who, by deciphering and thus recovering
its content, can discover an "ideal eternal history traversed in
time by the histories of all nations" (Proposition XLII,
§114, p.57).
The result of this, in Vico's view, is to appreciate history as
at once "ideal" (since it is never perfectly actualized) and
"eternal," because it reflects the presence of a divine
order or Providence guiding the development of human institutions.
Nations need not develop at the same pace (less developed ones can and
do coexist with those in a more advanced phase), but they all pass
through the same distinct stages (*corsi*): the ages of gods,
heroes, and men. Nations "develop in conformity to this
division," Vico says, "by a constant and uninterrupted
order of causes and effects present in every nation" ("The
Course the Nations Run," §915, p.335). Each stage, and thus
the history of any nation, is characterized by the manifestation of
natural law peculiar to it, and the distinct languages (signs,
metaphors, and words), governments (divine, aristocratic
commonwealths, and popular commonwealths and monarchies), as well as
systems of jurisprudence (mystic theology, heroic jurisprudence, and
the natural equity of free commonwealths) that define them.
In addition to specifying the distinct stages through which social,
civil, and political order develops, Vico draws on his earlier
writings to trace the origin of nations back to two distinct features
of human nature: the ages of gods and heroes result from memory and
creative acts of "imagination" (*fantasia*), while
the age of men stems from the faculty of "reflection"
(*riflessione*). Vico thus claims to have discovered two kinds
of wisdom, "poetic" and
"philosophical," corresponding to the dual nature of human
beings (sense and intellect), represented in the creations of
theological poets and philosophers, respectively ("Poetic
Wisdom," §779, p.297). Institutions arise first from the
immediacy of sense-experience, pure feeling, curiosity, wonder, fear,
superstition, and the child-like capacity of human beings to imitate
and anthropomorphize the world around them. Since "in the
world's childhood men were by nature sublime poets"
(Element XXXVII, §187, p.71), Vico reasons, nations must be
"poetic in their beginnings" (Element XLIV, §200,
p.73), so that their origin and course can be discovered by recreating
or remembering the "poetic" or "metaphysical
truth" which underlies them (Element XLVII, §205, p.74).
This is manifest primarily in fable, myth, the structure of early
languages, and the formations of polytheistic religion. The belief
systems of early societies are thus characterized by "poetic
metaphysics" which "seeks its proofs not in the external
world but within the modifications of the mind of him who meditates
it" ("Poetic Wisdom," §374, p.116), and
"poetic logic," through which the creations of this
metaphysics are signified. Metaphysics of this sort is "not
rational and abstract like that of learned men now," Vico
emphasizes, "but felt and imagined [by men] without power of
ratiocination...This metaphysics was their poetry, a faculty born with
them...born of their ignorance of causes, for ignorance, the mother of
wonder, made everything wonderful to men who were ignorant of
everything" ("Poetic Wisdom," §375, p.116).
Incapable of forming "intelligible class concepts of
things" (a feature of the human mind realized only in the age of
men), people "had a natural need to create poetic characters; that
is, imaginative class concepts or universals, to which, as to certain
models or ideal portraits, to reduce all the particular species which
resembled them" (Element XLIX, §209, p.74).
From this genus of poetic metaphysics, Vico then extrapolates the
various species of wisdom born of it. "Poetic morals" have
their source in piety and shame ("Poetic Wisdom,"
§502, p.170), he argues, while "poetic economy"
arises from the feral equality of human beings and the family
relationships into which they were forced by need ("Poetic
Wisdom," §523, p.180). Similarly, "poetic
cosmography" grows from seeing "the world as composed
of gods of the sky, of the underworld...and gods intermediate between
earth and sky" ("Poetic Wisdom," §710, p.269),
"poetic astronomy" from raising the gods "to the
planets and [assigning] the heroes to the constellations"
("Poetic Wisdom," §728, p.277), "poetic
chronology" out of the cycles of harvest and the seasons
("Poetic Wisdom," §732, p.279), and "poetic
geography" from naming the natural world through "the
semblances of things known or near at hand" ("Poetic
Wisdom," §741, p.285). As the faculty of reason develops
and grows, however, the power of imagination from which the earliest
forms of human society grew weakens and gives way finally to the power
of reflection; the cognitive powers of human beings gain ascendance
over their creative capacity, and reason replaces poetry as the
primary way of understanding the world. This defines the age of men
which makes philosophy, Vico reasons, a relatively recent development
in history, appearing as it did "some two thousand years after
the gentile nations were founded" (Element CV, §313,
p.92).
Since history itself, in Vico's view, is the manifestation of
Providence in the world, the transition from one stage to the next and
the steady ascendance of reason over imagination represent a gradual
progress of civilization, a qualitative improvement from simpler to
more complex forms of social organization. Vico characterizes this
movement as a "necessity of nature" ("Idea of the
Work," §34, p.21) which means that, with the passage of
time, human beings and societies tend increasingly towards realizing
their full potential. From rude beginnings undirected passion is
transformed into virtue, the bestial state of early society is
subordinated to the rule of law, and philosophy replaces sentiments of
religion. "Out of ferocity, avarice, and ambition, the three
vices which run throughout the human race," Vico says,
"legislation creates the military, merchant, and governing
classes, and thus the strength, riches, and wisdom of commonwealths.
Out of these three great vices, which could certainly destroy all
mankind on the face of the earth, it makes civil happiness"
(Element VII, §132, p.62). In addition, the transition from
poetic to rational consciousness enables reflective individuals (the
philosopher, that is, in the shape of Vico) to recover the body of
universal history from the particularity of apparently random events.
This is a fact attested to by the form and content of *The New
Science* itself.
Although from a general point of view history reveals a progress of
civilization through actualizing the potential of human nature, Vico
also emphasizes the cyclical feature of historical development.
Society progresses towards perfection, but without reaching it (thus
history is "ideal"), interrupted as it is by a break or
return (*ricorso*) to a relatively more primitive condition.
Out of this reversal, history begins its course anew, albeit from the
irreversibly higher point to which it has already attained. Vico
observes that in the latter part of the age of men (manifest in the
institutions and customs of medieval feudalism) the
"barbarism" which marks the first stages of civil society
returns as a "civil disease" to corrupt the body politic
from within. This development is marked by the decline of popular
commonwealths into bureaucratic monarchies, and, by the force of
unrestrained passions, the return of corrupt manners which had
characterized the earlier societies of gods and heroes. Out of this
"second barbarism," however, either through the appearance
of wise legislators, the rise of the fittest, or the last vestiges
of civilization, society returns to the "primitive simplicity of
the first world of peoples," and individuals are again
"religious, truthful, and faithful" ("Conclusion of
the Work," §§1104-1106, pp.423-4). From this
begins a new *corso* which Vico saw manifest in his own time as
the "second age of men" characterized by the
"true" Christian religion and the monarchical government
of seventeenth century Europe.
In addition to his account of the origins and history of civil society
developed in *The New Science*, Vico also advances a highly
original thesis about the origin and character of Homeric poetry,
which he refers to as "The Discovery of the New Homer."
Vico observes that the "vulgar feelings and vulgar customs
provide the poets with their proper materials" ("Discovery
of the True Homer," §781, p.301), and, given that in the
age of heroes these customs constituted a "savage" and
"unreasonable" state of human nature, the poetry of Homer
cannot be the esoteric wisdom or creative act of a single individual,
as scholars have assumed, but represents the imaginative universals of
the Greek people themselves. Homeric poetry thus contains the
"models or ideal portraits" which form the mental
dictionary of the Ancients, and explains the place Vico assigns to
Homer in the frontispiece where his statue represents the origins of
human society in poetic wisdom. Vico thus applies his doctrine of the
imaginative universal to the case of Homer and concludes that though
it "marks a famous epoch in history it never in the world took
place." The historical Homer was "quite simply a man of
the people" ("Discovery of the True Homer," §806,
p.308) and quite distinct from the ostensible author of the *Odyssey*
and *Iliad* who was actually a "purely ideal poet who never
existed in the world of nature...but was an idea or a heroic character
of Grecian men insofar as they told their histories in song"
("Discovery of the True Homer," §873, p.323). |
vienna-circle | ## 1. Introductory Remarks
While it is in the nature of philosophical movements and their leading
doctrines to court controversy, the Vienna Circle and its philosophies
did so more than most. To begin with, its members styled themselves as
conceptual revolutionaries who cleared the stables of academic
philosophy by showing metaphysics not simply to be false, but to be
cognitively empty and meaningless. In addition, they often associated
their attempt to overcome metaphysics with their public engagement for
scientific Enlightenment reason in the ever-darkening political
situation of 1920s and 1930s central Europe. Small wonder then that
the Vienna Circle has sharply divided opinion from the start. There is
very little beyond the basic facts of membership and its record of
publications and conferences that can be asserted about it without
courting some degree of controversy. (For English-language survey
monographs and articles on the Vienna Circle, see Kraft 1950,
Jørgensen 1951, Ayer 1959b, Passmore 1967, Hanfling 1981, Stadler
1998, Richardson 2003. Particularly rich in background and
bio-bibliographical materials is Stadler 1997 [2015]. The best short
introductory book has remained untranslated: Haller 1993. For popular overviews see Sigmund 2015 [2018] and Edmonds 2020.)
Fortunately, more than three decades worth of recent scholarship in
history of philosophy of science now allows at least some disputes to
be put into perspective. (See, e.g., the following at least in part
English-language collections of articles and research monographs:
Haller 1982, McGuinness 1985, Rescher 1985, Gower 1987, Proust 1989,
Zolo 1989, Coffa 1991, Spohn 1991, Uebel 1991, Bell and Vossenkuhl
1992, Sarkar 1992, Uebel 1992, Oberdan 1993, Stadler 1993, Cirera
1994, Salmon and Wolters 1994, Cartwright, Cat, Fleck and Uebel 1996,
Giere and Richardson 1996, Nemeth and Stadler 1996, Sarkar 1996,
Richardson 1998, Friedman 1999, Woleński and Köhler 1999, Fetzer
2000, Friedman 2000, Bonk 2003, Hardcastle and Richardson 2003,
Parrini, Salmon and Salmon 2003, Stadler 2003, Awodey and Klein 2004,
Reisch 2005, Galavotti 2006, Carus 2007, Creath and Friedman 2007,
Nemeth, Schmitz and Uebel 2007, Richardson and Uebel 2007, Uebel 2007,
Wagner 2009, Manninen and Stadler 2010, McGuinness 2011, Symons, Pombo
and Torres 2011, Creath 2012, Wagner 2012, Damböck 2016,
Pihlström, Stadler and Weidtmann 2017, Schiemer 2017, Tuboly 2017, Carus 2018, Cat and Tuboly 2019, Makovec and Shapiro 2019.) What distinguishes these works from valuable
collections like Schilpp 1963, Hintikka 1975 and Achinstein and Barker
1979 is that the unspoken assumption, to have understood Vienna Circle
philosophy correctly enough so as to consider its consequences
straightforward, is questioned in the more recent
scholarship. Many other pieces of new Vienna Circle scholarship are
spread throughout philosophical journals and essay collections with
more systematic or wider historical scope; important work has also
been done in German, Italian and French language publications but here
must remain unreferenced.
Two facts must be clearly recognized if a proper evaluation of the
Vienna Circle is to be attempted. The first is, that, despite its
relatively short existence, even some of the most central theses of
the Vienna Circle underwent radical changes. The second is that its
members were by no means of one mind in all important matters;
occasionally they espoused perspectives so radically at variance with
each other that even their ostensible agreements cannot remain wholly
unquestioned. Behind the rather thin public front, then, quite
different philosophical projects were being pursued by the leading
participants with, moreover, changing alliances. One way of taking
account of this is by speaking (as above) explicitly of the
philosophies (in the plural) of the Vienna Circle (and to avoid the
singular definite description) while using the expression
"Vienna Circle philosophy" (without an article) in a
neutral generic sense.
Recent scholarship has provided what the received view of Viennese
neopositivism lacks: recognition and documentation of the sometimes
sharply differentiated positions behind the generic surface. This does
not invalidate all previous scholarship, including some fundamental
criticisms of its positions, but it restores a depth to Vienna Circle
philosophy that was absent from the standard histories. The value of
this development must not be underestimated, for the recognition of
the Vienna Circle's sophisticated engagement with aspects of the
philosophical tradition and contemporaneous challenges calls into
question unwarranted certainties of our own self-consciously
post-positivist era. While there remains support for the view that
philosophical doctrines were held in the Vienna Circle that wholly
merited many of the standard criticisms to be cited below, there is
now also support for the view that in nearly all such cases, these
doctrines were already in their day opposed within the Circle itself.
While some of the Vienna Circle philosophies are dated and may even
be, as John Passmore once put it, as dead as philosophies can be,
others show signs of surprising vitality. Which ones these are,
however, remains a matter of debate.
The lead pursued in this article is provided by the comments of a
long-time associate of the Vienna Circle, C.G. Hempel, made in
1991:
>
> When people these days talk about logical positivism or the Vienna
> Circle and say that its ideas are passé, this is just wrong.
> This overlooks the fact that there were two quite different schools of
> logical empiricism, namely the one of Carnap and Schlick and so on and
> then the quite different one of Otto Neurath, who advocates a
> completely pragmatic conception of the philosophy of science....
> And this form of empiricism is in no way affected by any of the
> fundamental objections against logical positivism.... (quoted in
> Wolters 2003, 117)
>
While differing with Hempel's specific claim about how
the two "schools" divide, the aim here is to fill out his suggestive
picture by indicating what Schlick, Carnap and Neurath stand for
philosophically and why the different wings of the Vienna Circle
require differentiated assessments. After reviewing the basic facts
and providing an overall outline of Vienna Circle philosophy (in sect.
2), this article considers various doctrines in greater detail by way
of discussing standard criticisms with the appropriate distinctions in
mind (in sect. 3). No comprehensive assessment of the Vienna Circle
and the work of its members can be attempted here, but some basic
conclusions will be drawn (in sect. 4).
## 2. The Basics: People, Activities and Overview of Doctrines
### 2.1 People
The Vienna Circle was a group of scientifically trained philosophers
and philosophically interested scientists who met under the (nominal)
leadership of Moritz Schlick for often weekly discussions of problems
in the philosophy of science during academic terms in the years from
1924 to 1936. As is not uncommon with such groups, its identity is
blurred along several edges. Not all of those who ever attended the
discussions can be called members, and not all who attended did so
over the entire period. Typically, attention is focused on long-term
regulars who gained prominence through their philosophical
publications, but even these do not in all cases fall into the period
of the Vienna Circle proper. It is natural, nevertheless, to consider
under the heading "Vienna Circle" also the later work of
leading members who were still active in the 1940s, 50s and 60s.
Finally, there is the so-called periphery of international contacts
and visitors that prefigured the post-World War II network of
analytical philosophers of science. In the present article the
emphasis will be placed on the long-term regulars whose contributions
will be followed, selectively, into the post-Schlick era.
According to its unofficial manifesto (see section 2.3 below), the
Circle had "members" and recognized others as
"sympathetic" to its aims. It included as members, besides
Schlick who had been appointed to Mach's old chair in Philosophy
of the Inductive Sciences at the University of Vienna in 1922, the
mathematician Hans Hahn, the physicist Philipp Frank, the social
scientist Otto Neurath, his wife, the mathematician Olga Hahn-Neurath,
the philosopher Viktor Kraft, the mathematicians Theodor Radacovic and
Gustav Bergmann and, since 1926, the philosopher and logician Rudolf
Carnap. (Even before World War I, there existed a similarly oriented
discussion circle that included Frank, Hahn and Neurath. During the
time of the Schlick Circle, Frank resided in Prague throughout, Carnap
did so from 1931.) Further members were recruited among
Schlick's students, like Friedrich Waismann, Herbert Feigl and
Marcel Natkin, others were recruited among Hahn's students, like
Karl Menger and Kurt Gödel. Though listed as members in the
manifesto, Menger and Kraft later wanted to be known only as
sympathetic associates, like, all along, the mathematician Kurt
Reidemeister and the philosopher and historian of science Edgar
Zilsel. (Karl Popper was never a member or associate of the Circle,
though he studied with Hahn in the 1920s and in the early 1930s
discussed its doctrines with Feigl and Carnap.) Over the years, other
participants (not listed in the manifesto) included other students of
Schlick's and Hahn's like Béla von Juhos, Josef
Schachter, Walter Hollitscher, Heinrich Neider, Rose Rand, Josef
Rauscher and Käthe Steinhardt, a secondary-school teacher, Robert
Neumann, and, as notable thinkers with independent connections, the
jurist and philosopher Felix Kaufmann (also a member of F.A. von
Hayek's "Geistkreis") and the innovative
psychologist Egon Brunswik (coming, like the even more loosely
associated sociologists Paul Lazarsfeld and Marie Jahoda, from the
pioneering Karl Bühler's University Institute of
Psychology).
Despite its prominent position in the rich, if fragile, intellectual
culture of inter-war Vienna and most likely due to its radical
doctrines, the Vienna Circle found itself virtually isolated in most
of German speaking philosophy. The one exception was its contact and
cooperation with the Berlin Society for Empirical (later: Scientific)
Philosophy (the other point of origin of logical empiricism). The
members of the Berlin Society sported a broadly similar outlook and
included, besides the philosopher Hans Reichenbach, the logicians Kurt
Grelling and Walter Dubislav, the psychologist Kurt Lewin, the surgeon
Friedrich Kraus and the mathematician Richard von Mises. (Its leading
members Reichenbach, Grelling and Dubislav were listed in the
Circle's manifesto as sympathisers.) At the same time, members
of the Vienna Circle also engaged directly, if selectively, with the
Warsaw logicians (Tarski visited Vienna in 1930, Carnap later that
year visited Warsaw and Tarski returned to Vienna in 1935). Probably
partly because of its firebrand reputation, the Circle attracted also
a series of visiting younger researchers and students including Carl
Gustav Hempel from Berlin, Hasso Harlen from Stuttgart, Ludovico
Geymonat from Italy, Jørgen Jørgensen, Eino Kaila, Arne
Næss and Åke Petzäll from Scandinavia, A.J. Ayer from the UK, Albert
Blumberg, Charles Morris, Ernest Nagel and W.V.O. Quine from the USA,
H.A. Lindemann from Argentina and Tscha Hung from China. (The reports
and recollections of these former visitors--e.g. Nagel
1936--are of interest in complementing the Circle's
in-house histories and recollections which start with the unofficial
manifesto--Carnap, Hahn and Neurath 1929--and extend through
Neurath 1936, Frank 1941, 1949a and Feigl 1943 to the memoirs by
Carnap 1963, Feigl 1969a, 1969b, Bergmann 1987, Menger 1994.)
The aforementioned social and political engagement of members of the
Vienna Circle and of Vienna Circle philosophy for Enlightenment reason
had never made the advancement of its associates or protégés
easy in Viennese academia. From 1934 onwards, with anti-semitism
institutionalized and irrationalism increasingly dominating public
discourse, this engagement began to cost the Circle still more dearly.
Not only was the Verein Ernst Mach closed down early that year for
political reasons, but the ongoing dispersal of Circle members by
emigration, forced exile and death meant that after the murder of
Schlick in 1936 only a small rump was able to continue meetings for
another two years in Vienna. (1931: Feigl emigrated to USA; 1934: Hahn
died, Neurath fled to Holland, 1940 to UK; 1935: Carnap emigrated to
USA; 1936: Schlick murdered; 1937: Menger emigrated to USA, Waismann
to UK; 1938: Frank, Kaufmann, Brunswik, Bergmann emigrated to USA;
Zilsel, Rand to UK, later to USA; Hollitscher fled to Switzerland,
later to UK; Schachter emigrated to Palestine; 1940: Gödel
emigrated to USA; see Dahms 1995 for a chronology of the exodus.) But
the end of the Vienna Circle as such did not mean the end of its
influence due to the continuing development of the philosophy of
former members (and the work of Kraft in post-World War II Vienna; on
this see Stadler 2008). Particularly through their work in American
exile (esp. Carnap at Harvard, Chicago and UCLA; Feigl at Iowa and
Minnesota; less so Frank at Harvard) and that of earlier American
visitors (esp. Quine, Nagel), as well as through the work of fellow
émigrés from the Berlin Society (esp. Reichenbach, Hempel) and
their students (Hilary Putnam, Wesley Salmon), logical empiricism
strongly influenced the post-World War II development of analytic
philosophy of science. By contrast, Waismann had little influence in
the UK where Neurath, already marginalized, had died in 1945. (The
full story of logical empiricism's acculturation in the English
speaking world remains to be written, but see Howard 2003, Reisch
2005, Uebel 2005a, 2010, Douglas 2009, and Romizi 2012 for considerations of
aspects of Vienna Circle philosophy early and/or late that were neglected in the process
and remained long forgotten.)
### 2.2 Activities
After its formative phase which was confined to the Thursday evening
discussions, the Circle went public in 1928 and 1929 when it seemed
that the time had come for their emerging philosophy to play a
distinctive role not only in the academic but also the public sphere.
In November 1928, at its founding session, Schlick accepted the
presidency of the newly formed Verein Ernst Mach (Association Ernst
Mach), Hahn accepted one of its vice-presidencies and Neurath and
Carnap joined its secretariat. Originally proposed by the Austrian
Freidenkerbund (Free Thinker Association), the Verein Ernst Mach was
dedicated to the dissemination of scientific ways of thought and so
provided a forum for popular lectures on the new scientific
philosophy. In the following year the Circle stepped out under its own
name (invented by Neurath) with a manifesto and a special conference.
The publication of "The Scientific World Conception: The Vienna
Circle", signed by Carnap, Hahn and Neurath and dedicated to
Schlick, coincided with the "First Conference for the
Epistemology of the Exact Sciences" in mid-September 1929,
organised jointly with the Berlin Society as an adjunct to the Fifth
Congress of German Physicists and Mathematicians in Prague (where
Frank played a prominent role in the local organising committee). (On
the production history and early reception of the manifesto see Uebel
2008.) A distinct philosophical school appeared to be emerging, one
that was dedicated to ending the previous disputes of philosophical
schools by dismissing them, controversially, as strictly speaking
meaningless.
Throughout the early and mid-1930s the Circle kept a high and
increasingly international profile with its numerous publications and
conferences. In 1930, the Circle took over, again together with the
Berlin Society, the journal *Annalen der Philosophie* and
restarted it under the name of *Erkenntnis* with Carnap and
Reichenbach as co-editors. (Besides publishing original articles and
sustaining lengthy debates, this journal featured selected proceedings
of their early conferences and documented the lecture series of the
Verein Ernst Mach and the Berlin Society as well as their
international congresses.) In addition, from 1928 until 1936, Schlick
and Frank served as editors of their book series "Schriften zur
wissenschaftlichen Weltauffassung" ("Writings on the
Scientific World Conception"), which published major works by
leading members and early critics like Popper, while Neurath
edited, from 1933 until 1939, the series
"Einheitswissenschaft" ("Unified Science"),
which published more introductory essays by leading members and
sympathisers. Conference-wise, the Circle organized, again with the
Berlin Society, a "Second Conference for the Epistemology of the
Exact Sciences" as an adjunct to the Sixth Congress of German
Physicists and Mathematicians in Königsberg in September 1930
(where Reidemeister played a prominent role in the organization and
Gödel first announced his incompleteness result in the
discussion) and then began the series of International Congresses on
the Unity of Science with a "Pre-Conference" just prior to
the start of the Eighth International Congress of Philosophy in Prague
in September 1934. This, their last conference in Central Europe, was
followed by the International Congresses of various sizes in Paris
(September 1935, July 1937), Copenhagen (June 1936), Cambridge,
England (July 1938), Cambridge, Mass. (September 1939), all in the
main organized by Neurath; a smaller last gathering was held in
Chicago in September 1941. By 1938 their collective publication
activity began to centre on a monumentally planned *International
Encyclopedia of Unified Science*, with Neurath as editor-in-chief
and Carnap and Charles Morris as co-editors; by the time of
Neurath's death in 1945, only 10 monographs had appeared and the
series was wound up in 1970 numbering 20 monographs under the title
"Foundations of the Unity of Science" (notably containing
Thomas Kuhn's *Structure of Scientific Revolutions*
amongst them).
Individually, the members of the Vienna Circle published extensively
before, during and after the years of the Circle around Schlick. For
some (Frank, Hahn, Menger, Neurath), philosophy was only part of their
scientific output, with numerous monographs and articles in their
respective disciplines (mathematics, physics and social science);
others (Schlick, Carnap, Feigl, Waismann) concentrated on philosophy,
but even their output engaged relatively little with traditional concerns
of the field. Here it must be noted that two early monographs by
Schlick (1918/25) and Carnap (1928a), commonly associated with the
Vienna Circle, mostly predate their authors' participation there
and exhibit a variety of influences not typically associated with
logical positivism (see section 3.7 below). Moreover, important
monographs by Frank (1932), Neurath (1931a), Carnap (1934/37) and
Menger (1934) in the first half of the 1930s represent moves away from
positions that had been held in the Circle before and contradict its
orthodox profile. Yet the Circle's orthodoxy, as it were, is not
easily pinned down either. Schlick himself was critical of the
manifesto of 1929 and gave a brief vision statement of his own in
"The Turning Point in Philosophy" (1930). A long-planned
book by Waismann of updates on Wittgenstein's thought, to which
Schlick was extremely sympathetic, was never completed as originally
planned and only appeared posthumously (Waismann 1965; for earlier
material see Baker 2003). Comparison with rough transcripts of the
Circle's discussions in the early 1930s (for transcripts from
between December 1930 and July 1931 see Stadler 1997 [2015,
69-123]) suggests that Waismann's Wittgensteinian
"Theses", dated
to "around 1930" (Waismann 1967 [1979, Appendix B]), come closest
to an elaboration of the orthodox Circle position at that time (but
which did not remain undisputed even then). Again, what needs to be
stressed is that all of the Circle's publications are to be
understood as contributions to ongoing discussions among its members
and associates.
### 2.3 Overview of Doctrines
Despite the pluralism of the Vienna Circle's views, there did
exist a minimal consensus which may be put as follows. A theory of
scientific knowledge was propagated which sought to renew empiricism
by freeing it from the impossible task of justifying the claims of the
formal sciences. It will be noted that this updating did not leave
empiricism unchanged.
Following the logicism of Frege and Russell, the Circle considered
logic and mathematics to be analytic in nature. Extending
Wittgenstein's insight about logical truths to mathematical ones
as well, the Circle considered both to be tautological. Like true
statements of logic, mathematical statements did not express factual
truths: devoid of empirical content they only concerned ways of
representing the world, spelling out implication relations between
statements. The knowledge claims of logic and mathematics gained their
justification on purely formal grounds, by proof of their derivability
by stated rules from stated axioms and premises. (Depending on the
standing of these axioms and premises, justification was conditional
or unconditional.) Thus defanged of appeals to rational intuition, the
contribution of pure reason to human knowledge (in the form of logic
and mathematics) was thought easily integrated into the empiricist
framework. (Carnap sought to accommodate Gödel's
incompleteness results by separating analyticity from effective
provability and by postulating that arithmetic consisted of an
infinite series of ever richer arithmetical languages; see the
discussion and references in section 3.2 below.)
The synthetic statements of the empirical sciences meanwhile were held
to be cognitively meaningful if and only if they were empirically
testable in some sense. They derived their justification as knowledge
claims from successful tests. Here the Circle appealed to a criterion of meaningfulness
(cognitive significance) the correct formulation of which was problematical and much
debated (and will be discussed in greater detail in section 3.1
below). Roughly, if synthetic statements failed testability in
principle they were considered to be cognitively meaningless and to
give rise only to pseudo-problems. No third category of significance
besides that of *a priori* analytical and *a posteriori*
synthetic statements was admitted: in particular, Kant's
synthetic *a priori* was banned as having been refuted by the
progress of science itself. (The theory of relativity showed what had
been held to be an example of the synthetic *a priori*, namely
Euclidean geometry, to be false as the geometry of physical space.)
Thus the Circle rejected the knowledge claims of metaphysics as being
neither analytic and *a priori* nor empirical and synthetic.
(On related but different grounds, they also rejected the knowledge
claims of normative ethics: whereas conditional norms could be
grounded in means-ends relations, unconditional norms remained
unprovable in empirical terms and so depended crucially on the
disputed substantive *a priori* intuition.)
Given their empiricism, all of the members of the Vienna Circle also
called into question the principled separation of the natural and the
human sciences. They were happy enough to admit to differences in
their object domains, but denied the categorical difference in both
their overarching methodologies and ultimate goals in inquiry, which
the historicist tradition in the still only emerging social sciences
and the idealist tradition in philosophy insisted on. The
Circle's own methodologically monist position was sometimes
represented under the heading of "unified science".
Precisely how such a unification of the sciences was to be effected or
understood remained a matter for further discussion (see section 3.3
below).
It is easy to see that, combined with the rejection of rational
intuition, the Vienna Circle's exclusive apportionment of reason
into either formal *a priori* reasoning, issuing in analytic
truths (or contradictions), or substantive *a posteriori*
reasoning, issuing in synthetic truths (or falsehoods), severely
challenged the traditional understanding of philosophy. All members of
the Circle hailed the end of distinctive philosophical system
building. In line with the *Tractatus* claim that all
philosophy is really a critique of language, the Vienna Circle took
the so-called linguistic turn, the turn to representation as the
proper subject matter of philosophy. Philosophy itself was denied a
separate first-order domain of expertise and declared a second-order
inquiry. Whether the once queen of the sciences was thereby reduced to
the mere handmaiden of the latter was still left open. It remained a
matter of disagreement what type of distinctively
philosophical insight, if any, would remain legitimate. Just as
importantly, however, the tools of modern logic were employed also for
metatheoretical construction, not just for the reduction of empirical
claims to their observational base or, more generally, for the
derivation of their observational consequences. For the price of
abandoning foundationalist certainty this allowed for an enormous
expansion of the domain of empirical discourse. Ultimately, it opened
the space for the still ongoing discussions of scientific realism and
its alternatives (see section 3.4 below).
The Circle's leading protagonists differed in how they
conceptualized this reflexive second-order inquiry that the linguistic
turn had inaugurated. Nevertheless, they all agreed broadly that the
ways of representing the world were largely determined by convention.
A multitude of ideas hide behind this invocation of conventionality.
One particularly radical one is the denial of the apodicity of all
apriority, the denial of the claim that knowledge justified through
reason alone represents truths that are unconditionally necessary.
Another one is the imputation of agency in the construction of the
logico-linguistic frameworks that make human cognition possible, the
denial that conventionality could only mean acquiescence in tradition.
Whether such ideas were followed by individual members of the Circle,
however, depended on their own interests and influences. It is these
often overlooked or misunderstood differences that hold the key to
understanding the interplay of occasionally incompatible positions
that make up Vienna Circle philosophy. (As can be seen from some of
their internal disputes, moreover, these differences were also not
always obvious to the protagonists themselves.)
To see a striking example, consider their overarching conceptions of
philosophy itself. Some protagonists retained the idea that philosophy
possessed a separate disciplinary identity from science and, like
Schlick, turned philosophy into a distinctive, albeit non-formal
activity of meaning determination. Others, like Carnap, agreed on the
distinction between philosophy and science but turned philosophy into
a purely formal enterprise, the so-called logic of science. Still
others went even further and, like Neurath under the banner of
"unified science", also rejected philosophy as a separate
discipline and treated what remained of it, after the rejection of
metaphysics, as its partly empirical meta-theory, to be set beside the logic of science. With Schlick, then,
philosophy became the activity of achieving a much clarified and
deepened understanding of the cognitive and linguistic practices
actually already employed in science and everyday discourse. By
contrast, for Carnap, philosophy investigated and reconstructed
existing language fragments, developed new logico-linguistic
frameworks and suggested possible formal conventions for science,
while, for Neurath, the investigation of science was pursued by an
interdisciplinary meta-theory that encompassed empirical disciplines,
again with a pragmatic orientation. Thus we find in apparent
competition different conceptions of post-metaphysical philosophy: the
projects of experiential meaning determination, of formal rational
reconstruction and of naturalistic explications of leading theoretical
and methodological notions. (For roughly representative early
essay-length statements of their positions see Schlick 1930, Carnap
1932a and Neurath 1932a; later restatements are given in Schlick 1937,
Carnap 1936b and Neurath 1936b.) In the more detailed discussions
below these differences of overall approach will figure repeatedly
(see also section 3.6 below).
Criticisms of the basic positions adopted in the Vienna Circle are
legion, though it may be questioned whether most of them took account
of the sophisticated variations on offer. (Sometimes the
Circle's own writings are disregarded altogether and
"logical positivism" is discussed only via the proxy of
Ayer's popular exposition; see, e.g., Soames 2003.) But some
Neo-Kantians like Ernst Cassirer may claim that they too accepted
developments like the merely relative *a priori* and an
appropriate conception of the historical development of science.
Likewise, Wittgensteinians may claim that Wittgenstein's own
opposition to metaphysics only concerned false attempts to render it
intelligible: his merely ineffable but uneliminated metaphysics
concerned precisely what for him were essentials of ways of
representing the world. The commonest criticisms, however, concerned
not the uniqueness of the Vienna Circle's doctrines, or their
faithfulness to their supposed sources, but whether they were tenable
at all. Prominent objects of this type of criticism include the
verificationist theory of meaning and its claimed anti-metaphysical
and non-cognitivist consequences as well as its own significance; the
reductionism in phenomenalist or physicalist guises that appeared to
attend the Circle's attempted operationalisation of the logical
atomism of Russell and Wittgenstein; and the Circle's alleged
scientism in general and their formalist and a-historical conception
of scientific cognition in particular. These criticisms are discussed
in some detail below in order to assess which of the associated
doctrines remain important, and why.
As noted, the Vienna Circle did not last long: its philosophical
revolution came at a cost. Yet what was so socially, indeed
politically, explosive about what appears on first sight to be a
particularly arid, if not astringent, doctrine of specialist
scientific knowledge? To a large part, precisely what made it so
controversial philosophically: its claim to refute opponents not by
proving their statements to be false but by showing them to be
(cognitively) meaningless. Whatever the niceties of their
philosophical argument here, the socio-political impact of the Vienna
Circle's philosophies of science was obvious and profound. All
of them opposed the increasing groundswell of radically mistaken,
indeed irrational, ways of thinking about thought and its place in the
world. In their time and place, the mere demand that public discourse
be perspicuous, in particular, that reasoning be valid and premises
true--a demand implicit in their general ideal of
reason--placed them in the middle of crucial socio-political
struggles. Some members and sympathisers of the Circle also actively
opposed the then increasingly popular *völkisch*
supra-individual holism in social science as a dangerous intellectual
aberration. Not only did such ideas support racism and fascism in
politics, but such ideas themselves were supported only by radically
mistaken arguments concerning the nature and explanation of organic
and inorganic matter. So the first thing that made all of the Vienna
Circle philosophies politically relevant was the contingent fact that
in their day much political discourse exhibited striking epistemic
deficits. That some of the members of the Circle went, without logical
blunders, still further by arguing that socio-political considerations
can play a legitimate role in some instances of theory choice due to
underdetermination is yet another matter. This particular issue will
not be pursued further here (see references at the end of section 2.1
above), nor the general topic of the Circle's embedding in
modernism and the discourse of modernity (see Putnam 1981b for a
reductionist, Galison 1990 for a foundationalist, Uebel 1996 for a
constructivist reading of their modernism, Dahms 2004 for an account
of personal relations with representatives of modernism in art and
architecture).
## 3. Selected Doctrines and their Criticisms
Given only the outlines of Vienna Circle philosophy, its controversial
character is evident. The boldness of its claims made it attractive
but that boldness also seemed to be its undoing. Turning to the
questions of how far and, if at all, which forms of Vienna Circle
philosophy stand up to some common criticisms, both the synchronic
variations and the diachronic trajectories of its variants must be
taken into account. This will be attempted in the sections below.
Before expectations are raised too high, however, it must also be
remembered that in this article only the views of members of the
Vienna Circle can be discussed, even though the problematic issues
were pervasive in logical empiricism generally. (Compare the SEP article "Logical Empiricism"; for articles on
Reichenbach see, e.g., Spohn 1991 and Salmon and Wolters 1994,
Richardson 2005, 2006, and, for the Berlin Group as a whole, Milkov and Peckhaus 2013.) Moreover, here
the emphasis must lie on the main protagonists: Schlick, Carnap and
Neurath. (Neither Hahn nor Frank, nor Waismann nor Feigl, for instance,
can be discussed here as extensively as their work deserves; see,
e.g., Uebel 2005b, 2011b, McGuinness 2011, Haller 2003, respectively.)
What will be noted, however, is that Vienna Circle philosophy was by
no means identical with the post-World War II logical empiricism that
has come to be known as the "received view" of scientific
theories, even though it would be hard to imagine the latter without
the former. (For a systematic if schematic critical discussion of the
received view, see Suppe 1977, for a partial defense Mormann
2007a.)
To deepen the somewhat cursory overview of Vienna Circle philosophy
given above, we now turn to the discussion of the following issues:
first, the viability of the conceptions of empirical significance
employed by the Vienna Circle in its classical period; second, its uses of
the analytic-synthetic distinction; third, its supposedly reductionist
designs and foundationalist ambitions for philosophy; fourth, its
stances in the debate about realism or anti-realism with regard to the
theoretical terms in science; fifth, Carnap's later ideas in
response to some of the problems encountered; sixth, the issue of the
status of the meaning criterion itself (variously referred to as "criterion of significance" or "criterion of empirical significance") and of the point of their
critique of metaphysics; seventh, the Vienna Circle's attitude
towards history and of their own place in the history of
philosophy.
These topics have been chosen for the light their investigation throws
on the Circle's own agendas and the reception of its doctrines
amongst philosophers at large, as well as for the relative ease with
which their discussion allows its development and legacy to be
charted. There can be little doubt about the enormous impact that the
members of the Vienna Circle had on the development of
twentieth-century philosophy. What is less clear is whether any of its
distinctive doctrines are left standing once the dust of their
discussion has settled or whether those of its teachings that were
deemed defensible merged seamlessly into the broad church that
analytic philosophy has become (and, if so, what those surviving
doctrines and teachings may be).
It must be noted, then, that the topics chosen for this article do not
exhaust the issues concerning which the members of the Vienna Circle
made significant contributions (which continue to stimulate work in
the history of philosophy of science). Important topics like that of
the theory and practice of unified science, of the nature of the
empirical basis of science (the so-called protocol-sentence debate)
and of the general structure of the theories of individual sciences
can only be touched upon selectively. Likewise, while the general
topic of ethical non-cognitivism receives only passing mentions, the
Circle's varied approaches to value theory cannot be discussed
here (for an overview see Rutte 1986, for detailed analyses see Siegetsleitner 2014). Other matters, like the
contributions made by Vienna Circle members to the development of
probability theory and inductive logic, the philosophy of logic and
mathematics (apart from the guiding ideas of Carnap) and to the
philosophy of individual empirical sciences (physics, biology,
psychology, social science), cannot be discussed at all (see Creath
and Friedman 2007 and Richardson and Uebel 2007 for relevant essays with further references).
But it may be noted that with his "logic of science"
Carnap counts among the pioneers of what nowadays is called
"formal epistemology".
### 3.1 Verificationism and the Critique of Metaphysics
Not surprisingly, it was the Circle's rejection of metaphysics
by means of their seemingly devastating criterion of cognitive
significance that attracted immediate opposition. (That they did not
deny all meaning to statements thus ruled out of court was freely
admitted from early on, but this "expressive" surplus was
considered secondary to so-called "cognitive" meaning and
discountable in science (see Carnap 1928b, 1932a).) Notwithstanding
the metaphysicians' thunder, however, the most telling criticisms
of the criterion came from within the Circle or broadly sympathetic
philosophers. When it was protested that failure to meet an empiricist
criterion of significance did not make philosophical statements
meaningless, members of the Circle simply asked for an account of what
this non-empirical and presumably non-emotive meaning consisted in and
typically received no convincing answer. The weakness of their
position was rather that their own criterion of empirical significance
seemed to resist an acceptable formal characterization.
To start with, it must be noted that long before the verification
principle proper entered the Circle's discourse in the late 1920s,
the thought expressed by Mach's dictum that "where neither
confirmation nor refutation is possible, science is not
concerned" (1883 [1960, 587]) was accepted as a basic precept of
critical reflection about science. Responsiveness to evidence for and
against a claim was the hallmark of scientific discourse.
(Particularly the group Frank-Hahn-Neurath, who formed part of a
pre-World War I discussion group (Frank 1941, 1949a) sometimes called
the "First Vienna Circle" (Haller 1985, Uebel 2003), can
be presumed to be familiar with Mach's criterion.) Beyond this,
still in the 1920s, Schlick (1926) convicted metaphysics of falsely
trying to express as logically structured cognition what is but the
inexpressible qualitative content of experience. Already then,
however, Carnap (1928b, §7) edged towards a formal criterion by
requiring empirically significant statements to be such that
experiential support for them or for their negation is at least
conceivable. Meaningfulness meant the possession of "factual
content" which could not, on pain of rendering many scientific
hypotheses meaningless, be reduced to actual testability. Instead, the
empirical significance of a statement had to be conceived of as
possession of the potential to receive direct or indirect experiential
support (via deductive or inductive reasoning).
In 1930, considerations of this sort appeared to receive a
considerable boost due to Waismann's reports of
Wittgenstein's meetings with him and Schlick. Wittgenstein had
discussed the thesis "The sense of a proposition is [the method of] its
verification" in conversations with Schlick and Waismann on 22
December 1929 and 2 January 1930 (see Waismann 1967 [1979, 47 and 79]) and Waismann elaborated it in his "Theses" which were presented to the Circle as
Wittgenstein's considered views. While Wittgenstein appears to have
thought of his dictum primarily as a constitutive principle of meaning, in the Circle it was put to work mainly as a demarcation criterion against metaphysics. Whether it was always noted that Wittgenstein's thesis and the criterion of cognitive significance must be distinguished (the former entails the latter, but not vice versa) may be doubted, but striking differences emerged all the same. Like Carnap's criterion of significance of 1928, the version of the criterion which followed from Wittgenstein's dictum also allowed for
verifiability in principle (and did not demand actual
verifiability): like Carnap's notion of experiential support, it worked with the mere conceivability of verifiability. Yet Wittgenstein's criterion required conclusive verifiability, which Carnap's did not. (The demand for conclusive verifiability was discussed in the meetings
with Wittgenstein.) By 1931, however, it had become clear to some that
this would not do. What Carnap later called the "liberalization
of empiricism" was underway and different camps became
discernible within the Circle. It was over this issue that the
so-called "left wing" with Carnap, Hahn, Frank and Neurath
first distinguished itself from the "more conservative
wing" around Schlick. (See Carnap 1936-37, 422 and 1963a,
§9. Carnap 1936-37, 37n dated the opposition to strict
verificationism to "about 1931".)
In the first place, this liberalization meant the accommodation of
universally quantified statements and the return, as it were, to
salient aspects of Carnap's 1928 conception. Everybody had noted
that Wittgenstein's criterion rendered
universally quantified statements meaningless. Schlick (1931) thus
followed Wittgenstein's own suggestion to treat "hypotheses" instead as
representing rules for the formation of verifiable singular
statements. (His abandonment of conclusive verifiability is indicated
only in Schlick 1936a.) By contrast, Hahn (1933, drawn from lectures
in 1932) pointed out that hypotheses should be counted as properly
meaningful as well and that the criterion be weakened to allow for
less than conclusive verifiability. But other elements played into
this liberalization as well. One that began to do so soon was the
recognition of the problem of the irreducibility of disposition terms
to observation terms (more on this presently). A third element was
that disagreement arose as to whether the in-principle verifiability
or support turned on what was merely logically possible or on what was
nomologically possible, as a matter of physical law etc. A fourth
element, finally, was that differences emerged as to whether the
criterion of significance was to apply to all languages or whether it
was to apply primarily to constructed, formal languages. Schlick
retained the focus on logical possibility and natural languages
throughout, but Carnap had firmly settled his focus on nomological
possibility and constructed languages by the mid-thirties. Concerned
with natural language, Schlick (1932, 1936a) deemed all statements
meaningful for which it was logically possible to conceive of a
procedure of verification; concerned with constructed languages only,
Carnap (1936-37) deemed meaningful only statements for which it
was nomologically possible to conceive of a procedure of confirmation
or disconfirmation.
Many of these issues were openly discussed at the Paris congress in
1935. Already in 1932 Carnap had sought to sharpen his previous
criterion by stipulating that those statements were meaningful that
were syntactically well-formed and whose non-logical terms were
reducible to terms occurring in the basic observational evidence
statements of science. While Carnap's focus on the reduction of
descriptive terms allows for the conclusive verification of some
statements, it must be noted that his criterion also allowed
universally quantified statements to be meaningful in general, provided they were
syntactically and terminologically correct (1932a, §2). It was
not until one of his Paris addresses, however, that Carnap officially
declared the criterion of cognitive significance to be mere confirmability.
Carnap's new criterion required neither verification nor
falsification but only partial testability so as now to include not
only universal statements but also the disposition statements of
science (see Carnap 1936-37; the English translation of the
original Paris address (1936a [1949]) combines it with extraneous
material). These disposition terms were thought to be linked to
observation statements by a variety of reduction postulates or longer
reduction chains, all of which provided only partial definitions
(despite their name they provided no eliminative reductions). Though
plausible initially, the device of introducing non-observational terms
in this way gave rise to a number of difficulties which impugned the
supposedly clear distinctions between logical and empirical matters
and analytic and synthetic statements (Hempel 1951, 1963).
Independently, Carnap himself (1939) soon gave up the hope that all
theoretical terms of science could be related to an observational base
by such reduction chains. This admission raised a serious problem for
the formulation of a criterion of cognitive significance: how was one to rule out
unwanted metaphysical claims while admitting as significant highly
abstract scientific claims?
Consider that Carnap (1939, 1956b) admitted as legitimate theoretical
terms that are implicitly defined in calculi that are
only partially interpreted by correspondence rules between
some select calculus terms and expressions belonging to an
observational language (via non-eliminative reductions). The problem
was that mere confirmability was simply too weak a criterion
to rule out some putative metaphysical claims. Moreover, this problem
arose for both the statement-based approach to the criterion (taken by
Carnap in 1928, by Wittgenstein in 1929/30, and by Ayer in both the
first (1936) and second (1946) editions of *Language, Truth and
Logic*) and for the term-based approach (taken by Carnap since
1932). For the former approach, the problem was that the empirical
legitimacy of statements obtained via indirect testing also
transferred to any expressions that could be truth-functionally
conjoined to them (for instance, by the rule of
'or'-introduction). Statements thus became empirically
significant, however vacuous they had been on their own. For the
term-based approach, the problem was that, given the non-eliminability
of dispositional and theoretical terms, empirical significance was no
longer ascribable to individual expressions in isolation but became a
holistic affair, with little guarantee in turn for the empiricist
legitimacy of all the terms now involved.
For most critics (even within the ranks of logical empiricism), the
problem of ruling out metaphysical statements while retaining the
terms of high theory remained unsolved. By 1950, in response to the
troubles of Ayer's two attempts to account for the indirect
testing of theoretical statements via their consequences, Hempel
conceded that it was "useless to continue to search for an
adequate criterion of testability in terms of deductive relationships
to observation sentences" (1950 [1959, 116]). The following
year, Hempel also abandoned the idea of using, as a criterion of
empirical significance, Carnap's method of translatability into
an antecedently determined empirical language consisting only of
observational non-logical vocabulary. Precisely because it was
suitably liberalized to allow abstract scientific theories with merely
partial interpretations, its anti-metaphysical edge was blunted: it
allowed for combination with "some set of metaphysical
propositions, even if the latter have no empirical interpretation at
all" (1951, 70). Hempel drew the holistic conclusion that the
units of empirical significance were entire theories and that the
measure of empirical significance itself was multi-criterial and,
moreover, allowed for degrees of significance. To many, this amounted
to the demise of the Circle's anti-metaphysical campaign. By
contrast, Feigl's reaction (1956) was to reduce the ambition of
the criterion of significance to the mere provision of necessary
conditions.
Some further work was undertaken on rescuing and, again, debunking a
version of the statement-based criterion, but mostly not by (former) members
of the Vienna Circle. However, in response to the problem of how to
formulate a meaning criterion that suitably distinguished between
empirically significant and insignificant non-observational terms,
Carnap proposed a new solution in 1956. We
will return to discuss it separately (see section 3.5 below); for
now we need only note that these proposals were highly technical and
applied only to axiomatized theories in formal languages. They too,
however, found little favor amongst philosophers. Yet whatever the
problems that may or may not beset them, it would seem that far more
general philosophical considerations contributed to the disappearance
of the problem of the criterion of cognitive significance from most philosophical
discussions since the early 1960s (other than as an example of
mistaken positivism). These include the increasing opposition to the
distinctions between analytic and synthetic statements and
observational and theoretical terms as well as a general sense of
dissatisfaction with Carnap's approach to philosophy which began
to seem both too formalist in execution and too deflationary in
ambition. The entire philosophical program of which the search for a
precise criterion of empirical significance was a part had begun to
fall out of favor (and with it technical discussions about the
criterion's latest version).
The widely perceived collapse of the classical Viennese project to
find in an empiricist meaning criterion a demarcation criterion
against metaphysics--we reserve judgement about Carnap's
last two proposals here--can be interpreted in a variety of ways.
It strongly suggests that cognitive significance cannot be reduced to
what is directly observable, whether that be interpreted in
phenomenalist or intersubjective, physicalist terms. In that important
but somewhat subsidiary sense, the collapse spelt the failure of many
of the reductivist projects typically ascribed to Viennese
neopositivism (but see section 3.3 below). Beyond that, what actually
had failed was the attempt to characterize for natural languages the
class of cognitively significant propositions by recursive definitions
in purely logical terms, either by relations of deducibility or
translatability. What failed, in other words, was the attempt to apply
a general conception of philosophical analysis as purely formal,
pursued also in other areas, to the problem of characterizing
meaningfulness.
This general conception can be considered formalist in several senses.
It was formalist, first, in demanding the analysis of the meaning of
concepts and propositions in terms of logically necessary and
sufficient conditions: it was precise and brooked no exceptions. And
it was formalist, second, in demanding that such analyses be given
solely in terms of the logical relations of these concepts and
propositions to other concepts and propositions: it used the tools of
formal logic. There is also a third sense that is, however, applicable
predominantly to the philosophical project in Carnap's hands, in
that it was formalist since it concentrated on the analysis of
contested concepts via their explication in formal languages.
(Discussion of its viability must be deferred until sections 3.5 and
3.6 below, since what's at issue currently is only the formalist
project as applied to concepts in natural language.) The question
arises whether all Vienna Circle philosophers concerned with empirical
significance in natural language were equally affected, for the
collapse of the formalist project may leave as yet untouched other
ways of sustaining the objection that metaphysics is, in some relevant
sense, cognitively insignificant. (Such philosophers in turn would
have to answer the charge, of course, that only the formalist project
of showing metaphysics strictly meaningless rendered the Viennese
anti-metaphysics distinctive.)
Even though the formalist project became identified with mainstream
logical empiricism generally (consider its prominence in confirmation
theory and in the theory of explanation), it was not universally
subscribed to in the Vienna Circle itself. In different ways, neither
Schlick nor Neurath nor Frank adhered to it. As noted in the overview
above, Schlick's attempts to exhibit natural language meaning
abjured efforts to characterize it in explicitly formal terms, even
though he accepted the demand for necessary and sufficient conditions
of significance. In the end, moreover, Schlick turned away from his
colleagues' search for a criterion of empirical significance. In
allowing talk of life after death as meaningful (1936a), for the very
reason that what speaks against it is only the empirical impossibility
of verifying such talk, Schlick's final criterion clearly left
empiricist strictures behind.
By contrast, Neurath and Frank kept their focus on empirical
significance. While they rarely discussed these matters explicitly,
their writings give the impression that Neurath and Frank chose to
adopt (if not retain) a contextual, exemplar-based approach to
characterizing the criterion of meaninglessness and so decided to
forego the enumeration of necessary and sufficient conditions.
Mach's precept cited earlier is an example of such a pragmatic
approach, as is, it should be noted, Peirce's criterion of
significance, endorsed by Quine (1969), which claims that significance
consists in its making a discernible difference whether a proposition
is true or false. Mach's pragmatic approach had been
championed already before verificationism proper by Neurath, Frank and
Hahn who became, like Carnap, early opponents of conclusive
verifiability. (Indeed, it is doubtful whether Neurath's radical
fallibilism, most clearly expressed already in 1913, ever wavered.)
This pragmatic understanding found clear expression in Neurath's
adoption (1935a, 1938) of K. Reach's formulation of metaphysical
statements as "isolated" ones, as statements that do not
derive from and hold no consequences for those statements that we do
accept on the basis of empirical evidence or for logical reasons.
(Hempel's dismissal, in 1951, of this pragmatic indicator
presupposes the desiderata of the formalist project.) Finally, there
is Frank's suggestion (1963), coupled with his longstanding
advice to combine logical empiricism with pragmatism, that
Carnap's purely logical critique of metaphysics in (1932a) was
bound to remain ineffective as long as the actual use of metaphysics
remained unexamined. It would be worth investigating whether--if
the critique of the alleged reductionist ambitions of their philosophy
could also be deflected (see section 3.3 below)--the impetus of
the anti-verificationist critique can be absorbed by those with a
pragmatic approach to the demarcation against metaphysics. Much as
with Quine's Peirce, such a criterion rules out as without
interest for epistemic activity all concepts and propositions whose
truth or falsity make no appreciable difference to the sets of
concepts and propositions we do accept already.
An entirely different moral was drawn by Reichenbach (1938) and
thinkers indebted to his probabilistic conception of meaning and his
probabilistic version of verificationism, which escaped the criticisms
surveyed above by vagaries of its own. Such theorists perceive the
failure of the formalist model to accommodate the empirical
significance of theoretical terms to stem from its so-called deductive
chauvinism. In place of the exclusive reliance on the
hypothetical-deductive method these theorists employ non-demonstrative
analogical and causal inductive reasoning to ground theoretical
statements empirically. Like Salmon, these theorists adopt a form of
"non-linguistic empiricism" which they sharply
differentiate from the empiricism of the Vienna Circle (Salmon 1985,
2003 and Parrini 1998).
Now against both the pragmatic and the non-linguistic responses to
the perceived failure of the attempt to provide a precise formal
criterion of significance serious worries can be raised. Thus it must
be asked whether, without a precise way of determining when a statement
'makes an appreciable difference', criticism of
metaphysics based on such a criterion may be considered a
biased dismissal rather than a demonstration of fact and so fall short
of what is needed. Likewise in the case of the anti-deductivist
response, it must be noted that a criterion based on analogical
reasoning will only be as effective as the strength of the analogy
which can always be criticized as inapt (and similarly for appeals to
causal reasoning). The very point of exact philosophy in a scientific
spirit--for many the very point of Vienna Circle philosophy
itself--seems threatened by such manoeuvres. Acquiescence in the
perceived failure of the proposed criteria of significance thus comes
with a price: if not that of abandoning Vienna Circle philosophy
altogether, then at least that of formulating an alternative
account of how some of its ambitions ought to be understood.
(Recent reconstructive work on Carnap, Neurath and Frank may be
regarded in this light.)
A still different response--but one emblematic for the
philosophical public at large--is that of another of
Reichenbach's former students, Putnam, who came to reject
the anti-metaphysical project that powered verificationism in its
entirety. Repeatedly in his later years, Putnam called for a
refashioning of analytic philosophy as such, providing, as it were, a
philosophically conservative counterweight to Rorty's turn to
postmodernism. Putnam's reasons (the alleged self-refutation of
the meaning criterion) are still different from those surveyed above
and will be discussed when we return to reconsider the very point of
the Circle's campaign against metaphysics (see section 3.6
below).
### 3.2 The Analytic/Synthetic Distinction and the Relative *A Priori*
Whether the verificationist agenda was pursued in a formalist or
pragmatic vein, however, all members shared the belief that meaningful
statements divided exclusively into analytic and synthetic statements
which, when asserted, were strictly matched with *a priori* and
*a posteriori* reasoning for their support. The Vienna Circle
wielded this pairing of epistemic and semantic notions as a weapon not
only against the substantive *a priori* of the Schoolmen but
also against Kant's synthetic *a priori*. Moreover, their
notion of analyticity comprised both logical and mathematical truths,
thereby extending Wittgenstein's understanding of the former as
"tautological" in support of a broadly logicist
program.
It is well known that this central component of the Vienna
Circle's arsenal, the analytic/synthetic distinction, came under
sharp criticism from Quine in his "Two Dogmas of
Empiricism" (1951a); it is less well known that the criticism can only be
sustained by relying on objections of a type first published by
Tarski. The argument is more complex, but here is a very rough sketch.
So as to discard the analytic/synthetic distinction as an unwarranted
dogma, Quine in "Two Dogmas" argued for the in-principle
revisability of all knowledge claims and criticised the impossibility
of defining analyticity in a non-circular fashion. The first argument
tells against the apodictic *a priori* of old (the eternal
conceptual verities), but, as we shall see, it is unclear whether it
tells against at least some of the notions of the *a priori*
held in the Vienna Circle. The second argument presupposes a
commitment to extensionalism that likewise can be argued not to have
been shared by all in the Circle. By contrast, Tarski had merely
observed that, at a still more fundamental level, he knew of no basis
for a sharp distinction between logical and non-logical terms. (For
relevant primary source materials see also Quine 1935, 1951b, 1963,
Carnap 1950, 1955, 1963b, their correspondence and related previously
unpublished lectures and writings in Creath 1990, the memoir Quine
1991, and Tarski 1936.)
The central role on the Vienna Circle's side in this discussion
falls to Carnap and the reorientation of philosophy he sought to
effect in *Logical Syntax* (1934/37). It was the notion of the
merely relative and therefore non-apodictic *a priori* that
deeply conditioned his notion of analyticity and allowed him to
sidestep Quine's fallibilist argument in a most instructive
fashion. In doing so Carnap built on an idea behind
Reichenbach's early attempt (1920) to comprehend the general
theory of relativity by means of the notion of a merely constitutive
*a priori*. Now Schlick had objected to the residual idealism
of this proposal (see Oberdan 2009) and preferred talk of conventions
instead and Reichenbach soon followed him in this (see Coffa 1991, Ch.
10). Carnap too did not speak of the relative *a priori* as
such (in returning to this terminology present discussions follow
Friedman 1994), but his pluralism of logico-linguistic frameworks
furnishes precisely that.
First consider Schlick as a contrast class. Schlick (1934) appeared to
show little awareness of the language-relativity of the
analytic/synthetic distinction and spoke of analytic truths as
conceptual necessities that can be conclusively surveyed. This would
suggest that Schlick rejected Kant's apodictic synthetic *a
priori* but not the apodicticity of analytic statements. Clearly, if
that were so, Quine's argument from universal fallibilism would
find a target here. Matters are not quite so clear-cut, however.
Schlick had long accepted the doctrine of semantic conventionalism
that the same facts could be captured by different conceptual systems
(1915): this would suggest that his analytic truths were conventions that were
framework-relative and as such necessary only in the very frameworks
they helped to constitute. Yet Schlick did not countenance
the possibility of incommensurable conceptual frameworks:
any fact was potentially expressible in any framework (1936b). As a
result, Schlick did not accept the possibility that after the adoption
of a new framework the analytic truths of the old one may be no longer
assertable, that they could be discarded as no longer applicable even
in translation, as it were. Herein lay a point that Quine's
argument could exploit: Schlick's analytic truths remained
unassailable despite their language-relativity.
Now Carnap, under the banner of the principle of logical tolerance
(1934/37, §17), abandoned the idea of the one universal logic
which had informed Frege, Russell and Wittgenstein before him.
Instead, he recognized a plurality of logics and languages whose
consistency was an objective matter even though axioms and logical
rules were fixed entirely by convention. Already due to this logical
pluralism, the framework-relativity of analytic statements went deeper
for Carnap than it did for Schlick. But Carnap also accepted the
possibility of incommensurability between seemingly similar
descriptive terms and between entire conceptual systems (1936a).
Accepting the analytic truths of the framework of our best physical
theory may thus be incompatible with accepting those of an earlier
one, even if the same logic is employed in both. Carnapian
analyticities do not therefore express propositions that we hold to be
true unconditionally, but only propositions true relative to their own
framework: they are no longer held to be
potentially translatable across all frameworks. Quine's charge of
universal revisability (which itself needs some modification; see
Putnam 1978) thus misses its mark against them. (Quine, of course, also
rejected Carnap's ultimately intensionalist accommodation of radical
fallibilism via the notion of language change.)
Concerning the criticism of the circular nature of the definition of
analyticity, Carnap responded that it pertained primarily to the idea
of analyticity in natural language whereas he was interested in
"explications" as provided by the logic of science (or
better, a logic of science, since there existed no unique logic of
science for Carnap). Explications are reconstructions in a formal
language of selected aspects of complex terms that should not be
expected to model the original in all respects (1950b, Ch.1).
Moreover, Carnap held that explication of the notion of analyticity in
formal languages yielded the kind of precision that rendered the
complaint of circularity irrelevant: vague intuitions of meaning were
no longer relied upon. Those propositions of a given language were
analytic that followed from its axioms and, once the syntactic
limitations of the *Logical Syntax* period had been left
behind, from its definitions and meaning postulates, by application of
its rules: no ambiguity obtained.
So it may appear that the notion of analyticity is easily delimitable
in Carnap's explicational approach: analytic propositions would
be those that constitute a logico-linguistic framework. But
complications arose from the fact that, on Carnap's
understanding, not all propositions defining a logico-linguistic
framework need be analytic ones (1934/37, §51). It was possible
for a framework to consist not only of L-rules, whose entirety
determines the notion of logical consequence, but also of P-rules,
which represent presumed physical laws. So let analytic propositions
be those framework propositions whose negations are
self-contradictory. Here a problem arose once the syntactic
constraints were dropped by Carnap after *Logical Syntax* so as
to allow semantic reasoning and the introduction of so-called meaning
postulates: now the class of analytic propositions was widened to
include not only logical and mathematical truths but also those
obtained by substitution of semantically equivalent expressions. How
was one now to explicate the idea that there can be non-analytic
framework propositions (whose negations are not self-contradictory)?
Consider that, for opponents like Quine, responding that the negations
of non-analytic framework propositions do not contradict meaning
postulates was merely to dress up a presupposed notion of meaning in
pseudo-formal garb: while it provided what looked like formal
criteria, Carnap's method did not leave the circle of
intensional notions behind and so seemed to beg the question. Meaning
postulates (Carnap 1952), after all, could only be identified by
appearing on a list headed "meaning postulates" (as Quine
added in reprints of 1951a).
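The logical shape of the dispute can be made vivid with a schematic illustration (adapted here in modern notation; the bachelor example is the stock one from Carnap 1952):

```latex
% A meaning postulate in Carnap's sense: an object-language sentence
% stipulated to hold by virtue of the meanings of its descriptive
% terms, e.g. for 'bachelor' (B) and 'married' (M):
\forall x\,\bigl(Bx \rightarrow \neg Mx\bigr)

% Relative to a language L with consequence relation \vdash_L and a
% chosen set MP_L of such postulates, analyticity is then
% framework-relative: a sentence A is analytic in L just in case
\mathrm{MP}_L \vdash_L A
```

Quine's complaint, on this rendering, is that nothing beyond their appearing in the list \(\mathrm{MP}_L\) distinguishes meaning postulates from any other accepted sentences.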
Here one must note that in *Logical Syntax*, Carnap also
modified the thesis of extensionality he had previously defended
alongside Russell and Wittgenstein: now it merely claimed the
possibility of purely extensional languages and no longer demanded
that intensional languages be reduced to them (*ibid.*,
§67). Of course, the mere claim that the language of science can
be extensional still proves troublesome enough, given that in such a
language a distinction between laws and accidentally true universal
propositions cannot be drawn (the notion of a counterfactual
conditional, needed to distinguish the former, is an intensional one).
Even so, this opening of Carnap's towards intensionalism already
at the height of his syntacticism--to say nothing of his explicit
intensionalism in *Meaning and Necessity* (1947)--seems
enough to thwart Quine's second complaint in "Two
Dogmas". Carnap did not share Quine's extensionalist
agenda, so the need to break out of the circle of intensional notions
once these were clearly defined in his formal languages did not apply.
That theirs were in fact different empiricist research programmes was
insufficiently stressed, it would appear, by Quine and Quinean critics
of Carnap (as noted pointedly by Stein 1992; cf. Ricketts 1982, 2003, Creath 1991, 2004,
Richardson 1997).
To sustain his critique, Quine had to revive his and Tarski's
early doubts about Carnap's methodological apparatus and dig
even deeper. (Tarski also shared Quine's misgivings about
analyticity when they discussed these issues with Carnap at Harvard;
see Mancosu 2005, Frost-Arnold 2013.) Their scepticism found its
target in Carnap's ingenious measures in *Logical Syntax*
taken to preserve the thesis that mathematics is analytic from the
ravages of Gödel's incompleteness theorems. Gödel
proved that every consistent formal system strong enough to represent
number theory contains a formula that is true but such that neither it
nor its negation is provable in that system; such formulae--known since
as Gödel sentences--are provable in a still stronger system
which, however, also contains a formula of its own that is true but
not provable in it (and neither is its negation). Commonly,
Gödel's proof is taken to have undermined the thesis of the
analyticity of arithmetic. (For discussions of this challenge to
Carnap's logical syntax project, see Friedman 1988, 1999a, 2009,
Goldfarb and Ricketts 1992, Richardson 1994, Awodey and Carus 2003,
2004.) Carnap responded by stating that arithmetic demands an infinite
sequence of ever richer languages and by declaring analytic statements
to be provable by non-finite reasoning (1934/37, §§60a-d). This
looked like fitting the bill on purely technical grounds, but it is
questionable whether such reasoning may still count as syntactic.
Nowadays, it is computational effectiveness that is taken to
distinguish purely formal from non-formal, material reasoning.
(Carnap's move highlights the tension within *Logical
Syntax* between formal and crypto-semantic reasoning: that the rigid
syntacticism officially advertised there was at the same time
undermined as its failings were being compensated--illegitimately so
by official standards, e.g., by considering translatability a
syntactic notion--points ahead to his acceptance of semantics in 1935, only one
year after the publication of *Logical Syntax* and contrary to
his disparagement of the notion of synthetic truth there. For a discussion of Carnap's moves, see Coffa
1978, Ricketts 1996, Goldfarb 1997, Creath 1996, 1999.) That
Carnap's reconstruction of arithmetic was not standard logicism is no
criticism; that he unduly stretched the idea of formal
reasoning is. Was he saved by his shift to semantics?
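The incompleteness result glossed above, and the regress it forced on Carnap, can be stated schematically in a standard modern formulation (assuming the theory's soundness for simplicity; this is not the Circle's own notation):

```latex
% Gödel's first incompleteness theorem: for any consistent,
% recursively axiomatized theory T containing elementary arithmetic
% there is a sentence G_T such that
T \nvdash G_T
\quad\text{and}\quad
T \nvdash \neg G_T,
\qquad\text{although}\quad
\mathbb{N} \models G_T .

% Adding G_T yields a richer theory T' = T + G_T, which has a Gödel
% sentence G_{T'} of its own, and so on -- hence Carnap's appeal to
% an infinite sequence of ever stronger languages.
```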
Tarski (1936) granted the language-relativity of the reconstructed
notion of analyticity in *Logical Syntax*. He also did not
object that Carnap's procedure of circumventing the problem
which the Gödel sentences presented to the thesis of the
analyticity of arithmetic was illegitimate. Tarski rather questioned
whether there were "objective reasons" for the sharp
distinction between logical and non-logical terms and he pointed out
that Carnap's distinction between the logical and the empirical
was not a hard and fast one. Since noting that the distinction between
logical and non-logical was not a sharp one and arguing that no
principled distinction could be upheld between them are two quite
different reactions, however, Tarski's point on its own does not
fully support the Quinean critique. Quine's conclusion (1940,
§60) that the notion of logical truth itself is
"informal" rather reflects the moral that he drew from
Tarski's observation. It appears that what motivated him (after
a nominalistic interlude in the 1940s) to develop his naturalistic alternative to
Carnap's conception of philosophy was his considered rejection
of Carnap's accommodation of the thesis that arithmetic is
analytic to Gödel's result.
Different strands of Quine's criticism of the analytic/synthetic
distinction must thus be distinguished. While Quine's criticisms
in "Two Dogmas" on their own clearly did not undermine all
forms of the distinction that were defended in the Vienna
Circle--Carnap's reconstructions of the notion of
analyticity did not express unconditionally necessary and unrevisable
propositions--they do gain in plausibility even against
Carnap's once it is recognized that the deepest ground of
contestation lies elsewhere: not in the notion of analyticity widely
understood but in that of logical truth narrowly understood. Read in
this way, Quine can be seen to argue that the notion of L-consequence
as explication of analytic truth--as opposed to P-consequence as
non-analytic, mere framework entailments--traded not only on the
idea of non-finitary notions of proof but also on a distinction of
logical from descriptive expressions that itself only proceeded on the
basis of a finite enumeration of the former (compare Carnap 1934/37,
§§51-52). (This deficit was not repaired in
Carnap's later work either; see Awodey 2007.) What Quine
criticized was precisely the fact that Carnap could ground the
distinction between logical and non-logical terms no deeper than by
the enumeration of the former in a given framework: was the
distinction therefore not quite arbitrary?
Quine's direct arguments against the distinction between logical
and empirical truth (1963) have been found to beg the question against
Carnap and his way of conceiving of philosophy (Creath 2003). This way
of responding to Quine's objection requires us to specify still
more precisely just what Carnap thought he was doing when he employed
the distinction between analytic and synthetic propositions. To be
sure, in (1955) he gave broadly behavioural criteria for when meaning
ascriptions could be deemed accepted in linguistic practice, but he also noted that this was not a general requirement for the
acceptability of explicatory discourse. To repeat, explications did
not seek to model natural language concepts in their tension-filled
vividness, but to make proposals for future use and to extract and
systematize certain aspects for constructive purposes. Thus Carnap
clarified (1963b, §32) that he regarded the distinction between
analytic and synthetic statements--just like the distinction
between descriptive or factual and prescriptive or evaluative
statements--not as descriptive of natural language practices, but
as a constructive tool for logico-linguistic analysis and theory
construction. It is difficult to overstress the significance of this
stance of Carnap's for the evaluation of his version of the
philosophical project of the Vienna Circle. Carnap's
understanding of philosophy has thus been aptly described as the
"science of possibilities" (Mormann 2000).
As Carnap understood the analytic/synthetic distinction, it was a
distinction drawn by a logician to enable greater theoretical
systematicity in the reconstructive understanding of a given symbol
system, typically a fragment of a historically developed one. That
fully determinative objective criteria of what to regard as a logical
and what as a non-logical term cannot be assumed to be pre-given does
not then in and of itself invalidate the use of that distinction by
Carnap. On the contrary, it has been convincingly argued that Carnap
himself did not hold to a notion of what is a factual and what is a
formal expression or statement that was independent of the
specification of the language in question (Ricketts 1994). The
ultimate ungroundedness of his basic semantic explicatory categories,
this suggests instead, was a fact that his own theories fully
recognized and consciously exploited. (Somewhat analogously, that we
cannot define science independently of the practice of scientists of
"our culture" was admitted by Neurath 1932a, Carnap 1932d
and Hempel 1935, much to the exasperation of Zilsel 1932 and Schlick
1935a.)
It remained open for Carnap then to declare his notion of analyticity
to be only operationally defined for constructed languages and to let
that notion be judged entirely in terms of its utility for
meta-theoretical reflection. Just on that account, however, a last
hurdle remains: finding a suitable criterion of significance for
theoretical terms that allows the distinction between analytic and
synthetic statements to be drawn in the non-observational, theoretical
languages of science. (This was a problem ever since the
non-eliminative reducibility of disposition terms had been accepted
and one that still held for Carnap's 1956 criterion; see section
3.5 below). Only if that can be done, we must therefore add, can
Carnap claim his formalist explicationist project to emerge unscathed
from the criticisms of both Tarski and Quine.
An important related though independently pursued line of criticism
may be noted here. It finds its origin in Saunders Mac Lane's
review (1937) of Carnap's *Logical Syntax* and focusses
less on the analytic/synthetic distinction than on Carnap's
failure to give a formally correct definition of logic. It challenges
the ambition to have accounted for the formal sciences but declines to
embrace a naturalistic alternative. Further research along these lines
has suggested to some that by extending Carnap's approach and
framework it can be linked to attempts in category theory to provide
the missing definition (see Awodey 2012, 2017), while a different response to Mac Lane's as well as Quine's criticisms defends Carnap's inability, frankly admitted in (1942, 57), to define logical terms as such in full generality (Creath 2017).
None of the above considerations should lead one to deny, however,
that one can find understandings of the term "analytic" by
members of the Vienna Circle (like Schlick) that do fall victim to the
criticisms of Quine more easily. Nor should one discount the fact that
Carnap's logic of science emerges as wilfully ill-equipped to
deal with the problems that exercise the traditional metaphysics or
epistemologies that deal in analyticities. (Of course, unlike his
detractors, Carnap considered this to be a merit of his approach.)
Lastly, it must be noted that Carnap's intensionalist logic of
science holds out the promise of practical utility only for the price
of a pragmatic story that remains to be told. Of what nature are the
practical considerations and decisions that, as Carnap so freely
conceded (1950a), are called for when choosing logico-linguistic
frameworks? (Such conventional choices do not respond to truth or
falsity, but instead to whatever is taken to measure convenience.)
That Carnap rightly may have considered such pragmatic questions
beyond his own specific brief as a logician of science does not
obviate the need for an answer to the question itself. (Carus 2007
argues that in this broadly pragmatic dimension lies the very point of
Carnap's explicationism.) On this issue, too, it would have been
helpful if there had been more collaboration with Neurath and Frank,
who were sympathetic to Carnap's explication of analyticity but
did not refer to it much in their own, more practice-oriented
investigations (see section 3.6 below).
### 3.3 Reductionism and Foundationalism: Two Criticisms Partly Rebutted
As it happens, anti-verificationism has two aspects: opposition to
meaning reductionism and opposition to the formalist project. Turning
to the former, we must distinguish two forms of reductionism,
phenomenalist and physicalist reductionism. Phenomenalism holds
statements to be cognitively significant if they can be reduced to
statements about one's experience, be it outer (senses) or inner
(introspection). Physicalism holds statements to be cognitively
significant if they can be reduced or evidentially related to
statements about physical states of affairs. Here it must be noted not
only that the Vienna Circle is typically remembered in terms of the
apparently phenomenalist ambitions of Carnap's *Aufbau*
of 1928, but also that already by the early 1930s some form of
physicalism was favoured by some leading members including Carnap
(1932b) and that already in the *Aufbau* the possibility of a
different basis for a conceptual genealogy than the phenomenal one actually chosen had been indicated. Thus one
must not only ask about the reductionism in the *Aufbau* but
also consider just how reductivist in intent the physicalism was meant
to be.
Considerations can begin with an early critique that has given rise in
some quarters to a sharp distinction between Viennese logical
positivism and German logical empiricism, with the former accused of
reductionism and the latter praised for their anti-reductionism, a
distinction which falsely discounts the changing nature and variety of
Vienna Circle doctrines. Reichenbach's defense of empiricism
(1938) turned on the replacement of the criterion of strict
verifiability with one demanding only that the degree of probability
of meaningful statements be determinable. This involved opposition
also to demands for the eliminative reduction of non-observational to
observational statements: both phenomenalism and reductive physicalism
were viewed as untenable and a correspondentist realism was advanced
in their stead. Now it is true that of the members of the Vienna
Circle only Feigl ever showed sympathies for scientific realism, but
it is incorrect that all opposition to it in the Circle depended on
the naive semantics of early verificationism. Again, of course, some
Vienna Circle positions were liable to Reichenbach's
criticism.
Another misunderstanding to guard against is that the Vienna
Circle's ongoing concern with "foundational issues"
and the "foundations of science" does in itself betoken
foundationalism. (In the Vienna Circle's days, foundationalism
had it that the basic items of knowledge upon which all others
depended were independent of each other, concerned phenomenal states
of affairs and were infallible; nowadays, foundationalists drop
phenomenalism and infallibility.) Already the manifesto sought to make
clear the Circle's opposition when it claimed that "the
work of 'philosophic' or 'foundational'
investigations remains important in accord with the scientific world
conception. For the logical clarification of scientific concepts,
statements and methods liberates one from inhibiting prejudices.
Logical and epistemological analysis does not wish to set barriers to
scientific enquiry; on the contrary, analysis provides science with as
complete a range of formal possibilities as is possible, from which to
select what best fits each empirical finding (example: non-Euclidean
geometries and the theory of relativity)." (Carnap-Hahn-Neurath
1929 [1973, 316]) This passage can be read as an early articulation of
the project of a critical-constructivist meta-theory of science that
abjures a special authority of its own beyond that stemming from the
application of the methods of the empirical and formal sciences to
science itself, but instead remains open to what the actual practice
of these sciences demands.
How then can Vienna Circle philosophy be absolved of foundationalism? As
noted, it is the *Aufbau* (and echoes of it in the manifesto)
that invites the charge of phenomenalist reductionism. To begin with,
one must distinguish between the strategy of reductionism and the
ambition of foundationalism. Concerning the *Aufbau* it has
been argued that its strategy of reconstructing empirical knowledge
from the position of methodological solipsism (phenomenalism without
its ontological commitments and some of its epistemological ambitions)
is owed not to foundationalist aims but to the ease by which this
position seemed to allow the demonstration of the interlocking and
structural nature of our system of empirical concepts, a system that
exhibited unity and afforded objectivity, which was Carnap's
main concern. (See Friedman's path-breaking 1987 and 1992, and Richardson 1990, 1998, Ryckman
1991, Pincock 2002, 2005. For the wide variety of influences on the
*Aufbau* more generally, see Damböck 2016.) However, it is
hard to deny categorically that Carnap ever harbored foundationalist
ambitions. Not only did Carnap locate his *Aufbau* very close
to foundationalism in retrospect (1963a), but a passage in his (1930)
led Uebel (2007, Ch.6) to claim that around 1929/30 Carnap was motivated by
foundationalist principles and reinterpreted his own *Aufbau*
along these lines (around the same time that Wittgenstein
entertained a psychologistic reinterpretation of his own
*Tractatus* that was reported back to the Circle by Waismann).
To correct this foundationalist aberration was the task of the
Circle's subsequent protocol-sentence debate about the content,
form and status of the evidence statements of science.
This concession to the foundationalist misinterpretation of Vienna
Circle philosophies generally must not, however, be taken to tell
against the new reading of the *Aufbau* or the epistemologies
developed from 1930 onwards on the physicalist wing of the Circle. In fact, the *Aufbau* itself fails when it is read as
a foundationalist project, as it was by Quine (1951a) who pointed out
that no eliminative definition of the relation 'is at' was
provided (required for locating objects of consciousness in physical
space). Yet other failures of reduction were detected by Richardson (1998, Ch.3), further questioning the *Aufbau's* supposed foundationalist intent. Ultimately it was a still different failure of reduction that prompted Carnap to abandon as
mistaken reconstructions of the scientific language on the basis of
methodological solipsism (though not logical investigations of such languages for their own sake, as noted in his 1961a). Initially Carnap had not been prepared to draw this conclusion even though Neurath (1931b, 1932a) argued that such a type of
rational reconstruction traded on objectionable
counterfactual presuppositions (methodological solipsism did not
provide a correct description of the reasoning involved in cognitive
commerce with the world around us). At the time Carnap
merely conceded that it was more "convenient" to
reconstruct the language of science on a physicalistic basis (1932e). Only the failure of reducing dispositional predicates to observational ones convinced him to abandon the methodologically solipsist approach (1936-37, 464 and 10) and to adopt an exclusively physicalist basis for his reconstructions of the language of science from then on (a decision explicitly reaffirmed in 1963b).
Carnap's initial reluctance to draw this conclusion is best interpreted, not as hankering after epistemological foundations, but as indicating the growing conviction that philosophical methodology had to be built on logical reasons and should not be "entangled with psychological
questions" (1934/37, §72). Moreover, already Carnap's initial hope for a phenomenalist reduction in the *Aufbau* as well as his decision in (1932e) to loosen the methodologically solipsist claim on epistemology were motivated by logical considerations, the former by mistaken ideas in *Untersuchungen zur allgemeinen Axiomatik* (2000, unpublished at the time) and the latter partly as a consequence of Gödel's advice on a faulty definition of analyticity in a draft of *Logical Syntax* (see Ricketts 2010 and Awodey and Carus 2007 respectively).
It could likewise be asked concerning physicalism whether it represented, on a different basis, the pursuit of a
foundationalist agenda. But that Carnap later in
(1936/37) called his non-eliminative definitions of disposition
terms "reduction sentences" already indicates that it was enough
for him to provide a basis for the applicability of these terms by
merely sufficient but not necessary conditions. Likewise,
Carnap's proposal (1939) to conceive of theoretical terms as
defined by implicit definition in a non-interpreted language, selected terms of which were, however, linked via non-eliminative
reductive chain to the observational language, suggests that what concerned him
primarily was the capture of indicator relations to sustain in
principle testability of statements containing the terms in question.
This is best understood as an attempt to preserve the empirical
applicability of the formal languages constructed for high-level theory, but
not as reductivism with regard to some foundational given.
By contrast, Neurath never advocated methodological solipsism. Consider that his
complex conception of the form of protocol statements (1932b)
explicated the concept of observational evidence in terms that
expressly reflected debts to empirical assumptions which called for
theoretical elaboration in turn. For unlike the logician of science
Carnap, who left it to psychology and brain science to determine more
precisely what the class of observational predicates was that could
feature in protocol statements (1936/37), the empirically oriented
meta-theoretician of science Neurath was concerned to encompass and
comprehend the practical complexities of reliance on scientific
testimony: the different clauses (embeddings) of his proposed protocol
statements stand for conditions on the acceptance of such testimony
(see Uebel 2009). In addition, Neurath's theory of protocol
statements also makes clear that his understanding of physicalism did
not entail the eliminative reduction of the phenomenon of
intentionality but, like Carnap (1932c), merely sought its integration
into empiricist discourse.
Given these different emphases of their respective physicalisms,
mention must also be made of the significant differences between
Carnap's and Neurath's conceptions of unified science:
where the formalist Carnap once preferred a hierarchical ordering of
finitely axiomatized theoretical languages that allowed at least
partial cross-language definitions and derivations--these
requirements were liberalized over the years (1936b), (1938),
(1939)--the pragmatist Neurath opted from the start to demand
only the interconnectability of predictions made in the different
individual sciences (1935a), (1936c), (1944). (Meteorology, botany and
sociology must be combinable to predict the consequences of a forest
fire, say, even though each may have its own autonomous theoretical
vocabulary.) Here too it must be remembered that, unlike Carnap,
Neurath only rarely addressed issues in the formal logic of science
but mainly concerned himself with the partly contextually fixed
pragmatics of science. (One exception is his 1935b, a coda to his
previous contributions to the socialist calculation debate with Ludwig
von Mises and others.) Not surprisingly, at times the priorities set
by Neurath for the pragmatics of science seemed to conflict with those
of Carnap's logic of science. (These tensions often were
palpable in the grand publication project undertaken by Carnap and
Neurath in conjunction with Morris, the International Encyclopedia of
the Unity of Science; see Reisch 2003.) That said, however, note that
Carnap's more hierarchical approach to the unity of science also
does not support the attribution of foundationalist ambitions.
But for a brief lapse around 1929/30 and perhaps in some pre-Vienna years, then,
Carnap fully represents the position of Vienna Circle
anti-foundationalism. In this he joined Neurath whose long-standing
anti-foundationalism is evident from his famous simile, first used in
1913, that likens scientists to sailors who have to repair their boat
without ever being able to pull into dry dock (1932b). Their positions
contrasted at least *prima facie* with that of Schlick (1934)
who explicitly defended the idea of foundations in the Circle's
protocol-sentence debate. Even Schlick conceded, however, that all
scientific statements were fallible ones, so his position on
foundationalism was by no means the traditional one. The point of his
"foundations" remained less than wholly clear and
different interpretations of it have been put forward (e.g., Oberdan 1998, Uebel 1996, 2020). (On the protocol
sentence debate as a whole, which included not only the debate between
Carnap and Neurath but also debates between the physicalists and Schlick and other occasional participants,
see, e.g., the differently centered accounts of Uebel 1992, 2007,
Oberdan 1993, Cirera 1994.) While all in the Circle thus recognized as
futile the attempt to restore certainty to scientific knowledge
claims, not all members embraced positions that rejected
foundationalism *tout court*. Clearly, however, attributing
foundationalist ambitions to the Circle as a whole constitutes a total
misunderstanding of its internal dynamics and historical development,
if it does not bespeak wilful ignorance. At most, a foundationalist
faction around Schlick can be distinguished from the so-called left
wing whose members pioneered anti-foundationalism with regard to both
the empirical and formal sciences.
### 3.4 Scientific Theories, Theoretical Terms and the Problem of Realism
Yet even if it be conceded that the members of the Vienna Circle did
not harbour undue reductionist-foundationalist ambitions, the question
remains open whether they were able to deal with the complexities of
scientific theory building.
Here the prominent role of Schlick must be mentioned, whose
*General Theory of Knowledge* (1918, second edition 1925) was
one of the first publications by (future) members of the Vienna Circle
to introduce the so-called two-languages model of scientific theories.
According to this model, scientific theories comprised an
observational part formulated with observational predicates as
customarily interpreted, in which observations and experiential laws
were stated, and a theoretical part which consisted of theoretical
laws the terms of which were merely implicitly defined, namely, in
terms of the roles they played in the laws in which they figured. Both
parts were connected in virtue of a correlation that could be
established between selected terms of the theoretical part and
observational terms. In the second half of the 1920s, however,
Schlick's model, involving separate conceptual systems, was put
aside in favor of a more streamlined conception of scientific theories
along lines as suggested by the *Aufbau*. Clearly, however,
Schlick's model represents an early form of the conception of
scientific theories as uninterpreted calculi connected to observation
by potentially complicated correspondence rules that Carnap
reintroduced in (1939) and that became standard in the "received
view". (Another, albeit faint precursor was the idea contained
in a 1910 remark of Frank's pointing out the applicability of
Hilbert's method of implicit definition to the reconstruction of
empirical scientific theories as conceived, also along the lines of
two distinct languages, by the French conventionalists Rey and Duhem;
see Uebel 2003.)
Even granted the model in outline, questions arise both concerning its
observational base as well as its theoretical superstructure. We
already discussed one aspect of the former topic, the issue of
protocols, in the previous section; let's here turn to the
latter topic. Talk of correspondence rules only masks the problem that
is raised by theoretical terms. One of the pressing issues concerns
their so-called surplus meaning over and above their observational
consequences. This issue is closely related to the problem of
scientific realism: are there truth-evaluatable matters of fact for
scientific theories beyond their empirical, observational adequacy?
Even though the moniker "neo-positivism" would seem to
prescribe an easy answer as to what the Vienna Circle's position
was, it must be noted that just as there is no consensus discernible
today there was none in the Circle beyond certain basics that left the
matter undecided.
All in the Vienna Circle followed Carnap's judgement in
*Pseudoproblems of Philosophy* (1928b) and Schlick's
contention in his response to Planck's renewal of anti-Machian
polemics (1932) that questions like that of the reality of the
external world were not well-formed ones but only constituted
pseudo-questions. While this left the observables of empirical reality
clearly in place, theoretical entities remained problematical: were
they really only computational fictions introduced for the ease with
which they allowed complex predictive reasoning, as Frank (1932)
held? This hardly seems to do justice to the surplus meaning of
theoretical terms over and above their computational utility: theories
employing them seem to tell us about non-observable features of the
world. This indeed was Feigl's complaint (1950) in what must
count as the first of very few forays into "semantical
realism" (scientific realism by another word) by a former member
of the Vienna Circle--and one that was quickly opposed by
Frank's instrumentalist rejoinder (1950). Carnap sought to remain
aloof on this as on other ontological questions. So while in the
heyday of the Vienna Circle itself the issue had not yet come into
clear focus, by mid-century one could distinguish amongst its
surviving members both realists (Feigl) and anti-realists (Frank) as
well as ontological deflationists (Carnap).
Carnap's general recipe for avoiding undue commitments (while
pursuing his investigations of various language forms, including the
intensional ones Quine frowned upon) was given in terms of the
distinction between so-called internal and external questions (1950a).
Given the adoption of a logico-linguistic framework, we can state the
facts in accordance with what that framework allows us to say. Given
any of the languages of arithmetic, say, we can state as arithmetical
fact whatever we can prove in them; to say that there are
numbers, however, is at best to express the fact that numbers are a
basic category of that framework (irrespective of whether they are
logically derived from a still more basic category). As to whether
certain special types of numbers exist (in the deflated sense), that
depends on the expressive power of the framework at hand and on
whether the relevant facts can be proven. Analogous considerations
apply to the existence of physical things (the external world) given
the logico-linguistic frameworks of everyday discourse and empirical
science. (The near-tautologous nature of these categorical claims in
Carnap's hands echoes his earlier diagnosis of metaphysical
claims as pseudo-statements; see also 1934/37, Part V.A.) Unlike such
internal questions, however, external questions, questions whether
numbers or electrons "really exist" irrespective of any
framework, are ruled out as illegitimate and meaningless. The only way
in which sense could be given to them was to read them as pragmatic
questions concerned with the utility of talk about numbers or
electrons, of adopting certain frameworks. Carnap clearly retained his
basic position: existence claims remain the
province of science, where they must be seen as mediated by the
available conceptual tools of inquiry. Logicians of science are in no
position to double-guess the scientists in their own proper
domain. (Needless to say, Quine 1951b opposed the internal/external distinction as much as his 1951a opposed the analytic/synthetic distinction.)
Matters came to a head with the discovery of a proof (see Craig 1956)
that the theoretical terms of a scientific theory are dispensable in
the sense of it being possible to devise a functionally equivalent
theory that does not make use of them. Did this not rob theoretical
terms of their distinctive role and so support instrumentalism? The
negative answer was twofold. As regards defending their utility,
Carnap (1963b, §24) agreed with Hempel (1958) that in practice
theoretical terms were indispensable in facilitating inductive
relations between observational data. As regards the defense of their
cognitive legitimacy, Carnap held that this demanded determining what
he called their "experiential import", namely, determining
what specifically their empirical significance consisted in. It was
for this purpose that Carnap came to employ Ramsey's method of
regimenting theoretical terms. Nowadays this so-called ramseyfication
is often discussed as a means for expressing a position of
"structural realism", a position midway between
fully-blown scientific realism and anti-realism and so sometimes
thought to be of interest to Carnap. Carnap's own concern with
ramseyfication throws into relief not only the question of the
viability of one of the Vienna Circle's most forward-looking
stances in the debate about theoretical terms--intending to avoid
both realism and anti-realism--but also several other issues that
bear on the question of which, if any, forms of Vienna Circle
philosophy remain viable.
### 3.5 Carnap's Later Meaning Criterion and the Problem of Ramseyfication
Note that the issue of realism vis-à-vis theoretical terms is
closely related to two other issues central to the development of
Vienna Circle philosophy: Carnap's further attempts to develop a
criterion of empiricist significance for the terms of the theoretical
languages of science and his attempts to defend the distinction
between synthetic and analytic statements with regard to such
theoretical languages.
In 1956 Carnap introduced a new criterion of significance specifically
for theoretical terms (1956b). This criterion was explicitly
theory-relative. Roughly, Carnap first defined the concept of the
"relative significance" of a theoretical term. A term is
relatively significant if and only if there exists a statement in the
theoretical language that contains it as the only non-logical term and
from which, in conjunction with another theoretical statement and the
sets of theoretical postulates and correspondence rules, an
observational statement is derivable that is not derivable from that
other theoretical statement and the sets of theoretical postulates and
correspondence rules alone. Then Carnap defined the
"significance" of a theoretical term in terms of it
belonging to a sequence of such terms such that each is relatively
significant to the class of those terms that precede it in the
sequence. Now those theoretical statements were legitimate and
cognitively significant that were well-formed and whose descriptive
constants were significant in the sense just specified. It is clear that
by the stepwise introduction of theoretical terms as specified, Carnap
sought to avoid the deleterious situations that rendered Ayer's
criterion false (and his own of 1928). Nevertheless, this proposal too
was subjected to criticism (e.g., Rozeboom 1960, Kaplan 1975a). A
common impression amongst philosophers appears to be that this
criterion failed as well, but this judgement is by no means
universally shared (for the majority view see Glymour 1980, for a
contrary assessment see Sarkar 2001). Thus it has been argued that
subject to some further refinements, Carnap's proposal can be
made to work (see for discussion Creath 1976, Justus 2014, Lutz 2017)--as long as the sharp
distinction between observational and theoretical terms can be
sustained. (In light of the objections to the latter distinction one
wants to add: or by a dichotomy of terms functionally equivalent to
it.)
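Schematically, the relative-significance clause just described can be rendered as follows (a rough gloss in notation not Carnap's own; *T* and *C* stand for the conjunctions of theoretical postulates and correspondence rules):

```latex
% A schematic gloss of Carnap's (1956b) criterion; the symbols
% S_M, S_K, S_O are illustrative, not Carnap's own notation.
% A term M is significant relative to a class K of theoretical terms
% iff there are sentences S_M (containing M as its only descriptive
% term), S_K (with descriptive terms drawn only from K), and an
% observational sentence S_O such that
(S_M \wedge S_K \wedge T \wedge C) \vDash S_O
\quad\text{while}\quad
(S_K \wedge T \wedge C) \nvDash S_O .
% M is then significant simpliciter iff it occurs in a sequence
% M_1, \ldots, M_n in which each M_i is significant relative to
% the class \{M_1, \ldots, M_{i-1}\} of its predecessors.
```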
Carnap's own position on his 1956 criterion appears somewhat
ambiguous. While he is reported to have accepted one set of criticisms
(Kaplan 1975b), he also asserted even after they had been put to him
that he thought his 1956 criterion remained adequate (1963b,
§24b). Yet Carnap there also advised investigation of
whether still another, then entirely new approach to theoretical terms
that he was developing would allow for an improved criterion of significance
for them. So when Carnap offered "the Ramsey method" as a
method of characterizing the "empirical meaning of theoretical
terms" it was not their empirical significance as such but the
specific empirical import of theoretical terms that he considered
(1966, Ch. 26). What prompted him to undertake his investigations of
ramseyfications was not dissatisfaction with his 1956 proposal as a
criterion of significance for theoretical terms, but the fact that it still proved
impossible with this model to draw the distinction between synthetic and analytic
statements in the theoretical language. The reason for this was that
the postulates for the theoretical language also specify factual
relations between phenomena that fall under the concepts that are
implicitly defined by them. (As noted, a similar problem already had
plagued Carnap's analyses of disposition terms ever since he
allowed for non-eliminative reduction chains.)
Carnap's attempt to address this problem by ramseyfication was
published in several places from 1958 onwards. (See Carnap 1958, 1963,
§§24C–D and the popular exposition 1966, Chs. 26 and 28; compare
Ramsey 1929 and see Psillos 1999, Ch.3. This proposal and a variant of
it (1961b) were both presented in his 1959 Santa Barbara lecture
(published in Psillos 2000a); as it happened, Kaplan presented his
criticism (1975a) of Carnap's 1956 criterion at the same
conference.) With ramseyfication Carnap adverted again to entire
theories as the unit of assessment. Ramseyfication consists in the
replacement of the theoretical terms of a finitely axiomatized theory
by bound higher-order variables. This involves combining all the
theoretical postulates which define theoretical terms (call this
conjunction *T*) and correspondence rules of a theory which
link some of these theoretical terms with observational ones (call
this *C*) in one long sentence (call this *TC*) and then
replacing all the theoretical predicates that occur in it by bound
higher-order variables (call this *RTC*). This is
the so-called Ramsey-sentence of the entire theory; in it no
theoretical terms appear, but it possesses the same explanatory and
predictive power as the original theory: it has the same observational
consequences. However, Carnap stressed that the Ramsey sentence cannot
be said to be expressed in a "simple" but only an
"extended" observational language, for due to its
higher-order quantificational apparatus it includes "an advanced
complicated logic embracing virtually the whole of mathematics"
(1966, [1996, 253]).
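The construction just described can be displayed schematically (the subscripted variable names are illustrative only):

```latex
% Write the conjunction of theoretical postulates and correspondence
% rules, exhibiting its theoretical terms T_1,...,T_n and its
% observational terms O_1,...,O_m:
TC(T_1, \ldots, T_n;\, O_1, \ldots, O_m)
% Replacing each theoretical term by a bound higher-order variable
% yields the Ramsey sentence of the theory:
{}^{R}TC \;:=\; \exists X_1 \cdots \exists X_n\;
TC(X_1, \ldots, X_n;\, O_1, \ldots, O_m)
% ^{R}TC contains no theoretical terms but has the same
% observational consequences as TC.
```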
To distinguish between analytic and synthetic statements in the
theoretical language Carnap made the following proposal. Let the
Ramsey sentence of the conjunction of all theoretical postulates and
the conjunction of all correspondence rules of that theory be
considered as expressing the entire factual, synthetic content of the
scientific theory and its terms in their entirety. By contrast, the
statement "*RTC* ⊃ *TC*"
expressed the purely analytic component of the theory, its
"A-postulate" (or so-called Carnap sentence). This
A-postulate states that if entities exist (referred to by the
existential quantifiers of the Ramsey sentence) that are of a kind
bound together by all the relations expressed in the theoretical
postulates of the theory and that are related to observational
entities by all the relations specified by the correspondence
postulates of the theory, then the theory itself is true. Or
differently put, the A-postulate "says only that if the world is
this way, then the theoretical terms must be understood as satisfying
the theory" (1966 [1996, 271]). With *RTC* ⊃ *TC*
expressing a meaning postulate Carnap
claimed to have separated the analytic and synthetic components of a
scientific theory.
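So understood, the original theory is logically recoverable from the two components: the Ramsey sentence together with the Carnap sentence entails *TC*, and conversely (a standard observation, not Carnap's own formulation):

```latex
% Synthetic component: the Ramsey sentence  ^{R}TC
% Analytic component (Carnap sentence):     ^{R}TC \supset TC
% The conjunction of the two is logically equivalent to the theory:
TC \;\dashv\vdash\; {}^{R}TC \,\wedge\, ({}^{R}TC \supset TC)
% Left to right: TC entails its own existential generalization and
% trivially entails any conditional with TC as consequent;
% right to left: modus ponens.
```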
Carnap's adoption of the Ramsey method met mainly with criticism
(Psillos 1999, ch.3, 2000b, Demopoulos 2003, 2017), even though
ramseyfications continue to be discussed as a method of characterizing
theoretical terms in a realist vein (albeit with conditions not yet
introduced by Carnap, as in Lewis 1970, Papineau 1996). With
Carnap's ramseyfications, however, we do not get the answer that what exists is the structure that the ramseyfication at hand
identifies. Given the absence of a clause requiring unique
realizability, ramseyfications counseled modesty: the structure that
is identified remains indeterminate to just that degree to which
theoretical terms remain incompletely interpreted (Carnap 1961b). To
this we must add that for Carnap ramseyfications of theoretical terms
can support only internal existence claims: he explicitly reaffirmed
his confidence in the distinction between internal and external
question to defuse the realism/anti-realism issue (1966 [1996, 256]).
This strongly suggests that with these proposals Carnap did not intend
to deviate from his deflationist approach to ontology.
What must be considered, however, is that Carnap's proposal to
reconstruct the contribution of theoretical terms by ramseyfication
falls foul of arguments deriving from M.H.A. Newman's objection
to Russell's structuralism in *Analysis of Matter* (see
Demopoulos and Friedman 1985). This objection says that once they are
empirically adequate, ramseyfied theories are trivially true, given
the nature of their reconstruction of original theories. Russell held
that "nothing but the structure of the external world is
known". But if nothing is known about the generating relation
that produces the structure, then the claim that there exists such a
structure is vacuous, Newman claimed. "Any collection of things
can be organised so as to have the structure *W*, provided
there are the right number of them." (Newman 1928, 144) To see
how this so-called cardinality constraint applies to ramseyfications
of theories, note that in Carnap's hands, the non-observational
part of reconstructed theories, their theoretical entities, were
represented by "purely logico-mathematical entities, e.g.
natural numbers, classes of such, classes of classes, etc." For
him the Ramsey sentence asserted that "observable events in the
world are such that there are numbers, classes of such, etc., which
are correlated with events in a prescribed way and which have among
themselves certain relations", this being "clearly a
factual statement about the world" (Carnap 1963b, 963). Carnap
here had mathematical physics in mind where space-time points are
represented by quadruples of real numbers and physical properties like
electrical charge-density or mass-density are represented as functions
of such quadruples of real numbers.
The problem that arises from this for Carnap is not the
failure to single out the intended interpretation of the theory: as
noted, Carnap clearly thought it an advantage of the method that it
remained suitably indeterminate. The problem for Carnap is rather
that, subject to its empirical adequacy, the truth conditions of the
Ramsey-sentences are fulfilled trivially on logico-mathematical
grounds alone. As he stated, Ramsey-sentences demand that there be a
structure of entities that is correlated with observable events in the
way described. Yet given the amount of mathematics that went into the
ramseyfied theory--"virtually the whole of
mathematics"--some such structure as demanded by the
Ramsey-sentence is bound to be found among those entities presupposed
by its representational apparatus. (Here the cardinality constraint is no
constraint at all.) That any theory is trivially true for purely
formal reasons (as long as it is empirically adequate) therefore is held against
Carnap's proposal to use Ramsey sentences as reconstructions of
the synthetic content of the theoretical part of empirical scientific
theories. Given that with ramseyfications "the truth of physical
theory reduces to the truth of its observational consequences"
(Demopoulos and Friedman 1985, 635), this is a problem for
Carnap's project on its own terms: any surplus empirical meaning
of theoretical terms that Carnap sought to capture simply evaporates
(Demopoulos 2003).
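The upshot of the objection can be put schematically (a simplified gloss; the precise claim depends on how much mathematics is built into the ramseyfied theory):

```latex
% If the representational apparatus supplies "virtually the whole of
% mathematics", witnesses for the existential quantifiers can always
% be constructed once the observational consequences hold, so that
\text{observational consequences of } TC \text{ all true}
\;\Longleftrightarrow\; {}^{R}TC \text{ true}
% (right to left holds trivially, since ^{R}TC entails those
% consequences). The truth of the Ramsey sentence thus collapses
% into the empirical adequacy of the original theory.
```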
This result casts its shadow over Carnap's last treatment of
theoretical terms in its entirety and threatens further consequences.
If the reconstruction of empirical theories by ramseyfication in
Carnap's fashion is unacceptable, then all explications that
build on this are called into question: explications of theoretical
analyticity as much as explications of the experiential import of
theories. If no justice has been done to the experiential
import of theoretical terms, then one must ask whether
the analytic components of a theoretical language have been correctly
identified. If they have not, then the meta-theoretical utility of the
synthetic/analytic distinction is once again called into
question.
One is led to wonder whether Carnap would not be well advised to
return to his 1956 position. This allowed for a criterion of empirical
significance for theoretical terms but not for the analytic/synthetic
distinction to be sustained with regard to the theoretical language.
According to Carnap's fall-back position before he hit upon
ramseyfication, it was thought possible to distinguish narrow logical
truth from factual truth in the theoretical language (1966, Ch. 28).
Yet it is difficult to silence the suspicion that an
analytic/synthetic distinction that applies only to observational
languages--and admits inescapable semantical holism for
theoretical languages--is not what the debate between Carnap and
Quine was all about. Attempts have thus been undertaken
to provide interpretations of Carnap's
ramseyfications that contain or mitigate the effects of the Newman
objection (Friedman 2008, 2011, Uebel 2011b, Creath 2012). What has become clear, in any case,
is that much is at stake for Carnap's formal explicationism,
indeed for the standard logical empiricist model of scientific
theories (see below).
### 3.6 The Status of the Criterion of Significance and the Point of the Project of Explication
We are now in a position to return to a final criticism of the search
for a criterion of empiricist significance. Much has been made of the
very status of the criterion itself (however it may be put in the
end): was it empirically testable? It is common to claim that it is
not and therefore to consign it to insignificance in turn, following
Putnam (1981a, 1981b). The question arises whether this is to overlook the fact
that the criterion of significance was put forward not as an empirical
claim but as a meta-theoretical proposal for how to delimit empiricist
languages from non-empiricist ones. Again, pursuing this line of
inquiry is not to deny that the meaning
criterion may have been understood by some members of the Circle in such a way that it became liable
to charges of self-refutation. Even if that were the case, the reason
for this may be found not in the very idea of such a criterion, but
in the contradictory status of the Tractarian "elucidations" to which
the criterion was likened. (The legitimacy of these elucidations was
at issue already in the debates that divided the Circle in the early
1930s; see, e.g., Neurath 1932a.) What will be considered here is primarily the view of Carnap, who from 1932 onwards put his
philosophical theses in the form of "proposals" for
alternative language forms, but how the pragmatist alternative fares
will also be considered. Finally, we will consider where this
leaves neopositivist anti-metaphysics.
For Carnap, the empiricist criterion of significance was an analytic
principle, but in a very special sense. As a convention, the criterion
had the standing of an analytic statement, but it was not a formally
specifiable framework principle of the language *Ln*
to which it pertained. Properly formulated, it was a semantic
principle concerning *Ln* that was statable only in
its meta-language *Ln+1*. To argue that the
criterion itself is meaningless because it has no standing in
*Ln* is to commit a category mistake, for
meta-linguistic assertions need not have counterparts in their object
languages (Goldfarb 1997, Creath 2004, Richardson 2004). Nor would it
be correct to claim that the criterion hides circular reasoning,
allegedly because its rejection of the meaningless depends on an
unquestioned notion of experiential fact as self-explanatory (when
such fact is still to be constituted). Importantly, Carnap's
language constructor does not start with fixed notions of what is
empirical (rather than formal) or what is given (rather than assumed
or inferred), but from the beginning allows a plurality of
perspectives on these distinctions (Ricketts 1994). Carnap's
empiricist criterion of significance is precisely this: an
explication, a proposal for how empiricists may wish to speak. It is
not an explanation of how meaning arises from what is not meaningful
in itself. Unlike theorists who wish to explain how meaning itself is
constituted, explicationists can remain untroubled by the regress of
formal semantics with Tarskian strictures. For them, the lack of
formal closure (the incompleteness of arithmetic and the
inapplicability of the truth predicate to its own language) only
betokens the fact that our very own home languages cannot ever be
fully explicated.
It may be wondered whether such considerations have not become
pointless, given the troubles that attempts to provide a criterion of
significance ran into. However, as we saw, Carnap's 1956
criterion for constructed languages remains in play. Moreover, there
also remains the informal, pragmatic approach that can be applied even
more widely. Thus it is not without importance to see that pragmatic
principles delineating empirical significance (like Mach's or
Quine's Peircean insight) are not ruled out from the start
either. The reason for this is different however. For pragmatists, the
anti-metaphysical demarcation criterion is not strictly speaking a
meaning criterion. The pragmatic criterion of significance is
expressly epistemic, not semantic: it speaks of relevance with regard
to an established cognitive practice, not in-principle
truth-evaluability. This criterion is most easily expressed as a
conditional norm, alongside other methodological maxims. (If you want
your reasoning to be responsible to evidence, then avoid statements
that experience can neither confirm nor disconfirm, however
indirectly.) So the suggestion that the criterion of empirical
significance can be regarded as a proposal for how to treat the
language of science cannot be brushed aside but for the persistent
neglect of the philosophical projects of Carnap or the non-formalist
left Vienna Circle.
Still, some readers may wonder whether in the course of responding to
the various counter-criticisms, the Vienna Circle's position has
not shifted considerably. This indeed is true: the attempt to show
metaphysics strictly meaningless once and for all did not succeed. For
even if Carnap's 1956 criterion and the pragmatic approach work,
they do not achieve that: Carnap's criterion only works for
constructed languages and the pragmatic one does not address the
semantic issue and only works case by case. But it can be argued that
while this debilitates the Vienna Circle's most notorious claim
(if understood without qualifications), it does not debilitate their
entire program. That was, we recall, to defend Enlightenment reason
and to counter the abuse of possibly empty but certainly
ill-understood deep-sounding language in science and in public life.
Their program was, to put it somewhat anachronistically, to promote
epistemic empowerment. This program would have been helped by an
across-the-board criterion to show metaphysics meaningless, but it can
also proceed in its absence.
But now the suspicion may be that if all that is meant to be excluded
is speculative reason without due regard to empirical and
logico-linguistic evidence, the program's success appears too
easy. Few contemporary philosophers would confess to such reckless
practices. Still, even the rejection of speculative reason is by no
means uncontroversial, as shown by the unresolved status of the appeal
to intuitions that characterizes much of contemporary analytical
metaphysics and epistemology. Moreover, much depends on what's
considered to be "due regard": is merely bad science
"metaphysics"? Or only appeals to the supernatural? And
what about *de re* necessities? Or the seeming commitments of
existential quantification? The promotion of anti-metaphysics may be
applauded in principle as an exercise in intellectual hygiene, the
objection goes, but in practice it excludes either too much or too
little: it either cripples our understanding of theoretical science or
normalizes away the Vienna Circle's most notorious claim.
In response it is helpful to consider the conception of metaphysics
that can be seen to be motivating much of the Circle's ethical
non-cognitivism. What did Carnap (1935) and Neurath (1932a) dismiss
when they dismissed normative ethics as metaphysical and cognitively
meaningless? One may concede that due to the brusque way in which they
put their broadly Humean point, they opened themselves up to
significant criticism, but it is very important to see also what they
did not do. Most notably, they did not dismiss as meaningless all
concern with how to live. Conditional prescriptions remained
straightforwardly truth-evaluable in instrumental terms and so
cognitively meaningful. In addition, their own active engagement for
Enlightenment values in public life showed that they took these
matters very seriously themselves. (In fact, their engagement as
public intellectuals compares strikingly with that of most
contemporary philosophers of science.) But neither did they fall
victim to the naturalistic fallacy nor were they simply inconsistent.
In the determination of basic values they rather saw acts of personal
self-definition, but, characteristically, Carnap showed a more
individualistic and Neurath a more collectively oriented approach to the
matter. What needs to be borne in mind, then, is the meaning that they
attached to the epithet "metaphysical" in this and other
areas: the arrogation of unique and fully determined objective insight
into matters beyond scientific reason. It was in the ambition of
providing such unconditional prescriptions that they saw philosophical
ethics being the heir of theology. (Compare Carnap 1935 and 1963b,
§32 and Neurath 1932a and 1944, §19.) Needless to say, it
remains contentious to claim those types of philosophical ethics to be
cognitively meaningless that seek to derive determinate sets of codes
from some indisputable principle or other. But the ongoing discussion
of non-cognitivism and its persistent defense in analytical ethics as "expressivism"
suggest that, understood as outlined, the Circle's
non-cognitivism was by no means absurd or contradictory.
It may be noted here that a newly discovered fragment of
Carnap's writing (2017) has given fresh impetus to explorations
of the model of ethical reasoning in terms of optatives that Carnap
outlined (1963b, §32) in response to A. Kaplan's criticism
(1963) of his earlier position. What emerges is that Carnap was
prepared to integrate ethical desiderata among non-ethical ones within
the network of means and ends that decision theory as a normative
theory of rational action seeks to systematize and regiment. Moral
reasoning is assimilated to practical reasoning and no longer suffers
from a deficit of significance--albeit at the cost of not being
able to exclude appeals to certain types of intrinsic value on the grounds of their being beyond the pale (see Carus 2017). Carnap
may reasonably respond here that as a theorist of science he is not
required to account for normative ethics beyond providing a framework
for understanding its undeniable role in a generic theory of human
behavior. What was rightly objected against his earlier position was
that it made such understanding impossible.
Whatever the details, their non-cognitivism supports the idea that the
left wing's anti-metaphysics was primarily deflationist. They opposed all claims to have a categorically deeper
insight into reality than either empirical or formal science, such
that philosophy would stand in judgement of these sciences as to their
reality content or that mere science would stand in need of
philosophical interpretations. (Concerned with practical problems,
they likewise opposed philosophical claims to stand above the
contestations of mere mortals.) Importantly, such deflationism need
not remain general and vague, but can be given precise content. For
instance, it has been argued (Carus 1999) that Carnap was correct to understand Tarski's theory of truth not as a traditional
correspondence theory such that truth consisted in some kind of
agreement of statements or judgements and facts or the world where the
latter make true the former. In Carnap's unchanged opposition to
the classical correspondence theory of truth in turn lies not only the
continuity between his own syntactic and semantic phases, but also the
key to his and the entire left Vienna Circle's understanding of
their anti-metaphysical campaign. (On various occasions in the early
30s, Hahn, Frank and Neurath opposed correspondence truth very
explicitly, while, in later years, Neurath resisted Tarskian semantics
precisely because he wrongly suspected it of resurrecting
correspondentism and Frank continued to castigate correspondentism
whenever required. On this tangled issue, see Mormann 1999, Uebel
2004, Mancosu 2008.)
This suggests that a hard core of Viennese anti-metaphysics survives
the criticism and subsequent qualifications of early claims made for
their criteria of empirical significance, yet retains sufficient
philosophical teeth to remain of contemporary interest. The
metaphysics which the left wing attacked, besides everyday
supernaturalism and the supra-scientific essentialism of old, was the
correspondence conception of truth and associated realist conceptions
of knowledge. These notions were deemed attackable directly on
epistemological grounds, without any diversion through the theory of
meaning: how could such correspondences or likenesses ever be
established? As Neurath liked to put it (1930), we cannot step outside
of our thinking to see whether a correspondence obtains between what
we think and how the world is. (Against defenses of the correspondence
theory by arguments from analogy it would be argued that the
analogy is overextended.) Against the counter that this is merely an
epistemic argument that does not touch the ontological issue, Neurath
is likely to have argued that doing without an epistemic account is a
recipe for uncontrollable metaphysics.
Importantly, the left wing's deflationary anti-metaphysics was
accompanied by a distinctively constructivist attitude. Here one must
hasten to add, of course, that what was constructed were not the
objects of first-order discourse (tables, chairs, electrons and black
holes) but concepts, be they concepts associated with technical terms for observables, theoretical terms or terms needed for reflection
about the cognitive enterprise of science (ideas like evidence and its
degrees and presuppositions). As meta-theorists of science they
developed explications: different types of explications were
envisaged, ranging from analytic definitions giving necessary and
sufficient conditions in formal languages all the way to pragmatic,
exemplar-based criterial delimitations of the central applications of
contested concepts or practices. Two branches of the Circle's
constructivist tendency can thus be distinguished: Carnap's
rational reconstructions and formalist explications and
Neurath's and Frank's empirically informed and
practice-oriented reconceptualizations. The difference between these
two approaches can be understood as a division
of labor between the tasks of exploring logico-linguistic
possibilities of conceptual reconstruction and considering the
efficacy of particular scientific practices. In principle, the
constructivist tendency in Vienna Circle philosophy was able to
embrace both (compare Carnap 1934/37, §72 and Neurath 1936b).
However, in its own day, this two-track approach remained
incompletely realized as philosophical relations between Carnap and
Neurath soured over disputes stemming from mutual misunderstandings.
Frank's final paper (1963) was a terse reminder that the logic
of science was not the sole successor or replacement of traditional
philosophy and Carnap's response (1963b, §3) again acknowledged the compatibility in principle of his logic of science and what Frank called the "pragmatics of science" (1957).
Considering the Vienna Circle as a whole in the light of this reading
of its anti-metaphysical philosophy, we find the most striking
division within it yet. Unlike Carnap and the left wing, Schlick had
little problem with a correspondence theory of truth once it was
cleansed of psychologistic and intuitive accretions and centered on
the idea of unique coordination of statement and fact. In this lay the
strongest sense of continuity between his pre-Vienna Circle
*General Theory of Knowledge* (1918/25) and his post-Tractarian
epistemology (1935a, 1935b). (Schlick also showed little enthusiasm
for the constructivist tendencies which already the manifesto of 1929
had celebrated.) Allowing for some simplification, it must be noted
that Schlick's attack on metaphysics (which gradually weakened
anyway) presupposed a non-constructivist reading of the criterion of
significance. Whether his conception can escape the charge of
self-refutation must be left open here.
### 3.7 The Vienna Circle and History
Much confusion exists concerning the Vienna Circle and history, that
is, both concerning the Vienna Circle's attitude towards the
history of philosophy and science and concerning its own place in that
history. As more has been learnt about the history of the Vienna
Circle itself--the development and variety of its doctrines as
well as its own prehistory as a philosophical forum--this
confusion can be addressed more adequately.
As the unnamed villain of the opening sentences of Kuhn's
influential *Structure of Scientific Revolutions* (1962),
logical empiricism is often accused of lacking historical
consciousness and any sense of the embedding of philosophy and science
in the wider culture of the day. Again it can hardly be denied that
much logical empiricist philosophy, especially after World War II, was
ahistorical in outlook and asocial in its orientation.
Reichenbach's distinction (1938) between the contexts of
discovery and justification--which echoed distinctions made since
Kant (Hoyningen-Huene 1987) and was already observed under a different
name by Carnap in the *Aufbau*--was often employed to
shield philosophy not only from contact with the sciences as practiced
but also culture at large. But this was not the case for the Vienna
Circle generally. On the one hand, unlike Reichenbach, who drew a
sharp break between traditional philosophy and the new philosophy of
logical empiricism in his popular *The Rise of Scientific
Philosophy* (1951), Schlick was very much concerned to stress the
remaining continuities with traditional philosophy and its cultural
mission in his last paper (1938). On the other hand, on the left wing
of the Circle scientific meta-theory was opened to the empirical
sciences. To be sure, Carnap for his own part was happy to withdraw to
the "icy slopes" of the logic of science and showed no
research interest of his own in the history of science or philosophy,
let alone its social history. By way of the division of labor he left
it to Neurath and Frank to pursue the historical and practice-related
sociological questions that the pure logic of science had to leave
unaddressed. (See, e.g., Neurath's studies of the history of
optics (1915, 1916), Frank's homage to Mach (1917), his
pedagogical papers in (1949b) and his concern with the practice of
theory acceptance and change in (1956); cf. Uebel 2000 and Nemeth
2007.) Moreover, it must be noted that Neurath himself all along had
planned a volume on the history of science for the International
Encyclopedia of Science, a volume that in the end became Kuhn's
*Structure*. Its publication in this series is often regarded as supremely ironical, given
how Kuhn's book is commonly read. But this is not only to
overlook that the surviving editors of that series, Carnap and Morris
were happy to accept the manuscript, but also that Carnap found himself in
positive agreement with Kuhn (Reisch 1991, Irzik and
Grünberg 1995; cf. Friedman 2001). Finally, one look at the 1929
manifesto shows that its authors were very aware of and promoted the
links between their philosophy of science and the socio-political and
cultural issues of the day.
Turning to the historical influences on the Vienna Circle itself, the
scholarship of recent decades has unearthed a much greater variety
than was previously recognized. Scientifically, the strongest
influences belonged to the physicists Helmholtz, Mach and Boltzmann,
the mathematicians Hilbert and Klein and the logicians Frege and
Russell; amongst contemporaries, Einstein was revered above all
others. The Circle's philosophical influences extend far beyond
that of the British empiricists (especially Hume), to include the
French conventionalists Henri Poincaré, Pierre Duhem and Abel
Rey, American pragmatists like James and, in German-language
philosophy, the Neo-Kantianism of both the Heidelberg and the Marburg
variety, even the early phenomenology of Husserl as well as the
Austrian tradition of Bolzano's logic and the Brentano school.
(See Frank 1949a for the influence of the French conventionalists; for the importance of
Neo-Kantianism for Carnap, see Friedman 1987, 1992, Sauer 1989,
Richardson 1998, Mormann 2007; for Neo-Kantianism in Schlick, see
Coffa 1991, Ch. 9 and Gower 2000; for the significance of Husserl for
Carnap, see Sarkar 2004, Ryckman 2007, Carus 2016, Damböck 2018; for the influence of and sympathies for pragmatism see Frank 1949a and Uebel 2016; the Bolzano-Brentano
connection is explored in Haller 1986.) It is against this very wide
background of influences that the seminal force must be assessed that
their contemporary Wittgenstein exerted. The literature on the
relation between Wittgenstein and the Vienna Circle is vast but very
often suffers from an over-simplified conception of the latter. (See
Stern 2007 for an attempt by a Wittgenstein scholar to redress the
balance.) Needless to say, different wings of the Circle show these
influences to different degrees. German Neo-Kantianism was important
for Schlick and particularly so for Carnap, whereas the Austrian
naturalist-pragmatist influences were particularly strong on Hahn,
Frank and Neurath. Frege was of great importance for Carnap, less so
for Hahn who looked to Russell. Most importantly, by no means all
members of the Vienna Circle sought to emulate Wittgenstein--thus
the division between the faction around Schlick and the left wing (see Uebel 2017).
While these findings leave numerous questions open, they nevertheless refute the standard picture of Vienna Circle philosophy which confuses A.J. Ayer's
*Language, Truth and Logic* with the real thing. Ayer offered a version of British empiricism (Berkeley's epistemic phenomenalism updated with Russellian logic) and paid no attention, for instance, to the Circle's overarching concern with establishing
the objectivity claim of science. Ayer's remark in the
preface to his later anthology *Logical Positivism* that his own *Language, Truth and Logic* "did
something to popularize what may be called the classical position of
the Vienna Circle" (1959b, 8) is highly misleading therefore. What he
called "the classical position" was at best a partial characterisation of the starting
position of some--by no means all--of its members, a position which by 1932 the left
wing as a whole rejected and even Schlick had no reason to
endorse any longer.
All that said by way of embedding the Vienna Circle's philosophy
in its time, one must also ask whether its members understood their
own position correctly. Here one issue in particular has become
increasingly prominent and raises questions that are of importance for
philosophy of science still today. That is whether, after all, logical
empiricism did have the resources to understand correctly the then
paradigm modern science, the general theory of relativity. According
to the standard logical empiricist story (Schlick 1915, 1917, 1921,
1922), their theory conclusively refuted the Kantian conception of the
synthetic *a priori*: Euclidean geometry was not only one
geometry amongst many, it also was not the one that characterized
empirical reality. With one of its most prominent exemplars refuted,
the synthetic *a priori* was deemed overthrown altogether. As
noted, Schlick convinced the young Reichenbach to drop his residually
Kantian talk of constitutive principles and speak of conventions
instead. Likewise Schlick rejected efforts by Ernst Cassirer (see his
1921, developing themes from his 1910) to make do with a merely
relative *a priori* in helping along scientific
self-reflection. Even though much later, and on the independent
grounds of quantum physics, Frank attested to the increased proximity
of his and Cassirer's understanding of scientific theories
(1938), Schlick's disregard of Cassirer's efforts remains
notable.
Most controversial is how the issue of general relativity as a
touchstone for competing philosophies of science was framed: having
dismissed Kant's own synthetic *a priori* for its
mistaken apodicity, no time was spared for discussion of its then
contemporary development in Neo-Kantianism as a merely relative but
still constitutive *a priori*. Now in the philosophy of
physics, this omission--committed both by Schlick and
Reichenbach--has recently come back to haunt logical empiricists
with considerable vengeance. Thus it has been argued that the
Schlick-Reichenbach reading of general relativity as embodying the
standard logical empiricist model of scientific theories, with high
theory linked to its observational strata by purely conventional
coordinative definitions, is deeply mistaken in representing the local
metric of space-time not to be empirically but conventionally
determined as in special relativity (Ryckman 1992) and that
it is instead only the tradition of transcendental idealism that
possesses the resources to understand the achievements of mathematical
physics (Ryckman 2005; cf. Friedman 2001). It is tempting to speak of
the return of the repressed Neo-Kantian opposition. But it is tempting
too to note that Schlick's and Reichenbach's mistake was
already corrected quietly and without fanfare by Carnap (see the
example in his 1934/37, §50). Clearly then, the mistake was not
inevitable and inherent in logical empiricist theorizing about
science as such. (For discussion of different criticism of the Circle's reading of Einstein, especially his relation to Mach, see DiSalle 2006, Ch.4.)
The charge of a constitutive failing would rather seem to come from
Demopoulos's challenge to the two-languages model (nearly)
universally adopted in logical empiricism (forthcoming). Importantly,
this challenge does not proceed, as some previous ones have, from the
impossibility of drawing a sharp distinction between the observational and the theoretical (Putnam
1962). Rather, the two-languages model falsely supposes that the
process of testing scientific hypotheses must only advert to
theoretically uncontaminated facts and so results in misunderstanding
the empirical import of theoretical claims (as in the Newman problem).
Instead, a conception of theory-mediated measurement and testing is
suggested that extends responsiveness to observational data to
theoretical claims by showing them to be essentially implicated in the
production of observed experimental consequences. Hoping to advance
beyond the stalemate between realism and instrumentalism without
appealing to question-begging semantics, Demopoulos here breaks with a
supposition upheld by Carnap throughout, namely that the theoretical
language be regarded as an essentially uninterpreted, at best partially
interpreted calculus. Whatever the outcome of this challenge, it is
remarkable how on this far-reaching and fundamental issue contemporary
philosophy of science intersects with the history of philosophy of
science.
## 4. Concluding Remarks
In conclusion, the results of the discussions in section 3 can be
briefly summarised. To start with, the dominant popular picture of the
Vienna Circle as a monolithic group of simple-minded verificationists
who pursued a blandly reductivist philosophy with foundationalist
ambitions is widely off the mark. Instead, the Vienna Circle must be
seen as a forum in which widely divergent ideas about how empiricism
can cope with modern empirical and formal science were discussed.
While by no means all of the philosophical initiatives started by
members of the Vienna Circle have borne fruit, it is neither the case
that all of them have remained fruitless. Nor is it the case that
everything once distinctive of Vienna Circle philosophy has to be
discarded.
Consider verificationism. While the idea to show metaphysics
once-and-for-all and across-the-board to be not false but
meaningless--arguably the most distinctive thesis associated with
the Vienna Circle--did indeed have to be abandoned, two elements
of that program remain so far unrefuted. On the one hand, it remains
an option to pursue the search for a criterion of empirical
significance in terms of constructed, formal languages further along
the lines opened by Carnap with his theory-relative proposal of 1956
(and its later defense against critics). On the other
hand--albeit at the cost of merging with the pragmatist tradition
and losing the apparent Viennese distinctiveness--the option to
neglect as cognitively irrelevant, and in this sense metaphysical, all
assertions whose truth or falsity would not make a difference remains
as open as it always was. In addition it must be noted that, properly
formulated, neither the formalist version of the criterion of
empirical significance for constructed languages nor the pragmatist
version of the criterion for natural languages is threatened by
self-refutation.
Consider analyticity. Here again, the traditional idea--sometimes
defended by some members--did show itself indefensible, but this
leaves Carnap's framework-relative interpretation of analyticity
and the understanding of the *a priori* as equally relative
to be explored. As noted, if Carnap's ramseyfications can be
defended, an analytic/synthetic distinction could be upheld also for
the theoretical languages of science. In any case, however, the
distinction between framework principles and content continues to be
drawable on a case by case basis.
Consider reductionism and foundationalism. While it cannot be denied
that various reductionist projects were at one time or another
undertaken by members of the Vienna Circle and that not all of its
members were epistemological anti-foundationalists either from the
start or at the end, it is clearly false to paint all of them with
reductivist and/or foundationalist brushes. This is particularly true
of the members of the so-called left wing of the Circle, all of whom
ended up with anti-foundationalist and anti-reductionist positions
(even though this did involve instrumentalism for some).
Consider also, however, the challenges mentioned above to the
fundamental tenets of logical empiricism that remain issues of intense
discussion: challenges to its conception of the nature of empirical
theory and of what is distinctive about the formal sciences. That to
this day no agreement has been reached about how its proposals are to
be replaced is not something that is unique to logical empiricism as a
philosophical movement, but that they remain on the table, as it were,
shows the ongoing relevance and centrality of its work for philosophy
of science.
Whether the indicated qualifications and/or modifications count as
defeats of the original project depends at least in part on what
precisely is meant to be rejected when metaphysics is rejected and
that in turn depends on what the positive vision for philosophy
consists in. Here again one must differentiate. While some members
ended with considerably more sympathy for traditional philosophy than
they displayed in the Circle's heyday--and may thus be
charged with partial surrender--others stuck to their guns. For
them, what remained of philosophy stayed squarely in the deflationist
vein established by the linguistic turn. They offered explications of
contested concepts or practices that, they hoped, would prove useful.
Importantly, the explications given can be of two sorts: the formal
explications of the logic of science by means of exemplary models of
constructed languages, and the more informal explications of the
empirical theory of science given by spelling out how certain
theoretical desiderata can be attained more or less under practical
constraints. This has been designated as the bipartite metatheory
conception of scientific philosophy and ascribed to the left wing of
the Circle as an ideal unifying its diverse methodologies (Uebel 2007,
Ch. 12, 2015). Readers will note therefore that despite his enormous
contribution to the development of Vienna Circle philosophy, it is not
Schlick's version of it that appears to this reviewer to be of
continuing relevance to contemporary philosophy--unlike, in their
very different but not incompatible ways, Carnap's and
Neurath's and Frank's. This may be taken as a partial
endorsement of Hempel's 1991 judgement (quoted in sect. 1
above), against which, however, Carnap has here been re-claimed for
the Neurathian wing.
Needless to say, recent work on Vienna Circle philosophies continues
to inspire a variety of approaches to the legacy they constitute
(besides prompting continuing excavations of other members'
non-standard variants; e.g., on Feigl see Neuber 2011). There is
Michael Friedman's extremely wide-ranging project (2001, 2010,
2012) to use the shortcomings of Vienna Circle philosophies as a
springboard for developing a renewed Kantian philosophy that also
overcomes the failings of neo-Kantianism and provides a
philosophy-cum-history fit for our post-Kuhnian times. Then there is
Richardson's proposal (2008) to turn the ambition to develop a
scientific philosophy into a research programme for the history of
science, so as to reveal more clearly the real world dynamics and
limitations of philosophy as a scientific metatheory. And there is
Carus's suggestion (2007) that Carnap's minimalist
explicationism be placed in the service of a renewed Enlightenment
agenda (continuing the task of the "scientific world
conception"). This connects with the current metaphilosophical interest in conceptual engineering, the relevance of Carnap's work to which becoming increasingly recognised (Justus 2012, Wagner 2012, Brun 2016, Reck and Dutilh Novaes 2017, Dutilh Novaes forthcoming, Lutz forthcoming). All along, of course, Vienna Circle philosophies
also continue to serve as foils for alternative and self-consciously
post-positivist programs, fruitfully so when informed by the results of recent
scholarship (e.g. Ebbs 2017).
It would appear then that despite continued resistance to recent
revisionist scholarship--a resistance that consists not so much
in contesting but in ignoring its results--the fortune of Vienna
Circle philosophy has turned again. Restored from the numerous
distortions of its teachings that accrued over generations of acolytes
and opponents, the Vienna Circle is being recognized again as a force
of considerable philosophical sophistication. Not only is it the case
that its members profoundly influenced the actual development of
analytical philosophy of science with conceptual initiatives that,
typically, were seen through to their bitter end. It is also the case
that some of its members offered proposals and suggested approaches
that were not taken up widely at the time (if at all), but that are
relevant again today. Much like its precursors Frege, Russell and
Wittgenstein, the conventionalists Poincaré and Duhem, the
pragmatists Peirce and Dewey--and like its contemporaries from
Reichenbach's Berlin Group and the Warsaw-Lvov school of logic
to the Neo-Kantian Cassirer--the Vienna Circle affords a valuable
vantage point on contemporary philosophy of empirical and formal
science.
## 1. Introduction
In their moral theories, the ancient philosophers depended on several
important notions. These include virtue and the virtues, happiness
(*eudaimonia*), and the soul. We can begin with virtue.
Virtue is a general term that translates the Greek word
*arete*. Sometimes *arete* is also
translated as excellence. Many objects, natural or artificial, have
their particular *arete* or kind of excellence. There is
the excellence of a horse and the excellence of a knife. Then, of
course, there is human excellence. Conceptions of human excellence
include such disparate figures as the Homeric warrior chieftain and
the Athenian statesman of the period of its imperial expansion.
Plato's character Meno sums up one important strain of thought
when he says that excellence for a man is managing the business of the
city so that he benefits his friends, harms his enemies, and comes to
no harm himself (*Meno* 71e). From this description we can see
that some versions of human excellence have a problematic relation to
the moral virtues.
In the ancient world, courage, moderation, justice and piety were
leading instances of moral virtue. A virtue is a settled disposition
to act in a certain way; justice, for instance, is the settled
disposition to act, let's say, so that each one receives their
due. This settled disposition consists in a practical knowledge about
how to bring it about, in each situation, that each receives their
due. It also includes a strong positive attitude toward bringing it
about that each receives their due. Just people, then, are not ones
who occasionally act justly, or even who regularly act justly but do
so out of some other motive; rather they are people who reliably act
that way because they place a positive, high intrinsic value on
rendering to each their due and they are good at it. Courage is a
settled disposition that allows one to act reliably to pursue right
ends in fearful situations, because one values so acting
intrinsically. Moderation is the virtue that deals similarly with
one's appetites and emotions.
Human excellence can be conceived in ways that do not include the
moral virtues. For instance, someone thought of as excellent for
benefiting friends and harming enemies can be cruel, arbitrary,
rapacious, and ravenous of appetite. Most ancient philosophers,
however, argue that human excellence must include the moral virtues
and that the excellent human will be, above all, courageous, moderate,
and just. This argument depends on making a link between the moral
virtues and happiness. While most ancient philosophers hold that
happiness is the proper goal or end of human life, the notion is both
simple and complicated, as Aristotle points out. It seems simple to
say everyone wants to be happy; it is complicated to say what
happiness is. We can approach the problem by discussing, first, the
relation of happiness to human excellence and, then, the relation of
human excellence to the moral virtues.
It is significant that synonyms for *eudaimonia* are living
well and doing well. These phrases imply certain activities associated
with human living. Ancient philosophers argued that whatever
activities constitute human living - e.g., those associated with
pleasure - one can engage in those activities in a mediocre or
even a poor way. One can feel and react to pleasures sometimes
appropriately and sometimes inappropriately; or one might always act
shamefully and dishonorably. However, to carry out the activities that
constitute human living well over a whole lifetime, or long stretches
of it, is living well or doing well. At this point the relation of
happiness to human excellence should be clear. Human excellence is the
psychological basis for carrying out the activities of a human life
well; to that extent human excellence is also happiness.
So described, human excellence is general and covers many activities
of a human life. However, one can see how human excellence might at
least include the moral virtues. The moral virtue relevant to fear,
for instance, is courage. Courage is a reliable disposition to react
to fear in an appropriate way. What counts as appropriate entails
harnessing fear for good or honorable ends. Such ends are not confined
to one's own welfare but include, e.g., the welfare of
one's city. In this way, moral virtues become the kind of human
excellence that is other-regarding. The moral virtues, then, are
excellent qualities of character - intrinsically valuable for
the one who has them; but they are also valuable for others. In rough
outline, we can see one important way ancient moral theory tries to
link happiness to moral virtue by way of human excellence. Happiness
derives from human excellence; human excellence includes the moral
virtues, which are implicitly or explicitly other-regarding.
Since happiness plays such a vital role in ancient moral theory, we
should note the difference between the Greek word *eudaimonia*
and its usual translation as 'happiness'. Although its
usage varies, most often the English word 'happiness'
refers to a feeling. For example, we say, "You can tell he feels
happy right now, from the way he looks and how he is behaving."
The feeling is described as one of contentment or satisfaction,
perhaps with the way one's life as a whole is going. While some
think there is a distinction between feeling happy and feeling
content, still happiness is a good and pleasant feeling. However,
'happiness' has a secondary sense that does not focus on
feelings but rather on activities. For instance, one might say,
"It was a happy time in my life; my work was going well."
The speaker need not be referring to the feelings he or she was
experiencing but just to the fact that some important activity was
going well. Of course, if their work is going well, they might feel
contentment. But in speaking of their happiness, they might just as
well be referring to their absorption in some successful activity. For
ancient philosophers *eudaimonia* is closer to the secondary
sense of our own term. Happiness means not so much feeling a certain
way, or feeling a certain way about how one's life as a whole is
going, but rather carrying out certain activities or functioning in a
certain way. This sort of happiness is an admirable and praiseworthy
accomplishment, whereas achieving satisfaction or contentment may not
be.
In this way, then, ancient philosophers typically justify moral
virtue. Being courageous, just, and moderate is valuable for the
virtuous person because these virtues are inextricably linked with
happiness. Everyone wants to be happy, so anyone who realizes the link
between virtue and happiness will also want to be virtuous. This
argument depends on two central ideas. First, human excellence is a
good of the soul - not a material or bodily good such as wealth
or political power. Another way to put this idea is to say happiness
is not something external, like wealth or political power, but an
internal, psychological good. The second central idea is that the most
important good of the soul is moral virtue. By being virtuous one
enjoys a psychological state whose value outweighs whatever other
kinds of goods one might have by being vicious.
Finally, a few words about the soul are in order since, typically,
philosophers argue that virtue is a good of the soul. In some ways,
this claim is found in many traditions. Many thinkers argue that being
moral does not necessarily provide physical beauty, health, or
prosperity. Rather, as something good, virtue must be understood as
belonging to the soul; it is a psychological good. However, in order
to explain virtue as a good of the soul, one does not have to hold
that the soul is immortal. While Plato, for example, holds that the
soul is immortal and that its virtue is a good that transcends death,
his argument for virtue as a psychological good does not depend on the
immortality of the soul. He argues that virtue is a psychological good
in this life. To live a mortal human life with this good is in itself
happiness.
This position that links happiness and virtue is called eudaimonism
- a word based on the principal Greek word for happiness,
*eudaimonia*. By eudaimonism, we will mean one of several
theses: (a) virtue, together with its active exercise, is identical
with happiness; (b) virtue, together with its activities, is the most
important and dominant constituent of happiness; (c) virtue is the
only means to happiness. However, one must be cautious not to conclude
that ancient theories in general attempt to construe the value of
virtue simply as a means to achieving happiness. Each theory, as we
shall see, has its own approach to the nature of the link between
virtue and happiness. It would not be advisable to see ancient
theories as concerned with such contemporary issues as whether moral
discourse - i.e., discourse about what one ought to do -
can or should be reduced to non-moral discourse - i.e., to
discourse about what is good for one.
These reflections on virtue can provide an occasion for contrasting
ancient moral theory and modern. One way to put the contrast is to say
that ancient moral theory is *agent-centered* while modern
moral theory is *action-centered*. To say that it is
action-centered means that, as a theory of morality, it explains
morality, to begin with, in terms of actions and their circumstances,
and the ways in which actions are moral or immoral. We can roughly
divide modern thinkers into two groups. Those who judge the morality
of an action on the basis of its known or expected consequences are
consequentialists; those who judge the morality of an action on the
basis of its conformity to certain kinds of laws, prohibitions, or
positive commandments are deontologists. The former include, e.g.,
those utilitarians who say an action is moral if it provides the
greatest good for the greatest number. Deontologists say an action is
moral if it conforms to a moral principle, e.g., the obligation to
tell the truth. While these thinkers are not uninterested in the moral
disposition to produce such actions, or in what disposition is
required if they are to show any moral worth in the persons who do
them, their focus is on actions, their consequences, and the rules or
other principles to which they conform. The result of these ways of
approaching morality is that moral assessment falls on actions. This
focus explains, for instance, contemporary fascination with such
questions of casuistry as, e.g., the conditions under which an action
like abortion is morally permitted or immoral.
By contrast, ancient moral theory explains morality in terms that
focus on the moral agent. These thinkers are interested in what
constitutes, e.g., a just person. They are concerned about the state
of mind and character, the set of values, the attitudes to oneself and
to others, and the conception of one's own place in the common
life of a community that belong to just persons simply insofar as they
are just. A modern might object that this way of proceeding is
backwards. Just actions are logically prior to just persons and must
be specifiable in advance of any account of what it is to be a just
person. Of course, the ancients had a rough idea of what just actions
were; and this rough idea certainly contributed to the notion of a
just person, and his motivation and system of values. Still, the
notion of a just person is not exhausted by an account of the
consequences of just actions, or any principle for determining which
actions are and which are not just. For the ancients, the just person
is compared to a craftsman, e.g., a physician. Acting as a physician
is not simply a collection of medically effective actions. It is
knowing when such actions are appropriate, among other things; and
this kind of knowledge is not always definable. To understand what
being a physician means one must turn to the physician's
judgment and even motivation. These are manifested in particular
actions but are not reducible to those actions. In the same way, what
constitutes a just person is not exhausted by the actions he or she
does nor, for that matter, by any catalogue of possible just actions.
Rather, being a just person entails qualities of character proper to
the just person, in the light of which they decide what actions
justice requires of them and are inclined or disposed to act
accordingly.
## 2. Socrates
In this section we confine ourselves to the character Socrates in
Plato's dialogues, and indeed to only certain ones of the
dialogues in which a Socrates character plays a role. In those
dialogues in which he plays a major role, Socrates varies considerably
between two extremes. On the one hand, there is the Socrates who
claims to know nothing about virtue and confines himself to asking
other characters questions; this Socrates is found in the
*Apology* and in certain dialogues most of which end
inconclusively. These dialogues, e.g., *Charmides*,
*Laches*, *Crito*, *Euthydemus*, and
*Euthyphro*, are called aporetic. On the other hand, in other
dialogues we find a Socrates who expounds positive teachings about
virtue; this Socrates usually asks questions only to elicit agreement.
These dialogues are didactic, and conclusive in tone, e.g.,
*Republic, Phaedo, Phaedrus,* and *Philebus*. However,
these distinctions between kinds of dialogues and kinds of Socratic
characters are not exclusive; there are dialogues that mix the
aporetic and conclusive styles, e.g., *Protagoras, Meno*, and
*Gorgias*. In observing these distinctions, we refer only to
the characteristic style of the dialogue and leave aside controversies
about the relative dates of composition of the dialogues. (See the
entry on
Plato,
especially the section on
Socrates
and the section on
the historical Socrates.)
The significance of this distinction among dialogues is that one can
isolate a strain of moral teaching in the aporetic and mixed
dialogues. In spite of their inconclusive nature, in the aporetic
dialogues the character Socrates maintains principles about morality
that he seems to take to be fundamental. In the mixed dialogues we
find similar teaching. This strain is distinct enough from the
accounts of morality in the more didactic dialogues that it has been
called Socratic, as opposed to Platonic, and associated with the
historical personage's own views. In what follows we limit
ourselves to this "Socratic" moral teaching -
without taking a position about the relation of "Socratic"
moral teaching to that of the historical Socrates. For our purposes it
is sufficient to point out a distinction between kinds of moral
teaching in the dialogues. We will focus on the aporetic dialogues as
well as the mixed dialogues *Protagoras*, *Gorgias*, and
*Meno*.
The first feature of Socratic teaching is its heroic quality. In the
*Apology*, Socrates says that a man worth anything at all does
not reckon whether his course of action endangers his life or
threatens death. He looks only at one thing - whether what he
does is just or not, the work of a good or of a bad man (28b-c).
Said in the context of his trial, this statement is both about himself
and a fundamental claim of his moral teaching. Socrates puts moral
considerations above all others. If we think of justice as, roughly,
the way we treat others, the just actions to which he refers cover a
wide range. It is unjust to rob temples, betray friends, steal, break
oaths, commit adultery, and mistreat parents (*Rep*
443a-b). A similarly strong statement about wrong-doing is found
in the *Crito*, where the question is whether Socrates should
save his life by escaping from the jail in Athens and aborting the
sentence of death. Socrates says that whether he should escape or not
must be governed only by whether it is just or unjust to do so (48d).
Obviously, by posing wrong-doing against losing one's life,
Socrates means to emphasize that nothing outweighs in positive value
the disvalue of doing unjust actions. In such passages, then, Socrates
seems to be a moral hero, willing to sacrifice his very life rather
than commit an injustice, and to recommend such heroism to others.
However, this heroism also includes an important element of
self-regard. In the passage from the *Apology* just quoted
Socrates goes on to describe his approach to the citizens of Athens.
He chides them for being absorbed in the acquisition of wealth,
reputation, and honor while they do not take care for nor think about
wisdom, truth, and how to make their souls better (*Ap*.
29d-e). As he develops this idea it becomes clear that the
perfection of the soul, making it better, means acquiring and having
moral virtue. Rather than heaping up riches and honor, Athenians
should seek to perfect their souls in virtue. From this exhortation we
can conclude that for Socrates psychological good outweighs material
good and that virtue is a psychological good of the first importance.
The *Crito* gives another perspective on psychological good.
Socrates says (as something obvious to everyone) that life is not
worth living if that which is harmed by disease and benefited by
health - i.e., the body - is ruined. But even more so, he
adds, life is not worth living if that which is harmed by wrong-doing
(*to adikon*) and benefited by the right - *sc*.
the soul - is ruined, insofar as the soul is more valuable than
the body (47e-48a). We can understand this claim in positive
terms. Virtue is the chief psychological good; wrong-doing destroys
virtue. So Socrates' strong commitment to virtue reflects his
belief in its value for the soul, as well as the importance of the
soul's condition for the quality of our lives.
A second feature of Socratic teaching is its intellectualism. Socratic
intellectualism is usually expressed in the claim that virtue is
knowledge, implying that if one knows what is good one will do what is
good. We find a clear statement of the claim in *Protagoras*
352c; but it underlies a lot of Socratic teaching. The idea is
paradoxical because it flies in the face of what seems to be the
ordinary experience of knowingly doing what is not good, called
*akrasia* or being overcome (sometimes anachronistically
translated as weakness of will). However, Socrates defends the idea
that *akrasia* is impossible. First, he argues that virtue is
all one needs for happiness. In the *Apology* (41c), Socrates
says that no evil at all can come to a good man either in living or in
dying (...*ouk esti andri agatho(i) kakon ouden oute
zonti oute teleutesanti*), implying the good
man's virtue alone makes him proof against bad fortune. In the
*Crito* (48b), he says living well and finely and justly are
the same thing (*to de eu kai kalos kai dikaios
[zen] hoti tauton estin...*). In turn, in the *Meno*
(77c-78b), Socrates argues that everyone desires happiness,
i.e., one's own welfare. So, in the first place, one always has
a good reason for acting virtuously; doing so entails one's own
happiness. However, even if the link between virtue and happiness is
granted, another problem remains. It is possible that one can,
knowingly, act against one's happiness, understood, as it is by
Socrates, as one's own welfare. It seems that an individual can
choose to do what is not in her own best interest. People can all too
easily desire and go after the pastry that they know is bad for them.
Socrates, however, seems to think that once one recognizes (i.e.,
really knows and fully appreciates) that the pastry is not good in
this way, one will cease to desire it. There is no residual desire
for, e.g., pleasure, that might compete with the desire for what is
good. This position is called intellectualism because it implies that
what ultimately motivates any action is some cognitive state, rational
or doxastic: if you know what is good you will do it, and if you do an
action, and it is bad, that is because you thought somehow that it was
good. All error in such choices is due to ignorance.
In support of the idea that if one knows what happiness is, one will
pursue it, Socrates argues, in the *Euthydemus*, that wisdom is
necessary and sufficient for happiness. While most of this dialogue is
given over to Euthydemus' and Dionysiodorus' eristic
display, there are two Socratic interludes. In the first of these
- in a passage that has a parallel in *Meno* (88a ff)
- Socrates helps the young Cleinias to see that wisdom is a kind
of knowledge that infallibly brings happiness. He uses an analogy with
craft (*techne*); a carpenter must not only have but know
how to use his tools and materials to be successful (*Euthyd*.
280b-d). In turn, someone may have such goods as health, wealth,
good birth, and beauty, as well as the virtues of justice, moderation,
courage, and wisdom (279a-c). Wisdom is the most important,
however, because, like carpentry, for example, it is a kind of
knowledge, about how to use the other assets so that they are
beneficial (281b-c). Moreover, all of these other so-called
goods are useless - in fact, even harmful - without
wisdom, because without it one will misuse any of the other assets one
may possess, so as to act not well but badly. Wisdom is the only
unconditional good (281d-e). Socrates' argument leaves it
ambiguous whether wisdom (taken together with its exercise) is
identical with happiness or whether it is the dominant and essential
component of happiness (282a-b).
In this account, the focus is on a kind of knowledge as the active
ingredient in happiness. The other parts of the account are certain
assets that seem as passive in relation to wisdom as wood and tools
are to the carpenter. Socratic intellectualism has been criticized for
either ignoring the non-rational, desiderative and volitional causes
of human action or providing an implausibly rationalist account of
them. In either case, the common charge is that it fails to account
for or appreciate the apparent complexities of moral psychology.
## 3. Plato
If the objections to intellectualism are warranted, Plato makes
significant progress by having his character Socrates suppose that the
soul has desires that are not always for what is good. This allows for
the complexities of moral psychology to become an important issue in
the account of virtue. That development is found in Plato's
mature moral theory. In the *Republic*, especially in its first
four books, Socrates presents the most thorough and detailed account
of moral psychology and virtue in the dialogues.
It all begins with the challenge to the very notion of morality,
understood along traditional lines, mounted by Callicles in the second
half of the *Gorgias* and by Thrasymachus in *Republic*
I. Callicles thinks that moral convention is designed by the numerous
weak people to intimidate the few strong ones, to keep the latter from
taking what they could if they would only use their strength. No truly
strong person should be taken in by such conventions (*Gorg.*
482e ff). Thrasymachus argues that justice is the advantage of the
more powerful; he holds that justice is a social practice set up by
the powerful, i.e., rulers who require their subjects through that
practice to act against their own individual and group self-interest
(*Rep*. 338d ff). No sensible person should, and no
strong-willed person would, accept rules of justice as having any
legitimate authority over them. In answer to the latter challenge, in
*Republic* II, Glaucon and Adeimantus repeatedly urge Socrates
to show what value justice has in itself, apart from its rewards and
reputation. They gloss the intrinsic value of justice as what value it
has in the soul (358b-c), what it does to the soul simply and
immediately by its presence therein. Before giving what will be a new
account of the soul, Socrates introduces his famous comparison between
the soul and the city. As he develops his account of the city,
however, it becomes clear that Socrates is talking about an ideal
city, which he proceeds to construct in his discussion.
This city has three classes (*gene*) of citizens. The
rulers are characterized by their knowledge about and devotion to the
welfare of the city. The auxiliaries are the warrior class that helps
the rulers. These two are collectively the guardians of the city
(413c-414b; 428c-d). Finally, there are the farmers,
artisans, and merchants - in general those concerned with the
production of material goods necessary for daily life
(369b-371e). The importance of this structure is that it allows
Socrates to define virtues of the city by relations among its parts.
These virtues are justice, wisdom, courage, and moderation. For
instance, justice in the city is each one performing that function for
which he is suited by nature and not doing the work that belongs to
others (433a-b). One's function, in turn, is determined by
the class to which he belongs. So rulers should rule and not amass
wealth, which is the function of the farmers, artisans, and merchants;
if the rulers turn from ruling to money-making they are unjust.
Completing the analogy, Socrates gives an account of the soul. He
argues that it has three parts, each corresponding to one of the
classes in the city. At this point, we should note the difficulty of
talking about parts of the soul. In making his argument about conflict
in the soul, Socrates does not usually use a word that easily
corresponds to the English 'part.' Sometimes he uses
'form' (*eidos*) and 'kind'
(*genos*) (435b-c), other times a periphrasis such as
'that in the soul that calculates' (439d-e),
although in the subsequent account of virtue, we do find
*meros*, which means part (442b-d). So, insofar as
vocabulary is concerned, one should be cautious about taking the
subdivisions of the soul to be independent agents. Perhaps the least
misleading way of thinking about the parts is as distinct functions.
Reason is the function of calculating, especially about what is good
for the soul. The appetites for food, drink, and sex are like the
producing class - they are necessary for bodily existence
(439c-e). These two parts are familiar to the modern reader, who
will recognize the psychological capacity for reasoning and
calculating, on the one hand, and the bodily desires, on the other.
Less familiar is the part of the soul that corresponds to the
auxiliaries, the military class. This is the spirited part
(*thymos* or *thymoeides*). Associated with the heart,
it is an aggressive drive concerned with honor. *Thymos* is
manifested as anger with those who attack one's honor. Perhaps
more importantly, it is manifested as anger with oneself when failing
to do what one knows he should do (439e-440d).
The importance of this account is that it is a moral psychology, an
account of the soul which serves as a basis for explaining the
virtues. Socrates' account also introduces the idea that there
is conflict in the soul. For instance, the appetites can lead one
toward pleasure which reason recognizes is not good for the whole
soul. In cases of conflict, Socrates says the spirited part sides with
reason against appetite. Here we see in rough outline the chief
characters in a well-known moral drama. Reason knows what is good both
for oneself and in the treatment of others. The appetites,
short-sighted and self-centered, pull in the opposite direction. The
spirited part is the principle that sides with reason and enforces its
decrees.
While the opposition between reason and appetite establishes their
distinctness, it has another, more profound consequence. Most
commentators read this section of the argument as implying that reason
looks out for what is good for the soul while appetite seeks food,
drink, and sex, heedless of their benefit for the soul (437d ff).
Although open to various interpretations, the difference between
reason and appetite does seem aimed at one of the central paradoxes of
Socratic intellectualism. Common experience seems to contradict the
claim that if one knows what is good he will do it; there seem to be
obvious instances where someone does what she, in some sense, knows is
not good, while having options to act otherwise. By introducing
non-rational elements in the soul, the argument in the
*Republic* also introduces the possibility of doing what one
knows, all things considered, not to be good. In such cases, one is
motivated by appetite, which lacks the capacity to conceive of what is
good, all things considered.
Some interpret this heedlessness as appetite's being
good-independent, whereas reason is good-dependent. Thus, appetite
pursues what it pursues without reference to whether what it pursues
is good; reason pursues what it pursues always understanding that what
it pursues is good. In this kind of interpretation, Socrates in the
*Republic* accepts the possibility of *akrasia* because
some parts of the soul, which are indifferent to the good, can
motivate actions that do not aim at what is good. Others interpret
this heedlessness as appetite's operating on a constrained
notion of good; for instance, for appetite only pleasure is good. By
contrast, reason operates on a larger notion, i.e., what is good, all
things considered. In this interpretation, *akrasia* is also
possible, but now because some parts of the soul, which motivate
action, do so with a constrained view of good. In any event, the stage
is set for conflict in the soul between reason and appetite. If we
assume that either can motivate action on its own, the possibility
exists for the soul's pursuing bodily pleasure in spite of
reason's determination that doing so is not good, all things
considered. This potential gives rise to a separate strain of thinking
about virtue. While, for Socratic intellectualism, virtue just is
knowledge, in the aftermath of the argument for subdividing the soul,
virtue comes to have two aspects. The first is to acquire the
knowledge which is the basis of virtue; the second is to instill in
the appetites and emotions - which cannot grasp the knowledge
- a docility so that they react in a compliant way to what
reason knows to be the best thing to do. Thus, non-rational parts of
the soul acquire reliable habits on which the moral virtues
depend.
Given the complexity of the differences among the parts, we can now
understand how their relations to one another define virtue in the
soul. Virtue reduces the potential for conflict to harmony. The master
virtue, justice, is each part doing its function and not interfering
with that of another (441d-e; 443d). Since the function of
reason is to exercise forethought for the whole soul, it should rule.
The appetites, which seek only their immediate satisfaction, should
not rule. A soul ruled by appetites is the very picture of
psychological injustice. Still, to fulfill its function of ruling,
reason needs wisdom, the knowledge of what is beneficial for each of
the parts of the soul and for the whole. Moderation is a harmony among
the parts based on agreement that reason should rule. Courage is the
spirited part carrying out the decrees of reason about what is to be
feared (442b-d). Any attentive reader of the dialogues must feel
that Socrates has now given an answer to the questions that started
many of the aporetic dialogues. At this point, we have the fully
developed moral psychology that allows the definition of the moral
virtues. They fall into place around the tripartite structure of the
soul.
One might object, however, that all Socrates has accomplished is to
define justice and the other virtues as they operate *within*
the soul. While each part treats the others justly, so to speak, it is
not clear what justice among the parts of someone's soul has to
do with that person treating other people justly. Socrates at first
addresses this issue rather brusquely, saying that someone with a just
soul would not embezzle funds, rob temples, steal, betray friends,
break oaths, commit adultery, neglect parents, nor ignore the gods.
The reason for this is that each part in the soul does its own
function in the matter of ruling and being ruled (443a-b).
Socrates does not explain this connection between psychic harmony and
moral virtue. However, if we assume that injustice is based in
overweening appetite or unbridled anger, then one can see the
connection between restrained appetites, well-governed anger and
treating others justly. The man whose sexual appetite is not governed
by reason, e.g., would commit the injustice of adultery.
This approach to the virtues by way of moral psychology, in fact,
proves to be remarkably durable in ancient moral theory. In one way or
another, the various schools attempt to explain the virtues in terms
of the soul, although there are, of course, variations in the
accounts. Indeed, we can treat the theory of the *Republic* as
one such variation. While the account in *Republic* IV has
affinities with that of Aristotle in *Nicomachean*
*Ethics*, for instance, further developments in
*Republic* V-VII make Plato's overall account
altogether unique. It is in these books that the theory of forms makes
its appearance.
In *Republic* V, Socrates returns to the issue of political
rule by asking what change in actual cities would bring the ideal city
closer to realization. The famous answer is that philosophers should
rule as kings (473d). Trying to make the scandalous, even ridiculous,
answer more palatable, Socrates immediately begins to explain what he
means by philosophers. They are the ones who can distinguish between
the many beautiful things and the one beautiful itself. The beautiful
itself, the good itself, and the just itself are what he calls forms.
The ability to understand such forms defines the philosopher
(476a-c). Fully elaborated, this extraordinary theory holds that
there is a set of unchanging and unambiguous entities, collectively
referred to as being. These are known directly by reason in a way that
is separate from the use of sensory perception. The objects of sensory
perception are collectively referred to as becoming since they are
changing and ambiguous (508d). Reason's grasp of the forms is
infallible: one knows what goodness is, what beauty is, and what
justice is. Because
only philosophers have this knowledge - an infallible grasp of
goodness, beauty, and justice - they and only they are fit to be
rulers in the city.
While moral theory occupies a significant portion of the
*Republic*, Socrates does not say a lot about its relation to
the epistemology and metaphysics of the central books. For instance,
one might have expected the account to show how, e.g., reason might
use knowledge of the forms to govern the other parts of the soul.
Indeed, in Book VI, Socrates does say that the philosopher will
imitate in his own soul the order and harmony of the forms out of
admiration (500c-d). Since imitation is the heart of the account
of moral education in Books II-III, the idea that one might
imitate forms is intriguing; but it is not developed. In fact, the
relation between virtues in the soul and the metaphysics and
epistemology of the forms is not an easy one. For instance, Socrates
says that virtue in the soul is happiness (580b-c). However, he
also says that knowledge and contemplation of the forms is happiness
(517c-d). Since having or exercising the virtues of wisdom,
moderation, courage, and justice is different from knowledge and
contemplation of forms, the question naturally arises about how to
bring the two together into an integrated life. In the dialogues, the
relation between knowledge of forms and virtues such as moderation,
justice, and courage is an issue that never seems to be fully resolved
because it receives different treatments.
In *Republic* IX, Socrates offers a sketch of one way to bring
these two dimensions together. Toward the end of the book, Socrates
conducts a three-part contest to show the philosophical, or
aristocratic, man is the happiest. In the second of these contests, he
claims that each part of the soul has its peculiar desire and
corresponding pleasure. Reason, for instance, desires to learn;
satisfying that desire is a pleasure distinct from the pleasure, e.g.,
of eating (580d-e). One can now see that the function of each
of the parts has a new, affective dimension. Then, in the last of the
contests, Socrates makes an ontological distinction between true
pleasure of the soul and less true pleasure of the body. The former is
the pleasure of knowing being, i.e., the forms, and the latter is the
pleasure of filling bodily appetite. The less true pleasures also give
rise to illusions and phantoms of pleasure; these illusions and
phantoms recall the images and shadows in the allegory of the cave
(583b-586c). Clearly, then, distinctions from the metaphysics
and epistemology of the central books are being made relevant to the
divisions within the soul. Pleasures of reason are on a higher
ontological level than those of the bodily appetites; this difference
calls for a modification of the definitions of the virtues from Book
IV. For instance, in Book IV, justice is each part fulfilling its own
function; in Book IX, that function is specified in terms of the kinds
of pleasure each part pursues. Reason pursues the true pleasure of
knowing and the appetites the less true pleasure of the body. Appetite
commits injustice if it pursues pleasure not proper to it, i.e., if it
pursues bodily pleasures as though they were true pleasures
(586d-587a). The result is an account of virtue which
discriminates among pleasures, as any virtue ethics should; but the
criterion of discrimination reflects the distinction between being and
becoming. Rational pleasures are more real than bodily pleasures,
although bodily pleasures are not negligible. Happiness, in turn,
integrates the virtue of managing bodily appetites and pleasures with
the pleasures of learning, but definitely gives greater weight to the
latter.
However, in the *Phaedo*, we find another approach to the
relation between knowledge of forms and virtues in the soul. Socrates
identifies wisdom with knowledge of such forms as beauty, justice, and
goodness (65d-e). The philosopher can know these only by reason
that is detached from the body as much as possible (65e-66a). In
fact, pure knowledge is found only when the soul is completely
detached from the body in death. With such a severe account of the
knowledge of the forms it is not surprising that courage, justice, and
moderation are subordinated to wisdom. When Socrates says the true
exchange of pleasures, pains, and fears is for wisdom, he invokes
moderation, justice, and courage as virtues that serve this exchange
(68b-69e). This passage suggests, for instance, that moderation
would control bodily pleasures and pains and fears so that reason
could be free of these disturbances in order to pursue knowledge. This
notion of virtues differs from the account in the first books of the
*Republic* in that the latter presents a comprehensive picture
of the welfare of all the parts of the soul whereas the
*Phaedo*, by implication, subordinates to reason those parts
characterized by pleasures, pain, and fears.
In the *Symposium*, we find yet another configuration of the
relation between virtue and the forms, in which the non-rational parts
of the soul disappear - or are sublimated. In Diotima's
discourse on *eros*, she argues that the real purpose of love
is to give birth in beauty (206b-c). This idea implies two
dimensions: inspiration by what is itself beautiful and producing
beautiful objects. Although Diotima talks about the way these concepts
work in sexual procreation, her real interest is in their
psychological manifestations. For instance, the lover, inspired by the
physical beauty of the beloved, will produce beautiful ideas. In turn,
the beauty of a soul will inspire ideas that will improve young men
(210a-d). Finally, Diotima claims that the true object of erotic
love is not the beautiful body or even the beautiful soul - but
beauty itself. Unlike the beauty of bodies and of souls, this beauty
does not come to be nor pass away, neither increases nor decreases.
Nor does it vary according to aspect or context. It is not in a face
or hands but is itself by itself, one in form (211a-c). Since
this paradigm of beauty is the true object of *eros*, it
inspires its own particular product, i.e., true virtue, distinct from
the images of virtue inspired by encounters with beauty in bodies and
souls (212a-c). While this passage highlights the relation
between *eros* and the form of beauty in motivating virtue, it
is almost silent on what this virtue might be. We do not know whether
it just is the expression of love of beauty in the lover's
actions, or its concretization in the dispositions of the
lover's soul, or another manifestation. Clearly, this account in
the *Symposium* works without the elaborate theory about parts
of the soul and their function in virtue, found in *Republic*
I-IV.
As with other ancient theories, this account of virtue can be called
eudaimonist. Plato's theory is best represented as holding that
virtue, together with its active exercise, is the most important and
the dominant constituent of happiness (580b-c). One might object
that eudaimonist theories reduce morality to self-interest. We should
recall however that *eudaimonia* in this theory does not refer
primarily to a feeling. In the *Republic*, it refers to a state
of the soul, and the active life to which it leads, whose value is
multifaceted. The order and harmony of the soul is, of course, good
for the soul because it provides what is good for each of the parts
and the whole, and so makes the parts function well, for the benefit
of each and of the whole person. In this way, the soul has the best
internal economy, so to speak. Still, we should not overlook the ways
in which order and harmony in the soul are paired with order and
harmony in the city. They are both modeled on the forms. As a
consequence, virtue in the soul is not a private concern artificially
joined to the public function of ruling. Rather, the philosopher who
imitates forms in ruling her soul is equally motivated to imitate
forms in ruling the city. So, insofar as virtue consists in imitating
the forms, it is also a state of the soul best expressed by exercising
rule in the city - or at least in the ideal city.
*Eudaimonia*, then, includes looking after the welfare of
others.
Indeed, the very nature of Plato's account of virtue and
happiness leaves some aspects of the link between the two unclear.
While virtue is the dominant factor in happiness, we still cannot tell
whether for Plato one can have a reason for being, e.g., courageous
that does not depend directly on happiness. Plato, even though he has
forged a strong link between virtue and happiness, has not addressed
such issues. In his account, it is still possible that one might
be courageous just for its own sake while at the same time believing
courage is also reliably linked to happiness. In the
*Republic*, Socrates tries to answer the question what value
justice has by itself in the soul; it does not follow that he is
trying to convince Glaucon and Adeimantus that the value of justice is
exhausted by its connection to one's own happiness.
(For further developments in Plato's moral theory in dialogues
usually thought to postdate the *Republic* (especially
*Philebus*), see the entry on Plato, especially the section on
Socrates and the section on Plato's indirectness.)
## 4. Aristotle
The moral theory of Aristotle, like that of Plato, focuses on virtue,
recommending the virtuous way of life by its relation to happiness.
His most important ethical work, *Nicomachean Ethics*, devotes
the first book to a preliminary account of happiness, which is then
completed in the last chapters of the final book, Book X. This account
ties happiness to excellent activity of the soul. In subsequent books,
excellent activity of the soul is tied to the moral virtues and to the
virtue of "practical wisdom" - excellence in
thinking and deciding about how to behave. This approach to moral
theory depends on a moral psychology that shares a number of
affinities with Plato's. However, while for Plato the theory of
forms has a role in justifying virtue, Aristotle notoriously rejects
that theory. Aristotle grounds his account of virtue in his theory
about the soul - a topic to which he devotes a separate
treatise, *de Anima*.
Aristotle opens the first book of the *Nicomachean Ethics* by
positing some one supreme good as the aim of human actions,
investigations, and crafts (1094a). Identifying this good as
happiness, he immediately notes the variations in the notion
(1095a15-25). Some think the happy life is the life of
enjoyment; the more refined think it is the life of political
activity; others think it is the life of study or theoretical
contemplation (1095b10-20). The object of the life of enjoyment
is bodily pleasure; that of political activity is honor or even
virtue. The object of the life of study is philosophical or scientific
understanding. Arguing that the end of human life must be the most
complete, he concludes that happiness is the most complete end.
Whereas pleasure, honor, virtue, and understanding are choice-worthy
in themselves, they are also chosen for the sake of happiness.
Happiness is not chosen for the sake of anything else
(1097a25-1097b5). That the other choice-worthy ends are chosen
for the sake of happiness might suggest that they are chosen only as
instrumental means to happiness, as though happiness were a separate
state. However, it is more likely that the other choice-worthy ends
are constituents of happiness. As a consequence, the happy life is
composed of such activities as virtuous pursuits, honorable acts, and
contemplation of truth. While conceiving these choice-worthy ends as
constituents of happiness might be illuminating, it does, in turn,
raise the issue of whether happiness is a jumble of activities or
whether it requires organization - even prioritization -
of the constituents.
Next, Aristotle turns to his own account of happiness, the summit of
Book I. The account depends on an analogy with the notion of function
or characteristic activity or work (*ergon*). A flutist has a
function or work, i.e., playing the flute. The key idea for Aristotle
is that the good of flute players (as such) is found in their
functioning as a flutist. By analogy, if there is a human function,
the good for a human is found in this function (1097b20-30).
Aristotle then turns to the human soul. He argues that, in fact, there
is a human function, to be found in the human soul's
characteristic activity, i.e., the exercise of reason
(1098a1-20). Then, without explanation, he makes the claim that
this rational function is expressed in two distinct ways: by
deliberating and issuing commands, on the one hand, and by obeying
such commands, on the other. The part that has reason in itself
deliberates about decisions, both for the short term and the long. The
part that obeys reason is that aspect of the soul, such as the
appetites, that functions in a human being under the influence of
reason. The appetites can fail to obey reason; but they at least have
the capacity to obey (as, for example, such autonomic functions as
nutritive and metabolic ones do not).
Aristotle then argues that since the function of a human is to
exercise the soul's activities according to reason, the function
of a good human is to exercise well and finely the soul's
activities according to reason. Given the two aspects of reason that
Aristotle has distinguished, one can see that both can be well or
badly done. On the one hand, one can reason well or badly -
about what to do within the next five minutes, twenty-four hours, or
ten years. On the other, actions motivated by appetites can be well or
badly done; likewise, even having an appetite at all can sometimes be
a badly done, rather than a well done, activity of the soul. Acting on the
desire for a drink from the wine cooler at a banquet is not always a
good idea, nor is having such a desire. According to Aristotle, the
good human being has a soul in which these functions are consistently
done well. Thus, good persons reason well about plans, short term or
long; and when they satisfy their appetites, and even when they
*have* appetites, it is in conformity with reason. Returning to
the question of happiness, Aristotle says the good for a human is to
live the way the good human lives, that is, to live with one's
life aimed at and structured by the same thing that the good human
being aims at in his or her life. So, his account of happiness, i.e.,
the highest good for a human, is virtuous or excellent activity of the
soul. But he has not identified this virtuous activity with that of
the moral virtues, at this point. In fact, he says if there are many
kinds of excellence, then human good is found in the active exercise
of the highest. He is careful to point out that happiness is not just
the ability to function well in this way; it is the activity itself.
Moreover, this activity must be carried out for a complete life. One
swallow does not make a spring.
Although the reference here to parts or aspects of the soul is
cursory, it is influenced by Aristotle's theories in the *de
Anima*. Fundamental to the human soul and to all living things,
including plants, is nutrition and growth (415a20 ff). Next is
sensation and locomotion; these functions are characteristic of
animals (416b30 ff). Aristotle associates appetite and desire with
this part of the soul (414b1-5). Thus we have a rough sketch of
animal life: animals, moved by appetite for food, go toward the
objects of desire, which are discerned by sensation. To these
functions is added thought in the case of humans. Thought is both
theoretical and practical (427a 15-20). The bulk of the *de
Anima* is devoted to explaining the nutritive, sensory, and
rational functions; Aristotle considers desire and appetite as the
source of movement in other animals (432a15 ff), and these plus reason
as its source in humans. In *Nicomachean Ethics* he focuses on
the role that appetite and desire, together with reason, play in the
moral drama of human life.
In chapter 8 of Book I, Aristotle explicitly identifies human good
with psychological good. Dividing goods into external goods, those of
the body, and those of the soul, he states that his account of
happiness agrees with those who hold it is a good of the soul. In
fact, in this account, happiness is closely related to traditionally
conceived psychological goods such as pleasure and moral virtue,
although the nature of the relation has yet to be shown
(1098b10-30). Still, in Book I Aristotle is laying the
foundation in his moral psychology for showing the link between the
moral virtues and happiness. In Book II he completes this foundation
when he turns to the question of which condition of the soul is to be
identified with (moral) virtue, or virtue of character.
In II.5, he says that conditions of the soul are either feelings
(*pathe*), capacities for feeling (*dynameis*) or
dispositions (*hexeis*). Feelings are such things as appetite,
anger, fear and generally those conditions that are accompanied by
pleasure and pain. Capacities are, for example, the simple capacity to
have these feelings. Finally, disposition is that condition of the
soul whereby we are well or badly off with respect to feelings. For
instance, people are badly disposed with respect to anger who
typically get angry violently or who typically get angry weakly
(1105b20-30). Virtue, as a condition of the soul, will be one of
these three. After arguing that virtue is neither feeling nor
capacity, Aristotle turns to what it means to be well or badly off
with respect to feelings. He says that in everything that is
continuous and divisible it is possible to take more, less, or an
equal amount (1106a25). This remark is puzzling until we realize that
he is actually talking about feelings. Feelings are continuous and
divisible; so one can take more, less, or an equal amount of them.
Presumably, when it comes to feeling anger, e.g., one can feel too
much, not enough, or a balanced amount. Aristotle thinks that what
counts as too much, not enough, or a balanced amount can vary to some
extent from individual to individual. At this point he is ready to
come back to moral virtue for it is concerned with feelings and
actions (to which feelings give rise), in which one can have excess,
deficiency, or the mean. To have a feeling like anger at the right
time, on the right occasion, towards the right people, for the right
purpose and in the right manner is to feel the right amount, the mean
between extremes of excess and deficiency; this is the mark of moral
virtue (1106a15-20). Finally, virtue is not a question only of
feelings since there is a mean between extremes of action. Presumably,
Aristotle means that the appropriate feeling - the mean between
the extremes in each situation - gives rise to the appropriate
action.
At last Aristotle is ready to discuss particular moral virtues.
Beginning with courage, he mentions here two feelings, fear and
confidence. An excessive disposition to confidence is rashness and an
excessive disposition to fear and a deficiency in confidence is
cowardice. When it comes to certain bodily pleasures and pains, the
mean is moderation. While the excess is profligacy, deficiency in
respect of pleasures almost never occurs. Aristotle gives a fuller
account of both of these virtues in Book III; however, the basic idea
remains. The virtue in each case is a mean between two extremes, the
extremes being vices. Virtue, then, is a reliable disposition whereby
one reacts in relevant situations with the appropriate feeling -
neither excessive nor deficient - and acts in the appropriate
way - neither excessively nor deficiently.
To complete the notion of moral virtue we must consider the role
reason plays in moral actions. Summing up at Book II.6, Aristotle says
virtue is a disposition to choose, lying in the mean which is relative
to us, determined by reason (1107a1). Since he is talking about
choosing actions, he is focusing on the way moral virtue issues in
actions. In turn, it is the role of practical wisdom
(*phronesis*) to determine choice. While moral virtues,
virtues of character, belong to the part of the soul which can obey
reason, practical wisdom is a virtue of the part of the soul that
itself reasons. The virtues of thought, intellectual virtues, are
knowledge (*episteme*), comprehension
(*nous*), wisdom (*sophia*), craft
(*techne*), and practical wisdom (1139b15-25). The
first three grasp the truth about what cannot be otherwise and is not
contingent. A good example of knowledge about what cannot be otherwise
is mathematics. Craft and practical wisdom pursue the truth that can
be had about what can be otherwise and is contingent. What can be
otherwise includes what is made - the province of craft -
and what is done - the province of practical wisdom (1140a1).
While Aristotle's account of practical wisdom raises several
problems, we will focus on only two closely related issues. He says
that it is the mark of someone with practical wisdom to deliberate
well about what leads to the good for himself (*to dunasthai
kalos bouleusasthai peri ta hauto(i) agatha*). This
good is not specific, such as health and strength, but is living well
in general (1140a25-30). This description of practical wisdom,
first of all, implies that it deliberates about actions; it is a skill
for discerning those actions which hit the mean between the two
extremes. However, the ambiguity of the phrase 'what leads to
the good' might suggest that practical wisdom deliberates only
about instrumental means to living well. Still since practical wisdom
determines which actions hit the mean between two extremes, such
actions are not instrumental means to living well - as though
living well were a separate state. Actions which hit the mean are
parts of living well; the good life is composed of actions under the
headings of, for instance, honor and pleasure, which achieve the mean.
In addition, the deliberation of practical wisdom does not have to be
confined to determining which actions hit the mean. While Aristotle
would deny that anyone deliberates about whether happiness is the end
of human life, we do deliberate about the constituents of happiness.
So, one might well deliberate about the ways in which honor and
pleasure fit into happiness.
Now we can discern the link between morality and happiness. While
happiness itself is excellent or virtuous activity of the soul, moral
virtue is a disposition to achieve the mean between two extremes in
feeling and in action. The missing link is that achieving the mean is
also excellent activity of the soul. Activity that expresses the
virtue of courage, for example, is also the best kind of activity when
it comes to the emotion of fear. Activity that expresses the virtue of
moderation is also excellent activity when it comes to the bodily
appetites. In this way, then, the happy person is also the virtuous
person. However, in Book I Aristotle has already pointed out the
problem of bodily and external goods in relation to happiness. Even if
happiness is virtuous activity of the soul, in some cases these goods
are needed to be virtuous - for example, one must have money to
be generous. In fact, the lack of good birth, good children, and
beauty can mar one's happiness, for the happy person does not
appear to be one who is altogether ugly, low born, solitary, or
childless, and even less so if he has friends and children who are
bad, or good friends and children who then die
(1099a30-1099b10). Aristotle is raising a problem that he does
not attempt to solve in this passage. Even if happiness is virtuous
activity of the soul, it does not confer immunity to the vicissitudes
of life.
Aristotle's moral psychology has further implications for his
account of happiness. In Book I, chapter 7, he said that human good is
virtuous activity of the soul but was indefinite about the virtues. In
most of the *Nicomachean Ethics* he talks about the moral
virtues, leaving the impression that virtuous activity is the same as
activity associated with moral virtues. In Book X, however, Aristotle
revisits the issue of virtuous activity. If happiness, he says, is
activity in accordance with virtue, it will be activity in accordance
with the highest. The highest virtue belongs to the best part of the
soul, i.e., the intellect (*nous*) or the part that governs in
the soul and contemplates the fine and godly, being itself the divine
part of the soul or that which is closest to the divine
(1177a10-20). Up to this point, Aristotle has apparently been
talking about the man of political action and the happiness that is
suitable to rational, embodied human beings. Active in the life of the
city, this person exercises courage, moderation, liberality, and
justice in the public arena. Now, instead of the life of an effective
and successful citizen, Aristotle is holding up the life of study and
contemplation as the one that achieves happiness - that is, the
highest human good, the activity of the highest virtue. Such a life
would achieve the greatest possible self-sufficiency and
invulnerability (1177a30). Indeed, at first he portrays these two
lives as so opposite that they seem incompatible.
In the end, however, he palliates the differences, leaving the
possibility for some way to harmonize the two (1178a30). The
differences between the two lives are rooted in the different aspects
of the soul. Moral virtues belong to the appetites and desires of the
sensory soul - the part obviously associated with the active
political life, when its activities are brought under the guidance and
control of excellent practical thought and judgment. The
"highest" virtues, those belonging to the scientific or
philosophical intellect, belong to theoretical reason. To concentrate
on these activities one must be appropriately disengaged from active
political life. While the latter description leads Aristotle to
portray as possible a kind of human life that partakes of divine
detachment (1178b5 ff), in the end human life is an indissoluble
composite of intellect, reason, sensation, desires, and appetites. For
Aristotle, strictly speaking, happiness simply *is* the
exercise of the highest virtues, those of theoretical reason and
understanding. But even persons pursuing those activities as their
highest good, and making them central to their lives, will need to
remain connected to daily life, and even to political affairs in the
community in which they live. Hence, they will possess and exercise
the moral virtues and those of practical thought, as well as those
other, higher, virtues, throughout their lives. Clearly, this
conception of happiness does not hold all virtue, moral and
intellectual, to be of equal value. Rather, Aristotle means the
intellectual virtue of study and contemplation to be the dominant part
of happiness. However, a problem remains, since we can understand
dominance in two ways. In the first version, the activity of
theoretical contemplation is the sole, exclusive component of
happiness and the exercise of the moral virtues and practical wisdom
is an instrumental means to happiness, but not integral to it. The
problem with this version of dominance is that it undermines what
Aristotle has said about the intrinsic value of the virtuous activity
of politically and socially engaged human beings, including
friendship. In a second version of dominance, we might understand
contemplation to be the principal, but not exclusive, constituent of
happiness. The problem with this version of dominance lies in
integrating such apparently incompatible activities into a coherent
life. If we give the proper weight to the divine good of theoretical
contemplation it may leave us little interest in the virtuous pursuits
of the moral goods arising from our political nature, except, again,
as means for establishing and maintaining the conditions in which we
may contemplate.
Like Plato, Aristotle is a eudaimonist in that he argues that virtue
(including in some way the moral virtues of courage, justice and the
rest) is the dominant and most important component of happiness.
However, he is not claiming that the only reason to be morally
virtuous is that moral virtue is a constituent of happiness. He says
that we seek to have virtue and virtuous action for itself as well
(*Nicomachean Ethics*, 1097b1-10); not to do so is to
fail even to *be* virtuous. In this regard, it is like
pleasure, which is also a constituent of the happy life. Like
pleasure, virtue is sought for its own sake. Still, as a constituent
of happiness, virtuous action is grounded in the highest end for a
human being. One can discern in the *Nicomachean Ethics* two
different types of argument for the link between virtue and happiness.
One is based on Aristotle's account of human nature and
culminates in the so-called function argument of Book I. If happiness
is excellent or virtuous activity of the soul, the latter is
understood by way of the uniquely human function. If one understands
the human function then one can understand what it is for that
function to be done excellently (1098a5-15). This sort of
argument has been criticized because it moves from a premise about
what humans are to a conclusion about what they ought to be. Such
criticism reflects the modern claim that there is a fact-value
distinction. One defense of Aristotle's argument holds that his
account of human nature is meant both to be objective and to offer the
basis for an understanding of excellence. The difference, then,
between modern moral theory and ancient is over what counts as an
objective account of human nature. However, even if we accept this
defense, we can still ask why a human would consider it good to
achieve human excellence as it is defined in the function argument. At
this point another argument for the link between happiness and virtue
- one more dispersed in the text of the *Nicomachean
Ethics* - becomes relevant; it is based on value terms only
and appeals to what a human might consider it good to achieve.
Aristotle describes virtuous activity of the soul as fine
(*kalos*) and excellent (*spoudaios*). Finally, the link
between virtue and happiness is forged if a human sees that it is good
to live a life that one considers to be fine and excellent.
(For further detailed discussion, see entry on
Aristotle's ethics.)
## 5. Cynics
Although the Cynics had an impact on moral thinking in Athens after
the death of Socrates, it is through later, and highly controversial,
reports of their deeds and sayings - rather than their writings
- that we know of them. Diogenes the Cynic, the central figure,
is famous for living in a wine jar (Diogenes Laertius [= DL] VI 23)
and going about with a lantern looking for 'a man' -
i.e., someone not corrupted (DL VI 41). He claimed to set courage over
against fortune, nature against convention, and reason against passion
(DL VI 38). Of this trio of opposites, the most characteristic for
understanding the Cynics is nature against convention. Diogenes taught
that a life according to nature was better than one that conformed to
convention. First of all, natural life is simpler. Diogenes ate,
slept, or conversed wherever it suited him and carried his food around
with him (DL VI 22). When he saw a child drinking out of its hand, he
threw away his cup, saying that a child had bested him in frugality
(DL VI 37). He said the life of humans had been made easy by the gods
but that humans had lost sight of this through seeking after honeyed
cakes, perfumes, and similar things (DL VI 44). With sufficient
training the life according to nature is the happy life (DL VI
71).
Accordingly Diogenes became famous for behavior that flouted
convention (DL VI 69). Still, he thought that the simple life not only
freed one from unnecessary concerns but was essential to virtue.
Although he says nothing specific about the virtues, he does commend
training for virtuous behavior (DL VI 70). His frugality certainly
bespeaks self-control. He condemned love of money, praised good men,
and held love to be the occupation of the idle (DL VI
50-51).
Besides his contempt for convention, what is most noteworthy about
Diogenes as a moral teacher is his emphasis on detachment from those
things most people consider good. In this emphasis, Diogenes seems to
have intensified a tendency found in Socrates. Certainly Socrates
could be heedless of convention and careless about providing for his
bodily needs. To Plato, however, Diogenes seemed to be Socrates gone
mad (DL VI 54). Still, in Diogenes' attitude, we can see at
least the beginning of the idea that the end of life is a
psychological state marked by detachment. Counseling the simple and
uncomplicated satisfaction of one's natural instincts and
desires, Diogenes urges detachment from those things held out by
convention to be good. While he is not so explicit, others develop the
theme of detachment into the notion of tranquility. The Stoics and
Epicureans hold that happiness depends on detachment from vulnerable
or difficult to obtain bodily and external goods and consists in a
psychological state more under one's own direct control. In this
way, happiness becomes associated (for the Epicureans) with
tranquility (*ataraxia*). Finally, in Skepticism, suspension of
judgment is a kind of epistemic detachment that provides tranquility.
So in Diogenes we find the beginnings of an idea that will become
central to later ancient moral theory.
## 6. Cyrenaics
The first of the Cyrenaic school was Aristippus, who came from Cyrene,
a Greek city on the north African coast. The account of his teachings,
in Diogenes Laertius, can sometimes seem inconsistent. Nevertheless,
Aristippus is interesting because, as a thorough hedonist, he is
something of a foil for Epicurus. First of all, pleasure is the end or
the goal of life - what everyone should seek in life. However,
the pleasure that is the end is not pleasure in general, or pleasure
over the long term, but immediate, particular pleasures. Thus the end
varies situation by situation, action by action. The end is not
happiness because happiness is the sum of particular pleasures (DL II
87-88). Accumulating the pleasures that produce happiness is
tiresome (DL II 90). Particular pleasures are ones that are close-by
or sure. Moreover, Aristippus said that pleasures do not differ from
one another, that one pleasure is not more pleasant than another. This
sort of thinking would encourage one to choose a readily available
pleasure rather than wait for a "better" one in the
future. This conclusion is reinforced by other parts of his teaching.
His school says that bodily pleasures are much better than mental
pleasures. While this claim would seem to contradict the idea that
pleasures do not differ, it does show preference for the immediately
or easily available pleasures of bodily gratification over, e.g., the
mental pleasure of a self-aware just person. In fact,
Aristippus' school holds that pleasure is good even if it comes
from the most unseemly things (DL II 88). Aristippus, then, seems to
have raised improvidence to the level of a principle.
Still, it is possible that the position is more than an elaborate
justification for short-sighted pleasure-seeking. Cyrenaics taught
that a wise man (*sophos*) (one who always pursues immediate
gratification) will in general live more pleasantly than a foolish
man. That prudence or wisdom (*phronesis*) is good, not
in itself but in its consequences, suggests that some balance, perhaps
even regarding others, is required in choosing pleasures (DL II 91).
The Cyrenaic attitude to punishment seems to be an example of
prudence. They hold that nothing is just, fine, or base by nature but
only by convention and custom; still a good man will do nothing out of
line through fear of punishment (DL II 93). Finally, they hold that
friendship is based in self-interest (DL II 91). These aspects of
Cyrenaic teaching suggest they are egoist hedonists. If so, there are
grounds for taking the interest of others into account as long as
doing so is based on what best provides an individual pleasure.
Nevertheless, Aristippus' school holds that the end of life is a
psychological good, pleasure. Still, it is particular pleasures not
the accumulation of these that is the end. As a consequence, their
moral theory contrasts sharply with others in antiquity. If we take
the claims about the wise man, prudence, and friendship to be
references to virtue, then Aristippus' school denies that virtue
is indispensable for achieving the end or goal of life. While they
hold that virtue is good insofar as it leads to the end, they seem
prepared to dispense with virtue in circumstances where it proves
ineffective. Even if they held virtue in more esteem, the Cyrenaics
would nonetheless not be eudaimonists since they deny that happiness
is the end of life.
## 7. Epicurus
Epicurean moral theory is the most prominent hedonistic theory in the
ancient world. While Epicurus holds that pleasure is the sole
intrinsic good and pain is what is intrinsically bad for humans, he is
also very careful about defining these two. Aware of the Cyrenaics who
hold that pleasures, moral and immoral, are the end or goal of all
action, Epicurus presents a sustained argument that pleasure,
correctly understood, will coincide with virtue.
In the *Letter to Menoeceus*, Epicurus begins by making a
distinction among desires. Some desires are empty or groundless and
others are natural; the natural are further subdivided into the merely
natural and the necessary. Finally, the necessary are those necessary
for happiness, those necessary for the body's freedom from
distress, and those necessary for life itself (*Letter to
Menoeceus* 127). A helpful scholiast (cf. *Principal
Doctrines* XXIX) gives us some examples; necessary desires are
ones that bring relief from unavoidable pain, such as drinking when
thirsty - if we don't drink when we need replenishment, we
will just get thirstier and thirstier, a painful experience. The
natural but not necessary are the ones that vary pleasure but are not
*needed* in order to motivate us to remove or ward off pain,
such as the desire for expensive food: we do not need to want, or to
eat, expensive food in order to ward off the pain of prolonged hunger.
Finally, the groundless desires are for such things as crowns and
statues bestowed as civic honors - these are things that when
desired at all are desired with intense and harmful cravings. Keeping
these distinctions in mind is a great help in one's life because
it shows us what we need to aim for. The aim of the blessed life is
the body's health and the soul's freedom from disturbance
(*ataraxia*) (128).
After this austere introduction, Epicurus makes the bold claim that
pleasure is the beginning and end of the blessed life. Then he makes
an important qualification. Just because pleasure is the good,
Epicureans do not seek every pleasure. Some lead to greater pain. Just
so, they do not avoid all pains; some lead to greater pleasures
(128-29). Such a position sounds, of course, like common-sense
hedonism. If one's aim is to have as much pleasure as possible
over the long term, it makes sense to avoid some smaller pleasures
that will be followed by larger pains. If one wants, for example, to
have as much pleasure from drinking wine as possible, then it would
make sense to exercise some judgment about how much to drink on an
occasion since the next morning's hangover will be very
unpleasant, and might keep one from having wine the next day. However,
his distinction among groundless, natural, and necessary desires
should make us suspicious that Epicurus is no common-sense hedonist.
The aim of life is not maximizing pleasures in the way the above
example suggests. Rather, real pleasure, the aim of life, is what we
experience through freedom from pain and distress. So it is not the
pains of the hangover or the possible loss of further bouts of wine
drinking that should restrain my drinking on this occasion. Rather one
should be aiming at the pleasure given by freedom from bodily pain and
mental distress (131-32).
The usual way to understand the pleasure of freedom from pain and from
distress is by way of the distinction between kinetic pleasures and
katastematic, or what are, following Cicero, misleadingly called
'static' ones (Diogenes Laertius X 136). The name of the
former implies motion and the name of the latter implies a state or
condition. The reason the distinction is important is that freedom
from pain and from distress is a state or condition, not a motion.
Epicurus holds not only that this state is a kind of pleasure but that
it is the most complete pleasure (*Principal Doctrines* III).
Modern commentators have taken various approaches to explaining why
this state should be considered a pleasure, as opposed to, e.g., the
attitude of taking pleasure in the fact that one is free of pain and
distress. After all, taking pleasure in a fact is not a feeling in the
same sense as the pleasure of drinking when thirsty. Since the latter
is usually taken to be an example of kinetic pleasure (and is
associated with the pain of thirst), sometimes katastematic pleasure
is said to be just kinetic pleasure free from pain and distress, e.g.,
the pleasure of satiety. A somewhat broader conception of katastematic
pleasure holds that it is the enjoyment of one's natural
constitution when one is not distracted by bodily pain or mental
distress. Finally, some commentators hold the pleasure of freedom from
pain and from distress to be a feeling available only to the wise
person, who properly appreciates simple pleasures. Since eating plain
bread and drinking water are usually not difficult to achieve, to the
wise, their enjoyment is not overwhelmed by fear (130-31).
Accordingly, the pleasure they take in these is free of pain and
distress. Of course, the wise do not have to confine themselves to
simple pleasures; they can enjoy luxurious ones as well - as
long as they avoid needing them, which entails fear.
At this point, we can see that Epicurus has so refined the account of
pleasure and pain that he is able to tie them to virtue. In the
*Letter to Menoeceus*, he claims, as a truth for which he does
not argue, that virtue and pleasure are inseparable and that living a
prudent, honorable, and just life is the necessary and sufficient
means to the pleasure that is the end of life (132). An example of
what he might mean is found in *Principal Doctrines*, where
Epicurus holds that justice is a contract among humans to avoid
suffering harm from one another. Then he argues that injustice is not
bad *per se* but is bad because of the fear that arises from
the expectation that one will be punished for his misdeeds. He
reinforces this claim by arguing that it is impossible for someone who
violates the compact to be confident that he will escape detection
(*Principal Doctrines* XXXIV-V). While one might doubt
this claim about the malefactor's state of mind, nevertheless, we
can see that Epicurus means to ground justice, understood as the rules
governing human intercourse, in his moral psychology, i.e., the need
to avoid distress.
Epicurus, like his predecessors in the ancient moral tradition,
identified the good as something psychological. However, instead of,
for example, the complex Aristotelian notion of excellent activity of
the soul, Epicurus settled on the fairly obvious psychological good of
pleasure. Of course, Aristotle argues that excellent activity of the
soul is intrinsically pleasurable (*Nicomachean Ethics*
1099a5). Still, in his account pleasure seems something like a
dividend of excellent activity (1175b30). By contrast, for Epicurus
pleasure itself is the end of life. However, since Epicureans hold
freedom from pain (*aponia*) and distress (*ataraxia*)
gives the preferable pleasure, they emphasize tranquility
(*ataraxia*) as the end of life. Modern utilitarians, for whom
freedom from pain and distress is not paramount, would include a
broader palette of pleasures.
Epicurus' doctrine can be considered eudaimonist. While Plato
and Aristotle maintain that virtue is constitutive of happiness,
Epicurus holds that virtue is the only means to achieve happiness,
where happiness is understood as a continuous experience of the
pleasure that comes from freedom from pain and from mental distress.
Thus, he is a eudaimonist in that he holds virtue is indispensable to
happiness; but he does not identify virtuous activity, in whole or in
part, with happiness. Finally, Epicurus is usually interpreted to have
held a version of psychological hedonism - i.e., everything we
do is done for the sake of pleasure - rather than ethical
hedonism - i.e., we ought to do everything for the sake of
pleasure. However, *Principal Doctrine* XXV suggests the latter
position; and when in the *Letter to Menoeceus* he says that
"we" do everything in order not to be in pain or in fear,
he might mean to be referring to "we" Epicureans. If so,
the claim would be normative. Still, once all disturbance of the soul
is dispelled, he says, one is no longer in need nor is there any other
good that could be added (128). Since this claim appears to be
descriptive, Epicurus could be taken, as he usually has been, to be
arguing that whatever we do is done for the sake of pleasure. In this
account, that aspect of human nature on which virtue is based is
fairly straightforward. The account is certainly less complex than,
e.g., Aristotle's. In turn, Epicurus seems to have argued in
such a way as to make pleasure the only reason for being virtuous. If
psychological hedonism is true, then when one realizes the necessary
link between virtue and pleasure, one has all the reason one needs to
be virtuous and the only reason one can have.
## 8. Stoics
The Stoics are well known for their teaching that the good is to be
identified with virtue. Virtues include logic, physics, and ethics
(*Stoicorum Veterum Fragmenta* [= SVF] II 35), as well as wisdom,
moderation, justice, and courage. To our modern ears, the first three
sound like academic subjects; but for the Stoics, they were virtues of
thought. However, orthodox Stoics do not follow the Aristotelian
distinction between intellectual and moral virtues because - as
we shall see - they hold that all human psychological functions,
including the affective and volitional, are rational in a single,
unified sense. For them, consequently, all virtues form a unity around
the core concept of knowledge. Finally, all that is required for
happiness (i.e., the secure possession of the good, of what is needed
to make one's life a thoroughly good one) - and the only
thing - is to lead a virtuous life. In this teaching Stoics are
addressing the problem of bodily and external goods raised by
Aristotle. Their solution takes the radical course of dismissing such
alleged goods from the account of happiness because they are not
necessary for virtue, and are not, in fact, in any way good at
all.
They argue that health, pleasure, beauty, strength, wealth, good
reputation, and noble birth are neither good nor bad. Since they can
be used well or badly and the good is invariably good, these assets
are not good. The virtues, however, are good (DL VII 102-103),
since they are perfections of our rationality, and only rationally
perfected thoughts and decisions can possibly have the features of
harmony and order in which goodness itself consists. Since possessing
and exercising virtue is happiness, happiness does not include such
things as health, pleasure, and wealth. Still, the Stoics do not
dismiss these assets altogether since they still have a kind of value.
These things are indifferent to happiness in that they do not add to
one's virtue nor detract from it, and so they do not add to or
take away from one's possession of the good. One is not more
virtuous because healthy nor less virtuous because ill. But being
healthy generally conforms with nature's plans for the lives of
animals and plants, so it is preferable to be healthy, and one should
try to preserve and maintain one's health. Health is, then, the
kind of value they call a preferred indifferent; but it is not in any
way a good, and it makes no contribution to the quality of one's
life as a good or a bad one, happy or miserable.
In order to understand the Stoic claims about the relation of virtue
to happiness, we can begin with virtue. Chrysippus says that virtue is
a craft (*techne*) having to do with the things of life
(*SVF* II 909). In other texts, we learn that the things of
life include impulse (*horme*). Each animal has an
impulse for self-preservation; it has an awareness of its constitution
and strives to preserve its integrity. There is also a natural impulse
to care for offspring. Humans, then, are naturally inclined toward
preserving life, health, and children. But then grown-up humans also
do these things under the guidance of reason; reason in the adult
human case is the craftsman of impulse (DL VII 85-6). This
latter phrase is significant because it implies that just following
natural impulse is not enough. In fact, it is not even possible for an
adult human, whose nature is such as to do everything they do
*by* reason, even to follow the sort of natural impulses that
animals and immature humans do in their actions. In order to lead a
virtuous life, reason must shape our impulses and guide their
expression in action.
Impulse is the key to understanding the relation between virtue and
happiness. An impulse is a propositional attitude that, when assented
to, leads to action, e.g., 'I should eat this bread now.'
However, impulses do not arise in a separate, non-rational part of the
soul. Stoics deny there are any non-rational desires of appetite
capable of impelling action. The soul, insofar as it provides
motivations and is the cause of our actions, consists of the
commanding faculty (*hegemonikon*) which is also reason
(*SVF* I 202). We can distinguish correct impulses into those
which treat virtue as the only good and those which treat things
indifferent as indifferent. By contrast, emotions or passions (*pathe*) are incorrect impulses that treat what is
indifferent as good. However, they do not come from a non-rational
part of the soul but are false judgments about the good, where
judgment is understood as assent to some impulse. Emotions, such as
desire, fear, pleasure, and pain, embody such erroneous judgments
(*SVF* III 391, 393, 394). For instance, the desire for health
arises from assenting to the impulse that embodies the false judgment
that health is good, instead of a preferred indifferent. The sage
- someone perfected in virtue - would never assent to such
false propositions and thus would never have emotions in this sense,
no feelings that carried him beyond reason's true assessment. He
would, however, experience feelings attuned to reason, *eupatheiai* - literally good emotions or feelings. For instance, he
would feel joy over his virtue, but not pleasure - the latter
being an emotion that treats the actual possession of an indifferent
as a good.
Knowledge, then, about what is good, bad, and indifferent is the heart
of virtue. Courage, e.g., is simply knowledge of what is to be
endured: the impulse to endure or not, and the *only* impulse
that is needed by courage, then follows automatically, as a product or
aspect of that knowledge. This tight unity in the soul is the basis
for the Stoic teaching about the unity of the virtues. Zeno (the
founder of the school) defines wisdom (*phronesis*), or
rather practical *knowledge*, in matters requiring distribution
as justice, in matters requiring choice as moderation, and in matters
requiring endurance as courage (Plutarch *On Moral Virtue*
440E-441D). Practical knowledge, then, is a single,
comprehensive knowledge of what is good and bad in each of these kinds
of circumstance.
Attending to this identification of virtue and practical knowledge is
a good way to understand the central Stoic teaching that virtue is
living in agreement with nature (*SVF* III 16). Nature includes
not only what produces natural impulses but also the rest of the
government of the cosmos, the natural world. The universe is governed
by right reason that pervades everything and directs (causes) the way
it functions - with the exception of the only rational animals
there are, the adult human beings: their actions are governed by
themselves, i.e., by assenting to, or withholding assent from
particular impulses. Nature is even identified with Zeus, who is said
to be the director of the administration of all that exists (DL VII
87-9). Since reason governs the universe for the good,
everything happens of necessity and for the overall good. Virtue,
then, includes understanding both one's individual nature as a
human being and the way nature arranges the whole universe. At this
point, we can appreciate the role of logic and physics as virtue since
these entail knowledge of the universe. This understanding is the
basis for living in agreement with the government of the universe,
i.e., with nature, by making one's decisions and actions be such
as to agree with Zeus's or nature's own plans, so far as
one can understand what those are.
It is in this context that we can best understand the Stoic teaching
about indifferents, such as health and wealth. An individual's
health is vulnerable to being lost if right reason that governs the
universe requires it for the good of the whole. If happiness depended
on having these assets and avoiding their opposites, then, in these
cases, happiness would be impossible. However, if virtue is living in
agreement with nature's government of the universe and if virtue
is the only good, one's happiness is entirely determined by his
patterns of assent and is therefore not vulnerable to being lost. If
one understands that the good of the whole dictates that in a
particular case one's health must be sacrificed, then one
recognizes that his happiness does not require health. We should not,
however, see this recognition as tantamount to renunciation. If the
Stoic notion of happiness has any relation at all to the ordinary
sense, renunciation cannot be a part of it. Rather, the Stoic view of
living in accordance with nature should imply not only understanding
the way right reason rules the universe but agreeing with it and even
desiring that things happen as they do. We can best appreciate the
notion that virtue is the good, then, if we take virtue as both
acknowledging that the universe is well governed and adopting the
point of view, so to speak, of the government (DL VII 87-9).
A refinement of the Stoic approach to indifferents gives us a way of
understanding what living in agreement with nature might look like.
After all, such things as health and wealth cannot just be dismissed
since they are something like the raw material of virtue. It is in
pursuing and using them that one exercises virtue, for instance. In
the attempt to integrate preferred indifferents into the pursuit of
the good, Stoics used an analogy with archery (*On Ends*
III.22). Since he aims to hit the target, the archer does everything
in his power to hit the target. Trying everything in his power
reflects the idea that such factors as a gust of wind - chance
happenings that cannot be controlled or foreseen - can intervene
and keep him from achieving the goal. To account for this type of
factor, it is claimed that the goal of the archer is really trying
everything in his power to attain the end. The analogy with the art of
living focuses on this shift from the goal of hitting the target to
the goal of trying everything in one's power to hit the target
(*On Ends* V.17-20). It is the art of trying everything
in one's power to attain such preferred indifferents as health
and wealth. However, if right reason, which governs the universe,
decides that one will not have either, then the sage follows right
reason. At this point the analogy with archery breaks down since, for
the sage, trying everything in one's power does not mean
striving until one fails; rather, it means seeking preferred
indifferents guided by right reason. By this reasoning, one should see
that virtue and happiness are not identified with achieving health and
wealth but with the way one seeks them and the evaluative propositions
one assents to. Finally, the art of living is best compared to such
skills as acting and dancing (*On Ends* V.24-5).
This way of relating preferred indifferents to the end, i.e., to
happiness, was challenged in antiquity as incoherent. Plutarch, for
instance, argues that it is contrary to common understanding to say
that the end is different from the reference point of all action. If
the reference point of all action - what one does everything to
achieve - is to have preferred indifferents such as health, then
the end is to have preferred indifferents. However, if the end is not
to have preferred indifferents (but, say, always to act prudently),
then the reference point of all action cannot be the preferred
indifferents (*On Common Conceptions* 1070F-1071E). The
Stoics are presented with a dilemma: either preferred indifferents are
integral to the end or they are not the object of choice.
The Stoics are extreme eudaimonists compared to Plato or Aristotle,
although they are clearly inspired by Socratic intellectualism. While
Plato clearly associates virtue and happiness, he never squarely faces
the issue whether happiness may require other goods, e.g., wealth and
health. Aristotle holds happiness to be virtuous activity of the soul;
but he raises - without solving - the problem of bodily
and external goods and happiness. For these two, virtue, together with
its active exercise, is the dominant and most important component of
happiness, while Stoics simply identify virtue and the good, and so
make it the only thing needed for a happy life. Still, Stoics do not
reduce happiness to virtue, as though 'happiness' is just
a name for being perfectly just, courageous, and moderate. Rather they
have independent ways of describing happiness. Following Zeno, all the
Stoics say it is a good flow of life. Seneca says the happy life is
peacefulness and constant tranquility. However, we should keep in mind
that, while they do not reduce happiness to virtue, their account of
happiness is not that of the common person. So in recommending virtue
because it secures happiness the Stoics are relying on happiness in a
special, although not idiosyncratic, sense. In fact, their idea of
happiness shares an important feature with the Epicurean, which puts a
premium on tranquil pleasures. In Stoicism as well, deliverance from
the vicissitudes of fate leads to a notion of happiness that
emphasizes tranquility. And, as we shall see, tranquility is a value
for the Skeptics.
Clearly, the Stoic account of virtue and happiness depends on their
theory about human nature. For Aristotle, virtue is perfection of the
human function and the Stoics follow in this line of thinking. While
their notion of virtue builds on their notion of the underlying human
nature, their account of the perfection of human nature is more
complex than Aristotle's. It includes accommodation to the
nature of the universe. Virtue is the perfection of human nature that
makes it harmonious with the workings of fate, i.e., with Zeus's
overall plan, regarded as the ineluctable, though providential, cause
of what happens in the world at large.
## 9. Pyrrhonian Skeptics
Pyrrho, a murky figure, roughly contemporary with Epicurus and Zeno
the Stoic, left no writings. In the late, anecdotal tradition he is
credited with introducing suspension of judgment (DL IX 61). He became
the eponymous hero for the founding of Pyrrhonian skepticism in the
first century B.C. (See entry on
*Pyrrho*
and on
*Ancient Skepticism*.)
Having discovered that for every argument examined so far there is an
opposing argument, Pyrrhonian Skeptics expressed no determinate
opinions (DL IX 74). This attitude would seem to lead to a kind of
epistemic paralysis. The Skeptics reply that they do not abolish,
e.g., relying on sight. They do not say that it is unreliable and they
do not refuse, personally, ever to rely on it. Rather, they have the
impression that there is no reason why we are entitled to rely on it,
in a given case or in general, even if we go ahead and rely on it
anyhow (DL IX 103). For example, if one has the visual impression of a
tower, that appearance is not in dispute. What is in dispute is
whether the tower is as it appears to be. For Skeptics, claiming that
the tower is as it appears to be is a dogmatic statement about the
object or the causal history of the appearance. As far as the Skeptic
is concerned, all such statements have failed to be adequately
justified, and all supporting arguments can be opposed by equally
convincing counter-arguments. Lacking any grounds on which to prefer
one dogmatic statement or view over another, he suspends judgment.
Such views have an obvious impact on practical and moral issues. First
of all, the Skeptics argue that, so far as we have been given any
compelling reason by philosophers to believe, there is nothing good or
bad by nature (Sextus Empiricus, *Outlines of Pyrrhonism*
[=*PH*] III 179-238, *Adversus Mathematicos*
[=*M*] XI 42-109). And just in case he does find such
reasons compelling, the Skeptic will resist the temptation to assent
by posing a general counter-argument: what is good by nature would be
recognized by everyone, but clearly not everyone agrees - for
instance, Epicurus holds that pleasure is good by nature but
Antisthenes holds that it is not (DL IX 101) - hence there is
nothing good by nature (see section 4 of the entry on
*Moral Skepticism*).
The practical consequences of suspending judgment are illustrated by
two traditions about the life of Pyrrho. In one, Pyrrho himself did
not avoid anything, whether it was wagons, precipices, or dogs. It
would appear that, suspending judgment about whether being hit by a
wagon was naturally good or bad, Pyrrho might walk into its path, or
not bother to get out of the way. His less skeptical friends kept him
alive - presumably by guiding him away from busy roads, vicious
dogs, and deep gorges. Another tradition, however, says that he only
theorized about suspension of judgments, and took action to preserve
himself and otherwise lead a normal life, but doing so without any
judgments as to natural good and bad. Living providently, he reached
ninety years of age (DL IX 62).
In any event, Pyrrho's suspension of judgment led to a certain
detachment. Not knowing what was good or bad by nature, he was
indifferent where dogmatists would be unhappy or at least anxious. For
instance, he performed household chores usually done by women (DL IX
63-66). Thus Pyrrho's skepticism detached him from the
dogmatic judgments of a culture in which a man's performing
women's work was considered demeaning. In turn, his skepticism
and suspension of judgment led to freedom from disturbance
(*ataraxia*) (DL IX 68). It is significant that this
psychological state, so important for Epicureans as part of the end of
life, should play a key role in Pyrrhonian skepticism at its beginning
with Pyrrho, but certainly in its development with Sextus
Empiricus.
While suspension of judgment in moral matters brings freedom from
disturbance, it does not lead to immoral behavior--any more than
it leads to the foolhardy behavior of the first tradition about
Pyrrho's life. Sextus generalizes the Skeptic teaching about
appearances to cover the whole area of practical activity. He says the
rules of everyday conduct are divided into four parts: (1) the
guidance from nature, (2) compulsion that comes from bodily states
like hunger and thirst, (3) traditional laws and customs about pious
and good living, (4) the teachings of the crafts (*PH*, I
21-24). The Skeptic is guided by all of these as by appearance,
and thus undogmatically. For instance, he would follow the traditional
laws about pious and good living, accepting these laws as the way
things appear to him to be in matters of piety and goodness but
claiming no knowledge and holding no beliefs.
Sextus says that the end of life is freedom from disturbance in
matters of belief, plus moderate states in matters of compulsion.
Suspension of judgment provides the former in that one is not
disturbed about which of two opposing claims is true, when (as always
seems to happen) one cannot rationally decide between them. Matters of
compulsion cover such things as bodily needs for food, drink, and
warmth. While the Skeptic undeniably suffers when hungry, thirsty, or
cold, he achieves a moderate state with respect to these sufferings
when compared to the person who both suffers them and believes they
are naturally bad. The Skeptic's suspension of judgment about
whether his suffering is naturally bad gives him a certain detachment
from the suffering, and puts him in a better condition than those who
also believe their suffering is naturally bad (*PH* I
25-30).
As a consequence, the Skeptic conception of the end of life is similar
in some ways to Epicurean and Stoic beliefs. For Epicureans the end is
freedom from pain and distress; the Stoic identification of virtue and
the good promises freedom from disturbance. While Pyrrho and Sextus
hold freedom from disturbance to be the end of life, they differ from
both schools over the means by which it is achieved. Both the Epicureans
and Stoics, for example, hold that a tranquil life is impossible in
the absence of virtue. By contrast, Sextus does not have a lot to say
in a positive vein about virtue, although he does recommend following
traditional laws about piety and goodness, and he indicates that the
Skeptic may live virtuously (*PH* I 17). Rather, it is for the
Skeptics an epistemic attitude embodied in a distinctive practice, not
virtue, that leads to the desired state. This alone would seem to be
enough to disqualify Pyrrhonism as a form of eudaimonism.
Sextus does however offer his skeptical practice as a corrective to
the dogmatists' misleading path to *eudaimonia*: it is
not possible to be happy while supposing things are good or bad by
nature (*M* XI 130, 144). It is the person who suspends
judgment that enjoys the most complete happiness (*M* XI 160,
140). It is important to observe that Sextus proposes his skeptical
*practice* as an alternative to dogmatic *theories*. For
the Skeptic makes no commitments, or even assertions, regarding the
supposedly objective conditions that are supposed by other
philosophers to underlie our natural, human *telos*.
Accordingly, some have objected that the kind of tranquility and
happiness that the Skeptic enjoys could just as well be produced
through pharmacology or, we might add, Experience Machines. More
generally speaking, the objection is that skeptical tranquility and
happiness comes at a cost we should not be willing to pay. By refusing
to accept the world is ever as it appears, the Skeptic becomes a
detached, passive spectator, undisturbed by any of the thoughts that
pass through his mind. Such detachment might seem to threaten his
ability to care about other people as he observes all that happens to
them with the same tranquil indifference. Also, he would seem to be at
best a moral conformist, unable or at least unwilling to engage in any
moral deliberation, either hypothetically or as a means to directly
address a pressing moral issue.
Sextus provides an example of the latter situation when he imagines
the Skeptic being compelled by a tyrant to commit some unspeakable
deed, or be tortured (*M* XI 164-66). Critics propose
that the Skeptic's choice will necessarily reveal his evaluative
convictions, and thus his inconsistency in claiming to have no such
convictions. But Sextus replies that he will simply opt for one or the
other in accordance with his family's laws and customs. In other
words, he will rely on whatever moral sentiments and dispositions he
happens to have. These cognitive states will neither be the product of
any rational consideration nor inform any premises for moral
reasoning or philosophical deliberation. Like Sartre's
existentialism, Sextus' skepticism offers a way to respond to
moral dilemmas without supposing there is a correct, or even
objectively better, choice.
We should note that it is not entirely fair to ask whether the
Pyrrhonist's moral life is one that we, non-Skeptics, would deem
good or praiseworthy. For, as Sextus would say, we are part of the
dispute regarding what kind of life is morally good and praiseworthy.
To claim that nothing really matters to the Skeptic, for example,
presupposes some account of the mental state of caring. In the end,
the question of how or whether a Skeptic might be morally good is
equivalent to asking what role belief, or knowledge, plays in the
moral goodness of people and their actions. If we suppose that one
must, at a minimum, have adequately justified evaluative beliefs
informing her actions in order for those actions, and the underlying
character, to count as morally good, then the Skeptic will be immoral,
or at least amoral. On the other hand, if the Skeptic manages to
undermine our confidence in that supposition, we will have to suspend
judgment regarding whether the Skeptic, or anyone else for that
matter, is in fact capable of performing morally good and
praiseworthy action, or whether their reasoned account of
*eudaimonia* is true.
The question we should ask here is whether and in what respect those
with strong, rational convictions will be better off than the Skeptic
when confronting moral dilemmas, or even everyday moral choices. If,
like the Skeptic, one suspects that the dogmatists have no
knowledge, let alone adequately justified beliefs to guide them, then
it seems that they are no better or worse off than those who simply
conform to moral customs and norms.
## 1. Preliminaries
In the West, virtue ethics' founding fathers are Plato and
Aristotle, and in the East it can be traced back to Mencius and
Confucius. It persisted as the dominant approach in Western moral
philosophy until at least the Enlightenment, suffered a momentary
eclipse during the nineteenth century, but re-emerged in
Anglo-American philosophy in the late 1950s. It was heralded by
Anscombe's famous article "Modern Moral Philosophy"
(Anscombe 1958) which crystallized an increasing dissatisfaction with
the forms of deontology and utilitarianism then prevailing. Neither of
them, at that time, paid attention to a number of topics that had
always figured in the virtue ethics tradition--virtues and vices,
motives and moral character, moral education, moral wisdom or
discernment, friendship and family relationships, a deep concept of
happiness, the role of the emotions in our moral life and the
fundamentally important questions of what sorts of persons we should
be and how we should live.
Its re-emergence had an invigorating effect on the other two
approaches, many of whose proponents then began to address these
topics in the terms of their favoured theory. (One consequence of this
has been that it is now necessary to distinguish "virtue
ethics" (the third approach) from "virtue theory", a
term which includes accounts of virtue within the other approaches.)
Interest in Kant's virtue theory has redirected
philosophers' attention to Kant's long neglected
*Doctrine of Virtue*, and utilitarians have developed
consequentialist virtue theories (Driver 2001; Hurka 2001). It has
also generated virtue ethical readings of philosophers other than
Plato and Aristotle, such as Martineau, Hume and Nietzsche, and
thereby different forms of virtue ethics have developed (Slote 2001;
Swanton 2003, 2011a).
Although modern virtue ethics does not have to take a
"neo-Aristotelian" or eudaimonist form (see section 2),
almost any modern version still shows that its roots are in ancient
Greek philosophy by the employment of three concepts derived from it.
These are *arete* (excellence or virtue),
*phronesis* (practical or moral wisdom) and *eudaimonia*
(usually translated as happiness or flourishing). (See Annas 2011 for
a short, clear, and authoritative account of all three.) We discuss
the first two in the remainder of this section. *Eudaimonia* is
discussed in connection with eudaimonist versions of virtue ethics in
the next.
### 1.1 Virtue
A virtue is an excellent trait of character. It is a disposition, well
entrenched in its possessor--something that, as we say, goes all
the way down, unlike a habit such as being a tea-drinker--to
notice, expect, value, feel, desire, choose, act, and react in certain
characteristic ways. To possess a virtue is to be a certain sort of
person with a certain complex mindset. A significant aspect of this
mindset is the wholehearted acceptance of a distinctive range of
considerations as reasons for action. An honest person cannot be
identified simply as one who, for example, practices honest dealing
and does not cheat. If such actions are done merely because the agent
thinks that honesty is the best policy, or because they fear being
caught out, rather than through recognising "To do otherwise
would be dishonest" as the relevant reason, they are not the
actions of an honest person. An honest person cannot be identified
simply as one who, for example, always tells the truth because it *is*
the truth, for one can have the virtue of honesty without being
tactless or indiscreet. The honest person recognises "That would
be a lie" as a strong (though perhaps not overriding) reason for
not making certain statements in certain circumstances, and gives due,
but not overriding, weight to "That would be the truth" as
a reason for making them.
An honest person's reasons and choices with respect to honest
and dishonest actions reflect her views about honesty, truth, and
deception--but of course such views manifest themselves with
respect to other actions, and to emotional reactions as well. Valuing
honesty as she does, she chooses, where possible to work with honest
people, to have honest friends, to bring up her children to be honest.
She disapproves of, dislikes, deplores dishonesty, is not amused by
certain tales of chicanery, despises or pities those who succeed
through deception rather than thinking they have been clever, is
unsurprised, or pleased (as appropriate) when honesty triumphs, is
shocked or distressed when those near and dear to her do what is
dishonest and so on. Given that a virtue is such a multi-track
disposition, it would obviously be reckless to attribute one to an
agent on the basis of a single observed action or even a series of
similar actions, especially if you don't know the agent's
reasons for doing as she did (Sreenivasan 2002).
Possessing a virtue is a matter of degree. To possess such a
disposition fully is to possess full or perfect virtue, which is rare,
and there are a number of ways of falling short of this ideal
(Athanassoulis 2000). Most people who can truly be described as fairly
virtuous, and certainly markedly better than those who can truly be
described as dishonest, self-centred and greedy, still have their
blind spots--little areas where they do not act for the reasons
one would expect. So someone honest or kind in most situations, and
notably so in demanding ones, may nevertheless be trivially tainted by
snobbery, inclined to be disingenuous about their forebears and less
than kind to strangers with the wrong accent.
Further, it is not easy to get one's emotions in harmony with
one's rational recognition of certain reasons for action. I may
be honest enough to recognise that I must own up to a mistake because
it would be dishonest not to do so without my acceptance being so
wholehearted that I can own up easily, with no inner conflict.
Following (and adapting) Aristotle, virtue ethicists draw a
distinction between full or perfect virtue and
"continence", or strength of will. The fully virtuous do
what they should without a struggle against contrary desires; the
continent have to control a desire or temptation to do otherwise.
Describing the continent as "falling short" of perfect
virtue appears to go against the intuition that there is something
particularly admirable about people who manage to act well when it is
especially hard for them to do so, but the plausibility of this
depends on exactly what "makes it hard" (Foot 1978:
11-14). If it is the circumstances in which the agent
acts--say that she is very poor when she sees someone drop a full
purse or that she is in deep grief when someone visits seeking
help--then indeed it is particularly admirable of her to restore
the purse or give the help when it is hard for her to do so. But if
what makes it hard is an imperfection in her character--the
temptation to keep what is not hers, or a callous indifference to the
suffering of others--then it is not.
### 1.2 Practical Wisdom
Another way in which one can easily fall short of full virtue is
through lacking *phronesis*--moral or practical
wisdom.
The concept of a virtue is the concept of something that makes its
possessor good: a virtuous person is a morally good, excellent or
admirable person who acts and feels as she should. These are commonly
accepted truisms. But it is equally common, in relation to particular
(putative) examples of virtues, to give these truisms up. We may say of
someone that he is generous or honest "to a fault". It is
commonly asserted that someone's compassion might lead them to
act wrongly, to tell a lie they should not have told, for example, in
their desire to prevent someone else's hurt feelings. It is also
said that courage, in a desperado, enables him to do far more wicked
things than he would have been able to do if he were timid. So it
would appear that generosity, honesty, compassion and courage despite
being virtues, are sometimes faults. Someone who is generous, honest,
compassionate, and courageous might not be a morally good
person--or, if it is still held to be a truism that they are,
then morally good people may be led by what makes them morally good to
act wrongly! How have we arrived at such an odd conclusion?
The answer lies in too ready an acceptance of ordinary usage, which
permits a fairly wide-ranging application of many of the virtue terms,
combined, perhaps, with a modern readiness to suppose that the
virtuous agent is motivated by emotion or inclination, not by rational
choice. *If* one thinks of generosity or honesty as the
disposition to be moved to action by generous or honest impulses such
as the desire to give or to speak the truth, *if* one thinks of
compassion as the disposition to be moved by the sufferings of others
and to act on that emotion, *if* one thinks of courage as mere
fearlessness or the willingness to face danger, then it will indeed
seem obvious that these are all dispositions that can lead to their
possessor's acting wrongly. But it is also obvious, as soon as
it is stated, that these are dispositions that can be possessed by
children, and although children thus endowed (bar the
"courageous" disposition) would undoubtedly be very nice
children, we would not say that they were morally virtuous or
admirable people. The ordinary usage, or the reliance on motivation by
inclination, gives us what Aristotle calls "natural
virtue"--a proto version of full virtue awaiting perfection
by *phronesis* or practical wisdom.
Aristotle makes a number of specific remarks about *phronesis*
that are the subject of much scholarly debate, but the (related)
modern concept is best understood by thinking of what the virtuous
morally mature adult has that nice children, including nice
adolescents, lack. Both the virtuous adult and the nice child have
good intentions, but the child is much more prone to mess things up
because he is ignorant of what he needs to know in order to do what he
intends. A virtuous adult is not, of course, infallible and may also,
on occasion, fail to do what she intended to do through lack of
knowledge, but only on those occasions on which the lack of knowledge
is not culpable. So, for example, children and adolescents often harm
those they intend to benefit either because they do not know how to
set about securing the benefit or because their understanding of what
is beneficial and harmful is limited and often mistaken. Such
ignorance in small children is rarely, if ever culpable. Adults, on
the other hand, are culpable if they mess things up by being
thoughtless, insensitive, reckless, impulsive, shortsighted, and by
assuming that what suits them will suit everyone instead of taking a
more objective viewpoint. They are also culpable if their
understanding of what is beneficial and harmful is mistaken. It is
part of practical wisdom to know how to secure real benefits
effectively; those who have practical wisdom will not make the mistake
of concealing the hurtful truth from the person who really needs to
know it in the belief that they are benefiting him.
Quite generally, given that good intentions are intentions to act well
or "do the right thing", we may say that practical wisdom
is the knowledge or understanding that enables its possessor, unlike
the nice adolescents, to do just that, in any given situation. The
detailed specification of what is involved in such knowledge or
understanding has not yet appeared in the literature, but some aspects
of it are becoming well known. Even many deontologists now stress the
point that their action-guiding rules cannot, reliably, be applied
without practical wisdom, because correct application requires
situational appreciation--the capacity to recognise, in any
particular situation, those features of it that are morally salient.
This brings out two aspects of practical wisdom.
One is that it characteristically comes only with experience of life.
Amongst the morally relevant features of a situation may be the likely
consequences, for the people involved, of a certain action, and this
is something that adolescents are notoriously clueless about precisely
because they are inexperienced. It is part of practical wisdom to be
wise about human beings and human life. (It should go without saying
that the virtuous are mindful of the consequences of possible actions.
How could they fail to be reckless, thoughtless and short-sighted if
they were not?)
The second is the practically wise agent's capacity to recognise
some features of a situation as more important than others, or indeed,
in that situation, as the only relevant ones. The wise do not see
things in the same way as the nice adolescents who, with their
under-developed virtues, still tend to see the personally
disadvantageous nature of a certain action as competing in importance
with its honesty or benevolence or justice.
These aspects coalesce in the description of the practically wise as
those who understand what is truly worthwhile, truly important, and
thereby truly advantageous in life, who know, in short, how to live
well.
## 2. Forms of Virtue Ethics
While all forms of virtue ethics agree that virtue is central and
practical wisdom required, they differ in how they combine these and
other concepts to illuminate what we should do in particular contexts
and how we should live our lives as a whole. In what follows we sketch
four distinct forms taken by contemporary virtue ethics, namely, a)
eudaimonist virtue ethics, b) agent-based and exemplarist virtue
ethics, c) target-centered virtue ethics, and d) Platonistic virtue
ethics.
### 2.1 Eudaimonist Virtue Ethics
The distinctive feature of eudaimonist versions of virtue ethics is
that they define virtues in terms of their relationship to
*eudaimonia*. A virtue is a trait that contributes to or is a
constituent of *eudaimonia* and we ought to develop virtues,
the eudaimonist claims, precisely because they contribute to
*eudaimonia*.
The concept of *eudaimonia*, a key term in ancient Greek moral
philosophy, is standardly translated as "happiness" or
"flourishing" and occasionally as
"well-being." Each translation has its disadvantages. The
trouble with "flourishing" is that animals and even plants
can flourish but *eudaimonia* is possible only for rational
beings. The trouble with "happiness" is that in ordinary
conversation it connotes something subjectively determined. It is for
me, not for you, to pronounce on whether I am happy. If I think I am
happy then I am--it is not something I can be wrong about
(barring advanced cases of self-deception). Contrast my being healthy
or flourishing. Here we have no difficulty in recognizing that I might
think I was healthy, either physically or psychologically, or think
that I was flourishing but be wrong. In this respect,
"flourishing" is a better translation than
"happiness". It is all too easy to be mistaken about
whether one's life is *eudaimon* (the adjective from
*eudaimonia*) not simply because it is easy to deceive oneself,
but because it is easy to have a mistaken conception of
*eudaimonia*, or of what it is to live well as a human being,
believing it to consist largely in physical pleasure or luxury for
example.
*Eudaimonia* is, avowedly, a moralized or value-laden concept
of happiness, something like "true" or "real"
happiness or "the sort of happiness worth seeking or
having." It is thereby the sort of concept about which there can
be substantial disagreement between people with different views about
human life that cannot be resolved by appeal to some external standard
on which, despite their different views, the parties to the
disagreement concur (Hursthouse 1999: 188-189).
Most versions of virtue ethics agree that living a life in accordance
with virtue is necessary for *eudaimonia.* This supreme good is
not conceived of as an independently defined state (made up of, say, a
list of non-moral goods that does not include virtuous activity) which
exercise of the virtues might be thought to promote. It is, within
virtue ethics, already conceived of as something of which virtuous
activity is at least partially constitutive (Kraut 1989). Thereby
virtue ethicists claim that a human life devoted to physical pleasure
or the acquisition of wealth is not *eudaimon*, but a wasted
life.
But although all standard versions of virtue ethics insist on that
conceptual link between *virtue* and *eudaimonia*,
further links are matters of dispute and generate different versions.
For Aristotle, virtue is necessary but not sufficient--what is
also needed are external goods which are a matter of luck. For Plato
and the Stoics, virtue is both necessary and sufficient for
*eudaimonia* (Annas 1993).
According to eudaimonist virtue ethics, the good life is the
*eudaimon* life, and the virtues are what enable a human being
to be *eudaimon* because the virtues just are those character
traits that benefit their possessor in that way, barring bad luck. So
there is a link between *eudaimonia* and what confers virtue
status on a character trait. (For a discussion of the differences
between eudaimonists see Baril 2014. For recent defenses of
eudaimonism see Annas 2011; LeBar 2013b; Badhwar 2014; and Bloomfield
2014.)
### 2.2 Agent-Based and Exemplarist Virtue Ethics
Rather than deriving the normativity of virtue from the value of
*eudaimonia*, agent-based virtue ethicists argue that other
forms of normativity--including the value of
*eudaimonia*--are traced back to and ultimately explained
in terms of the motivational and dispositional qualities of
agents.
It is unclear how many other forms of normativity must be explained in
terms of the qualities of agents in order for a theory to count as
agent-based. The two best-known agent-based theorists, Michael Slote
and Linda Zagzebski, trace a wide range of normative qualities back to
the qualities of agents. For example, Slote defines rightness and
wrongness in terms of agents' motivations: "[A]gent-based
virtue ethics ... understands rightness in terms of good
motivations and wrongness in terms of the having of bad (or
insufficiently good) motives" (2001: 14). Similarly, he explains
the goodness of an action, the value of *eudaimonia*, the
justice of a law or social institution, and the normativity of
practical rationality in terms of the motivational and dispositional
qualities of agents (2001: 99-100, 154, 2000). Zagzebski
likewise defines right and wrong actions by reference to the emotions,
motives, and dispositions of virtuous and vicious agents. For example,
"A wrong act = an act that the *phronimos*
characteristically would not do, and he would feel guilty if he did =
an act such that it is not the case that he might do it = an act that
expresses a vice = an act that is against a requirement of virtue (the
virtuous self)" (Zagzebski 2004: 160). Her definitions of
duties, good and bad ends, and good and bad states of affairs are
similarly grounded in the motivational and dispositional states of
exemplary agents (1998, 2004, 2010).
However, there could also be less ambitious agent-based approaches to
virtue ethics (see Slote 1997). At the very least, an agent-based
approach must be committed to explaining what one should do by
reference to the motivational and dispositional states of agents. But
this is not yet a sufficient condition for counting as an agent-based
approach, since the same condition will be met by *every*
virtue ethical account. For a theory to count as an agent-based form
of virtue ethics it must also be the case that the normative
properties of motivations and dispositions cannot be explained in
terms of the normative properties of something else (such as
*eudaimonia* or states of affairs) which is taken to be more
fundamental.
Beyond this basic commitment, there is room for agent-based theories
to be developed in a number of different directions. The most
important distinguishing factor has to do with how motivations and
dispositions are taken to matter for the purposes of explaining other
normative qualities. For Slote what matters are *this particular
agent's actual motives and dispositions*. The goodness of
action A, for example, is derived from the agent's motives when
she performs A. If those motives are good then the action is good, if
not then not. On Zagzebski's account, by contrast, a good or
bad, right or wrong action is defined not by this agent's actual
motives but rather by whether this is the sort of action a virtuously
motivated agent would perform (Zagzebski 2004: 160). Appeal to *the
virtuous agent's hypothetical motives and dispositions*
enables Zagzebski to distinguish between performing the right action
and doing so for the right reasons (a distinction that, as Brady
(2004) observes, Slote has trouble drawing).
Another point on which agent-based forms of virtue ethics might differ
concerns how one identifies virtuous motivations and dispositions.
According to Zagzebski's exemplarist account, "We do not
have criteria for goodness in advance of identifying the exemplars of
goodness" (Zagzebski 2004: 41). As we observe the people around
us, we find ourselves wanting to be like some of them (in at least
some respects) and not wanting to be like others. The former provide
us with positive exemplars and the latter with negative ones. Our
understanding of better and worse motivations and virtuous and vicious
dispositions is grounded in these primitive responses to exemplars
(2004: 53). This is not to say that every time we act we stop and ask
ourselves what one of our exemplars would do in this situation. Our
moral concepts become more refined over time as we encounter a wider
variety of exemplars and begin to draw systematic connections between
them, noting what they have in common, how they differ, and which of
these commonalities and differences matter, morally speaking.
Recognizable motivational profiles emerge and come to be labeled as
virtues or vices, and these, in turn, shape our understanding of the
obligations we have and the ends we should pursue. However, even
though the systematising of moral thought can travel a long way from
our starting point, according to the exemplarist it never reaches a
stage where reference to exemplars is replaced by the recognition of
something more fundamental. At the end of the day, according to the
exemplarist, our moral system still rests on our basic propensity to
take a liking (or disliking) to exemplars. Nevertheless, one could be
an agent-based theorist without advancing the exemplarist's
account of the origins or reference conditions for judgments of good
and bad, virtuous and vicious.
### 2.3 Target-Centered Virtue Ethics
The touchstone for eudaimonist virtue ethicists is a flourishing human
life. For agent-based virtue ethicists it is an exemplary
agent's motivations. The target-centered view developed by
Christine Swanton (2003), by contrast, begins with our existing
conceptions of the virtues. We already have a passable idea of which
traits are virtues and what they involve. Of course, this untutored
understanding can be clarified and improved, and it is one of the
tasks of the virtue ethicist to help us do precisely that. But rather
than stripping things back to something as basic as the motivations we
want to imitate or building it up to something as elaborate as an
entire flourishing life, the target-centered view begins where most
ethics students find themselves, namely, with the idea that
generosity, courage, self-discipline, compassion, and the like get a
tick of approval. It then examines what these traits involve.
A complete account of virtue will map out 1) its *field*, 2)
its *mode* of responsiveness, 3) its *basis* of moral
acknowledgment, and 4) its *target*. Different virtues are
concerned with different *fields*. Courage, for example, is
concerned with what might harm us, whereas generosity is concerned
with the sharing of time, talent, and property. The *basis* of
acknowledgment of a virtue is the feature within the virtue's
field to which it responds. To continue with our previous examples,
generosity is attentive to the benefits that others might enjoy
through one's agency, and courage responds to threats to value,
status, or the bonds that exist between oneself and particular others,
and the fear such threats might generate. A virtue's
*mode* has to do with how it responds to the bases of
acknowledgment within its field. Generosity *promotes* a good,
namely, another's benefit, whereas courage *defends* a
value, bond, or status. Finally, a virtue's *target* is
that at which it is aimed. Courage aims to control fear and handle
danger, while generosity aims to share time, talents, or possessions
with others in ways that benefit them.
A *virtue*, on a target-centered account, "is a
disposition to respond to, or acknowledge, items within its field or
fields in an excellent or good enough way" (Swanton 2003: 19). A
*virtuous act* is an act that hits the target of a virtue,
which is to say that it succeeds in responding to items in its field
in the specified way (233). Providing a target-centered definition of
a *right action* requires us to move beyond the analysis of a
single virtue and the actions that follow from it. This is because a
single action context may involve a number of different, overlapping
fields. Determination might lead me to persist in trying to complete a
difficult task even if doing so requires a singleness of purpose. But
love for my family might demand a different use of my time and
attention. In order to define right action, a target-centered view must
explain how we handle different virtues' conflicting claims on
our resources. There are at least three different ways to address this
challenge. A *perfectionist* target-centered account would
stipulate, "An act is right if and only if it is overall
virtuous, and that entails that it is the, or a, best action possible
in the circumstances" (239-240). A more
*permissive* target-centered account would not identify
'right' with 'best', but would allow an action
to count as right provided "it is good enough even if not the
(or a) best action" (240). A *minimalist* target-centered
account would not even require an action to be good in order to be
right. On such a view, "An act is right if and only if it is not
overall vicious" (240). (For further discussion of
target-centered virtue ethics, see Van Zyl 2014 and Smith 2016.)
### 2.4 Platonistic Virtue Ethics
The fourth form a virtue ethic might adopt takes its inspiration from
Plato. The Socrates of Plato's dialogues devotes a great deal of
time to asking his fellow Athenians to explain the nature of virtues
like justice, courage, piety, and wisdom. So it is clear that Plato
counts as a virtue theorist. But it is a matter of some debate whether
he should be read as a virtue ethicist (White 2015). What is not open
to debate is whether Plato has had an important influence on the
contemporary revival of interest in virtue ethics. A number of those
who have contributed to the revival have done so as Plato scholars
(e.g., Prior 1991; Kamtekar 1998; Annas 1999; and Reshotko 2006).
However, often they have ended up championing a eudaimonist version of
virtue ethics (see Prior 2001 and Annas 2011), rather than a version
that would warrant a separate classification. Nevertheless, there are
two variants that call for distinct treatment.
Timothy Chappell takes the defining feature of Platonistic virtue
ethics to be that "Good agency in the truest and fullest sense
presupposes the contemplation of the Form of the Good" (2014).
Chappell follows Iris Murdoch in arguing that "In the moral life
the enemy is the fat relentless ego" (Murdoch 1971: 51).
Constantly attending to our needs, our desires, our passions, and our
thoughts skews our perspective on what the world is actually like and
blinds us to the goods around us. Contemplating the goodness of
something we encounter--which is to say, carefully attending to
it "for its own sake, in order to understand it" (Chappell
2014: 300)--breaks this natural tendency by drawing our attention
away from ourselves. Contemplating such goodness with regularity makes
room for new habits of thought that focus more readily and more
honestly on things other than the self. It alters the quality of our
consciousness. And "anything which alters consciousness in the
direction of unselfishness, objectivity, and realism is to be
connected with virtue" (Murdoch 1971: 82). The virtues get
defined, then, in terms of qualities that help one "pierce the
veil of selfish consciousness and join the world as it really
is" (91). And good agency is defined by the possession and
exercise of such virtues. Within Chappell's and Murdoch's
framework, then, not all normative properties get defined in terms of
virtue. Goodness, in particular, is not so defined. But the kind of
goodness which is possible for creatures like us is defined by virtue,
and any answer to the question of what one should do or how one should
live will appeal to the virtues.
Another Platonistic variant of virtue ethics is exemplified by Robert
Merrihew Adams. Unlike Murdoch and Chappell, his starting point is not
a set of claims about our consciousness of goodness. Rather, he begins
with an account of the metaphysics of goodness. Like Murdoch and
others influenced by Platonism, Adams's account of goodness is
built around a conception of a supremely perfect good. And like
Augustine, Adams takes that perfect good to be God. God is both the
exemplification and the source of all goodness. Other things are good,
he suggests, to the extent that they resemble God (Adams 1999).
The resemblance requirement identifies a necessary condition for being
good, but it does not yet give us a sufficient condition. This is
because there are ways in which finite creatures might resemble God
that would not be suitable to the type of creature they are. For
example, if God were all-knowing, then the belief, "I am
all-knowing," would be a suitable belief for God to have. In
God, such a belief--because true--would be part of
God's perfection. However, as neither you nor I are all-knowing,
the belief, "I am all-knowing," in one of us would not be
good. To rule out such cases we need to introduce another factor. That
factor is the fitting response to goodness, which Adams suggests is
love. Adams uses love to weed out problematic resemblances:
"being excellent in the way that a finite thing can be consists
in resembling God in a way that could serve God as a reason for loving
the thing" (Adams 1999: 36).
Virtues come into the account as one of the ways in which some things
(namely, persons) could resemble God. "[M]ost of the excellences
that are most important to us, and of whose value we are most
confident, are excellences of persons or of qualities or actions or
works or lives or stories of persons" (1999: 42). This is one of
the reasons Adams offers for conceiving of the ideal of perfection as
a personal God, rather than an impersonal form of the Good. Many of
the excellences of persons of which we are most confident are virtues
such as love, wisdom, justice, patience, and generosity. And within
many theistic traditions, including Adams's own Christian
tradition, such virtues are commonly attributed to divine agents.
A Platonistic account like the one Adams puts forward in *Finite
and Infinite Goods* clearly does not derive all other normative
properties from the virtues (for a discussion of the relationship
between this view and the one he puts forward in *A Theory of
Virtue* (2006) see Pettigrove 2014). Goodness provides the
normative foundation. Virtues are not built on that foundation;
rather, as one of the varieties of goodness of whose value we are most
confident, virtues form part of the foundation. Obligations, by
contrast, come into the account at a different level. Moral
obligations, Adams argues, are determined by the expectations and
demands that "arise in a relationship or system of relationships
that is good or valuable" (1999: 244). Other things being equal,
the more virtuous the parties to the relationship, the more binding
the obligation. Thus, within Adams's account, the good (which
includes virtue) is prior to the right. However, once good
relationships have given rise to obligations, those obligations take
on a life of their own. Their bindingness is not traced directly to
considerations of goodness. Rather, they are determined by the
expectations of the parties and the demands of the relationship.
## 3. Objections to Virtue Ethics
A number of objections have been raised against virtue ethics, some of
which bear more directly on one form of virtue ethics than on others.
In this section we consider eight objections, namely, the (a)
application, (b) adequacy, (c) relativism, (d) conflict, (e)
self-effacement, (f) justification, (g) egoism, and (h) situationist
problems.
(a) In the early days of virtue ethics' revival, the approach was
associated with an "anti-codifiability" thesis about
ethics, directed against the prevailing pretensions of normative
theory. At the time, utilitarians and deontologists commonly (though
not universally) held that the task of ethical theory was to come up
with a code consisting of universal rules or principles (possibly only
one, as in the case of act-utilitarianism) which would have two
significant features: (i) the rule(s) would amount to a decision
procedure for determining what the right action was in any particular
case; (ii) the rule(s) would be stated in such terms that any
non-virtuous person could understand and apply it (them)
correctly.
Virtue ethicists maintained, contrary to these two claims, that it was
quite unrealistic to imagine that there could be such a code (see, in
particular, McDowell 1979). The results of attempts to produce and
employ such a code, in the heady days of the 1960s and 1970s, when
medical and then bioethics boomed and bloomed, tended to support the
virtue ethicists' claim. More and more utilitarians and
deontologists found themselves agreed on their general rules but on
opposite sides of the controversial moral issues in contemporary
discussion. It came to be recognised that moral sensitivity,
perception, imagination, and judgement informed by
experience--*phronesis* in short--is needed to apply
rules or principles correctly. Hence many (though by no means all)
utilitarians and deontologists have explicitly abandoned (ii) and much
less emphasis is placed on (i).
Nevertheless, the complaint that virtue ethics does not produce
codifiable principles is still a commonly voiced criticism of the
approach, expressed as the objection that it is, in principle, unable
to provide action-guidance.
Initially, the objection was based on a misunderstanding. Blinkered by
slogans that described virtue ethics as "concerned with Being
rather than Doing," as addressing "What sort of
person should I be?" but not "What should I do?", and as
being "agent-centered rather than act-centered," its
critics maintained that it was unable to provide
action-guidance. Hence, rather than being a normative rival to
utilitarian and deontological ethics, it could claim to be no more
than a valuable supplement to them. The rather odd idea was that all
virtue ethics could offer was, "Identify a moral exemplar and do
what he would do," as though the university student trying
to decide whether to study music (her preference) or engineering (her
parents' preference) was supposed to ask herself, "What
would Socrates study if he were in my circumstances?"
But the objection failed to take note of Anscombe's hint that a
great deal of specific action guidance could be found in rules
employing the virtue and vice terms ("v-rules") such as
"Do what is honest/charitable; do not do what is
dishonest/uncharitable" (Hursthouse 1999). (It is a noteworthy
feature of our virtue and vice vocabulary that, although our list of
generally recognised virtue terms is comparatively short, our list of
vice terms is remarkably, and usefully, long, far exceeding anything
that anyone who thinks in terms of standard deontological rules has
ever come up with. Much invaluable action guidance comes from avoiding
courses of action that would be irresponsible, feckless, lazy,
inconsiderate, uncooperative, harsh, intolerant, selfish, mercenary,
indiscreet, tactless, arrogant, unsympathetic, cold, incautious,
unenterprising, pusillanimous, feeble, presumptuous, rude,
hypocritical, self-indulgent, materialistic, grasping, short-sighted,
vindictive, calculating, ungrateful, grudging, brutal, profligate,
disloyal, and on and on.)
(b) A closely related objection has to do with whether virtue ethics
can provide an adequate account of right action. This worry can take
two forms. (i) One might think a virtue ethical account of right
action is extensionally inadequate. It is possible to perform a right
action without being virtuous and a virtuous person can occasionally
perform the wrong action without that calling her virtue into
question. If virtue is neither necessary nor sufficient for right
action, one might wonder whether the relationship between
rightness/wrongness and virtue/vice is close enough for the former to
be identified in terms of the latter. (ii) Alternatively, even if one
thought it possible to produce a virtue ethical account that picked
out all (and only) right actions, one might still think that at least
in some cases virtue is not what explains rightness (Adams
2006: 6-8).
Some virtue ethicists respond to the adequacy objection by rejecting
the assumption that virtue ethics ought to be in the business of
providing an account of right action in the first place. Following in
the footsteps of Anscombe (1958) and MacIntyre (1985), Talbot Brewer
(2009) argues that to work with the categories of rightness and
wrongness is already to get off on the wrong foot. Contemporary
conceptions of right and wrong action, built as they are around a
notion of moral duty that presupposes a framework of divine (or moral)
law or around a conception of obligation that is defined in contrast
to self-interest, carry baggage the virtue ethicist is better off
without. Virtue ethics can address the questions of how one should
live, what kind of person one should become, and even what one should
do without that committing it to providing an account of 'right
action'. One might choose, instead, to work with aretaic
concepts (defined in terms of virtues and vices) and axiological
concepts (defined in terms of good and bad, better and worse) and
leave out deontic notions (like right/wrong action, duty, and
obligation) altogether.
Other virtue ethicists wish to retain the concept of right action but
note that in the current philosophical discussion a number of distinct
qualities march under that banner. In some contexts, 'right
action' identifies the best action an agent might perform in the
circumstances. In others, it designates an action that is commendable
(even if not the best possible). In still others, it picks out actions
that are not blameworthy (even if not commendable). A virtue ethicist
might choose to define one of these--for example, the best
action--in terms of virtues and vices, but appeal to other
normative concepts--such as legitimate expectations--when
defining other conceptions of right action.
As we observed in section 2, a virtue ethical account need not attempt
to reduce *all* other normative concepts to virtues and vices.
What is required is simply (i) that virtue is *not* reduced to
some other normative concept that is taken to be more fundamental and
(ii) that some other normative concepts *are* explained in
terms of virtue and vice. This takes the sting out of the adequacy
objection, which is most compelling against versions of virtue ethics
that attempt to define all of the senses of 'right action'
in terms of virtues. Appealing to virtues *and* vices makes it
much easier to achieve extensional adequacy. Making room for normative
concepts that are not taken to be reducible to virtue and vice
concepts makes it even easier to generate a theory that is both
extensionally and explanatorily adequate. Whether one needs other
concepts and, if so, how many, is still a matter of debate among
virtue ethicists, as is the question of whether virtue ethics even
ought to be offering an account of right action. Either way virtue
ethicists have resources available to them to address the adequacy
objection.
Insofar as the different versions of virtue ethics all retain an
emphasis on the virtues, they are open to (c) the familiar charge of
cultural relativity. Is it not the case that different cultures embody
different virtues (MacIntyre 1985), and hence that the v-rules will
pick out actions as right or wrong only relative to a particular
culture? Different replies have been made to this charge.
One--the *tu quoque*, or "partners in crime"
response--exhibits a quite familiar pattern in virtue
ethicists' defensive strategy (Solomon 1988). They admit that,
for them, cultural relativism *is* a challenge, but point out
that it is just as much a problem for the other two approaches. The
(putative) cultural variation in character traits regarded as virtues
is no greater--indeed markedly less--than the cultural
variation in rules of conduct, and different cultures have different
ideas about what constitutes happiness or welfare. That cultural
relativity should be a problem common to all three approaches is
hardly surprising. It is related, after all, to the
"justification problem"
(see below),
the quite general metaethical problem of justifying one's moral
beliefs to those who disagree, whether they be moral sceptics,
pluralists, or members of another culture.
A bolder strategy involves claiming that virtue ethics has less
difficulty with cultural relativity than the other two approaches.
Much cultural disagreement arises, it may be claimed, from local
understandings of the virtues, but the virtues themselves are not
relative to culture (Nussbaum 1993).
Another objection to which the *tu quoque* response is
partially appropriate is (d) "the conflict problem." What
does virtue ethics have to say about dilemmas--cases in which,
apparently, the requirements of different virtues conflict because
they point in opposed directions? Charity prompts me to kill the
person who would be better off dead, but justice forbids it. Honesty
points to telling the hurtful truth, kindness and compassion to
remaining silent or even lying. What shall I do? Of course, the same
sorts of dilemmas are generated by conflicts between deontological
rules. Deontology and virtue ethics share the conflict problem (and
are happy to take it on board rather than follow some of the
utilitarians in their consequentialist resolutions of such dilemmas)
and in fact their strategies for responding to it are parallel. Both
aim to resolve a number of dilemmas by arguing that the conflict is
merely apparent; a discriminating understanding of the virtues or
rules in question, possessed only by those with practical wisdom, will
perceive that, in this particular case, the virtues do not make
opposing demands or that one rule outranks another, or has a certain
exception clause built into it. Whether this is all there is to it
depends on whether there are any irresolvable dilemmas. If there are,
proponents of either normative approach may point out reasonably that
it could only be a mistake to offer a resolution of what is, *ex
hypothesi*, irresolvable.
Another problem arguably shared by all three approaches is (e), that
of being self-effacing. An ethical theory is self-effacing if,
roughly, whatever it claims justifies a particular action, or makes it
right, had better not be the agent's motive for doing it.
Michael Stocker (1976) originally introduced it as a problem for
deontology and consequentialism. He pointed out that the agent who,
rightly, visits a friend in hospital will rather lessen the impact of
his visit on her if he tells her either that he is doing it because it
is his duty or because he thought it would maximize the general
happiness. But as Simon Keller observes, she won't be any better
pleased if he tells her that he is visiting her because it is what a
virtuous agent would do, so virtue ethics would appear to have the
problem too (Keller 2007). However, virtue ethics' defenders
have argued that not all forms of virtue ethics are subject to this
objection (Pettigrove 2011), and that those forms that are subject to
it are not seriously undermined by the problem (Martinez 2011).
Another problem for virtue ethics, which is shared by both
utilitarianism and deontology, is (f)
"the justification problem."
Abstractly conceived, this is the problem of how we justify or ground
our ethical beliefs, an issue that is hotly debated at the level of
metaethics. In its particular versions, for deontology there is the
question of how to justify its claims that certain moral rules are the
correct ones, and for utilitarianism of how to justify its claim that
all that really matters morally are consequences for happiness or
well-being. For virtue ethics, the problem concerns the question of
which character traits are the virtues.
In the metaethical debate, there is widespread disagreement about the
possibility of providing an external foundation for
ethics--"external" in the sense of being external to
ethical beliefs--and the same disagreement is found amongst
deontologists and utilitarians. Some believe that their normative
ethics can be placed on a secure basis, resistant to any form of
scepticism, such as what anyone rationally desires, or would accept or
agree on, regardless of their ethical outlook; others that it
cannot.
Virtue ethicists have eschewed any attempt to ground virtue ethics in
an external foundation while continuing to maintain that their claims
can be validated. Some follow a form of Rawls's coherentist
approach (Slote 2001; Swanton 2003); neo-Aristotelians pursue a form
of ethical naturalism.
A misunderstanding of *eudaimonia* as an unmoralized concept
leads some critics to suppose that the neo-Aristotelians are
attempting to ground their claims in a scientific account of human
nature and what counts, for a human being, as flourishing. Others
assume that, if this is not what they are doing, they cannot be
validating their claims that, for example, justice, charity, courage,
and generosity are virtues. Either they are illegitimately helping
themselves to Aristotle's discredited natural teleology
(Williams 1985) or producing mere rationalizations of their own
personal or culturally inculcated values. But McDowell, Foot,
MacIntyre and Hursthouse have all outlined versions of a third way
between these two extremes. *Eudaimonia*, in virtue ethics, is
indeed a moralized concept, but it is not only that. Claims about what
constitutes flourishing for human beings no more float free of
scientific facts about what human beings are like than ethological
claims about what constitutes flourishing for elephants. In both
cases, the truth of the claims depends *in part* on what kind
of animal they are and what capacities, desires and interests the
humans or elephants have.
The best available science today (including evolutionary theory and
psychology) supports rather than undermines the ancient Greek
assumption that we are social animals, like elephants and wolves and
unlike polar bears. No rationalizing explanation in terms of anything
like a social contract is needed to explain why we choose to live
together, subjugating our egoistic desires in order to secure the
advantages of co-operation. Like other social animals, our natural
impulses are not solely directed towards our own pleasures and
preservation, but include altruistic and cooperative ones.
This basic fact about us should make more comprehensible the claim
that the virtues are at least partially constitutive of human
flourishing and also undercut the objection that virtue ethics is, in
some sense, egoistic.
(g) The egoism objection has a number of sources. One is a simple
confusion. Once it is understood that the fully virtuous agent
characteristically does what she should without inner conflict, it is
triumphantly asserted that "she is only doing what she
*wants* to do and hence is being selfish." So when the
generous person gives gladly, as the generous are wont to do, it turns
out she is not generous and unselfish after all, or at least not as
generous as the one who greedily wants to hang on to everything she
has but forces herself to give because she thinks she should! A
related version ascribes bizarre reasons to the virtuous agent,
unjustifiably assuming that she acts as she does *because* she
believes that acting thus on this occasion will help her to achieve
*eudaimonia.* But "the virtuous agent" is just
"the agent with the virtues" and it is part of our
ordinary understanding of the virtue terms that each carries with it
its own typical range of reasons for acting. The virtuous agent acts
as she does because she believes that someone's suffering will
be averted, or someone benefited, or the truth established, or a debt
repaid, or ... thereby.
It is the exercise of the virtues during one's life that is held
to be at least partially constitutive of *eudaimonia*, and this
is consistent with recognising that bad luck may land the virtuous
agent in circumstances that require her to give up her life. Given the
sorts of considerations that courageous, honest, loyal, charitable
people wholeheartedly recognise as reasons for action, they may find
themselves compelled to face danger for a worthwhile end, to speak out
in someone's defence, to refuse to reveal the names of their
comrades even when they know that this will inevitably lead to their
execution, or to share their last crust and face starvation. On the view
that the exercise of the virtues is necessary but not sufficient for
*eudaimonia*, such cases are described as those in which the
virtuous agent sees that, as things have unfortunately turned out,
*eudaimonia* is not possible for them (Foot 2001: 95). On the
Stoical view that it is both necessary and sufficient, a
*eudaimon* life is a life that has been successfully lived
(where "success" of course is not to be understood in a
materialistic way) and such people die knowing not only that they have
made a success of their lives but that they have also brought their
lives to a markedly successful completion. Either way, such heroic
acts can hardly be regarded as egoistic.
A lingering suggestion of egoism may be found in the misconceived
distinction between so-called "self-regarding" and
"other-regarding" virtues. Those who have been insulated
from the ancient tradition tend to regard justice and benevolence as
real virtues, which benefit others but not their possessor, and
prudence, fortitude and providence (the virtue whose opposite is
"improvidence" or being a spendthrift) as not real virtues
at all because they benefit only their possessor. This is a mistake on
two counts. Firstly, justice and benevolence do, in general, benefit
their possessors, since without them *eudaimonia* is not
possible. Secondly, given that we live together, as social animals,
the "self-regarding" virtues do benefit others--those
who lack them are a great drain on, and sometimes grief to, those who
are close to them (as parents with improvident or imprudent adult
offspring know only too well).
The most recent objection (h) to virtue ethics claims that work in
"situationist" social psychology shows that there are no
such things as character traits and thereby no such things as virtues
for virtue ethics to be about (Doris 1998; Harman 1999). In reply,
some virtue ethicists have argued that the social psychologists'
studies are irrelevant to the multi-track disposition (see above) that
a virtue is supposed to be (Sreenivasan 2002; Kamtekar 2004). Mindful
of just how multi-track it is, they agree that it would be reckless in
the extreme to ascribe a demanding virtue such as charity to people of
whom they know no more than that they have exhibited conventional
decency; this would indeed be "a fundamental attribution
error." Others have worked to develop alternative, empirically
grounded conceptions of character traits (Snow 2010; Miller 2013 and
2014; however see Upton 2016 for objections to Miller). There have
been other responses as well (summarized helpfully in Prinz 2009 and
Miller 2014). Notable among these is a response by Adams (2006,
echoing Merritt 2000), who steers a middle road between "no
character traits at all" and the exacting standard of the
Aristotelian conception of virtue which, because of its emphasis on
phronesis, requires a high level of character integration. On his
conception, character traits may be "frail and
fragmentary" but still virtues, and not uncommon. But giving up
the idea that practical wisdom is the heart of all the virtues, as
Adams has to do, is a substantial sacrifice, as Russell (2009) and
Kamtekar (2010) argue.
Even though the "situationist challenge" has left
traditional virtue ethicists unmoved, it has generated a healthy
engagement with empirical psychological literature, which has also
been fuelled by the growing literature on Foot's *Natural
Goodness* and, quite independently, an upsurge of interest in
character education (see below).
## 4. Future Directions
Over the past thirty-five years most of those contributing to the
revival of virtue ethics have worked within a neo-Aristotelian,
eudaimonist framework. However, as noted in section 2, other forms of
virtue ethics have begun to emerge. Theorists have begun to turn to
philosophers like Hutcheson, Hume, Nietzsche, Martineau, and Heidegger
for resources they might use to develop alternatives (see Russell
2006; Swanton 2013 and 2015; Taylor 2015; and Harcourt 2015). Others
have turned their attention eastward, exploring Confucian, Buddhist,
and Hindu traditions (Yu 2007; Slingerland 2011; Finnigan and Tanaka
2011; McRae 2012; Angle and Slote 2013; Davis 2014; Flanagan 2015;
Perrett and Pettigrove 2015; and Sim 2015). These explorations promise
to open up new avenues for the development of virtue ethics.
Although virtue ethics has grown remarkably in the last thirty-five
years, it is still very much in the minority, particularly in the area
of applied ethics. Many editors of big textbook collections on
"moral problems" or "applied ethics" now try
to include articles representative of each of the three normative
approaches but are often unable to find a virtue ethics article
addressing a particular issue. This is sometimes, no doubt, because
"the" issue has been set up as a
deontological/utilitarian debate, but it is often simply because no
virtue ethicist has yet written on the topic. However, the last decade
has seen an increase in the amount of attention applied virtue ethics
has received (Walker and Ivanhoe 2007; Hartman 2013; Austin 2014; Van
Hooft 2014; and Annas 2015). This area can certainly be expected to
grow in the future, and it looks as though applying virtue ethics in
the field of environmental ethics may prove particularly fruitful
(Sandler 2007; Hursthouse 2007, 2011; Zwolinski and Schmidtz 2013;
Cafaro 2015).
Whether virtue ethics can be expected to grow into "virtue
politics"--i.e. to extend from moral philosophy into
political philosophy--is not so clear. Gisela Striker (2006) has
argued that Aristotle's ethics cannot be understood adequately
without attending to its place in his politics. That suggests that at
least those virtue ethicists who take their inspiration from Aristotle
should have resources to offer for the development of virtue politics.
But, while Plato and Aristotle can be great inspirations as far as
virtue ethics is concerned, neither, on the face of it, is an attractive
source of insight where politics is concerned. However, recent work
suggests that Aristotelian ideas can, after all, generate a
satisfyingly liberal political philosophy (Nussbaum 2006; LeBar
2013a). Moreover, as noted above, virtue ethics does not have to be
neo-Aristotelian. It may be that the virtue ethics of Hutcheson and
Hume can be naturally extended into a modern political philosophy
(Hursthouse 1990-91; Slote 1993).
Following Plato and Aristotle, modern virtue ethics has always
emphasised the importance of moral education, not as the inculcation
of rules but as the training of character. There is now a growing
movement towards virtues education, amongst both academics (Carr 1999;
Athanassoulis 2014; Curren 2015) and teachers in the classroom. One
exciting thing about research in this area is its engagement with
other academic disciplines, including psychology, educational theory,
and theology (see Cline 2015; and Snow 2015).
Finally, one of the more productive developments of virtue ethics has
come through the study of particular virtues and vices. There are now
a number of careful studies of the cardinal virtues and capital vices
(Pieper 1966; Taylor 2006; Curzer 2012; Timpe and Boyd 2014). Others
have explored less widely discussed virtues or vices, such as
civility, decency, truthfulness, ambition, and meekness (Calhoun 2000;
Kekes 2002; Williams 2002; and Pettigrove 2007 and 2012). One of the
questions these studies raise is "How many virtues are
there?" A second is, "How are these virtues related to one
another?" Some virtue ethicists have been happy to work on the
assumption that there is no principled reason for limiting the number
of virtues and plenty of reason for positing a plurality of them
(Swanton 2003; Battaly 2015). Others have been concerned that such an
open-handed approach to the virtues will make it difficult for virtue
ethicists to come up with an adequate account of right action or deal
with the conflict problem discussed above. Dan Russell has proposed
cardinality and a version of the unity thesis as a solution to what he
calls "the enumeration problem" (the problem of too many
virtues). The apparent proliferation of virtues can be significantly
reduced if we group virtues together with some being cardinal and
others subordinate extensions of those cardinal virtues. Possible
conflicts between the remaining virtues can then be managed if they
are tied together in some way as part of a unified whole (Russell
2009). This highlights two important avenues for future research, one
of which explores individual virtues and the other of which analyses
how they might be related to one another. |
epistemology-visual-thinking | ## 1. Introduction
Visual thinking is a feature of mathematical practice across many
subject areas and at many levels. It is so pervasive that the question
naturally arises: does visual thinking in mathematics have any
epistemically significant roles? A positive answer begets further
questions. Can we rationally arrive at a belief with the generality
and necessity characteristic of mathematical theorems by attending to
specific diagrams or images? If visual thinking contributes to warrant
for believing a mathematical conclusion, must the outcome be an
empirical belief? How, if at all, can visual thinking contribute to
understanding abstract mathematical subject matter?
Visual thinking includes thinking with external visual
representations (e.g., diagrams, symbol arrays, kinematic computer
images) *and* thinking with internal visual imagery; often the
two are used in combination, as when we are required to visually
imagine a certain spatial transformation of an object represented by a
diagram on paper or on screen. Almost always (and perhaps always)
visual thinking in mathematics is used in conjunction with non-visual
thinking. Possible epistemic roles include contributions to evidence,
proof, discovery, understanding and grasp of concepts. The kinds and
the uses of visual thinking in mathematics are numerous and diverse.
This entry will deal with some of the topics in this area that have
received attention and omit others. Among the omissions is the
possible explanatory role of visual representations in
mathematics. The topic of explanation within pure mathematics is
tricky and best dealt with separately; for this an excellent starting
place is the entry on
explanation in mathematics
(Mancosu 2011). Two other omissions are the
development of logic diagrams (Euler, Venn, Peirce and Shin) and the
nature and use of geometric diagrams in Euclid's
*Elements*, both of which are well treated in the entry
diagrams (Shin et al. 2013). The
focus here is on visual thinking generally, which includes thinking
with symbol arrays as well as with diagrams; there will be no attempt
here to formulate a criterion for distinguishing between symbolic and
diagrammatic thinking. However, the use of visual thinking in proving
and in various kinds of discovery will be covered in what
follows. Discussions of some related questions and some studies of
historical cases not considered here are to be found in the collection
*Diagrams in Mathematics: History and Philosophy* (Mumma and
Panza 2012).
## 2. Historical Background
"Mathematics can achieve nothing by concepts alone but
hastens at once to intuition" wrote Kant (1781/9: A715/B743),
before describing the geometrical construction in Euclid's proof
of the angle sum theorem (Euclid, Book 1, proposition 32). In a review
of 1816 Gauss echoes Kant:
>
> anybody who is acquainted with the essence of geometry knows that [the
> logical principles of identity and contradiction] are able to
> accomplish nothing by themselves, and that they put forth sterile
> blossoms unless the fertile living intuition of the object itself
> prevails everywhere. (Ewald 1996 [Vol. 1]: 300)
>
The word "intuition" here translates the German
"*Anschauung*", a word which applies to visual
imagination and perception, though it also has more general uses.
By the late 19th century a different view had emerged,
at least in foundational areas. In a celebrated text giving the first
rigorous axiomatization of projective geometry, Pasch wrote:
"the theorem is only truly demonstrated if the proof is
completely independent of the figure" (Pasch 1882), a view
expressed also by Hilbert in writing on the foundations of geometry
(Hilbert 1894). A negative attitude to visual thinking was not
confined to geometry. Dedekind, for example, wrote of an overpowering
feeling of dissatisfaction with appeal to geometric intuitions in
basic infinitesimal analysis (Dedekind 1872, Introduction). The
grounds were felt to be uncertain, the concepts employed vague and
unclear. When such concepts were replaced by precisely defined
alternatives without allusions to space, time or motion, our intuitive
expectations turned out to be unreliable (Hahn 1933).
In some quarters this view turned into a general disdain for visual
thinking in mathematics: "In the best books" Russell
pronounced "there are no figures at all" (Russell 1901).
Although this attitude was opposed by a few mathematicians, notably
Klein (1893), others took it to heart. Landau, for example, wrote a
calculus textbook without a single diagram (Landau 1934). But the
predominant view was not so extreme: thinking in terms of figures was
valued as a means of facilitating grasp of formulae and linguistic
text, but only reasoning expressed by means of formulae and text could
bear any epistemological weight.
By the late 20th century the mood had swung back in
favour of visualization: Mancosu (2005) provides an excellent survey.
Some books advertise their defiance of anti-visual puritanism in their
titles, for example *Visual Geometry and Topology* (Fomenko
1994) and *Visual Complex Analysis* (Needham 1997); mathematics
educators turn their attention to pedagogical uses of visualization
(Zimmerman and Cunningham 1991); the use of computer-generated imagery
begins to bear fruit at research level (Hoffman 1987; Palais 1999),
and diagrams find their way into research papers in abstract fields:
see for example the papers on higher dimensional category theory by
Joyal et al. (1996), Leinster (2004) and Lauda (2005, Other Internet
Resources). But attitudes
to the epistemology of visual thinking remain mixed. The discussion is
mostly concerned with the role of diagrams in proofs.
## 3. Visual thinking and proof
In some cases, it is claimed, a picture alone is a proof (Brown
1999: ch. 3). But that view is rare. Even the editor of *Proofs
without Words: Exercises in Visual Thinking*, writes "Of
course, 'proofs without words' are not really
proofs" (Nelsen 1993: vi). Expressions of the other extreme are
rare but can be found:
> [the diagram] has no proper place in the proof as
> such. For the proof is a syntactic object consisting only of sentences
> arranged in a finite and inspectable array. (Tennant 1986)
>
>
>
Between the extremes we find the view that, even if no picture
alone is a proof, visual representations can have a non-superfluous
role in reasoning that constitutes a proof. (This is not to deny that
there may be another proof of the same conclusion which does not involve
any visual representation.) Geometric diagrams, graphs and maps, all
carry information. Taking valid deductive reasoning to be the reliable
extraction of information from information already obtained, Barwise
and Etchemendy (1996:4) pose the following question: Why cannot the
representations composing a proof be visual as well as linguistic? The
sole reason for denying this role to visual representations is the
thought that, with the possible exception of very restricted cases,
visual thinking is unreliable, hence cannot contribute to proof. Is
that right?
Our concern here is thinking through the steps in a proof, either
for the first time (a first successful attempt to construct a proof)
or following a given proof. Clearly we want to distinguish between
visual thinking which merely accompanies the process of thinking
through the steps in a proof and visual thinking which is essential to
the process. This is not always straightforward as a proof can be
presented in different ways. How different can distinct presentations
be and yet be presentations of the same proof? There is no
context-invariant answer to this. Often mathematicians are happy to
regard two presentations as presenting the same proof if the central
idea is the same in both cases. But if one's main concern is
with what is involved in thinking through a proof, its central idea is
not enough to individuate it: the overall structure, the sequence of
steps and perhaps some other factors affecting the cognitive processes
involved will be relevant.
Once individuation of proofs has been settled, we can distinguish
between replaceable thinking and superfluous thinking, where these
attributions are understood as relative to a given argument or proof.
In the process of thinking through a proof, a given part of the
thinking is *replaceable* if thinking of some other kind could
stand in place of the given part in a process that would count as
thinking through the same proof. A given part of the thinking is
*superfluous* if its excision without replacement would be a
process of thinking through the same proof. Superfluous thinking may
be extremely valuable in facilitating grasp of the proof text and in
enabling one to understand the idea underlying the proof steps; but it
is not necessary for thinking through the proof.
It is uncontentious that the visual thinking involved in symbol
manipulations, for example in following the "algebraic"
steps of proofs of basic lemmas about groups, can be essential, that
is neither superfluous nor replaceable. The worry is about thinking
visually with diagrams, where "diagram" is used widely to
include all non-symbolic visual representations. Let us agree that
there can be superfluous diagrammatic thinking in thinking through a
proof. This leaves several possibilities.
* (a) All diagrammatic thinking in a process of thinking through a proof is superfluous.
* (b) Not all diagrammatic thinking in a process of thinking through a proof is superfluous; but if not superfluous it will be replaceable by non-diagrammatic thinking.
* (c) Some diagrammatic thinking in a process of thinking through a proof is neither superfluous nor replaceable by non-diagrammatic thinking.
### 3.1 The reliability question
The negative view stated earlier that diagrams can have no role in
proof entails claim (a). The idea behind (a) is that, because
diagrammatic reasoning is unreliable, if a process of thinking through
an argument contains some non-superfluous diagrammatic thinking, that
process lacks the epistemic security to be a case of thinking through
a proof.
This view, claim (a) in particular, is threatened by cases in which
the reliability of the diagrammatic thinking is demonstrated
non-visually. The clearest kind of example would be provided by a
formal system which has diagrams in place of formulas among its
syntactic objects, and types of inter-diagram transition for inference
rules. Suppose you take in such a formal system and an interpretation
of it, and then think through a proof of the system's soundness
with respect to that interpretation; suppose you then inspect a
sequence of diagrams, checking along the way that it constitutes a
derivation in the system; suppose finally that you recover the
interpretation to reach a conclusion. (The order is unimportant: one
can go through the derivation first and then follow the soundness
proof.) That entire process would constitute thinking through a proof
of the conclusion; and the diagrammatic thinking involved would not be
superfluous.
Shin et al. (2013) report that formal diagrammatic systems of logic
and geometry have been proven to be sound. People have indeed followed
proofs in these systems. That is enough to refute claim (a), the claim
that all diagrammatic thinking in thinking through a proof is
superfluous. For a concrete example, Figure 1
presents a derivation of Euclid's first theorem, that on any
straight line segment an equilateral triangle is constructible, in a
formal diagrammatic system of a part of Euclidean geometry (Miller
2001).
![[a three by three array of rectangles each containing a diagram. Going left to right then top to bottom, the first has a line segment with each end having a dot. The second is a circle with a radius drawn and dots on each end of the radius line segment. The third is the same the second except another overlapping circle is drawn using the same radius line segment but with the first circle's center dot now on the perimeter and the first circle's perimeter dot now the center of the second circle, dots are added at the two points the circles intersect. The fourth diagram is identical to the third except a line segment is drawn from the top intersection dot to the first circle's center dot. The fifth diagram is like the fourth except a line segment is drawn from the top intersection dot to the center dot of the second circle. ...]](fig1.png)
Figure 1
What about Tennant's claim that a proof is "a syntactic
object consisting only of sentences" as opposed to diagrams? A
proof is *never* a syntactic object. A formal derivation on its
own is a syntactic object but not a proof. Without an interpretation
of the language of the formal system the end-formula of the derivation
says nothing; and so nothing is proved. Without a demonstration of the
system's soundness with respect to the interpretation, one may
lack sufficient reason to believe that all derivable conclusions are
true. A formal derivation *plus* an interpretation and
soundness proof can be a proof of the derived conclusion, but that
whole package is not a syntactic object. Moreover, the part of the
proof which really is a syntactic object, the formal derivation, need
not consist solely of sentences; it can consist of diagrams.
With claim (a) disposed of, consider again claim (b) that, while
not all diagrammatic thinking in a process of thinking through a proof
is superfluous, all non-superfluous diagrammatic thinking will be
replaceable by non-diagrammatic thinking in a process of thinking
through that same proof. The visual thinking in following the proof of
Euclid's first theorem using Miller's formal system
consists in going through a sequence of diagrams and at each step
seeing that the next diagram results from a permitted alteration of
the previous diagram. It is clear that in a process that counts as
thinking through *this* proof, the diagrammatic thinking is
neither superfluous nor replaceable by non-diagrammatic thinking. That
knocks out (b), leaving only (c): some thinking that involves a
diagram in thinking through a proof is neither superfluous nor
replaceable by non-diagrammatic thinking (without changing the
proof).
### 3.2 Visual means in non-formal proving
Mathematical practice almost never proceeds by way of formal
systems. Outside the context of formal diagrammatic systems, the use
of diagrams is widely felt to be unreliable. A diagram can be
unfaithful to the described construction: it may represent something
with a property that is ruled out by the description, or without a
property that is demanded by the description. This is exemplified by
diagrams in the famous argument for the proposition that all triangles
are isosceles: the meeting point of an angle bisector and the
perpendicular bisector of the opposite side is represented as falling
inside the triangle, when it has to be outside (Rouse Ball 1939;
Maxwell 1959). Errors of this sort are comparatively rare, usually
avoidable with a modicum of care, and not inherent in the nature of
diagrams; so they do not warrant a general charge of
unreliability.
The major sort of error is unwarranted generalisation. Typically
diagrams (and other non-verbal visual representations) do not
represent their objects as having a property that is actually ruled
out by the intention or specification of the object to be
represented. But diagrams very frequently do represent their objects
as having properties that, though not ruled out by the specification,
are not demanded by it. Verbal descriptions can be discrete, in that
they supply no more information than is needed. But visual
representations are typically indiscrete, in that they supply too much
detail. This is often unavoidable, because for many properties or
kinds \(F\), a visual representation cannot represent something as
being \(F\) without representing it as being \(F\) *in a particular
way*. Any diagram of a triangle, for instance, must represent it
as having three acute angles or as having just two acute angles, even
if neither property is required by the specification, as would be the
case if the specification were "Let ABC be a triangle". As
a result there is a danger that in using a diagram to reason about an
arbitrary instance of class \(K\), we will unwittingly rely on a
feature represented in the diagram that is not common to all instances
of the class \(K\). Thus the risk of unwarranted generalisation is a
danger inherent in the use of many diagrams.
Indiscretion of diagrams is not confined to geometrical
figures. The dot or pebble diagrams of ancient mathematics used to
convince one of elementary truths of number theory necessarily display
particular numbers of dots, though the truths are general. Here is an
example, used to justify the formula for the \(n\)th
triangular number, i.e., the sum of the first \(n\) positive
integers.
![[a grid of blue dots 5 wide and 7 deep, on the right side is a brace embracing the right column labeled n+1 and on the bottom a brace embracing the bottom row labeled n]](fig2.png)
Figure 2
The conclusion drawn is that the sum of integers from 1 to \(n\) is
\(n(n + 1)/2\) for any positive integer \(n\), but the diagram
presents the case for \(n = 6\). We can perhaps avoid representing a
particular number of dots when we merely imagine a display of the
relevant kind; or if a particular number is represented, our
experience may not make us aware of the number--just as, when one
imagines the sky on a starry night, for no particular number \(k\) are
we aware that exactly \(k\) stars are represented. Even so, there is
likely to be some extra specificity. For example, in imagining an
array of dots of the form just illustrated, one is unlikely to imagine
just two columns of three dots, the rectangular array for \(n =
2\). Typically the subject will be aware of imagining an array with
more than two columns. This entails that an image is likely to have
unintended exclusions. In this case it would exclude the three-by-two
array. An image of a triangle representing all angles as acute would
exclude triangles with an obtuse angle or a right angle. The danger is
that the visual reasoning will not be valid for the cases that are
unintentionally excluded by the visual representation, with the result
that the step to the conclusion is an unwarranted generalisation.
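The rectangle construction that the dot diagram exemplifies can also be checked combinatorially for many values of \(n\) at once. Here is a minimal sketch; the coordinate encoding of the dots is ours, chosen purely for illustration, and the check starts at \(n = 2\) because a single dot does not form a triangular array:

```python
def triangular_array(n):
    """Dot positions of the triangular array: column i has i dots."""
    return {(i, j) for i in range(1, n + 1) for j in range(1, i + 1)}

def inverted_copy(n):
    """The upside-down copy, shifted so that the two arrays together
    tile an n-column, (n+1)-row rectangle of dots."""
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 2)}

# Check, for n = 2..59, that the two copies are disjoint and tile the
# rectangle, so that 2 * T(n) = n * (n + 1).
for n in range(2, 60):
    tri, inv = triangular_array(n), inverted_copy(n)
    rectangle = {(i, j) for i in range(1, n + 1) for j in range(1, n + 2)}
    assert not (tri & inv)          # the copies do not overlap
    assert tri | inv == rectangle   # together they fill the rectangle
    assert len(tri) == n * (n + 1) // 2
```

Of course, a finite computation like this only confirms instances; it does not by itself supply the generality that the epistemological discussion below is concerned with.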
What should we make of this? First, let us note that in a few cases
the image or diagram will not be over-specific. When in geometry all
instances of the relevant class are congruent to one another, for
instance all circles or all squares, the image or diagram will not be
over-specific for a generalisation about that class; so there will be
no unintended exclusions and no danger of unwarranted generalisation.
Here then are possibilities for reliable visual thinking in
proving.
To get clear about the other cases, where there *is* a
danger of over-generalisation, it helps to look at generalisation in
ordinary non-visual reasoning. Schematically put, in reasoning about
things of kind \(K\), once we have shown that from certain premisses
it follows that such-and-such a condition is true of arbitrary
instance \(c\), we can validly infer from those same premisses that
that condition is true of all \(K\)s, with the proviso that neither
the condition nor any premiss mentions \(c\). The proviso is required,
because if a premiss or the condition does mention \(c\), the
reasoning may depend on a property of \(c\) that is not shared by all
other \(K\)s and so the generalisation would be unsafe. For a trivial
example consider a step from "\(x = c\)" to
"\(\forall x [x = c]\)".
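The rule of generalisation on an arbitrary instance, with its proviso, can be displayed schematically; this is a standard natural-deduction formulation, and the notation is ours rather than the entry's:

```latex
\[
\frac{\Gamma \vdash \varphi(c)}
     {\Gamma \vdash \forall x\, \varphi(x)}
\qquad
\text{provided the name } c \text{ occurs neither in } \Gamma
\text{ nor in } \varphi(x).
\]
```

The trivial example just given violates the proviso: the premiss \(x = c\) mentions \(c\), so the step to \(\forall x [x = c]\) is blocked.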
A question we face is whether, in order to come to *know*
the truth of a conclusion by following an argument involving
generalisation on an arbitrary instance (a.k.a. universal
generalisation, or universal quantifier introduction), the thinking
must include a conscious, explicit check that the proviso is met. It
is clearly not enough that the proviso is in fact met. For in that
case it might just be the thinker's good luck that the proviso
is met; hence the thinker would not know that the generalisation is
valid and so would not have genuinely thought through the proof at
that step.
This leaves two options. The strict option is that without a
conscious, explicit check one has not really thought through the
proof. The relaxed option is that one *can* properly think
through the proof without checking that the proviso is met, but only
if one is sensitive to the potential error and would detect it in
otherwise similar arguments. For then one is not just lucky that the
proviso is met. Being sensitive in this context consists in being
alert to dependence on features of the arbitrary instance not shared
by all members of the class of generalisation, a state produced by a
combination of past experience and current vigilance. Without
compelling reason to prefer one of these options, decisions on what is
to count as proving or following a proof must be conditional.
How does all this apply to generalizing from visual thinking about
an arbitrary instance? Take the example of the visual route to the
formula for triangular numbers using the diagram of Figure 2. The diagram reveals that the formula holds
for the 6th triangular number. The generalisation to all
triangular numbers is justified only if the visuo-spatial method used
is applicable to the \(n\)th triangular number for all
positive integers \(n\), that is, provided that the method used does
not depend on a property not shared by all positive integers. A
conscious, explicit check that this proviso is met requires making
explicit the method exemplified for 6 and proving that the method is
applicable for all positive integers in place of 6. (For a similar
idea in the context of automating visual arguments, see Jamnik
2001). This is not done in practice when thinking visually, and so if
we accept the strict option for thinking through a proof involving
generalisation, we would have to accept that the visual route to the
formula for triangular numbers does not amount to thinking through a
proof of it; and the same would apply to the familiar visual routes to
other general positive integer formulas, such as that \(n^2 =\) the
sum of the first \(n\) odd numbers.
But what if the strict option for proving by generalisation on an
arbitrary instance is too strict, and the relaxed option is right?
When arriving at the formula in the visual way indicated, one does not
pay attention to the fact that the visual display represents the
situation for the 6th triangular number; it is as if the
mind had somehow extracted a general schema of visual reasoning from
exposure to the particular case, and had then proceeded to reason
schematically, converting a schematic result into a universal
proposition. What is required, on the relaxed option, is sensitivity
to the possibility that the schema is not applicable to all positive
integers; one must be so alert to ways a schema of the given kind can
fall short of universal applicability that if one had been presented
with a schema that did fall short, one would have detected the
failure.
In the example at hand, the schema of visual reasoning involves at
the start taking a number \(k\) to be represented by a column of \(k\)
dots, thence taking the triangular array of \(n\) columns to represent
the sum of the first \(n\) positive integers, thence taking that array
combined with an inverted copy to make a rectangular array of \(n\)
columns of \(n+1\) dots. For a schema starting this way to be
universally applicable, it must be possible, given any positive
integer \(n\), for the sum of the first \(n\) positive integers to be
represented in the form of a triangular array, so that combined with
an inverted copy one gets a rectangular array. This actually fails at
the extreme case: \(n = 1\). The formula \(n(n + 1)/2\) holds for
this case; but that is something we know by substituting
"1" for the variable in the formula, not by the visual
method indicated. That method cannot be applied to \(n = 1\), because
a single dot does not form a triangular array, and combined with a
copy it does not form a rectangular array. But we can check that the
method works for all positive integers after the first, using visual
reasoning to assure ourselves that it works for 2 and that if the
method works for \(k\) it works for \(k+1\). Together with this
reflective thinking, the visual thinking sketched earlier constitutes
following a proof of the formula for the \(n\)th triangular
number for all integers \(n > 1\), at least if the relaxed view of
thinking through a proof is correct. Similar conclusions hold in the
case of other "dot" arguments (Giaquinto 1993, 2007:
ch. 8). So in some cases when the visual representation carries
unwanted detail, the danger of over-generalisation in visual reasoning
can be overcome.
But the fact that this is frequently missed by commentators
suggests that the required sensitivity is often absent. Missing an
untypical case is a common hazard in attempts at visual proving. A
well-known example is the proof of Euler's formula \(V - E + F =
2\) for polyhedra by "removing triangles" of a
triangulated planar projection of a polyhedron. One is easily
convinced by the thinking, but only because the polyhedra we normally
think of are convex, while the exceptions are not convex. But it is
also easy to miss a case which is not untypical or extreme when
thinking visually. An example is Cauchy's attempted proof
(Cauchy 1813) of the claim that if a convex polygon is transformed
into another polygon keeping all but one of the sides constant, then
if some or all of the internal angles at the vertices increase, the
remaining side increases, while if some or all of the internal angles
at the vertices decrease, the remaining side decreases. The argument
proceeds by considering what happens when one transforms a polygon by
increasing (or decreasing) angles, angle by angle. But in a trapezoid,
changing a single angle can turn a convex polygon into a concave
polygon, and this invalidates the argument (Lyusternik 1963).
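The convex/non-convex contrast behind the Euler-formula example can be made concrete numerically. A minimal sketch, with the vertex, edge and face counts entered by hand (these are standard counts, not computed from geometry); the toroidal "picture frame" is one classic non-convex exception:

```python
# (V, E, F) for three convex solids and one toroidal (genus-1,
# non-convex) polyhedron, the "picture frame".
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "picture frame (toroidal)": (16, 32, 16),
}

for name, (v, e, f) in polyhedra.items():
    print(f"{name}: V - E + F = {v - e + f}")
# The three convex solids give 2; the toroidal polyhedron gives 0.
```

The "removing triangles" argument silently assumes the projection behaviour of the convex case, which is why such exceptions are easy to miss when one reasons from a typical picture.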
The frequency of such mistakes indicates that visual arguments
(other than symbol manipulations) often lack the transparency required
for proof. Even when a visual argument is in fact sound, its soundness
may not be clear, in which case the argument is not a way of
*proving* the truth of the conclusion, though it may be a way
of discovering it. But this is consistent with the claim that visual
non-symbolic thinking can be (and often is) part of a way of proving
something.
An example from knot theory will substantiate the modal part of
this claim. To present the example, we need some background
information, which will be given with a minimum of technical
detail.
A *knot* is a tame closed non-self-intersecting curve in
Euclidean 3-space.
In other words, knots are just the tame curves in Euclidean 3-space
which are homeomorphic to a circle. The word "tame" here
stands for a property intended to rule out certain pathological cases,
such as curves with infinitely nested knotting. There is more than one
way of making this mathematically precise, but we have no need for
these details. A knot has a specific geometric shape, size and
axis-relative position. Now imagine it to be made of flexible yet
unbreakable yarn that is stretchable and shrinkable, so that it can be
smoothly transformed into other knots without cutting or gluing. Since
our interest in a knot is the nature of its knottedness regardless of
shape, size or axis-relative position, the real focus of interest is
not just the knot but all its possible transforms. A way to think of
this is to imagine a knot transforming continuously, so that every
possible transform is realized at some time. Then the thing of central
interest would be the object that persists over time in varying forms,
with knots strictly so called being the things captured in each
particular freeze frame. Mathematically, we represent the relevant
entity as an equivalence class of knots.
Two knots are *equivalent* iff one can be smoothly deformed into the other by stretching, shrinking, twisting, flipping, repositioning or in any other way that does not involve cutting, gluing or passing one strand through another.
The relevant kind of deformation forbids eliminating a knotted part
by shrinking it down to a point. Again there are mathematically
precise definitions of knot-equivalence. Figure 3 gives diagrams of
equivalent knots, instances of a trefoil.
![[a closed line which goes under, over, under, over, under, over itself forming a shape with three nodes]](fig3a.png)
![[a closed line which goes under, over, under, over, under, over itself but forming a shape closer to a figure 8 inside an oval]](fig3b.png)
(a)
(b)
Figure 3
Diagrams like these are not merely illustrations; they also have an
operational role in knot theory. But not any picture of a knot will do
for this purpose. We need to specify:
A *knot diagram* is a regular projection of a knot onto a plane which, when there is a crossing, tells us which strand passes over the other.
Regularity here is a combination of conditions. In particular,
regularity entails that not more than two points of the strict knot
project to the same point on the plane, and that two points of the
strict knot project to the same point on the plane only where there is
a crossing. For more on diagrams in knot theory see (De Toffoli and
Giardino 2014).
A major task of knot theory is to find ways of telling whether two
knot diagrams are diagrams of equivalent knots. In particular we will
want to know if a given knot diagram represents a knot equivalent to
an *unknot*, that is, a knot representable by a knot diagram
without crossings.
One way of showing that a knot diagram represents a knot equivalent
to an unknot is to show that the diagram can be transformed into one
without crossings by a sequence of atomic moves, known as Reidemeister
moves. The relevant background fact is Reidemeister's theorem,
which links the visualizable diagrammatic changes to the
mathematically precise definition of knot equivalence: Two knots are
equivalent if and only if there is a finite sequence of Reidemeister
moves taking a knot diagram of one to a knot diagram of the
other. Figure 4 illustrates. Each knot diagram is changed into the
adjacent knot diagram by a Reidemeister move; hence the knot
represented by the leftmost diagram is equivalent to the unknot.
![[a closed line that goes under, under, under, over, over, over but forming otherwise a shape much like figure 3a]](fig4a.png)
![[a closed line that goes over, under forming a shape much like a loop within a loop]](fig4b.png)
![[a closed line that forms a distorted loop with no intersections]](fig4c.png)
(a)
(b)
(c)
Figure 4
In contrast to these, the knot represented by the left knot diagram
of Figure 3, a trefoil, may seem impossible to
deform into an unknot. And in fact it is. To prove it, we can use a
knot invariant known as colourability. An arc in a knot diagram is a
maximal part between crossings (or the whole thing if there are no
crossings). Colourability is this:
A knot diagram is *colourable* if and only if each of its arcs can be coloured one of three different colours so that (a) at least two colours are used and (b) at each crossing the three arcs are all coloured the same or all coloured differently.
The reference to colours here is inessential. Colourability is in
fact a specific case of a kind of combinatorial property known as mod
\(p\) labelling (for \(p\) an odd prime). Colourability is a knot
invariant in the sense that if one diagram of a knot is colourable
every diagram of that knot and of any equivalent knot is
colourable. (By Reidemeister's theorem this can be proved by
showing that each Reidemeister move preserves colourability.) A
standard diagram of an unknot, a diagram without crossings, is clearly
not colourable because it has only one arc (the whole thing) and so
two colours cannot be used. So to complete the proof that the
trefoil is not equivalent to an unknot, we need only prove that our
trefoil diagram is colourable. This can be done visually. Colour each
arc of the knot diagram one of the three colours red, green or blue so
that no two arcs have the same colour (or visualize this). Then do a
visual check of each crossing, to see that at each crossing the three
meeting arcs are all coloured differently. That visual part of the
proof is clearly non-superfluous and non-replaceable (without changing
the proof). Moreover, the soundness of the argument is quite
transparent. So here is a case of a non-formal, non-symbolic visual
way of proving a mathematical truth.
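The colourability check just described can also be carried out mechanically. Here is a minimal Python sketch (the encoding of a diagram as numbered arcs, with each crossing recorded as a triple of arc indices, is an assumption of the sketch, not standard knot-theoretic notation):

```python
from itertools import product

def colourable(n_arcs, crossings):
    """Brute-force test of the colourability definition: each arc gets
    one of three colours so that (a) at least two colours are used and
    (b) at each crossing the three arcs are all the same or all differ."""
    for colours in product(range(3), repeat=n_arcs):
        if len(set(colours)) < 2:
            continue  # condition (a) fails
        if all(len({colours[a], colours[b], colours[c]}) in (1, 3)
               for a, b, c in crossings):
            return True
    return False

# Standard trefoil diagram: three arcs, three crossings, each crossing
# involving all three arcs.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(colourable(3, trefoil))  # True

# A crossing-free unknot diagram has a single arc, so condition (a)
# can never be satisfied.
print(colourable(1, []))       # False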
### 3.3 A dispute: diagrams in proofs in analysis
Where notions involving the infinite are in play, such as many
involving limits, the use of diagrams is famously risky. For this
reason it has been widely thought that, beyond some very simple cases,
arguments in real and complex analysis in which diagrams have a
non-superfluous role are not genuine proofs. Bolzano [1817] expressed
this attitude with regard to the intermediate value theorem for the
real numbers (IVT) before giving a purely analytic proof, arguing that
spatial thinking could not be used to help justify the IVT. James
Robert Brown (1999) takes issue with Bolzano on this point. The IVT
is this:
If \(f\) is a real-valued function of a real variable continuous on the closed interval \([a, b]\) and \(f(a) < c < f(b)\), then for some \(x\) in \((a, b), f(x) = c\).
Brown focuses on the special case when \(c = 0\). As the IVT can be
deduced easily from this special case using the theorem that the
difference of two continuous functions is continuous, there is no loss
of generality here. Alluding to a diagram like Figure 5, Brown (1999)
writes
> We have a continuous line running from below to above
> the \(x\)-axis. Clearly, it *must* cross that axis in doing
> so. (1999: 26)
Later he claims:
> Using the picture alone, we can be certain of this result--if
> we can be certain of anything. (1999: 28)
![[a first quadrant graph, the x-axis labeled near the left with 'a' and near the right with 'b'; the y-axis labeled at the top with 'f(b)', in the middle with 'c' and near the bottom with 'f(a)'. A dotted horizontal line lines up with the 'c'. A solid curve starts at the intersection of 'f(a)' and 'a', rambles horizontally for a short while before rising above the 'c' dotted line, dips below, then rises again, ending at the intersection of 'f(b)' and 'b'.]](fig5.png)
Figure 5
Bolzano's diagram-free proof of the IVT is an argument from
what later became known as the Dedekind completeness of the real
numbers: every non-empty set of reals bounded above (below) has a
least upper bound (greatest lower bound). The value of Bolzano's
deduction of the IVT from the Dedekind completeness of the reals,
according to Brown, is not that it proves the IVT but that it gives us
confirmation of Dedekind completeness, just as an empirical hypothesis
in empirical science gets confirmed by deducing some consequence of
the hypothesis and observing those consequences to be true. This view
assumes that we already know the IVT to be true by observing a diagram
relevantly like Figure 5.
That assumption is challenged by Giaquinto (2011). Once we
distinguish graphical concepts from associated analytic concepts, the
underlying argument from the diagram is essentially this.
* 1. Any function \(f\) which is \(\varepsilon\textrm{-}\delta\) continuous on \([a, b]\) with \(f (a) < 0 < f (b)\) has a visually continuous graphical curve from below the horizontal line representing the \(x\)-axis to above.
* 2. Any visually continuous graphical curve from below a horizontal line to above it meets the line at a crossing point.
* 3. Any function whose graphical curve meets the line representing the \(x\)-axis at a crossing point has a zero value.
* 4. So, any \(\varepsilon\textrm{-}\delta\) continuous function \(f\) on \([a, b]\) with \(f (a) < 0< f (b)\) has a zero value.
What is inferred from the diagram is premiss 2. Premisses 1 and 3
are assumptions linking analytical with graphical conditions. These
linking assumptions are disputed. With regard to premiss 1 Giaquinto
(2011) argues that there are functions on the reals which meet the
antecedent condition but do not have graphical curves, such as
continuous but nowhere differentiable functions and functions which
oscillate with unbounded frequency, e.g., \(f(x) = x \cdot \sin(1/x)\)
for non-zero \(x\) in \([-1, 1]\) and \(f(0) = 0\).
With regard to premiss 3 it is argued that, under the standard
conventions of graphical representation of functions in a Cartesian
co-ordinate frame, the graphical curve for \(x^2 - 2\) in the
rationals is the same as the graphical curve for \(x^2- 2\) in the
reals. This is because every real is a limit point of rationals; so
for every point \(P\) with one or both co-ordinates irrational, there
are points arbitrarily close to \(P\) with both co-ordinates rational;
so no gaps would appear if irrational points were removed from the
curve for \(x^2- 2\) in the reals. But for \(x\) in the rational
interval \([0, 2]\) the function \(x^2- 2\) has no zero value, even though
it has a graphical curve which visually crosses the line representing
the \(x\)-axis. So one cannot read off the existence of a zero of
\(x^2- 2\) on the reals from the diagram; one needs to appeal to some
property of the reals which the rationals lack, such as Dedekind
completeness.
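The point about \(x^2 - 2\) on the rationals can be checked directly with exact rational arithmetic. The following Python sketch (the denominator bound of 200 is an arbitrary choice, for illustration only) confirms the sign change together with the absence of a rational zero:

```python
from fractions import Fraction

def f(x):
    return x * x - 2

# The function changes sign over the rational interval [0, 2] ...
print(f(Fraction(0)) < 0 < f(Fraction(2)))  # True

# ... yet exhaustive search finds no rational zero: (p/q)^2 = 2 would
# require p^2 = 2*q^2, which has no integer solution.
hits = [Fraction(p, q)
        for q in range(1, 200)
        for p in range(2 * q + 1)
        if p * p == 2 * q * q]
print(hits)  # []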
This raises some obvious questions. Do *any* theorems of
analysis have proofs in which diagrams have a non-superfluous role?
Littlewood (1953: 54-5) thought so and gave an example which is
examined in Giaquinto (1994). If so, can we demarcate this class of
theorems by some mathematical feature of their content? Another
question is whether there is a significantly broad class of functions
on the reals for which we could prove an intermediate value theorem
(i.e., restricted to that class).
If there are theorems of analysis provable with diagrams we do not
yet have a mathematical demarcation criterion for them. A natural
place to look would be O-minimal structures on the reals--this
was brought to the author's attention by Ethan Galebach. This is
because of some remarkable theorems about such structures which
exclude all the pathological (hence vision-defying) functions on the
reals (Van den Dries 1998), such as continuous nowhere differentiable
functions and "space-filling" curves, i.e., continuous
surjections \(f:(0, 1)\rightarrow(0, 1)^2\). Is the IVT for functions
in an O-minimal structure on the reals provable by visual means?
Certainly one objection to the visual argument for the unrestricted
IVT does not apply when the restriction is in place. This is the
objection that continuous nowhere differentiable functions, having no
graphical curve, provide counterexamples to the premiss that any
\(\varepsilon\textrm{-}\delta\) continuous function \(f\) on \([a,
b]\) with \(f (a) < c < f (b)\) has a visually continuous
graphical curve from below the horizontal line representing \(y = c\)
to above. But the existence of continuous functions with no graphical
curve is not the only objection to the visual argument, contrary to a
claim of Azzouni (2013: 327). There are also counterexamples to the
premiss that any function that *does* have a graphical curve
which visibly crosses the line representing \(y = c\) takes \(c\) as a
value, e.g., the function \(x^2 - 2\) on the rationals with \(c =
0\). So the question of a visual proof of the IVT restricted to
functions in an O-minimal structure on the reals is still open at the
time of writing.
## 4. Visual thinking and discovery
Though philosophical discussion of visual thinking in mathematics
has concentrated on its role in proof, visual thinking may be more
valuable for discovery than proof. Three kinds of discovery important
in mathematical practice are these:
* (1) propositional discovery (discovering, of a proposition, that it is true),
* (2) discovering a proof strategy (or more loosely, getting the idea for a proof of a proposition), and
* (3) discovering a property or kind of mathematical entity.
In the following subsections visual discovery of these kinds will
be discussed and illustrated.
### 4.1 Propositional discovery
To *discover* a truth, as that expression is being used
here, is to come to believe it by one's own lights (as opposed
to reading it or being told) in a way that is reliable and involves no
violation of epistemic rationality (given one's epistemic
state). One can discover a truth without being the first to discover
it (in this context); it is enough that one comes to believe it in an
independent, reliable and rational way. The difference between merely
discovering a truth and proving it is a matter of transparency: for
proving or following a proof the subject must be aware of the way in
which the conclusion is reached and the soundness of that way; this is
not required for discovery.
Sometimes one discovers something by means of visual thinking using
background knowledge, resulting in a cogent argument from which one
could construct a proof. A nice example is a visual argument that any
knot diagram with a finite number of crossings can be turned into a
diagram of an unknot by interchanging the over-strand and under-strand
of some of its crossings (Adams 2001: 58-90). That argument is a
bit too long to present accessibly here. For a short example, here is
a way of discovering that the geometric mean of two positive numbers
is less than or equal to their arithmetic mean (Eddy 1985) using
Figure 6.
![[two circles of differing sizes next to each other and touching at one point, the larger left circle has a vertical diameter line drawn and adjacent, parallel on the left is a double arrow headed line labelled 'a'. The smaller circle has a similar vertical diameter line with a double arrow headed line labelled 'b' to the right. The bottom of the diameter lines are connected by a double headed arrow line labeled 'square root of (ab)'. Another line connects the centers of both circles and has a parallel double arrow headed line labeled '(a+b)/2'. A dashed horizontal line goes horizontally from the center of the smaller circle until it hits the diameter line of the larger circle. Between this intersection and the center of the larger circle is a double arrow headed line labeled '(a-b)/2'.]](fig6.png)
Figure 6
Two circles (with diameters \(a\) and \(b\)) meet at a single
point. A line is drawn between their centres through their common
point; its length is \((a + b)/2\), the sum of the two radii. This
line is the hypotenuse of a right angled triangle with one other side
of length \((a - b)/2\), the difference of the radii.
Pythagoras's theorem is used to infer that the remaining side of
the right-angled triangle has length \(\sqrt{(ab)}\). Then visualizing
what happens to the triangle when the diameter of the smaller circle
varies between 0 and the diameter of the larger circle, one infers
that \(0 < \sqrt{(ab)} < (a + b)/2\); then verifying
symbolically that \(\sqrt{(ab)} = (a + b)/2\) when \(a = b\), one
concludes that for positive \(a\) and \(b\), \(\sqrt{(ab)} \le (a +
b)/2\).
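The Pythagorean step in this argument can be restated numerically. In the following Python sketch (the function name is an invention for illustration, not from the text), the hypotenuse \((a+b)/2\) and leg \(|a-b|/2\) of the triangle determine the remaining side, which always comes out as \(\sqrt{ab}\) and never exceeds the hypotenuse:

```python
import math
import random

def third_side(a, b):
    """Remaining side of the right triangle in Figure 6: hypotenuse
    (a+b)/2, one leg |a-b|/2; by Pythagoras the third side is sqrt(ab)."""
    hyp = (a + b) / 2
    leg = abs(a - b) / 2
    return math.sqrt(hyp * hyp - leg * leg)

random.seed(0)
for _ in range(1000):
    a = random.uniform(0.01, 100)
    b = random.uniform(0.01, 100)
    gm = third_side(a, b)
    assert math.isclose(gm, math.sqrt(a * b))  # the leg is the geometric mean
    assert gm <= (a + b) / 2 + 1e-12           # a leg never exceeds the hypotenuse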
This thinking does not constitute a case of proving or following a
proof of the conclusion, because it involves a step which we cannot
*clearly tell* is valid. This is the step of attempting to
visually imagine what would happen when the smaller circle varies in
diameter between 0 and the diameter of the larger circle and inferring
from the resulting experience that the line joining the centres of the
circles will always be longer than the horizontal line from the centre
of the smaller circle to the vertical diameter of the larger circle.
This step *seems* sound (does not lead us into error) and may
*be* sound; but its soundness is opaque. If in fact it is
sound, the whole thinking process is a reliable way of reaching the
conclusion; so in the absence of factors that would make it irrational
to trust the thinking, it would be a way of discovering the conclusion
to be true.
### 4.2 Discovering a proof strategy
In some cases visual thinking inclines one to believe something on
the basis of assumptions suggested by the visual representation that
remain to be justified given the subject's current knowledge. In
such cases there is always the danger that the subject takes the
visual representation to show the correctness of the assumptions and
ends up with an unwarranted belief. In such a case, even if the belief
is true, the subject has not made a discovery, as the means of
belief-acquisition is unreliable. Here is an example using Figure 7
(Montuchi and Page 1988).
![[A first quadrant graph, on the x-axis are marked (2 squareroot(k), 0) and further to the right (j,0). On the y-axis is marked (0,2(squareroot(k)) and further up, (0,j). Solid lines connect (0,2(squareroot(k)) to (2(squareroot(k),0) and (0,j) to (j,0). A dotted line goes from the origin in a roughly 45 degree angle the point where it intersects the (0,2(squareroot(k)) to (2(squareroot(k),0) line is labeled (squareroot(k),squareroot(k)). A curve tangent to that point with one end heading up and the other right is labeled 'xy=k'.]](fig7.png)
Figure 7
Using this diagram one can come to think the following about the
real numbers. When for a constant \(k\) the positive values of \(x\)
and \(y\) are constrained to satisfy the equation \(x \cdot y = k\),
the positive values of \(x\) and \(y\) for which \(x + y\) is minimal
are \(x = \sqrt{k} = y\). (Let "#" denote this claim.)
Suppose that one knows the conventions for representing functions
by graphs in a Cartesian co-ordinate system, knows also that the
diagonal represents the function \(y = x\), and that a line segment
with gradient -1 from \((0, b)\) to \((b, 0)\) represents the
function \(x + y = b\). Then looking at the diagram may incline one to
think that for no positive value of \(x\) does the value of \(y\) in
the function \(x\cdot y = k\) fall below the value of \(y\) in \(x + y
= 2\sqrt{k}\), and that these functions coincide just at the
diagonal. From these beliefs the subject may (correctly) infer the
conclusion #. But mere attention to the diagram cannot warrant
believing that, for a given positive \(x\)-value, the \(y\)-value of
\(x\cdot y = k\) never falls below the \(y\)-value of \(x + y =
2\sqrt{k}\) and that the functions coincide just at the diagonal; for
the conventions of representation do not rule out that the curve of
\(x\cdot y = k\) meets the curve of \(x + y = 2\sqrt{k}\) at two
points extremely close to the diagonal, and that the former curve
falls under the latter in between those two points. So the visual
thinking is not in this case a means of discovering proposition #.
But it is useful because it provides the idea for a proof of the
conclusion--one of the major benefits of visual thinking in
mathematics. In brief: for each equation \((x\cdot y = k\); \(x + y =
2\sqrt{k})\) if \(x = y\), their common value is \(\sqrt{k}\). So the
functions expressed by those equations meet at the diagonal. To show
that, for a fixed positive \(x\)-value, the \(y\)-values of \(x\cdot y
= k\) never fall below the \(y\)-values of \(x + y = 2\sqrt{k}\), it
suffices to show that \(2\sqrt{k} - x \le k/x\). As a geometric mean
is less than or equal to the corresponding arithmetic mean, \(\sqrt{[x
\cdot (k/x)]} \le [x + (k/x)]/2\). So \(2\sqrt{k} \le x + (k/x)\).
So \(2\sqrt{k} - x \le k/x\).
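Proposition # and the inequality used to prove it are easy to probe numerically. A small Python sketch (a grid search over an arbitrary sample of positive \(x\)-values; illustrative, not a proof):

```python
import math

def total(x, k):
    """x + y, where y is constrained by x * y = k."""
    return x + k / x

k = 7.0
xs = [i / 1000 for i in range(1, 20000)]  # sample of positive x values
best = min(xs, key=lambda x: total(x, k))
print(abs(best - math.sqrt(k)) < 1e-2)    # minimiser is near sqrt(k): True
print(min(total(x, k) for x in xs) >= 2 * math.sqrt(k) - 1e-9)  # True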
In this example, visual attention to, and reasoning about, the
diagram is not part of a way of discovering the conclusion. But if it
gave one the idea for the argument just given, it would be part of
what led to a way of discovering the conclusion, and that is
important.
Can visual thinking lead to discovery of an idea for a proof in
more advanced contexts? Yes. Carter (2010) gives an example from free
probability theory. The case is about certain permutations (those
denoted by "\(p\)" with a circumflex in Carter 2010) on a
finite set of natural numbers. Using specific kinds of diagram, easily
seen properties of the diagrams lead one naturally to certain
properties of the permutations (crossing and non-crossing, having
neighbouring pairs), and to a certain operation (cancellation of
neighbouring pairs). All of these have algebraic definitions, but the
ideas defined were noticed by thinking in terms of the diagrams. For
the relevant permutations \(\sigma\), \(\sigma(\sigma(n)) = n\); so a
permutation can be represented by a set of lines joining dots. The
permutations represented on the left and right in Figure 8 are
non-crossing and crossing respectively, the former with neighbouring
pairs \(\{2, 3\}\) and \(\{6, 7\}\).
![[a circle with 8 points on the circumference, a point at about 45 degrees is labeled '1', at 15 degrees, '2', at -15 degrees '3', at -45 degrees '4', at -135 degrees '5', at -165 degrees '6', at 165 degrees '7', and at 135 degrees '8'. Smooth curves in the interior of the circle connect point 1 to 4, 2 to 3, 5 to 8, and 6 to 7.]](fig8a.png)
![[a circle with 8 points on the circumference, a point at about 45 degrees is labeled '1', at 15 degrees, '2', at -15 degrees '3', at -45 degrees '4', at -135 degrees '5', at -165 degrees '6', at 165 degrees '7', and at 135 degrees '8'. Straight lines connect 1 to 6, 2 to 5, 3 to 8, and 4 to 7.]](fig8b.png)
(a)
(b)
Figure 8
A permutation \(\sigma\) of \(\{1, 2, \ldots, 2p\}\) is defined to
have a crossing just when there are \(a\), \(b\), \(c\), \(d\) in
\(\{1, 2, \ldots, 2p\}\) such that \(a < b < c < d\) and
\(\sigma(a) = c\) and \(\sigma(b) = d\). The focus is on the proof of
a theorem which employs this notion. (The theorem is that when a
permutation of \(\{1, 2, \ldots, 2p\}\) of the relevant kind is
non-crossing, there will be exactly \(p+1\) R-equivalence classes,
where \(R\) is a certain equivalence relation on \(\{1, 2, \ldots,
2p\}\) defined in terms of the permutation.) Carter says that the
proofs of some lemmas "rely on a visualization of the
setup", in that to grasp the correctness of one or more of the
steps one needs to visualize the situation. There is also a nice
example of some reasoning in terms of a diagram which gives the idea
for a proof ("suggests a proof strategy") for the lemma
that every non-crossing permutation has a neighbouring
pair. Reflection on a diagram such as Figure 9 does the work.
![[A circle, a dashed interior curve connects an unmarked point at about 40 degrees to an unmarked point at -10 degrees (the second point is labeled 'j+1'). Another dashed interior curve connects this point to an unmarked point at about -100 degrees. A solid interior curve connects and unmarked point at about 10 degrees (labeled 'j') to another unmarked point at about -60 degrees (labeled 'j+a'). Between the labels 'j+1' and 'j+a' is another label 'j+2' and then a dotted line between 'j+2' and 'j+a'.]](fig9.png)
Figure 9
The reasoning is this. Suppose that \(\pi\) has no neighbouring
pair. Choose \(j\) such that \(\pi(j) - j = a\) is minimal, that is,
for all \(k, \pi(j) - j \le \pi(k) - k\). As \(\pi\) has no
neighbouring pair, \(\pi(j+1) \ne j\). So either \(\pi(j+1)\) is less
than \(j\) and we have a crossing, or by minimality of \(\pi(j) - j\),
\(\pi(j+1)\) is greater than \(j+a\) and again we have a
crossing. Carter reports that this disjunction was initially believed
by thinking in terms of the diagram, and the proof of the lemma given
in the published paper is a non-diagrammatic "version" of
that reasoning. In this case study, visual thinking is shown to
contribute to discovery in several ways; in particular, by leading the
mathematicians to notice crucial properties--the
"definitions are based on the diagrams"--and in
giving them the ideas for parts of the overall proof.
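The diagrammatic notions in Carter's case study have straightforward algebraic counterparts, so both the classification of the permutations of Figure 8 and the lemma itself can be checked by brute force. A Python sketch (the encoding of a permutation as a dictionary of pairs is an assumption of the sketch):

```python
def pairs_to_sigma(pairs):
    """Encode an involution without fixed points as a dictionary."""
    sigma = {}
    for a, b in pairs:
        sigma[a], sigma[b] = b, a
    return sigma

def has_crossing(sigma):
    # a crossing: a < b < c < d with sigma(a) = c and sigma(b) = d
    return any(a < b < sigma[a] < sigma[b] for a in sigma for b in sigma)

def has_neighbouring_pair(sigma):
    return any(sigma[j] == j + 1 for j in sigma)

# The two permutations of Figure 8
left = pairs_to_sigma([(1, 4), (2, 3), (5, 8), (6, 7)])
right = pairs_to_sigma([(1, 6), (2, 5), (3, 8), (4, 7)])
print(has_crossing(left), has_crossing(right))  # False True

# The lemma, checked exhaustively for 2p = 8: every non-crossing
# pairing has a neighbouring pair.
def matchings(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

print(all(has_neighbouring_pair(pairs_to_sigma(m))
          for m in matchings(list(range(1, 9)))
          if not has_crossing(pairs_to_sigma(m))))  # True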
### 4.3 Discovering properties and kinds
In this section I will illustrate and then discuss the use of
visual thinking in discovering kinds of mathematical entity, by going
through a few of the main steps leading to geometric group theory, a
subject which really took off in the 1980s through the work of Mikhail
Gromov. The material is set out nicely in greater depth in Starikova
(2012).
Sometimes it can be fruitful to think of non-spatial entities, such
as algebraic structures, in terms of a spatial representation. An
example is the representation of a finitely generated group by a
Cayley graph. Let \((G, \cdot)\) be a group and \(S\) a finite subset
of \(G\). Let \(S^{-1}\) be the set of inverses of members of \(S\).
Then \((G, \cdot)\) is *generated by* \(S\) if and only if
every member of \(G\) is the product (with respect to \(\cdot\)) of
members of \(S\cup S^{-1}\). In that case \((G, \cdot, S)\) is said to
be a finitely generated group. Here are a couple of examples.
First consider the group \(S\_{3}\) of permutations of 3 elements
under composition. Letting \(\{a, b, c\}\) be the elements, all six
permutations can be generated by \(\rf\) and \(\rr\) where
\(\rf\) (for "flip") fixes \(a\) and swaps \(b\) with \(c\),
i.e., it takes \(\langle a, b, c\rangle\) to \(\langle a, c,
b\rangle\), and
\(\rr\) (for "rotate") takes \(\langle a, b, c\rangle\) to
\(\langle c, a, b\rangle\).
The Cayley graph for \((S\_{3}, \cdot, \{\rf, \rr\})\) is a graph
whose vertices represent the members of \(S\_{3}\) and two
"colours" of directed edges, representing composition with
\(\rf\) and composition with \(\rr\). Figure 10 illustrates: red
directed edges represent composition with \(\rr\) and black edges
represent composition with \(\rf\). So a red edge from a vertex
\(\rv\) representing \(\rs\) in \(S\_{3}\) ends at a vertex
representing \(\rs\rr\) and a black edge from \(\rv\) ends at a vertex
representing \(\rs\rf\). (Notation: "\(\rs\rr\)"
abbreviates "\(\rs \cdot \rr\)" which here denotes
"\(\rs\) followed by \(\rr\)"; same for
"\(\rf\)" in place of "\(\rr\)".) A black edge
has arrowheads both ways because \(\rf\) is its own inverse, that is,
flipping and flipping again takes you back to where you
started. (Sometimes a pair of edges with arrows in opposite directions
is used instead.) The symbol "\(\re\)" denotes the identity.
![[Two red equilateral triangles, one inside the other. The smaller triangle has arrows on each side pointing in a clockwise direction; the larger has arrows on each side in a counterclockwise direction. Black double arrow lines connect the respective vertices of each triangle. The top vertex of the outside triangle is labeled 'e', of the inside triangle 'f'; the bottom left vertex of the outside triangle is labeled 'r', of the inside triangle 'rf'; the bottom right vertex of the outside triangle is labeled with 'rr', of the inside triangle with 'fr'.]](fig10.png)
Figure 10
An example of a finitely generated group of infinite order is
\((\mathbb{Z}, +, \{1\})\). We can get any integer by successively
adding 1 or its additive inverse \(-1\). Since 3 added to the inverse
of 2 is 1, and 2 added to the inverse of 3 is \(-1\), we can get any
integer by adding members of \(\{2, 3\}\) and their inverses. Thus
both \(\{1\}\) and \(\{2, 3\}\) are generating sets for \((\mathbb{Z},
+)\). Figure 11 illustrates part of the Cayley graph for
\((\mathbb{Z}, +, \{2, 3\})\). The horizontal directed edges represent
\(+2\). The directed edges ascending or descending obliquely represent
\(+3\).
![[Two horizontal parallel black lines with directional arrows pointing to the right. The top line has equidistant points marked '-2', '0', '2', '4' and the bottom line equidistant points marked '-1' (about half way between the upper line's '-2' and '0'), '1', '3', '5'. A red arrow goes from '-2' to '1', from somewhere to the left up to '0', from '0' to '3', from '-1' to '2', from '1' to '4, from '2' to '5', and from '3' to somewhere to the right up.]](fig11.png)
Figure 11
Another example of a generated group of infinite order is \(F\_2\),
the free group generated by a pair of members. The first few
iterations of its Cayley graph are shown in Figure 12, where \(\{a,
b\}\) is the set of generators and a right horizontal move between
adjacent vertices represents composition with \(a\), an upward
vertical move represents composition with \(b\), and leftward and
downward moves represent composition with the inverse of \(a\) and the
inverse of \(b\) respectively. The central vertex represents the
identity.
![[A blue vertical line pointing up labeled 'b' crossed by a red horizontal line pointing right labeled 'a'. Each line is crossed by two smaller copies of the other line on either side of the main intersection. And, in turn, each of those smaller copies of the line are crossed by two smaller copies of the other line, again on either side of their main intersection.]](fig12.png)
Figure 12
Thinking of generated groups in terms of their Cayley graphs makes
it very natural to view them as metric spaces. A *path* is a
sequence of consecutively adjacent edges, regardless of direction. For
example in the Cayley graph for \((\mathbb{Z}, +, \{2, 3\})\) the
edges from \(-2\) to 1, from 1 to \(-1\), from \(-1\) to 2 (in that
order) constitute a path, representing the action, starting from
\(-2\), of adding 3, then adding \(-2\), then adding 3. Taking each
edge to have unit length, the metric \(d\_S\) for a group \(G\)
generated by a finite subset \(S\) of \(G\) is defined: for any \(g\),
\(h \in G\), \(d\_{S}(g, h) =\) the length of a shortest path from
\(g\) to \(h\) in the Cayley graph of \((G, \cdot, S)\). This is the
*word metric* for this generated group.
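The word metric is simply shortest-path distance in the Cayley graph, so it can be computed by breadth-first search. A Python sketch for \((\mathbb{Z}, +, \{2, 3\})\) (the search is artificially confined to a bounded interval, an assumption needed only to keep it finite):

```python
from collections import deque

def word_metric(g, h, generators, bound=1000):
    """Shortest-path distance from g to h in the Cayley graph of
    (Z, +, generators), exploring only the interval [-bound, bound]."""
    steps = [s for gen in generators for s in (gen, -gen)]
    dist = {g: 0}
    queue = deque([g])
    while queue:
        x = queue.popleft()
        if x == h:
            return dist[x]
        for s in steps:
            y = x + s
            if -bound <= y <= bound and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return None

# 1 = 3 + (-2), and no single generator or its inverse equals 1:
print(word_metric(0, 1, [2, 3]))   # 2
# The path -2 -> 1 -> -1 -> 2 described above has length 3, but
# -2 -> 0 -> 2 is shorter:
print(word_metric(-2, 2, [2, 3]))  # 2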
Viewing a finitely generated group as a metric space allows us to
consider its growth function \(\gamma(n)\) which is the cardinality of
the "ball" of radius \(\le n\) centred on the identity
(the number of members of the group whose distance from the identity
is not greater than \(n\)). A growth function for a given group
depends on the set of generators chosen, but when the group is
infinite the asymptotic behaviour as \(n \rightarrow \infty\) of the
growth functions is independent of the set of generators.
Noticing the possibility of defining a metric on generated groups
did not require first viewing diagrams of their Cayley graphs. This is
because a word in the generators is just a finite sequence of symbols
for the generators or their inverses (we omit the symbol for the group
operation), and so has an obvious length visually suggested by the
written form of the word, namely the number of symbols in the
sequence; and then it is natural to define the distance between group
members \(g\) and \(h\) to be the length of a shortest word that gets
one from \(g\) to \(h\) by right multiplication, that is,
\(\textrm{min}\{\textrm{length}(w): w = g^{-1}h\}\).
However, viewing generated groups by means of their Cayley graphs
was the necessary starting point for geometric group theory, which
enables us to view finitely generated groups of infinite order not
merely as graphs or metric spaces but as geometric entities. The main
steps on this route will be sketched briefly here; for more detail see
Starikova (2012) and the references therein. The visual key is to
start thinking in terms of the "coarse geometry" of the
Cayley graph of the generated group, by zooming out in visual
imagination so far that the discrete nature of the graph is
transformed into a traditional geometrical object. For example, the
Cayley graph of a generated group of finite order such as \((S\_{3},
\cdot, \{\rf, \rr\})\) illustrated in Figure 10
becomes a dot; the Cayley graph for \((\mathbb{Z}, +, \{2, 3\})\)
illustrated in Figure 11 becomes an uninterrupted
line infinite in both directions.
The word metric of a generated group is discrete: the values are
always in \(\mathbb{N}\). How is this visuo-spatial association of a discrete
metric space with a continuous geometrical object achieved
mathematically? By quasi-isometry. While an isometry from one metric
space to another is a distance preserving map, a quasi-isometry is a
map which preserves distances to within fixed linear bounds. Precisely
put, a map \(f\) from \((S, d)\) to \((S', d')\) is a
*quasi-isometry* iff for some real constants \(L > 0\) and
\(K \ge 0\) and all \(x\), \(y\) in \(S\) \[ d(x, y)/L - K \le
d'(f(x), f(y)) \le L \cdot d(x, y) + K. \]
The spaces \((S, d)\) and \((S', d')\) are
*quasi*-*isometric* spaces iff the quasi-isometry \(f\)
is also quasi-surjective, in the sense that there is a real constant
\(M \ge 0\) such that every point of \(S'\) is no further than \(M\)
away from some point in the image of \(f\).
For example, \((\mathbb{Z}, d)\) is quasi-isometric to
\((\mathbb{R}, d)\) where \(d(x, y) = |y - x|\), because the inclusion
map \(\iota\) from \(\mathbb{Z}\) to \(\mathbb{R}\), \(\iota(n) = n\),
is an isometry hence a quasi-isometry with \(L = 1\) and \(K = 0\),
and each point in \(\mathbb{R}\) is no further than \(1/2\) away from
an integer (in \(\mathbb{R}\)). Also, it is easy to see that for any
real number \(x\), if \(g(x) =\) the nearest integer to \(x\) (or the
greatest integer less than \(x\) if it is midway between integers)
then \(g\) is a quasi-isometry from \(\mathbb{R}\) to \(\mathbb{Z}\)
with \(L = 1\) and \(K = 1\) (since \(g\) moves each point by at most
\(\frac{1}{2}\), a distance between points changes by at most 1).
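The quasi-isometry bounds for this rounding map can be probed numerically. A Python sketch (random sampling, merely illustrative): since \(g\) moves each point by at most \(1/2\), a distance changes by at most 1 under \(g\), and pairs straddling a midpoint show that an additive constant close to 1 is actually needed.

```python
import math
import random

def g(x):
    """Nearest integer to x, rounding exact midpoints down (as in the text)."""
    fl = math.floor(x)
    return fl if x - fl <= 0.5 else fl + 1

# |g(x) - x| <= 1/2, so | |g(x) - g(y)| - |x - y| | <= 1 for all pairs:
random.seed(1)
worst = 0.0
for _ in range(100_000):
    x = random.uniform(-50, 50)
    y = random.uniform(-50, 50)
    worst = max(worst, abs(abs(g(x) - g(y)) - abs(x - y)))
print(worst <= 1.0)  # True

# A pair straddling a midpoint: the images differ by 1 although the
# points are only 0.02 apart, so no additive constant below about 1 works.
print(g(0.49), g(0.51))  # 0 1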
The relation between metric spaces of being quasi-isometric is an
equivalence relation. Also, if \(S\) and \(T\) are generating sets of
a group \((G, \cdot)\), the Cayley graphs of \((G, \cdot, S)\) and
\((G, \cdot, T)\) with their word metrics are quasi-isometric
spaces. This means that properties of a generated group which are
quasi-isometric invariants will be independent of the choice of
generating set, and therefore informative about the group itself.
Moreover, it is easy to show that the Cayley graph of a generated
group with word metric is quasi-isometric to a geodesic
space.[1]
A *triangle*
with vertices \(x\), \(y\), \(z\) in this space is the union of three
geodesic segments, between \(x\) and \(y\), between \(y\) and \(z\),
and between \(z\) and \(x\). This is the gateway for the application
of Gromov's insights, some of which can be grasped with the help
of visual geometric thinking.
Here are some indications. Recall the Poincaré open disc
model of hyperbolic geometry: geodesics are diameters or arcs of
circles orthogonal to the boundary, with unit distance represented by
ever shorter Euclidean distances as one moves from the centre towards
the boundary. (The boundary is not part of the model). All triangles
have angle sum \(< \pi\) (Figure 13, left), and
there is a global constant \(d\) such that all triangles are
\(d\)-thin in the following sense:
A triangle \(T\) is \(d\)-*thin* if and
only if any point on one side of \(T\) lies within \(d\) of some
point on one of the other two sides.
This condition is equivalent to the condition that each side of
\(T\) lies within the union of the \(d\)-neighbourhoods of the other
two sides, as illustrated in Figure 13,
right. There is no constant \(d\) such that all triangles in a
Euclidean plane are \(d\)-thin, because for any \(d\) there are
triangles large enough that the midpoint of a longest side lies
further than \(d\) from all points on the other two sides.
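The failure of uniform thinness in the Euclidean plane is easy to check computationally. The sketch below (function names are my own) measures, for equilateral triangles of growing side length \(s\), the distance from the midpoint of one side to the nearest point on the other two sides; that distance equals \(\sqrt{3}\,s/4\), so it grows linearly and eventually exceeds any fixed bound.

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)))
    cx, cy = ax + t * vx, ay + t * vy
    return math.hypot(px - cx, py - cy)

def midpoint_thinness(s):
    """Distance from the midpoint of one side of an equilateral triangle
    of side s to the nearest point on the other two sides."""
    a, b, c = (0.0, 0.0), (s, 0.0), (s / 2, s * math.sqrt(3) / 2)
    m = (s / 2, 0.0)   # midpoint of side ab
    return min(dist_point_segment(m, a, c), dist_point_segment(m, b, c))

for s in (1, 10, 100, 1000):
    print(s, midpoint_thinness(s))   # grows like sqrt(3)/4 * s: no fixed bound works
```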
![[a circle. In the interior are three arcs colored green, blue, and red. For all three smooth curves where each meets the circumference of the circle is marked as at a 90 degree angle. The green curve may actually be a straight line and goes from about 160 degrees to about -20 degrees. The blue curve goes from about 170 degrees to about 80 degrees. The red curve goes from about 90 degrees to about -25 degrees. Where the green and blue curves intersect is marked as an angle and labelled with the Greek letter alpha; where the blue and the red curves intersect is also marked as an angle and labelled with gamma; and with where the red and the green curves intersect and this labelled with beta.]](fig13a.png)
![[a triangle in the hyperbolic disc whose sides each lie within the union of the d-neighbourhoods of the other two sides, illustrating the d-thin condition]](fig13b.png)
(a)
(b)
Figure 13[2]
The definition of thin triangles is sufficiently general to apply
to any geodesic space and allows a generalisation of the concept of
hyperbolicity beyond its original context:
* A geodesic space is *hyperbolic* iff for some \(d\) all
its triangles are \(d\)-thin.
* A group is *hyperbolic* iff it has a Cayley graph
quasi-isometric to a hyperbolic geodesic space.
The class of hyperbolic groups is large and includes important
subkinds, such as finite groups, free groups and the fundamental
groups of surfaces of genus \(\ge 2\). Some striking theorems have
been proved for them. For example, for every hyperbolic group the word
problem is solvable, and every hyperbolic group has a finite
presentation. So we can reasonably conclude that the discovery of this
mathematical kind, the hyperbolic groups, has been fruitful.
How important was visual thinking to the discoveries leading to
geometric group theory? Visual thinking was needed to discover Cayley
graphs as a means of representing finitely generated groups. This is
not the triviality it might seem: Cayley graphs must be distinguished
from the diagrams we use to present them visually. A Cayley graph is a
*mathematical* representation of a generated group, not a
*visual* representation. It consists of the following
components: a set \(V\) ("vertices"), a set \(E\) of
ordered pairs of members of \(V\) ("directed edges") and a
partition of \(E\) into distinguished subsets ("colours",
each representing right multiplication by a particular
generator). The Cayley graph of a generated group of infinite order
cannot be fully represented by a diagram given the usual conventions
of representation for diagrams of graphs, and distinct diagrams may
visually represent the same Cayley graph: both diagrams in Figure 14
can be labelled so that under the usual conventions they represent the
Cayley graph of \((S\_{3}, \cdot, \{f, r\})\), already illustrated by
Figure 10. So the Cayley graph cannot be a
diagram.
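The components just listed can be written down directly. Below is a sketch of the Cayley graph of \(S_3\) as a mathematical object, with my own choice of which permutations play the roles of the generators \(f\) (a flip) and \(r\) (a rotation): a vertex set, a set of directed edges, and a partition of the edges into two colours.

```python
from itertools import permutations

def compose(p, q):
    """(p * q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

r = (1, 2, 0)   # a 3-cycle, playing the role of the rotation r
f = (1, 0, 2)   # a transposition, playing the role of the flip f

V = set(permutations(range(3)))   # the six elements of S3

# Directed edges partitioned into "colours": an edge of colour s
# joins each vertex g to g*s (right multiplication by the generator s).
E = {name: {(g, compose(g, s)) for g in V} for name, s in (("r", r), ("f", f))}

print(len(V), {name: len(edges) for name, edges in E.items()})   # → 6 {'r': 6, 'f': 6}
```

Distinct diagrams of this one object correspond to different ways of placing the six vertices on the page, which is exactly why the graph itself cannot be identified with any diagram.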
![[two identical red triangles, one above the other and inverted. Both have arrows going clockwise around. Black lines with arrows pointing both ways link the respective vertices.]](fig14a.png)
![[two identical red triangles, one above the other. The bottom one has arrows going clockwise around and the top counterclockwise. Black lines with arrows pointing both ways link the respective vertices.]](fig14b.png)
(a)
(b)
Figure 14
Diagrams of Cayley graphs were important in prompting
mathematicians to think in terms of the coarse-grained geometry of the
graphs, in that this idea arises just when one thinks in terms of
"zooming out" visually. Gromov (1993) makes the point in a
passage quoted in Starikova (2012: 138):
> This space [a Cayley graph with the word metric] may
> appear boring and uneventful to a geometer's eye since it is
> discrete and the traditional (e.g., topological and infinitesimal)
> machinery does not run in [the group] G. To regain the geometric
> perspective one has to change one's position and move the
> observation point far away from G. Then the metric in G
> seen from the distance \(d\) becomes the original distance divided by
> \(d\) and for \(d \rightarrow \infty\) the points in G coalesce
> into a connected continuous solid unity which occupies the visual
> horizon without any gaps and holes and fills our geometer's
> heart with joy.
In saying that one has to move the observation point far away from
G so that the points coalesce into a unity which occupies the
visual horizon, he makes clear that visual imagination is involved in
a crucial step on the road to geometric group theory. Visual thinking
is again involved in discovering hyperbolicity as a property of
general geodesic spaces from thinking about the Poincare disk
model of hyperbolic geometry. It is hard to see how this property
would have been discovered without the use of visual resources.
## 5. Visual thinking and mental arithmetic
While there is no reason to think that mental arithmetic (mental
calculation in the integers and rational numbers) typically involves
much visual thinking, there is strong evidence of substantial visual
processing in the mental arithmetic of highly trained abacus users.
In earlier times an abacus would be a rectangular board or table
surface marked with lines or grooves along which pebbles or counters
could be moved. The oldest surviving abacus, the Salamis abacus, dated
around 300 BCE, is a white marble slab, with markings designed for
monetary calculation (Fernandes 2015, Other Internet Resources). These were
superseded by rectangular frames within which wires or rods parallel to
the short sides are fixed, with moveable holed beads on them. There are
several kinds of modern abacus -- the Chinese suanpan, the Russian
schoty and the Japanese soroban for example -- each kind with
variations. Evidence for visual processing in mental arithmetic comes
from studies with well trained users of the soroban, an example of
which is shown in Figure 15.
![[Picture of a soroban with 17 columns of beads, each column has 1 bead above the horizontal bar used to represent 5 and 4 beads below the bar each of which represents 1. Together the beads in each column can represent any digit from 0 to 9.]](fig15.png)
Figure 15
Each column of beads represents a power of 10, increasing to the
left. The horizontal bar, sometimes called the *reckoning
bar*, separates the beads on each column into one bead of value 5
above and four beads of value 1 below. The number represented in a
column is determined by the beads which are not separated from the
reckoning bar. A column on which all beads are separated by a gap from
the bar represents zero. For example, the number 6059 is represented on
a portion of a schematic soroban in Figure 16.
![[A schematic soroban representing 6059. There are 8 places and the first four from the left are set to 0, then 6, then 0, then 5, then 9 ]](fig16.png)
Figure 16
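The column encoding can be sketched in a few lines (a toy model of the representation only, not of soroban technique; the names are my own). Each column records whether the 5-bead is at the reckoning bar and how many 1-beads are at the bar:

```python
def soroban_columns(n, width=8):
    """Encode non-negative integer n as soroban columns, leftmost first.
    Each column is a pair (heaven, earth): heaven is 1 if the 5-bead is
    pushed to the reckoning bar, earth is how many 1-beads are at the bar."""
    digits = [int(ch) for ch in str(n).rjust(width, "0")]
    return [(d // 5, d % 5) for d in digits]

def column_value(col):
    """Digit represented by a single column."""
    heaven, earth = col
    return 5 * heaven + earth

# 6059 on the rightmost columns of an 8-column soroban, as in Figure 16:
print(soroban_columns(6059))
```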
On some sorobans there is a mark on the reckoning bar at every third
column; if a user chooses one of these as a unit column, the marks will
help the user keep track of which columns represent which powers of
ten. Calculations are made by using forefinger and thumb to move beads
according to procedures for the standard four numerical operations and
for extraction of square and cube roots (Bernazzani 2005, Other Internet Resources). Despite the
fact that the soroban has a decimal place representation of numbers,
the soroban procedures are not 'translations' of the
procedures normally taught for the standard operations using arabic
numerals. For example, multidigit addition on a soroban starts by
adding highest powers of ten and proceeds rightwards to lower powers,
instead of starting with units thence proceeding leftwards to tens,
hundreds and so on.
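The left-to-right order can be illustrated with a digitwise addition sketch (my own simplification for illustration; real soroban technique moves beads using complement rules rather than buffering whole digits). It adds the highest columns first and propagates any overflow back to the left:

```python
def add_left_to_right(a, b):
    """Add two non-negative integers column by column, starting from the
    highest power of ten and moving right, carrying back to the left when
    a column overflows -- the order a soroban user works in."""
    width = max(len(str(a)), len(str(b))) + 1   # spare leading column for carries
    da = [int(c) for c in str(a).rjust(width, "0")]
    db = [int(c) for c in str(b).rjust(width, "0")]
    result = da[:]
    for i in range(width):          # leftmost (highest power) column first
        result[i] += db[i]
        j = i
        while result[j] > 9:        # propagate a carry back to the left
            result[j] -= 10
            result[j - 1] += 1
            j -= 1
    return int("".join(map(str, result)))

print(add_left_to_right(567, 876))   # → 1443
```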
People trained to use a soroban often learn to do mental arithmetic
by visualizing an abacus and imagining moving beads on it in accordance
with the procedures learned for arithmetical calculations (Frank and
Barner 2012). Mental abacus (MA), as this kind of mental arithmetic is
known, compares favourably with other kinds of mental calculation for
speed and accuracy (Kojima 1954) and MA users are often found among the
medallists in the Mental Calculation World Cup.
Although visual and manual motor imagery is likely to occur,
cognitive scientists have probed the question whether the actual
processes of MA calculation consist in or involve imagining performing
operations on a physical abacus. Brain imaging studies provide one
source of evidence bearing on this question. Comparing well-trained
abacus calculators with matched controls, one study found evidence that
MA involves neural resources of visuospatial working memory, in a form
that does not depend on the modality (visual or auditory) of
the numerical inputs (Chen et al. 2006). Another imaging study found
that, compared to controls without abacus training, subjects with long
term MA training from a young age had enhanced brain white matter
related to motor and visuospatial processes (Hu et al. 2011).
Behavioural studies provide more evidence. Tests on expert and
intermediate level abacus users strongly suggest that MA calculators
mentally manipulate an abacus representation so that it passes through
the same states that an actual abacus would pass through in solving an
addition problem. Without using an actual abacus MA calculators were
able to answer correctly questions about intermediate states unique to
the abacus-based solution of a problem; moreover, their response times
were a monotonic function of the position of the probed state in the
sequence of states of the abacus process for solving the problem
(Stigler 1984). On top of the 'intermediate states'
evidence, there is 'error type' evidence. Mental addition
tests comparing abacus users with American subjects revealed that
abacus users made errors of a kind which the Americans did not make,
but which were predictable from the distribution of errors in physical
abacus addition (Stigler 1984).
Another study found evidence that when a sequence of numbers
is presented auditorily (as a verbal whole "three thousand five
hundred and forty seven" or as a digit sequence "Three,
five, four, seven") abacus experts encode it into an imaged
abacus display, while non-experts encode it verbally (Hishitani
1990).
Further evidence comes from behavioural interference studies. In
these studies subjects have to perform mental calculations, with and
without a task of some other kind to be performed during the
calculation, with the aim of seeing which kinds of task interfere with
calculation as measured by differences of reaction time and error rate.
An early study found that a linguistic task interfered weakly with MA
performance (unless the linguistic task was to answer a mathematical
question), while motor and visual tasks interfered relatively
strongly. These findings suggested to the paper's authors
that MA representations are not linguistic in nature but rely on visual
mechanisms and, for intermediate practitioners, on motor mechanisms as
well (Hatano et al. 1977).
These studies provide impressive evidence that MA does involve mental
manipulation of a visualized abacus. However, limits of the known
capacities for perceiving or representing pluralities of objects seem
to pose a problem. We have a *parallel individuation* system
for keeping track of up to four objects simultaneously and
an *approximate number system* (ANS) which allows us to gauge
roughly the cardinality of a set of things, with an error which
increases with the size of the set. The parallel individuation system
has a limit of three or four objects and the ANS represents
cardinalities greater than four only approximately. Yet mental abacus
users would need to hold in mind with precision abacus representations
involving a much larger number of beads than four (and the way in
which those beads are distributed on the abacus). For example, the
number 439 requires a precise distribution of twelve beads. Frank and
Barner (2012) address this problem. In some circumstances we can
perceive a plurality of objects as a single entity, a set, and
simultaneously perceive those objects as individuals. There is
evidence that we can keep track of up to three such sets in parallel
and simultaneously make reliable estimates of the cardinalities of the
sets (if not more than four). If the sets themselves can be easily
perceived as (a) divided into disjoint subsets, e.g. columns of beads
on an abacus, and (b) structured in a familiar way, e.g. as a
distribution of four beads below a reckoning bar and one above, we
have the resources for recognising a three-digit number from its
abacus representation. The findings of (Frank and Barner 2012) suggest
that this is what happens in MA: a mental abacus is represented in
visuospatial working memory by splitting it into a series of columns
each of which is stored as a unit with its own detailed
substructure.
These cognitive investigations confirm the self-reports of mental
abacus users that they calculate mentally by visualizing operating on
an abacus as they would operate on a physical abacus. (See the 20-second
movie
Brief interview with mental abacus user, at the Stanford Language and Cognition
Lab, for one such self-report.) There is good evidence that MA often involves
processes linked to motor cognition in addition to active visual
imagination. Intermediate abacus users often make hand movements,
without necessarily attending to those movements during MA calculation,
as shown in the second of the three short movies just mentioned.
Experiments to test the possible role of motor processes in MA resulted
in findings which led the authors to conclude that premotor processes
involved in the planning of hand movements were involved in MA (Brooks
et al. 2018).
## 6. *A priori* and *a posteriori* roles of visual experience
In coming to know a mathematical truth visual experience can play a
merely "enabling" role. For example, visual experience may
have been a factor in a person's getting certain concepts
involved in a mathematical proposition, thus enabling her to
understand the proposition, without giving her reason to believe
it. Or the visual experience of reading an argument in a text book may
enable one to find out just what the argument is, without helping her
tell that the argument is sound. In earlier sections visual experience
has been presented as having roles in proof and propositional
discovery that are not merely enabling. On the face of it this raises
a puzzle: mathematics, as opposed to its application to natural
phenomena, has traditionally been thought to be an *a priori*
science; but if visual experience plays a role in acquiring
mathematical knowledge which is not merely enabling, the result would
surely be *a posteriori* knowledge, not *a priori*
knowledge. Setting aside knowledge acquired by testimony (reading or
hearing that such-&-such is the case), there remain plenty of
cases where sensory experience seems to play an evidential role in
coming to know some mathematical fact.
### 6.1 Evidential uses of visual experience
A plausible example of the evidential use of sensory experience is
the case of a child coming to know that \(5 + 3 = 8\) by counting on
her fingers. While there may be an important *a priori*
element in the child's appreciation that she can reliably
generalise from the result of her counting experiment, getting that
result by counting is an *a posteriori* route to it. For
another example, consider the question: how many vertices does a cube
have? With the background knowledge that cubes do not vary in shape
and that material cubes do not differ from geometrical cubes in number
of vertices (where a "vertex" of a material cube is a
corner), one can find the answer by visually inspecting a material
cube. Or if one does not have a material cube to hand, one can
visually imagine a cube, and by attending to its top and bottom faces
extract the information that the vertices of the cube are exactly the
vertices of these two quadrangular faces. When one gets the answer by
inspecting a material cube, the visual experience contributes to
one's grounds for believing the answer and that contribution is
part of what makes the belief state knowledge. So the role of the
visual experience is evidential; hence the resulting knowledge is not
*a priori*. When one gets the answer by visually imagining a
cube, one is drawing on the accumulated cognitive effects of past
experiences of seeing material cubes to bring to mind what a cube
looks like; so the experience of visual imagining has an indirectly
evidential role in this case.
Do such examples show that mathematics is not an *a priori*
science? Yes, if an *a priori* science is understood to be one
whose knowable truths are all knowable *only* in an *a
priori* way, without use of sense experience as evidence. No, if
an *a priori* science is one whose knowable truths are all
knowable in an *a priori* way, allowing that some may be
knowable also in an *a posteriori* way.
### 6.2 An evidential use of visual experience in proving
Many cases of proving something (or following a proof of it)
involve making, or imagining making, changes in a symbol array. A
standard presentation of the proof of left-cancellation in group
theory provides an example. "Left-cancellation" is the
claim that for any members \(a\), \(b\), \(c\) of a group with
operation \(\cdot\) and identity element \(\mathbf{e}\), if \(a \cdot
b = a \cdot c\), then \(b = c\). Here is (the core of) a proof of
it:
\begin{align\*} a \cdot b &= a \cdot c\\ a^{-1} \cdot (a
\cdot b) &= a^{-1} \cdot (a \cdot c)\\ (a^{-1} \cdot a) \cdot b
&= (a^{-1} \cdot a) \cdot c\\ \mathbf{e}\cdot b &= \mathbf{e}
\cdot c\\ b &= c. \end{align\*}
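The conclusion of this proof can also be confirmed by brute force in any particular finite group (an illustration in one instance, not a substitute for the proof; the names are my own). Here left-cancellation is checked exhaustively in \(S_3\):

```python
from itertools import permutations

def compose(p, q):
    """(p * q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def left_cancellation_holds(G, op):
    """Exhaustively check: op(a, b) == op(a, c) implies b == c, over G."""
    return all(b == c
               for a in G for b in G for c in G
               if op(a, b) == op(a, c))

S3 = list(permutations(range(3)))
print(left_cancellation_holds(S3, compose))   # → True
```

For contrast, the check fails on a structure that is not a group, e.g. \(\{0, 1, 2\}\) under multiplication, where \(0 \cdot b = 0 \cdot c\) for all \(b\), \(c\).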
Suppose that one comes to know left-cancellation by following this
sequence of steps. Is this an *a priori* way of getting this
knowledge? Although following a mathematical proof is thought to be a
paradigmatically *a priori* way of getting knowledge, attention
to the role of visual experience here throws this into doubt. The case
for claiming that the visual experience has an evidential role is as
follows.
The visual experience reveals not only what the steps of the
argument are but also that they are valid, thereby contributing to our
grounds for accepting the argument and believing its conclusion.
Consider, for example, the step from the second equation to the third.
The relevant background knowledge, apart from the logic of identity,
is that a group operation is associative. This fact is usually
represented in the form of an equation that simply relocates brackets
in an obvious way:
\[ x \cdot (y \cdot z) = (x \cdot y) \cdot z \]
We see that relocating the brackets in accord with this format, the
left-hand term of the second equation is transformed into the
left-hand term of the third equation, and the same for the right-hand
terms. So the visual experience plays an evidential role in our
recognising as valid the step from the second equation to the
third. Hence this quite standard route to knowledge of
left-cancellation turns out to be *a posteriori*, even though
it is a clear case of following a proof.
Against this, one may argue that the description just given of what
is going on in following the proof is not strictly correct, as
follows. Exactly the same proof can be expressed in natural language,
using "the composition of \(x\) with \(y\)" for "\(x
\cdot y\)", but the result would be hard to take in. Or the
proof can be presented using a different notational convention, one
which forces a quite different expression of associativity. For
example, we can use the Polish convention of putting the operation
symbol before the operands: instead of "\(x \cdot y\)" we
put "\(\cdot x y\)". In that case associativity would be
expressed in the following way, without brackets:
\[ \cdot x \cdot
y z = \cdot \cdot x y z. \]
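That the two bracket-free Polish forms denote the same element can be checked by evaluating them (a sketch with my own tiny evaluator, writing `*` for the operation symbol and using string concatenation as a stand-in associative operation):

```python
def eval_polish(tokens, op):
    """Evaluate a prefix ("Polish") expression, consuming the token list:
    '*' applies op to the next two subexpressions; anything else is an operand."""
    tok = tokens.pop(0)
    if tok == "*":
        left = eval_polish(tokens, op)
        right = eval_polish(tokens, op)
        return op(left, right)
    return tok

concat = lambda a, b: a + b   # an associative stand-in for the group operation

# x*(y*z) in Polish notation is * x * y z;  (x*y)*z is * * x y z.
print(eval_polish(list("*x*yz"), concat))   # → xyz
print(eval_polish(list("**xyz"), concat))   # → xyz
```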
The equations of the proof would then need to be re-symbolised; but
what is expressed by each equation after re-symbolisation and the
steps from one to the next would be exactly as before. So we would be
following the very same proof, step by step. But we would not be using
visual experiences involved to notice the relocation of brackets this
time. This suggests that the role of the different visual experiences
involved in following the argument in its different guises is merely
to give us access to the common reasoning: the role of the experience
is merely enabling. On this account the visual experience does not
strictly and literally enable us to see that any of the steps are
valid; rather, recognition of (or sensitivity to) the validity of the
steps results from cognitive processing at a more abstract level.
Which of these rival views is correct? Does our visual experience
in following the argument presented with brackets (1) reveal to us the
validity of some of the steps, given the relevant background
knowledge? Or (2) merely give us access to the argument? The core of the
argument against view (1) is this:
Seeing the relocation of brackets is not essential to following the
argument.
So seeing merely gives access to the argument; it does not reveal
any step to be valid.
The step to this conclusion is faulty. How one follows a proof may,
and in this case does, depend on how it is presented, and different
ways of following a proof may be different ways of coming to know its
conclusion. While seeing the relocation of brackets is not essential
to all ways of following this argument, it is essential to the normal
way of following the argument when it is symbolically presented with
brackets in the way given above.
Associativity, expressed without symbols, is this: When the binary
group operation is applied twice in succession on an ordered triple of
operands \(\langle a, b, c\rangle\), it makes no difference whether
the first application is to the initial two operands or the final two
operands. While this is the content of associativity, for ease of
processing associativity is almost always expressed as a
symbol-manipulation rule. Visual perception is used to tell in
particular cases whether the rule thus expressed is correctly
implemented, in the context of prior knowledge that the rule is
correct. What is going on here is a familiar division of labour in
mathematical thinking. We first establish the soundness of a rule of
symbol-manipulation (in terms of the governing semantic
conventions--in this case the matter is trivial); then we check
visually that the rule is correctly implemented. Processing at a more
abstract, semantic level is often harder than processing at a purely
syntactic level; it is for this reason that we often resort to
symbol-manipulation techniques as proxy for reasoning directly with
meanings to solve a problem. (What is six eighths *divided by*
three fifths, without using any symbolic technique?) When we do use
symbol-manipulation in proving or following a proof, visual experience
is required to discern that the moves conform to permitted patterns
and thus contributes to our grounds for accepting the argument. Then
the way of coming to know the conclusion has an *a posteriori*
element.
### 6.3 A non-evidential use of visual experience
Must a use of visual experience in knowledge acquisition be
*evidential*, if the visual experience is not merely enabling?
Here is an example which supports a negative answer. Imagine a square
or look at a drawing of one. Each of its four sides has a
midpoint. Now visualize the "inner" square whose sides run
between the midpoints of adjacent sides of the original square (Figure
17, left). By visualizing this figure, it should be clear that the
original square is composed precisely of the inner square plus four
corner triangles, each side of the inner square being the base of a
corner triangle. One can now visualize the corner triangles folding
over, with creases along the sides of the inner square. The starting
and end states of the imagery transformation can be represented by the
left and right diagrams of Figure 17.
![[The first of identical squares in size. The first has lines connecting the midpoints of each adjacent pair of sides to form another square. The second has in addition lines connecting the midpoints of opposite pairs of sides. In addition the outer square of the second has dashed lines instead of solid.]](fig17a.png)
![[The second of identical squares in size. The first has lines connecting the midpoints of each adjacent pair of sides to form another square. The second has in addition lines connecting the midpoints of opposite pairs of sides. In addition the outer square of the second has dashed lines instead of solid.]](fig17b.png)
(a)
(b)
Figure 17
Visualizing the folding-over within the remembered frame of the
original square results in an image of the original square divided
into square quarters, its quadrants, and the sides of the inner square
seem to be diagonals of the quadrants. Many people conclude that the
corner triangles can be arranged to cover the inner square exactly,
without any gap or overlap. Thence they infer that the area of the
original square is twice that of the inner square. Let us assume
that the propositions concerned are about Euclidean figures. Our
concern is with the visual route to the following:
> The parts of a square beyond its inner square (formed by
> joining midpoints of adjacent sides of the original square) can be
> arranged to fit the inner square exactly, without overlap or gap,
> without change of size or shape.
The experience of visualizing the corner triangles folding over can
lead one to this belief. But it cannot provide good evidence for it.
This is because visual experience (of sight or imagination) has
limited acuity and so does not enable us to discriminate between a
situation in which the outer triangles fit the inner square
*exactly* and a situation in which they fit inexactly but well
enough for the mismatch to escape visual detection. (This contrasts
with the case of discovering the number of vertices of a cube by
seeing or visualizing one.) Even though visualizing the square, the
inner square and then visualizing the corner triangles folding over is
constrained by the results of earlier perceptual experience of scenes
with relevant similarities, we cannot draw from it reliable
information about exact equality of areas, because perception itself
is not reliable about exact equalities (or exact proportions) of
continuous magnitudes.
Though the visual experience could not provide good evidence for
the belief, it is possible that we erroneously *use* the
experience evidentially in reaching the belief. But it is also
possible, when reaching the belief in the way described, that we do
*not* take the experience to provide evidence. A non-evidential
use is more likely, if when one arrives at the belief in this way one
feels fairly certain of it, while aware that visual perception and
imagination have limited acuity and so cannot provide evidence for a
claim of exact fit.
But what could the role of the visualizing experience possibly be,
if it were neither merely enabling nor evidential? One suggestion is
that we already have relevant beliefs and belief-forming dispositions,
and the visualizing experience could serve to bring to mind the
beliefs and to activate the belief-forming dispositions (Giaquinto
2007). These beliefs and dispositions will have resulted from prior
possession of cognitive resources, some subject-specific such as
concepts of geometrical figures, some subject-general such as symmetry
perception about perceptually salient vertical and horizontal axes. A
relevant prior belief in this case might be that a square is symmetric
about a diagonal. A relevant disposition might be the disposition to
believe that the quadrants of a square are congruent squares upon
seeing or visualizing a square with a horizontal base plus the
vertical and horizontal line segments joining midpoints of its
opposite sides. (These dispositions differ from ordinary perceptual
dispositions to believe what we see in that they are not cancelled
when we mistrust the accuracy of the visual experience.)
The question whether the resulting belief would be knowledge
depends on whether the belief-forming dispositions are reliable
(truth-conducive) and the pre-existing belief states are states of
knowledge. As these conditions can be met without any violation of
epistemic rationality, the visualizing route described incompletely
here can be a route to knowledge. In that case we would have an
example of a use of visual experience which is integral to a way of
knowing a truth, which is not merely enabling and yet not
evidential. A fuller account and discussion is given in chapters 3 and
4 of Giaquinto (2007).
## 7. Further uses of visual representations
There are other significant uses of visual representations in
mathematics. This final section briefly presents a couple of them.
Although the use of diagrams in arguments in analysis faces special
dangers (as noted in 3.3), the use of
diagrams to illustrate symbolically presented operations can be very
helpful. Consider, for example, this pair of operations \(\{ f(x) + k,
f(x + k) \}\). Grasping them and the difference between them can be
aided by a visual illustration; similarly for the sets \(\{ f(x + k),
f(x - k) \}\), \(\{ |f(x)|, f(|x|) \}\), \(\{ f(x)^{-1}, f^{-1}(x),
f(x^{-1}) \}\). While generalization on the basis of a visual
illustration is unreliable, such illustrations can serve as checks against
calculation errors and overgeneralization. The same holds for
properties. Consider for example, functions for which \(f(-x) =
f(x)\), known as *even* functions, and functions for which
\(f(-x) = -f(x)\), the *odd* functions: it can be helpful to
have in mind the images of graphs of \(y = x^2\) and \(y = x^{3}\) as
instances of evenness and oddness, to remind one that even functions
are symmetrical about the \(y\)-axis and odd functions have rotation
symmetry by \(\pi\) about the origin. They can serve as a reminder and
check against over-generalisation: any general claim true of all odd
functions, for example, must be true of \(y = x^{3}\) in
particular.
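The symmetry facts can also be spot-checked numerically, a finite sample check of just the kind described above as a guard against over-generalisation (the function names are my own):

```python
def is_even_on(f, samples):
    """Check f(-x) == f(x) on a finite sample (evidence, not proof)."""
    return all(f(-x) == f(x) for x in samples)

def is_odd_on(f, samples):
    """Check f(-x) == -f(x) on a finite sample (evidence, not proof)."""
    return all(f(-x) == -f(x) for x in samples)

xs = [i / 2 for i in range(-8, 9)]
print(is_even_on(lambda x: x ** 2, xs))   # → True
print(is_odd_on(lambda x: x ** 3, xs))    # → True
print(is_even_on(lambda x: x ** 3, xs))   # → False
```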
The utility of visual representations in real and complex analysis
is not confined to such simple cases. Visual representations can help
us grasp what motivates certain definitions and arguments, and thereby
deepen our understanding. Abundant confirmation of this claim can be
gathered from working through the text *Visual Complex
Analysis* (Needham 1997). Some mathematical subjects have natural
visual representations, which then give rise to a domain of
mathematical entities in their own right. This is true of geometry but
is also true of subjects which become algebraic in nature very
quickly, such as graph theory, knot theory and braid
theory. Techniques of computer graphics now enable us to use moving
images. For an example of the power of kinematic visual
representations to provide and increase understanding of a subject,
see the first two "chapters" of the online introduction to
braid theory by Ester Dalvit (2012, Other Internet Resources).
With regard to proofs, a minimal kind of understanding consists in
understanding each line (proposition or formula) and grasping the
validity of each step to a new line from earlier lines. But we can
have that stepwise grasp of proof without any idea of why it proceeds
by those steps. One has a more advanced (or deeper) kind of
understanding when one has the minimal understanding *and* a
grasp of the motivating idea(s) and strategy of the proof. The point
is sharply expressed by Weyl (1995 [1932]: 453), quoted in Tappenden
(2005: 150):
> We are not very pleased when we are forced to accept
> a mathematical truth by virtue of a complicated chain of formal
> conclusions and computations, which we traverse blindly, link by link,
> feeling our way by touch. We want first an overview of the aim and the
> road; we want to understand the *idea* of the proof, the deeper
> context.
Occasionally the author of a proof gives readers the desired
understanding by adding commentary. But this is not always needed, as
the idea of a proof is sometimes revealed in the presentation of the
proof itself. Often this is done by using visual representations. An
example is Fisk's proof of Chvatal's "art
gallery" theorem. This theorem is the answer to a combinatorial
problem in geometry. Put concretely, the problem is this. Let the
\(n\) walls of a single-floored gallery make a polygon. What is the
smallest number of stationary guards needed to ensure that every point
of the gallery wall can be seen by a guard? If the polygon is convex
(all interior angles \(< 180^\circ\)), one guard will suffice, as guards
may rotate. But if the polygon is not convex, as in Figure 18, one
guard may not be enough.
![[An irregular 9 sided polygon.]](fig18.png)
Figure 18
Chvatal's theorem gives the answer: for a gallery with
\(n\) walls, \(\llcorner n/3\lrcorner\) guards suffice, where
\(\llcorner n/3\lrcorner\) is the greatest integer \(\le n/3\). (If
this does not sound to you sufficiently like a mathematical theorem,
it can be restated as follows: Let \(S\) be a subset of the Euclidean
plane. For a subset \(B\) of \(S\) let us say that \(B\)
*supervises* \(S\) iff for each \(x \in S\) there is a \(y \in
B\) such that the segment \(xy\) lies within \(S\). Then the smallest
number \(f(n)\) such that every set bounded by a simple \(n\)-gon is
supervised by a set of \(f(n)\) points is at most \(\llcorner
n/3\lrcorner\).)
Here is Steve Fisk's proof. A short induction shows that
every polygon can be triangulated, i.e., non-crossing edges between
non-adjacent vertices ("diagonals") can be added so that
the polygon is entirely composed of non-overlapping triangles. So take
any \(n\)-sided polygon with a fixed triangulation. Think of it as a
graph, a set of vertices and connected edges, as in Figure 19.
![[10 irregularly placed black dots with a solid black line connecting them to form an irregular 10 sided polygon. One black dot has dashed lines going to four other dots that are not adjacent to it and one of its adjacent dots has dashed lines going to three other non-adjacent dots (including one dot that was the endpoint for one of the first dots dashed lines), the dashed lines do not intersect.]](fig19.png)
Figure 19
The first part of the proof shows that the graph is 3-colourable,
i.e., every vertex can be coloured with one of just three colours
(red, white and blue, say) so that no edge connects vertices of the
same colour.
The argument proceeds by induction on \(n \ge 3\), the number of
vertices.
For \(n = 3\) it is trivial. Assume it holds for all \(k\), where
\(3 \le k < n\).
Let triangulated polygon \(G\) have \(n\) vertices. Let \(u\) and
\(v\) be any two vertices connected by diagonal edge \(uv\). The
diagonal \(uv\) splits \(G\) into two smaller graphs, both containing
\(uv\). Give \(u\) and \(v\) different colours, say red and white, as
in Figure 20.
![[Same figure as before with one of the black dots split into two red dots side-by-side and another black dot split into two white dots side-by-side. This splits the previously joined figure into two smaller graphs.]](fig20.png)
Figure 20
By the inductive assumption, we may colour each of the smaller
graphs with the three colours so that no edge joins vertices of the
same colour, keeping fixed the colours of \(u\) and \(v\). Pasting
together the two smaller graphs as coloured gives us a 3-colouring of
the whole graph.
What remains is to show that \(\llcorner n/3\lrcorner\) or fewer
guards can be placed on vertices so that every triangle is in the view
of a guard. Let \(b\), \(r\) and \(w\) be the number of vertices
coloured blue, red and white respectively. Let \(b\) be minimal in
\(\{b, r, w\}\). Then \(b \le r\) and \(b \le w\). Then \(2b \le r +
w\). So \(3b \le b + r + w = n\). So \(b \le n/3\) and so \(b \le
\llcorner n/3\lrcorner\). Place a guard on each blue vertex. Done.
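The two steps of the argument (propagating a 3-colouring over the triangulation, then placing guards on the least-used colour) can be sketched in code. This is an illustrative reconstruction under assumed conventions, not Fisk's own presentation: the triangulation is given as a list of vertex triples, and since the dual graph of a polygon triangulation is a tree, colours propagate across shared diagonals without conflict.

```python
from collections import deque

def fisk_colouring(triangles):
    """3-colour the vertices of a triangulated polygon.

    `triangles` lists each triangle as a tuple of vertex labels. The
    dual graph of a polygon triangulation is a tree, so propagating
    colours across shared diagonals never produces a conflict.
    """
    colours = {v: c for c, v in enumerate(triangles[0])}
    done, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j, tri in enumerate(triangles):
            if j in done:
                continue
            shared = set(triangles[i]) & set(tri)
            if len(shared) == 2:  # j shares an edge or diagonal with i
                # The third vertex takes the one remaining colour.
                (v,) = set(tri) - shared
                colours[v] = ({0, 1, 2} - {colours[u] for u in shared}).pop()
                done.add(j)
                queue.append(j)
    return colours

# A hexagon (vertices 0..5) with a fan triangulation from vertex 0:
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
col = fisk_colouring(tris)
n = 6
counts = [list(col.values()).count(c) for c in range(3)]
# Pigeonhole: the rarest colour is used at most n // 3 times,
# so guards placed on that colour class number at most n // 3.
assert min(counts) <= n // 3
```

The final assertion is just the counting step of the proof: with \(b \le r\) and \(b \le w\) we get \(3b \le n\), so placing a guard on each vertex of the rarest colour uses at most \(\llcorner n/3\lrcorner\) guards.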
The central idea of this proof, or the proof strategy, is
clear. While the actual diagrams produced here are superfluous to the
proof, some visualizing enables us to grasp the central idea.
## 8. Conclusion
Thinking which involves the use of seen or visualized images, which
may be static or moving, is widespread in mathematical practice. Such
visual thinking may constitute a non-superfluous and non-replaceable
part of thinking through a specific proof. But there is a real danger
of over-generalisation when using images, which we need to guard
against, and in some contexts, such as real and complex analysis, the
apparent soundness of a diagrammatic inference is liable to be
illusory.
Even when visual thinking does not contribute to *proving* a
mathematical truth, it may enable one to *discover* a truth,
where to discover a truth is to come to believe it in an independent,
reliable and rational way. Visual thinking can also play a large role
in discovering a central idea for a proof or a proof-strategy; and in
discovering a kind of mathematical entity or a mathematical
property.
The (non-superfluous) use of visual thinking in coming to know a
mathematical truth does in some cases introduce an *a
posteriori* element into the way one comes to know it, resulting
in *a posteriori* mathematical knowledge. This is not as
revolutionary as it may sound as a truth knowable *a
posteriori* may also be knowable *a priori*. More
interesting is the possibility that one can acquire some mathematical
knowledge in a way in which visual thinking is essential but does not
contribute evidence; in this case the role of the visual thinking may
be to activate one's prior cognitive resources. This opens the
possibility that non-superfluous visual thinking may result in *a
priori* knowledge of a mathematical truth.
Visual thinking may contribute to understanding in more than one
way. Visual illustrations may be extremely useful in providing
examples and non-examples of analytic concepts, thus helping to
sharpen our grasp of those concepts. Also, visual thinking
accompanying a proof may deepen our understanding of the proof, giving
us an awareness of the direction of the proof so that, as Hermann Weyl
put it, we are not forced to traverse the steps blindly, link by link,
feeling our way by touch.
## 1. Life and Writings
James was born in Viterbo around 1255. Nothing is known about the
early years of his life. It is presumed that he joined the order of
the Hermits of St. Augustine somewhere around 1272. He is first
mentioned in 1283 in the capitular acts of the Roman province of his
order as a recently appointed lecturer (*lector*) in the
Augustinian Convent of Viterbo. This means that he must have spent the
previous five years (i.e., 1278-83) in Paris, as the Augustinian
order required its lecturers to be trained in theology in that city
for a period not exceeding five years. James returned to Paris not
long after his appointment in Viterbo, for he is mentioned again in
the capitular acts in 1288, this time as *baccalaureus
parisiensis*, i.e., as holding a bachelor of theology degree from
the University of Paris. After completing his course of studies, James
was appointed Master of Theology in 1292 (Wippel 1974) or 1293 (Ypma
1974), teaching in Paris for the next seven years. His output during
that time was considerable. All of his works in speculative theology
and metaphysics date from the Parisian period. In 1299-1300, he
was named *definitor* (i.e., member of the governing council)
of the Roman province by the general chapter of his order, and in May
of 1300 he became regent master of the order's *studium
generale* in Naples. These would be the last two years of
his academic career, however, for in September 1302 he was named
Archbishop of Benevento by Pope Boniface VIII, possibly in gratitude
for the support James had shown for the papal cause in his great
treatise, *De regimine christiano* (*On Christian
Rulership*). In December of the same year, at the request of the
Angevin King, Charles II, he was transferred to the Archbishopric of
Naples. He remained in Naples until his death, in 1307 or 1308.
James wrote the bulk of his philosophically relevant work while in
Paris. Nothing remains of his early production in that city. His
commentary on Peter Lombard's *Sentences* has been lost,
although still extant is a so-called *Abbreviatio in I Sententiarum
Aegidii Romani* (*Summary of the First Book of the Sentences of
Giles of Rome*), which recent scholarship suggests might be
James' preparatory notes for lectures on the Sentences rather
than a summary of Giles' commentary to the same (Tavolaro 2018),
although it is unclear whether these notes were prepared during
James' student years in Paris or whether they date from his
later teaching in Naples. Also lost are a treatise on the animation of
the heavens (*De animatione caelorum*), as well as a commentary
on Aristotle's *Metaphysics* (*Expositio primae
philosophiae*), two works to which James refers explicitly in his
Quodlibets and which Ypma surmises must have been written during
James' four-year residency as a "formed bachelor",
i.e., between 1288 and 1292 (Ypma 1975).
James' most significant works written while in Paris are his
four quodlibetal questions and thirty-four disputed questions on the
categories as they pertain to divine matters (*Quaestiones de
praedicamentis in divinis*), although five of them have actually
nothing to do with theology or the categories, as they deal with the
human will and the habits (see Ypma 1975 for a possible explanation
regarding their inclusion in the *Quaestiones*). These two
works constitute the most important sources for our knowledge of
James' philosophical thought.
In Naples, James wrote the work for which he is arguably best known
(Gutierrez 1939: 61-2), the *De regimine
christiano*, an important work in the history of late medieval
political thought, and the only philosophical work he wrote in the
last eight or so years of his life.
James is not the author of two works that have been regularly
attributed to him by scholars, the *De perfectione specierum*
(Ypma 1974, Tavolaro 2014), and the *Quaestiones septem de
verbo* (*Seven Disputed Questions on the Divine Word*)
(Gutierrez 1939, Scanzillo 1972, Ypma 1975). The author of the
*De perfectione specierum* is James of Saint Martin (Maier
1943), or James of Naples, as he is referred to in the manuscripts, an
Augustinian Hermit who was active in the late fourteenth century; and
the author of the *Quaestiones septem de verbo*, as Stephen
Dumont has recently shown (Dumont 2018b), was James'
confrere and near-successor at the head of the Augustinian
school in Paris, Henry of Friemar.
The following table provides the list of James' authentic works
with approximate dates of composition. Single asterisks mark those
which have been fully edited; double asterisks indicate those for
which there exist only partial editions:
1. *Lectura super IV libros Sententiarum* (1286-1288)
2. *Quaestiones Parisius disputatae De praedicamentis in
divinis*\*\* (1293-1295)
3. *Quaestio de animatione caelorum* (1293-1296)
4. *Expositio primae philosophiae* (1293-1295)
5. *Quodlibeta quattuor*\* (1293-1298)
6. *Abbreviatio In Sententiarum Aegidii Romani*\*\* (1286-1288
or 1300-1302)
7. *De regimine christiano*\* (1302)
8. *Summa de peccatorum distinctione*\*\* (1300-1308)
9. *Sermones diversarum rerum*\*\* (1303-1308)
10. *Concordantia psalmorum David* (1302-1308)
11. *De confessione* (1302-1308)
12. *De episcopali officio*\* (1302-1308)
## 2. Philosophical Theology
### 2.1 Theology as a Practical Science
Like many of his contemporaries, James devotes serious attention to
determining the status of theology as a science and to specifying its
object, or rather, as the scholastics say, its subject. In
*Quodlibet* III, q. 1, he asks whether theology is principally
a practical or a speculative science. Unsurprisingly, perhaps, for an
Augustinian, James responds that the end of theology resides
principally not in the knowledge but in the love of God. The love of
God, informed by grace, is what distinguishes the way in which
Christians worship God from the way in which pagans worship their
deities. For philosophers--James has Cicero in
mind--religion is a species of justice; worship is owed to God as
a sign of submission. For the Christian, by contrast, there can be no
worship without an internal affection of the soul, i.e., without love.
James allows that there is some recognition of this fact in Book X of
the *Nicomachean Ethics*, for the happy man would not be
"most beloved of God," as Aristotle claims he is, if he
did not love God by making him the object of his theorizing. In this
sense, it can be said that philosophy, too, has the love of God as its
principal end. But there is a difference, James
contends, in the manner in which a science based on natural reason
aims for the love of God and the manner in which sacred science does
so: sacred science tends to the love of God more perfectly. One way in
which James illustrates the difference between both approaches is by
contrasting the ways in which God is the "highest" object
for metaphysics and for theology. The proper subject of metaphysics is
being, not God, although God is the highest being. Theology, on the
other hand, views God as its subject and considers being in relation
to God. Thus, James concludes, "theology is called divine or of
God in a much more excellent and principal way than metaphysics, for
metaphysics considers God only in relation to common being, whereas
theology considers common being in relation to God"
(*Quodl.* III, q. 1, p. 20, 370-374). Another way in
which James illustrates the difference between natural theology and
sacred science is by means of the distinction between the love of
desire (*amor concupiscientiae*) and the love of friendship
(*amor amicitiae*) to which I will return in section 7.3. James
defines the love of desire as "the love of some good which we
want for ourselves or for others," and the love of friendship as
"the love of someone for their own sake." The love of God
philosophers have in mind, James contends, is the love of desire; it
cannot, by the philosophers' own admission, be the love of
friendship, for according to the *Magna Moralia*, friendship
involves a form of community or sharing between the friends that
cannot possibly obtain between mere mortals and the gods. Now although
James concedes that a "community of life" between God and
man cannot be achieved by natural means, it is attainable through the
gift of grace. The particular friendship grace affords is called
charity and it is to the conferring of charity that sacred scripture
is principally ordered.
### 2.2 Divine Power
Like all scholastics since the early thirteenth century, James
subscribes to the distinction between God's ordained power,
according to which "God can only do what he preordained he would
do according to wisdom and will" (*Quodl.* I, q. 2, p.
17, 35-37) and his absolute power, according to which he can do
whatever is "doable," i.e., whatever does not imply a
contradiction. Problems concerning what God can or cannot do arise
only in the latter case. James considers several questions: can God
add an infinite number of created species to the species already in
existence (*Quodl.* I, q. 2)? Can he make matter exist without
form (*Quodl.* IV, q. 1)? Can he make an accident subsist
without a substrate (*Quodl.* II, q. 1)? Can he create the
seminal reason of a rational soul in matter (*Quodl.* III, q.
10)? In response to the first question, James explains, following
Giles of Rome but against the opinion of Godfrey of Fontaines and
Henry of Ghent, that God can by his absolute power add an infinite
number of created species *ad superius*, in the ascending order
of perfection, if not in actuality, then at least in potency. God
cannot, however, add even one additional species of reality *ad
inferius*, between prime matter and pure nothingness, not because
this exceeds his power but because prime matter is contiguous to
nothingness and leaves, so to speak, no room for God to exercise his
power (Cote 2009). James is more hesitant about the
second question. He is sympathetic both to the arguments of those who
deny that God can make matter subsist independently of form and to the
arguments of those who claim he can. Both positions can reasonably be
held, because each argues from a different (and valid) perspective.
Proponents of the first position argue from the point of view of
reason: because they rightly believe that God cannot make what implies
a contradiction, and because they believe (rightly or wrongly) that
making matter exist without form does involve a contradiction, they
conclude that God cannot make matter exist without form. Proponents of
the second group argue from the perspective of God's omnipotence
which transcends human reason: because they rightly assume that
God's power exceeds human comprehension, they conclude (rightly
or wrongly) that making matter exist without form is among those
things exceeding human comprehension that God can make come to
pass.
Another question James considers is whether God can make an accident
subsist without a subject or substrate. The question arises only with
respect to what he calls "absolute accidents," namely
quantity and quality, as opposed to relational accidents--the
remaining categories of accident. God clearly cannot make relational
accidents exist without a subject in which they inhere, for this would
entail a contradiction. This is so because relations for James, as we
will see in
section 3.3
below, are modes, not things. What about absolute accidents? As a
Catholic theologian, James is committed to the view that some
quantities and qualities can subsist without a subject, for instance
extension and color, a view for which he attempts to provide a
philosophical justification. His position, in a nutshell, is that
accidents are capable of existing independently if they are thing-like
(*dicunt rem*). Numbers, place (*locus*), and time are
not thing-like and are thus not capable of independent existence;
extension, however, is and so can be made to exist without a subject.
The same reasoning applies to quality. This is somewhat surprising,
for according to the traditional account of the Eucharist, whereas
extension may exist without a subject, the qualities (color, odor,
texture) necessarily cannot, since they inhere in the extension. James,
however, holds that just as God can make thing-like quantities to
exist without a subject, so too must he be able to make a thing-like
quality exist without the subject in which it inheres. Just which
qualities are capable of existing without a subject is determined by
whether or not they are "modes of being," i.e., by whether
or not they are relational. This seems to be the case with health and
shape: health is a proportion of the humors, and so, relational;
likewise, shape is related to parts of quantity, without which,
therefore, it cannot exist. Colors and weight, by contrast, are
non-relational, according to James, and are thus in principle capable
of being made to exist without a subject.
The fourth question James considers in relation to God's
omnipotence raises the interesting problem of whether the rational
soul can come from matter. James proceeds carefully, claiming not to
provide a definitive solution but merely to investigate the issue
(*non determinando sed investigando*). The upshot of the
investigation is that although there are many good reasons (the
soul's immortality, its spirituality and its *per se*
existence) to say that God cannot produce the seminal reason of the
rational soul in matter, in the end, James decides, with the help of
Augustine, that such a possibility must be open to God. Thus, it is
true in the order which God has *de facto* instituted, that the
soul's incorruptibility is repugnant to matter, but this is not
so in absolute terms: if God can miraculously cause something to come
to existence through generation and confer immortality upon it (James
is presumably thinking of Christ), then he can make it come to pass
that souls are produced through generation without being subject to
corruption. Likewise, although it appears inconceivable that something
material could generate something endowed with *per se*
existence, it is not impossible absolutely speaking: if God can confer
separate existence upon an accident--despite the fact that
accidents naturally inhere in their substrates--then, in like
manner, he can confer separate existence upon a soul, although it has
a seminal reason in matter.
### 2.3 Divine Ideas
The scholastics held that because God is the creative cause of all
natural beings, he must possess the ideas corresponding to each of his
creatures. But because God is eternal and is not subject to change,
the ideas must be eternally present in him, although creatures exist
for only a finite period of time. This doctrine of course raised many
difficulties, which each author addressed with varying degrees of
success. One difficulty had to do with reconciling the multiplicity of
ideas with God's unity: since there are many species of being,
there must be a corresponding number of ideas; but God is one and,
hence, cannot contain any multiplicity. Another, directly related,
difficulty had to do with the ontological status of ideas: do ideas
have any reality apart from God? If one denied them any kind of
reality, it was hard to see how they could function as exemplar causes
of things; but to assign too much reality to them was to run the risk
of introducing multiplicity in God and undermining the *ex
nihilo* character of creation.
One influential solution to these difficulties was provided by Thomas
Aquinas, who argued that divine ideas are nothing else but the diverse
ways in which God's essence is capable of being imitated, so
that God knows the ideas of things by knowing his essence. Ideas are
not distinct from God's essence, though they are distinct from
the essences of the things God creates (*De veritate*, q. 2, a.
3).
One can discern two answers to the problem of divine ideas in the
works of James of Viterbo. At an early stage of his career, in the
*Abbreviatio in Sententiarum Aegidii
Romani*--assuming one accepts, as seems reasonable, the
early dating suggested by Ypma (1975)--James defends a position
that is almost identical to that of Thomas Aquinas (Giustiniani 1979).
In his *Quodlibeta*, however, he defends a position that is
closer to that of Henry of Ghent. In the following I will sketch
James' position in the *Quodlibeta* as it provides the
most mature statement of his views. For detailed discussions, see
Gossiaux (2007) and Cote (2018).
Although James agreed with the notion that ideas are to be viewed as
the differing ways in which God can be imitated, he did not think that
one could make sense of the claim that God knows other things by
cognizing his own essence unless one supposed that the essences of
those things preexist in some way (*aliquo modo*) in God.
James' solution is to distinguish two ways in which ideas are in
God's intellect. They are in God's intellect, firstly, as
identical with it, and, secondly, as distinct from it. The first mode
of being is necessary as a means of acknowledging God's unity;
but the second mode of being is just as necessary, for, as James puts
it (*Quodl.* I, q. 5, p. 64, 65-67), "if God knows
creatures before they exist, even insofar as they are other than him
and distinct (from him), that which he knows is a cognized object,
which must needs be something; for that which nowise exists and is
absolutely nothing cannot be understood." But James also
thinks that the necessity of positing distinct ideas in God follows
from a consideration of God's essence. God enjoys the highest
degree of nobility and goodness. His mode of knowledge must be
commensurate with his nature. But according to Proclus, an author
James is quite fond of quoting, the highest form of knowledge is
knowledge through a thing's cause. That means that God knows
things through his own essence. More precisely, he knows things by
knowing his essence as the cause of those things, knowledge that is
distinct in James' view from God's mere knowledge of
himself.
Although James' insistence on the distinctness of ideas with
respect to God's essence is reminiscent of Henry of
Ghent's teaching, it is important to note, as has been stressed
by M. Gossiaux (2007), that James does not conceive of this
distinctness as Henry does. For Henry, ideas possess "being of
essence" (*esse essentiae*); James, by contrast, while
referring to divine ideas as things (*res*), is careful to add
that they are not things "in the absolute sense but only in a
qualified sense," viz., as cognized objects (*Quodl.* I,
q. 5, p. 63, 60). Thus, divine ideas for James possess a lesser degree
of distinction from God's essence than do Henry of
Ghent's. Nevertheless, because James did consider ideas to be
distinct in some sense from God, his position would be viewed by some
later authors--e.g., William of Alnwick--as compromising
divine unity. (See Cote 2016)
## 3. Metaphysics
### 3.1 Analogy of Being
The concept of being, all the medievals agreed, is common. What was
debated was the nature of this commonness. According to James of
Viterbo, all commonness is founded on some agreement, and this
agreement can be either merely nominal or grounded in reality.
Agreement is nominal when the same name is predicated of wholly
different things, without there being any objective basis for the
application of the common name; such is the case of equivocal names.
Agreement is real in the following two cases: (1) if it is based on
some *essential* resemblance between the many things to which a
particular concept applies, in which case the concept applies to these
many things by virtue of the self-same *ratio* and is said of
them univocally; or (2) if that concept is truly common to the many
things of which it is said, although it is not said of them in
relation to the same nature (*ratio*), but rather it is said in
a prior sense of one and in a posterior sense of others, insofar as
they are related in a certain way to the first. A concept that is
predicated of things in this way is said to be analogous, and the
agreement displayed by the things to which it applies is said to be an
agreement of attribution (*convenientia attributionis*). James
believes that it is according to this sense of analogy that being is
said of God and creatures, and of substance and accident
(*Quaestiones de divinis praedicamentis* I, q. 1, p. 25,
674-80). For being is said in a prior sense of God and in a
posterior sense of creatures by virtue of a certain relation between
the two; likewise, being is said first of substance and secondarily of
accidents, on account of the relation of posteriority accidents have
to substance. The reason why being is said in a prior sense of God and
in a secondary sense of creatures and, hence, the reason why the
'*ratio*' or nature of being is different in the
two cases is that being, in God, is "the very thing which God
is" (*Quaestiones de divinis praedicamentis*, q. 1, p.
16, 412), whereas created being is only being through something added
to it. From this first difference follows a second, namely, that
created being is being by virtue of being related to an agent, whereas
uncreated being has no relation. These two differences can be
summarized by saying that divine being is being through itself
(*per se*), whereas created being is being through another
(*per aliud*) (*Quaestiones de divinis praedicamentis*,
q. 1, p. 16, 425-6). In sum, being is said of God and creature,
but according to a different *ratio*: it is said of God
according to the proper and perfect nature of being, but of creatures
in a derivative or secondary way.
### 3.2 The Distinction of Being and Essence
The question of how being and essence are related to each other, and
in particular whether they are really identical or not, attracted
considerable interest in the last quarter of the thirteenth century,
with all major masters devoting some discussion to it. James of
Viterbo is no exception. Drawing his inspiration from Anselm's
semantics, he attempts to articulate a compromise solution (Gossiaux
2018) between the main competing solutions on offer in his day.
James' most detailed discussion of the distinction between being
and essence occurs in the context of a question that asks whether
creation could be saved if being (*esse*) and essence were not
different (*Quodl.* I, q. 4). His answer is that although he
finds it difficult to see how one could account for creation if being
and essence were not really different, he does not believe it is
necessary to conceive of the real distinction in the way in which
"certain Doctors" do. Which Doctors does he have in mind?
In *Quodl.* I, q. 4, he summarizes the views of three authors:
Godfrey of Fontaines, according to whom the distinction is only
conceptual (*secundum rationem*); Henry of Ghent, for whom
*esse* is only intentionally different from essence, a
distinction that is less than a real distinction but greater than a
verbal distinction; and finally, Giles of Rome, for whom *esse*
is one thing (*res*), and essence another. Thus, James agrees
with Giles, and disagrees with Henry and Godfrey, that the distinction
between being and essence is real; however, he disagrees with Giles
about the proper way of understanding the real distinction.
The starting point of his analysis is Anselm's statement in the
*Monologion* that the substantive *lux* (light), the
infinitive *lucere* (to-emit-light), and the present participle
*lucens* (emitting light) are related to each other in the same
way as *essentia* (essence), *esse* (to be), and
*ens* (being). The relation of *lucere* to *lux*,
James tells us, is the relation of a concrete term to an abstract one;
but this is just the relation that obtains between being and essence.
Now, a concrete term signifies more things than the corresponding
abstract term; for an abstract term signifies only a form, whereas the
concrete term signifies the form and the "subject", i.e.,
the bearer of the form; it is said to signify the former principally,
and the latter secondarily. So it is in the case of being and essence:
*esse* signifies the form (principally) and the subject
(secondarily), while essence signifies the form only. What is taken
for granted in this analysis is that the distinction between a form
and its bearer is a real one, at least in creatures. This is how James
achieves his compromise solution: *pace* Godfrey of Fontaines
and Henry of Ghent there is a real distinction (that between the
subject and the form) but *pace* Giles, it is real only in
a qualified sense, since being principally signifies the same thing as
essence.
### 3.3 Relations
James devotes five of his *Quaestiones de divinis
praedicamentis* (qq. 11-15), representing some 270 pages of
edited text, to the question of relations. It is with a view to
providing a proper account of divine relations, he explains, that it
is "necessary to examine the nature of relation with such
diligence" (*Quaestiones de divinis praedicamentis*, q.
11, p. 12, 300-301). But before turning to Trinitarian
relations, James devotes the whole of q. 11 to the status of relations
in general. The following account focuses exclusively on q. 11. James
in essence adopts Henry of Ghent's "modalist"
solution, which was to exercise considerable influence among late
thirteenth-century thinkers (Henninger 1989), although he disagrees
with Henry about the proper way of understanding what a mode is.
The question boils down to whether relations exist in some manner in
extra-mental reality or solely through the operation of the intellect,
like second intentions (species and genera). Many arguments can be
adduced in support of each position, as Simplicius had already shown
in his commentary on Aristotle's *Categories*--a
work that would have a decisive influence on James' thought. For
instance, in support of the view that relations are not real, one may
point out that the intellect is able to apprehend relations between
existents and non-existents, e.g., the relation between a father and
his deceased son; yet, there cannot be anything real in the relation
given that one of the two relata is a non-existent. But if so, then
the same must be true of all relations, as the intellectual operation
involved is the same in all cases. Another argument concerns the way
in which relations come to be and cease to be. This appears to happen
without any change taking place in the subject which the relation is
said to affect. For instance, a child who has lost his mother is said
to be an orphan until the age of eighteen, at which point he ceases to
be one, although no change has occurred: "the relation recedes
or ceases by reason of the mere passage of time."
But good reasons can also be found in support of the opposing view.
For one, Aristotle clearly considers relations to be real, as they
constitute one of the ten categories that apply to things outside the
soul. Furthermore, according to a view commonly held by the
scholastics, the perfection of the universe cannot consist solely of
the perfection of the individual things of which it is made; it is
also determined by the relations those things have to each other;
hence, those relations must be real.
The correct solution to the question of whether relations are real or
not, James contends, depends on assigning to a given relation no more
but no less reality than is fitting to it. Those who rely on arguments
such as the first two above to infer that relations are entirely
devoid of reality are guilty of assigning relations too little
reality; those who appeal to arguments such as the last two, showing
that relations are distinct from their subjects in the way in which
things are distinct from each other, assign too great a degree of
reality to relations. The correct view must lie somewhere in between:
relations are real, but are not distinct from their subjects in the
way one thing is distinct from another.
That they must be real is sufficiently shown by the latter Simplician
arguments mentioned above, to which James adds some others of his own.
However, showing that they are not things is slightly more
complicated. James' position, in fact, is that relations are not
things "properly and absolutely speaking," but only
"in a certain way according to a less proper way of
speaking." A relation is not a thing in an absolute sense
because of the "meekness" of its being, for which reason
"it is like a middle point between being and non-being"
(*Quaestiones de divinis praedicamentis*, q. 11, p. 30,
668-9). The reasoning behind this last statement is as follows:
the more intrinsic some principle is to a thing, the more that thing
is said to be through it; what is maximally intrinsic to a thing is
its substance; a thing is therefore maximally said to be on account of
its substance. Now a thing's being related to another is, in the
constellation of accidents that qualify that thing, what is minimally
intrinsic to it and thus farthest from its being, and so closest to
non-being. But if relations are not things, at least in the absolute
sense, what are they? James answers that they are *modes* of
being of their foundations. "The mode of being of a thing does
not differ from the thing in such a way as to constitute another
essence or thing. The relation, therefore, is not different from its
foundation" (*Quaestiones de divinis praedicamentis*, q.
11, p. 33, 745-7). Speaking of relations as modes allows us to
acknowledge their reality, as attested by experience, without
hypostasizing them. A certain number's being equal to another is
clearly something distinct from the number itself. The number and its
being equal are two "somethings" (*aliqua*), says
James; they are not, however, two *things*; they are two in the
sense that one is a thing (the number) and the other is a mode of
being of the number.
In making relations *modes* of being of the foundation, James
was clearly taking his cue from Henry of Ghent, "the chief
representative of the modalist theory of relation" (Henninger
1989). For Henry and James, relations are real in the sense that they
belong to extra-mental reality, even though they are not things really
distinct from their foundations. However, James' understanding of the
way in which a
relation is a mode differs from Henry's. For Henry, a
thing's mode is the same thing as its *ratio* or nature;
it is the particular type of being that thing has, what
"specifies" it. But according to James'
understanding of the term, a mode lies *beyond* the
*ratio* of a thing, like an accident of that thing
(*Quaestiones de divinis praedicamentis*, p. 34,
767-8).
In conclusion, one could say that in his discussion of relations,
James was guided by the same motivation as many of his contemporaries,
namely securing the objectivity of relations without conferring
full-blooded existence upon them. Relations do exhibit some form of
being, James believed, but it is a most faint one
(*debilissimum*), the existence of a mode qua accident.
### 3.4 Individuation
James discusses individuation in two places: *Quodl.* I, q. 21
and *Quodl.* II, q. 1. I will focus on the first treatment,
because it is the lengthier of the two and because the tenor of
James' brief remarks on individuation in *Quodl.* II, q.
1, despite certain similarities with his earlier discussion (Wippel
1994), makes it hard to see how they fit into an overall theory of
individuation.
The question James faces in *Quodl.* I, q. 21 is a markedly
theological one, namely whether, if the soul were to take on the ashes
or the matter of another man at resurrection, the resulting individual
would be the same as before resurrection. In order to answer that
question, James tells us, it is first necessary to determine what the
cause of numerical unity is in the case of composite beings. There
have been numerous answers to that question and James provides a short
account of each. Some philosophers have appealed to quantity as the
principle of numerical unity; others to matter; others yet to matter
as subtending indeterminate dimensions; finally, others have turned to
form as the cause of individuation. According to James, each of these
answers is part of the correct explanation though each is insufficient
if taken on its own. The correct view, according to him, is that form
and matter taken together are the principal causes of numerical
identity in the composite, with quantity contributing something
"in a certain manner." Form and matter, however, are
principal causes in different ways; more precisely, each accounts for
a different kind of numerical unity. For by 'singularity'
we can really mean two distinct things: we can mean the mere fact of
something's being this or that singular thing, or we can point
to a thing qua "something complete and perfect within a certain
species" (*Quodl.* I, 21, 227, 134-35). It is
matter that accounts for the first kind of singularity, and form for
the second. Put otherwise, the kind of unity that accrues to a thing
on account of its being a mere singular results from the concurrence
of the "substantial" unity provided by matter and the
"accidental" unity provided by quantity. By contrast, the
unity that characterizes a thing by virtue of the perfection or
completeness it displays is conferred on it by the form, which is the
principle of perfection and actuality in composites.
Although James thinks he can quite legitimately enlist the support of
such prestigious authorities as Aristotle and Averroes in favor of the
view that matter and form together are constitutive of a thing's
numerical unity, his solution has struck commentators as a somewhat
contrived and ad hoc attempt to reach a compromise solution at all
costs (Pickavé 2007; Wippel 1994). James, it has been
suggested, "seems to be driven by the desire to offer a
compromise position with which everyone can to some extent
agree" (Pickavé 2007: 55). Such a suggestion does accord
with James' oft-expressed preference for solutions that present
a "middle way" (*media via*) among competing
theories (*Quaestiones de divinis praedicamentis*, q. 11, p.
23, 513; *Quodl.* II, q. 7, p. 108, 118; *De regimine
christiano*, 210; see also *Quodl.* II, q. 5, p. 65,
208-209), although these professions of moderation must
sometimes be taken with a grain of salt, as we will see in
Section 8
below.
## 4. Natural Philosophy (The Doctrine of Seminal Reasons)
The belief that matter contains the 'seeds' of all the
forms that can possibly accrue to it is one of the hallmarks of James
of Viterbo's thought, as is the belief that the soul
pre-contains, in the shape of "fitnesses"
(*idoneitates*), all the sensitive, intellective, and
volitional forms it is able to take on, though, as we will see, there
is an important difference between both cases. I will present
James' doctrine of "fitnesses" in the intellect in
Section 6,
and his doctrine of fitnesses in the will in
Section 7.
In this section, I review James' arguments in favor of seminal
reasons (for a full account, see Pickavé and Côté 2018).
James takes as the point of departure of his analysis of change the
view commonly attributed to Aristotle by the medievals according to
which substantial change involves a natural agent educing a form from
the potency of matter. His contention is that for the form to be
educed from prime matter it has to preexist in prime matter in an
"incipient or inchoate" state. Otherwise the forms would
have to be put into matter by an external cause. That cause could only
be a natural agent or a supernatural one. It cannot be the latter, for
then change would no longer be natural; but nor can it be the former
because James holds that forms do not "migrate" from one
substance to another. Hence the forms must preexist in matter.
James holds that the inchoate form present in matter is the same as
the full-fledged, actualized form, differing from it only modally
(*Quodl.* II, q. 5, p. 70, 386-388). To the objection
that if this were true nothing "new" would result from the
process of change, he responded by pointing out that the assumption
that natural change results in the emergence of new things is rejected
by no less an authority than Averroes himself, who denies that natural
agents "induce something extrinsic in matter"
(*Quodl.* II, q. 5, p. 77, 621). What newness does result from
natural change is accounted for by the modal difference between the
potential and the actualized form.
James holds that natural change requires two active principles: that
of the potential form existing in potency in matter, i.e., the seminal
reason, and that of the extrinsic natural agent acting on the matter.
He explicitly denies that the potential form on its own is a
sufficient cause of change (*Quodl.* II, q. 5, p. 89,
1012-1014). The first active principle is that of the inchoate
form itself: it is active "by means of inclination", that
is, it is active inasmuch as it naturally tends to its actualization.
The second active principle is the extrinsic agent that educes the
form from matter; it is active by means of *transmutation*,
i.e., by efficiently causing the change by virtue of the power of
acting (*virtus agendi*) conferred upon it by God. Both causes
cooperate in the production of the effect.
Although James teaches that there are preexisting ideas and volitions
in the soul similarly to the way in which seminal reasons preexist in
matter, there is an important difference between the two cases. For
the "inclining principles" in matter, he explains in
*Quodl.* III, q. 4 (p. 70, 416-424), stand farther from
their actualization than do the inclining principles in the soul. They
thus require more on the part of the extrinsic transmutative cause
than do the soul's inclining principles. So much so, James
explains, that it sometimes appears that all the work is being done by
the extrinsic agent. This is not so, of course, as the
"intrinsic inclining principle" also plays a role. Because
James assigns such an important role to external agent causes in his
account of natural change, one of the common complaints directed
against theories of seminal reasons, namely that they deny the reality
of secondary causes, clearly does not apply to his version of the
theory.
James' doctrine of seminal reasons would elicit considerable
criticism in the early fourteenth century and beyond (Phelps 1980).
The initial reaction came from Dominicans. Bernard of Auvergne wrote a
series of *Impugnationes* (i.e., refutations) *contra
Jacobum de Viterbio* (see Pattin 1962 and Côté 2016),
attacking various aspects of James' metaphysics, including his
theory of seminal reasons; and John of Naples later argued against
James' distinction between the potency of matter and matter *tout
court*. James' theory also encountered resistance from
within the Augustinian Order, e.g., from Alphonsus Vargas of Toledo.
Even arts masters entered the fray. The Milanese arts master Maino de
Maineri devoted a lengthy section of one of his questions in his
question-commentary on Averroes' *De substantia orbis*
dating from 1315-1317 to a presentation and rebuttal of James'
theory of natural change (see Côté 2019).
## 5. The Soul and Its Powers
According to Aristotle in the *De anima*, "the soul is in
a sense all things." James wants to infer from this that all
things must somehow preexist in the soul "by a certain
conformity and resemblance." (*Quodl.* I, q. 7, p. 91,
403). James distinguished between three sorts of conformity: the
conformity between the sense and the sensibles, that between the
intellect and intelligibles, and that between the will and appetibles.
He thus believed that all sensibles must preexist in the sense-power,
all intelligibles in the intellective power, and all
"appetibles" in the appetitive power, i.e., the will. They
do not preexist in their fully actualized state, of course; but nor do
they preexist as purely indeterminate capacities: James holds that
they preexist as "incomplete actualities," innate and
permanent qualities of the soul tending toward their actualization.
Borrowing from Simplicius' commentary on Aristotle's
*Categories,* James described a power of the soul as a
"general (*communis*) aptitude (*aptitudo*) or a
fitness (*idoneitas*) with respect to the complete act"
(*Quodl.* I, q. 7, p. 92, 421) that is divided into further
specific "fitnesses" (*speciales idoneitates*),
"following the diversity of objects" of that power. For
instance, intellect is a general *idoneitas* that is subdivided
in specific *idoneitates* "following the diversity"
of intelligibles. Despite what the phrase "following the
diversity" would have us believe, James did not assert that the
division of fitnesses in the soul exactly mapped the division of kinds
of objects. Though he was clearly committed to some correspondence
between the two, he believed it was not possible in this life to know
how far the division into specific "fitnesses" proceeded,
and hence how many "fitnesses" there are in the soul.
In addition to explaining how the soul's powers relate to
intelligibles, sensibles and appetibles, James also described their
relation to occurrent *acts* of understanding, sensing, and
willing. As with seminal reasons, an aptitude in potency in the soul
and the corresponding actualized aptitude were not viewed as really
but only as modally distinct (*Quodl.* III, q. 5, p. 84,
62-63). James also held that each power was both passive and
active in relation to its actualization: passive inasmuch as a power
qua power is not yet actualized, active insofar as it tends toward its
actualization (*Quodl.* I, q. 12, p. 165, 281-285). To
the argument that nothing could be both active and passive in the same
respect, James responded that this was true only of *transeunt*
actions, such as fire heating a pot, which require the active cause of
change to be distinct from the passive recipient of change, not of
*immanent* actions, as are the operations of the soul.
Although James held that all the soul's powers were active, they
were not so to the same degree: the will and its aptitudes were
considered to be more active and thus "closer" to their
actualization than those of the intellect; and the intellect and its
aptitudes, in turn, were viewed as more active and closer to their
actualization than those of the senses. Accordingly, James considered
that the more active an aptitude or "inclining principle"
was, the more causal power it had and the less causal input it
required from other sources.
From the foregoing it is easy to see what position James would take in
what was a commonly discussed topic in the thirteenth century, namely
the problem whether or not the "essence" of the soul was
really different from its powers. The position espoused by the
scholastics whose teachings James studied most carefully, namely
Thomas Aquinas, Giles of Rome, and Godfrey of Fontaines, was that the
soul was indeed really distinct from its powers. There was, however, a
commonly discussed minority position, one that eschewed both real
distinction and strict identity (which had few followers): that of
Henry of Ghent. Henry believed that the powers of the soul were
"intentionally," not really distinct from its essence.
James, however, sided with Thomas, Giles, and Godfrey, against Henry
(*Quodl.* II, q. 14, p. 160, 70-71; *Quodl.* III,
q. 5, p. 84, 62-63). His reasoning was as follows. Given that
everyone agreed that there was a real distinction between the essence
of the soul and one of its powers in *act*, that is, between
the soul and, e.g., an occurrent act of willing, then if one denied
that there was a real distinction between the soul and its powers, as
Henry had, one would be committed to the existence of a real
distinction between the power in act (the occurrent act of willing)
and that same power in potency (the will qua capable of producing that
act). But as we have already seen, James believed that something in
potency is not really distinct from that same thing in act. Hence the
soul's essence must be really distinct from its powers.
## 6. Cognition
James' longest discussion of cognition occurs in question 12 of
his first Quodlibet, which asks whether it is necessary to posit an
agent intellect in the human soul that is distinct from the possible
intellect (for discussion see Côté 2013, and
Solère 2018a and 2018b). Since the view that there is such a
distinction was an important component of abstraction theories of
knowledge, what was really at stake was whether abstraction theory
provided the correct account of knowledge acquisition. According to
abstraction theory, of which Aquinas was one of the most famous
exponents, the mind derives its content from an engagement with
sense-images (phantasms). The agent intellect, a "part" of
the intellectual soul, was held to abstract intelligible species from
the sense-images, and these species were then "received"
by the possible intellect. Since the same power could not be both
active and passive, the two intellects had to be distinct. As can be
gathered from the preceding section, James would not be favorably
disposed to such a theory. Since intelligibles, for him, preexisted in
the soul in the form of "aptitudes" or
"fitnesses," there was no need for them to be abstracted;
and since the main reason for positing an agent intellect was to
perform abstraction, there was no reason to suppose that such a
faculty existed -- which is exactly the position he defended in
question 12. James' theory of innate *idoneitates* in
the soul thus entailed a wholesale rejection of abstraction theories
of knowledge and of the model of the soul on which they rested
(Côté 2013).
James was well aware that by denying the distinction between the two
intellects, he was opposing the consensus view of Aristotle
commentators. Indeed, his views seem to run counter to the *De
anima* itself, though, as he would mischievously point out, it was
difficult to determine just what Aristotle's doctrine was, so
obscure was its formulation (*Quodl.* I, q. 12, p. 169,
426--170, 439). He explained that what he was denying was not
that there was a "difference" in the intellect--the
soul's powers did exhibit such a difference since they were both
active and passive--rather, what he was denying was that the
existence of such a difference implied a real distinction of powers
(*Quodl.* I, q. 12, p. 170, 440-45).
As far as James was concerned the relevant question was not how
intelligibles are abstracted so as to produce occurrent acts of
cognition, but rather how innate intellectual aptitudes can develop
into occurrent acts of cognition. The key to his solution lay in his
view that aptitudes actively tend toward their completion. James
described such active inclination as a kind of
self-motion--*formal* self-motion--which he viewed as
the main created causal contributor to a power's actualization.
But of course "main causal contributor" does not mean
"sole causal contributor." Although the soul's
powers stand closer to their actualization and exercise greater formal
self-motion than do the seminal reasons in matter, and although the
intellect and its aptitudes likewise stand closer to their
actualization and exercise greater self-motion than do the senses and
their aptitudes, no "fitness" on its own is sufficient to
bring about its actualization (*Quodl.* I, q. 7, p. 102,
777-778); something further is required. In the case of the
senses, what is required is the "excitative" causality
(*excitatio*) exercised by the physical change in the organ of
sense; in the case of the intellect, it is the excitative causality
supplied by the phantasm. In both cases, James held that the
power's formal self-motion together with the
"rousing" action stemming from the organ or the phantasm
necessarily entailed their effect.
James of Viterbo is thus part of a group of thinkers in the history of
philosophy for whom the intellect itself, as opposed to extra-mental
objects or their proxies in the soul, is the main causal factor not
only of the production of acts of knowledge but of their conceptual
content as well. Indeed, for James, that content is already present in
the soul in the form of general and specific "fitnesses"
or "aptitudes." The aptitudes are innate (*naturaliter
inditae*, "always present in [the soul]"
(*Quodl.* I, q. 7, p. 92, 422-423), ready to be
actualized in the presence of the appropriate triggering factors. Few
other scholastics were ready to espouse such an extreme form of
innatism, which some scholars have likened to the Early Moderns'
theory of innate ideas (Solère 2018a).
## 7. Ethics
### 7.1 Freedom of the Will
The scholastics all agreed that the human will is free. They also
agreed that the will and the intellect both played a role in the
genesis of the voluntary act. What they disagreed about was which of
the two faculties had the decisive role. For Henry of Ghent, the will
was the sole cause of its free acts (*Quodl*. I, q. 17), so
much so that he tended to relegate the intellect's role to that
of a "sine qua non" cause. At the other end of the
spectrum of opinions, Godfrey of Fontaines held that the will is
always passive in regard to the intellect and that it is the intellect
that exercises the decisive motion in producing the voluntary act
(*Quodl*. III, q. 16). Although James of Viterbo used language
apt to suggest that he wanted to steer a middle course between Henry
and Godfrey (*Quodl*. II, q. 7), his preferences clearly lay
with a position like Henry's, as can be gathered from
his most detailed treatment of the question in *Quodl*. I, q. 7
(see Dumont 2018a for a complete treatment of the issue in James).
James' thesis in *Quodl*. I, q. 7 was that the will both
moves itself and is moved by another. An agent that is moved by
another can be said to be free as long as it also moves itself in some
way. The human will is just such an agent: it is moved by itself; but
it is also moved by another, namely by the object of willing outside
the soul, and it is moved by that same object insofar as it is
apprehended by the intellect. To explain how this was so, James used
Aquinas' distinction between the specification of an act of will
(the will's willing this as opposed to that) and the exercise of
the act (the will's actually willing this). The will formally
moves itself in regard to the exercise of its act; it is moved by the
object outside the soul in regard to the specification of its act; and
it is moved by the intellect, or more precisely by the object as
apprehended by the intellect, both in regard to the specification and
the exercise of its act. James called the motion by which the will was
specified by the object as apprehended by the intellect
"metaphorical" motion, as distinct from
"proper" motion. Final causes were metaphorical movers in
this sense. James called the motion by which the will is moved to its
exercise by the object as apprehended by the intellect
*efficient* motion. This is *prima facie* a surprising
move given what we know about James' theory of the soul as a
self-mover. For even though James was keen to show that the will
cannot, so to speak, "go it alone," did not making the
object as understood by the intellect the efficient cause of the
will's exercise of its act risk tilting the causal balance too far
in the direction of the intellect? But in fact if the intellect was an
efficient cause, it wasn't so in the usual sense of the word
"efficient." James claimed to follow Aristotle in
*Metaphysics* in distinguishing two ways in which something
could be moved efficiently. The first was through direct efficient
causation; in this sense, only God was the efficient cause of the
will's motion. The second sense was through "a certain
connection" that obtains between two things that are rooted in
the same subject. It is this second sense that is relevant for
understanding how the intellect moves the will. As James understood
the matter, "because the will and the intellect are rooted and
connected in the same essence of the soul, it follows that when the
soul is in actuality in relation to the intellect, there arises
(*fit*) in the soul a certain inclination such that it becomes
in actuality in relation to the will, with the result that (the will)
moves itself." It is, he concluded, "on account of that
inclination that the intellect moves the will;" and the motion
by which it does so can be termed a "rousing up"
(*excitatio*) (*Quodl.* I, q. 7, p. 104,
849-855).
Thus, despite what the language of "efficiency" might
suggest, James' account of how the intellect moves the will is
similar to his account of how the sense organ moves the power of sense
and how the imagination moves the intellect: in all three cases the
motion is of the "excitatory" variety; the power itself as
the formal cause of its acts supplies the lion's share of the
voluntary act's causality. If anything, the will itself can be
considered to be an efficient cause (in the first sense of the word),
if only *per accidens.* This is because the will, like heavy
objects but unlike the intellect and the senses, belongs to the class
of things that both move themselves formally and move others
efficiently. Thus, just as a heavy object is formally self-moved as a
result of its heaviness but moves another efficiently inasmuch as it
divides the medium it traverses, so too the will is a self-mover that
efficiently moves the intellect and the other faculties, and hence can
be said to move itself *per accidens* (*Quodl.* I, q.
7, p. 98, 633-640).
What is unclear is how any of the above makes the will's
volitions free. After all, by James' own admission, the senses
and the intellect are also formal self-movers, and yet they are not
free. The difference is that whereas acts of sensation and
intellection, given the self-motion of the senses and the intellect,
will necessarily follow upon the motions of the organs of sense and
the motion of the imagination, no act of willing according to James is
necessitated by the efficient causality (in James' sense of the
word) of the intellect. Thus, although any act of willing must be
preceded by an act of the intellect, no act of the intellect
necessarily entails a particular act of willing. This makes the will
the only power of the soul that is essentially free, though some other
powers may be called free "by participation," insofar as
they are apt to be moved by the will. (*Quodl.* I, q. 7, p.
107, 935-944)
### 7.2 Connection of the Virtues
Like Albert the Great and Thomas Aquinas, James of Viterbo holds that
the moral virtues, considered as habits, i.e., virtuous dispositions
or acts, are "connected." In other words, he believes that
one cannot have one of the virtues without having the others as well.
The virtues he has in mind are what he calls the "purely"
moral virtues, that is, courage, justice, and temperance, which he
distinguishes from prudence, which is a partly moral, partly
intellectual virtue. In his discussion in *Quodl*. II, q. 17
James begins by granting that the question is difficult and proceeds
to expound Aristotle's solution to it, which he will ultimately
adopt. As James sees it, Aristotle proves in *Nicomachean
Ethics* VI the connection of the purely moral virtues by showing
their necessary relation to prudence, that is, by showing that just as
moral virtue cannot be had without prudence, prudence cannot be had
without moral virtue. The connection of the purely moral virtues
follows from this: they are necessarily connected because (1) each is
connected to prudence and (2) prudence is connected to the virtues
(*Quodl*. II, q. 17, p. 187, 436 - p. 188, 441).
### 7.3 Love of Self vs. Love of God
Although there has never been any doubt among medieval theologians
that man ought to love God more than he loves himself, they did
disagree about whether man can fulfill this obligation
by purely natural means or whether he requires grace in order to do
so. The consensus in the thirteenth century, shared, among other
authors, by Thomas Aquinas, Giles of Rome, and Godfrey of Fontaines,
was that although perfect love of God is possible only through grace,
man does have the natural capacity to love God more than himself (See
Osborne 2005 for discussion of the topic in the thirteenth century,
including in James of Viterbo). Many of these authors believed that
granting man such a capacity was necessary in order to safeguard the
universally accepted principle that "grace perfects, not
destroys, nature". For if man is naturally inclined to love
himself more than God and is made to love God more than himself only
through grace, then that natural inclination is not so much perfected
as replaced by a different one. Against these authors, James of
Viterbo in his *Quodlibet* II, question 20 famously defended
the view that man naturally loves himself more than God. Before
stating James' argument in support of this position, it is
important to be attentive to the precise way in which he formulated
the question, as well as to how he understands the term
"love" (*diligere*).
First, the question James raised in *Quodl.* II, q. 20, was not
(a): "Does man naturally love God more than he loves
himself?" or (b): "Does man naturally love himself more
than God," but rather (c): "Does man naturally love God
more than himself, or the converse (*vel econverso*)?"
What (c), but not (a) or (b), rules out is that man can love himself
and God equally. James did not think such equal love was possible. He compared the
case of self-love vs. love of God to the distinction between natural
and supernatural knowledge: "Natural knowledge starts from
creatures and proceeds thence to God; contrariwise, supernatural
knowledge begins with God and proceeds thence to creatures."
Since the distinction between natural and supernatural knowledge is
exhaustive and mutually exclusive, there is no other kind of
knowledge, and hence none that would proceed "equally"
from creatures and God. But for the comparison between knowledge and
love to hold, the conclusion that applies to the former must apply to
the latter, and that conclusion is that if man does not love God more
than himself, then he loves himself more than God; and conversely.
Second, following the practice of many scholastics, James
distinguished between two forms of love: the "love of
desire" or "love of covetousness" (*amor
concupiscentiae*), which he defined as the "love of some
good which we want for ourselves or for others," and the love of
friendship (*amor amicitiae*), or the love of someone for their
own sake. Although James believed that rational creatures love God in
both these ways, he was clear that the debate about whether or not man
loves himself more than God or the converse concerned only the love of
friendship; the debate, then, was about whether man naturally loves
himself for his own sake more than he loves God for his own
sake--which was the view James defended--or the converse.
James offered two arguments to support his position. The first was
based on the principle that the mode of natural love must be
commensurate with the mode of being and, hence, with the mode of being
one--since everything that is, is one. Now a thing is one with
itself by virtue of numerical identity, but it is one with something
else by virtue of a certain conformity. For instance, Socrates is one
with himself by virtue of his being Socrates, but he is one with Plato
by virtue of the fact that both share the same form. But the being
something has by virtue of numerical identity is "greater"
than the being it has by reason of something it shares with another.
And given that the species of natural love follows the mode of being,
it follows that it is more perfect to love oneself than to love
another (*Quodl*. II, q. 20, p. 206, 148 - p. 207, 165).
The second argument attempted to derive the same conclusion from the
principle that "God's love of charity elevates [human]
nature's" love of God (*Quodl.* II, q. 20, p. 207,
166-167). James reasoned that there was only one way in which
charity could elevate nature, namely by making it love God above all
else. But if this was so, it had to follow "that nature in
itself cannot love God in this way (...). For if it could, it would
not need to be elevated by charity." (*Quodl.* II, q. 20,
p. 207, 181-184). And for James to say that nature itself cannot
love God above all else was just to say that man naturally loves
himself above all else. QED.
But if charity elevated nature in this way, did it not
"destroy" nature after all, by substituting for the
inclination to love the self above all else the entirely different
inclination to love God above all else? James' answer was that
it did not, since "through charity man loves himself no less
than before; he simply loves God more." (*Quodl.* II, q.
20, p. 210, 280-282).
James' opposition to the consensus position on the issue of the
love of self vs. love of God did not go unnoticed. It elicited
considerable criticism in the years following his death, from such
authors as Durand of Saint-Pourçain, Peter of Palude, and John
of Naples (Jeschke 2009).
## 8. Political Thought
Although James touches briefly on political issues in *Quodl*.
I, q. 17 (see Kempshall, 1999, and Côté, 2012), his most
extensive discussions occur in his celebrated *De regimine
christiano* (*On Christian Government*), written in 1302
during the conflict pitting Boniface VIII against the king of France
Philip IV (the Fair). *De regimine christiano* is often
compared in aim and content with Giles of Rome's *De
ecclesiastica potestate* (*On Ecclesiastical Power*), which
offers one of the most extreme statements of pontifical supremacy in
the thirteenth century; indeed, in the words of *De
regimine*'s editor, James' goal is "to formulate
a theory of papal monarchy that is every bit as imposing and ambitious
as that of [Giles]" (*De regimine christiano*: xxxiv).
However, as scholars have also recognized, James shows a greater
sensitivity to the distinction between nature and grace than Giles did
(Arquillière 1926).
*De regimine christiano* is divided into two parts. The first,
dealing with the theory of the Church, is of little philosophical
interest, save for James' enlisting of Aristotle to show that
all human communities, including the Church, are rooted in the
"natural inclination of mankind." The second and longest
part is devoted to defining the nature and extent of Christ's
and the pope's power. One of James' most characteristic
doctrines is found in Book II, chapter 7, where he turns to the
question of whether temporal power must be "instituted" by
spiritual power, in other words, whether it derives its legitimacy
from the spiritual, or possesses a legitimacy of its own. James states
outright that spiritual power does institute temporal power, but notes
that there have been two views in this regard. Some, e.g., the
proponents of the so-called "dualist" position such as
John Quidort of Paris, hold that the temporal power derives directly
from God and thus in no way needs to be instituted by the spiritual,
while others, such as Giles of Rome in *De ecclesiastica
potestate*, contend that the temporal derives wholly from the
spiritual and is devoid of any legitimacy whatsoever "unless it
is united with spiritual power in the same person or instituted by the
spiritual power" (*De regimine christiano*: 211).
James is dissatisfied with both positions and, as he so often does,
endeavors to find a "middle way" between them. His
solution is to say that the "being" of the temporal
power's institution comes both from God--by way of
man's natural inclination--in "a material and
incomplete sense," and from the spiritual power by which it is
"perfected and formed." This is a very clever solution. On
the one hand, by rooting the temporal power in man's natural
inclination, albeit in the imperfect sense just mentioned, James was
acknowledging the legitimacy of temporal rule independently of its
connection to the spiritual, thus "avoid[ing] the extreme and
implausible view of [Giles of Rome]" (Dyson 2009: xxix). On the
other hand, making the natural origins of temporal power merely the
incomplete matter of its being was a way of stressing its
subordination and inferiority to the spiritual order, in keeping with
his papalist convictions. Still, James' very choice of analogies
to illustrate the relationship between the spiritual and temporal
realms showed that his solution lay much closer to the theocratic
position espoused by Giles of Rome than his efforts to find a
"middle way" would have us believe. Thus, comparing the
spiritual power's relation to the temporal in terms of the
relation of light to color, he explains that although "color has
something of the nature of light, (...) it has such a feeble
light that, unless there is present a more excellent light by which it
may be formed, not in its own nature but in its power, it cannot move
the vision" (*De regimine christiano*: 211). In other
words, James is telling us that although temporal power does originate
in man's natural inclinations, it is ineffectual qua power
unless it is informed by the spiritual.
## 1. Life and main works
Juan Luis Vives was born in Valencia, Spain on March 6, 1493 (not
1492, as is often found in the literature on him). His parents were
Jewish cloth merchants who had converted to Catholicism and who strove
to live with the insecurities of their precarious situation. His
father, Luis Vives Valeriola (1453-1524), had been prosecuted in
1477 for secretly practicing Judaism. A second trial took place in
1522 and ended two years later when he was burned at the stake. His
mother, Blanquina March (1473-1508), became a Christian in 1491,
one year before the decree expelling Jews from Spain. She died in 1508
of the plague. Twenty years after her death, she was charged with
having visited a clandestine synagogue. Her remains were exhumed and
publicly burned.
In his youth, Vives attended the Estudio General of his hometown. In
1509, he moved to Paris and enrolled as a freshman in the faculty of
arts. He was never to return to Spain. Vives began his studies at the
Collège de Lisieux, where Juan Dolz had just started a
triennial course, but soon moved to the Collège de Beauvais,
where he attended the lectures of Jan Dullaert (d.1513). From the fall
of 1512, Vives started to attend the course of the Aragonese Gaspar
Lax (1487-1560) at the Collège de Montaigu. Through
Nicolas Bérault (c.1470-c.1545), who was an associate of
Guillaume Budé (1467-1540) and taught at various colleges
in Paris, Vives also came into contact with the Parisian humanist
circle.
In 1514, Vives left Paris without having taken any formal academic
degree and moved to the Low Countries. He settled in Bruges, where he
would spend most of his life. About this time, he was introduced to
Erasmus and appointed as tutor to the Flemish nobleman William of
Croÿ. From 1517 until Croÿ's premature death in 1521, Vives
lived in Louvain and taught at the Collegium Trilingue, a humanist
foundation based on Erasmian educational principles. In this period he
wrote '*Fabula de homine*' ('A Fable about
Man,' 1518), an early version of his views on the nature and
purpose of mankind; *De initiis, sectis et laudibus
philosophiae* (*On the Origins, Schools and Merits of
Philosophy*, 1518), a short essay on the history of philosophy;
*In pseudodialecticos* (*Against the
Pseudo-Dialecticians*, 1519), a lively and trenchant attack on
scholastic logic; as well as a critical edition, with an extensive
commentary, of Augustine's *De civitate Dei* (*City of
God*, 1522), which was commissioned by Erasmus.
From 1523 to 1528, Vives divided his time between England, which he
visited on six occasions, and Bruges, where he married Margarita
Valldaura in 1524. In England he attended the court of Henry VIII and
Catherine of Aragon, and was tutor to their daughter, Mary. He also
held a lectureship at Corpus Christi College, Oxford, and associated
with English humanists such as Thomas More and Thomas Linacre. During
these years he published *De institutione feminae Christianae*
(*The Education of a Christian Woman*, 1524), in which he set
out pedagogical principles for the instruction of women; the extremely
popular *Introductio ad sapientiam* (*Introduction to
Wisdom*, 1524), a short handbook of ethics, blending Stoicism and
Christianity; and *De subventione pauperum* (*On Assistance
to the Poor*, 1526), a program for the organization of public
relief, which he dedicated to the magistrates of Bruges. In 1528 he
lost the favor of Henry VIII by siding with his fellow countrywoman
Catherine of Aragon in the matter of the divorce. He was placed under
house arrest for a time, before being allowed to return to Bruges.
The last twelve years of Vives' life were his most productive,
and it was in this period that he published several of the works for
which he is best known today. These include *De concordia et
discordia in humano genere* (*On Concord and Discord in
Humankind*, 1529), a piece of social criticism emphasizing the
value of peace and the absurdity of war; *De disciplinis*
(*On the Disciplines*, 1531), an encyclopedic treatise
providing an extensive critique of the foundations of contemporary
education, as well as a program for its renewal; and *De anima et
vita* (*On the Soul and Life*, 1538), a study of the soul
and its interaction with the body, which also contains a penetrating
analysis of the emotions. *De veritate fidei Christianae*
(*On the Truth of the Christian Faith*), the most thorough
discussion of his religious views, was published posthumously in 1543.
He died in Bruges on May 6, 1540.
## 2. Dialectic and language
Vives' career as a leading northern European humanist starts
with the publication in 1519 of *In pseudodialecticos*, a
satirical diatribe in which he voices his opposition to scholastic
logic on several counts. He follows in the footsteps of earlier
humanists such as Lorenzo Valla (1406-57) and Rudolph Agricola
(1443-85), who set about to replace the scholastic curriculum,
based on syllogistic and disputation, with a treatment of logic
oriented toward the use of the topics, a technique of verbal
association aiming at the invention and organization of material for
arguments, and persuasion. Vives' severe censure of scholastic
logic derived from his own unhappy experience with the scholastic
curriculum at Paris. Therefore, as he himself emphasized, no one could
accuse him of "condemning it because he did not understand
it" (*Opera omnia*, 1964, III, 38; all quotations below
are from this edition). Erasmus wrote to More that no one was better
suited for the battle against the dialecticians, in whose ranks he had
served for many years.
The main targets of Vives' criticism are Peter of Spain's
*Summule logicales*, a work dating from the thirteenth century
but which still held an important place in the university curriculum,
and the theory of the property of terms, a semantic theory dealing
with properties of linguistic expressions such as signification, i.e.,
the meaning of a word regardless of context, and supposition, i.e.,
the meaning of a word within the context of its use in a proposition.
He repudiates the use of technical jargon, accessible only to a narrow
group of professionals, and maintains that if scholastic logicians
made an effort to speak plainly and according to common usage, many of
their conundrums would disappear. Instead, they choose to fritter away
their ingenuity on logically ambiguous propositions known as
*sophismata*. Vives provides many examples of such
propositions, which in his view make no sense whatever and are
certainly of no use. Many of these, such as "Some animal is not
man, therefore some man is not animal" (III, 54), were standard
scholastic examples. Others, such as "Only any non-donkey
*c* of any man except Socrates and another *c* belonging
to this same man begins contingently to be black" (III, 40), are
intended as a mockery of the futile quibbling he associated with
scholastic method. Since dialectic, like rhetoric and grammar, deals
with language, its rules should be adapted to the rules of ordinary
language; but with what language, he asks, do these propositions have
anything to do? Moreover, dialectic should not be learned for its own sake, but as
a support for the other arts; therefore, no more effort should be
spent on it than is absolutely necessary. Vives' criticism is
also informed by ethical concerns and the demand for a method that
would be of use in everyday life rather than in academic disputations.
In contrast to the standard interpretation, it has been argued that
*In pseudodialecticos* is not a serious refutation of
scholasticism, but rather a sophistical exercise replete with
fallacious arguments, whose purpose is to teach a valuable lesson in
scholastic dialectic challenging the reader to detect the many logical
fallacies that it contains. According to this interpretation, *In
pseudodialecticos* offers no serious arguments against scholastic
logical theory or principles (see Perreiah, 2014, ch.4).
A more detailed criticism can be found in *De disciplinis* of
1531. This encyclopedic treatise is divided into three parts: *De
causis corruptarum artium* (*On the Causes of the Corruption of
the Arts*), seven books devoted to a thoroughgoing critique of the
foundations of contemporary education; *De tradendis
disciplinis* (*On Handing Down the Disciplines*), five
books in which Vives outlines his program for educational reform; and
five shorter treatises *De artibus* (*On the Arts*),
dealing mainly with logic and metaphysics. These five treatises
include *De prima philosophia* (*On First Philosophy*),
a compendium of Aristotelian physics and metaphysics from a Christian
point of view; *De censura veri* (*On the Assessment of
Truth*), a discussion of the proposition and the forms of
argumentation; *De explanatione cuiusque essentiae* (*On the
Explanation of Each Essence*); *De instrumento
probabilitatis* (*On the Instrument of Probability*), which
contains a theory of knowledge, as well as a detailed account of
dialectical invention; and *De disputatione* (*On
Disputation*), in which he discusses non-formal proofs. In these
treatises, Vives not only continues the trends in humanist dialectic
initiated by Valla and Agricola, but also displays a familiarity with
philosophical technicalities that was unusual among humanists and that
reveals the more traditionally Aristotelian aspects of his thought.
His appraisal of the Aristotelian corpus is summarized in *Censura
de Aristotelis operibus* (*Assessment of Aristotle's
Works*, 1538). A posthumously published treatise entitled
*Dialectices libri quatuor* (*Four Books of Dialectic*,
1550) appears to be a youthful work that Vives evidently did not
consider suitable for publication.
Vives' criticism of scholastic logic hinges on a profound
analysis of the arts of discourse. For him, the supremacy of the
*sermo communis* (ordinary discourse) over the abstract
language of metaphysics is indisputable. Philosophy ought not to
invent the language and subject of its own specific investigation (VI,
140). In *De causis corruptarum artium*, he writes:
"enraged against nature, about which they know nothing, the
dialecticians have constructed another one for themselves--that
is to say, the nature of formalities, individual natures
(*ecceitates*), realities, relations, Platonic ideas and other
monstrosities which cannot be understood even by those who have
invented them" (VI, 190-1). Instead of the formal language
of the dialecticians, which he found completely unsuited to interpret
reality, he proposes the less rigorous but more concrete universe of
everyday communication, which answers all our practical needs and aims
to provide a knowledge that is useful.
## 3. Epistemology and history
Vives was pessimistic about the possibility of attaining knowledge as
understood in Aristotelian terms; and his thought anticipates the
moderate skepticism of early modern philosophers such as Francisco
Sanches (1551-1623) and Pierre Gassendi (1592-1655). Vives
belongs, like Francis Bacon (1561-1626), to the so-called
"maker's knowledge" tradition, which regards
knowledge as a kind of making or as a capacity to make (see A.
Pérez-Ramos, *Francis Bacon's Idea of Science and the
Maker's Knowledge Tradition*, Oxford and New York: Clarendon
Press and Oxford University Press, 1988). He often insists on the
practical nature of knowledge; for instance, in *De causis
corruptarum artium*, he maintains that peasants and artisans know
nature far better than many philosophers (VI, 190). In *Satellitium
animi* (*The Soul's Escort*, 1524), a collection of
aphorisms dedicated to Princess Mary, he points out that "man
knows as far as he can make" (IV, 63). A central tenet of the
maker's knowledge tradition is that man cannot gain access to
nature's intimate works, since these, as *opera divina*
(divine works), are only known to God, their maker.
According to Vives, things have two different layers: one external,
consisting in the sensible accidents of the thing, and another,
internal, and therefore hidden, which is the essence of the thing
(III, 197). "The true and genuine essences of all things",
he writes, "are not known by us in themselves. They hide
concealed in the innermost part of each thing where our mind, enclosed
by the bulk of the body and the darkness of life, cannot
penetrate" (III, 406-7). Vives subscribes to the
Aristotelian principle that all of our knowledge has its origin in
perception. We cannot learn anything, he maintains, except through the
senses (III, 193 and 378). But he also maintains that the human mind
"must realize that, since it is locked up in a dark prison and
surrounded by obscurity, it is prevented from understanding many
things and cannot clearly observe or know what it wants: neither the
concealed essence of material things, nor the quality and character of
immaterial things; nor can it, on account of the gloom of the body,
use its acuity and swiftness" (III, 329). In other words, since
the senses cannot grasp what is incorporeal or hidden, sense
perception does not yield any knowledge of the essence of things but
only of their accidents. Vives' view, however, is that sensory
knowledge must nonetheless be transcended by means of reasoning. Yet,
according to him, the best that human reason can accomplish in this
process is to provide a judgment grounded in all the available
evidence, thereby increasing the probability of the conclusion. In his
view, our knowledge of the essence of a thing is only an approximate
guess based on the sensible operations of the thing in question (III,
122).
The most reliable guide for human inquiry, he argues, is
mankind's natural propensity toward what is good and true. This
light of our mind, as he also calls it, is always, directly or
indirectly, inclined toward what is good and true, and can be regarded
as the beginning and origin of prudence and of all sciences and arts.
This natural propensity can be perfected if it is subjected to
teaching and exercise, just as the seeds of plants grow better if they
are cultivated by the industrious hands of a farmer. He found
philosophical grounds for this idea in Cicero's report (see,
e.g., *De natura deorum* (*On the Nature of the Gods*),
I.43-5) of the Hellenistic theory of *anticipationes*
(anticipations) and of *naturales informationes* (natural
notions), which we have not learned from teachers or custom, but are
instead derived and received from nature (III, 356-7). The
topics, which Vives conceives as a reflection of the ontological
order, represent another valuable instrument for human inquiry. In his
view, the topics are a set of universal aspects of things that help to
bring order to the great variety of nature. As such, they play an
important role as organizing principles of knowledge. They are like a
grid through which knowledge can be acquired and arguments formulated
(see Nauta, 2015). Nevertheless, human knowledge can be nothing other
than a finite participation in creation. Because of the limitations
that characterize man's fallen state, investigations into the
realm of nature can only lead to conjectures, and not to firm and
indubitable knowledge, which we neither deserve nor need. In *De
prima philosophia*, Vives writes: "human inquiry comes to
conjectural conclusions, for we do not deserve certain knowledge
(*scientia*), stained by sin as we are and hence burdened with
the great weight of the body; nor do we need it, for we see that man
is ordained lord and master of everything in the sublunary
world" (III, 188). In his opinion, certainty is not a
prerequisite for advances in science and philosophy; and as a
criterion for scientific progress and for the rational conduct of
life, he advocates a method consisting in sound judgment based on
experience.
History, seen as the sum of all human experience, is therefore of
great importance for all branches of learning. In *De tradendis
disciplinis*, Vives maintains that "history appears to
surpass all disciplines, since it either gives birth to or nourishes,
develops [and] cultivates all arts" (VI, 291). In this sense,
history is not primarily regarded as the memory of great deeds or the
source of useful examples, but as a process of development. In
principle, every new generation is better equipped than the preceding
one, since it can derive advantage from all earlier experience:
"It is therefore clear," he maintains, "that if only
we apply our minds sufficiently, we can formulate better opinions
about matters of life and nature than could Aristotle, Plato, or any
of the ancients" (VI, 6-7). According to Vives, the saying
"we are like dwarfs on the shoulders of giants" is plainly
false. We are not dwarfs, nor were they giants. All humans are
composed of the same structure (VI, 39). The idea of progress plays an
essential role in Vives' conception of intellectual history, and
several of the cultural problems he deals with, such as the causes of
the corruption of the arts, are approached from an historical
perspective.
## 4. Moral and social philosophy
Vives' moral philosophy stems mainly from his Christian humanism
and is aimed at the reform of both individuals and society. He often
proclaims the superiority of Christian ethics over pagan wisdom (I,
23; VI, 209-10). In *De causis corruptarum artium*, he
argues at length that Aristotle's ethics, on account of its
worldly conception of happiness and virtue, is completely incompatible
with Christianity: "we cannot serve both Christ and
Aristotle, whose teachings are diametrically opposed to each
other" (VI, 218). He has more sympathy for Platonism and
Stoicism, which he believes are broadly in line with Christian
morality. In *De initiis, sectis et laudibus philosophiae*, he
even asserts: "I do not think, in fact, that there is any truer
Christian than the Stoic sage" (III, 17).
In *Introductio ad sapientiam*, inspired by the teaching of the
Stoics, he recommends self-knowledge as the first step toward virtue,
which he regards as the culmination of human perfection. We should
not, in his judgment, call anything our own except our soul, in which
learning and virtue, or their opposites, are to be found. The body is
"a bondslave of the soul" (*mancipium animi*),
while such things as riches, power, nobility, honor, dignity, and
their contraries, are completely external to man (I, 2; VI, 401). Vice
follows from a wrong judgment about the value of things:
"Nothing," he writes, "is more destructive in human
life than a corrupt judgment which gives no object its proper
value" (I, 1). To be wise, however, is not only to have true
opinions about things, but also to translate this knowledge into
action by desiring honorable things and avoiding evil (I, 2). Wisdom
therefore requires the subordination of the passions to the control of
the intellect.
Vives holds that the best means to secure the reform of society is
through the moral and practical training of the individual. In his
view, there are two related paths which are necessary to develop our
humanity: education and action. Education is fundamental in order for
us to rise above our animal instincts and realize our full potential
as human beings. However, learning needs to be applied in everyday
life, especially for the public good (see Verbeke, 2014). Man, by his
own nature, is a social being: "Experience proves every day that
man was created by God for society, both in this and in the eternal
life. For this reason God inspired in man an admirable disposition of
benevolence and good will toward other men" (VI, 222-3).
In the first book of *De subventione pauperum*, which consists
of a theoretical discussion of the human condition, he stresses not
only our need for and dependency on others, but also our natural
inclination to love and help one another. He regards the development
of society as a distinctly human achievement, based on the ability to
profit from experience and turn knowledge to useful ends. Social
problems, such as poverty and war, are the result of emotional
disorders. During Vives' lifetime, Europe experienced dissension
and war between princes and within the church, as well as the
increased threat posed by Muslim expansion into Western Europe. He
addressed the problem of political and religious disturbances in
several works, which also deal with the psychological origins of
discord, the proper conduct of all the offices of the commonweal, and
the theme of Christian harmony. His major political text on European
war and peace is *De concordia et discordia in humano genere*,
in which he sets out an account of the origins of discord in society
and aims to show how peace and concord can be fostered through
knowledge of human nature, especially the emotions. According to him,
the virtue of the people can only be maintained and promoted in peace.
"In war, as in a sick body, no member exercises its office
properly" (V, 180-1). Vives deplored the Italian war
between France and Spain (1521-6), which, he felt, completely
ignored the rights of the suffering population, and accused Francis I
and Charles V of irresponsibility and criminal ambition (VI, 480). In
these circumstances, he often referred to the notion of natural law,
which, as he explains in his preface to Cicero's *De
legibus* (*On the Laws*), has the same validity everywhere
because it was impressed on the heart of every human being before
birth (V, 494).
## 5. Psychology
Vives' philosophical reflections on the human soul are mainly
concentrated in *De anima et vita*, published in 1538, which
provides the psychological underpinning for many of his educational
ideas and can be characterized as a prolegomenon to moral philosophy.
He attempts to reconcile the Aristotelian view of the soul as an
organizing and animating principle with the Platonic conception of the
soul as an immaterial and immortal substance. He also pays close
attention to physiology and, following the Galenic tradition,
maintains that our mental capacities depend on the temperament of our
body.
The structure of the treatise is indebted to the traditional approach
of faculty psychology, in which the soul is said to be composed of a
number of different faculties or powers, each directed toward a
different object and responsible for a distinct operation. The first
book covers the functions of the vegetative soul (nutrition, growth
and reproduction), of the sensitive soul (the five external senses),
and of the cogitative soul (the internal senses, i.e., a variety of
cognitive faculties, including imagination, fantasy and the estimative
power, which are located in the three ventricles of the brain, and
whose acts follow from the acts of the external senses). The second
book deals with the functions of the rational soul and its three
faculties (mind, will, and memory), as well as with topics stemming
from Aristotle's *Parva naturalia*, such as sleep,
dreams, and longevity. The third and final book explores the emotions,
which Vives, rejecting the Stoic view, regards as natural responses to
the way things appear to us and as essential constituents of human
life.
He defines the soul as "the principal agent inhabiting a body
adapted to life (*agens praecipuum, habitans in corpore apto ad
vitam*)." The soul is called an agent in the sense that it
acts through instruments--e.g., heat, humors and spirits--by
means of its own power. That it is the principal agent means that,
even though its instruments act on the body, they do not operate by
means of their own power but only through the power that they receive
from the soul (III, 335-6). The organs governing our rational
functions consist of fine and bright spirits exhaled from the
pericardial blood. Although the heart is the source and origin of all
the rational soul's operations, the head is their workshop. In
fact, the mind does not apprehend, nor is it affected, unless the
spirits reach the brain (III, 365-6).
The enormous importance Vives attaches to the exploration of the
emotions is reflected in the fact that he describes the branch of
philosophy that provides a remedy for the severe diseases of the soul
not only as "the foundation of all morality, private as well as
public" (III, 299-300), but also as "the supreme
form of learning and knowledge" (I, 17). Emotions (*affectus
sive affectiones*) are defined as "the acts of those
faculties which nature gave to our souls for the pursuit of good and
the avoidance of evil, by means of which we are led toward the good
and we move away from or against evil". He stresses that the
terms "good" and "evil" in this definition
mean, not what is in reality good or evil, but rather what each person
judges to be good or evil (III, 422). Therefore, the more pure and
elevated the judgment, the more it takes account of what is genuinely
good and true, admitting fewer and less intense emotions and becoming
disturbed more rarely. Immoderate and confused movements, on the other
hand, are the result of ignorance, thoughtlessness, and false
judgments, since we judge the good or evil to be greater than it
really is (III, 425).
One of the most distinctive features of Vives' study of the
human soul is the fundamental role that psychological inquiry came to
play in his reform program. His use of psychological principles in his
writings often surpasses that of previous authors in both scope and
detail. He applies these principles, for instance, not only to
individual conduct and education, but also to professional practice,
social reform, and practical affairs in general. According to Vives,
psychology is relevant to all disciplines. "The study of
man's soul", he writes in *De tradendis
disciplinis*, "exercises a most helpful influence on all
kinds of knowledge, because our knowledge is determined by the
intelligence and comprehension of our minds, not by the things
themselves" (VI, 375).
## 6. Influence
Vives' works, which went through hundreds of editions and were
translated into several vernacular languages, continued to be widely
read and extremely influential during the century after their
publication. His critical attitude toward the Aristotelian orthodoxy
of his day left a mark on several authors. Mario Nizolio
(1488-1567) cites Vives numerous times in *De veris
principiis et vera ratione philosophandi contra pseudophilosophos*
(*On the True Principles and True Method of Philosophizing against
the Pseudo-Philosophers*, 1553), an attack on Aristotelian
dialectic and metaphysics, which G. W. Leibniz (1646-1716)
considered to be worth editing more than a hundred years later. In
*Quod nihil scitur* (*That Nothing is Known*, 1581), one
of the best systematic expositions of philosophical skepticism
produced during the sixteenth century, the Portuguese philosopher and
medical writer Francisco Sanches displays a familiarity with *De
disciplinis*, and there are indications that he might also have
been acquainted with *In pseudodialecticos*. In
*Exercitationes paradoxicae adversus Aristoteleos*
(*Paradoxical Exercises against the Aristotelians*, 1624), a
skeptical attack on Aristotelianism, Pierre Gassendi says that reading
Vives gave him courage and helped him to free himself from the
dogmatism of Peripatetic philosophy.
Psychology was another area in which Vives enjoyed considerable
success. Philip Melanchthon (1497-1560) recommended *De anima
et vita* in the prefatory letter to his *Commentarius de
anima* (*Commentary on the Soul*, 1540). Vives'
influence on the naturalistic pedagogy of the Spaniard Juan Huarte de
San Juan (c.1529-1588), in his celebrated *El examen de
ingenios para las ciencias* (*The Examination of Men's
Wits*, 1575), is undeniable. In his discussion of the passions of
the soul, the Jesuit Francisco Suárez (1548-1617) counted
Vives among his authorities, pointing out that the study of the
emotions belongs to natural philosophy and medicine as well as to
moral philosophy. Vives' treatise was also an important source
of inspiration for Robert Burton (1577-1640), who, in *The
Anatomy of Melancholy* (1621), repeatedly quotes from *De anima
et vita*. The reference to Vives by René Descartes
(1596-1650) in *Les Passions de l'âme* (1649)
suggests that he had read the book.
Although Vives is rarely mentioned in the scholarly literature on the
Scottish philosophy of "Common Sense," the impact of his
thought on leading exponents of the school was significant. William
Hamilton (1788-1856) praised Vives' insights on memory and
the laws of association. In his "Contributions Towards a History
of the Doctrine of Mental Suggestion or Association", he quotes
extensive portions of Vives' account of memory and maintains
that the observations of "the Spanish Aristotelian"
comprise "nearly all of principal moment that has been said upon
this subject, either before or since". Moreover, Vives'
*Dialectices libri quatuor* (1550) was one of the primary
sources of Thomas Reid (1710-1796) in his "A Brief Account
of Aristotle's Logic" (1774).
During the second half of the nineteenth century and the first decades
of the twentieth, Vives was read and studied by philosophers such as
Ernest Renan (1823-1892), Friedrich Albert Lange
(1828-1875), Wilhelm Dilthey (1833-1911), Pierre Duhem
(1861-1916), Ernst Cassirer (1874-1945), and José
Ortega y Gasset (1883-1955). Lange regards him as one of the
most important reformers of philosophy of his time and a precursor of
both Bacon and Descartes. According to Ortega y Gasset, Vives'
method, firmly based on experience, and his emphasis on the need to
found a new culture grounded, not in barren speculation, but in the
usefulness of knowledge, anticipate some elements of the modern
*Zeitgeist*.
## 1. Major Historical Contributions
### 1.1 Ancient and Medieval Period
One finds scholarly debate on the 'origin' of the notion
of free will in Western philosophy. (See, e.g., Dihle (1982) and, in
response, Frede (2011), with Dihle finding it in St. Augustine
(354-430 CE) and Frede in the Stoic Epictetus (c. 55-c.
135 CE).) But this debate presupposes a fairly particular and highly
conceptualized concept of free will, with Dihle's later
'origin' reflecting his having a yet more particular
concept in view than Frede. If, instead, we look more generally for
philosophical reflection on choice-directed control over one's
own actions, then we find significant discussion in Plato and
Aristotle (cf. Irwin 1992). Indeed, on this matter, as with so many
other major philosophical issues, Plato and Aristotle give importantly
different emphases that inform much subsequent thought.
In Book IV of *The Republic*, Plato posits rational, spirited,
and appetitive aspects to the human soul. The wise person strives for
inner 'justice', a condition in which each part of the
soul plays its proper role--reason as the guide, the spirited
nature as the ally of reason, exhorting oneself to do what reason
deems proper, and the passions as subjugated to the determinations of
reason. In the absence of justice, the individual is enslaved to the
passions. Hence, freedom for Plato is a kind of self-mastery, attained
by developing the virtues of wisdom, courage, and temperance,
resulting in one's liberation from the tyranny of base desires
and acquisition of a more accurate understanding and resolute pursuit
of the Good (Hecht 2014).
While Aristotle shares with Plato a concern for cultivating virtues,
he gives greater theoretical attention to the role of choice in
initiating individual actions which, over time, result in habits, for
good or ill. In Book III of the *Nicomachean Ethics*, Aristotle
says that, unlike nonrational agents, we have the power to do or not
to do, and much of what we do is voluntary, such that its origin is
'in us' and we are 'aware of the particular
circumstances of the action'. Furthermore, mature humans make
choices after deliberating about different available means to our
ends, drawing on rational principles of action. Choose consistently
well (poorly), and a virtuous (vicious) character will form over time,
and it is in our power to be either virtuous or vicious.
A question that Aristotle seems to recognize, while not satisfactorily
answering, is whether the choice an individual makes on any given
occasion is wholly determined by his internal state--perception
of his circumstances and his relevant beliefs, desires, and general
character dispositions (wherever on the continuum between virtue and
vice he may be)--and external circumstances. He says that
"the man is the father of his actions as of
children"--that is, a person's character shapes how
she acts. One might worry that this seems to entail that the person
could not have done otherwise--at the moment of choice, she has
no control over what her present character is--and so she is not
responsible for choosing as she does. Aristotle responds by contending
that her present character is partly a result of *previous*
choices she made. While this claim is plausible enough, it seems to
'pass the buck', since 'the man is the father'
of those earlier choices and actions, too.
We note just a few contributions of the subsequent centuries of the
Hellenistic era. (See Bobzien 1998.) This period was dominated by
debates between Epicureans, Stoics, and the Academic Skeptics, and as
it concerned freedom of the will, the debate centered on the place of
determinism or of fate in governing human actions and lives. The
Stoics and the Epicureans believed that all ordinary things, human
souls included, are corporeal and governed by natural laws or
principles. Stoics believed that all human choice and behavior was
causally determined, but held that this was compatible with our
actions being 'up to us'. Chrysippus ably defended this
position by contending that your actions are 'up to you'
when they come about 'through you'--when the
determining factors of your action are not external circumstances
compelling you to act as you do but are instead your own choices
grounded in your perception of the options before you. Hence, for
moral responsibility, the issue is not whether one's choices are
determined (they are) but in what manner they are determined. Epicurus
and his followers had a more mechanistic conception of bodily action
than the Stoics. They held that all things (human soul included) are
constituted by atoms, whose law-governed behavior fixes the behavior
of everything made of such atoms. But they rejected determinism by
supposing that atoms, though law-governed, are susceptible to slight
'swerves' or departures from the usual paths. Epicurus has
often been understood as seeking to ground the freedom of human
willings in such indeterministic swerves, but this is a matter of
controversy. If this understanding of his aim is correct, how he
thought that this scheme might work in detail is not known. (What
little we know about his views in this matter stem chiefly from the
account given in his follower Lucretius's six-book poem, *On
the Nature of Things*. See Bobzien 2000 for discussion.)
A final notable figure of this period was
Alexander of Aphrodisias,
the most important Peripatetic commentator on Aristotle. In his
*On Fate*, Alexander sharply criticizes the positions of the
Stoics. He goes on to resolve the ambiguity in Aristotle on the
question of the determining nature of character on individual choices
by maintaining that, given all such shaping factors, it remains open
to the person when she acts freely to do or not to do what she in fact
does. Many scholars see Alexander as the first unambiguously
'libertarian' theorist of the will (for more information
about such theories see section 2 below).
Augustine (354-430) is the central bridge between the ancient
and medieval eras of philosophy. His mature thinking about the will
was influenced by his early encounter with late classical Neoplatonist
thought, which is then transformed by the theological views he
embraces in his adult Christian conversion, famously recounted in his
*Confessions*. In that work and in the earlier *On the Free
Choice of the Will*, Augustine struggles to draw together into a
coherent whole the doctrines that creaturely misuse of freedom, not
God, is the source of evil in the world and that the human will has
been corrupted through the 'fall' from grace of the
earliest human beings, necessitating a salvation that is attained
*entirely* through the actions of God, even as it requires,
constitutively, an individual's willed response of faith. The
details of Augustine's positive account remain a matter of
controversy. He clearly affirms that the will is by its nature a
self-determining power--no powers external to it determine its
choice--and that this feature is the basis of its freedom. But he
does not explicitly rule out the will's being internally
determined by psychological factors, as Chrysippus held, and Augustine
had theological reasons that might favor (as well as others that would
oppose) the thesis that all things are determined in some manner by
God. Scholars divide on whether Augustine was a libertarian or instead
a kind of compatibilist with respect to metaphysical freedom.
(Macdonald 1999 and Stump 2006 argue the former, Baker 2003 and
Couenhoven 2007 the latter.) It is clear, however, that Augustine
thought that we are powerfully shaped by wrongly-ordered desires that
can make it impossible for us to *wholeheartedly* will ends
contrary to those desires, for a sustained period of time. This
condition entails an absence of something more valuable, 'true
freedom', in which our wills are aligned with the Good, a
freedom that can be attained only by a transformative operation of
divine grace. This latter, psychological conception of freedom of will
clearly echoes Plato's notion of the soul's (possible)
inner justice.
Thomas Aquinas (1225-1274) attempted to synthesize major strands
of Aristotle's systematic philosophy with Christian theology,
and so Aquinas begins his complex discussion of human action and
choice by agreeing with Aristotle that creatures such as ourselves who
are endowed with both intellect and will are hardwired to will certain
general ends ordered to the most general goal of goodness. Will is
*rational* desire: we cannot move towards that which does not
appear to us at the time to be good. Freedom enters the picture when
we consider various means to these ends and move ourselves to activity
in pursuit of certain of them. Our will is free in that it is not
fixed by nature on any particular means, and they generally do not
appear to us either as unqualifiedly good or as uniquely satisfying
the end we wish to fulfill. Furthermore, what appears to us to be good
can vary widely--even, over time, intra-personally. So much is
consistent with saying that in a given total circumstance (including
one's present beliefs and desires), one is necessitated to will
as one does. For this reason, some commentators have taken Aquinas to
be a kind of compatibilist concerning freedom and causal or
theological determinism. In his most extended defense of the thesis
that the will is not 'compelled' (*DM* 6), Aquinas
notes three ways that the will might reject an option it sees as
attractive: (i) it finds another option more attractive, (ii) it comes
to think of some circumstance rendering an alternative more favorable
"by some chance circumstance, external or internal", and
(iii) the person is momentarily disposed to find an alternative
attractive by virtue of a non-innate state that is subject to the will
(e.g., being angry vs being at peace). The first consideration is
clearly consistent with compatibilism. The second at best points to a
kind of contingency that is not grounded in the activity of the will
itself. And one wanting to read Aquinas as a libertarian might worry
that his third consideration just passes the buck: even if we do
sometimes have an ability to directly modify perception-coloring
states such as moods, Aquinas's account of will as rational
desire seems to indicate that we *will* do so only if it seems
to us on balance to be good to do so. Those who read Aquinas as a
libertarian point to the following further remark in this text:
"Will itself can interfere with the process [of some
cause's moving the will] either by refusing to consider what
attracts it to will or by considering its opposite: namely, that there
is a bad side to what is being proposed..." (Reply to 15;
see also DV 24.2). For discussion, see MacDonald (1998), Stump (2003,
ch. 9) and especially Hoffman & Michon (2017), which offers the
most comprehensive analysis of relevant texts to date.
John Duns Scotus (1265/66-1308) was the stoutest defender in the
medieval era of a strongly libertarian conception of the will,
maintaining on introspective grounds that will by its very nature is
such that "nothing other than the will is the total cause"
of its activity (*QAM*). Indeed, he held the unusual view that
not only up to but *at* the very instant that one is willing
*X*, it is possible for one to will *Y* or at least not to will
*X*. (He articulates this view through the puzzling claim that a
single instant of time comprises two 'instants of nature',
at the first but not the second of which alternative possibilities are
preserved.) In opposition to Aquinas and other medieval Aristotelians,
Scotus maintained that a precondition of our freedom is that there are
two fundamentally distinct ways things can seem good to us: as
practically advantageous to us or as according with justice. Contrary
to some popular accounts, however, Scotus allowed that the scope of
available alternatives for a person will be more or less constricted.
He grants that we are not capable of willing something in which we see
no good whatsoever, nor of positively repudiating something which
appears to us as unqualifiedly good. However, in accordance with his
uncompromising position that nothing can be the total cause of the
will other than itself, he held that where something does appear to us
as unqualifiedly good (perfectly suited both to our advantage and
justice)--viz., in the 'beatific vision' of God in
the afterlife--we still can *refrain from willing* it. For
discussion, see
John Duns Scotus, §5.2.
### 1.2 Modern Period and Twentieth Century
The problem of free will was an important topic in the modern period,
with all the major figures wading into it (Descartes 1641 [1988], 1644
[1988]; Hobbes 1654 [1999], 1656 [1999]; Spinoza 1677 [1992];
Malebranche 1684 [1993]; Leibniz 1686 [1991]; Locke 1690 [1975]; Hume
1740 [1978], 1748 [1975]; Edwards 1754 [1957]; Kant 1781 [1998], 1785
[1998], 1788 [2015]; Reid 1788 [1969]). After less sustained attention
in the nineteenth century (most notable were Schopenhauer 1841 [1999] and
Nietzsche 1886 [1966]), it was widely discussed again among early
twentieth century philosophers (Moore 1912; Hobart 1934; Schlick 1939;
Nowell-Smith 1948, 1954; Campbell 1951; Ayer 1954; Smart 1961). The
centrality of the problem of free will to the various projects of
early modern philosophers can be traced to two widely, though not
universally, shared assumptions. The first is that without belief in
free will, there would be little reason for us to act morally. More
carefully, it was widely assumed that belief in an afterlife in which
a just God rewards and punishes us according to our right or wrong use
of free will was key to motivating us to be moral (Russell 2008, chs.
16-17). Life before death affords us many examples in which vice
is better rewarded than virtue and so without knowledge of a final
judgment in the afterlife, we would have little reason to pursue
virtue and justice when they depart from self-interest. And without
free will there can be no final judgment.
The second widely shared assumption is that free will seems difficult
to reconcile with what we know about the world. While this assumption
is shared by the majority of early modern philosophers, what
specifically it is about the world that seems to conflict with freedom
differs from philosopher to philosopher. For some, the worry is
primarily theological. How can we make sense of contingency and
freedom in a world determined by a God who must choose the best
possible world to create? For some, the worry was primarily
metaphysical. The principle of sufficient reason--roughly, the
idea that every event must have a reason or cause--was a
cornerstone of Leibniz's and Spinoza's metaphysics. How
do contingency and freedom fit into such a world? For some, the
worry was primarily scientific (Descartes). Given that a proper
understanding of the physical world is one in which all physical
objects are governed by deterministic laws of nature, how do
contingency and freedom fit into such a world? Of course, for some,
all three worries were in play in their work (this is true especially
of Leibniz).
Despite many disagreements about how best to solve these worries,
there were three claims that were widely, although not universally,
agreed upon. The first was that free will has two aspects: the freedom
to do otherwise and the power of self-determination. The second is
that an adequate account of free will must entail that free agents are
morally responsible agents and/or fit subjects for punishment. Ideas
about moral responsibility were often a yardstick by which analyses
of free will were measured, with critics objecting to an analysis of
free will by arguing that agents who satisfied the analysis would not,
intuitively, be morally responsible for their actions. The third is
that compatibilism--the thesis that free will is compatible with
determinism--is true. (Spinoza, Reid, and Kant are the clear
exceptions to this, though some also see Descartes as an
incompatibilist [Ragland 2006].)
Since a detailed discussion of these philosophers' accounts of
free will would take us too far afield, we want instead to focus on
isolating a two-step strategy for defending compatibilism that emerges
in the early modern period and continued to exert considerable force
into the early twentieth century (and perhaps is still at work today).
Advocates of this two-step strategy have come to be known as
"classical compatibilists". The first step was to argue
that the contrary of freedom is not determinism but external
constraint on doing what one wants to do. For example, Hobbes contends
that liberty is "the absence of all the impediments to action
that are not contained in the nature and intrinsical quality of the
agent" (Hobbes 1654 [1999], 38; cf. Hume 1748 [1975] VIII.1;
Edwards 1754 [1957]; Ayer 1954). This idea led many compatibilists,
especially the more empiricist-inclined, to develop desire- or
preference-based analyses of both the freedom to do otherwise and
self-determination. An agent has the freedom to do otherwise than
\(\phi\) just in case if she *preferred* or *willed* to
do otherwise, she would have done otherwise (Hobbes 1654 [1999], 16;
Locke 1690 [1975] II.xx.8; Hume 1748 [1975] VIII.1; Moore 1912; Ayer
1954). The freedom to do otherwise does not require that you are able
to act contrary to your strongest motivation but simply that your
action be dependent on your strongest motivation in the sense that
*had* you desired something else more strongly, then you would
have pursued that alternative end. (We will discuss this analysis in
more detail below in section 2.2.) Similarly, an agent self-determines
her \(\phi\)-ing just in case \(\phi\) is caused by her *strongest
desires* or *preferences* at the time of action (Hobbes
1654 [1999]; Locke 1690 [1975]; Edwards 1754 [1957]). (We will discuss
this analysis in more detail below in section 2.4.) Given these
analyses, determinism seems innocuous to freedom.
The second step was to argue that any attempt to analyze free will in
a way that putatively captures a deeper or more robust sense of
freedom leads to intractable conundrums. The most important examples
of this attempt to capture a deeper sense of freedom in the modern
period are Immanuel Kant (1781 [1998], 1785 [1998], 1788 [2015]) and
Thomas Reid (1788 [1969]) and in the early twentieth century C. A.
Campbell (1951). These philosophers argued that the above
compatibilist analyses of the freedom to do otherwise and
self-determination are, at best, insufficient for free will, and, at
worst, incompatible with it. With respect to the classical
compatibilist analysis of the freedom to do otherwise, these critics
argued that the freedom to do otherwise requires not just that an
agent could have *acted* differently if he had willed
differently, but also that he could have *willed* differently.
Free will requires more than free action. With respect to classical
compatibilists' analysis of self-determination, they argued that
self-determination requires that the agent--rather than his
desires, preferences, or any other mental state--cause his free
choices and actions. Reid explains:
> I consider the determination of the will as an effect. This effect
> must have a cause which had the power to produce it; and the cause
> must be either the person himself, whose will it is, or some other
> being.... If the person was the cause of that determination of
> his own will, he was free in that action, and it is justly imputed to
> him, whether it be good or bad. But, if another being was the cause of
> this determination, either producing it immediately, or by means and
> instruments under his direction, then the determination is the act and
> deed of that being, and is solely imputed to him. (1788 [1969] IV.i,
> 265)
Classical compatibilists argued that both claims are incoherent. While
it is intelligible to ask whether a man willed to do what he did, it
is incoherent to ask whether a man willed to will what he did:
> For to ask whether a man is at liberty to will either motion or rest,
> speaking or silence, which he pleases, is to ask whether a man can
> *will* what he *wills*, or be pleased with what he is
> pleased with? A question which, I think, needs no answer; and they who
> make a question of it must suppose one will to determine the acts of
> another, and another to determine that, and so on *in
> infinitum*. (Locke 1690 [1975] II.xx.25; cf. Hobbes 1656 [1999],
> 72)
In response to libertarians' claim that self-determination
requires that the agent, rather than his motives, cause his actions,
it was objected that this removes the agent from the natural causal
order, which is clearly unintelligible for human animals (Hobbes 1654
[1999], 38). It is important to recognize that an implication of the
second step of the strategy is that free will is not only compatible
with determinism but actually requires determinism (cf. Hume 1748
[1975] VIII). This was a widely shared assumption among compatibilists
up through the mid-twentieth century.
Spinoza's *Ethics* (1677 [1992]) is an important
departure from the above dialectic. He endorses a strong form of
necessitarianism in which everything is categorically necessary as
opposed to the conditional necessity embraced by most compatibilists,
and he contends that there is no room in such a world for divine or
creaturely free will. Thus, Spinoza is a free will skeptic.
Interestingly, Spinoza is also keen to deny that the nonexistence of
free will has the dire implications often assumed. As noted above,
many in the modern period saw belief in free will and an afterlife in
which God rewards the just and punishes the wicked as necessary to
motivate us to act morally. According to Spinoza, so far from this
being necessary to motivate us to be moral, it actually distorts our
pursuit of morality. True moral living, Spinoza thinks, sees virtue as
its own reward (Part V, Prop. 42). Moreover, while free will is a
chimera, humans are still capable of freedom or self-determination.
Such self-determination, which admits of degrees on Spinoza's
view, arises when our emotions are determined by true ideas about the
nature of reality. The emotional lives of the free persons are ones in
which "we desire nothing but that which must be, nor, in an
absolute sense, can we find contentment in anything but truth. And so
in so far as we rightly understand these matters, the endeavor of the
better part of us is in harmony with the order of the whole of
Nature" (Part IV, Appendix). Spinoza is an important forerunner
to the many free will skeptics in the twentieth century, a position
that continues to attract strong support (see Strawson 1986; Double
1992; Smilansky 2000; Pereboom 2001, 2014; Levy 2011; Waller 2011;
Caruso 2012; Vilhauer 2012. For further discussion see the entry
skepticism about moral responsibility).
It is worth observing that in many of these disputes about the nature
of free will there is an underlying dispute about the nature of moral
responsibility. This is seen clearly in Hobbes (1654 [1999]) and early
twentieth century philosophers' defenses of compatibilism.
Underlying the belief that free will is incompatible with determinism
is the thought that no one would be morally responsible for any
actions in a deterministic world in the sense that no one would
*deserve* blame or punishment. Hobbes responded to this charge
in part by endorsing broadly consequentialist justifications of blame
and punishment: we are justified in blaming or punishing because these
practices deter future harmful actions and/or contribute to reforming
the offender (1654 [1999], 24-25; cf. Schlick 1939; Nowell-Smith
1948; Smart 1961). While many, perhaps even most, compatibilists have
come to reject this consequentialist approach to moral responsibility
in the wake of P. F. Strawson's 1962 landmark essay
'Freedom and Resentment' (though see Vargas (2013) and
McGeer (2014) for contemporary defenses of compatibilism that appeal
to forward-looking considerations), there is still a general lesson to
be learned: disputes about free will are often a function of
underlying disputes about the nature and value of moral
responsibility.
## 2. The Nature of Free Will
### 2.1 Free Will and Moral Responsibility
As should be clear from this short discussion of the history of the
idea of free will, free will has traditionally been conceived of as a
kind of power to control one's choices and actions. When an
agent exercises free will over her choices and actions, her choices
and actions are *up to her*. But up to her in what sense? As
should be clear from our historical survey, two common (and
compatible) answers are: (i) up to her in the sense that she is able
to choose otherwise, or at minimum that she is able not to choose or
act as she does, and (ii) up to her in the sense that she is the
source of her action. However, there is widespread controversy both
over whether each of these conditions is required for free will and if
so, how to understand the kind or sense of freedom to do otherwise or
sourcehood that is required. While some seek to resolve these
controversies in part by careful articulation of our experiences of
deliberation, choice, and action (Nozick 1981, ch. 4; van Inwagen
1983, ch. 1), many seek to resolve these
controversies by appealing to the nature of moral responsibility. The
idea is that the kind of control or sense of up-to-meness involved in
free will is the kind of control or sense of up-to-meness relevant to
moral responsibility (Double 1992, 12; Ekstrom 2000, 7-8;
Smilansky 2000, 16; Widerker and McKenna 2003, 2; Vargas 2007, 128;
Nelkin 2011, 151-52; Levy 2011, 1; Pereboom 2014, 1-2).
Indeed, some go so far as to define 'free will' as
'the strongest control condition--whatever that turns out
to be--necessary for moral responsibility' (Wolf 1990,
3-4; Fischer 1994, 3; Mele 2006, 17). Given this connection, we
can determine whether the freedom to do otherwise and the power of
self-determination are constitutive of free will and, if so, in what
sense, by considering what it takes to be a morally responsible agent.
On these latter characterizations of free will, understanding free
will is inextricably linked to, and perhaps even derivative from,
understanding moral responsibility. And even those who demur from this
claim regarding conceptual priority typically see a close link between
these two ideas. Consequently, to appreciate the current debates
surrounding the nature of free will, we need to say something about
the nature of moral responsibility.
It is now widely accepted that there are different species of moral
responsibility. It is common (though not uncontroversial) to
distinguish moral responsibility as *answerability* from moral
responsibility as *attributability* from moral responsibility
as *accountability* (Watson 1996; Fischer and Tognazzini 2011;
Shoemaker 2011. See Smith (2012) for a critique of this taxonomy).
These different species of moral responsibility differ along three
dimensions: (i) the kind of responses licensed toward the responsible
agent, (ii) the nature of the licensing relation, and (iii) the
necessary and sufficient conditions for licensing the relevant kind of
responses toward the agent. For example, some argue that when an agent
is morally responsible in the attributability sense, certain
*judgments* about the agent--such as judgments concerning
the virtues and vices of the agent--are *fitting*, and
that the fittingness of such judgments does not depend on whether the
agent in question possessed the freedom to do otherwise (cf. Watson
1996).
While keeping this controversy about the nature of moral
responsibility firmly in mind (see the entry on
moral responsibility
for a more detailed discussion of these issues), we think it is fair
to say that the most commonly assumed understanding of moral
responsibility in the historical and contemporary discussion of the
problem of free will is *moral responsibility as
accountability* in something like the following sense:
An agent \(S\) is morally accountable for performing an action
\(\phi\) \(=\_{df.}\) \(S\) deserves praise if \(\phi\) goes beyond
what can be reasonably expected of \(S\) and \(S\) deserves blame if
\(\phi\) is morally wrong.
The central notions in this definition are *praise*,
*blame*, and *desert*. The majority of contemporary
philosophers have followed Strawson (1962) in contending that praising
and blaming an agent consist in experiencing (or at least being
disposed to experience (cf. Wallace 1994, 70-71)) reactive
attitudes or emotions directed toward the agent, such as gratitude,
approbation, and pride in the case of praise, and resentment,
indignation, and guilt in the case of blame. (See Sher (2006) and
Scanlon (2008) for important dissents from this trend. See the entry
on
blame
for a more detailed discussion.) These emotions, in turn, dispose us
to act in a variety of ways. For example, blame disposes us to respond
with some kind of hostility toward the blameworthy agent, such as
verbal rebuke or partial withdrawal of good will. But while these
kinds of dispositions are essential to our blaming someone, their
manifestation is not: it is possible to blame someone with very little
change in attitudes or actions toward the agent. Blaming someone might
be immediately followed by forgiveness as an end of the matter.
By 'desert', we have in mind what Derk Pereboom has called
*basic desert*:
>
>
> The desert at issue here is basic in the sense that the agent would
> deserve to be blamed or praised just because she has performed the
> action, given an understanding of its moral status, and not, for
> example, merely by virtue of consequentialist or contractualist
> considerations. (2014, 2)
>
>
>
As we understand desert, if an agent deserves blame, then we have a
strong *pro tanto* reason to blame him simply in virtue of his
being accountable for doing wrong. Importantly, these reasons can be
outweighed by other considerations. While an agent may deserve blame,
it might, all things considered, be best to forgive him
unconditionally instead.
When an agent is morally responsible for doing something wrong, he is
blame*worthy*: he deserves hard treatment marked by resentment
and indignation and the actions these emotions dispose us toward, such
as censure, rebuke, and ostracism. However, it would seem unfair to
treat agents in these ways unless their actions were *up to
them*. Thus, we arrive at the core connection between free will
and moral responsibility: agents deserve praise or blame only if their
actions are up to them--only if they have free will.
Consequently, we can assess analyses of free will by their
implications for judgments of moral responsibility. We note that some
might reject the claim that free will is necessary for moral
responsibility (e.g., Frankfurt 1971; Stump 1988), but even for these
theorists an adequate analysis of free will must specify a
*sufficient* condition for the kind of control at play in moral
responsibility.
In what follows, we focus our attention on the two most commonly cited
features of free will: the freedom to do otherwise and sourcehood.
While some seem to think that free will consists exclusively in either
the freedom to do otherwise (van Inwagen 2008) or in sourcehood
(Zagzebski 2000), many philosophers hold that free will involves both
conditions--though philosophers often emphasize one condition
over the other depending on their dialectical situation or
argumentative purposes (cf. Watson 1987). In what follows, we will
describe the most common characterizations of these two
conditions.
### 2.2 The Freedom to Do Otherwise
For most newcomers to the problem of free will, it will seem obvious
that an action is up to an agent only if she had the freedom to do
otherwise. But what does this freedom come to? The freedom to do
otherwise is clearly a modal property of agents, but it is
controversial just what species of modality is at stake. It must be
more than mere *possibility*: to have the freedom to do
otherwise consists in more than the mere possibility of something
else's happening. A more plausible and widely endorsed
understanding claims the relevant modality is *ability* or
*power* (Locke 1690 [1975], II.xx; Reid 1788 [1969],
II.i-ii; D. Locke 1973; Clarke 2009; Vihvelin 2013). But
abilities themselves seem to come in different varieties (Lewis 1976;
Horgan 1979; van Inwagen 1983, ch. 1; Mele 2003; Clarke 2009; Vihvelin
2013, ch. 1; Franklin 2015; Cyr and Swenson 2019; Hofmann 2022;
Whittle 2022), so a claim that an agent has 'the ability to do
otherwise' is potentially ambiguous or indeterminate; in
philosophical discussion, the sense of ability appealed to needs to be
spelled out. A satisfactory account of the freedom to do otherwise
owes us both an account of the kind of ability in terms of which the
freedom to do otherwise is analyzed, and an argument for why this kind
of ability (as opposed to some other species) is the one constitutive
of the freedom to do otherwise. As we will see, philosophers sometimes
leave this second debt unpaid.
The contemporary literature takes its cue from classical
compatibilism's recognized failure to deliver a satisfactory
analysis of the freedom to do otherwise. As we saw above, classical
compatibilists (Hobbes 1654 [1999], 1656 [1999]; Locke 1690 [1975];
Hume 1740 [1978], 1748 [1975]; Edwards 1754 [1957]; Moore 1912;
Schlick 1939; Ayer 1954) sought to analyze the freedom to do otherwise
in terms of a simple conditional analysis of ability:
**Simple Conditional Analysis:** An agent \(S\) has the
ability to do otherwise if and only if, were \(S\) to choose to do
otherwise, then \(S\) would do otherwise.
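The analysis can be put schematically using the counterfactual conditional \(\Box\!\!\rightarrow\) ('if it were the case that ..., it would be the case that ...') familiar from Stalnaker and Lewis. The predicate letters below are ours, introduced only for illustration; they are not part of the classical compatibilists' own statements:

```latex
% Schematic rendering of the Simple Conditional Analysis.
% C(S): S chooses to do otherwise;  D(S): S does otherwise.
\mathrm{Able}(S, \mathrm{otherwise})
  \;\leftrightarrow\;
  \bigl( C(S) \mathrel{\Box\!\!\rightarrow} D(S) \bigr)
```

Determinism settles which antecedents are actual, but it does not by itself settle the truth-value of the counterfactual, which is why the analysis delivers compatibilism so directly.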
Part of the attraction of this analysis is that it obviously
reconciles the freedom to do otherwise with determinism. While the
truth of determinism entails that one's action is inevitable
given the past and laws of nature, there is nothing about determinism
that implies that *if* one had chosen otherwise, then one would
not have done otherwise.
There are two problems with the **Simple Conditional
Analysis**. The first is that it is, at best, an analysis of
free action, not free will (cf. Reid 1788 [1969]; Chisholm 1966; 1976,
ch. 2; Lehrer 1968, 1976). It only tells us when an agent has the
ability to *do* otherwise, not when an agent has the ability to
*choose* to do otherwise. One might be tempted to think that
there is an easy fix along the following lines:
**Simple Conditional Analysis\*:** An agent \(S\) has the
ability to choose otherwise if and only if, were \(S\) to desire or
prefer to choose otherwise, then \(S\) would choose otherwise.
The problem is that we often fail to choose to do things we want to
choose, even when it appears that we had the ability to choose
otherwise (one might think the same problem attends the original
analysis). Suppose that, in deciding how to spend my evening, I have a
desire to choose to read and a desire to choose to watch a movie.
Suppose that I choose to read. By all appearances, I had the ability
to choose to watch a movie. And yet, according to the **Simple
Conditional Analysis\***, I lack this freedom, since the
conditional 'if I were to desire to choose to watch a movie,
then I would choose to watch a movie' is false. I do desire to
choose to watch a movie and yet I do not choose to watch a movie. It
is unclear how to remedy this problem. On the one hand, we might
refine the antecedent by replacing 'desire' with
'strongest desire' (cf. Hobbes 1654 [1999], 1656 [1999];
Edwards 1754 [1957]). The problem is that this assumes, implausibly,
that we always choose what we most strongly desire (for criticisms of
this view see Reid 1788 [1969]; Campbell 1951; Wallace 1999; Holton
2009). On the other hand, we might refine the consequent by replacing
'would choose to do otherwise' with either 'would
probably choose to do otherwise' or 'might choose to do
otherwise'. But each of these proposals is also problematic. If
'probably' means 'more likely than not', then
this revised conditional still seems too strong: it seems possible to
have the ability to choose otherwise even when one's so choosing
is unlikely. If we opt for 'might', then the relevant
sense of modality needs to be spelled out.
Even if there are fixes to these problems, there is a yet deeper
problem with these analyses. There are some agents who clearly lack
the freedom to do otherwise and yet satisfy the conditional at the
heart of these analyses. That is, although these agents lack the
freedom to do otherwise, it is, for example, true of them that
*if* they chose otherwise, they would do otherwise. Picking up
on an argument developed by Keith Lehrer (1968; cf. Campbell 1951;
Broad 1952; Chisholm 1966), consider an agoraphobic, Luke, who, when
faced with the prospect of entering an open space, is subject not
merely to an irresistible desire to refrain from intentionally going
outside, but an irresistible desire to refrain from even
*choosing* to go outside. Given Luke's psychology, there
is no possible world in which he suffers from his agoraphobia and
chooses to go outside. It may well nevertheless be true that
*if* Luke chose to go outside, then he would have gone outside.
After all, any possible world in which he chooses to go outside will
be a world in which he no longer suffers (to the same degree) from his
agoraphobia, and thus we have no reason to doubt that in those worlds
he would go outside as a result of his choosing to go outside. The
same kind of counterexample applies with equal force to the
conditional 'if \(S\) desired to choose otherwise, then \(S\)
would choose otherwise'.
While simple conditional analyses admirably make clear the species of
ability to which they appeal, they fail to show that this species of
ability is constitutive of the freedom to do otherwise. Agents need a
stronger ability to do otherwise than characterized by such simple
conditionals. Some argue that the fundamental source of the above
problems is the *conditional* nature of these analyses
(Campbell 1951; Austin 1961; Chisholm 1966; Lehrer 1976; van Inwagen
1983, ch. 4). The sense of ability relevant to the freedom to do
otherwise is the 'all-in sense'--that is, holding
everything fixed up to the time of the decision or action--and
this sense, so it is argued, can only be captured by a categorical
analysis of the ability to do otherwise:
**Categorical Analysis:** An agent \(S\) has the ability
to choose or do otherwise than \(\phi\) at time \(t\) if and only if
it was possible, holding fixed everything up to \(t\), that \(S\)
choose or do otherwise than \(\phi\) at \(t\).
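In possible-worlds terms (our gloss, not the authors' own formulation), the analysis amounts to an existential claim over worlds that duplicate the actual world \(w_{@}\) up to \(t\):

```latex
% Possible-worlds gloss of the Categorical Analysis.
% Hist_{<t}(w): the complete history of world w up to time t
% (with the laws held fixed, on the standard way of reading
% 'holding fixed everything up to t').
\mathrm{Able}_{t}(S, \mathord{\sim}\phi)
  \;\leftrightarrow\;
  \exists w \bigl[ \mathrm{Hist}_{<t}(w) = \mathrm{Hist}_{<t}(w_{@})
    \;\wedge\; S \text{ does otherwise than } \phi \text{ at } t \text{ in } w \bigr]
```

On this gloss Luke comes out unfree: every world sharing his history includes his agoraphobia, and no such world is one in which he chooses to go outside.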
This analysis gets the right verdict in Luke's case. He lacks
the ability to do otherwise than refrain from choosing to go outside,
according to this analysis, because there is no possible world in
which he suffers from his agoraphobia and yet chooses to go outside.
Unlike the above conditional analyses, the **Categorical
Analysis** requires that we hold fixed Luke's agoraphobia
when considering alternative possibilities.
If the **Categorical Analysis** is correct, then free
will is incompatible with determinism. According to the thesis of
determinism, all deterministic possible worlds with the same pasts and
laws of nature have the same futures (Lewis 1979; van Inwagen 1983,
3). Suppose John is in deterministic world \(W\) and refrains from
raising his hand at time \(t\). Since \(W\) is deterministic, it
follows that any possible world \(W^\*\) that has the same past and
laws up to \(t\) must have the same future, including John's
refraining from raising his hand at \(t\). Therefore, John lacked the
ability, and thus freedom, to raise his hand.
This argument, carefully articulated in the late 1960s and early 1970s
by Carl Ginet (1966, 1990) and Peter van Inwagen (1975, 1983) and
refined in important ways by John Martin Fischer (1994), has come to
be known as the Consequence Argument. van Inwagen offers the following
informal statement of the argument:
>
>
> If determinism is true, then our acts are the consequences of the laws
> of nature and events in the remote past. But it is not up to us what
> went on before we were born [i.e., we do not have the ability to
> change the past], and neither is it up to us what the laws of nature
> are [i.e., we do not have the ability to break the laws of nature].
> Therefore, the consequences of these things (including our present
> acts) are not up to us. (van Inwagen 1983, 16; cf. Fischer 1994, ch.
> 1)
>
>
>
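van Inwagen (1983, ch. 3) also states the argument formally. A sketch: let \(P_0\) describe the world at some time in the remote past, \(L\) be the conjunction of the laws of nature, \(P\) be any true proposition, and \(Np\) abbreviate '\(p\), and no one has, or ever had, any choice about whether \(p\)'. The argument uses two inference rules: \((\alpha)\) from \(\Box p\) infer \(Np\), and \((\beta)\) from \(Np\) and \(N(p \rightarrow q)\) infer \(Nq\).

```latex
% A sketch of van Inwagen's formal Consequence Argument.
\begin{array}{lll}
1. & \Box\bigl((P_0 \wedge L) \rightarrow P\bigr) & \text{consequence of determinism}\\
2. & \Box\bigl(P_0 \rightarrow (L \rightarrow P)\bigr) & \text{from 1, by modal and sentential logic}\\
3. & N\bigl(P_0 \rightarrow (L \rightarrow P)\bigr) & \text{from 2, by rule } (\alpha)\\
4. & NP_0 & \text{premise: no one has a choice about the remote past}\\
5. & N(L \rightarrow P) & \text{from 3 and 4, by rule } (\beta)\\
6. & NL & \text{premise: no one has a choice about the laws}\\
7. & NP & \text{from 5 and 6, by rule } (\beta)
\end{array}
```

Since \(P\) may be any true proposition, including one describing John's refraining at \(t\), the conclusion is that if determinism is true, no one has, or ever had, a choice about anything. Much of the subsequent debate targets rule \((\beta)\) or the fixity premises \(NP_0\) and \(NL\).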
Like the **Simple Conditional Analysis**, a virtue of the
**Categorical Analysis** is that it spells out clearly
the kind of ability appealed to in its analysis of the freedom to do
otherwise, but like the **Simple Conditional Analysis**,
critics have argued that the sense of ability it captures is not the
sense at the heart of free will. The objection here, though, is not
that the analysis is too permissive or weak, but rather that it is too
restrictive or strong.
While there have been numerous different replies along these lines
(e.g., Lehrer 1980; Slote 1982; Watson 1986. See the entry on
arguments for incompatibilism
for a more extensive discussion of and bibliography for the
Consequence Argument), the most influential of these objections is due
to David Lewis (1981). Lewis contended that van Inwagen's
argument equivocated on 'is able to break a law of
nature'. We can distinguish two senses of 'is able to
break a law of nature':
(Weak Thesis) I am able to do something such that, if I did it, a law
of nature would be broken.
(Strong Thesis) I am able to do something such that, if I did it, it
would constitute a law of nature's being broken or would cause a
law of nature to be broken.
If we are committed to the **Categorical Analysis**, then
those desiring to defend compatibilism seem to be committed to the
sense of ability in 'is able to break a law of nature'
along the lines of the strong thesis. Lewis agrees with van Inwagen
that it is "incredible" to think humans have such an
ability (Lewis 1981, 113), but maintains that compatibilists need only
appeal to the ability to break a law of nature in the weak sense.
While it is absurd to think that humans are able to do something that
*is* a violation of a law of nature or *causes* a law of
nature to be broken, there is nothing incredible, so Lewis claimed, in
thinking that humans are able to do something such that *if*
they did it, a law of nature would be broken. In essence, Lewis is
arguing that incompatibilists like van Inwagen have failed to
adequately motivate the restrictiveness of the **Categorical
Analysis**.
Some incompatibilists have responded to Lewis by contending that even
the weak ability is incredible (van Inwagen
2004). But there is a different and often overlooked problem for
Lewis: the weak ability seems to be too weak. Returning to the case of
John's refraining from raising his hand, Lewis maintains that
the following three propositions are consistent:
(i) John is able to raise his hand.
(ii) A necessary condition for John's raising his hand fails to
obtain (i.e., that the laws of nature or past are different than they
actually are).
(iii) John is not able to do anything that would constitute this
necessary condition's obtaining or cause this necessary
condition to obtain (i.e., he is unable to do anything that would
constitute or cause a law of nature to be broken or the past to be
different).
One might think that (ii) and (iii) are incompatible with (i).
Consider again Luke, our agoraphobic. Suppose that his agoraphobia
affects him in such a way that he will only intentionally go outside
if he chooses to go outside, and yet his agoraphobia makes it
impossible for him to make this choice. In this case, a necessary
condition for Luke's intentionally going outside is his choosing
to go outside. Moreover, Luke is not able to choose or cause himself
to choose to go outside. Intuitively, this would seem to imply that
Luke *lacks* the freedom to go outside. But this implication
does not follow for Lewis. From the fact that Luke is able to go
outside only if he chooses to go outside and the fact that Luke is not
able to choose to go outside, it does not *follow*, on
Lewis's account, that Luke lacks the ability to go outside.
Consequently, Lewis's account fails to explain why Luke lacks
the ability to go outside (cf. Speak 2011). (For other important
criticisms of Lewis, see Ginet [1990, ch. 5] and Fischer [1994, ch.
4].)
While Lewis may be right that the **Categorical
Analysis** is too restrictive, his argument, all by itself,
doesn't seem to establish this. His argument is successful only
if he can provide an alternative analysis of ability that (a) entails
that Luke's agoraphobia robs him of the ability to go outside
and (b) does not entail that determinism robs John of the ability to
raise his hand (cf. Pendergraft 2010). Lewis must point out a
principled difference between these two cases. As should be clear from
the above, the **Simple Conditional Analysis** is of no
help. However, recent work by Michael Smith (2003), Kadri
Vihvelin (2004; 2013), and Michael Fara (2008) has attempted to fill
this gap.
called the 'new dispositionalists'--is their attempt
to appeal to recent advances in the metaphysics of dispositions to
arrive at a revised conditional analysis of the freedom to do
otherwise. The most perspicuous of these accounts is offered by
Vihvelin (2004), who argues that an agent's having the ability
to do otherwise is solely a function of the agent's intrinsic
properties. (It is important to note that Vihvelin [2013] has come to
reject the view that free will consists exclusively in the kind of
ability analyzed below.) Building on Lewis's work on the
metaphysics of dispositions, she arrives at the following analysis of
ability:
**Revised Conditional Analysis of Ability**: \(S\) has
the ability at time \(t\) to do \(X\) iff, for some intrinsic property
or set of properties \(B\) that \(S\) has at \(t\), for some time
\(t'\) after \(t\), if \(S\) chose (decided, intended, or tried) at
\(t\) to do \(X\), and \(S\) were to retain \(B\) until \(t'\),
\(S\)'s choosing (deciding, intending, or trying) to do \(X\)
and \(S\)'s having \(B\) would jointly be an \(S\)-complete
cause of \(S\)'s doing \(X\). (Vihvelin 2004, 438)
Lewis defines an '\(S\)-complete cause' as "a cause
complete insofar as havings of properties intrinsic to [\(S\)] are
concerned, though perhaps omitting some events extrinsic to
[\(S\)]" (cf. Lewis 1997, 156). In other words, an
\(S\)-complete cause of \(S\)'s doing \(\phi\) requires that
\(S\) possess all the intrinsic properties relevant to \(S\)'s
causing \(S\)'s doing \(\phi\). This analysis appears to afford
Vihvelin the basis for a principled difference between agoraphobics
and merely determined agents. We must hold fixed an agent's
phobias since they are intrinsic properties of agents, but we need not
hold fixed the laws of nature because these are not intrinsic
properties of agents. (It should be noted that the assumption that
intrinsic properties are wholly separable from the laws of nature is
disputed by 'dispositional essentialists.' See the entry
on
metaphysics of causation.)
Vihvelin's analysis appears to be restrictive enough to exclude
phobics from having the freedom to do otherwise, but permissive enough
to allow that some agents in deterministic worlds have the freedom to
do otherwise.
But appearances can be deceiving. The new dispositionalist claims have
received some serious criticism, with the majority of the criticisms
maintaining that these analyses are still too permissive (Clarke 2009;
Whittle 2010; Franklin 2011b). For example, Randolph Clarke argues
that Vihvelin's analysis fails to overcome the original problem
with the **Simple Conditional Analysis**. He writes,
"A phobic agent might, on some occasion, be unable to choose to
*A* and unable to *A* without so choosing, while
retaining all that she would need to implement such a choice, should
she make it. Despite lacking the ability to choose to *A*, the
agent might have some set of intrinsic properties *B* such
that, if she chose to *A* and retained *B*, then her
choosing to *A* and her having *B* would jointly be an
agent-complete cause of her *A*-ing" (Clarke 2009, p.
329).
The **Categorical Analysis**, and thus incompatibilism
about free will and determinism, remains an attractive option for many
philosophers precisely because it seems that compatibilists have yet
to furnish an analysis of the freedom to do otherwise that implies
that phobics clearly lack the ability to choose or do otherwise that
is relevant to moral responsibility and yet some merely determined
agents have this ability.
### 2.3 Freedom to Do Otherwise vs. Sourcehood Accounts
Some have tried to avoid these lingering problems for compatibilists
by arguing that the freedom to do otherwise is not required for free
will or moral responsibility. What matters for an agent's
freedom and responsibility, so it is argued, is the *source* of
her action--how her action was brought about. The most prominent
strategy for defending this move appeals to 'Frankfurt-style
cases'. In a ground-breaking article, Harry Frankfurt (1969)
presented a series of thought experiments intended to show that it is
possible that agents are morally responsible for their actions and yet
they lack the ability to do otherwise. While Frankfurt (1971) took
this to show that moral responsibility and free will come
apart--free will requires the ability to do otherwise but moral
responsibility does not--if we define 'free will' as
'the strongest control condition required for moral
responsibility' (cf. Wolf 1990, 3-4; Fischer 1994, 3; Mele
2006, 17), then if Frankfurt-style cases show that moral
responsibility does not require the ability to do otherwise, then they
also show that free will does not require the ability to do otherwise.
Let us consider this challenge in more detail.
Here is a representative Frankfurt-style case:
>
> Imagine, if you will, that Black is a quite nifty (and even generally
> nice) neurosurgeon. But in performing an operation on Jones to remove
> a brain tumor, Black inserts a mechanism into Jones's brain
> which enables Black to monitor and control Jones's activities.
> Jones, meanwhile, knows nothing of this. Black exercises this control
> through a sophisticated computer which he has programmed so that,
> among other things, it monitors Jones's voting behavior. If
> Jones were to show any inclination to vote for Bush, then the
> computer, through the mechanism in Jones's brain, intervenes to
> ensure that he actually decides to vote for Clinton and does so vote.
> But if Jones decides on his own to vote for Clinton, the computer does
> nothing but continue to monitor--without affecting--the
> goings-on in Jones's head. (Fischer 2006, 38)
>
Fischer goes on to suppose that Jones "decides to vote for
Clinton on his own", without any interference from Black, and
maintains that in such a case Jones is morally responsible for his
decision. Fischer draws two interrelated conclusions from this case.
The first, negative conclusion, is that the ability to do otherwise is
not necessary for moral responsibility. Jones is unable to refrain
from deciding to vote for Clinton, and yet, so long as Jones decides
to vote for Clinton on his own, his decision is free and one for which
he is morally responsible. The second, positive conclusion, is that
freedom and responsibility are functions of the *actual
sequence*. What matters for an agent's freedom and moral
responsibility is not what might have happened, but how his action was
actually brought about. What matters is not whether the agent had the
ability to do otherwise, but whether he was the source of his
actions.
The success of Frankfurt-style cases is hotly contested. An early and
far-reaching criticism is due to David Widerker (1995), Carl Ginet
(1996), and Robert Kane (1996, 142-43). According to this
criticism, proponents of Frankfurt-style cases face a dilemma: either
these cases assume that the connection between the indicator (in our
case, the absence of Jones's showing any inclination to decide
to vote for Bush) and the agent's decision (here, Jones's
deciding to vote for Clinton) is deterministic or not. If the
connection is deterministic, then Frankfurt-style cases cannot be
expected to convince incompatibilists that the ability to do otherwise
is not necessary for moral responsibility and/or free will, since
Jones's action will be deterministically brought about by
factors beyond his control, leading incompatibilists to conclude that
Jones is not morally responsible for his decision. But if the
connection is nondeterministic, then it is possible, even in the
absence of Jones's showing any inclination to decide to vote for
Bush, that Jones decides to vote for Bush, and so he retains the ability to do
otherwise. Either way Frankfurt-style cases fail to show that Jones is
both morally responsible for his decision and yet is unable to do
otherwise.
While some have argued that even Frankfurt-style cases that assume
determinism are effective (see, e.g., Fischer 1999, 2010, 2013, and
Haji and McKenna 2004; for criticisms of this approach, see Goetz
2005, Palmer 2005, 2014, Widerker and Goetz 2013, and Cohen 2017), the
majority of proponents of Frankfurt-style cases have attempted to
revise these cases so that they are explicitly nondeterministic and
yet still show that the agent was morally responsible even though he
lacked the ability to do otherwise--or, at least that he lacked
any ability to do otherwise that could be relevant to
*grounding* the agent's moral responsibility (see, e.g.,
Mele and Robb 1998, 2003, Pereboom 2001, 2014, McKenna 2003, Hunt
2005, and for criticisms of these cases see Ginet 2002, Timpe 2006,
Widerker 2006, Franklin 2011c, Moya 2011, Palmer 2011, 2013, Robinson
2014, Capes 2016, Capes and Swenson 2017, and Elzein 2017).
Supposing that Frankfurt-style cases are successful, what exactly do
they show? In our view, they show neither that free will and moral
responsibility do not require an ability to do otherwise *in any
sense* nor that compatibilism is true. Frankfurt-style cases are
of clear help to the compatibilists' position (though see Speak
2007 for a dissenting opinion). The Consequence Argument raises a
powerful challenge to the cogency of compatibilism. But if
Frankfurt-style cases are successful, agents can act freely in the
sense relevant to moral responsibility while lacking the ability to do
otherwise in the all-in sense. This allows compatibilists to concede
that the all-in ability to do otherwise is incompatible with
determinism, and yet insist that it is irrelevant to the question of
the compatibility of determinism with moral responsibility (and
perhaps even free will, depending on how we define this) (cf. Fischer
1987, 1994. For a challenge to the move from not strictly necessary to
irrelevant, see O'Connor [2000, 20-22] and in reply,
Fischer [2006, 152-56].). But, of course, showing that an
argument for the falsity of compatibilism is irrelevant does not show
that compatibilism is true. Indeed, many incompatibilists maintain
that Frankfurt-style cases are successful and defend incompatibilism
not via the Consequence Argument, but by way of arguments that attempt
to show that agents in deterministic worlds cannot be the
'source' of their actions in the way that moral
responsibility requires (Stump 1999; Zagzebski 2000; Pereboom 2001,
2014). Thus, if successful, Frankfurt-style cases would be at best the
first step in defending compatibilism. The second step must offer an
analysis of the kind of sourcehood constitutive of free will that
entails that free will is compatible with determinism (cf. Fischer
1982).
Furthermore, while proponents of Frankfurt-style cases often maintain
that these cases show that no ability to do otherwise is necessary for
moral responsibility ("I have employed the Frankfurt-type
example to argue that this sense of control [i.e. the one required for
moral responsibility] need not involve *any* alternative
possibilities" [Fischer 2006, p. 40; emphasis ours]), we believe
that this conclusion overreaches. At best, Frankfurt-style cases show
that the ability to do otherwise in *the all-in sense*--in
the sense defined by the **Categorical
Analysis**--is not necessary for free will or moral
responsibility (cf. Franklin 2015). To appreciate this, let us assume
that in the above Frankfurt-style case Jones lacks the ability to do
otherwise in the all-in sense: there is no possible world in which we
hold fixed the past and laws and yet Jones does otherwise, since all
such worlds include Black and his preparations for preventing Jones
from doing otherwise should Jones show any inclination. Even if this
is all true, it should take only a little reflection to recognize that
in this case Jones is able to do otherwise in certain weaker senses we
might attach to that phrase, and compatibilists in fact still think
that the ability to do otherwise in some such senses is necessary for
free will and moral responsibility. Consequently, even though
Frankfurt-style cases have, as a matter of fact, moved many
compatibilists away from emphasizing ability to do otherwise to
emphasizing sourcehood, we suggest that this move is best seen as a
weakening of the ability-to-do-otherwise condition on moral
responsibility (but see Cyr 2017 and Kittle 2019 for criticisms of
this claim). (A potentially important exception to this claim is
Sartorio [2016], who, appealing to some controversial ideas in the
metaphysics of causation, appears to argue that no sense of the ability
to do otherwise is necessary for control in the sense at stake for
moral responsibility, but instead what matters is whether the agent is
the cause of the action. We simply note that Sartorio's account
of causation is a modal one [see especially Sartorio (2016,
94-95, 132-37)] and thus it is far from clear that her
account of freedom and responsibility is really an exception.)
### 2.4 Compatibilist Accounts of Sourcehood
In this section, we will assume that Frankfurt-style cases are
successful in order to consider two prominent compatibilist attempts
to construct analyses of the sourcehood condition (though see the
entry on
compatibilism
for a more systematic survey of compatibilist theories of free will).
The first, and perhaps most popular, compatibilist model is a
reasons-responsiveness model. According to this model, an
agent's action \(\phi\) is free just in case the agent or manner
in which the action is brought about is responsive to the reasons
available to the agent at the time of action. While compatibilists
develop this kind of account in different ways, the most detailed
proposal is due to John Martin Fischer (1994, 2006, 2010, 2012;
Fischer and Ravizza 1998; for similar compatibilist treatments of
reasons-responsiveness, see Wolf 1990, Wallace 1994, Haji 1998, Nelkin
2011, McKenna 2013, Vargas 2013, Sartorio 2016). Fischer and Ravizza
argue that an agent's action is free and one for which he is
morally responsible only if the mechanism that issued in the action is
moderately reasons-responsive (Fischer and Ravizza 1998, ch. 3). By
'mechanism', Fischer and Ravizza simply mean "the
way the action was brought about" (38). One mechanism they often
discuss is practical deliberation. For example, in the case of Jones
discussed above, his decision to vote for Clinton on his own was
brought about by the process of practical deliberation. What must be
true of this process, this mechanism, for it to be moderately
reasons-responsive? Fischer and Ravizza maintain that moderate
reasons-responsiveness consists in two conditions: reasons-receptivity
and reasons-reactivity. A mechanism's reasons-receptivity
depends on the agent's cognitive capacities, such as being
capable of understanding moral reasons and the implications of their
actions (69-73). The second condition is more important for us
in the present context. A mechanism's reasons-reactivity depends
on how the mechanism *would* react given different reasons for
action. Fischer and Ravizza argue that the kind of reasons-reactivity
at stake is weak reasons-reactivity, where this merely requires that
there is some possible world in which the laws of nature remain the
same, the same mechanism operates, there is a sufficient reason to do
otherwise, and the mechanism brings about the alternative action in
response to this sufficient reason (73-76). On this analysis,
while Jones, due to the activity of Black, lacks the ability to do
otherwise in the 'all-in' sense, he is nevertheless morally responsible
for deciding to vote for Clinton because his action finds its source
in Jones's practical deliberation that is moderately
reasons-responsive.
Fischer and Ravizza's theory of freedom and responsibility has shifted
the focus of much recent debate to questions of sourcehood. Moreover,
one might argue that this theory is a clear improvement over classical
compatibilism with respect to handling cases of phobia. By focusing on
mechanisms, Fischer and Ravizza can argue that our agoraphobic Luke is
not morally responsible for deciding to refrain from going outside
because the mechanism that issues in this action--namely his
agoraphobia--is not moderately reasons-responsive. There is no
possible world in which the laws of nature are the same as ours, this
mechanism operates, and yet it reacts to a sufficient reason to go outside. No
matter what reasons there are for Luke to go outside, when acting on
this mechanism, he will always refrain from going outside (cf. Fischer
1987, 74).
Before turning to our second compatibilist model, it is worth noting
that it would be a mistake to think that Fischer and Ravizza's
account is a sourcehood account to the exclusion of the ability to do
otherwise in *any* sense. As we have just seen, Fischer and
Ravizza place clear *modal* requirements on mechanisms that
issue in actions with respect to which agents are free and morally
responsible. Indeed, this should be clear from the very idea of
reasons-responsiveness. Whether one is responsive depends not merely
on how one does respond, but also on how one *would* respond.
Thus, any account that makes reasons-responsiveness an essential
condition of free will is an account that makes the ability to do
otherwise, in some sense, necessary for free will (Fischer [2018]
concedes this point, though, as noted above, the reader should
consider Sartorio [2016] as a potential counterexample to this
claim).
The second main compatibilist model of sourcehood is an identification
model. Accounts of sourcehood of this kind lay stress on
self-determination or autonomy: to be the source of her action the
agent must self-determine her action. Like the contemporary discussion
of the ability to do otherwise, the contemporary discussion of the
power of self-determination begins with the failure of classical
compatibilism to produce an acceptable definition. According to
classical compatibilists, self-determination simply consists in the
agent's action being determined by her strongest motive. On the
assumption that some compulsive agents' compulsions operate by
generating irresistible desires to act in certain ways, the classical
compatibilist analysis of self-determination implies that these
compulsive actions are self-determined. While Hobbes seems willing to
accept this implication (1656 [1999], 78), most contemporary
compatibilists concede that this result is unacceptable.
Beginning with the work of Harry Frankfurt (1971) and Gary Watson
(1975), many compatibilists have developed identification accounts of
self-determination that attempt to draw a distinction between an
agent's desires or motives that are internal to the agent and
those that are external. The idea is that while agents are not (or at
least may not be) identical to any motivations (or bundle of
motivations), they are *identified* with a subset of their
motivations, rendering these motivations internal to the agent in such
a way that any actions brought about by these motivations are
*self*-determined. The identification relation is not an
identity relation, but something weaker (cf. Bratman 2000, 39n12).
What the precise nature of the identification relation is and to which
attitudes an agent stands in this relation is hotly disputed.
Lippert-Rasmussen (2003) helpfully divides identification accounts
into two main types. The first are "authority" accounts,
according to which agents are identified with attitudes that are
*authorized* to speak for them (368). The second are
authenticity accounts, according to which agents are identified with
attitudes that *reveal* who they truly are (368). (But see
Shoemaker 2015 for an ecumenical account of identification that blends
these two accounts.) Proposed attitudes to which agents are said to
stand in the identification relation include higher-order desires
(Frankfurt 1971), cares or loves (Frankfurt 1993, 1994; Shoemaker
2003; Jaworska 2007; Sripada 2016), self-governing policies (Bratman
2000), the desire to make sense of oneself (Velleman 1992, 2009), and
perceptions (or judgments) of the good (or best) (Watson 1975; Stump
1988; Ekstrom 1993; Mitchell-Yellin 2015).
The distinction between internal and external motivations allows
identification theorists to enrich classical compatibilists'
understanding of constraint, while remaining compatibilists about free
will and determinism. According to classical compatibilists, the only
kind of constraint is external (e.g., broken cars and broken legs),
but addictions and phobias seem just as threatening to free will.
Identification theorists have the resources to concede that some
constraints are internal. For example, they can argue that our
agoraphobic Luke is not free in refraining from going outside even
though this decision was caused by his strongest desires because he is
not identified with his strongest desires. On compatibilist
identification accounts, what matters for self-determination is not
whether our actions are determined or undetermined, but whether they
are brought about by motives with which the agent is identified:
exercises of the power of self-determination consist in an
agent's actions being brought about, in part, by
motives with which she is identified. (It is important
to note that while we have distinguished reasons-responsive accounts
from identification accounts, there is nothing preventing one from
combining both elements in a complete analysis of free will.)
Even if these reasons-responsive and identification compatibilist
accounts of sourcehood might successfully side-step the Consequence
Argument, they must come to grips with a second incompatibilist
argument: the Manipulation Argument. The general problem raised by
this line of argument is that whatever proposed compatibilist
conditions for an agent \(S\)'s being free with respect to, and
morally responsible for, some action \(\phi\), it will seem that
agents can be manipulated into satisfying these conditions with
respect to \(\phi\) and, yet, precisely because they are manipulated
into satisfying these conditions, their freedom and responsibility
seem undermined. The two most influential forms of the Manipulation
Argument are Pereboom's Four-case Argument (2001, ch. 4; 2014,
ch. 4) and Mele's Zygote Argument (2006, ch. 7; see Todd 2010,
2012 for developments of Mele's argument). As the structure of
Mele's version is simpler, we will focus on it.
Imagine a goddess Diana who creates a zygote \(Z\) in Mary in some
deterministic world. Suppose that Diana creates \(Z\) as she does
because she wants Jones to be murdered thirty years later. From her
knowledge of the laws of nature in her world and her knowledge of the
state of the world just prior to her creating \(Z\), she knows that a
zygote with precisely \(Z\)'s constitution located in Mary will
develop into an agent Ernie who, thirty years later, will murder Jones
as a result of his moderately reasons-responsive mechanism and on the
basis of motivations with which he is identified (whatever those might
be). Suppose Diana succeeds in her plan and Ernie murders Jones as a
result of her manipulation.
Many judge that Ernie is not morally responsible for murdering Jones
even though he satisfies both the reasons-responsive and
identification criteria. There are two possible lines of reply open to
compatibilists. On the soft-line reply, compatibilists attempt to show
that there is a relevant difference between manipulated agents such as
Ernie and agents who satisfy their account (McKenna 2008, 470). For
example, Fischer and Ravizza propose a second condition on sourcehood:
in addition to a mechanism's being moderately
reasons-responsive, an agent is morally responsible for the output of
such a mechanism only if the agent has come to take responsibility for
the mechanism, where an agent has taken responsibility for a mechanism
\(M\) just in case (i) she believes that she is an agent when acting
from \(M\), (ii) she believes that she is an apt target for blame and
praise for acting from \(M\), and (iii) her beliefs specified in (i)
and (ii) are "based, in an appropriate way, on [her]
evidence" (Fischer and Ravizza 1998, 238). The problem with this
reply is that we can easily imagine Diana creating Ernie so that his
murdering Jones is a result not only of a moderately
reasons-responsive mechanism, but also a mechanism for which he has
taken responsibility. On the hard-line reply, compatibilists concede
that, despite initial appearances, the manipulated agent is free and
morally responsible and attempt to ameliorate the seeming
counterintuitiveness of this concession (McKenna 2008, 470-71).
Here compatibilists might point out that the idea of being manipulated
is worrisome only so long as the manipulators are interfering with an
agent's development. But if the manipulators simply create a
person, and then allow that person's life to unfold without any
further interference, the manipulators' activity is no threat to
freedom (McKenna 2008; Fischer 2011; Sartorio 2016, ch. 5). (For other
responses to the Manipulation Argument, see Kearns 2012; Sripada 2012;
McKenna 2014.)
### 2.5 Libertarian Accounts of Sourcehood
Despite these compatibilist replies, to some the idea that the
entirety of a *free* agent's life can be determined, and
in this way controlled, by another agent will seem incredible. Some
take the lesson of the Manipulation Argument to be that no
compatibilist account of sourcehood or self-determination is
satisfactory. True sourcehood--the kind of sourcehood that can
actually ground an agent's freedom and
responsibility--requires, so it is argued, that one's
action not be causally determined by factors beyond one's
control.
Libertarians, while united in endorsing this negative condition on
sourcehood, are deeply divided concerning which further positive
conditions may be required. It is important to note that while
libertarians are united in insisting that compatibilist accounts of
sourcehood are insufficient, they are not committed to thinking that
the conditions of freedom spelled out in terms either of
reasons-responsiveness or of identification are not necessary. For
example, Stump (1988, 1996, 2010) builds a sophisticated libertarian
model of free will out of resources originally developed within
Frankfurt's identification model (see also Ekstrom 1993, 2000;
Franklin 2014) and nearly all libertarians agree that exercises of
free will require agents to be reasons-responsive (e.g., Kane 1996; Clarke 2003, chs. 8-9; Franklin 2018, ch.
2). Moreover, while this section focuses on libertarian accounts of
sourcehood, we remind readers that most (if not all) libertarians
think that the freedom to do otherwise is also necessary for free will
and moral responsibility.
There are three main libertarian options for understanding sourcehood
or self-determination: non-causal libertarianism (Ginet 1990, 2008;
McCann 1998; Lowe 2008; Goetz 2009; Pink 2017; Palmer 2021),
event-causal libertarianism (Wiggins 1973; Kane 1996, 1999, 2011,
2016; Mele 1995, chs. 11-12; 2006, chs. 4-5; 2017; Ekstrom
2000, 2019; Clarke 2003, chs. 2-6; Franklin 2018), and
agent-causal libertarianism (Reid 1788 [1969]; Chisholm 1966, 1976;
Taylor 1966; O'Connor 2000; Clarke 1993; 1996;
2003, chs. 8-10; Griffith 2010; Steward 2012). Non-causal
libertarians contend that exercises of the power of self-determination
need not (or perhaps even cannot) be caused or causally structured.
According to this view, we control our volition or choice simply in
virtue of its being ours--its occurring in us. We do not exert a
special kind of causality in bringing it about; instead, it is an
intrinsically active event, intrinsically something we *do*.
While there may be causal influences upon our choice, there need not
be, and any such causal influence is wholly irrelevant to
understanding why it occurs. Reasons provide an autonomous, non-causal
form of explanation. Provided our choice is not wholly determined by
prior factors, it is free and under our control simply in virtue of
being ours. Non-causal views have failed to garner wide support among
libertarians since, for many, self-*determination* seems to be
an essentially causal notion (cf. Mele 2000 and Clarke 2003, ch. 2). This dispute hinges on the necessary
conditions on the concept of causal power, and relatedly on whether
power simpliciter admits causal and non-causal variants. For
discussion, see O'Connor (2021).
Most libertarians endorse an event-causal or agent-causal account of
sourcehood. Both these accounts maintain that exercises of the power
of self-determination consist partly in the agent's bringing
about her choice or action, but they disagree on how to analyze *an
agent's bringing about her choice*. While event-causal
libertarianism admits of different species, at the heart of this view
is the idea that self-determining an action requires, at minimum, that
the agent cause the action and that an agent's causing his
action is *wholly reducible* to mental states and other events
involving the agent nondeviantly causing his action. Consider an
agent's raising his hand. According to the event-causal model at
its most basic level, an agent's raising his hand consists in
the agent's causing his hand to rise and his causing his hand to
rise consists in *apt* mental states and events involving the
agent--such as the agent's desire to ask a question and his
belief that he can ask a question by raising his
hand--*nondeviantly* causing his hand to rise. (The
nondeviance clause is required since it seems possible that an event
be brought about by one's desires and beliefs and yet not be
self-determined, or even an action for that matter, due to the unusual
causal path leading from the desires and beliefs to action. Imagine a
would-be accomplice of an assassin believes that his dropping his
cigarette is the signal for the assassin to shoot his intended victim
and he desires to drop his cigarette and yet this belief and desire so
unnerve him that he accidentally drops his cigarette. While the event
of dropping the cigarette is caused by a relevant desire and belief it
does not seem to be self-determined and perhaps is not even an action
[cf. Davidson 1973].) To fully spell out this account, event-causal
libertarians must specify which mental states and events are apt (cf.
Brand 1979)--which mental states and events are the springs of
self-determined actions--and what nondeviance consists in (cf.
Bishop 1989). (We note that this has proven very difficult, enough so
that some take the problem to spell doom for event-causal theories of
action. Such philosophers [e.g., Taylor 1966 and Sehon 2005] take
agential power to be conceptually and/or ontologically primitive and
understand reasons explanations of action in irreducibly teleological
terms. See Stout 2010 for a brisk survey of discussions of this
topic.) For ease, in what follows we will assume that apt mental
states are an agent's reasons that favor the action.
Event-causal libertarians, of course, contend that self-determination
requires more than nondeviant causation by agents' reasons: for
it is possible that agents' actions in deterministic worlds are
nondeviantly caused by apt mental states and events.
Self-determination requires *nondeterministic* causation, in a
nondeviant way, by an agent's reasons. While historically many
have thought that nondeterministic causation is impossible (Hobbes
1654 [1999], 1656 [1999]; Hume 1740 [1978], 1748 [1975]), with the
advent of quantum physics and, from a very different direction, an
influential essay by G.E.M. Anscombe (1971), it is now widely
assumed that nondeterministic (or probabilistic) causation is
possible. There are two importantly different ways to understand
nondeterministic causation: as the causation of probability or as the
probability of causation. Under
the causation of probability model, a nondeterministic cause \(C\)
causes (or causally contributes to) the objective probability of the
outcome's occurring rather than the outcome itself. On this
account, \(S\)'s reasons do not cause his decision itself but rather
cause there to be a certain antecedent objective probability of its
occurring; the decision itself is uncaused. On the competing probability of
causation model, a nondeterministic cause \(C\) causes the outcome of
a nondeterministic process. Given that \(C\) is a nondeterministic
cause of the outcome, it was possible given the exact same past and
laws of nature that \(C\) not cause the outcome (perhaps because it
was possible that some other event cause some other outcome)--the
probability of this causal transaction's occurring was less than
\(1\). Given that event-causal libertarians maintain that
self-determined actions, and thus free actions, must be caused, they
are committed to the probability of causation model of
nondeterministic causation (cf. Franklin 2018, 25-26). (We note
that Balaguer [2010] is skeptical of the above distinction, and it is
thus unclear whether he should best be classified as a non-causal or
event-causal libertarian, though see Balaguer [2014] for evidence that
it is best to treat him as a non-causalist.) Consequently, according
to event-causal libertarians, when an agent \(S\) self-determines his
choice \(\phi\), then \(S\)'s reasons \(r\_1\)
nondeterministically cause (in a nondeviant way) \(\phi\), and it was
possible, given the past and laws, that \(r\_1\) not have caused
\(\phi\), but rather some of \(S\)'s other reasons \(r\_2\)
nondeterministically caused (in a nondeviant way) a different action
\(\psi\).
Agent-causal libertarians contend that the event-causal picture fails
to capture self-determination, for it fails to accord the agent with a
power to *settle* what she does. Pereboom offers a forceful
statement of this worry:
>
>
> On an event-causal libertarian picture, the relevant causal conditions
> antecedent to the decision, i.e., the occurrence of certain
> agent-involving events, do not settle whether the decision will occur,
> but only render the occurrence of the decision about \(50\%\)
> probable. In fact, because no occurrence of antecedent events settles
> whether the decision will occur, and only antecedent events are
> causally relevant, *nothing* settles whether the decision will
> occur. (Pereboom 2014, 32; cf. Watson 1987, 1996; Clarke 2003 [ch. 8], 2011; Griffith 2010; Shabo 2011, 2013;
> Steward 2012 [ch. 3]; and Schlosser 2014; for critical
> assessment, see Clarke 2019.)
>
>
>
On the event-causal picture, the agent's causal contribution to
her actions is exhausted by the causal contribution of her reasons,
and yet her reasons leave open which decisions she will make, and this
seems insufficient for self-determination.
But what more must be added? Agent-causal libertarians maintain that
self-determination requires that *the agent herself* play a
causal role over and above the causal role played by her reasons. Some
agent-causal libertarians deny that an agent's reasons play any
direct causal role in bringing about an agent's self-determined
actions (Chisholm 1966; O'Connor 2000, ch. 5), whereas
others allow or even require that self-determined actions be caused in
part by the agent's reasons (Clarke 2003, ch. 9; Steward 2012,
ch. 3). But all agent-causal libertarians insist that exercises of the
power of self-determination do not reduce to nondeterministic
causation by apt mental states: agent-causation does not reduce to
event-causation.
Agent-causal libertarianism seems to capture an aspect of
self-determination that neither the above compatibilist accounts nor
event-causal libertarian accounts capture. (Some compatibilists even
accept this and try to incorporate agent-causation into a
compatibilist understanding of free will. See Markosian 1999, 2012;
Nelkin 2011.) These accounts reduce the causal role of the self to
states and events to which the agent is not identical (even if he is
identified with them). But how can *self*-determination of my
actions wholly reduce to determination of my actions by things
*other* than the self? Richard Taylor nicely expresses this
intuition: "If I believe that something not identical to myself
was the cause of my behavior--some event wholly external to
myself, for instance, or even one internal to myself, such as a nerve
impulse, volition, or whatnot--then I cannot regard the behavior
as being an act of mine, unless I further believed that I was the
cause of that external or internal event" (1974, 55; cf.
Franklin 2016).
Despite its powerful intuitive pull for some, many have argued that
agent-causal libertarianism is obscure or even incoherent. The stock
objection used to be that the very idea of
agent-causation--causation by agents that is not reducible to
causation by mental states and events involving the agent--is
incoherent, but this objection has become less common due to
pioneering work by Chisholm (1966, 1976), Taylor (1974),
O'Connor (2000, 2011), Clarke (2003), and Steward (2012, ch. 8). More common objections now concern, first, how to understand the
relationship between agent-causation and an agent's reasons (or
motivations in general), and, second, the empirical adequacy of
agent-causal libertarianism. With respect to the first worry, it is
widely assumed that the only (or at least best) way to understand
reasons-explanation and motivational influence is within a causal
account of reasons, where reasons cause our actions (Davidson 1963;
Mele 1992). If agent-causal libertarians accept that self-determined
actions, in addition to being agent-caused, must also be caused by
agents' reasons that favored those actions, then agent-causal
libertarians need to explain how to integrate these causes (for a
detailed attempt to do just this, see Clarke 2003, ch. 8). Given that
these two causes seem distinct, is it not possible that the agent
cause his decision to \(\phi\) and yet the agent's reasons
simultaneously cause an incompatible decision to \(\psi\)? If
agent-causal libertarians side-step this difficult question by denying
that reasons cause action, then they must explain how reasons can
explain and motivate action without causing it; and this has turned
out to be no easy task. (For more general attempts to
understand reasons-explanation and motivation within a non-causal
framework see Schueler 1995, 2003; Sehon 2005). For further discussion
see the entry on
incompatibilist (nondeterministic) theories of free will.
Finally, we note that some recent philosophers have questioned the
presumed difference between event- and agent-causation by arguing that
*all* causation is object or substance causation. They argue
that the dominant tendency to understand 'garden variety'
causal transactions in the world as relations between events is an
unfortunate legacy of David Hume's rejection of substance and
causation as basic metaphysical categories. On the competing
metaphysical picture of the world, the event or state of an
object's having some property such as mass is its having a
causal power, which in suitable circumstances it exercises to bring
about a characteristic effect. Applied to human agents in an account
of free will, the account suggests a picture on which an agent's
desires, beliefs, and intentions are rational powers to will
particular courses of action, and where the agent's willing is
not determined in any one direction, she wills freely. An advantage
for the agent-causalist who embraces this broader metaphysics is
'ideological' parsimony. For different developments and
defenses of this approach, see Lowe (2008), Swinburne (2013), and O'Connor (2021); and for reason
to doubt that a substance-causal metaphysics helps to allay skepticism
concerning free will, see Clarke and Reed (2015).
## 3. Do We Have Free Will?
Most philosophers theorizing about free will take themselves to be
attempting to analyze a near-universal power of mature human beings.
But as we've noted above, there have been free will skeptics in
both ancient and (especially) modern times. (Israel 2001 highlights a
number of such skeptics in the early modern period.) In this section,
we summarize the main lines of argument both for and against the
reality of human freedom of will.
### 3.1 Arguments *Against* the Reality of Free Will
There are both a priori and empirical arguments against free will (see
the entry on
skepticism about moral responsibility).
Several of these start with an argument that free will is
incompatible with causal determinism, which we will not rehearse here.
Instead, we focus on arguments that human beings lack free will,
against the background assumption that freedom and causal determinism
are incompatible.
The most radical a priori argument is that free will is not merely
contingently absent but is impossible. Nietzsche 1886 [1966] argues to
this effect, and more recently it has been argued by Galen Strawson
(1986, ch. 2; 1994, 2002). Strawson associates free will with being
'ultimately morally responsible' for one's actions.
He argues that, because how one acts is a result of, or explained by,
"how one is, mentally speaking" (\(M\)), for one to be
responsible for that choice one must be responsible for \(M\). To be
responsible for \(M\), one must have chosen to be \(M\)
itself--and that not blindly, but deliberately, in accordance
with some reasons \(r\_1\). But for that choice to be a responsible
one, one must have chosen to be such as to be moved by \(r\_1\),
requiring some further reasons \(r\_2\) for such a choice. And so on,
*ad infinitum*. Free choice requires an impossible infinite
regress of choices to be the way one is in making choices.
There have been numerous replies to Strawson's argument. Mele
(1995, 221ff.) argues that Strawson misconstrues the locus of freedom
and responsibility. Freedom is principally a feature of our actions,
and only derivatively of our characters from which such actions
spring. The task of the theorist is to show how one is in rational,
reflective control of the choices one makes, consistent with there
being no freedom-negating conditions. While this seems right, when
considering those theories that make one's free control to
reside directly in the causal efficacy of one's reasons (such as
compatibilist reasons-responsive accounts or event-causal
libertarianism), it is not beside the point to reflect on how one came
to be that way in the first place and to worry that such reflection
should lead one to conclude that true responsibility (and hence
freedom) is undermined, since a complete distal source of any action
may be found external to the agent. Clarke (2003, 170-76) argues
that an effective reply may be made by indeterminists, and, in
particular, by nondeterministic agent-causal theorists. Such theorists
contend that (i) aspects of 'how one is, mentally
speaking', fully *explain* an agent's choice
without causally determining it and (ii) the agent himself causes the
choice that is made (so that the agent's antecedent state, while
grounding an explanation of the action, is not the complete causal
source of it). Since the agent's exercise of this power is
causally undetermined, it is not true that there is a sufficient
'ultimate' source of it external to the agent. Finally,
Mele (2006, 129-34, and 2017, 212-16) and O'Connor
(2009b) suggest that freedom and moral responsibility come in
degrees and grow over time, reflecting the fact that 'how one
is, mentally speaking' is increasingly shaped by one's own
past choices. Furthermore, some choices for a given individual may
reflect more freedom and responsibility than others, which may be the
kernel of truth behind Strawson's sweeping argument. (For
discussion of the ways that nature, nurture, and contingent
circumstances shape our behavior and raise deep issues concerning the
*extent* of our freedom and responsibility, see Levy 2011 and
Russell 2017, chs. 10-12.)
A second family of arguments against free will contend that, in one
way or another, nondeterministic theories of freedom entail either
that agents lack control over their choices or that the choices cannot
be adequately explained. These arguments are variously called the
'Mind', 'Rollback', or 'Luck'
argument, with the latter admitting of several versions. (For
statements of such arguments, see van Inwagen 1983, ch. 4; 2000; Haji
2001; Mele 2006; Shabo 2011, 2013, 2020; Coffman 2015. We note that
some philosophers advance such arguments not as parts of a general
case against free will, but merely as showing the inadequacy of
specific accounts of free will [see, e.g., Griffith 2010].) They each
describe imagined cases--individual cases, or comparison of
intra- or inter-world duplicate antecedent conditions followed by
diverging outcomes--designed to elicit the judgment that the
occurrence of a choice that had remained unsettled given *all*
prior causal factors can only be a 'matter of chance',
'random', or 'a matter of luck'. Such terms
have been imported from other contexts and have come to function as
quasi-technical, unanalyzed concepts in these debates, and it is
perhaps more helpful to avoid such proxies and to conduct the debates
directly in terms of the metaphysical notion of control and epistemic
notion of explanation. Where the arguments question whether an
undetermined agent can exercise appropriate control over the choice he
makes, proponents of nondeterministic theories often reply that
control is not exercised prior to, but at the time of the
choice--in the very act of bringing it about (see, e.g., Clarke
2005 and O'Connor 2007). Where the arguments question whether
undetermined choices can be adequately explained, the reply often
consists in identifying a form of explanation other than the form
demanded by the critic--a 'noncontrastive'
explanation, perhaps, rather than a 'contrastive'
explanation, or a species of contrastive explanation consistent with
indeterminism (see, e.g., Kane 1999;
Clarke 2003, ch. 8; and Franklin 2011a; 2018, ch. 5).
We now consider empirical arguments against human freedom. Some of
these stem from the physical sciences (while making assumptions
concerning the way physical phenomena fix psychological phenomena) and
others from neuroscience and psychology.
It used to be common for philosophers to argue that there is empirical
reason to believe that the world in general is causally determined,
and since human beings are parts of the world, they are too. Many took
this to be strongly confirmed by the spectacular success of Isaac
Newton's framework for understanding the universe as governed
everywhere by fairly simple, exceptionless laws of motion. But the
quantum revolution of the early twentieth century has made that
'clockwork universe' image at least doubtful at the level
of basic physics. While quantum mechanics has proven spectacularly
successful as a framework for making precise and accurate predictions
of certain observable phenomena, its implications for the causal
structure of reality are still not well understood, and there are
competing indeterministic and deterministic interpretations. (See the
entry on
quantum mechanics
for detailed discussion.) It is possible that indeterminacy at the
small scale, supposing it to be genuine, 'cancels out' at
the macroscopic scale of birds and buildings and people, so that
behavior at this scale is virtually deterministic. But this idea, once
common, is now being challenged empirically, even at the level of
basic biology. Furthermore, the social, biological, and medical
sciences, too, are rife with merely statistical generalizations.
Plainly, the jury is out on all these inter-theoretic questions. But
that is just a way to say that current science does *not*
decisively support the idea that everything we do is pre-determined by
the past, and ultimately by the distant past, wholly out of our
control. For discussion, see Balaguer (2009), Koch (2009), Roskies
(2014), Ellis (2016).
Maybe, then, we are subject to myriad causal influences, but the sum
total of these influences doesn't determine what we do; it only
makes it more or less likely that we'll do this or that. Now
some of the a priori no-free-will arguments above center on
nondeterministic theories according to which there are objective
antecedent probabilities associated with each possible choice outcome.
Why objective probabilities of this kind might present special
problems beyond those posed by the absence of determinism has been
insufficiently explored to date. (For brief discussion, see Vicens
2016 and O'Connor 2016.) But one philosopher who argues that
there is reason to hold that our actions, if undetermined, are
governed by objective probabilities and that this fact calls into
question whether we act freely is Derk Pereboom (2001, ch. 3; 2014,
ch. 3). Pereboom notes that our best physical theories indicate that
statistical laws govern isolated, small-scale physical events, and he
infers from the thesis that human beings are wholly physically
composed that such statistical laws will also govern all the physical
components of human actions. Pereboom further maintains that
agent-causal libertarianism offers the correct analysis of free will.
He then invites us to imagine that the antecedent probability of some
physical component of an action occurring is \(0.32\). If the action
is free while not violating the statistical law, then, in a scenario
with a large enough number of instances, this action would have to be
freely chosen close to \(32\) percent of the time. This leads to the
problem of "wild coincidences":
> if the occurrence of these physical components were settled by the
> choices of agent-causes, then their actually being chosen close to 32
> percent of the time would amount to a coincidence no less wild than
> the coincidence of possible actions whose physical components have an
> antecedent probability of about 0.99 being chosen, over large enough
> number of instances, close to 99 percent of the time. The proposal
> that agent-caused free choices do not diverge from what the
> statistical laws predict for the physical components of our actions
> would run so sharply counter to what we would expect as to make it
> incredible. (2014, 67)
Clarke (2010) questions the implicit assumption that free agent-causal
choices should be expected *not* to conform to physical
statistical laws, while O'Connor (2009a) challenges the more
general assumption that freedom requires that agent-causal choices not
be governed by statistical laws of any kind, as they plausibly would
be if the relevant psychological states/powers are strongly emergent
from physical states of the human brain. Finally, Runyan (2018) argues
that Pereboom's case rests on an implausible empirical assumption
concerning the evolution of objective probabilities concerning types
of behavior over time.
Pereboom's empirical basis for free will skepticism is very
general. Others see support for free will skepticism from specific
findings and theories in the human sciences. They point to evidence
that we can be unconsciously influenced in the choices we make by a
range of factors, including ones that are not motivationally relevant;
that we can come to believe that we chose to initiate a behavior that
in fact was artificially induced; that people subject to certain
neurological disorders will sometimes engage in purposive behavior
while sincerely believing that they are not directing it. Finally, a
great deal of attention has been given to the work of neuroscientist
Benjamin Libet (2002). Libet conducted some simple experiments that
seemed to reveal the existence of 'preparatory' brain
activity (the 'readiness potential') shortly before a subject engages
in an ostensibly spontaneous action. (Libet interpreted this activity
as the brain's 'deciding' what to do before we are
consciously settled on a course of action.) Wegner (2002) surveys all
of these findings (some of which are due to his own work as a social
psychologist) and argues on their basis that the experience of
conscious willing is 'an illusion'. For criticism of such
arguments, see Mele (2009); Nahmias (2014);
Mudrik et al. (2022); and several contributions to Maoz and
Sinnott-Armstrong (2022). Libet's interpretation of the readiness
potential has come in for severe criticism. After extensive subsequent
study, neuroscientists are uncertain what it signifies. For thorough
review of the evidence, see Schurger et al. (2021).
While Pereboom and others point to these empirical considerations in
defense of free will skepticism, other philosophers see them as
reasons to favor a more modest free will agnosticism (Kearns 2015) or
to promote revisionism about the 'folk idea of free will'
(Vargas 2013; Nichols 2015).
### 3.2 Arguments *for* the Reality of Free Will
If one is a compatibilist, then a case for the reality of free will
requires evidence for our being effective agents who for the most part
are aware of what we do and why we are doing it. If one is an
incompatibilist, then the case requires *in addition* evidence
for causal indeterminism, occurring in the right locations in the
process leading from deliberation to action. Many think that we
already have third-personal 'neutral' scientific evidence
for much of human behavior's satisfying modest compatibilist
requirements, such as Fischer and Ravizza's
reasons-responsiveness account. However, given the immaturity of
social science and the controversy over whether psychological states
'reduce' in some sense to underlying physical states (and
what this might entail for the reality of mental causation), this
claim is doubtful. A more promising case for our satisfying (at least)
compatibilist requirements on freedom is that effective agency is
*presupposed* by all scientific inquiry and so cannot
rationally be doubted (a fact overlooked by some of the more
extreme 'willusionists' such as Wegner).
However, effective intervention in the world (in scientific practice
and elsewhere) does not (obviously) require that our behavior be
causally undetermined, so the 'freedom is rationally
presupposed' argument cannot be launched for such an
understanding of freedom. Instead, incompatibilists usually give one
of the following two bases for rational belief in freedom (both of
which can be given by compatibilists, too).
First, philosophers have long claimed that we have introspective
evidence of freedom in our experience of action, or perhaps of
consciously attended or deliberated action. Augustine and Scotus,
discussed earlier, are two examples among many. In recent years,
philosophers have been more carefully scrutinizing the experience of
agency and a debate has emerged concerning its contents, and in
particular whether it supports an indeterministic theory of human free
action. For discussion, see Deery et al. (2013), Guillon (2014),
Horgan (2015), and Bayne (2017).
Second, philosophers (e.g., Reid 1788 [1969], Swinburne 2013)
sometimes claim that our belief in the reality of free will is
epistemically basic, or reasonable without requiring independent
evidential support. Most philosophers hold that some beliefs have that
status, on pain of our having no justified beliefs whatever. It is
controversial, however, just which beliefs do, because it is
controversial which criteria a belief must satisfy to qualify for that
privileged status. It is perhaps necessary that a basic belief be
'instinctive' (unreflectively held) for all or most human
beings; that it be embedded in regular experience; and that it be
central to our understanding of an important aspect of the world. Our
belief in free will seems to meet these criteria, but whether they are
sufficient is debated. (O'Connor 2019 proposes that free will
belief is epistemically basic but defeasible.) Other philosophers
defend a variation on this stance, maintaining instead that belief in
the reality of moral responsibility is epistemically basic, and that
since moral responsibility entails free will, or so it is claimed, we
may *infer* the reality of free will (see, e.g., van Inwagen
1983, 206-13).
## 4. Theological Wrinkles
### 4.1 Free Will and God's Power, Knowledge, and Goodness
A large portion of Western philosophical work on free will has been
written within an overarching theological framework, according to
which God is the ultimate source, sustainer, and end of all else. Some
of these thinkers draw the conclusion that God must be a sufficient,
wholly determining cause for everything that happens; all of them
suppose that every creaturely act necessarily depends on the
explanatorily prior, cooperative activity of God. It is also commonly
presumed by philosophical theists that human beings are free and
responsible (on pain of attributing evil in the world to God alone,
and so impugning His perfect goodness). Hence, those who believe that
God is omni-determining typically are compatibilists with respect to
freedom and (in this case) theological determinism. Edwards (1754
[1957]) is a good example. But those who suppose that God's
sustaining activity (and special activity of conferring grace) is only
a necessary condition on the outcome of human free choices need to
tell a more subtle story, on which omnipotent God's cooperative
activity can be (explanatorily) prior to a human choice and yet the
outcome of that choice be settled only by the choice itself. For
important medieval discussions--the apex of philosophical
reflection on theological concerns--see the relevant portions of
Al-Ghazali *IP*, Aquinas *BW* and Scotus *QAM*.
Three positions (given in order of logical strength) on God's
activity vis-a-vis creaturely activity were variously defended
by thinkers of this era: mere conservationism, concurrentism, and
occasionalism. These positions turn on subtle distinctions, which have
recently been explored by Freddoso (1988), Kvanvig and McCann (1991),
Koons (2002), Grant (2016 and 2019), and Judisch (2016).
Many suppose that there is a challenge to human freedom stemming not
only from God's perfect power but also from his perfect
knowledge. A standard argument for the incompatibility of free will
and causal determinism has a close theological analogue. Recall van
Inwagen's influential formulation of the 'Consequence
Argument':
> If determinism is true, then our acts are the consequences of the laws
> of nature and events in the remote past. But it is not up to us what
> went on before we were born, and neither is it up to us what the laws
> of nature are. Therefore, the consequences of these things (including
> our present acts) are not up to us. (van Inwagen 1983, 16)
And now consider an argument that turns on God's comprehensive
and infallible knowledge of the future:
> If infallible divine foreknowledge is true, then our acts are the
> (logical) consequences of God's beliefs in the remote past.
> (Since God *cannot* get things wrong, his believing that
> something will be so entails that it will be so.) But it is not up to
> us what beliefs God had before we were born, and neither is it up to
> us that God's beliefs are necessarily true. Therefore, the
> consequences of these things (including our present acts) are not up
> to us.
An excellent discussion of these arguments in tandem and attempts to
point to relevant disanalogies between causal determinism and
infallible foreknowledge may be found in the introduction to Fischer
(1989). See also the entry on
foreknowledge and free will.
Another issue concerns how knowledge of God, the ultimate Good, would
impact human freedom. Many philosophical theologians, especially the
medieval Aristotelians, were drawn to the idea that human beings
cannot but will that which they take to be an unqualified good. (As
noted above, Duns Scotus is an exception to this consensus, as were
Ockham and Suarez subsequently, but their dissent is limited.) Hence,
if there is an afterlife, in which humans 'see God face to
face,' they will *inevitably* be drawn to Him. Following
Pascal, Murray (1993, 2002) argues that a good God would choose to
make His existence and character less than certain for human beings,
for the sake of preserving their freedom. (He will do so, the argument
goes, at least for a period of time in which human beings participate
in their own character formation.) If it is a good for human beings
that they freely choose to respond in love to God and to act in
obedience to His will, then God must maintain an 'epistemic
distance' from them lest they be overwhelmed by His goodness or
power and respond out of necessity, rather than freedom. (See also the
other essays in Howard-Snyder and Moser 2002.)
If it is true that God withholds our ability to be certain of his
existence for the sake of our freedom, then it is natural to conclude
that humans will lack freedom in heaven. And it is anyway common
traditional Jewish, Christian, and Muslim theologies to maintain that
humans cannot sin in heaven. Even so, traditional Christian theology
at least maintains that human persons in heaven are free. What sort of
freedom is in view here, and how does it relate to mundane freedom?
Two good recent discussions of these questions are Pawl and Timpe
(2009) and Tamburro (2017).
### 4.2 God's Freedom
Finally, there is the question of the freedom of God himself. Perfect
goodness is an essential, not acquired, attribute of God. God cannot
lie or be in any way immoral in His dealings with His creatures
(appearances notwithstanding). Unless we take the minority position on
which this is a trivial claim, since whatever God does
*definitionally* counts as good, this appears to be a
significant, inner constraint on God's freedom. Did we not
contemplate immediately above that human freedom would be curtailed by
our having an unmistakable awareness of what is in fact the Good? And
yet is it not passing strange to suppose that God should be less than
perfectly free?
One suggested solution to this puzzle takes as its point of departure
the distinction noted in section 2.3 between the ability to do
otherwise and sourcehood, proposing that the core metaphysical feature
of freedom is being the ultimate source, or originator, of one's
choices. For human beings or any created persons who owe their
existence to factors outside themselves, the only way their acts of
will could find their ultimate origin in themselves is for such acts
not to be determined by their character and circumstances. For if all
my willings were wholly determined, then if we were to trace my causal
history back far enough, we would ultimately arrive at external
factors that gave rise to me, with my particular genetic dispositions.
My motives at the time would not be the *ultimate* source of my
willings, only the most *proximate* ones. Only by there being
less than deterministic connections between external influences and
choices, then, is it possible for me to be an ultimate source of my
activity, concerning which I may truly say, "the buck stops
here."
As on many other points, things are different in the
case of God. As Anselm observed, even if God's character
absolutely precludes His performing certain actions in certain
contexts, this will not imply that some external factor is in any way
a partial origin of His willings and refrainings from willing. Indeed,
this would not be so even if he were determined by character to will
*everything* which He wills. God's nature owes its
existence to nothing. Thus, God would be the sole and ultimate source
of His will even if He couldn't will otherwise.
Well, then, might God have willed otherwise in *any* respect?
The majority view in the history of philosophical theology is that He
indeed could have. He might have chosen not to create anything at all.
And given that He did create, He might have created any number of
alternatives to what we observe. But there have been noteworthy
thinkers who argued the contrary position, along with others who
clearly felt the pull of the contrary position even while resisting
it. The most famous such thinker is Leibniz (1710 [1985]), who argued
that God, being both perfectly good and perfectly powerful, cannot
fail to will the best possible world. Leibniz insisted that this is
consistent with saying that God is able to will otherwise, although
his defense of this last claim is notoriously difficult to make out
satisfactorily. Many read Leibniz, *malgre lui*, as one
whose basic commitments imply that God could not have willed other
than He does in any respect.
One might challenge Leibniz's reasoning on this point by
questioning the assumption that there is a uniquely best possible
Creation (an option noted by Adams 1987, though he challenges instead
Leibniz's conclusion based on it). One way this could be is if
there is no well-ordering of worlds: some pairs of worlds are
sufficiently different in kind that they are incommensurate with each
other (neither is better than the other, nor are they equal) and no
world is better than either of them. Another way this could be is if
there is no upper limit on goodness of worlds: for every possible
world God might have created, there are others (infinitely many, in
fact) which are better. If such is the case, one might argue, it is
reasonable for God to arbitrarily choose which world to create from
among those worlds exceeding some threshold value of overall
goodness.
However, William Rowe (2004) has countered that the thesis that there
is no upper limit on goodness of worlds has a very different
consequence: it shows that there could not be a morally perfect
Creator! For suppose our world has an on-balance moral value of \(n\)
and that God chose to create it despite being aware of possibilities
having values higher than \(n\) that He was able to create. It seems
we can now imagine a morally better Creator: one having the same
options who chooses to create a better world. For critical replies to
Rowe, see Almeida (2008, ch. 1), Kray (2010), and Zimmerman (2018).
Finally, Norman Kretzmann (1997, 220-25) has argued in the
context of Aquinas's theological system that there is strong
pressure to say that God must have created something or other, though
it may well have been open to Him to create any of a number of
contingent orders. The reason is that there is no plausible account of
how an absolutely perfect God might have a *resistible*
motivation--one consideration among other, competing
considerations--for creating something rather than nothing. (It
obviously cannot have to do with any sort of utility, for example.)
The best general understanding of God's being motivated to
create at all--one which in places Aquinas himself comes very
close to endorsing--is to see it as reflecting the fact that
God's very being, which is goodness, necessarily diffuses
itself. Perfect goodness will naturally communicate itself outwardly;
God who is perfect goodness will naturally create, generating a
dependent reality that imperfectly reflects that goodness. Wainwright
(1996) discusses a somewhat similar line of thought in the Puritan
thinker Jonathan Edwards. Alexander Pruss (2016), however, raises
substantial grounds for doubt concerning this line of thought;
O'Connor (2022) offers a rejoinder.
## 1. Voltaire's Life: The Philosopher as Critic and Public Activist
Voltaire only began to identify himself with philosophy and the *philosophe* identity during middle age. His work *Lettres philosophiques*, published in 1734 when he was forty years old, was the key turning point in this transformation. Before this date, Voltaire's life in no way pointed him toward the philosophical destiny that he was later to assume. His early orientation toward literature and libertine sociability, however, shaped his philosophical identity in crucial ways.
### 1.1 Voltaire's Early Years (1694-1726)
Francois-Marie d'Arouet was born in 1694, the fourth of five children, to a well-to-do public official and his well-bred aristocratic wife. In its fusion of traditional French aristocratic pedigree with the new wealth and power of royal bureaucratic administration, the d'Arouet family was representative of elite society in France during the reign of Louis XIV. The young Francois-Marie acquired from his parents the benefits of prosperity and political favor, and from the Jesuits at the prestigious College Louis-le-Grand in Paris he also acquired a first-class education. Francois-Marie also acquired an introduction to modern letters from his father, who was active in the literary culture of the period both in Paris and at the royal court of Versailles. Francois senior appears to have enjoyed the company of men of letters, yet his frustration with his son's ambition to become a writer is notorious. From early in his youth, Voltaire aspired to emulate his idols Moliere, Racine, and Corneille and become a playwright, yet Voltaire's father strenuously opposed the idea, hoping to install his son instead in a position of public authority. First as a law student, then as a lawyer's apprentice, and finally as a secretary to a French diplomat, Voltaire attempted to fulfill his father's wishes. But in each case, he ended up abandoning his posts, sometimes amidst scandal.
Escaping from the burdens of these public obligations, Voltaire would retreat into the libertine sociability of Paris. It was here in the 1720s, during the culturally vibrant period of the Regency government between the reigns of Louis XIV and XV (1715-1723), that Voltaire established one dimension of his identity. His wit and congeniality were legendary even as a youth, so he had few difficulties establishing himself as a popular figure in Regency literary circles. He also learned how to play the patronage game so important to those with writerly ambitions. Thanks, therefore, to some artfully composed writings, a couple of well-made contacts, more than a few *bon mots*, and a little successful investing, especially during John Law's Mississippi Bubble fiasco, Voltaire was able to establish himself as an independent man of letters in Paris. His literary debut occurred in 1718 with the publication of his *Oedipe*, a reworking of the ancient tragedy that evoked the French classicism of Racine and Corneille. The play was first performed at the home of the Duchesse du Maine at Sceaux, a sign of Voltaire's quick ascent to the very pinnacle of elite literary society. Its published title page also announced the new pen name that Voltaire would ever after deploy.
During the Regency, Voltaire circulated widely in elite circles such as those that congregated at Sceaux, but he also cultivated a more illicit and libertine sociability. This pairing was not at all uncommon during this time, and Voltaire's intellectual work in the 1720s--a mix of poems and plays that shifted between playful libertinism and serious classicism seemingly without pause--illustrated perfectly the values of pleasure, *honnetete*, and good taste that were the watchwords of this cultural milieu. Philosophy was also a part of this mix, and during the Regency the young Voltaire was especially shaped by his contacts with the English aristocrat, freethinker, and Jacobite Lord Bolingbroke. Bolingbroke lived in exile in France during the Regency period, and Voltaire was a frequent visitor to La Source, the Englishman's estate near Orleans. The chateau served as a reunion point for a wide range of intellectuals, and many believe that Voltaire was first introduced to natural philosophy generally, and to the work of Locke and the English Newtonians specifically, at Bolingbroke's estate. It was certainly true that these ideas, especially in their more deistic and libertine configurations, were at the heart of Bolingbroke's identity.
### 1.2 The English Period (1726-1729)
Yet even if Voltaire was introduced to English philosophy in this way, its influence on his thought was most shaped by his brief exile in England from 1726 to 1729. The occasion for his departure was an affair of honor. A very powerful aristocrat, the Duc de Rohan, accused Voltaire of defamation, and in the face of this charge the untitled writer chose to save face and avoid more serious prosecution by leaving the country indefinitely. In the spring of 1726, therefore, Voltaire left Paris for England.
It was during his English period that Voltaire's transition into his mature *philosophe* identity began. Bolingbroke, whose address Voltaire left in Paris as his own forwarding address, was one conduit of influence. In particular, Voltaire met, through Bolingbroke, Jonathan Swift, Alexander Pope, and John Gay, writers who were at that moment beginning to experiment with the use of literary forms such as the novel and theater in the creation of a new kind of critical public politics. Swift's *Gulliver's Travels*, which appeared only months after Voltaire's arrival, is the most famous exemplar of this new fusion of writing with political criticism. Later the same year Bolingbroke also brought out the first issue of the *Craftsman*, a political journal that served as the public platform for his circle's Tory opposition to the Whig oligarchy in England. The *Craftsman* helped to create English political journalism in the grand style, and for the next three years Voltaire moved in Bolingbroke's circle, absorbing the culture and sharing in the public political contestation that was percolating all around him.
Voltaire did not restrict himself to Bolingbroke's circle alone, however. After Bolingbroke, his primary contact in England was a merchant by the name of Everard Fawkener. Fawkener introduced Voltaire to a side of London life entirely different from that offered by Bolingbroke's circle of Tory intellectuals. This included the Whig circles that Bolingbroke's group opposed. It also included figures such as Samuel Clarke and other self-proclaimed Newtonians. Voltaire did not meet Newton himself before Sir Isaac's death in March 1727, but he did meet his niece--learning from her the myth of Newton's apple, which Voltaire would play a major role in making famous. Voltaire also came to know the other Newtonians in Clarke's circle, and since he became proficient enough with English to write letters and even fiction in the language, it is very likely that he immersed himself in their writings as well. Voltaire also visited Holland during these years, forming important contacts with Dutch journalists and publishers and meeting Willem 's Gravesande and other Dutch Newtonian savants. Given his other activities, it is also likely that Voltaire frequented the coffeehouses of London even if no firm evidence survives confirming that he did. It would not be surprising, therefore, to learn that Voltaire attended the Newtonian public lectures of John Theophilus Desaguliers or those of one of his rivals. Whatever the precise conduits, all of his encounters in England made Voltaire into a very knowledgeable student of English natural philosophy.
### 1.3 Becoming a *Philosophe*
When French officials granted Voltaire permission to re-enter Paris in 1729, he was devoid of pensions and banned from the royal court at Versailles. But he was also a different kind of writer and thinker. It is no doubt overly grandiose to say with Lord Morley that "Voltaire left France a poet and returned to it a sage." It is also an exaggeration to say that he was transformed from a poet into a *philosophe* while in England. For one thing, these two sides of Voltaire's intellectual identity were forever intertwined, and he never experienced an absolute transformation from one into the other at any point in his life. But the English years did trigger a transformation in him.
After his return to France, Voltaire worked hard to restore his sources of financial and political support. The financial problems were the easiest to solve. In 1729, the French government staged a sort of lottery to help amortize some of the royal debt. A friend perceived an opportunity for investors in the structure of the government's offering, and at a dinner attended by Voltaire he formed a society to purchase shares. Voltaire participated, and in the fall of that year when the returns were posted he had made a fortune. Voltaire's inheritance from his father also became available to him at the same time, and from this date forward Voltaire never again struggled financially. This result was no insignificant development since Voltaire's financial independence effectively freed him from one dimension of the patronage system so necessary to aspiring writers and intellectuals in the period. In particular, while other writers were required to appeal to powerful financial patrons in order to secure the livelihood that made possible their intellectual careers, Voltaire was never again beholden to these imperatives.
The patronage structures of Old Regime France provided more than economic support to writers, however, and restoring the *credit* upon which his reputation as a writer and thinker depended was far less simple. Gradually, however, through a combination of artfully written plays, poems, and essays and careful self-presentation in Parisian society, Voltaire began to regain his public stature. In the fall of 1732, when the next stage in his career began to unfold, Voltaire was residing at the royal court of Versailles, a sign that his re-establishment in French society was all but complete.
During this rehabilitation, Voltaire also formed a new relationship that was to prove profoundly influential in the subsequent decades. He became reacquainted with Emilie Le Tonnelier de Breteuil, the daughter of one of his earliest patrons, who married in 1722 to become the Marquise du Chatelet. Emilie du Chatelet was twenty-nine years old in the spring of 1733 when Voltaire began his relationship with her. She was also a uniquely accomplished woman. Du Chatelet's father, the Baron de Breteuil, hosted a regular gathering of men of letters that included Voltaire, and his daughter, ten years younger than Voltaire, shared in these associations. Her father also ensured that Emilie received an education that was exceptional for girls at the time. She studied Greek and Latin and trained in mathematics, and when Voltaire reconnected with her in 1733 she was a very knowledgeable thinker in her own right even if her own intellectual career, which would include an original treatise in natural philosophy and a complete French translation of Newton's *Principia Mathematica*--still the only complete French translation ever published--had not yet begun. Her intellectual talents combined with her vivacious personality drew Voltaire to her, and although Du Chatelet was a titled aristocrat married to an important military officer, the couple was able to form a lasting partnership that did not interfere with Du Chatelet's marriage. This arrangement proved especially beneficial to Voltaire when scandal forced him to flee Paris and to establish himself permanently at the Du Chatelet family estate at Cirey. From 1734, when this arrangement began, to 1749, when Du Chatelet died during childbirth, Cirey was home to each and the site of an intense intellectual collaboration.
It was during this period that both Voltaire and Du Chatelet became widely known philosophical figures, and the intellectual history of each before 1749 is most accurately described as the history of the couple's joint intellectual endeavors.
### 1.4 The Newton Wars (1732-1745)
For Voltaire, the events that sent him fleeing to Cirey were also the impetus for much of his work while there. While in England, Voltaire had begun to compose a set of letters framed according to the well-established genre of a traveler reporting to friends back home about foreign lands. Montesquieu's 1721 *Lettres Persanes*, which offered a set of fictionalized letters by Persians allegedly traveling in France, and Swift's 1726 *Gulliver's Travels* were clear influences when Voltaire conceived his work. But unlike the authors of these overtly fictionalized accounts, Voltaire innovated by adopting a journalistic stance instead, one that offered readers an empirically recognizable account of several aspects of English society. Voltaire left a draft of the text, originally titled *Letters on England*, with a London publisher before returning home in 1729. Once in France, he began to expand the work, adding to the letters drafted while in England, which focused largely on the different religious sects of England and the English Parliament, several new letters including some on English philosophy. The new text, which included letters on Bacon, Locke, Newton and the details of Newtonian natural philosophy along with an account of the English practice of inoculation for smallpox, also acquired a new title when it was first published in France in 1734: *Lettres philosophiques*.
Before it appeared, Voltaire attempted to get official permission for the book from the royal censors, a requirement in France at the time. His publisher, however, ultimately released the book without these approvals and without Voltaire's permission. This made the first edition of the *Lettres philosophiques* illicit, a fact that contributed to the scandal that it triggered, but one that in no way explains the furor the book caused. Historians in fact still scratch their heads when trying to understand why Voltaire's *Lettres philosophiques* proved to be so controversial. The only thing that is clear is that the work did cause a sensation that subsequently triggered a rapid and overwhelming response on the part of the French authorities. The book was publicly burned by the royal hangman several months after its release, and this act turned Voltaire into a widely known intellectual outlaw. Had it been executed, a royal *lettre de cachet* would have sent Voltaire to the royal prison of the Bastille as a result of his authorship of *Lettres philosophiques*; instead, he was able to flee with Du Chatelet to Cirey where the couple used the sovereignty granted by her aristocratic title to create a safe haven and base for Voltaire's new position as a philosophical rebel and writer in exile.
Had Voltaire been able to avoid the scandal triggered by the *Lettres philosophiques*, it is highly likely that he would have chosen to do so. Yet once it was thrust upon him, he adopted the identity of the philosophical exile and outlaw writer with conviction, using it to create a new identity for himself, one that was to have far-reaching consequences for the history of Western philosophy. At first, Newtonian science served as the vehicle for this transformation. In the decades before 1734, a series of controversies had erupted, especially in France, about the character and legitimacy of Newtonian science, especially the theory of universal gravitation and the physics of gravitational attraction through empty space. Voltaire positioned his *Lettres philosophiques* as an intervention into these controversies, drafting a famous and widely cited letter that used an opposition between Newton and Descartes to frame a set of fundamental differences between English and French philosophy at the time. He also included other letters about Newtonian science in the work while linking (or so he claimed) the philosophies of Bacon, Locke, and Newton into an English philosophical complex that he championed as a remedy for the perceived errors and illusions perpetuated on the French by Rene Descartes and Nicolas Malebranche. Voltaire did not invent this framework, but he did use it to inflame a set of debates that were then raging, debates that placed him and a small group of young members of the Royal Academy of Sciences in Paris into apparent opposition to the older and more established members of this bastion of official French science. Once installed at Cirey, both Voltaire and Du Chatelet further exploited this apparent division by engaging in a campaign on behalf of Newtonianism, one that continually targeted an imagined monolith called French Academic Cartesianism as the enemy against which they in the name of Newtonianism were fighting.
The centerpiece of this campaign was Voltaire's *Elements de la Philosophie de Newton*, which was first published in 1738 and then again in 1745 in a new and definitive edition that included a new section, first published in 1740, devoted to Newton's metaphysics. Voltaire offered this book as a clear, accurate, and accessible account of Newton's philosophy suitable for ignorant Frenchmen (a group that he imagined to be large). But he also conceived of it as a *machine de guerre* directed against the Cartesian establishment, which he believed was holding France back from the modern light of scientific truth. Vociferous criticism of Voltaire and his work quickly erupted, with some critics emphasizing his rebellious and immoral proclivities while others focused on his precise scientific views. Voltaire collapsed both challenges into a singular vision of his enemy as "backward Cartesianism". As he fought fiercely to defend his positions, an unprecedented culture war erupted in France centered on the character and value of Newtonian natural philosophy. Du Chatelet contributed to this campaign by writing a celebratory review of Voltaire's *Elements* in the *Journal des savants*, the most authoritative French learned periodical of the day. The couple also added to their scientific credibility by receiving separate honorable mentions in the 1738 Paris Academy prize contest on the nature of fire. Voltaire likewise worked tirelessly rebutting critics and advancing his positions in pamphlets and contributions to learned periodicals. By 1745, when the definitive edition of Voltaire's *Elements* was published, the tides of thought were turning his way, and by 1750 the perception had become widespread that France had been converted from backward, erroneous Cartesianism to modern, Enlightened Newtonianism thanks to the heroic intellectual efforts of figures like Voltaire.
### 1.5 From French Newtonian to Enlightenment *Philosophe* (1745-1755)
This apparent victory in the Newton Wars of the 1730s and 1740s allowed Voltaire's new philosophical identity to solidify. Especially crucial was the way that it allowed Voltaire's outlaw status, which he had never fully repudiated, to be rehabilitated in the public mind as a necessary and heroic defense of philosophical truth against the enemies of error and prejudice. From this perspective, Voltaire's critical stance could be reintegrated into traditional Old Regime society as a new kind of legitimate intellectual martyrdom. Since Voltaire also coupled his explicitly philosophical writings and polemics during the 1730s and 1740s with an equally extensive stream of plays, poems, stories, and narrative histories, many of which were orthogonal in both tone and content to the explicit campaigns of the Newton Wars, Voltaire was further able to reestablish his old identity as an Old Regime man of letters despite the scandals of these years. In 1745, Voltaire was named the Royal Historiographer of France, a title bestowed upon him as a result of his histories of Louis XIV and the Swedish King Charles XII. This royal office also triggered the writing of arguably Voltaire's most widely read and influential book, at least in the eighteenth century, *Essai sur les moeurs et l'esprit des nations* (1751), a pioneering work of universal history. The position also legitimated him as an officially sanctioned savant. In 1749, after the death of Du Chatelet, Voltaire reinforced this impression by accepting an invitation to join the court of the young Frederick the Great in Prussia, a move that further assimilated him into the power structures of Old Regime society.
Had this assimilationist trajectory continued during the remainder of Voltaire's life, his legacy in the history of Western philosophy might not have been so great. Yet during the 1750s, a set of new developments pulled Voltaire back toward his more radical and controversial identity and allowed him to rekindle the critical *philosophe* persona that he had innovated during the Newton Wars. The first step in this direction involved a dispute with his onetime colleague and ally, Pierre-Louis Moreau de Maupertuis. Maupertuis had preceded Voltaire as the first aggressive advocate for Newtonian science in France. When Voltaire was preparing his own Newtonian intervention in the *Lettres philosophiques* in 1732, he consulted with Maupertuis, who was by this date a pensioner in the French Royal Academy of Sciences. It was largely around Maupertuis that the young cohort of French academic Newtonians gathered during the Newton Wars of the 1730s and 1740s, and with Voltaire fighting his own public campaigns on behalf of this same cause during the same period, the two men became the most visible faces of French Newtonianism even if they never really worked as a team in this effort. Like Voltaire, Maupertuis also shared a relationship with Emilie du Chatelet, one that included mathematical collaborations that far exceeded Voltaire's capacities. Maupertuis was also an occasional guest at Cirey, and a correspondent with both du Chatelet and Voltaire throughout these years. But in 1745 Maupertuis surprised all of French society by moving to Berlin to accept the directorship of Frederick the Great's newly reformed Berlin Academy of Sciences.
Maupertuis's thought at the time of his departure for Prussia was turning toward the metaphysics and rationalist epistemology of Leibniz as a solution to certain questions in natural philosophy. Du Chatelet also shared this tendency, producing in 1740 her *Institutions de physique*, a systematic attempt to wed Newtonian mechanics with Leibnizian rationalism and metaphysics. Voltaire found this Leibnizian turn distasteful, and he began to craft an anti-Leibnizian discourse in the 1740s that became a bulwark of his brand of Newtonianism. This placed him in opposition to Du Chatelet, even if this intellectual rift in no way soured their relationship. Yet after she died in 1749, and Voltaire joined Maupertuis at Frederick the Great's court in Berlin, this anti-Leibnizianism became the centerpiece of a rift with Maupertuis. Voltaire's public satire of the President of the Royal Academy of Sciences of Berlin published in late 1752, which presented Maupertuis as a despotic philosophical buffoon, forced Frederick to make a choice. He sided with Maupertuis, ordering Voltaire to either retract his libelous text or leave Berlin. Voltaire chose the latter, falling once again into the role of scandalous rebel and exile as a result of his writings.
### 1.6 Fighting for *Philosophie* (1755-1778)
This event proved to be Voltaire's last official rupture with establishment authority. Rather than returning home to Paris and restoring his reputation, Voltaire instead settled in Geneva. When this austere Calvinist enclave proved completely unwelcoming, he took further steps toward independence by using his personal fortune to buy a chateau of his own in the hinterlands between France and Switzerland. Voltaire installed himself permanently at Ferney in early 1759, and from this date until his death in 1778 he made the chateau his permanent home and capital, at least in the minds of his intellectual allies, of the emerging French Enlightenment.
During this period, Voltaire also adopted what would become his most famous and influential intellectual stance, announcing himself as a member of the "party of humanity" and devoting himself toward waging war against the twin hydras of fanaticism and superstition. While the singular defense of Newtonian science had focused Voltaire's polemical energies in the 1730s and 1740s, after 1750 the program became the defense of *philosophie* tout court and the defeat of its perceived enemies within the ecclesiastical and aristo-monarchical establishment. In this way, Enlightenment *philosophie* became associated through Voltaire with the cultural and political program encapsulated in his famous motto, "*Ecrasez l'infame!*" ("Crush the infamy!"). This entanglement of philosophy with social criticism and reformist political action, a contingent historical outcome of Voltaire's particular intellectual career, would become his most lasting contribution to the history of philosophy.
The first cause to galvanize this new program was Diderot and d'Alembert's *Encyclopedie*. The first volume of this compendium of definitions appeared in 1751, and almost instantly the work became buried in the kind of scandal to which Voltaire had grown accustomed. Voltaire saw in the controversy a new call to action, and he joined forces with the project soon after its appearance, penning numerous articles that began to appear with volume 5 in 1755. Scandal continued to chase the *Encyclopedie*, however, and in 1759 the work's publication privilege was revoked in France, an act that did not kill the project but forced it into illicit production in Switzerland. During these scandals, Voltaire fought vigorously alongside the project's editors to defend the work, fusing the *Encyclopedie*'s enemies, particularly the Parisian Jesuits who edited the monthly periodical the *Journal de Trevoux*, into a monolithic "infamy" devoted to eradicating truth and light from the world. This framing was recapitulated by the opponents of the *Encyclopedie*, who began to speak of the loose assemblage of authors who contributed articles to the work as a subversive coterie of *philosophes* devoted to undermining legitimate social and moral order.
As this polemic crystallized and grew in both energy and influence, Voltaire embraced its terms and made them his cause. He formed particularly close ties with d'Alembert, and with him began to generalize a broad program for Enlightenment centered on rallying the newly self-conscious *philosophes* (a term often used synonymously with the *Encyclopedistes*) toward political and intellectual change. In this program, the *philosophes* were not unified by any shared philosophy but through a commitment to the program of defending *philosophie* itself against its perceived enemies. They were also imagined as activists fighting to eradicate error and superstition from the world. The ongoing defense of the *Encyclopedie* was one rallying point, and soon the removal of the Jesuits--the great enemies of Enlightenment, the *philosophes* proclaimed--became a second unifying cause. This effort achieved victory in 1763, and soon the *philosophes* were attempting to infiltrate the academies and other institutions of knowledge in France. One climax in this effort was reached in 1774 when the *Encyclopediste* and friend of Voltaire and the *philosophes*, Anne-Robert Jacques Turgot, was named Controller-General of France, the most powerful ministerial position in the kingdom, by the newly crowned King Louis XVI. Voltaire and his allies had paved the way for this victory through a barrage of writings throughout the 1760s and 1770s that presented *philosophie* like that espoused by Turgot as an agent of enlightened reform and its critics as prejudicial defenders of an ossified tradition.
Voltaire did bring out one explicitly philosophical book in support of this campaign, his *Dictionnaire philosophique* of 1764-1770. This book republished his articles from the original *Encyclopedie* while adding new entries conceived in the spirit of the original work. Yet to fully understand the brand of *philosophie* that Voltaire made foundational to the Enlightenment, one needs to recognize that it just as often circulated in fictional stories, satires, poems, pamphlets, and other less obviously philosophical genres. Voltaire's most widely known text, for instance, *Candide, ou l'Optimisme*, first published in 1759, is a fictional story of a wandering traveler engaged in a set of farcical adventures. Yet contained in the text is a serious attack on Leibnizian philosophy, one that in many ways marks the culmination of Voltaire's decades-long attack on this philosophy started during the Newton Wars. *Philosophie a la Voltaire* also came in the form of political activism, such as his public defense of Jean Calas who, Voltaire argued, was a victim of a despotic state and an irrational and brutal judicial system. Voltaire often attached philosophical reflection to this political advocacy, such as when he facilitated a French translation of Cesare Beccaria's treatise on humanitarian justice and penal reform and then prefaced the work with his own essay on justice and religious toleration (Calas was a French protestant persecuted by a Catholic monarchy). Public philosophic campaigns such as these that channeled critical reason in a direct, oppositionalist way against the perceived injustices and absurdities of Old Regime life were the hallmark of *philosophie* as Voltaire understood the term.
### 1.7 Voltaire, *Philosophe* Icon of Enlightenment *Philosophie* (1778-Present)
Voltaire lived long enough to see some of his long-term legacies start to concretize. With the ascension of Louis XVI in 1774 and the appointment of Turgot as Controller-General, the French establishment began to embrace the *philosophes* and their agenda in a new way. Critics of Voltaire and his program for *philosophie* remained powerful, however, and they would continue to survive as the necessary backdrop to the positive image of the Enlightenment *philosophe* as a modernizer, progressive reformer, and courageous scourge against traditional authority that Voltaire bequeathed to later generations. During Voltaire's lifetime, this new acceptance translated into a final return to Paris in early 1778. Here, as a frail and sickly octogenarian, Voltaire was welcomed by the city as the hero of the Enlightenment that he now personified. A statue was commissioned as a permanent shrine to his legacy, and his play *Irene* was given a public performance in a way that allowed its author to be celebrated as a national hero. Voltaire died several weeks after these events, but the canonization that they initiated has continued right up until the present.
Western philosophy was profoundly shaped by the conception of the *philosophe* and the program for Enlightenment *philosophie* that Voltaire came to personify. The model he offered of the *philosophe* as critical public citizen and advocate first and foremost, and as abstruse and systematic thinker only when absolutely necessary, was especially influential in the subsequent development of European philosophy. Also influential was the example he offered of the philosopher measuring the value of any philosophy by its ability to effect social change. In this respect, Karl Marx's famous thesis that philosophy should aspire to change the world, not merely interpret it, owes more than a little debt to Voltaire. The link between Voltaire and Marx was also established through the French revolutionary tradition, which similarly adopted Voltaire as one of its founding heroes. Voltaire was the first person to be honored with re-burial in the newly created Pantheon of the Great Men of France that the new revolutionary government created in 1791. This act served as a tribute to the connections that the revolutionaries saw between Voltaire's philosophical program and the cause of revolutionary modernization as a whole. In a similar way, Voltaire remains today an iconic hero for everyone who sees a positive linkage between critical reason and political resistance in projects of progressive, modernizing reform.
## 2. Voltaire's Enlightenment Philosophy
Voltaire's philosophical legacy ultimately resides as much in how he practiced philosophy, and in the ends toward which he directed his philosophical activity, as in any specific doctrine or original idea. Yet the particular philosophical positions he took, and the way that he used his wider philosophical campaigns to champion certain understandings while disparaging others, did create a constellation appropriately called Voltaire's Enlightenment philosophy. True to Voltaire's character, this constellation is best described as a set of intellectual stances and orientations rather than as a set of doctrines or systematically defended positions. Nevertheless, others found in Voltaire both a model of the well-oriented *philosophe* and a set of particular philosophical positions appropriate to this stance. Each side of this equation played a key role in defining the Enlightenment *philosophie* that Voltaire came to personify.
### 2.1 Liberty
Central to this complex is Voltaire's conception of liberty. Around this category, Voltaire's social activism and his relatively rare excursions into systematic philosophy also converged. In 1734, in the wake of the scandals triggered by the *Lettres philosophiques*, Voltaire wrote, but left unfinished at Cirey, a *Traite de metaphysique* that explored the question of human freedom in philosophical terms. The question was particularly central to European philosophical discussions at the time, and Voltaire's work explicitly referenced thinkers like Hobbes and Leibniz while wrestling with the questions of materialism, determinism, and providential purpose that were then central to the writings of the so-called deists, figures such as John Toland and Anthony Collins. The great debate between Samuel Clarke and Leibniz over the principles of Newtonian natural philosophy was also influential as Voltaire struggled to understand the nature of human existence and ethics within a cosmos governed by rational principles and impersonal laws.
Voltaire adopted a stance in this text somewhere between the strict determinism of rationalist materialists and the transcendent spiritualism and voluntarism of contemporary Christian natural theologians. For Voltaire, humans are not deterministic machines of matter and motion, and free will thus exists. But humans are also natural beings governed by inexorable natural laws, and his ethics anchored right action in a self that possessed the natural light of reason immanently. This stance distanced him from more radical deists like Toland, and he reinforced this position by also adopting an elitist understanding of the role of religion in society. For Voltaire, those equipped to understand their own reason could find the proper course of free action themselves. But since many were incapable of such self-knowledge and self-control, religion, he claimed, was a necessary guarantor of social order. This stance distanced Voltaire from the republican politics of Toland and other materialists, and Voltaire echoed these ideas in his political musings, where he remained throughout his life a liberal, reform-minded monarchist and a skeptic with respect to republican and democratic ideas.
In the *Lettres philosophiques*, Voltaire had suggested a more radical position with respect to human determinism, especially in his letter on Locke, which emphasized the materialist reading of the Lockean soul that was then prominent in radical philosophical discourse. Some readers singled out this part of the book as the major source of its controversy, and in a similar vein the very materialist account of "*Ame*," or the soul, which appeared in volume 1 of Diderot and d'Alembert's *Encyclopedie*, was also a flashpoint of controversy. Voltaire also defined his own understanding of the soul in similar terms in his own *Dictionnaire philosophique*. What these examples point to is Voltaire's willingness, even eagerness, to publicly defend controversial views even when his own, more private and more considered writings often complicated the understanding that his more public and polemical writings insisted upon. In these cases, one often sees Voltaire defending less a carefully reasoned position on a complex philosophical problem than adopting a political position designed to assert his conviction that liberty of speech, no matter what the topic, is sacred and cannot be violated.
Voltaire never actually said "I disagree with what you say, but I will defend to the death your right to say it." Yet the myth that associates this dictum with his name remains very powerful, and one still hears his legacy invoked through the redeclaration of this pronouncement that he never actually declared. Part of the deep cultural tie that joins Voltaire to this dictum is the fact that even while he did not write these precise words, they do capture, however imprecisely, the spirit of his philosophy of liberty. In his voluminous correspondence especially, and in the details of many of his more polemical public texts, one does find Voltaire articulating a view of intellectual and civil liberty that makes him an unquestioned forerunner of modern civil libertarianism. He never authored any single philosophical treatise on this topic, yet the memory of his life and philosophical campaigns nevertheless proved influential in advancing these ideas. Voltaire's influence is palpably present, for example, in Kant's famous argument in his essay "What is Enlightenment?" that Enlightenment stems from the free and public use of critical reason, and from the liberty that allows such critical debate to proceed untrammeled. The absence of a singular text that anchors this linkage in Voltaire's collected works in no way removes the unmistakable presence of Voltaire's influence upon Kant's formulation.
### 2.2 Hedonism
Voltaire's notion of liberty also anchored his hedonistic morality, another key feature of Voltaire's Enlightenment philosophy. One vehicle for this philosophy was Voltaire's salacious poetry, a genre that both reflected in its eroticism and sexual innuendo the lived culture of libertinism that was an important feature of Voltaire's biography. But Voltaire also contributed to philosophical libertinism and hedonism through his celebration of moral freedom through sexual liberty. Voltaire's avowed hedonism became a central feature of his wider philosophical identity since his libertine writings and conduct were always invoked by those who wanted to indict him for being a reckless subversive devoted to undermining legitimate social order. Voltaire's refusal to bow to such charges, and his vigor in opposing them through a defense of the very libertinism that was used against him, also injected a positive philosophical program into these public struggles that was very influential. In particular, through his cultivation of a happily libertine persona, and his application of philosophical reason toward the moral defense of this identity, often through the widely accessible vehicles of poetry and witty prose, Voltaire became a leading force in the wider Enlightenment articulation of a morality grounded in the positive valuation of personal, and especially bodily, pleasure, and an ethics rooted in a hedonistic calculus of maximizing pleasure and minimizing pain. He also advanced this cause by sustaining an unending attack upon the repressive and, to his mind, anti-human demands of traditional Christian asceticism, especially priestly celibacy, and the moral codes of sexual restraint and bodily self-abnegation that were still central to the traditional moral teachings of the day.
This same hedonistic ethics was also crucial to the development of liberal political economy during the Enlightenment, and Voltaire applied his own libertinism toward this project as well. In the wake of the scandals triggered by Mandeville's famous argument in *The Fable of the Bees* (a poem, it should be remembered) that the pursuit of private vice, namely greed, leads to public benefits, namely economic prosperity, a French debate about the value of luxury as a moral good erupted that drew Voltaire's pen. In the 1730s, he drafted a poem called *Le Mondain* that celebrated hedonistic worldly living as a positive force for society, and not as the corrupting element that traditional Christian morality held it to be. In his *Essai sur les moeurs* he also joined with other Enlightenment historians in celebrating the role of material acquisition and commerce in advancing the progress of civilization. Adam Smith would famously make similar arguments in his founding tract of Enlightenment liberalism, *The Wealth of Nations*, published in 1776. Voltaire was certainly no great contributor to the political economic science that Smith practiced, but his wider philosophical campaigns did help make the concepts of liberty and hedonistic morality central to this work both widely known and more generally accepted.
The ineradicable good of personal and philosophical liberty is arguably the master theme in Voltaire's philosophy, and if it is, then two other themes are closely related to it. One is the importance of skepticism, and the second is the importance of empirical science as a solvent to dogmatism and the pernicious authority it engenders.
### 2.3 Skepticism
Voltaire's skepticism descended directly from the neo-Pyrrhonian revival of the Renaissance, and owes a debt in particular to Montaigne, whose essays wedded the stance of doubt with the positive construction of a self grounded in philosophical skepticism. Pierre Bayle's skepticism was equally influential, and what Voltaire shared with these forerunners, and what separated him from other strands of skepticism, such as the one manifest in Descartes, is the insistence upon the value of the skeptical position in its own right as a final and complete philosophical stance. Among the philosophical tendencies that Voltaire most deplored, in fact, were those that he associated most powerfully with Descartes who, he believed, began in skepticism but then left it behind in the name of some positive philosophical project designed to eradicate or resolve it. Such urges usually led to the production of what Voltaire liked to call "philosophical romances," which is to say systematic accounts that overcome doubt by appealing to the imagination and its need for coherent explanations. Such explanations, Voltaire argued, are fictions, not philosophy, and the philosopher needs to recognize that very often the most philosophical explanation of all is to offer no explanation at all.
Such skepticism often acted as bulwark for Voltaire's defense of liberty since he argued that no authority, no matter how sacred, should be immune to challenge by critical reason. Voltaire's views on religion as manifest in his private writings are complex, and based on the evidence of these texts it would be wrong to call Voltaire an atheist, or even an anti-Christian so long as one accepts a broad understanding of what Christianity can entail. But even if his personal religious views were subtle, Voltaire was unwavering in his hostility to church authority and the power of the clergy. For similar reasons, he also grew, as he matured, ever more hostile toward the sacred mysteries upon which monarchs and Old Regime aristocratic society based their authority. In these cases, Voltaire's skepticism was harnessed to his libertarian convictions through his continual effort to use critical reason as a solvent for these "superstitions" and the authority they anchored. The philosophical authority of *romanciers* such as Descartes, Malebranche, and Leibniz was similarly subjected to the same critique, and here one sees how the defense of skepticism and liberty, more than any deeply held opposition to religiosity per se, was often the most powerful motivator for Voltaire.
From this perspective, Voltaire might fruitfully be compared with Socrates, another founding figure in Western philosophy who made a refusal to declaim systematic philosophical positions a central feature of his philosophical identity. Socrates's repeated assertion that he knew nothing was echoed in Voltaire's insistence that the true philosopher is the one who dares not to know and then has the courage to admit his ignorance publicly. Voltaire was also, like Socrates, a public critic and controversialist who defined philosophy primarily in terms of its power to liberate individuals from domination at the hands of authoritarian dogmatism and irrational prejudice. Yet while Socrates championed rigorous philosophical dialectic as the agent of this emancipation, Voltaire saw this same dialectical rationalism at the heart of the dogmatism that he sought to overcome. Voltaire often used satire, mockery and wit to undermine the alleged rigor of philosophical dialectic, and while Socrates saw this kind of rhetorical word play as the very essence of the erroneous sophism that he sought to alleviate, Voltaire cultivated linguistic cleverness as a solvent to the false and deceptive dialectic that anchored traditional philosophy.
### 2.4 Newtonian Empirical Science
Against the acceptance of ignorance that rigorous skepticism often demanded, and against the false escape from it found in sophistical knowledge--or what Voltaire called imaginative philosophical romances--Voltaire offered a different solution than the rigorous dialectical reasoning of Socrates: namely, the power and value of careful empirical science. Here one sees the debt that Voltaire owed to the currents of Newtonianism that played such a strong role in launching his career. Voltaire's own critical discourse against imaginative philosophical romances originated, in fact, with English and Dutch Newtonians, many of whom were expatriate French Huguenots, who developed these tropes as rhetorical weapons in their battles with Leibniz and European Cartesians who challenged the innovations of Newtonian natural philosophy. In his *Principia Mathematica* (1687; 2nd rev. edition 1713), Newton had offered a complete mathematical and empirical description of how celestial and terrestrial bodies behaved. Yet when asked to explain how bodies were able to act in the way that he mathematically and empirically demonstrated that they did, Newton famously replied "I feign no hypotheses." From the perspective of traditional natural philosophy, this was tantamount to hand waving since offering rigorous causal accounts of the nature of bodies in motion was the very essence of this branch of the sciences. Newton's major philosophical innovation rested, however, in challenging this very epistemological foundation, and the assertion and defense of Newton's position against its many critics, not least by Voltaire, became arguably the central dynamic of philosophical change in the first half of the eighteenth century.
While Newtonian epistemology admitted of many variations, at its core rested a new skepticism about the validity of a priori rationalist accounts of nature and a new assertion of brute empirical fact as a valid philosophical understanding in its own right. European natural philosophers in the second half of the seventeenth century had thrown out the metaphysics and physics of Aristotle with its four-part causality and teleological understanding of bodies, motion and the cosmic order. In its place, however, a new mechanical causality was introduced that attempted to explain the world in equally comprehensive terms through the mechanisms of an inert matter acting by direct contact and action alone. This approach led to the vortical account of celestial mechanics, a view that held material bodies to be swimming in an ethereal sea whose action pushed and pulled objects in the manner we observe. What could not be observed, however, was the ethereal sea itself, or the other agents of this supposedly comprehensive mechanical cosmos. Yet rationality nevertheless dictated that such mechanisms must exist since without them philosophy would be returned to the occult causes of the Aristotelian natural tendencies and teleological principles. Figuring out what these point-contact mechanisms were and how they worked was, therefore, the charge of the new mechanical natural philosophy of the late seventeenth century. Figures such as Descartes, Huygens, and Leibniz established their scientific reputations through efforts to realize this goal.
Newton pointed natural philosophy in a new direction. He offered mathematical analysis anchored in inescapable empirical fact as the new foundation for a rigorous account of the cosmos. From this perspective, the great error of both Aristotelian and the new mechanical natural philosophy was its failure to adhere strictly enough to empirical facts. Vortical mechanics, for example, claimed that matter was moved by the action of an invisible agent, yet this, the Newtonians began to argue, was not to explain what is really happening but to imagine a fiction that gives us a speciously satisfactory rational explanation of it. Natural philosophy needs to resist the allure of such rational imaginings and to instead deal only with the empirically provable. Moreover, the Newtonians argued, if a set of irrefutable facts cannot be explained other than by accepting the brute facticity of their truth, this is not a failure of philosophical explanation so much as a devotion to appropriate rigor. Such epistemological battles became especially intense around Newton's theory of universal gravitation. Few questioned that Newton had demonstrated an irrefutable mathematical law whereby bodies appear to attract one another in relation to their masses and in inverse relation to the square of the distance between them. But was this rigorous mathematical and empirical description a philosophical account of bodies in motion? Critics such as Leibniz said no, since mathematical description was not the same thing as philosophical explanation, and Newton refused to offer an explanation of how and why gravity operated the way that it did. The Newtonians countered that phenomenal descriptions were scientifically adequate so long as they were grounded in empirical facts, and since no facts had yet been discerned that explained what gravity is or how it works, no scientific account of it was yet possible.
They further insisted that it was enough that gravity did operate the way that Newton said it did, and that this was its own justification for accepting his theory. They further mocked those who insisted on dreaming up chimeras like the celestial vortices as explanations for phenomena when no empirical evidence existed to support such theories.
The previous summary describes the general core of the Newtonian position in the intense philosophical contests of the first decades of the eighteenth century. It also describes Voltaire's own stance in these same battles. His contribution, therefore, was not centered on any innovation within these very familiar Newtonian themes; rather, it was his accomplishment to become a leading evangelist for this new Newtonian epistemology, and by consequence a major reason for its widespread dissemination and acceptance in France and throughout Europe. A comparison with David Hume's role in this same development might help to illuminate the distinct contributions of each. Both Hume and Voltaire began with the same skepticism about rationalist philosophy, and each embraced the Newtonian criterion that made empirical fact the only guarantor of truth in philosophy. Yet Hume's target remained traditional philosophy, and his contribution was to extend skepticism all the way to the point of denying the feasibility of transcendental philosophy itself. This argument would famously awaken Kant from his dogmatic slumbers and lead to the reconstitution of transcendental philosophy in new terms, but Voltaire had different fish to fry. His attachment was to the new Newtonian empirical scientists, and while he was never more than a dilettante scientist himself, his devotion to this form of natural inquiry made him in some respects the leading philosophical advocate and ideologist for the new empirico-scientific conception of philosophy that Newton initiated.
For Voltaire (and many other eighteenth-century Newtonians) the most important project was defending empirical science as an alternative to traditional natural philosophy. This involved sharing in Hume's critique of abstract rationalist systems, but it also involved the very different project of defending empirical induction and experimental reasoning as the new epistemology appropriate for a modern Enlightened philosophy. In particular, Voltaire fought vigorously against the rationalist epistemology that critics used to challenge Newtonian reasoning. His famous conclusion in *Candide*, for example, that optimism was a philosophical chimera produced when dialectical reason remains detached from brute empirical facts owed a great debt to his Newtonian convictions. His alternative offered in the same text of a life devoted to simple tasks with clear, tangible, and most importantly useful ends was also derived from the utilitarian discourse that Newtonians also used to justify their science. Voltaire's campaign on behalf of smallpox inoculation, which began with his letter on the topic in the *Lettres philosophiques*, was similarly grounded in an appeal to the facts of the case as an antidote to the fears generated by logical deductions from seemingly sound axiomatic principles. All of Voltaire's public campaigns, in fact, deployed empirical fact as the ultimate solvent for irrational prejudice and blind adherence to preexisting understandings. In this respect, his philosophy as manifest in each was deeply indebted to the epistemological convictions he gleaned from Newtonianism.
### 2.5 Toward Science without Metaphysics
Voltaire also contributed directly to the new relationship between science and philosophy that the Newtonian revolution made central to Enlightenment modernity. Especially important was his critique of metaphysics and his argument that it be eliminated from any well-ordered science. At the center of the Newtonian innovations in natural philosophy was the argument that questions of body per se were either irrelevant to, or distracting from, a well focused natural science. Against Leibniz, for example, who insisted that all physics begin with an accurate and comprehensive conception of the nature of bodies as such, Newton argued that the character of bodies was irrelevant to physics since this science should restrict itself to a quantified description of empirical effects only and resist the urge to speculate about that which cannot be seen or measured. This removal of metaphysics from physics was central to the overall Newtonian stance toward science, but no one fought more vigorously for it, or did more to clarify the distinction and give it a public audience than Voltaire.
The battles with Leibnizianism in the 1740s were the great theater for Voltaire's work in this regard. In 1740, responding to Du Châtelet's efforts in her *Institutions de physique* to reconnect metaphysics and physics through a synthesis of Leibniz with Newton, Voltaire made his opposition to such a project explicit in reviews and other essays he published. He did the same during the brief revival of the so-called "vis viva controversy" triggered by Du Châtelet's treatise, defending the empirical and mechanical conception of body and force against those who defended Leibniz's more metaphysical conception of the same thing. In the same period, Voltaire also composed a short book entitled *La Métaphysique de Newton*, publishing it in 1740 as an implicit counterpoint to Du Châtelet's *Institutions*. This tract did not so much articulate Newton's metaphysics as celebrate the fact that he avoided practicing such speculations altogether. It also accused Leibniz of becoming deluded by his zeal to make metaphysics the foundation of physics. In the definitive 1745 edition of his *Éléments de la philosophie de Newton*, Voltaire also appended his tract on Newton's metaphysics as the book's introduction, thus framing his own understanding of the relationship between metaphysics and empirical science in direct opposition to Du Châtelet's Leibnizian understanding of the same. He also added personal invective and satire to this same position in his indictment of Maupertuis in the 1750s, linking Maupertuis's turn toward metaphysical approaches to physics in the 1740s with his increasingly deluded philosophical understanding and his authoritarian manner of dealing with his colleagues and critics.
While Voltaire's attacks on Maupertuis crossed the line into *ad hominem*, at their core was a fierce defense of the way that metaphysical reasoning both occludes and deludes the work of the physical scientist. Moreover, to the extent that eighteenth-century Newtonianism provoked two major trends in later philosophy, first the reconstitution of transcendental philosophy *à la* Kant through his "Copernican Revolution" that relocated the remains of metaphysics in the a priori categories of reason, and second, the marginalization of metaphysics altogether through the celebration of philosophical positivism and the anti-speculative scientific method that anchored it, Voltaire should be seen as a major progenitor of the latter. By also attaching what many in the nineteenth century saw as Voltaire's proto-positivism to his celebrated campaigns to eradicate priestly and aristo-monarchical authority through the debunking of the "irrational superstitions" that appeared to anchor such authority, Voltaire's legacy also cemented the alleged linkage that joined positivist science on the one hand with secularizing disenchantment and dechristianization on the other. In this way, Voltaire should be seen as the initiator of a philosophical tradition that runs from him to Auguste Comte and Charles Darwin, and then on to Karl Popper and Richard Dawkins in the twentieth century.
## 1. Metaethical and normative theological voluntarism
To be a theological voluntarist with respect to some moral status is
to hold that entities have that status in virtue of some act of divine
will. But some instances of this view are metaethical theses; some
instances of it are normative theses.
Consider, for example, theological voluntarism about the status of
acts as obligatory or non-obligatory. One might hold that there is a
single supreme obligation, the obligation to obey God. Every
particular type of act that one might perform thus has its moral
status as obligatory or non-obligatory in virtue of God's having
commanded the performance of acts of that type or God's not
having commanded acts of that type. This is a common version of divine
command theory, according to which all of the more workaday
obligations that we are under (not to steal from each other, not to
murder each other, to help each other out when it would not be
inconvenient, etc.) bind us as a result of the exercise of God's
supreme practical authority.
The view just described is a version of *normative* theological
voluntarism. It is a normative view because it asserts that some
normative state of affairs obtains--namely, the normative state
of affairs *its being obligatory to obey God*. And it is a
version of theological voluntarism because it holds that all other
normative states of affairs, at least those involving obligation,
obtain in virtue of God's commanding activity.
Metaethical theological voluntarism, by contrast, does not assert the
obtaining of any normative state of affairs. (For a contrary view, see
Heathwood 2012, p. 7.) It is possible for one to be a metaethical
theological voluntarist and to hold that no normative states of
affairs obtain. Rather, metaethical theological voluntarists aim to
say something interesting and informative about moral concepts,
properties, or states of affairs; and they want to say something
interesting and informative about them by connecting them to acts of
the divine will. Metaethical theological voluntarists might claim that
(e.g.) obligation is a theological concept, or that the property of
being obligatory is a theological property, or that obligations are
caused immediately by the divine will. But note that none of these
views asserts that there are any obligations.
### 1.1 Theological voluntarism and theism
One does not have to be a theist in order to be a theological
voluntarist. One can affirm normative theological voluntarism or
metaethical theological voluntarism while failing to affirm theism;
atheists and agnostics can be theological voluntarists of either
stripe. With respect to normative theological voluntarism: one might
claim that while it is true that any being that merits the title of
'God' merits obedience, we should not believe that there
is such a being. (Compare: if, through some glitch in promotions,
there happened to be no lieutenants in the army at some time, it would
not cease to be true that privates ought to obey lieutenants. One
could believe that there are no lieutenants while believing that
privates ought to obey their lieutenants.) With respect to metaethical
theological voluntarism: one might claim that, for example, the
concept of obligation is ineliminably theistic, though there is no
God; that God does not exist counts not against metaethical
theological voluntarism but rather against the claim that the concept
of obligation has application. (Compare: one might believe that
'sin' is properly defined as 'offense against
God.' One can clearly affirm this definition while rejecting
God's existence; all that one is committed to thereby is that
there really are no sins.)
### 1.2 Theological voluntarism and moral skepticism
Call a 'moral skeptic' one who disbelieves or withholds
judgment on the claim that any normative state of affairs obtains. One
can affirm metaethical theological voluntarism while being a moral
skeptic; one cannot affirm normative theological voluntarism while
being a moral skeptic. A metaethical theological voluntarist might
claim that no normative state of affairs could be made to obtain
without certain acts of divine will, but because there is no God, or
because there is a God that has not performed the requisite acts of
will, no normative states of affairs obtain. A normative theological
voluntarist cannot, however, be a moral skeptic. Because the normative
theological voluntarist is committed to the obtaining of at least one
normative state of affairs--for example, *its being obligatory
to obey God*--the conjunction of moral skepticism and
normative theological voluntarism is not a coherent combination of
views.
My concern in the rest of this article will be with the metaethical
version of theological voluntarism; any further references to
theological voluntarism are, unless otherwise noted, to the
metaethical version of the position. Theological voluntarism thus
understood is consistent either with the affirmation or with the
denial of theism and moral skepticism. Taking a negative stand on
theism or a positive stand on moral skepticism should not prevent one
from taking seriously theological voluntarism as a philosophical
position. This is an important point, because it is often thought that
theological voluntarism is only for theists, or only for moral
nonskeptics. While it is true that some of the arguments for
theological voluntarism take theism, or the existence of moral
obligations, as premises, not all of them do.
## 2. Metaethical theological voluntarism
Metaethics is concerned with the formulation of interesting and
informative accounts of normative concepts, properties, and states of
affairs; and a metaethics that is a version of theological voluntarism
will formulate such accounts in terms of some acts of divine will.
This statement of the position is highly abstract, but it cannot be
made less abstract without making difficult choices among rival
formulations of the view.
### 2.1 Considerations in Favor
The considerations to be offered in favor of theological voluntarism
are, at this level, similarly abstract. I will discuss three types of
consideration: *historical*, *theological*, and
*metaethical*.
#### Historical considerations in favor of theological voluntarism
Some of the considerations in favor of metaethical theological
voluntarism are historical. Both theists and nontheists have been
impressed by the extent to which at least some moral concepts
developed in tandem with theological concepts, and it may therefore be
the case that there could be no adequate explication of some moral
concepts without appeal to theological ones. On this view, it is not
merely historical accident that at least some moral concepts had their
origin in contexts of theistic belief and practice; rather, these
concepts have their origin essentially in such contexts, and become
distorted and unintelligible when exported from those contexts (see,
for example, Anscombe 1958).
#### Theological considerations in favor of theological voluntarism
Some of the considerations in favor of theological voluntarism have
their source in matters regarding the divine nature. Several such
arguments are summarized in Idziak 1979 (pp. 8-10). Some appeal
to *omnipotence*: since God is both omnipotent and impeccable,
theological voluntarism must be true: for if God cannot act in a way
that is morally wrong, then God's power would be limited by
other normative states of affairs were theological voluntarism not the
case. Some appeal to God's *freedom*: since God is free
and impeccable, theological voluntarism must be true: for if moral
requirements existed prior to God's willing them, requirements
that an impeccable God could not violate, God's liberty would be
compromised. Some appeal to God's status as supremely
*lovable* and *deserving of allegiance*: if theism is
true, then the world of value must be a theocentric one, and so any
moral view that does not place God at its center is bound to be
inadequate. Even if individually insufficient as justifications for
adopting theological voluntarism, collectively they may suggest some
desiderata for a moral view: that God must be at the center of a moral
theory, and, in particular, that the realm of the moral must be
dependent on God's free choices. It seems that any moral theory
that met these desiderata would count as a version of theological
voluntarism.
#### Metaethical considerations in favor of theological voluntarism
A third set of considerations in favor of theological voluntarism has
its source in metaethics proper, in the attempt to provide adequate
philosophical accounts of the various formal features exhibited by
moral concepts, properties, and states of affairs. One might claim,
that is, that theological voluntarism makes the best sense of the
formal features of morality that both theists and nontheists
acknowledge.
Consider first the *normativity* of morals. Both theists and
nontheists have been impressed by the weirdness of normativity, with
its very otherness, and have thought that whatever we say about
normativity, it will have to be a story not about natural properties
but nonnatural ones (cf. Moore 1903, section 13). John Mackie, an
atheist, and George Mavrodes, a theist, have both drawn from this the
same moral: if there is a God, then the normativity of morality can be
understood in theistic terms; otherwise, the normativity of morality
is unintelligible (Mavrodes 1986; Mackie 1977, p. 48). As Robert Adams
has suggested, given the serious difficulties present in understanding
moral properties as natural properties, it is worthwhile taking
seriously the hypothesis that morality is not just a nonnatural matter
but a supernatural one (Adams 1973, p. 105). For the standard
objections against understanding normativity as a nonnatural property
concern our inability to say anything further about that nonnatural
property itself and about our ability to grasp that property (see,
e.g., M. Smith 1994, pp. 21-25). But if morality is to be
understood in terms of God's commands, we can give an
informative account of what these unusual properties are; and if it is
understood in terms of God's commands, then we can give an
informative account of how God, being the creator and sustainer of us
rational beings, can ensure that we can have an adequate epistemic
grasp of the moral domain (Adams 1979a, pp. 137-138).
Consider next the *impartiality* of morals. The domain of the
moral, unlike the domain of value generally, is governed by the
requirements of impartiality. To use Sidgwick's phrase, the
point of view of morality is not one's personal point of view
but rather "the point of view ... of the Universe"
(Sidgwick 1907, p. 382). But, to remark on the perfectly obvious, the
Universe does not have a point of view. Various writers have employed
fictions to try to provide some sense to this idea: Adam Smith's
impartial and benevolent spectator, Firth's ideal observer, and
Rawls' contractors who see the world *sub specie
aeternitatis* come to mind most immediately (Smith 1759, Pt III,
Ch 8; Firth 1958; and Rawls 1971, p. 587). But theological voluntarism
can provide a straightforward understanding of the impartiality of
morals by appealing to the claim that the demands of morality arise
from the demands of someone who in fact has an impartial and supremely
deep love for all of the beings that are morality's proper
objects.
Consider next the *overridingness* of morals. The domain of the
moral, it is commonly thought, consists in a range of values that can
demand absolute allegiance, in the sense that it is never reasonable
to act contrary to what those values finally require. One deep
difficulty with this view, formulated in a number of ways but perhaps
most memorably by Sidgwick (1907, pp. 497-509), is that it is
hard to see how moral value automatically trumps other kinds of value
(e.g. prudential value) when they conflict. But if the domain of the
moral is to be understood in terms of the will of a being who can make
it possible that, or even ensure that, the balance of reasons is
always in favor of acting in accordance with the moral demand, then
the overridingness of morals becomes far easier to explain (Layman
2006; Evans 2013, pp. 29-30).
Consider next the *content* of morals. There is a strong case
to be made that moral judgments cannot have just any content: they
must be concerned, somehow, with what exhibits respect for certain
beings, or with what promotes their interests (cf. Foot 1958, pp.
510-512; M. Smith 1994, p. 40; Cuneo & Shafer-Landau 2014,
pp. 404-407). Theological voluntarism has a ready explanation
for the content of morals being what it is: it is that moral demands
arise from a being that loves its own creation.
So there are some general reasons to think theological voluntarism
promising. The reasons are stronger yet when one is proceeding from
theistic starting points. (This is not trivial, since a number of
theistic philosophers reject theological voluntarism.) But these
reasons, while suggestive, are rather generic: they point to the
promise possessed by theological voluntarism, though they do not fix
for us on a particular formulation of the view. The general schema for
a particular theological voluntarist position is 'evaluative
status *M* stands in dependence relation *D* to divine
act *A*' (cf. Quinn 1999, p. 53, which I follow here
except to substitute the more general 'evaluative' for
Quinn's more specific 'moral'). So there are at
least three choices that have to be made. We need to say something
about what sorts of evaluative statuses depend on God's will. We need
to say something about what the relevant acts of divine will are. And
we need to say something about what the dependence relation is
supposed to be. (These are not independent questions, of course.)
### 2.2 Which evaluative statuses?
A metaethical view can be more or less comprehensive, aiming to cover
more or fewer evaluative statuses. A metaethical view might claim to
provide an account of all evaluative notions, or of all normative
notions, or of all moral notions, or of some set of moral notions.
(Roughly, and taking the notion of an evaluative property as
fundamental: for a notion to be normative is for it to be a certain
sort of evaluative notion, one that is essentially action-guiding; for
a notion to be moral is for it to be a certain sort of normative
notion, one that exhibits impartiality.) No one claims that
theological voluntarism provides an account of all evaluative notions.
The real contenders are the latter three.
There are good reasons to reject the claim that all normative notions
are to be understood in relation to God's will. The main reason
is that, as we will see below, it is important that there be items
with normative statuses independent of God's will in order to
explain how God's will, even if free, is not arbitrary. And it
is not as if the view that some normative statuses are not to be
explained in terms of God's will must be repugnant to a
theocentric metaethics: for, after all, one might understand such
statuses in theological, even if not voluntaristic, terms. Adams, for
example, understands some notions of goodness in terms of likeness to
God, an understanding that is unquestionably theocentric though not
voluntaristic (Adams 1999, pp. 28-38; also Murphy 2011, pp.
148-160).
Most of the current debate over the evaluative statuses to be
explained by theological voluntarism, then, concerns whether the
entire set or only some proper subset of moral statuses is to be
understood in both theological and voluntaristic terms. Quinn's
1978 work offers a theological voluntarist view on which all moral
statuses are to be understood in terms of God's will. But Adams
rejects this view, and Quinn, following Adams and Alston (1990),
eventually rejected it as well. These writers hold that only moral
properties in the "obligation family," properties like
those of *being obligatory*, *being permissible*,
*being required*, and *being right* (where *being
right* involves a constraint on conduct, rather than being merely
fitting), are to be understood in theological voluntarist terms. Call
their view the *restricted* moral view; call the view that all
moral statuses are to be understood in voluntarist terms the
*unrestricted* moral view.
The restricted moral view has been defended with more and less
impressive arguments. (See Murphy 2012.) The less impressive arguments
are those that appeal to the idea that there must be moral properties
that are not explained in terms of God's will in order to deal
with some of the classic objections to theological voluntarism. To
preserve the notion that God is good, one might say, we need to
restrict the aspirations of theological voluntarism to those of
explaining a proper subset of moral notions, leaving the remainder for
an account of God's goodness; or, to make intelligible the
commands that God chooses to give, we need to set aside some group of
moral notions to be explained in other than theological voluntarist
terms, notions that can therefore enter into our account of the
intelligibility of God's choices to give certain commands rather
than others. But these considerations are not, after all, entirely
persuasive. For one might well explain the notion that God is good and
account for the intelligibility of divine commands by appeal to
normative notions that are nonmoral. (See Section 3
below
for further discussion of these arguments.)
More plausible are arguments that suggest that there is something in
particular about obligation that makes it fit for a theological
voluntarist explanation, some feature that is not shared with notions
like moral virtue and moral good. Adams suggests, with some
plausibility, that the notion of obligation is ineliminably social,
that it must involve a relationship between persons, a relationship in
which a demand is made (Adams 1987b, p. 264; also Adams 1999, pp.
245-246; for a nontheistic view defending a similar position,
see Darwall 2006). This feature of obligation makes it different from
notions of goodness and virtue, which do not seem to have this
essentially social element. That obligation is special in this way
does not, of course, show that notions of moral virtue and moral
goodness do not also need to be treated in a theological voluntarist
way. It could be that even if obligation most obviously requires this
treatment, the points made earlier about the promise of theological
voluntarism also extend to other moral notions, even if in a less
pressing way. (See
Section 3.3
for further discussion, and evaluation, of this point.)
There are no obviously decisive reasons for the theological
voluntarist to adhere to either the restricted or the unrestricted
moral view. (For a set of arguments that the restricted theological
voluntarists have failed to avoid objections to theological
voluntarism while leaving the positive arguments for the view intact,
see Murphy 2012.) But theological voluntarists have wanted to say at
least that the properties in the obligation family are to be accounted
for in terms of this view. In the remainder of this article, it will
be assumed that theological voluntarism is about properties in the
obligation family, though we will occasionally consider how the view
could be extended to other moral properties as well.
### 2.3 Which act of divine will?
Assume, then, that theological voluntarism is an account of
obligation-type properties. A second issue concerning the proper
formulation of the view concerns the relevant act of divine will. Is
the requisite act of divine will to be understood as an act of
commanding, or instead as some mental act like choosing, intending,
preferring, or wishing? And if one holds that the act of the divine
will is a mental act, should the mental act to which the theological
voluntarist appeals in order to account for obligation be one whose
object is the action that is made obligatory, or one whose object is
the state of affairs that the action is obligatory? We have, to
simplify matters, three options:
* (1)
That it is obligatory for *A* to φ depends on God's
commanding *A* to φ.
* (2)
That it is obligatory for *A* to φ depends on God's
willing that *A* φ.
* (3)
That it is obligatory for *A* to φ depends on God's
willing that it be obligatory for *A* to φ.
One might think that the central issue here would be to decide between
the speech-act view (1) and the mental acts views (2) and (3); it
might be thought to be less important, an issue of intramural interest
only, to decide between (2) and (3). But this is not right. The
important debate is between (1) and (2). For (3) is, understood in one
way, no competitor with either (1) or (2); and understood differently,
it has little argumentative support.
#### The dispute between (1) and (2)
There is an ongoing debate whether (1) or (2) is the better
formulation of theological voluntarism about obligation. There are
initially plausible points on both sides of the issue. In favor of
(1), one might appeal to the centrality of the image of God as
commander in the Abrahamic faiths. In favor of (2), one might appeal to
the centrality to theistic belief and practice of the idea that doing
God's will is the standard for the moral life.
With only these initial points, there can be no resolution, and so
defenders of these two formulations of theological voluntarism have
sought other argumentative routes. One might try to reduce (2) to
absurdity. One who is rational does not intend what one knows will not
happen; and, on the orthodox conception of God, God is both rational
and omniscient. This entails that God will never intend something that
will not happen. If obligation arises from divine intentions, though,
then no obligations will ever be violated. Since this is absurd, one
should prefer (1) over (2). But defenders of (2) have a plausible
response. First, defenders of (1) are in no better a position than
defenders of (2). For it is a sincerity condition on the giving of
commands that the commander intend that the commanded perform the
action; and so if this objection reduces (2) to absurdity, the only
way that the defender of (1) can avoid having his or her position
reduced to absurdity is by holding that God is not necessarily
sincere. Second, the notion of intention admits of various readings,
and there is a reading of intention suitable for theological
voluntarism that does not have the untoward result that no created
rational beings could ever act contrary to a divine intention. It is
standard to distinguish between God's *antecedent* and
God's *consequent* will: God's consequent will is
God's will absolutely considered, as bearing on all actual
circumstances; God's antecedent will is God's will
considered with respect to some proper subset of actual circumstances.
(To use an example of Aquinas's, one drawn from the discussion
in which the distinction between antecedent and consequent will is
made [*Summa Theologiae*, Ia, Q. 19, A. 6]: while in one way
God wills that all persons be saved, in another way God does not will
that all be saved; indeed, God wills that some persons be damned. What
makes this coherent is that the sense of willing in which God wills
that all be saved is antecedent: prior to a consideration of all of
the particulars of persons' situations, God wills their
salvation; but in light of all of the particulars of persons'
situations--including the circumstance that some persons have
willingly rejected friendship with God--God wills their
damnation.) So while there is a sense in which it is true that all
that God intends must come to pass, this sense is that of consequent
intending rather than antecedent intending. On (2), then, one can say
that the theological voluntarist holds that obligation depends on
certain of God's *antecedent* intentions. (See also
Murphy 1998, pp. 17-21.)
To bring the differences between these views into sharper relief, we
might ask what our considered opinion is in cases in which a divine
antecedent intention that *A* φ and a divine command that
*A* φ pull apart. It is far from clear that it is a real
option for God to command that *A* φ while not intending
that *A* φ. Though this possibility is endorsed
by Wierenga 1983 (p. 390) and is at least entertained in Murphy 1998
(p. 9), for God to issue such a command would be for God to command
insincerely--something that many would be loath to allow. (See
also Adams 1999, p. 260 and Murphy 2002, 2.5). But the other option
appears unproblematic enough. God might intend for humans to act a
certain way while not commanding them to do so. In such a scenario,
one might ask, is obligation engendered? If yes, then it seems to
count in favor of the divine will view (2); if no, then it seems to
count in favor of the divine command view (1).
Adams claims that obligations are not engendered in such cases;
actually made demands are necessary. He offers three reasons for
preferring the command conception in these cases. The first is that
holding that obligation is a matter of divine command rather than
divine will makes possible a distinction between the obligatory and
the supererogatory: we can say that while God's commands somehow
make certain acts obligatory, if God does will that we perform some
act but does not command it, performing that act is supererogatory.
The second is an appeal to the idea that theological voluntarism is a
social conception of obligation: obligation arises in the context of
the social relationship between God and created rational beings. (Not
all versions of theological voluntarism affirm this; see the
discussion of (3) below.) But, Adams says, in social relationships
obligations arise only when demands are actually made. And the third
reason is that there is something unsavory about obligations allegedly
resulting from an act of divine will that is not expressed as a
command: "Games in which one party incurs guilt for failing to
guess the unexpressed wishes of the other party are not nice games.
They are no nicer if God is thought of as a party to them"
(Adams 1999, p. 261).
But the defender of the divine will view (2) has some responses
available. The defender of this view can say, with respect to the
first point, that a divine will view can capture the difference
between the obligatory and the supererogatory not by appeal to the
difference between acts of divine will that are expressed as commands
and those that are not but rather as a difference between distinct
types of act of divine will: the difference, say, between what God
intends that we do and what God merely prefers that we do (cf. Quinn
1999, p. 56). Or, in the alternative, the distinction can be made
within the divine will: one might characterize as obligatory those
actions that God wills that one perform, and as supererogatory those
actions that God wills that one perform *if one is willing to do
so*, so that if one performs the action, one is doing what God
wills, but if one does not, one is not thereby doing something that
God wills that one not do. With respect to the second and third points, the defender of
the divine will view can directly challenge Adams' view that
obligations generated within social relationships must always be
expressed as demands. Spouses, for example, often take themselves to
be obligated by what their spouses intend with respect to their
behavior; indeed, it would be unseemly to hold oneself to be bound by
one's spouse's will only if the spouse has actually made a
demand on one. ("How can you blame me for not helping you empty
the dishwasher? You didn't tell me to!" does not often go
over well.) One often wants another to perform some action without
being told to; many actions have their value only through being
performed without being prompted by a command. But, on (1), no act of
the form 'φ-ing, though God has not told me to φ'
could ever be obligatory.
Here is a thought experiment that may help to decide the dispute
between these two camps. For it to be possible for one to give another
a command to φ, there must be a linguistic practice available to
the addressee in terms of which the speaker can formulate a command.
This is not just for the sake of having the means to communicate a
command: rather, commands are essentially linguistic items, and cannot
be defined except in such terms. Imagine, though, that a certain
created rational being, Mary, inhabits a linguistic community in which
there is no practice of commanding. One can successfully make
assertions to Mary, and among these assertions can be assertions about
one's own psychological states, but one cannot successfully
command Mary to do anything. Here is the question: so long as
Mary's linguistic resources are confined to those afforded by
this practice, can God impose obligations on her? The defender of (1)
will have to say No: for Mary cannot be commanded to do anything. The
defender of (2) will have to say Yes: God could have an antecedent
intention that Mary perform some action and (to sidestep worries about
being under an obligation that one cannot know about) could inform
Mary that God has that intention with respect to her conduct.
The debate between defenders of (1) and defenders of (2) is ongoing,
and at present far from conclusive. (For some recent interventions
into this debate, see Mann 2005b, Miller 2009a, and Jordan 2012.)
#### Option (3)
The other formulation of theological voluntarism that we noted is that
in which the act of divine will is that of willing that the state of
affairs that it is obligatory for *A* to φ obtain. Unlike
the formulations (1) and (2), which admit of various sorts of
dependence relationship between the act of divine will and the
obligation, (3) is limited to something like a causal picture. (It
obviously could not be that *its being obligatory for A to
φ* is identical to *God's willing that it be
obligatory for A to φ*, on pain of a vicious regress.) The
idea expressed here is that ultimately all obligations are present
because of efficacious acts of the divine will, in particular, acts of
willing that those obligations be in force.
This account is compatible with (1) and (2), because it could be that
the way that God makes it the case that an act is obligatory is
necessarily through the giving of commands (as in (1)) or through
antecedently intending (as in (2)) the performance of that action. So
it is not really, in its most general form, a *competitor* to
(1) and (2). It could be made a competitor by adding claims about the
way that the divine will brings it about that these obligations
obtain. A defender of (3) might add that on his or her view the divine
intention that *A* be obligated to φ is the immediate,
total, and exclusive cause of its being obligatory for *A* to
φ (cf. Quinn 1999, p. 55). If so, then a divine command that
*A* φ or a divine antecedent intention that *A*
φ could not be partial or mediate causes of its being obligatory
for *A* to φ. But note that even thus strengthened (3) is
compatible with (1) or (2) understood as an identity claim: if the
claim is that obligations just are divine commands or divine
intentions, then the compatibility with (1) and (2) is
reestablished.
Even if it turns out that (3) is not an obvious competitor with (1)
and (2), it is still worth asking whether it is true. Though Quinn
eventually rejected (3), at one point he argued for it by appeal to
*divine sovereignty*: because every state of affairs that obtains
that does not involve God's existing depends on God's
will, a fortiori every normative state of affairs that does not
involve God's existing depends on God's will. (Quinn
thought that *its being obligatory to obey God* is a state of
affairs that involves God's existing [Quinn 1990, pp.
298-299], but for a reason to reject that claim see Murphy 1998,
pp. 12-13.) It does not seem, though, that this argument would
support (3) in the strengthened version that holds that the dependence
must be immediate, total, and exclusive. After all, very few folks
want to say that every state of affairs that is brought about by
God's will is brought about by God's will exclusively,
totally, and immediately. While it is plausibly part of theism that
every state of affairs that obtains, apart from those that involve
God, is somehow dependent on God's will, this does not show that
deontic states of affairs are more interestingly connected to the
divine will than states of affairs involving mathematics, or physics,
or accounting (Murphy 1998, pp. 14-16).
We may put (3) to the side, then. While some formulations of it may
very well be true, those formulations for which there is argumentative
support do not establish much in the way of interesting metaethical
conclusions. The debate concerning whether (1) or (2) is the more
adequate formulation of theological voluntarism is ongoing, and we
should thus proceed in a way that is as far as possible neutral
between the two (though admittedly unwieldy) by saying that
*A*'s moral obligations to φ depend on God's
commands/intentions that *A* φ. (By
'intentions' I will mean antecedent intentions.) Allowing
for both of these possibilities, what can be said about the
relationship of dependence holding between divine commands/intentions
and moral obligations?
### 2.4 What sort of dependence?
The third issue that must be dealt with in providing a formulation of
theological voluntarism is that of the specification of the dependence
relationship that holds between divine commands/intentions that
*A* φ and the moral obligation of *A* to φ. There have been
several options considered whose nature and merits are worth
discussing here. On an *analysis* view, it is part of the
meaning of 'it is morally obligatory for *A* to
φ' that God commands/intends that *A* φ. On a
*reduction* view, the state of affairs *its being obligatory
for A to φ* is the state of affairs *God's
commanding/intending that A φ*. On a *supervenience*
view, *its being obligatory for A to φ* supervenes on
*God's commanding/intending that A φ*. On a *causal*
view, necessarily, *its being obligatory for A to φ* is
caused by *God's commanding/intending that A φ*, and
necessarily, *God's commanding/intending that A φ*
causes it to be obligatory for *A* to φ.
#### Causation
The causal view is defended by Quinn (1979, 1990, 1999), and in a
particularly strong form: on Quinn's view, the causal connection
between *God's antecedently intending that A φ* and
*its being obligatory for A to φ* exhibits totality,
exclusivity, activity, immediacy, and necessity.
There are at least three serious difficulties for the causation
formulation. The first we may call the 'Humean worry'.
Once we allow that *its being morally obligatory to φ* is
distinct from *God's commanding/intending φ-ing*,
there is the question of what reason we would have
for thinking that *its being morally obligatory to φ*
necessarily obtains if *God's commanding/intending
φ-ing* obtains. And whatever answer the defender of this
view offers, it must be consistent with the causation formulation of
theological voluntarism. But it is unclear what would do the trick.
One way to try to make this necessary connection is by holding that
there is a prior moral obligation to obey God; and so, whenever God
gives a command/has an intention that one perform some action, it
follows that one is morally obligated to perform the action
commanded/intended. But we cannot take this route, because if the
causation formulation is correct, then *all* moral obligations
are caused entirely by God's commanding/intending activity;
there cannot be, then, this *prior* moral obligation to obey
God that would serve as part of the *explanans* for the
necessary connection between divine commands/intentions and moral
obligations.
The causal view that is a version of (2) (that is, that *its being
obligatory for A to φ* depends causally on *God's
willing that A φ*) should be distinguished carefully from the
causal view that is a version of (3) (that is, that *its being
obligatory for A to φ* depends causally on
*God's willing that it be obligatory for A to
φ*). The causal formulation of (3) has at least some
plausibility as a result of God's sovereignty and
omnipotence--though it is in the end unclear why we should move
from the claim that God is the ultimate source of all being to the
claim that, for all deontic states of affairs, God's willing
that that deontic state of affairs obtain is the *immediate*,
*total*, and *exclusive* cause of its obtaining. Most of
us would not, after all, find intuitively compelling a move from the
claim that God is ultimate source of all being to the claim that, for
all *physical* states of affairs, God's willing that that
physical state of affairs obtain is the immediate, total, and
exclusive cause of its obtaining. (That view is
'occasionalism', and occasionalism is a distinctly
minority view even among theists; see Murphy 2011 (pp. 140-142)
for a comparison between theological voluntarism and occasionalism.)
The causal view as an instance of (2), though, seems to have even less
in the way of argumentative support. Why would one think that
God's intending that *A* φ is an immediate, total,
and exclusive cause of a deontic state of affairs' obtaining? In
the absence of some evidence for such a connection, it is hard to see
why one would be attracted to this formulation of theological
voluntarism.
The second worry about the causal formulation I will call the
'lack of precedent worry'. Moral properties and states of
affairs supervene on nonmoral properties and states of affairs. The
intuitive idea is that there can be no differences in moral status
without some difference in nonmoral status. The causal formulation
satisfies the supervenience constraint--the differences in
nonmoral status concern God's commands/intentions--but it
does so in a way that is unprecedented and mysterious. When we look at
the specific ways in which changes in nonmoral facts can make a
difference to the moral facts that hold, there is a pretty limited
number of intelligible relationships that can hold between these
nonmoral facts and the moral facts that supervene on them. A nonmoral
fact can be part of what constitutes a reason to perform an action.
(That you promised to φ can be cited in explaining why you have a
reason to φ; your promising to φ constitutes, at least in part,
the reason that you have to φ.) It can be part of an enabling
condition for that reason. (The existence of a social practice of
promising can be cited in explaining why you have a reason to φ;
the existence of that practice might explain why your promise has the
reason-giving force that it has.) It can be cited as a
defeater-defeater for a reason. (While the fact that the promisee told
you that you need not fulfill your promise to φ typically releases
you from your promise to φ, the fact that you threatened to beat up
the promisee if he or she did not tell you that you need not fulfill
your promise invalidates that release, and can be cited in explaining
why you have a reason to φ.) But while theological voluntarism
holds that a fact--the fact that God commands/intends that one
φ--explains why one has a reason (in this case, an
obligation) to φ, the causal view holds that this fact falls into
none of the familiar explanatory categories: it is not constitutive of
the reason, it is not an enabling condition for the reason, it is not
a defeater-defeater for the reason. The way that the fact is supposed
to explain the reason is *merely causal*: it just brings the
reason about, exclusively, totally, immediately. This is an entirely
unfamiliar phenomenon: nowhere else do we encounter a merely causal
connection between a nonmoral fact and a moral one. (The appeal to the
very strangeness of divine causation itself is not sufficient to
answer the objection. For there is an extra strangeness here: that the
relationship between nonmoral and moral facts is in every case with
which we are familiar a rational relationship, whereas on the causal
formulation of theological voluntarism the relationship is merely
causal. Creation *ex nihilo* does not constitute carte blanche
to multiply strangenesses. Interestingly, though, Wielenberg 2018 (p.
18), which rejects the project of theistic ethics, appeals to just
this sort of causation to explain the supervenience of the moral on
the non-moral.)
The third worry is the 'no authority worry'. Theological
voluntarism can be defended on the basis of considerations proper to
metaethics--that, for example, theological voluntarism provides
the best explanation for the impartiality of morals, or for its
overridingness, or for its normativity, or for its content. But
theological voluntarists have tended to argue that theological
voluntarism has something specific to offer to theists. One of these
benefits on offer is that theological voluntarism fits well with the
centrality of the virtue of obedience in theistic thought and practice
(Quinn 1992, p. 510; Adams 1973, pp. 99-103). God is a being who
is *to be obeyed*, is someone who is a *practical
authority* over us.
For one to be a practical authority over another is, at least, for one
to have some sort of control over others' reasons for action.
Whatever else practical authority is, it is the ability to make a
difference with respect to someone's reasons to act. The control
involved in practical authority is, however, of a specific sort: it is
*constitutive* control. When a party is an authority over
another, his or her dictates constitute, at least in part, reasons for
action for that other. (One piece of evidence for this is that when we
take *A* to be an authority over us, we will cite
'*A* told us to' as a reason for action.) But if
God's commands to φ have merely causal power to bring about
obligations to φ, then the resultant state of affairs that is the
reason for action is *its being obligatory to φ*--a
state of affairs that need not be in any way constituted by
God's issuing any commands. No version of theological
voluntarism that is built simply around God's causal role in
actualizing moral obligations implies that God is a practical
authority. (See Murphy 2002, 4.3.)
#### Supervenience
The supervenience account is defended by Hare (2001). Suppose that we
continue to interpret supervenience intuitively as the
no-difference-in-moral-properties-without-some-difference-in-nonmoral-properties
thesis. We can see very quickly that the theological voluntarist has
to say something more about the sort of supervenience he or she has in
mind in order to present what is genuinely a theological voluntarist
account of moral obligation. For suppose that one puts forward a view
on which both of the following claims are true: the moral law does not
depend on, nor is it identical with, God's commands; but God
necessarily commands us to follow the moral law. While it is obvious
that this is not a version of theological voluntarism at
all--moral obligation in no way depends on divine
command--it satisfies the intuitive description of what is
involved in the supervenience of the moral on the nonmoral: for there
could be, on this view, no differences in moral status without some
difference in divine commands. So if one is to put forward a
supervenience formulation of theological voluntarism, then one will
have to either be a little bit more doctrinaire about the
supervenience relationship, so that it will exclude the nonvoluntarist
view just described, or one will have to say more than that moral
obligations supervene on divine commands. For our purposes here, they
come to the same thing: that there is something more to the
supervenience formulation of theological voluntarism than the claim
that there are no differences in agents' moral obligations
without some differences in the divine commands that have been imposed
on that agent.
What is called for here is, pretty obviously, just some particular
relationship of ontological dependence. It will not be that of
causation, for reasons we have already examined. But neither does the
defender of the supervenience view want it to be the extreme
dependence of moral obligations on divine commands affirmed by the
reduction formulation, on which moral obligations *just are*
divine commands. To avoid collapse into the reduction formulation, it
has to hold that moral obligations are distinct from divine commands.
It can make this distinction in one of two ways. It could say that
moral obligation is wholly distinct from divine command--that is,
that the state of affairs *its being morally obligatory to
φ* is not constituted even in part by *God's
commanding φ-ing*. Or it could say that moral
obligation is only partially constituted by divine command--that
is, that the state of affairs *its being obligatory to φ*,
while not identical with *God's commanding
φ-ing*, includes the state of affairs *God's
commanding φ-ing* (and some other state of affairs
besides). Let us consider each of these possibilities in turn.
Suppose first that the defender of the supervenience view affirms that
moral obligation is wholly distinct from divine command. If so, then
all of the arguments that were raised against the causation
formulation can be leveled against the supervenience view. The no
authority issue will arise. Because the states of affairs *its
being obligatory to φ* and *God's commanding
φ-ing* will be distinct, the supervenience account lacks
the resources to underwrite divine authority. For God is authoritative
only if God's commands are themselves reasons for action, but if
the states of affairs *its being obligatory to φ* and
*God's commanding φ-ing* are distinct, then
God's commands will not be themselves reasons for action on the
adequately strengthened supervenience view. And if these commands are
not themselves reasons for action, then God does not constitutively
actualize reasons for action by His commands; and if God does not
constitutively actualize reasons for action by His commands, then God
is not authoritative. The lack of precedent issue will arise. For the
adequately strengthened supervenience view cannot view obligations as
constituted by divine commands, and no theological voluntarist worthy
of the name will see God's commands as merely enablers or
defeater-defeaters for obligations; and so the relationship between
divine commands and moral obligations is bound to be unprecedented and
mysterious. And the Humean issue will arise. For the causation view
is, after all, just the adequately strengthened supervenience view
plus the claim that the dependence relationship involved in a
particular sort of causal dependence. So, understood as affirming a
dependence relationship between wholly distinct moral obligations and
divine commands, the supervenience view has all of the problems of the
causation view.
So the only hope for the supervenience formulation is to hold that
God's commands are proper parts of moral obligations: for if
those commands are identical with moral obligations, then the
supervenience view collapses into the reduction view, and if moral
obligations are wholly distinct from divine commands, then the
supervenience view fails for the reasons that the causation view
fails. There are, however, serious difficulties for this partial
constitution version: in particular, if one is committed to saying
that *God's commanding* φ*-ing* partly
constitutes *its being morally obligatory to* φ, it is hard
to see what state of affairs the theological voluntarist would be
tempted to say is also necessary for moral obligation to be fully
constituted. Obviously this other state of affairs cannot be one that
involves moral obligation, on pain of circularity. (So, the
theological voluntarist cannot say that *its being morally
obligatory to* φ just is the complex state of affairs
consisting both of *God's commanding* φ*-ing*
and *its being morally obligatory to do what God commands*.)
Further, in order to remain faithful to the basic idea of the
supervenience version, we would have to say that any state of affairs
that is held to constitute *its being morally obligatory to*
φ along with *God's commanding* φ*-ing*
must be a state of affairs that is certain to obtain if
*God's commanding* φ*-ing* obtains. Otherwise,
it might be the case that *its being morally obligatory to*
φ does not supervene on *God's commanding*
φ*-ing*, for there would be two possible worlds, in both of
which *God's commanding* φ*-ing* obtains, but
in only one of which does *its being morally obligatory to*
φ obtain. This runs contrary to even the basic idea of the
supervenience view, on which there are no differences in moral
obligations without a difference in divine commands.
These limitations make it hard to imagine what a motivated version of
this form of the supervenience view would look like. We have to
imagine a view of the following form. It is nonnegotiable that the
state of affairs *its being morally obligatory to* φ is
partially constituted by *God's commanding*
φ*-ing*. It is nonnegotiable that there is, apart from
*God's commanding* φ*-ing*, at least one state
of affairs S that partially constitutes *its being morally
obligatory to* φ. It is nonnegotiable that S either obtains
necessarily or at the very least necessarily obtains if
*God's commanding* φ*-ing* obtains (otherwise
moral obligation would not supervene on divine command). And it is
nonnegotiable that S not involve moral obligation. The only remotely
plausible candidates for S that come to mind are normative states of
affairs that fall short of the obligatory, for example,
φ*-ing's being good* (or *virtuous*, or
*praiseworthy*). One might say, for example, that *its being
morally obligatory to* φ is constituted jointly by
*God's commanding* φ*-ing* and
φ*-ing's being virtuous*. But it is unclear what
motivation one would have for affirming such a position. It cannot be
for the sake of making sure that God cannot impose a moral obligation
to do something that is not virtuous: for, ex hypothesi, we know
already that φ*-ing's being virtuous* obtains
whenever *God's commanding* φ*-ing* obtains,
for otherwise *its being morally obligatory to* φ would not
supervene on *God's commanding* φ*-ing*.
The difficulty that faces the defender of the supervenience view can
be framed as a dilemma. If the defender of that view holds that the
state of affairs *its being morally obligatory to* φ is
wholly distinct from *God's commanding*
φ*-ing*, then he or she is refuted by the considerations
that refute the causation view. If, on the other hand, the defender of
the supervenience view holds that the state of affairs *its being
morally obligatory to* φ is partially but not wholly
constituted by *God's commanding* φ*-ing*,
then there is pressure to explain why he or she does not simply affirm
the reduction view, on which *its being morally obligatory to*
φ just is *God's commanding* φ*-ing*.
Unless the defender of the supervenience view identifies the state of
affairs that, in addition to *God's commanding*
φ*-ing*, makes for a moral obligation to φ, his or
her unwillingness to adopt the reduction view will look unmotivated
and arbitrary.
#### Analysis
According to the analysis view, defended in Adams 1973, the concept of
the morally obligatory is to be analyzed as that of being commanded by
a loving God. Adams did not put this view forward as an account of the
meaning of 'obligation' generally, but only of its meaning
as employed in Judeo-Christian moral discourse. As evidence for this
analysis, Adams appealed to the freedom with which users of that
discourse moved between claims of the form '*x* is
obligatory' and '*x* is God's will' or
'*x* is God's command.'
There are a couple of central difficulties for this position. The
first is that it seems to imply that those inside and those outside
the Judeo-Christian practice of moral discourse have never disagreed
when one has affirmed a claim of the form 'φ-ing is
obligatory' and the other denied a claim of that form. For they
do not, on Adams' account, mean the same thing when they use
these terms. In atheistic moral discourse, a masterful user of the
language can say 'it is not true that God has commanded
φ-ing, but φ-ing is nonetheless obligatory'; in
Judeo-Christian moral discourse, on Adams' view, one would show
oneself to be either unintelligible or not a masterful user of moral
language if one were to speak thus. Adams was aware of this
difficulty, and attempted to mitigate it: he argued that the agreement
over which items the term 'obligatory' applied to, and the
appropriate attitudinal and volitional responses to those things
correctly described as 'obligatory,' made possible
substantive moral discourse (Adams 1973, pp. 116-120). But all
this seems to do is to explain how a simulacrum of genuine moral
discourse is preserved; it does not show that what we get is the real
thing.
The second difficulty is that of dealing with those within the
Judeo-Christian tradition of moral discourse who employed or continue
to employ moral language in a way that is out of step with
Adams' analysis. Now, the fact that users of a term have
questioned or even rejected a suggested analysis of it is not
sufficient to refute that analysis. But if we take the task of analyzing terms to
be that of making explicit and systematizing the platitudes employing
that term affirmed by masterful users of that term (Smith 1994, pp.
29-32), and we note that many thoughtful Jews and Christians who
otherwise appear to be masterful users of the language of moral
obligation have rejected, either explicitly or implicitly, the notion
that an act is obligatory if and only if it has been commanded by God,
then we would have some reason to doubt whether the analysis
formulation of theological voluntarism is defensible.
#### Reduction
Adams' maneuver in the face of these difficulties was to move
from the analysis to the reduction version of theological voluntarism.
He decided that the meaning of the term 'morally
obligatory' was common to theists and nontheists. There is a
common concept of the morally obligatory, a common concept that makes
possible substantive agreement and disagreement between theists and
nontheists. This common concept is neutral between theism and
nontheism. But, following the now standard Kripke-Putnam line, Adams
affirms that there are necessary a posteriori truths, among which are
included property identifications. He argued that the property
*being wrong* is identical to the property *being contrary
to the commands of (a loving) God* because the property *being
contrary to the commands of (a loving) God* best fills the role
assigned by the concept of wrongness (Adams 1979a, pp. 133-142;
see also Adams 1999, pp. 252-258). By conceptual analysis alone
we can know only that wrongness is a property of actions (and perhaps
intentions and attitudes); that people are generally opposed to what
they regard as wrong; that wrongness is a reason, perhaps a conclusive
reason, for opposing an act; and that there are certain acts (e.g.
torture for fun) that are wrong. But given traditional theistic
beliefs, the best candidate property to fill the role set by the
concept of wrongness is that of being contrary to (a loving)
God's commands. For that property is an objective property of
actions. Further, given Christian views about the content of
God's commands, this identification fits well with widespread
pre-theoretical intuitions about wrongness; and given Christian views
about human receptivity to divine communication and God's
willingness to communicate both naturally and supernaturally,
God's commands have a causal role in our acquisition of moral
knowledge (Adams 1979a, p. 139; see also Adams 1999, p. 257).
The reduction formulation avoids the most troublesome implications of
the analysis formulation, for it allows that there is a common concept
of obligation, so that those within the Judeo-Christian tradition and
those outside it can engage in moral debate and can have substantive
agreements and disagreements with each other, and so that those within
the Judeo-Christian tradition can raise substantive questions about
the relationship between God and obligation without ipso facto
excluding themselves from the class of masterful users of the moral
concepts of that community. The reduction formulation allows that the
concept of obligation may be nontheistic while the property that best
fills the role assigned to it by that concept is a theistic one.
#### Analysis vs. reduction
Nevertheless, it remains an open question whether the reduction view
is superior to the analysis view. One might argue that Adams'
analogy to 'H₂O is water' is inappropriate, as
the identification of water with H₂O is clearly a
posteriori, whereas the identification of the morally obligatory with
the commanded by God is a priori. For if Adams is right in his
characterization of the concept of obligation, it is not as if those
who do not have the ability to infer from 'this is morally
obligatory' to 'this is commanded by a loving God'
(and vice versa) are just missing out on an interesting extra fact,
the way that those without rudimentary chemistry are missing out on an
interesting extra fact if they do not know that water is
H₂O. The term 'water' can play its role in our
practical lives perfectly well without our knowing that it is
H₂O. The term 'morally obligatory' cannot play
its role in our practical lives without our knowing that the morally
obligatory is the commanded by God. No unintelligibility creeps into
the life of agents that do not grasp that water is H₂O;
unintelligibility creeps into the life of agents that do not grasp
that the morally obligatory is what is commanded by God.
Why might one think that the masterful use of 'morally
obligatory' requires recognition that the morally obligatory is
what is commanded by God? If Adams is right, it is part of the meaning
of obligation that obligations are social in character (Adams 1999, p.
233) and involve actually made demands by one party in the social
relationship on another (Adams 1999, pp. 245-246). It is the
fact that a demand is actually made that gives sense to the notion
that one has to perform an action, rather than merely that it would be
good, even the best, to do it (Adams 1999, p. 246). But if it is part
of the meaning of 'morally obligatory' that one is part of
a certain social relationship in which demands are actually made, then
it is no longer just an interesting further fact that the property
that best answers to the concept 'morally obligatory' is
the property *being commanded by God*. Rather, one who denies
that there is a God or that God actually makes demands on human beings
must fail to use the term 'morally obligatory'
masterfully. For think of the other marks of the moral, especially
those of impartiality and overridingness. For one to think of an act
as obligatory is for one to think of it as being actually imposed on
one as a demand; for every obligation, on Adams' view, there is
someone who imposes that obligation by commanding. It is clear a
priori that the only being that could impose the sort of obligation
that could plausibly be classified as moral would be God. How, then,
could one be a masterful user of 'moral obligation'
without grasping that moral obligations are demands imposed by
God?
This analysis view would not, unlike Adams' earlier formulation,
require the subdivision of linguistic communities. One could say that
the meaning of 'morally obligatory' includes 'being
commanded by God,' for both theists *and* nontheists.
Those who do not grasp that it is of the essence of obligations to be
divinely commanded--whether theists or nontheists--fail to
be masterful users of the language of moral obligation. To embrace
this view is to return to the position of Anscombe 1958, according to
which we should hold that the concept of obligation is inherently
theological. On this view, we should not allow that Judeo-Christian
moral practice has a different concept of obligation. Rather, the
theological understanding of obligation is the authentic one, and
nontheological concepts of obligation are unintelligible truncations.
## 3. Perennial difficulties for metaethical theological voluntarism
Apart from the difficulties that must be handled by particular
formulations of theological voluntarism, there are a number of
objections that have been levelled against theological voluntarist
views as such. A wide variety of these objections are helpfully
discussed in Quinn 1999 (pp. 65-71) and Evans 2013 (pp.
88-117). Here I will consider only two objections, but they are
the two that are characteristically taken to be the most powerful
perennial objections to theological voluntarism: first, that
theological voluntarism is incompatible with any substantive sense in
which God is good; second, that theological voluntarism entails the
arbitrariness of morality. While these objections have been answered
plausibly in recent formulations of theological voluntarism, the way
that these objections have been answered leaves theological
voluntarists open to a different objection: that theological
voluntarism is not adequately motivated as a philosophical position,
even for theists. I conclude with a brief discussion of this
worry.
### 3.1 Theological voluntarism and God's goodness
God is, by definition, good. This is both a fixed point concerning
God's nature and a plausibility-making feature of theological
voluntarism. If one were to deny that God is good (understood *de
dicto*--that is, 'if there is a being that qualifies as
God, then that being is good'), one would call one's own
competence in use of the term 'God' into question. And
even if it were allowed that one can employ the term 'God'
masterfully while denying that God is good, if one were to deny that
God is good, then one would undercut one's capacity to defend
theological voluntarism. For theological voluntarism is plausible only
if God is an exalted being; but a being that is not good is not an
exalted being.
That God is good is a fixed point for theistic discourse in general
and for theological voluntarism in particular, and this fact provides
the basis for a common objection to theological voluntarism: that theological
voluntarism makes it impossible to say that, in any substantive sense,
God is good. The most straightforward formulation of the objection is
as follows. For God to be good is for God to be morally good. But if
moral goodness is to be understood in theological voluntarist terms,
then God's goodness consists only in God's measuring up to
a standard that God has set for Godself. While this is perhaps an
admirable resoluteness--it is, other things being equal, a good
thing to live up to your own standards--it is hardly the sort of
thing that provokes in us the admiration that God's goodness is
supposed to provoke.
Now, one might dispute the claim that if God's goodness consists
simply in God's living up to a standard that God has set for
Himself, then that goodness is far less admirable than we would have
supposed. (See, for a nice discussion of this issue, Clark 1982, esp.
pp. 341-343.) Suppose, though, that we grant this part of the
argument. How powerful is the objection from God's goodness
against theological voluntarism?
As we noted earlier, theological voluntarism comes in a variety of
strengths. One dimension along which a theological voluntarist view
might be assessed as stronger or weaker is in terms of the range of
normative properties that it attempts to account for in theological
voluntarist terms. The strength of the objection from God's
goodness is directly proportional to the size of the range of
normative properties that one wishes to explain in theological
voluntarist terms (see also Alston 1990). If one wishes only to
account for a proper subset of moral notions, such as obligation, with
one's theological voluntarism, then the objection from
God's goodness is very weak; if one wishes to provide a sweeping
account of normativity in theological voluntarist terms, then the
objection is much stronger.
Suppose, for example, that one defends a version of theological
voluntarism that accounts only for obligation. If moral obligation
only is dependent on acts of the divine will, one can appeal to moral
notions other than deontic ones in order to provide a substantive
sense in which God is good. Granting to some extent the force of the
objection, we can say, on this view, that God's moral goodness
cannot consist in God's adhering to what is morally obligatory.
But there are ways to assess God morally other than in terms of
the morally obligatory. Adams, for example, holds that God should be
understood as benevolent and as just, and indeed concedes that his
theological voluntarist account of obligation as the divinely
commanded is implausible unless God is thus understood (Adams 1999,
pp. 253-255). The ascription to God of these moral virtues is
entirely consistent with his theological voluntarism, for his
theological voluntarism is not meant to provide any account of the
moral virtues. One can hold that God's moral goodness involves
supereminent possession of the virtues, at least insofar as those
virtues do not presuppose weakness and vulnerability. God is good
because God is supremely just, loyal, faithful, benevolent, and so
forth. It seems that ascribing to God supereminent possession of these
virtues would be enough to account for God's supreme moral
goodness: it is, after all, in such terms that God is praised in the
Psalms.
It has been argued that this appeal to God's justice is
illegitimate within a theological voluntarist account, because what is
just is a matter of moral requirement, and so to suppose that
God's acting justly is metaphysically prior to God's
imposing all moral requirements by way of commanding is incoherent
(Hooker 2001, p. 334). But the theological voluntarist may deny that
acting justly is morally required prior to God's commanding it,
any more than acting courageously, temperately, or prudently are
morally required prior to God's commanding us to act
courageously, temperately, or prudently. Just as one can coherently
acknowledge the excellence of temperance while wondering whether one
is *morally obligated* to act temperately, one can coherently
acknowledge the excellence of justice while wondering whether one is
morally obligated to act justly. It thus seems an available strategy
for the theological voluntarist who holds a restricted view of the
range of moral properties explained by God's commands/intentions
to appeal to justice in accounting for God's goodness.
Matters become more difficult for theological voluntarist views that
aim to provide accounts of all moral notions in terms of God's
will. If one held to such an ambitious version of theological
voluntarism--if one were to hold, say, that a state of affairs is
morally good because it is a state of affairs that God wishes to
obtain for its own sake, and that a character trait is a moral virtue
because it is a property that God wants one to have for its own sake,
and that an action is morally obligatory because it is antecedently
intended by God, and so forth--then obviously the gambit employed
by the less ambitious theological voluntarist is unavailable. The more
ambitious theological voluntarist should hold, instead, that
God's goodness is not to be understood in moral terms.
God's being good might be understood in terms of God's
being *good to us*, where 'us' includes all created rational
beings, or all created sentient beings, or whatever class of created
beings to which one thinks that God has a special relationship. What
it is for God to be good to us would be for God to be loving--to
will each of our goods, and to do so in a way that plays no favorites.
This understanding of 'loving' does not run afoul of
theological voluntarism construed as an account of all moral goods,
because 'our goods' is to be interpreted in terms of
prudential goodness, what makes each of us well-off (but see Chandler
1985).
Suppose, though, that one were to go all the way, holding that
theological voluntarism is the correct account of all normative
notions: on this extremely ambitious view, anything that is
intrinsically action-guiding depends on God's will. I think that
the 'God is good to us' understanding of God's
goodness is ruled out on this approach: for the notion of 'good
to us' is a normative notion. Perhaps one could hold, on this
view, that 'God is good' affirms of God some sort of
metaphysical goodness, fullness of being. This surely makes God
exalted, but it is not clear whether the will of such a being is
plausibly understood as the source of all normative statuses. It is
also less than clear that 'God is good,' on this reading,
is the claim that God possesses a particular perfection, rather than
merely a reminder that God has a variety of perfections.
To sum up, then: for each of the various formulations of theological
voluntarism, there seems to be some way of answering the charge that
the view undercuts the notion that God is good. But the strain needed
to answer the charge becomes greater the wider the range of normative
properties that the formulation of theological voluntarism aims to
explain.
### 3.2 Theological voluntarism and arbitrariness
It is also an extraordinarily popular charge against theological
voluntarism that it entails, objectionably, that morality is
arbitrary. There is, however, more than one objection here, and the
different objections need to be distinguished and answered
individually. One claim is that theological voluntarism implies that
God's commands/intentions, on which moral statuses depend, must
be arbitrary. A distinct claim is that theological voluntarism implies
that the content of morality is itself arbitrary: it is of the
essence of morality to exhibit a certain rational structure, and
theological voluntarism precludes its having that structure. I will
consider each of these objections in turn.
One arbitrariness objection against theological voluntarism is that if
theological voluntarism is true, then God's commands/intentions
must be arbitrary; and it cannot be that morality could wholly depend
on something arbitrary; and so theological voluntarism must be false.
In favor of the claim that if theological voluntarism were true, then
morality would be arbitrary: morality would be arbitrary, on
theological voluntarism, if God lacks reasons for the
commands/intentions that God gives/has; but because theological
voluntarism holds that reasons depend on God's
commands/intentions, it is clear that there could ultimately be no
reason for God's commanding/intending one thing rather than
another. In favor of the claim that morality could not wholly depend
on something arbitrary: when we say that some moral state of affairs
obtains, we take it that there is a reason for that moral state of
affairs obtaining rather than another. Moral states of affairs do not
just happen to obtain.
Just as in the case of the objection from God's goodness, the
strength of this version of the objection from arbitrariness depends
on the formulation of theological voluntarism that is being attacked.
The arbitrariness objection becomes more difficult to answer the
stronger the relationship between God's intentions/commands and
moral properties is held to be; and it becomes more difficult to
answer the more normative properties one attempts to account for by
appeal to God's intentions/commands.
The arbitrariness objection has less force if one holds that, say,
only moral obligations are to be accounted for by theological
voluntarism. The claim made by the objector is that morality is
arbitrary on theological voluntarism, because God has no reason for
having one set of commands/intentions rather than another. But this is
so only if one appeals to the very strong form of theological
voluntarism on which all normative states of affairs depend on
God's will. If one holds that only moral obligations are
determined by God's will, then God might have moral reasons for
selecting one set of commands/intentions rather than another: that,
for example, one set of commands/intentions is more benevolent, or
just, or loyal, than another. If one holds that all moral properties
are determined by God's will, then God might have nonmoral
reasons for selecting one set of commands/intentions rather than
another: that one set of commands/intentions is more loving than
another. The fewer normative properties that a version of theological
voluntarism attempts to account for, the less susceptible it is to the
claim that theological voluntarism implies the arbitrariness of
God's commands/intentions. (For further discussion of the role
that restriction of theological voluntarism to a proper subset of
normative statuses has had in answering these perennial objections,
see Murphy 2012.)
Now, one might respond on behalf of this version of the arbitrariness
objection that even if it is true that there can be reasons for God to
choose the commands/intentions that God chooses, it is unlikely that
these reasons would wholly determine God's choice of
commands/intentions, and so there would be some latitude for
arbitrariness in God's choices/intentions. But of itself this is
not much of a worry. The initial claim pressed against theological
voluntarism was that it made all of God's commands/intentions
ultimately arbitrary, and morality could not depend on something so
thoroughly arbitrary. But the chastened claim--that there is some
arbitrariness in God's commands--is far less troubling on
its own. We are already familiar with morality depending to some
extent on arbitrary facts about the world: if one thinks about the
particular requirements that he or she is under, one will note
straightaway the extent to which these requirements have resulted from
contingent and indeed fluky facts about oneself, one's
relationships, and one's circumstances. It does not seem that
allowing that God has some choices to make concerning what to
command/intend with respect to the conduct of created rational beings
that are undetermined by reasons must introduce an intolerable
arbitrariness into the total set of divine commands/intentions. (See
also Carson 2012.)
Allowing for such pockets of divine discretion does not provide
backing for this version of the objection from arbitrariness, but
rather offers a premise for the other version of the objection from
arbitrariness. This other version of the objection from arbitrariness
holds that moral states of affairs exhibit a certain rational
structure that they would not have if theological voluntarism were
true. Here is the idea, roughly formulated. Suppose that some moral
state of affairs obtains--that it is the case that murder is
wrong, or that lying is objectionable, or that courage is a virtue, or
that Sharon's snubbing me in that way was unforgivable. The idea
is that for any such moral state of affairs, the following is true:
either we can provide a *justification* for the obtaining of
that moral state of affairs, or that moral state of affairs is
*necessary*. A justification of an obtaining moral state of
affairs *A* is some obtaining moral state of affairs *B*
(where *A* is not identical with *B*), which in
conjunction with the other non-moral facts entails that *A*
obtains. So, for example: it may be the justification for
*murder's being prima facie wrong* that murder is an
intentional harm (non-moral fact) and intentionally harming is prima
facie wrong (obtaining moral state of affairs). Now, presumably not
all moral states of affairs can be justified: eventually there will be
basic moral states of affairs, for which no justification can be
given. But it would be very unsatisfactory to say that these basic
moral states of affairs just happen to obtain. So any basic moral
states of affairs must obtain necessarily. Perhaps *unrelieved
suffering's being bad* is a state of affairs of this sort,
or perhaps *rational beings' being worthy of
respect*.
The claim that the structure of morality is not arbitrary is, put
positively, the claim that every obtaining moral state of affairs
either has a justification or is necessary. And, thus, what those who
claim that theological voluntarism entails that morality is
objectionably arbitrary mean is that if theological voluntarism is
true, then there are some moral states of affairs that both lack a
justification and are not necessary. The view that God's
commands/intentions are not wholly determined by reasons offers the
basis for holding that there are some moral states of affairs that
both lack a justification and are not necessary. For consider some act
of φ-ing that is not subsumed under any other issued divine
command and which is such that God lacks decisive reasons to command
or not to command its performance. In the possible world in which God
issues a command to φ, there is a moral state of
affairs--*its being obligatory to* φ--which lacks
a justification (for the action is subsumed under no other divine
command) and is not necessary (for God might have failed to command
the action).
One might respond to this sort of worry by proposing that what God
commands/intends with respect to human action, God commands/intends
necessarily. But this seems either to understate the divine freedom or
to overstate the determination of God's commands by reasons
(Murphy 2002, pp. 83-85). More plausible is the denial of the
claim that morality must exhibit the particular structure presupposed
in the objection. While I think that in general the subsumption model
of justification is innocuous enough--even particularists can
affirm it, if they affirm even the most minimal doctrine of moral
supervenience--its appeal to necessary moral states of affairs as
the only proper starting point is dubious. It is not clear why the
starting points for justification have to be necessary moral states of
affairs, for two reasons.
First, if these moral states of affairs are basic, then of course
their moral status must not be explained by appeal to other moral
states of affairs, but that does not mean they must be necessary; they
might be contingent, and have their moral status explained in some way
other than an appeal to another moral state of affairs. It could be,
for example, that the explanation of them appeals to a contingent
nonmoral state of affairs plus some necessary state of affairs
concerning a connection between that contingent nonmoral state of
affairs and the moral state of affairs. Theological voluntarism would
be an instance of this latter model. (For a discussion of a similar
objection raised against theological voluntarism by Ralph Cudworth
(1731), and a similar response on behalf of theological voluntarism,
see Schroeder 2005.)
Second, the appeal to necessary moral states of affairs as the
stopping point for explanation seems to assume that necessary moral
states of affairs somehow are not in need of explanation. But the very
fact that some state of affairs obtains necessarily does not entail
that its obtaining does not require explanation, and there is no
reason why moral states of affairs would be special on this score. So
one might think that not only is it possible for justifications to
bottom out in contingent moral states of affairs, there is no reason
to think that such justifications are as such any less adequate than
justifications that bottom out in necessary moral states of affairs.
(See Murphy 2011, pp. 47-49.)
### 3.3 Is theological voluntarism adequately motivated?
Both with respect to the objection from God's goodness and with
respect to the objection from arbitrariness, the now-standard
theological voluntarist response is not to bite the bullet but rather
to restrict the range of normative properties of which theological
voluntarism is supposed to provide an account. So Adams, Quinn, and
Alston all recommend theological voluntarism only as a theory about
properties like *being morally obligatory*, and not about any
other normative properties. The worry is that allowing that there is
adequate motivation to refuse to understand these other normative
properties in theological voluntarist terms might commit one to
holding that there is adequate motivation to refuse to understand
obligation-type properties in theological voluntarist terms. Look, one
might say: if you are willing to hold that all moral properties other
than those in the obligation family are to be understood in
non-theological voluntarist terms, what is to stop us from holding
that obligation is to be understood in non-theological voluntarist
terms as well? If we are willing to give up theological voluntarism in
some moral domains, why not in all of them?
The most well-developed account of why we should treat obligation as
special is Adams', on which obligation is apt for theological
voluntarist treatment because of its intrinsic link to demands made
within social relationships (Adams 1987b, and Adams 1999, pp.
231-258; see also Evans 2014, p. 27). But it is also unclear
whether this is persuasive. We may grant that obligations result from
demands, but only if we emphasize (as Adams does) that it is demands
from *authorities* that result in obligations. But what makes
someone an authority is that by his or her dictates he or she can give
reasons for action of a certain kind. There is some dispute over what
kind of reasons they must be, but for our purposes we can just follow
Joseph Raz, who holds that genuine authorities give "protected
reasons" by their dictates, where a protected reason to φ is
a reason to φ and a reason to disregard some reasons against
φ-ing. If one is an authority over another with respect to
φ-ing, then one's dictate that the other φ is a
protected reason for the other to φ (Raz 1979, p. 18).
But now here is the question. We agree that obligations arise from
authoritative dictates. And we agree that for a dictate to be
authoritative is for it to constitute a certain sort of reason, let us
say a protected one. But why, then, would we identify obligations with
protected reasons that result from demands, rather than (as Raz does)
with protected reasons themselves, whatever their source? If, after
all, there were ways of producing protected reasons other than
through the giving of commands, what would be the point of saying
'oh, but though there is a protected reason to φ, it
isn't really *obligatory* to φ'? Surely if
there were any point to this remark it would be purely verbal, and of
no philosophical or normative interest.
It turns out, then, that whether Adams' move is enough to
motivate theological voluntarism about obligation is dependent on
whether there are in fact any protected reasons (or reasons of
whatever structure that one thinks that authoritative dictates must
give) that are not dependent on demands being made. If there are such
reasons--which natural law theorists, for example, would
hold--then Adams' gambit will not work, and the theological
voluntarist will have to look elsewhere for motivation to understand
obligation in those terms.
Now, one might say that by appealing to the idea that obligations
arise from demands made within social relationships as the basis for
the theological voluntarist's account of moral obligation, we
are misconstruing the key claim that they make about the social nature
of obligation. The social nature of obligation, voluntarists might
retort, first and foremost concerns the fact that when one is under an
obligation, there is someone else who is entitled to hold one
accountable for failures to adhere to the content of that obligation
(Adams 1999, p. 233; Evans 2013, p. 14). In the case of the moral law,
the party who has standing to hold us accountable is God.
Even if one accepts this claim about the nature of obligation--it
has some plausibility, as there is some basis for holding that not
everything that one has protected reason to do counts as
obligatory--and the claim that only God has adequate standing to hold us
accountable for adhering to morality, it is still not at all clear how
to provide a satisfactory rationale for a theological voluntarist view
of moral obligation. For theological voluntarism with respect to moral
obligations holds that the existence of moral obligations depends on
God's bringing them about via God's command or some other
act of divine will; but there is no obvious argument for the view that
God's being essential to holding us accountable for following
the norms of morality entails God's commanding or willing those
moral norms into existence. Even if we grant, then, to the theological
voluntarists their desired premises regarding the social character of
obligation and God's ideal status as someone to hold us
accountable for adhering to the norms of morality, theological
voluntarists still have a worrisome gap to overcome in offering
positive reasons to affirm that view.
## 1. The Rationality of Voting
The act of voting has an opportunity cost. It takes time and effort
that could be used for other valuable things, such as working for pay,
volunteering at a soup kitchen, or playing video games. Further,
identifying issues, gathering political information, thinking or
deliberating about that information, and so on, also take time and
effort which could be spent doing other valuable things. Economics, in
its simplest form, predicts that rational people will perform an
activity only if doing so maximizes expected utility. However,
economists have long worried that, for nearly every individual
citizen, voting does not maximize expected utility. This leads to the
"paradox of voting" (Downs 1957): Since the expected costs
(including opportunity costs) of voting appear to exceed the expected
benefits, and since voters could always instead perform some action
with positive overall utility, it's surprising that anyone
votes.
However, whether voting is rational or not depends on just what voters
are trying to do. Instrumental theories of the rationality of voting
hold that it can be rational to vote when the voter's goal is to
influence or change the outcome of an election, including the
"mandate" the winning candidate receives. (The mandate
theory of elections holds that a candidate's effectiveness in
office, i.e., her ability to get things done, is in part a function of
how large or small a lead she had over her competing candidates during
the election.) In contrast, the expressive theory of voting holds that
voters vote in order to express themselves and their fidelity to
certain groups or ideas. Alternatively, one might hold that voting is
rational because it has consumption value; many people enjoy
political participation for its own sake or for being able to show
others that they voted. Finally, if one believes, as most democratic
citizens say they do (Mackie 2010), that voting is a substantial moral
obligation, then voting could be rational because it is necessary to
discharge one's obligation.
### 1.1 Voting to Change the Outcome
One reason a person might vote is to influence, or attempt to change,
the outcome of an election. Suppose there are two candidates, *D*
and *R*. Suppose Sally prefers *D* to *R*. Suppose she
correctly believes that *D* would do a trillion dollars more
overall good than *R* would do. If her beliefs were correct, then
by hypothesis, it would be best if *D* won.
Here, casting the expected value difference between the two candidates
in monetary terms is a simplifying assumption. Whether political
outcomes can be described in monetary terms as such is not without
controversy. To illustrate, suppose the difference between two
candidates came down entirely to how many lives would be lost in the
way they would conduct a current war. Whether we can translate
"lives lost" into dollar terms is controversial. Further,
whether we can commensurate all the distinct goods and harms a
candidate might cause onto a common scale is also controversial.
Even if the expected value difference between two candidates could be
expressed on some common value scale, such as in monetary terms, this
leaves open whether the typical voter is aware of or can generally
estimate that difference. Empirical work generally finds that most
voters are badly informed, and further, that many of them are not
voting for the purpose of promoting certain policies or platforms over
others (Achen and Bartels 2016; Kinder and Kalmoe 2017; Mason 2017).
Beyond that, estimating the value difference between candidates
requires evaluating complex counterfactuals, estimating what various
candidates are likely to achieve, and determining what the outcomes of
these actions would be (Freiman 2020).
These worries aside, even if Sally is correct that *D* will do a
trillion dollars more good than *R*, this does not yet show it is
rational for Sally to vote for *D*. Instead, this depends on how
likely it is that her vote will make a difference. In much the same
way, it might be worth $200 million to win the lottery, but that does
not imply it is rational to buy a lottery ticket.
Suppose Sally's only goal, in voting, is to change the outcome
of the election between two major candidates. In that case, the
expected value of her vote (\(U\_v\)) is:
\[
U\_v = p[V(D) - V(R)] - C
\]
where *p* represents the probability that Sally's vote is
decisive, \([V(D) - V(R)]\) represents (in monetary terms) the
difference in the expected value of the two candidates, and *C*
represents the opportunity cost of voting. In short, the value of her
vote is the value of the difference between the two candidates
discounted by her chance of being decisive, minus the opportunity cost
of voting. In this way, voting is indeed like buying a lottery ticket.
Unless \(p[V(D) - V(R)] > C\), then it is (given Sally's stated
goals) irrational for her to vote.
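As a quick numerical sketch of this decision rule (the function and all specific figures below are illustrative assumptions, not part of the entry's argument):

```python
# A minimal sketch of the expected-utility formula above:
#     U_v = p * [V(D) - V(R)] - C
# All specific numbers here are illustrative assumptions.

def expected_value_of_vote(p, value_d, value_r, cost):
    """Decisiveness probability times the value difference between
    the candidates, minus the (opportunity) cost of voting."""
    return p * (value_d - value_r) - cost

# Sally's case: D is worth $1 trillion more than R. With an optimistic
# 1-in-10-million chance of breaking a tie and a $15 cost, voting pays;
# with a vanishingly small chance (say 1 in 10^15), it does not.
print(expected_value_of_vote(p=1e-7, value_d=1e12, value_r=0.0, cost=15.0))
print(expected_value_of_vote(p=1e-15, value_d=1e12, value_r=0.0, cost=15.0))
```

On these assumptions the first call is positive (about $99,985) and the second is negative, which is the sense in which the rationality of voting hangs almost entirely on \(p\).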
The equation above models the rationality of Sally's choice to
vote under the assumption that she is simply trying to change the
outcome of the election, and gets no further benefit from
voting. Further, the equation assumes her vote confers no other
benefit to others than having some chance of changing which candidate
wins. However, these are controversial simplifying assumptions. It is
possible that the choice to cast a vote may induce others to vote,
might improve the quality of the group decision by adding cognitive
diversity, might have some marginal influence on which candidates or
platforms parties run, or might have some other effect not modeled in
the equation above.
Again, it is controversial among some philosophers whether the
difference in value between two candidates can be expressed, in
principle, in monetary terms. Nevertheless, the point generalizes in
some way. If we are discussing the instrumental value of a vote, then
the general point is that the value of a vote depends on the expected difference
in value between the chosen candidate and the next best alternative,
discounted by the probability of the vote breaking a tie, and we must
then take into account the opportunity cost of voting. For instance,
if two candidates were identical except that one would save one more
life than another, but one had a 1 in 1 billion chance of being
decisive, and instead of voting one could save a drowning toddler,
then it seems voting is not worthwhile, even if we cannot assign an
exact monetary value to the consequences.
There is some debate among economists, political scientists, and
philosophers over the precise way to calculate the probability that a
vote will be decisive. Nevertheless, they generally agree that the
probability that the modal individual voter in a typical election will
break a tie is small. Binomial models of voting estimate the
probability of a vote being decisive by modeling voters as if they
were weighted coins and then asking what the probability is that a
weighted coin will come up heads exactly 50% of the time. These models
generally imply that the probability of being decisive, if any
candidate has a lead, is vanishingly small, which in turn implies that
the expected benefit of voting (i.e., \(p[V(D) - V(R)]\)) for a good
candidate is worth far less than a millionth of a penny (G. Brennan
and Lomasky 1993: 56-7, 119). A more optimistic estimate in the
literature, which uses statistical estimation techniques based on past
elections, claims that in a typical presidential election, American
voters have widely varying chances of being decisive depending on
which state they vote in. This model still predicts that a typical
vote in a "safe" state, like California, has a
vanishingly small chance of making a difference, but suggests that a
vote in very close states could have on the order of a 1 in 10
million chance of breaking a tie (Edlin, Gelman, and Kaplan
2007). Thus, on both of these popular models, whether voting for the
purpose of changing the outcome is rational depends upon the facts on
the ground, including how close the election is and how significant
the value difference is between the candidates. The binomial model
suggests it will almost never be rational to vote, while the
statistical model suggests it will be rational for voters to vote in
sufficiently close elections or in swing states.
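The "weighted coin" calculation can be made concrete. The sketch below (an illustrative reconstruction, not any of these authors' own code) computes the exact binomial tie probability in log space, since the binomial coefficient and the probability powers involved overflow and underflow ordinary floats for electorate-sized \(n\):

```python
from math import exp, lgamma, log

def tie_probability(n_voters, p_d):
    """Binomial 'weighted coin' model: the probability that exactly half
    of n_voters (n even) vote for D, when each votes D independently
    with probability p_d. Computed via log-gamma to avoid overflow:
    C(n, n/2) * p_d^(n/2) * (1 - p_d)^(n/2)."""
    half = n_voters // 2
    log_binom = lgamma(n_voters + 1) - 2 * lgamma(half + 1)
    return exp(log_binom + half * log(p_d) + half * log(1 - p_d))

# In a dead-even electorate of 100,000 the tie probability is modest,
# but even a 51/49 lead makes it vanishingly small, as the text reports.
print(tie_probability(100_000, 0.50))   # roughly 0.0025
print(tie_probability(100_000, 0.51))   # on the order of 10^-12
```

The second figure illustrates the binomial model's key implication: given any stable lead at all, the chance of an exact tie collapses far below anything that could offset the cost of voting.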
However, some claim even these assessments are optimistic. One worry
is that if a major election in most places came down to a single vote,
the issue might be decided in the courts after extensive lawsuits
(Somin 2013). Further, in making such estimates, we have assumed that
voters can reliably identify which candidate is better and reliably
estimate how much better that candidate is. But perhaps voters cannot.
After all, showing that individual votes matter is a double-edged
sword; the more expected good an individual vote can do, the more
expected harm it can do (J. Brennan 2011a; Freiman 2020).
### 1.2 Voting to Change the "Mandate"
One popular response to the paradox of voting is to posit that voters
are not trying to determine who wins, but instead trying to change the
"mandate" the elected candidate receives. The assumption
here is that an elected official's efficacy--i.e., her
ability to get things done in office--depends in part on how
large of a majority vote she received. If that were true, I might vote
for what I expect to be the winning candidate in order to increase her
mandate, or vote against the expected winner to reduce her mandate.
The virtue of the mandate hypothesis, if it were true, is that it
could explain why it would be rational to vote even in elections where
one candidate enjoys a massive lead coming into the election.
However, the mandate argument faces two major problems. First, even if
we assume that such mandates exist, to know whether voting is
rational, we would need to know how much the *nth*
voter's vote increases the marginal effectiveness of her
preferred candidate, or reduces the marginal effectiveness of her
dispreferred candidate. Suppose voting for the expected winning
candidate costs me $15 worth of my time. It would be rational for me
to vote only if I believed my individual vote would give the winning
candidate at least $15 worth of electoral efficacy (and I care about
the increased efficacy as much as or more than my opportunity costs).
In principle, whether individual votes change the
"mandate" this much is something that political scientists
could measure, and indeed, they have tried to do so.
But this brings us to the second, deeper problem: Political scientists
have done extensive empirical work trying to test whether electoral
mandates exist, and they now roundly reject the mandate hypothesis
(Dahl 1990b; Noel 2010). A winning candidate's ability to get
things done is generally not affected by how small or large of a
margin she wins by.
Perhaps voting is rational not as a way of trying to change how
effective the elected politician will be, but instead as a way of
trying to change the kind of mandate the winning politician enjoys
(Guerrero 2010). Perhaps a vote could transform a candidate from a
delegate to a trustee. A delegate tries to do what she believes her
constituents want, but a trustee has the normative legitimacy to do
what she believes is best.
Suppose for the sake of argument that trustee representatives are
significantly more valuable than delegates, and that what makes a
representative a trustee rather than a delegate is her large margin of
victory. Unfortunately, this does not yet show that the expected
benefits of voting exceed the expected costs. Suppose (as in Guerrero
2010: 289) that the distinction between a delegate and trustee lies on
a continuum, like the difference between bald and hairy. To show voting is
rational, one would need to show that the marginal impact of an
individual vote, as it moves a candidate a marginal degree from
delegate to trustee, is higher than the opportunity cost of voting. If
voting costs me $15 worth of time, then, on this theory, it would be
rational to vote only if my vote is expected to move my favorite
candidate from delegate to trustee by an increment worth at least $15
(Guerrero 2010: 295-297).
Alternatively, suppose that there were a determinate threshold (either
known or unknown) of votes at which a winning candidate is suddenly
transformed from being a delegate to a trustee. By casting a vote, the
voter has some chance of decisively pushing her favored candidate over
this threshold. However, just as the probability that her vote will
decide the election is vanishingly small, so the probability that her
vote will decisively transform a representative from a delegate into a
trustee would be vanishingly small. Indeed, the formula for
determining decisiveness in transforming a candidate into a trustee
would be roughly the same as determining whether the voter would break
a tie. Thus, suppose it's a billion or even a trillion dollars
better for a representative to be a trustee rather than a delegate.
Even if so, the expected benefit of an individual vote is still less
than a penny, which is lower than the opportunity cost of voting.
Again, it's wonderful to win the lottery, but that doesn't
mean it's rational to buy a ticket.
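The arithmetic behind this sub-penny estimate can be checked directly; both figures below are illustrative assumptions (a trillion-dollar trustee premium, and a tie-sized 1-in-10^15 chance of decisively crossing the threshold):

```python
# Illustrative check of the sub-penny claim above.
# Both numbers are assumptions chosen for illustration.
value_difference = 1e12   # $1 trillion better as trustee than as delegate
p_decisive = 1e-15        # chance of decisively crossing the threshold,
                          # comparable to the chance of breaking a tie
expected_benefit = p_decisive * value_difference
print(expected_benefit)   # about $0.001: a tenth of a penny
```

Even a trillion-dollar stake, multiplied by a tie-sized probability, leaves an expected benefit well under the opportunity cost of voting.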
### 1.3 Other Reasons to Vote
Other philosophers have attempted to shift the focus to other ways
individual votes might be said to "make a difference".
Perhaps by voting, a voter has a significant chance of being among the
"causally efficacious set" of votes, or is in some way
causally responsible for the outcome (Tuck 2008; Goldman 1999).
On these theories, what voters value is not changing the outcome, but
being agents who have participated in causing various outcomes. These
causal theories of voting claim that voting is rational provided the
voter sufficiently cares about being a cause or among the joint causes
of the outcome. Voters vote because they wish to bear the right kind
of causal responsibility for outcomes, even if their individual
influence is small.
What these alternative theories make clear is that whether voting is
rational depends in part upon what the voters' goals are. If
their goal is to in some way change the outcome of the election, or to
change which policies are implemented, then voting is indeed
irrational, or rational only in unusual circumstances or for a small
subset of voters. However, perhaps voters have other goals.
The expressive theory of voting (G. Brennan and Lomasky 1993) holds
that voters vote in order to express themselves. On the expressive
theory, voting is a consumption activity rather than a productive
activity; it is more like reading a book for pleasure than it is like
reading a book to develop a new skill. On this theory, though the act
of voting is private, voters regard voting as an apt way to
demonstrate and express their commitment to their political team.
Voting is like wearing a Metallica T-shirt at a concert or doing the
wave at a sports game. Sports fans who paint their faces the team
colors do not generally believe that they, as individuals, will change
the outcome of the game, but instead wish to demonstrate their
commitment to their team. Even when watching games alone, sports fans
cheer and clap for their teams. Perhaps voting is like this.
This "expressive theory of voting" is untroubled by and
indeed partly supported by the empirical findings that most voters are
ignorant about basic political facts (Somin 2013; Delli Carpini and
Keeter 1996). The expressive theory is also untroubled by and indeed
partly supported by work in political psychology showing that most
citizens suffer from significant "intergroup bias": we
tend to automatically form groups, and to be irrationally loyal to and
forgiving of our own group while irrationally hateful of other groups
(Lodge and Taber 2013; Haidt 2012; Westen, Blagov, Harenski, Kilts,
and Hamann 2006; Westen 2008). Voters might adopt ideologies in order
to signal to themselves and others that they are certain kinds of
people. For example, suppose Bob wants to express that he is a patriot
and a tough guy. He thus endorses hawkish military actions, e.g., that
the United States nuke Russia for interfering with Ukraine. It would
be disastrous for Bob were the US to do what he wants. However, since
Bob's individual vote for a militaristic candidate has little
hope of being decisive, Bob can afford to indulge irrational and
misinformed beliefs about public policy and express those beliefs at
the polls.
Another simple and plausible argument is that it can be rational to
vote in order to discharge a perceived duty to vote (Mackie 2010).
Surveys indicate that most citizens in fact believe there is a duty to
vote or to "do their share" (Mackie 2010: 8-9). If
there are such duties, and these duties are sufficiently weighty, then
it would be rational for most voters to vote.
## 2. The Moral Obligation to Vote
Surveys show that most citizens in contemporary democracies believe
there is some sort of moral obligation to vote (Mackie 2010:
8-9). Other surveys show most moral and political philosophers
agree (Schwitzgebel and Rust 2010). They tend to believe that citizens
have a duty to vote even when these citizens rightly believe their
favored party or candidate has no serious chance of winning (Campbell,
Gurin, and Miller 1954: 195). Further, most people seem to think that
the duty to vote specifically means a duty to turn out to vote
(perhaps only to cast a blank ballot), rather than a duty to vote a
particular way. On this view, citizens have a duty simply to cast a
vote, but nearly any good-faith vote is morally acceptable.
Many popular arguments for a duty to vote rely upon the idea that
individual votes make a significant difference. For instance, one
might argue that there is a duty to vote because there is a duty
to protect oneself, a duty to help others, or to produce good
government, or the like. However, these arguments face the problem, as
discussed in section 1, that individual votes have vanishingly small
instrumental value (or disvalue).
For instance, one early hypothesis was that voting might be a form of
insurance, meant to prevent democracy from collapsing (Downs 1957:
257). Following this suggestion, suppose one hypothesizes that
citizens have a duty to vote in order to help prevent democracy from
collapsing. Suppose there is some determinate threshold of votes under
which a democracy becomes unstable and collapses. The problem here is
that just as there is a vanishingly small probability that any
individual's vote would decide the election, so there is a
vanishingly small chance that any vote would decisively put the number
of votes above that threshold. Alternatively, suppose that as fewer
and fewer citizens vote, the probability of democracy collapsing
becomes incrementally higher. If so, to show there is a duty to vote,
one would first need to show that the marginal expected benefits of
the *nth* vote, in reducing the chance of democratic collapse,
exceed the expected costs (including opportunity costs).
A plausible argument for a duty to vote would thus not depend on
individual votes having significant expected value or impact on
government or civic culture. Instead, a plausible argument for a duty
to vote should presume that individual votes make little difference in
changing the outcome of an election, but then identify a reason why
citizens should vote anyway.
One suggestion (Beerbohm 2012) is that citizens have a duty to vote to
avoid complicity with injustice. On this view, representatives act in
the name of the citizens. Citizens count as partial authors of the
law, even when the citizens do not vote or participate in government.
Citizens who refuse to vote are thus complicit in allowing their
representatives to commit injustice. Perhaps failure to resist
injustice counts as a kind of sponsorship. (This theory thus implies
that citizens do not merely have a duty to vote rather than abstain,
but specifically have a duty to vote for candidates and policies that
will reduce injustice.)
Another popular argument, which does not turn on the efficacy of
individual votes, is the "Generalization Argument":
>
> What if everyone were to stay home and not vote? The results would be
> disastrous! Therefore, I (you/she) should vote. (Lomasky and G.
> Brennan 2000: 75)
>
This popular argument can be parodied in a way that exposes its
weakness. Consider:
>
> What if everyone were to stay home and not farm? Then we would all
> starve to death! Therefore, I (you/she) should each become farmers.
> (Lomasky and G. Brennan 2000: 76)
>
The problem with this argument, as stated, is that even if it would be
disastrous if no one or too few performed some activity, it does not
follow that everyone ought to perform it. Instead, one might conclude
that it matters only that a sufficient number of people perform the
activity. In
the case of farming, we think it's permissible for people to
decide for themselves whether to farm or not, because market
incentives suffice to ensure that enough people farm.
However, even if the Generalization Argument, as stated, is unsound,
perhaps it is on to something. There are certain classes of actions in
which we tend to presume everyone ought to participate (or ought not
to participate). For instance, suppose a university places a sign
saying, "Keep off the newly planted grass." It's not
as though the grass will die if one person walks on it once. If I were
allowed to walk on it at will while the rest of you refrained from
doing so, the grass would probably be fine. Still, it would seem
unfair if the university allowed me to walk on the grass at will but
forbade everyone else from doing so. It seems more appropriate to
impose the duty to keep off the lawn equally on everyone. Similarly,
if the government wants to raise money to provide a public good, it
could just tax a randomly chosen minority of the citizens. However, it
seems more fair or just for everyone (at least above a certain income
threshold) to pay some taxes, to share in the burden of providing
police protection.
We should thus ask: is voting more like the first kind of activity, in
which it is only imperative that enough people do it, or the second
kind, in which it's imperative that everyone do it? One
difference between the two kinds of activities is what abstention does
to others. If I abstain from farming, I don't thereby take
advantage of or free ride on farmers' efforts. Rather, I
compensate them for whatever food I eat by buying that food on the
market. In the second set of cases, if I freely walk across the lawn
while everyone else walks around it, or if I enjoy police protection
but don't pay taxes, it appears I free ride on others'
efforts. They bear an uncompensated differential burden in maintaining
the grass or providing police protection, and I seem to be taking
advantage of them.
A defender of a duty to vote might thus argue that non-voters free
ride on voters. Non-voters benefit from the government that voters
provide, but do not themselves help to provide government.
There are at least a few arguments for a duty to vote that do not
depend on the controversial assumption that individual votes make a
difference:
1. *The Generalization/Public Goods/Debt to Society
Argument*: Claims that citizens who abstain from voting thereby
free ride on the provision of good government, or fail to pay their
"debts to society".
2. *The Civic Virtue Argument*: Claims that citizens have a
duty to exercise civic virtue, and thus to vote.
3. *The Complicity Argument*: Claims that citizens have a
duty to vote (for just outcomes) in order to avoid being complicit in
the injustice their governments commit.
However, there is a general challenge to these arguments in support of
a duty to vote. Call this the *particularity problem*: To show
that there is a duty to vote, it is not enough to appeal to some goal
*G* that citizens plausibly have a duty to support, and then to
argue that voting is one way they can support or help achieve
*G*. Instead, proponents of a duty to vote need to show
specifically that voting is the only way, or the required way, to
support *G* (J. Brennan 2011a). The worry is that the three
arguments above might only show that voting is one way among many to
discharge the duty in question. Indeed, it might not even be an
especially good way, let alone the only or obligatory way to discharge
the duty.
For instance, suppose one argues that citizens should vote because
they ought to exercise civic virtue. One must explain why a duty to
exercise civic virtue specifically implies a duty to vote, rather than
a duty just to perform one of thousands of possible acts of civic
virtue. Or, if a citizen has a duty to be an agent who helps
promote other citizens' well-being, it seems this duty could be
discharged by volunteering, making art, or working at a productive job
that adds to the social surplus. If a citizen has a duty to avoid
complicity in injustice, it seems that rather than voting, she could
engage in civil disobedience; write letters to newspaper editors,
pamphlets, or political theory books; donate money; engage in
conscientious abstention; protest; assassinate criminal political
leaders; or do any number of other activities. It's unclear why
voting is special or required.
Note that the particularity problem need not be framed in
consequentialist terms, i.e., both defenders and critics of the duty
to vote need not say that what determines whether voting is morally
required depends on whether voting has the highest expected
consequences. Rather, the issue is whether voting is simply one of
many ways to discharge an underlying duty or respond to underlying
reasons, or whether voting is in some way special and unique, such
that these reasons select voting in particular as an obligatory means
of responding to these underlying reasons.
Maskivker (2019) responds partly to this objection by saying, in
effect, "Why not both?" J. Brennan (2011a) and Freiman (2020) say
that the underlying grounds for any duty to vote can be discharged
(and discharged better) through actions other than voting. Maskivker
takes this to suggest not that voting is optional, but that one should
vote (if one is already sufficiently well-informed and
publicly-spirited) and also perform these other actions. Maskivker
grounds her argument on a deontological duty of easy aid: if one can
provide aid to others at very low cost to oneself, then one should do
so. For already well-informed citizens, voting is an instance of easy
aid.
### 2.1 A General Moral Obligation Not to Vote?
While many hold that it is obligatory to vote, a few have argued that
many people have an obligation not to vote under special
circumstances. For instance, Sheehy (2002) argues that voting when one
is indifferent to the election is unfair. He argues that if one's
vote makes a difference, it could be to disappoint what would
otherwise have been the majority coalition, whose position is now
thwarted by those who, by hypothesis, have no preference.
Another argument holds that voting might be wrong because it is an
ineffective form of altruism. Freiman (2020) argues that when people
discharge their obligations to help and aid others, they are obligated
to pursue effective rather than ineffective forms of altruism (see
also MacAskill 2015). For instance, suppose one has an obligation to
give a certain amount to charity each year. This obligation is not
fundamentally about spending a certain percentage of one's money. If
a person gave 10% of their income to a charity that did no good at
all, or which made the world worse, one would not have discharged the
obligation to act beneficently. Similarly, Freiman argues, if a person
is voting for the purpose of aiding and helping others, then they
would at the very least need to be sufficiently well-informed to vote
for the better candidate, a condition few voters meet (see section 3.2
below). In part because most voters are in no position to judge
whether they are voting for the better or worse candidates, and in
part simply because individual votes make little difference, votes and
most other forms of political action (such as donating to political campaigns,
canvassing, volunteering, and the like) are highly ineffective forms
of altruism. Freiman claims that we are instead obligated to pursue
effective forms of altruism, such as collecting and making donations
to the Against Malaria Foundation.
## 3. Moral Obligations Regarding How One Votes
Most people appear to believe that there is a duty to cast a vote
(perhaps including a blank ballot) rather than abstain (Mackie 2010:
8-9), but this leaves open whether they believe there is a duty
to vote in any particular way. Some philosophers and political
theorists have argued there are ethical obligations attached to how
one chooses to vote. For instance, many deliberative democrats (see
Christiano 2006) believe not only that every citizen has a duty to
vote, but also that they must vote in publicly-spirited ways, after
engaging in various forms of democratic deliberation. In contrast,
some (G. Brennan and Lomasky 1993; J. Brennan 2009, 2011a)
argue that while there is no general duty to vote (abstention is
permissible), those citizens who do choose to vote have duties
affecting how they vote. They argue that while it is not wrong to
abstain, it is wrong to vote *badly*, in some theory-specified
sense of "badly".
Note that the question of how one ought to vote is distinct from the
question of whether one ought to have the right to vote. The right to
vote licenses a citizen to cast a vote. It requires the state to
permit the citizen to vote and then requires the state to count that
vote. This leaves open whether some ways a voter could vote could be
morally wrong, or whether other ways of voting might be morally
obligatory. In parallel, my right of free association arguably
includes the right to join the Ku Klux Klan, while my right of free
speech arguably includes the right to advocate an unjust war. Still,
it would be morally wrong for me to do either of these things, though
doing so is within my rights. Thus, just as someone can, without
contradiction, say, "You have the right to have racist
attitudes, but you should not," so a person can, without
contradiction, say, "You have the right to vote for that
candidate, but you should not."
A theory of voting ethics might include answers to any of the
following questions:
1. *The Intended Beneficiary of the Vote*: Whose interests
should the voter take into account when casting a vote? May the voter
vote selfishly, or should she vote sociotropically? If the latter, on
behalf of which group ought she vote: her demographic group(s), her
local jurisdiction, the nation, or the entire world? Is it permissible
to vote when one has no stake in the election, or is otherwise
indifferent to the outcome?
2. *The Substance of the Vote*: Are there particular
candidates or policies that the voter is obligated to support, or not
to support? For instance, is a voter obligated to vote for whatever
would best produce the most just outcomes, according to the correct
theory of justice? Must the voter vote for candidates with good
character? May the voter vote strategically, or must she vote in
accordance with her sincere preferences?
3. *Epistemic Duties Regarding Voting*: Are voters required
to have a particular degree of knowledge, or exhibit a particular kind
of epistemic rationality, in forming their voting preferences? Is it
permissible to vote in ignorance, on the basis of beliefs about social
scientific matters that are formed without sufficient evidence?
### 3.1 The Expressivist Ethics of Voting
Recall that one important theory of voting behavior holds that most
citizens vote not in order to influence the outcome of the election or
influence government policies, but in order to express themselves (G.
Brennan and Lomasky 1993). They vote to signal to themselves and to
others that they are loyal to certain ideas, ideals, or groups. For
instance, I might vote Democrat to signal that I'm compassionate
and fair, or Republican to signal I'm responsible, moral, and
tough. If voting is primarily an expressive act, then perhaps the
ethics of voting is an ethics of expression (G. Brennan and Lomasky
1993: 167-198). We can assess the morality of voting by asking
what it says about a voter that she voted like that:
>
> To cast a Klan ballot is to *identify oneself* in a morally
> significant way with the racist policies that the organization
> espouses. One thereby lays oneself open to associated moral liability
> whether the candidate has a small, large, or zero probability of
> gaining victory, and whether or not one's own vote has an
> appreciable likelihood of affecting the election result. (G. Brennan
> and Lomasky 1993: 186)
>
The idea here is that if it is in general wrong (even if within my
rights) for me to express sincere racist attitudes, then it is also
wrong for me to express sincere racist commitments at
the polls. Similar remarks apply to other wrongful attitudes. To the
extent it is wrong for me to express sincere support for illiberal,
reckless, or bad ideas, it would also be wrong for me to vote for
candidates who support those ideas.
Of course, the question of just what counts as wrongful and
permissible expression is complicated. There is also a complicated
question of just what voting expresses. What I think my vote expresses
might be different from what it expresses to others, or it might be
that it expresses different things to different people. The
expressivist theory of voting ethics acknowledges these difficulties,
and replies that whatever we would say about the ethics of expression
in general should presumably apply to expressive voting.
### 3.2 The Epistemic Ethics of Voting
Consider the question: What do doctors owe patients, parents owe
children, or jurors owe defendants (or, perhaps, society)? Doctors owe
patients proper care, and to discharge their duties, they must 1) aim
to promote their patients' interests, and 2) reason about how to
do so in a sufficiently informed and rational way. Parents similarly
owe such duties to their children. Jurors similarly owe society at
large, or perhaps more specifically the defendant, duties to 1) try to
determine the truth, and 2) do so in an informed and rational way. The
doctors, parents, and jurors are fiduciaries of others. They owe a
duty of care, and this duty of care brings with it certain
*epistemic responsibilities*.
One might try to argue that voters owe similar duties of care to the
governed. Perhaps voters should vote 1) for what they perceive to be
the best outcomes (consistent with strategic voting) and 2) make such
decisions in a sufficiently informed and rational way. How voters vote
has significant impact on political outcomes, and can help determine
matters of peace and war, life and death, prosperity and poverty.
Majority voters do not just choose for themselves, but for everyone,
including dissenting minorities, children, non-voters, resident
aliens, and people in other countries affected by their decisions. For
this reason, voting seems to be a morally charged activity (Christiano
2006; J. Brennan 2011a; Beerbohm 2012).
That said, one clear disanalogy between the relationship doctors have
with patients and voters have with the governed is that individual
voters have only a vanishingly small chance of making a difference.
The expected harm of an incompetent individual vote is vanishingly
small, while the expected harm of incompetent individual medical
decisions is high.
However, perhaps the point holds anyway. Define a "collectively
harmful activity" as an activity in which a group is imposing or
threatening to impose harm, or unjust risk of harm, upon other
innocent people, but the harm will be imposed regardless of whether
individual members of that group drop out. It's plausible that
one might have an obligation to refrain from participating in such
activities, i.e., a duty to keep one's hands clean.
To illustrate, suppose a 100-member firing squad is about to shoot an
innocent child. Each bullet will hit the child at the same time, and
each shot would, on its own, be sufficient to kill her. You cannot
stop them, so the child will die regardless of what you do. Now,
suppose they offer you the opportunity to join in and shoot the child
with them. You can make the 101st shot. Again, the child will die
regardless of what you do. Is it permissible for you to join the firing
squad? Most people have a strong intuition that it is wrong to join
the squad and shoot the child. One plausible explanation of why it is
wrong is that there may be a general moral prohibition against
participating in these kinds of activities. In these kinds of cases,
we should try to keep our hands clean.
Perhaps this "clean-hands principle" can be generalized to
explain why individual acts of ignorant, irrational, or malicious
voting are wrong. The firing-squad example is somewhat analogous to
voting in an election. Adding or subtracting a shooter to the firing
squad makes no difference--the girl will die anyway. Similarly,
with elections, individual votes make no difference. In both cases,
the outcome is causally overdetermined. Still, the irresponsible voter
is much like a person who volunteers to shoot in the firing squad. Her
individual bad vote is of no consequence--just as an individual
shot is of no consequence--but she is participating in a
collectively harmful activity when she could easily keep her hands
clean (J. Brennan 2011a: 68-94).
## 4. The Justice of Compulsory Voting
Voting rates in many contemporary democracies are (according to many
observers) low, and seem in general to be falling. The United States,
for instance, barely manages about 60% in presidential elections and
45% in other elections (Brennan and Hill 2014: 3). Many other
countries have similarly low rates. Some democratic theorists,
politicians, and others think this is problematic, and advocate
compulsory voting as a solution. In a compulsory voting regime,
citizens are required to vote by law; if they fail to vote without a
valid excuse, they incur some sort of penalty.
One major argument for compulsory voting is what we might call the
Demographic or Representativeness Argument (Lijphart 1997; Engelen
2007; Galston 2011; Hill in J. Brennan and Hill 2014: 154-173;
Singh 2015). The argument begins by noting that in voluntary voting
regimes, citizens who choose to vote are systematically different from
those who choose to abstain. The rich are more likely to vote than the
poor. The old are more likely to vote than the young. Men are more
likely to vote than women. In many countries, ethnic minorities are
less likely to vote than ethnic majorities. More highly educated
people are more likely to vote than less highly educated people.
Married people are more likely to vote than non-married people.
Political partisans are more likely to vote than true independents
(Leighley and Nagler 1992; Evans 2003: 152-6). In short, under
voluntary voting, the electorate--the citizens who actually
choose to vote--are not fully representative of the public at
large. The Demographic Argument holds that since politicians tend to
give voters what they want, in a voluntary voting regime, politicians
will tend to advance the interests of advantaged citizens (who vote
disproportionately) over the disadvantaged (who tend not to vote).
Compulsory voting would tend to ensure that the disadvantaged vote in
higher numbers, and would thus tend to ensure that everyone's
interests are properly represented.
Relatedly, one might argue compulsory voting helps citizens overcome
an "assurance problem" (Hill 2006). The thought here is
that an individual voter realizes her individual vote has little
significance. What's important is that enough other voters like
her vote. However, she cannot easily coordinate with other voters and
ensure they will vote with her. Compulsory voting solves this problem.
For this reason, Lisa Hill (2006: 214-15) concludes,
"Rather than perceiving the compulsion as yet another unwelcome
form of state coercion, compulsory voting may be better understood as
a coordination necessity in mass societies of individual strangers
unable to communicate and coordinate their preferences."
Whether the Demographic Argument succeeds or not depends on a few
assumptions about voter and politician behavior. First, political
scientists overwhelmingly find that voters do not vote their
self-interest, but instead vote for what they perceive to be the
national interest. (See the dozens of papers cited at Brennan and Hill
2014: 38-9n28.) Second, it might turn out that disadvantaged
citizens are not informed enough to vote in ways that promote their
interests--they might not have sufficient social scientific
knowledge to know which candidates or political parties will help them
(Delli Carpini and Keeter 1996; Caplan 2007; Somin 2013). Third, it
may be that even in a compulsory voting regime, politicians can get
away with ignoring the policy preferences of most voters (Gilens 2012;
Bartels 2010).
In fact, contrary to many theorists' expectations, it appears
that compulsory voting has no significant effect on individual
political knowledge (that is, it does not induce ignorant voters to
become better informed), individual political conversation and
persuasion, individual propensity to contact politicians, the
propensity to work with others to address concerns, participation in
campaign activities, the likelihood of being contacted by a party or
politician, the quality of representation, electoral integrity, the
proportion of female members of parliament, support for small or third
parties, support for the left, or support for the far right (Birch
2009; Highton and Wolfinger 2001). Political scientists have also been
unable to demonstrate that compulsory voting leads to more egalitarian
or left-leaning policy outcomes. The empirical literature so far shows
that compulsory voting gets citizens to vote, but it's not clear
it does much else.
## 5. The Ethics of Vote Buying
Many citizens of modern democracies believe that vote buying and
selling are immoral (Tetlock 2000). Many philosophers agree; they
argue it is wrong to buy, trade, or sell votes (Satz 2010: 102; Sandel
2012: 104-5). Richard Hasen reviews the literature on vote
buying and concludes that people have offered three main arguments
against it. He says,
>
> Despite the almost universal condemnation of core vote buying,
> commentators disagree on the underlying rationales for its
> prohibition. Some offer an equality argument against vote buying: the
> poor are more likely to sell their votes than are the wealthy, leading
> to political outcomes favoring the wealthy. Others offer an efficiency
> argument against vote buying: vote buying allows buyers to engage in
> rent-seeking that diminishes overall social wealth. Finally, some
> commentators offer an inalienability argument against vote buying:
> votes belong to the community as a whole and should not be alienable
> by individual voters. This alienability argument may support an
> anti-commodification norm that causes voters to make public-regarding
> voting decisions. (Hasen 2000: 1325)
>
Two of the concerns here are consequentialist: the worry is that in a
regime where vote-buying is legal, votes will be bought and sold in
socially destructive ways. However, whether vote buying is destructive
is a subject of serious social scientific debate; some economists
think markets in votes would in fact produce greater efficiency
(Buchanan and Tullock 1962; Haefele 1971; Mueller 1973; Philipson and
Snyder 1996; Hasen 2000: 1332). The third concern is deontological: it
holds that votes are just not the kind of thing that ought to be for
sale, even if it turned out that vote-buying and selling did not lead
to bad consequences.
Many people think vote selling is wrong because it would lead to bad
or corrupt voting. But, if that is the problem, then perhaps the
permissibility of vote buying and selling should be assessed on a
case-by-case basis. Perhaps the rightness or wrongness of individual
acts of vote buying and selling depends entirely on how the vote
seller votes (J. Brennan 2011a: 135-160; Brennan and Jaworski
2015: 183-194). Suppose I pay a person to vote in a good way.
For instance, suppose I pay indifferent people to vote on behalf of
women's rights, or for the Correct Theory of Justice, whatever
that might be. Or, suppose I think turnout is too low, and so I pay a
well-informed person to vote her conscience. It is unclear why we
should conclude in either case that I have done something wrong,
rather than conclude that I have done everyone a small public service.
Certain objections to vote buying and selling appear to prove too
much; these objections lead to conclusions that the objectors are not
willing to support. For instance, one common argument against vote
selling is that paying a person to vote imposes an externality on
third parties. However, so does persuading others to vote or to vote
in certain ways (Freiman 2014: 762). If paying you to vote for
*X* is wrong because it imposes a third party cost, then for the
sake of consistency, I should also conclude that persuading you to
vote for *X*, say, on the basis of a good argument, is equally
problematic.
As another example, some object to voting markets on the grounds that
votes should be for the common good, rather than for narrow
self-interest (Satz 2010: 103; Sandel 2012: 10). Others say that
voting should "be an act undertaken only after collectively
deliberating about what it is in the common good" (Satz 2010:
103). Some claim that vote markets should be illegal for this reason.
Perhaps it's permissible to forbid vote selling because
commodified votes are likely to be cast against the common good.
However, if that is sufficient reason to forbid markets in votes, then
it is unclear why we should not, e.g., forbid highly ignorant,
irrational, or selfish voters from voting, as their votes are also
unusually likely to undermine the common good (Freiman 2014:
771-772). Further, these arguments appear to leave open that a
person could permissibly sell her vote, provided she does so after
deliberating and provided she votes for the common good. It might be
that if vote selling were legal, most or even all vote sellers would
vote in destructive ways, but that does not show that vote selling is
inherently wrong.
One pressing issue, though, is whether vote buying is compatible with
the secret ballot (Maloberti 2018). Regardless of whether vote buying
is enforced through legal means (such as through enforceable
contracts) or social means (such as through the reputation mechanism
on eBay or simply through social disapproval), enforcing vote buying
seems to require that voters in some way actively prove they voted in
various ways. But, if so, then this will partly eliminate the secret
ballot and possibly lead to increased clientelism, in which
politicians make targeted promises to particular bands of voters
rather than serve the common good (Maloberti 2018).
Not all objections to vote-buying have this consequentialist flavor.
Some argue that vote buying is wrong on deontological grounds, for
instance, on the grounds that vote buying is in some way incompatible
with the social meaning of voting (e.g. Walzer 1984). Some view voting
as an expressive act, and the meaning of that expression is
socially-determined. To buy and sell votes may signal disrespect to
others in light of this social meaning.
## 6. Who Should Be Allowed to Vote? Should Everyone Receive Equal Voting Rights?
The dominant view among political philosophers is that we ought to
have some sort of representative democracy, and that each adult ought
to have one vote, of equal weight to every other adult's, in any
election in her jurisdiction. This view has recently come under
criticism, though, both from friends and foes of democracy.
Before one even asks whether "one person, one vote" is the
right policy, one needs to determine just who counts as part of the
demos. Call this the boundary problem or the problem of constituting
the demos (Goodin 2007: 40; Ron 2017). Democracy is the rule of the
people. But one fundamental question is just who constitutes
"the people". This is no small problem. Before one can
judge that a democracy is fair, or adequately responds to
citizens' interests, one needs to know who "counts"
and who does not.
One might be inclined to say that everyone living under a particular
government's jurisdiction is part of the demos and is thus
entitled to a vote. However, in fact, most democracies exclude
children and teenagers, felons, the mentally infirm, and non-citizens
living in a government's territory from being able to vote, but
at the same time allow their citizens living in foreign countries to
vote (Lopez-Guerra 2014: 1).
There are a number of competing theories here. The "all affected
interests" theory (Dahl 1990a: 64) holds that anyone who is
affected by a political decision or a political institution is part of
the demos. The basic argument is that anyone who is affected by a
political decision-making process should have some say over that
process. However, this principle suffers from multiple problems. It
may be incoherent or useless, as we might not know or be able to know
who is affected by a decision until after the decision is made (Goodin
2007: 52). For example (taken from Goodin 2007: 53), suppose the UK
votes on whether to transfer 5% of its GDP to its former African
colonies. We cannot assess whether the members of the former African
colonies are among the affected interests until we know what the
outcome of the vote is. If the vote is yea, then they are affected; if
the vote is nay, then they are not. (See Owen 2012 for a response.)
Further, the "all affected interests" theory would often
include non-citizens and exclude citizens. Sometimes political
decisions made in one country have a significant effect on citizens of
another country; sometimes political decisions made in one country
have little or no effect on some of the citizens of that country.
One solution (Goodin 2007: 55) to this problem (of who counts as an
affected party) is to hold that all people with *possibly* or
*potentially* affected interests constitute part of the polity.
This principle implies, however, that for many decisions, the demos is
smaller than the nation-state, and for others, it is larger. For
instance, when the United States decides whether to elect a
warmongering or pacifist candidate, this affects not only Americans,
but a large percentage of people worldwide.
Other major theories offered as solutions to the boundary problem face
similar problems. For example, the coercion theory holds that anyone
subject to coercion from a political body ought to have a say
(Lopez-Guerra 2005). But this principle might also be seen
as over-inclusive (Song 2009), as it would require that resident
aliens, tourists, or even enemy combatants be granted a right to vote,
as they are also subject to a state's coercive power. Further,
who will be coerced depends on the outcome of a decision. If a state
decides to impose some laws, it will coerce certain people, and if the
state declines to impose those laws, then it will not. If we try to
overcome this by saying anyone *potentially* subject to a given
state's coercive power ought to have a say, then this seems to
imply that almost everyone worldwide should have a say in most
states' major decisions.
The commonsense view of the demos, i.e., that the demos includes all
and only adult members of a nation-state, may be hard to defend.
Goodin (2007: 49) proposes that what makes citizens special is that
their interests are interlinked. This may be an accidental feature of
arbitrarily-decided national borders, but once these borders are in
place, citizens will find that their interests tend to be more closely
linked to one another's than to those of citizens of other polities.
But whether this is
true is also highly contingent.
### 6.1 Democratic Challenges to One Person, One Vote
The idea of "One person, one vote" is supposedly grounded
on a commitment to egalitarianism. Some philosophers believe that
democracy with equal voting rights is necessary to ensure that
government gives equal consideration to everyone's interests
(Christiano 1996, 2008). However, it is not clear that giving every
citizen an equal right to vote reliably results in decisions that give
equal consideration to everyone's interests. In many decisions,
many citizens have little to nothing at stake, while other citizens
have a great deal at stake. Thus, one alternative proposal is that
citizens' votes should be weighted by how much they have a stake
in the decision. This preserves equality not by giving everyone an
equal chance of being decisive in every decision, but by giving
everyone's interests equal weight. Otherwise, in a system of one
person, one vote, issues that are deeply important to the few might
continually lose out to issues of only minor interest to the many
(Brighouse and Fleurbaey 2010).
There are a number of other independent arguments for this conclusion.
Perhaps proportional voting enhances citizens' autonomy, by
giving them greater control over those issues in which they have
greater stakes, while few would regard it as a significant loss of
autonomy were they to have reduced control over issues that do not
concern them. Further, though the argument for this conclusion is too
technical to cover here in depth (Brighouse and Fleurbaey 2010; List
2013), it may be that apportioning political power according to
one's stake in the outcome can overcome some of the well-known
paradoxes of democracy, such as the Condorcet Paradox (which shows that
democracies might have intransitive preferences, i.e., the majority
might prefer A to B, B to C, and yet also prefer C to A).
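The intransitivity at issue in the Condorcet Paradox can be made concrete with a minimal sketch. The three preference orderings below are the standard textbook example of a cyclic electorate, not taken from Brighouse and Fleurbaey or List; the function names are illustrative only.

```python
# Three voters with cyclic preference orderings (standard example):
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """Return True if a strict majority ranks option x above option y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Each pairwise contest is won 2-1, so the group's preference is
# cyclic: A beats B, B beats C, and yet C beats A.
```

Each individual voter has a perfectly transitive ranking; the intransitivity arises only at the level of the majority, which is what makes the paradox a problem for aggregation rather than for any single voter's rationality.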
However, even if this proposal seems plausible in theory, it is
unclear how a democracy might reliably instantiate this in practice.
Before allowing a vote, a democratic polity would need to determine to
what extent different citizens have a stake in the decision, and then
somehow weight their votes accordingly. In real life,
special-interests groups and others would likely try to use vote
weighting for their own ends. Citizens might regard unequal voting
rights as evidence of corruption or electoral manipulation (Christiano
2008: 34-45).
### 6.2 Non-Democratic Challenges to One Person, One Vote
Early defenders of democracy were concerned to show democracy is
superior to aristocracy, monarchy, or oligarchy. However, in recent
years, *epistocracy* has emerged as a major contender to
democracy (Estlund 2003, 2007; Landemore 2012). A system is said to be
epistocratic to the extent that the system formally allocates
political power on the basis of knowledge or political competence. For
instance, an epistocracy might give university-educated citizens
additional votes (Mill 1861), exclude citizens from voting unless they
can pass a voter qualification exam, weigh votes by each voter's
degree of political knowledge while correcting for the influence of
demographic factors, or create panels of experts who have the right to
veto democratic legislation (Caplan 2007; J. Brennan 2011b;
Lopez-Guerra 2014; Mulligan 2015).
Arguments for epistocracy generally center on concerns about
democratic incompetence. Epistocrats hold that democracy distributes
the right to vote promiscuously. Ample empirical
research has shown that the mean, median, and modal levels of basic
political knowledge (let alone social scientific knowledge) among
citizens are extremely low (Somin 2013; Caplan 2007; Delli Carpini and
Keeter 1996). Further, political knowledge makes a significant
difference in how citizens vote and what policies they support
(Althaus 1998, 2003; Caplan 2007; Gilens 2012). Epistocrats believe
that restricting or weighting votes would protect against some of the
downsides of democratic incompetence.
One argument for epistocracy is that the legitimacy of political
decisions depends upon them being made competently and in good faith.
Consider, as an analogy: In a criminal trial, the jury's
decision is high stakes; their decision can remove a person's
rights or greatly harm their life, liberty, welfare, or property. If a
jury made its decision out of ignorance, malice, whimsy, or on the
basis of irrational and biased thought processes, we arguably should
not and probably would not regard the jury's decision as
authoritative or legitimate. Instead, we think the criminal has a
right to a trial conducted by competent people in good faith. In many
respects, electoral decisions are similar to jury decisions: they also
are high stakes, and can result in innocent people losing their lives,
liberty, welfare, or property. If the legitimacy and authority of a
jury decision depends upon the jury making a competent decision in
good faith, then perhaps so should the legitimacy and authority of
most other governmental decisions, including the decisions that
electorates and their representatives make. Now, suppose, in light of
widespread voter ignorance and irrationality, it turns out that
democratic electorates tend to make incompetent decisions. If so, then
this seems to provide at least presumptive grounds for favoring
epistocracy over democracy (J. Brennan 2011b).
Some dispute whether epistocracy would in fact perform better than
democracy, even in principle. Epistocracy generally attempts to
generate better political outcomes by in some way raising the average
reliability of political decision-makers. Political scientists Lu Hong
and Scott Page (2004) proved a mathematical theorem showing that
under the right conditions, cognitive diversity among the participants
in a collective decision more strongly contributes to the group making
a smart decision than does increasing the individual
participants' reliability. According to the Hong-Page theorem, it is
possible that having a large number of diverse but unreliable
decision-makers in a collective decision will outperform having a
smaller number of less diverse but more reliable decision-makers.
There is some debate over whether the Hong-Page theorem has any
mathematical substance (Thompson 2014 claims it does not), whether
real-world political decisions meet the conditions of the theorem, and
if so, to what extent that justifies universal suffrage, or merely
shows that having widespread but restricted suffrage is superior to
having highly restricted suffrage (Landemore 2012; Somin 2013:
113-5).
Relatedly, Condorcet's Jury Theorem holds that under the right
conditions, provided the average voter is reliable, as more and more
voters are added to a collective decision, the probability that the
democracy will make the right choice approaches 1 (List and Goodin
2001). However, assuming the theorem applies to real-life democratic
decisions, whether the theorem supports or condemns democracy depends
on how reliable voters are. If voters do systematically worse than
chance (e.g., Althaus 2003; Caplan 2007), then the theorem instead
implies that large democracies almost always make the wrong
choice.
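The theorem's flip from blessing to curse is easy to see in a short calculation (a sketch; `majority_correct_prob` is my own helper name, and the competence values 0.51 and 0.49 are purely illustrative):

```python
from math import comb

def majority_correct_prob(n, p):
    """Probability that a strict majority of n independent voters, each
    correct with probability p, picks the right of two options (n is odd,
    so ties are impossible)."""
    k_min = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# With competence just above chance, group reliability climbs toward 1 as
# voters are added; just below chance, it falls toward 0 instead.
for p in (0.51, 0.49):
    print(p, [round(majority_correct_prob(n, p), 3) for n in (1, 101, 1001)])
```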
One worry about certain forms of epistocracy, such as a system in
which voters must earn the right to vote by passing an examination, is
that such systems might make decisions that are biased toward members
of certain demographic groups. After all, political knowledge is not
evenly dispersed among all demographic groups. (At the very least, the
kinds of knowledge political scientists have been studying are not
evenly distributed. Whether other kinds of knowledge are better
distributed is an open question.) On average, in the United States, on
measures of basic political knowledge, whites know more than blacks,
people in the Northeast know more than people in the South, men know
more than women, middle-aged people know more than the young or old,
and high-income people know more than the poor (Delli Carpini and
Keeter 1996: 137-177). If such a voter examination system were
implemented, the resulting electorate would be whiter, more male, richer,
more middle-aged, and better employed than the population at large.
Democrats might reasonably worry that for this very reason an
epistocracy would not take the interests of non-whites, women, the
poor, or the unemployed into proper consideration.
However, at least one form of epistocracy may be able to avoid this
objection. Consider, for instance, the "enfranchisement
lottery":
>
> The enfranchisement lottery consists of two devices. First, there
> would be a sortition to disenfranchise the vast majority of the
> population. Prior to every election, all but a random sample of the
> public would be excluded. I call this device the exclusionary
> sortition because it merely tells us who will not be entitled to vote
> in a given contest. Indeed, those who survive the sortition (the
> pre-voters) would not be automatically enfranchised. Like everyone in
> the larger group from which they are drawn, pre-voters would be
> assumed to be insufficiently competent to vote. This is where the
> second device comes in. To finally become enfranchised and vote,
> pre-voters would gather in relatively small groups to participate in a
> competence-building process carefully designed to optimize their
> knowledge about the alternatives on the ballot. (Lopez-Guerra
> 2014: 4; cf. Ackerman and Fishkin 2005)
>
Under this scheme, no one has any presumptive right to vote. Instead,
everyone has, by default, equal eligibility to be selected to become a
voter. Before the enfranchisement lottery takes place, candidates
would proceed with their campaigns as they do in democracy. However,
they campaign without knowing which citizens in particular will
eventually acquire the right to vote. Immediately before the election,
a random but representative subset of citizens is then selected by
lottery. These citizens are not automatically granted the right to
vote. Instead, the chosen citizens merely acquire permission to earn
the right to vote. To earn this right, they must then participate in
some sort of competence-building exercise, such as studying party
platforms or meeting in a deliberative forum with one another. In
practice this system might suffer corruption or abuse, but,
epistocrats respond, so does democracy in practice. For epistocrats,
the question is which system works better, i.e., produces the best or
most substantively just outcomes, all things considered.
One important deontological objection to epistocracy is that it may be
incompatible with public reason liberalism (Estlund 2007). Public
reason liberals hold that distribution of coercive political power is
legitimate and authoritative only if all reasonable people subject to
that power have strong enough grounds to endorse a justification for
that power (Vallier and D'Agostino 2013). By definition,
epistocracy imbues some citizens with greater power than others on the
grounds that these citizens have greater social scientific knowledge.
However, the objection goes, reasonable people could disagree about
just what counts as expertise and just who the experts are. If
reasonable people disagree in this way, then epistocracy distributes
political power on terms that not all reasonable people have conclusive
grounds to endorse, and so, by the public reason liberal's own standard,
its distribution of power is illegitimate. (See, however, Mulligan
2015.)
## 1. The Problem: Who *Should* be Elected?
Suppose that there is a group of 21 voters who need to make a decision
about which of four candidates should be elected. Let the names of the
candidates be \(A\), \(B\), \(C\) and \(D\). Your job, as a social
planner, is to determine which of these 4 candidates should win the
election given the *opinions* of all the voters. The first step
is to elicit the voters' opinions about the candidates. Suppose that
you ask each voter to rank the 4 candidates from best to worst (not
allowing ties). The following table summarizes the voters' rankings of
the candidates in this hypothetical election scenario.
| # Voters | Ranking |
| --- | --- |
| 3 | \(A\s B\s C\s D\) |
| 5 | \(A\s C\s B\s D\) |
| 7 | \(B\s D\s C\s A\) |
| 6 | \(C\s B\s D\s A\) |
Read the table as follows: Each row represents a ranking for a group of voters
in which candidates to the left are ranked higher. The numbers in the first column
indicate the number of voters with that particular
ranking. So, for example, the third row in the table indicates that
7 voters have the ranking \(B\s D\s C\s A\) which means that each of the 7 voters
rank \(B\) first, \(D\) second, \(C\) third and \(A\) last.
Suppose that, as the social planner, you do not have any personal
interest in the outcome of this election. Given the voters' expressed
opinions, which candidate should win the election? Since the voters
disagree about the ranking of the candidates, there is no obvious
candidate that best represents the group's opinion. If there were
only two candidates to choose from, there would be a very straightforward
answer: The winner should be the candidate or alternative that is supported
by more than 50 percent of the voters (cf. the discussion below about
May's Theorem in Section 4.2). However, if there are more than two
candidates, as in the above example, the statement "the
candidate that is supported by more than 50 percent of the
voters" can be interpreted in different ways, leading to
different ideas about who should win the election.
One candidate who, at first sight, seems to be a good choice to win
the election is \(A\). Candidate \(A\) is ranked first by more voters
than any other candidate. (\(A\) is ranked first by 8 voters,
\(B\) is ranked first by 7; \(C\) is ranked first by 6; and
\(D\) is not ranked first by any of the voters.) Of course, 13 people
rank \(A\) *last*. So, while more voters rank \(A\) first than
any other candidate, more than half of the voters rank \(A\) last.
This suggests that \(A\) should *not* be elected.
None of the voters rank \(D\) first. This fact alone does not rule out
\(D\) as a possible winner of the election. However, note that
every voter ranks candidate \(B\) above candidate \(D\). While this
does not mean that \(B\) should necessarily win the election, it does
suggest that \(D\) should not win the election.
The choice, then, boils down to \(B\) and \(C\). It turns out that there are good
arguments for each of \(B\) and \(C\) to be elected. The debate about
which of \(B\) or \(C\) should be elected started in the 18th century
as an argument between the two founding fathers of voting theory,
Jean-Charles de Borda (1733-1799) and M.J.A.N. de Caritat,
Marquis de Condorcet (1743-1794). For a history of voting theory
as an academic discipline, including Condorcet's and Borda's writings,
see McLean and Urken (1995). I sketch the intuitive arguments for the
election of \(B\) and \(C\) below.
*Candidate \(C\) should win*. Initially, this might seem like
an odd choice since both \(A\) and \(B\) receive more first place
votes than \(C\) (only 6 voters rank \(C\) first while 8 voters rank \(A\)
first and 7 voters rank \(B\) first). However, note
how the population would vote in the various two-way elections comparing
\(C\) with each of the other candidates:
| # Voters | \(C\) versus \(A\) | \(C\) versus \(B\) | \(C\) versus \(D\) |
| --- | --- | --- | --- |
| 3 | \(\bA\s \gB\s \bC\s \gD\) | \(\gA\s \bB\s \bC\s \gD\) | \(\gA\s \gB\s \bC\s \bD\) |
| 5 | \(\bA\s \bC\s \gB\s \gD\) | \(\gA\s \bC\s \bB\s \gD\) | \(\gA\s \bC\s \gB\s \bD\) |
| 7 | \(\gB\s \gD\s \bC\s \bA\) | \(\bB\s \gD\s \bC\s \gA\) | \(\gB\s \bD\s \bC\s \gA\) |
| 6 | \( \bC\s \gB\s \gD\s \bA\) | \( \bC\s \bB\s \gD\s \gA\) | \( \bC\s \gB\s \bD\s \gA\) |
| Totals: | 13 rank \(C\) above \(A\); 8 rank \(A\) above \(C\) | 11 rank \(C\) above \(B\); 10 rank \(B\) above \(C\) | 14 rank \(C\) above \(D\); 7 rank \(D\) above \(C\) |
Condorcet's idea is that \(C\) should be declared the winner since she beats
every other candidate in a one-on-one election. A candidate with this
property is called a **Condorcet winner**. We can similarly define
a **Condorcet loser**. In fact, in the above example, candidate
\(A\) is the Condorcet loser since she loses to every other candidate
in a one-on-one election.
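These pairwise tallies are easy to reproduce mechanically. The sketch below encodes the 21-voter profile as a dictionary from rankings to voter counts (an encoding of my own choosing, not anything standard in the literature) and recomputes every head-to-head contest:

```python
# The 21-voter profile: each ranking (best first) -> number of voters with it.
profile = {
    ("A", "B", "C", "D"): 3,
    ("A", "C", "B", "D"): 5,
    ("B", "D", "C", "A"): 7,
    ("C", "B", "D", "A"): 6,
}
candidates = ["A", "B", "C", "D"]

def support(x, y):
    """Number of voters ranking x above y."""
    return sum(n for r, n in profile.items() if r.index(x) < r.index(y))

# C beats every other candidate head-to-head (the Condorcet winner);
# A loses every head-to-head contest (the Condorcet loser).
for x in candidates:
    beaten = [y for y in candidates if y != x and support(x, y) > support(y, x)]
    print(f"{x} beats: {beaten}")
```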
*Candidate \(B\) should win*. Consider \(B\)'s performance in
the one-on-one elections.
| # Voters | \(B\) versus \(A\) | \(B\) versus \(C\) | \(B\) versus \(D\) |
| --- | --- | --- | --- |
| 3 | \(\bA\s \bB\s \gC\s \gD\) | \(\gA\s \bB\s \bC\s \gD\) | \(\gA\s \bB\s \gC\s \bD\) |
| 5 | \(\bA\s \gC\s \bB\s \gD\) | \(\gA\s \bC\s \bB\s \gD\) | \(\gA\s \gC\s \bB\s \bD\) |
| 7 | \(\bB\s \gD\s \gC\s \bA\) | \(\bB\s \gD\s \bC\s \gA\) | \(\bB\s \bD\s \gC\s \gA\) |
| 6 | \( \gC\s \bB\s \gD\s \bA\) | \( \bC\s \bB\s \gD\s \gA\) | \( \gC\s \bB\s \bD\s \gA\) |
| Totals: | 13 rank \(B\) above \(A\); 8 rank \(A\) above \(B\) | 10 rank \(B\) above \(C\); 11 rank \(C\) above \(B\) | 21 rank \(B\) above \(D\); 0 rank \(D\) above \(B\) |
Candidate \(B\) performs the same as \(C\) in a head-to-head election
with \(A\), loses to \(C\) by only one vote and beats \(D\) in a
landslide (everyone prefers \(B\) over \(D\)). Borda suggests that we should
take into account *all* of these facts when determining which
candidate best represents the overall group opinion. To do this, Borda
assigns a score to each candidate that reflects how much support he or she has
among the electorate. Then, the
candidate with the largest score is declared the winner. One way to
calculate the score for each candidate is as follows (I will give an
alternative method, which is easier to use, in the next section):
* \(A\) receives 24 points (8 votes in each of the three
head-to-head elections)
* \(B\) receives 44 points (13 points in the competition against
\(A\), plus 10 in the competition against \(C\) plus 21 in the
competition against \(D\))
* \(C\) receives 38 points (13 points in the competition against
\(A\), plus 11 in the competition against \(B\) plus 14 in the
competition against \(D\))
* \(D\) receives 20 points (13 points in the competition against
\(A\), plus 0 in the competition against \(B\) plus 7 in the
competition against \(C\))
The candidate with the highest score (in this case, \(B\)) is the one
who should be elected.
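The same pairwise counts yield the Borda scores directly: each candidate's score is the total support it receives summed over its three head-to-head contests. A sketch, using the same dictionary encoding of the profile as above:

```python
# The 21-voter profile: each ranking (best first) -> number of voters with it.
profile = {
    ("A", "B", "C", "D"): 3,
    ("A", "C", "B", "D"): 5,
    ("B", "D", "C", "A"): 7,
    ("C", "B", "D", "A"): 6,
}
candidates = ["A", "B", "C", "D"]

def support(x, y):
    """Number of voters ranking x above y."""
    return sum(n for r, n in profile.items() if r.index(x) < r.index(y))

# Each candidate's score: total head-to-head support against all rivals.
scores = {x: sum(support(x, y) for y in candidates if y != x) for x in candidates}
print(scores)  # {'A': 24, 'B': 44, 'C': 38, 'D': 20}, so B wins
```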
Both Condorcet and Borda suggest comparing candidates in
one-on-one elections in order to determine the winner. While Condorcet
tallies how many of the head-to-head races each candidate wins, Borda
suggests that one should look at the margin of victory or loss.
The debate about whether to elect the Condorcet winner or the Borda
winner is not settled. Proponents of electing the Condorcet winner
include Mathias Risse (2001, 2004, 2005) and Steven Brams (2008);
proponents of electing the Borda winner include Donald Saari (2003,
2006) and Michael Dummett (1984). See Section 3.1.1
for further issues comparing the Condorcet and Borda winners.
The take-away message from this discussion is that in many election
scenarios with more than two candidates, there may not always be one
obvious candidate that best reflects the overall group opinion. The
remainder of this entry will discuss different methods, or procedures,
that can be used to determine the winner(s) given a group of
voters' opinions. Each of these methods is intended to be an answer to
the following question:
Given a group of people faced with some decision, how should a central
authority combine the individual opinions so as to best reflect the
"overall group opinion"?
A complete analysis of this question would incorporate a number of
different issues ranging from central topics in political philosophy
about the nature of democracy and the "will of the people"
to the psychology of decision making. In this article, I focus on one
aspect of this question: the formal analysis of algorithms that aggregate
the opinions of a group of voters (i.e., voting methods). Consult, for
example, Riker 1982, Mackie 2003, and Christiano 2008 for a more comprehensive
analysis of the above question, incorporating many of the issues raised in
this article.
### 1.1 Notation
In this article, I will keep the formal details to a minimum; however,
it is useful at this point to settle on some terminology. Let \(V\)
and \(X\) be finite sets. The elements of \(V\) are called voters and
I will use lowercase letters \(i, j, k, \ldots\) or integers \(1, 2,
3, \ldots\) to denote them. The elements of \(X\) are called
candidates, or alternatives, and I will use uppercase letters \(A, B,
C, \ldots \) to denote them.
Different voting methods require different types of information from
the voters as input. The input requested from the voters are called
**ballots**. One standard example of a ballot is a **ranking**
of the set of candidates. Formally, a ranking of \(X\) is a relation
\(P\) on \(X\), where \(Y\mathrel{P} Z\) means that "\(Y\) is
ranked above \(Z\)," satisfying three constraints: (1) \(P\) is
*complete*: any two distinct candidates are ranked (for all
candidates \(Y\) and \(Z\), if \(Y\ne Z\), then either \(Y\mathrel{P}
Z\) or \(Z\mathrel{P} Y\)); (2) \(P\) is *transitive*: if a
candidate \(Y\) is ranked above a candidate \(W\) and \(W\) is
ranked above a candidate \(Z\), then \(Y\) is ranked above
\(Z\) (for all
candidates \(Y, Z\), and \(W\), if \(Y\mathrel{P} W\) and \(W\mathrel{P} Z\), then \(Y\mathrel{P} Z\)); and (3) \(P\) is *irreflexive*: no candidate is ranked
above itself (there is no candidate \(Y\) such that \(Y\mathrel{P}
Y\)). For example, suppose that there are three candidates \(X =\{A,
B, C\}\). Then, the six possible rankings of \(X\) are listed in the
following table:
| # Voters | Ranking |
| --- | --- |
| \(n\_1\) | \(A\s B\s C\) |
| \(n\_2\) | \(A\s C\s B\) |
| \(n\_3\) | \(B\s A\s C\) |
| \(n\_4\) | \(B\s C\s A\) |
| \(n\_5\) | \(C\s A\s B\) |
| \(n\_6\) | \(C\s B\s A\) |
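The three constraints on a ranking can be checked mechanically. The sketch below represents a relation as a set of ordered pairs, with `(Y, Z)` read as "\(Y\) is ranked above \(Z\)" (the helper names are my own):

```python
def is_ranking(P, X):
    """True if relation P (a set of (Y, Z) pairs, read 'Y above Z')
    is complete, transitive, and irreflexive on the candidate set X."""
    complete = all((y, z) in P or (z, y) in P
                   for y in X for z in X if y != z)
    transitive = all((y, z) in P
                     for (y, w1) in P for (w2, z) in P if w1 == w2)
    irreflexive = all(y != z for (y, z) in P)
    return complete and transitive and irreflexive

def relation_of(order):
    """The ranking relation induced by a tuple listing candidates best first."""
    return {(order[i], order[j])
            for i in range(len(order)) for j in range(i + 1, len(order))}

print(is_ranking(relation_of(("A", "B", "C")), {"A", "B", "C"}))  # True
print(is_ranking({("A", "B"), ("B", "C")}, {"A", "B", "C"}))      # False: A, C unranked
```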
I can now be more precise about the definition of a Condorcet winner
(loser). Given a ranking from each voter, the **majority
relation** orders the candidates in terms of how they perform in
one-on-one elections. More precisely, for candidates \(Y\) and \(Z\),
write \(Y \mathrel{>\_M} Z\), provided that more voters rank candidate
\(Y\) above candidate \(Z\) than the other way around. So, if the
distribution of rankings is given in the above table, we have:
\[\begin{align}
A\mathrel{>\_M} B\ &\text{ just in case } n\_1 + n\_2 + n\_5 > n\_3 + n\_4 + n\_6 \\
A\mathrel{>\_M} C\ &\text{ just in case } n\_1 + n\_2 + n\_3 > n\_4 + n\_5 + n\_6 \\
B \mathrel{>\_M} C\ &\text{ just in case } n\_1 + n\_3 + n\_4 > n\_2 + n\_5 + n\_6
\end{align}\]
A candidate \(Y\) is called the **Condorcet winner** in an election
scenario if \(Y\) is the maximum of the majority ordering \(>\_M\) for
that election scenario (that is, \(Y\) is the Condorcet winner if
\(Y\mathrel{>\_M} Z\) for all other candidates \(Z\)). The **Condorcet
loser** is the candidate that is the minimum of the majority
ordering.
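Stated as code, the definition looks like this (a sketch; the function returns `None` when no candidate is the maximum of the majority relation, which can happen, since the majority relation may cycle):

```python
def condorcet_winner(profile, candidates):
    """Return the Condorcet winner, or None if no candidate is the maximum
    of the majority relation. profile maps rankings (best first) to counts."""
    def majority_prefers(y, z):
        # More voters rank y above z than the other way around.
        above = sum(n for r, n in profile.items() if r.index(y) < r.index(z))
        return above > sum(profile.values()) - above
    for y in candidates:
        if all(majority_prefers(y, z) for z in candidates if z != y):
            return y
    return None

# The 21-voter example from Section 1:
profile = {
    ("A", "B", "C", "D"): 3,
    ("A", "C", "B", "D"): 5,
    ("B", "D", "C", "A"): 7,
    ("C", "B", "D", "A"): 6,
}
print(condorcet_winner(profile, "ABCD"))  # C

# A Condorcet cycle: each candidate loses to some other, so there is no winner.
cycle = {("A", "B", "C"): 1, ("B", "C", "A"): 1, ("C", "A", "B"): 1}
print(condorcet_winner(cycle, "ABC"))  # None
```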
Rankings are one type of ballot. In this article, we will see examples
of other types of ballots, such as selecting a single candidate,
selecting a subset of candidates or assigning grades to candidates.
Given a set of ballots \(\mathcal{B}\), a **profile** for a set of
voters specifies the ballot selected by each voter. Formally, a
profile for set of voters \(V=\{1,\ldots, n\}\) and a set of ballots
\(\mathcal{B}\) is a sequence \(\bb=(b\_1,\ldots, b\_n)\), where
for each voter \(i\), \(b\_i\) is the ballot from \(\mathcal{B}\)
submitted by voter \(i\).
A **voting method** is a function that assigns to each possible
profile a *group decision*. The group decision may be a single
candidate (the winning candidate), a set of candidates (when ties are
allowed), or an ordering of the candidates (possibly allowing ties).
Note that since a profile identifies the voter associated with each ballot, a
voting method may take this information into account. This means that
voting methods can be designed that select a winner (or winners) based only
on the ballots of some subset of voters while ignoring all the other voters' ballots.
An extreme example of this is the so-called Arrovian dictatorship for voter \(d\)
that assigns to each profile the candidate ranked first by \(d\).
A natural way to rule out these types of voting methods is to require that
a voting method is **anonymous**: the group decision should
depend only on the number of voters that chose each ballot. This means that
if two profiles are permutations of each other, then a voting method that is
anonymous must assign the same group decision to both profiles. When studying
voting methods that are anonymous, it is convenient to assume the inputs are **anonymized
profiles**. An anonymous profile for a set of ballots
\(\mathcal{B}\) is a function from \(\mathcal{B}\) to the set of
integers \(\mathbb{N}\). The election scenario discussed in the
previous section is an example of an anonymized profile (assuming that
each ranking not displayed in the table is assigned the number 0). In
the remainder of this article (unless otherwise specified), I will
restrict attention to anonymized profiles.
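A minimal illustration of the difference between profiles and anonymized profiles, using Python's `Counter` as the function from ballots to counts:

```python
from collections import Counter

# A profile lists the ballot of each voter in order; here, ballots are rankings.
profile_1 = [("A", "B", "C"), ("B", "A", "C"), ("A", "B", "C")]
profile_2 = [("B", "A", "C"), ("A", "B", "C"), ("A", "B", "C")]  # voters 1 and 2 swapped

# Anonymizing keeps only how many voters chose each ballot, so an anonymous
# voting method must return the same group decision on both profiles.
print(Counter(profile_1) == Counter(profile_2))  # True
```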
I conclude this section with a few comments on the relationship
between the ballots in a profile and the voters' opinions about the
candidates. Two issues are important to keep in mind. First, the
ballots used by a voting method are intended to reflect *some*
aspect of the voters' opinions about the candidates. Voters may choose
a ballot that best expresses their personal preference about the set
of candidates or their judgements about the relative strengths of the
candidates. A common assumption in the voting theory literature is
that a ranking of the set of candidates expresses a voter's
*ordinal* preference ordering over the set of candidates (see
the entry on preferences, Hansson and Grüne-Yanoff 2009, for an
extended discussion of issues surrounding the formal modeling of
preferences). Other types of ballots represent information that cannot
be inferred directly from a voter's *ordinal* preference
ordering, for example, by describing the *intensity* of a
preference for a particular candidate (see Section 2.3). Second, it is
important to be precise about the type of considerations voters take
into account when selecting a ballot. One approach is to assume that
voters choose *sincerely* by selecting the ballot that best
reflects their opinion about the different candidates. A second
approach assumes that the voters choose *strategically*. In
this case, a voter selects a ballot that she *expects* to lead
to her most desired outcome given the information she has about how
the other members of the group will vote. Strategic voting is an
important topic in voting theory and social choice theory (see Taylor
2005 and Section 3.3 of List 2013 for a discussion and pointers to the
literature), but in this article, unless otherwise stated, I assume
that voters choose sincerely (cf. Section 4.1).
## 2. Examples of Voting Methods
A quick survey of elections held in different democratic societies
throughout the world reveals a wide variety of voting methods. In this
section, I discuss some of the key methods that have been analyzed in
the voting theory literature. These methods may be of interest because
they are widely used (e.g., Plurality Rule or Plurality Rule with
Runoff) or because they are of theoretical interest (e.g., Dodgson's
method).
I start with the most widely used method:
**Plurality Rule**:
Each voter selects one candidate (or none if voters can abstain), and
the candidate(s) with the most votes win.
Plurality rule (also called **First Past the Post**) is a very
simple method that is widely used despite its many problems. The most
pervasive problem is the fact that plurality rule can elect a
Condorcet loser. Borda (1784) observed this phenomenon in the 18th
century (see also the example from Section 1).
| # Voters | Ranking |
| --- | --- |
| 1 | \(A\s B\s C\) |
| 7 | \(A\s C\s B\) |
| 7 | \(B\s C\s A\) |
| 6 | \(C\s B\s A\) |
Candidate \(A\) is the Condorcet loser (both \(B\) and \(C\) beat
candidate \(A\), 13 - 8); however, \(A\) is the plurality rule
winner (assuming the voters vote for the candidate that they rank first).
In fact, the plurality ranking (\(A\) is first with 8
votes, \(B\) is second with 7 votes and \(C\) is third with 6
votes) reverses the majority ordering \(C\mathrel{>\_M} B\mathrel{>\_M}
A\). See Laslier 2012 for further criticisms of Plurality Rule and
comparisons with other voting methods discussed in this article. One
response to the above phenomenon is to require that candidates pass a
certain threshold to be declared the winner.
**Quota Rule**:
Suppose that \(q\), called the **quota**, is any number between 0
and 1. Each voter selects one candidate (or none if voters can
abstain), and the winners are the candidates that receive at least
\(q \times \# V\) votes, where \(\# V\) is the number of voters.

**Majority Rule** is a quota rule with \(q=0.5\) (a candidate is the
**strict** or **absolute** majority winner if that candidate
receives strictly more than \(0.5 \times \# V\) votes). **Unanimity
Rule** is a quota rule with \(q=1\).
An important problem with quota rules is that they do not identify a
winner in every election scenario. For instance, in the above election
scenario, there are no majority winners since none of the candidates
are ranked first by more than 50% of the voters.
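For the election scenario above, this failure is easy to verify (a sketch; `quota_winners` is my own name, and I apply it to the first-place votes from the plurality example):

```python
def quota_winners(first_place_votes, q):
    """Winners under a quota rule: candidates with at least q * (#voters) votes."""
    total = sum(first_place_votes.values())
    return {x for x, v in first_place_votes.items() if v >= q * total}

votes = {"A": 8, "B": 7, "C": 6}   # first-place votes in the example above
print(quota_winners(votes, 0.5))   # set(): no candidate reaches half of 21 votes
print(quota_winners(votes, 0.25))  # with a lower quota, all three candidates pass
```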
A criticism of both plurality and quota rules is that they severely
limit what voters can express about their opinions of the candidates.
In the remainder of this section, I discuss voting methods that use
ballots that are more expressive than simply selecting a single
candidate. Section 2.1 discusses voting methods that require voters to
rank the alternatives. Section 2.2 discusses voting methods that
require voters to assign grades to the alternatives (from some fixed
set of grades). Finally, Section 2.3 discusses two voting methods in
which the voters may have different levels of influence on the group
decision. In this article, I focus on voting methods that either are
familiar or help illustrate important ideas. Consult Brams and
Fishburn 2002, Felsenthal 2012, and Nurmi 1987 for discussions of
voting methods not covered in this article.
### 2.1 Ranking Methods: Scoring Rules and Multi-Stage Methods
The voting methods discussed in this section require the voters to
**rank** the candidates (see section 1.1 for the definition of a
ranking). Providing a ranking of the candidates is much more
expressive than simply selecting a single candidate. However,
ranking *all* of the candidates can be very demanding,
especially when there is a large number of them, since it can be
difficult for voters to make distinctions between all the
candidates. The most well-known example of a voting method that uses
the voters' rankings is Borda Count:
**Borda Count**:
Each voter provides a ranking of the candidates. Then, a score (the
Borda score) is assigned to each candidate by a voter as follows: If
there are \(n\) candidates, give \(n-1\) points to the candidate ranked
first, \(n-2\) points to the candidate ranked second,..., 1 point to
the candidate ranked second to last and 0 points to candidate ranked
last. So, the Borda score of candidate \(A\), denoted \(\BS(A)\), is
calculated as follows (where \(\#U\) denotes the number elements in
the set \(U)\):
\[\begin{align}
\BS(A) =\ &(n-1)\times \# \{i\ |\ i \text{ ranks \(A\) first}\}\\
&+ (n-2)\times \# \{i\ |\ i \text{ ranks \(A\) second}\} \\
&+ \cdots \\
&+ 1\times \# \{i\ |\ i \text{ ranks \(A\) second to last}\}\\
&+ 0\times \# \{i\ |\ i \text{ ranks \(A\) last}\}
\end{align}\]
The candidate with the highest Borda score wins.
Recall the example discussed in the introduction to Section 1. For
each alternative, the Borda scores can be calculated using the above
method:
\[\begin{align}
\BS(A) &= 3 \times 8 + 2 \times 0 + 1 \times 0 + 0 \times 13 = 24 \\
\BS(B) &= 3 \times 7 + 2 \times 9 + 1 \times 5 + 0 \times 0 = 44 \\
\BS(C) &= 3 \times 6 + 2 \times 5 + 1 \times 10 + 0 \times 0 = 38 \\
\BS(D) &= 3 \times 0 + 2 \times 7 + 1 \times 6 + 0 \times 8 = 20
\end{align}\]
Borda Count is an example of a **scoring rule**. A scoring rule is
any method that calculates a score based on weights assigned to
candidates according to where they fall in the voters' rankings. That
is, a scoring rule for \(n\) candidates is defined as follows: Fix a
sequence of numbers \((s\_1, s\_2, \ldots, s\_n)\) where \(s\_k\ge
s\_{k+1}\) for all \(k=1,\ldots, n-1\). For each \(k\), \(s\_k \)
is the score assigned to an alternative ranked in position \(k\).
Then, the score for alternative \(A\), denoted \(Score(A)\), is
calculated as follows:
\[\begin{align}
\textit{Score}(A)=\ &s\_1\times \# \{i\ |\ i \text{ ranks \(A\) first}\}\\
&+ s\_2\times \# \{i\ |\ i \text{ ranks \(A\) second}\}\\
&+ \cdots \\
&+ s\_n\times \# \{i\ |\ i \text{ ranks \(A\) last}\}.
\end{align}\]
Borda count for \(n\) alternatives uses scores \((n-1, n-2, \ldots,
0)\) (call \(\BS(X)\) the Borda score for candidate \(X\)).
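The general definition translates directly into code. The sketch below recomputes the Borda scores for the Section 1 example and also applies a made-up score vector \((4, 2, 1, 0)\), included purely for illustration:

```python
def scoring_rule(profile, candidates, weights):
    """Score each candidate, where weights[k] is the number of points for
    being ranked in position k (best first). profile maps rankings to counts."""
    total = {x: 0 for x in candidates}
    for ranking, n in profile.items():
        for pos, x in enumerate(ranking):
            total[x] += n * weights[pos]
    return total

# The 21-voter example from Section 1:
profile = {
    ("A", "B", "C", "D"): 3,
    ("A", "C", "B", "D"): 5,
    ("B", "D", "C", "A"): 7,
    ("C", "B", "D", "A"): 6,
}
print(scoring_rule(profile, "ABCD", (3, 2, 1, 0)))  # Borda: A 24, B 44, C 38, D 20
print(scoring_rule(profile, "ABCD", (4, 2, 1, 0)))  # a hypothetical score vector
```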
Note that Plurality Rule can be viewed as a scoring rule that
assigns 1 point to the first ranked candidate and 0 points to the
other candidates. So, the **plurality score** of a candidate \(X\) is the number
of voters that rank \(X\) first. Building on this idea, **\(k\)-Approval Voting**
is a scoring method that gives 1 point to each candidate that is
ranked in position \(k\) or higher, and 0 points to all other
candidates. To illustrate \(k\)-Approval Voting, consider the
following election scenario:
| # Voters | Ranking |
| --- | --- |
| 2 | \(A\s D\s B\s C\) |
| 2 | \(B\s D\s A\s C\) |
| 1 | \(C\s A\s B\s D\) |
* The winners according to 1-Approval Voting (which is the same as
Plurality Rule) are \(A\) and \(B.\)
* The winner according to 2-Approval Voting is \(D.\)
* The winners according to 3-Approval Voting are \(A\) and \(B.\)
Note that the Condorcet winner is \(A\), so none of the above
methods *guarantee* that the Condorcet winner is elected
(whether \(A\) is elected using 1-Approval or 3-Approval depends on
the tie-breaking mechanism that is used).
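These results can be checked with a small variant of the scoring-rule computation (a sketch; tie-breaking is left out, so the function returns the full set of top scorers):

```python
def k_approval_winners(profile, candidates, k):
    """Winners under k-Approval Voting: 1 point per voter ranking a candidate
    in the top k positions; all top scorers are returned."""
    total = {x: 0 for x in candidates}
    for ranking, n in profile.items():
        for x in ranking[:k]:
            total[x] += n
    return {x for x in total if total[x] == max(total.values())}

# The 5-voter example above:
profile = {
    ("A", "D", "B", "C"): 2,
    ("B", "D", "A", "C"): 2,
    ("C", "A", "B", "D"): 1,
}
for k in (1, 2, 3):
    print(k, k_approval_winners(profile, "ABCD", k))
```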
A second way to make a voting method sensitive to more than the
voters' top choice is to hold "multi-stage" elections. The
idea is to successively remove candidates that perform poorly in the
election until there is one candidate that is ranked first by more
than 50% of the voters (i.e., there is a strict majority winner). The
different stages can be actual "runoff" elections in which
voters are asked to evaluate a reduced set of candidates; or they can
be built in to the way the winner is calculated by asking voters to
submit rankings over the set of all candidates. The first example of a
multi-stage method is used to elect the French president.
**Plurality with Runoff**:
Start with a plurality vote to determine the top two candidates (the
candidates ranked first and second according to their plurality scores).
If a candidate is ranked first by more than 50% of the voters, then
that candidate is declared the winner. If there is no candidate with
a strict majority of first place votes, then there is a runoff
between the top two candidates (or more if there are ties). The
candidate(s) with the most votes in the runoff elections is(are) declared the
winner(s).
Rather than focusing on the top two candidates, one can also
iteratively remove the candidate(s) with the fewest first-place votes:
**The Hare Rule**:
The ballots are rankings of the candidates. If a candidate is ranked
first by more than 50% of the voters, then that candidate is declared
the winner. If there is no candidate with a strict majority of first
place votes, repeatedly delete the candidate or candidates that
receive the fewest first-place votes (i.e., the candidate(s) with the lowest plurality
score(s)). The first candidate to be ranked
first by a strict majority of voters is declared the winner (if there is
no such candidate, then the remaining candidate(s) are declared the
winners).
The Hare Rule is also called **Ranked-Choice Voting**, **Alternative Vote**, and
**Instant Runoff**. If there are only three candidates, then the above two voting methods
are the same (removing the candidate with the lowest plurality score is
the same as keeping the two candidates with highest and second-highest plurality score). The following example
shows that they can select different winners when there are more than
three candidates:
| # Voters | Ranking |
| --- | --- |
| 7 | \(A\s B\s C\s D\) |
| 5 | \(B\s C\s D\s A\) |
| 4 | \(D\s B\s C\s A\) |
| 3 | \(C\s D\s A\s B\) |
Candidate \(A\) is the Plurality with Runoff winner: Candidates \(A\)
and \(B\) are the top two candidates, being ranked first by 7 and 5
voters, respectively. In the runoff election (using the rankings from
the above table), the groups voting for candidates \(C\) and \(D\)
transfer their support to candidates \(B\) and \(A,\) respectively,
with \(A\) winning 10 - 9.
Candidate \(D\) is the Hare Rule winner: In the first round, candidate
\(C\) is eliminated since she is only ranked first by 3 voters. This
group's votes are transferred to \(D\), giving him 7 votes. This means
that in the second round, candidate \(B\) is ranked first by the
fewest voters (5 voters rank \(B\) first in the profile with candidate
\(C\) removed), and so is eliminated. After the elimination of
candidate \(B\), candidate \(D\) has a strict majority of the
first-place votes: 12 voters ranking him first (note that in this
round the group in the second column transfers all their votes to
\(D\) since \(C\) was eliminated in an earlier round).
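The elimination steps just traced can be reproduced with a direct implementation of the Hare Rule (a sketch; as in the definition above, all candidates tied for the fewest first-place votes are removed at once):

```python
def hare_winners(profile, candidates):
    """Hare Rule: repeatedly delete the candidate(s) with the fewest
    first-place votes until some remaining candidate is ranked first by a
    strict majority (or until only tied candidates remain)."""
    remaining = set(candidates)
    voters = sum(profile.values())
    while True:
        firsts = {x: 0 for x in remaining}
        for ranking, n in profile.items():
            # Each ballot counts for its highest-ranked surviving candidate.
            firsts[next(x for x in ranking if x in remaining)] += n
        majority = {x for x in remaining if firsts[x] > voters / 2}
        if majority:
            return majority
        losers = {x for x in remaining if firsts[x] == min(firsts.values())}
        if losers == remaining:  # all tied: the remaining candidates win
            return remaining
        remaining -= losers

# The 19-voter example above:
profile = {
    ("A", "B", "C", "D"): 7,
    ("B", "C", "D", "A"): 5,
    ("D", "B", "C", "A"): 4,
    ("C", "D", "A", "B"): 3,
}
print(hare_winners(profile, "ABCD"))  # {'D'}
```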
The core idea of multi-stage methods is to successively remove
candidates that perform "poorly" in an election. For the Hare Rule,
performing poorly is interpreted as receiving the fewest first place
votes. There are other ways to identify "poorly performing" candidates
in an election scenario. For instance, the Coombs Rule successively
removes candidates that are ranked last by the most voters (see
Grofman and Feld 2004 for an overview of Coombs Rule).
**Coombs Rule**:
The ballots are rankings of the candidates. If a candidate is ranked
first by more than 50% of the voters, then that candidate is declared
the winner. If there is no candidate with a strict majority of first
place votes, repeatedly delete the candidate or candidates that
receive the most last-place votes. The first candidate to be ranked
first by a strict majority of voters is declared the winner (if there is
no such candidate, then the remaining candidate(s) are declared the
winners).
In the above example, candidate \(B\) wins the election using Coombs
Rule. In the first round, \(A\), with 9 last-place votes, is
eliminated. Then, candidate \(B\) receives 12 first-place votes, which
is a strict majority, and so is declared the winner.
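The same elimination loop, with "most last-place votes" as the criterion for poor performance, gives a sketch of Coombs Rule; running it on the profile above confirms that \(B\) wins.

```python
from collections import Counter

def coombs_winners(profile):
    """Coombs Rule: repeatedly eliminate the candidate(s) ranked last
    (among the remaining candidates) by the most voters, until some
    candidate is ranked first by a strict majority of voters."""
    total = sum(count for count, _ in profile)
    remaining = set(profile[0][1])
    while True:
        firsts, lasts = Counter(), Counter()
        for count, ranking in profile:
            live = [c for c in ranking if c in remaining]
            firsts[live[0]] += count
            lasts[live[-1]] += count
        best = max(firsts.values())
        if best > total / 2:
            return {c for c, v in firsts.items() if v == best}
        worst = max(lasts[c] for c in remaining)
        losers = {c for c in remaining if lasts[c] == worst}
        if losers == remaining:
            return remaining
        remaining -= losers

profile = [(7, "ABCD"), (5, "BCDA"), (4, "DBCA"), (3, "CDAB")]
```

Here `coombs_winners(profile)` returns `{'B'}`: candidate \(A\), with 9 last-place votes, is eliminated first, after which \(B\) has 12 first-place votes.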
There is a technical issue that is important to keep in mind regarding
the above definitions of the multi-stage voting methods. When identifying
the poorly performing candidates in each round, there may be ties (i.e., there
may be more than one candidate with the lowest plurality score or more than one candidate
ranked last by the most voters). In the above definitions, I assume that all of the poorly
performing candidates will be removed in each round. An alternative approach would use a tie-breaking rule to select one of the poorly performing candidates to be removed at each round.
### 2.2 Voting by Grading
The voting methods discussed in this section can be viewed as
generalizations of scoring methods, such as Borda Count. In a scoring
method, a voter's ranking is an assignment of *grades* (e.g.,
"1st place", "2nd place", "3rd place", ... , "last place") to the
candidates. Requiring voters to rank all the candidates means that (1)
every candidate is assigned a grade, (2) there are the same number of
possible grades as the number of candidates, and (3) different
candidates must be assigned different grades. In this section, we drop
assumptions (2) and (3), assuming a fixed number of grades for every
set of candidates and allowing different candidates to be assigned the
same grade.
The first example gives voters the option to either select a candidate
that they want to vote *for* (as in plurality rule) or to
select a candidate that they want to vote *against*.
**Negative Voting**:
Each voter is allowed to choose one candidate to either vote
*for* (giving the candidate one point) or to vote
*against* (giving the candidate -1 points). The winner(s)
is(are) the candidate(s) with the highest total number of points (i.e., the candidate
with the greatest score, where the score is the total number of positive votes minus the total
number of negative votes).
Negative voting is tantamount to allowing the voters to support either
a single candidate or all but one candidate (taking a point away from
a candidate \(C\) is equivalent to giving one point to all candidates
except \(C\)). That is, the voters are asked to choose a set of
candidates that they support, where the choice is between sets
consisting of a single candidate or sets consisting of all except one
candidate. The next voting method generalizes this idea by allowing
voters to choose *any* subset of candidates:
**Approval Voting**:
Each voter selects a *subset* of the candidates (where the
empty set means the voter abstains), and the candidate(s) selected by
the most voters win(s).
If a candidate \(X\) is in the set of candidates selected by
a voter, we say that the voter approves of candidate \(X\). Then, the approval winner is the
candidate with the most approvals. Approval voting has been extensively discussed by Steven Brams and Peter Fishburn (Brams and Fishburn 2007; Brams 2008). See, also, the
recent collection of articles devoted to approval voting (Laslier and
Sanver 2010).
Approval voting forces voters to think about the decision problem
differently: They are asked to determine which candidates they
*approve* of rather than selecting a single candidate to vote
*for* or determining the relative ranking of the candidates.
That is, the voters are asked which candidates are above a certain
"threshold of acceptance". Ranking a set of candidates and
selecting the candidates that are approved are two different aspects
of a voter's overall opinion about the candidates. They are related but
cannot be derived from each other. See Brams and Sanver 2009 for
examples of voting methods that ask voters to both select a set of
candidates that they approve *and* to (linearly) rank the
candidates.
Approval voting is a very flexible method. Recall the election
scenario illustrating the \(k\)-Approval Voting methods:
| # Voters | Ranking |
| --- | --- |
| 2 | \(\underline{A}\s D\s B\s C\) |
| 2 | \(\underline{B}\s D\s A\s C\) |
| 1 | \(\underline{C}\s \underline{A}\s B\s D\) |
In this election scenario, \(k\)-Approval for \(k=1,2,3\) cannot
guarantee that the Condorcet winner \(A\) is elected. The Approval
ballot \((\{A\},\{B\}, \{A, C\})\) does elect the Condorcet winner. In
fact, Brams (2008, Chapter 2) proves that if there is a unique
Condorcet winner, then that candidate may be elected under approval
voting (assuming that all voters vote *sincerely*: see Brams
2008, Chapter 2, for a discussion). Note that approval voting may also
elect other candidates (perhaps even the Condorcet loser). Whether
this flexibility of Approval Voting should be seen as a virtue or a
vice is debated in Brams, Fishburn and Merrill 1988a, 1988b and Saari
and van Newenhizen 1988a, 1988b.
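Tallying approval ballots is straightforward; the minimal sketch below (the count/approved-set format is an illustrative choice) confirms that the approval ballot above elects the Condorcet winner \(A\).

```python
from collections import Counter

def approval_winners(ballots):
    """Approval Voting: each ballot is a (count, approved-set) pair;
    the candidate(s) approved by the most voters win(s)."""
    tally = Counter()
    for count, approved in ballots:
        for candidate in approved:
            tally[candidate] += count
    best = max(tally.values())
    return {c for c, v in tally.items() if v == best}

# The approval ballot from the text: 2 voters approve {A},
# 2 approve {B}, and 1 approves {A, C}.
ballots = [(2, {"A"}), (2, {"B"}), (1, {"A", "C"})]
```

Here `approval_winners(ballots)` returns `{'A'}`: candidate \(A\) receives 3 approvals to \(B\)'s 2 and \(C\)'s 1.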
Approval Voting asks voters to express something about their
*intensity* of preference for the candidates by assigning one
of two grades: "Approve" or "Don't Approve". Expanding on this idea,
some voting methods assume that there is a fixed set of grades, or a
*grading language*, that voters can assign to each candidate.
See Chapters 7 and 8 from Balinski and Laraki 2010 for examples and a
discussion of grading languages (cf. Morreau 2016).
There are different ways to determine the winner(s) given a profile of
ballots that assign grades to each candidate. The main approach is to
calculate a "group" grade for each candidate, then select the
candidate with the best overall group grade. In order to calculate a
group grade for each candidate, it is convenient to use numbers for
the grading language. Then, there are two natural ways to determine
the group grade for a candidate: calculating the mean, or average, of
the grades or calculating the median of the grades.
**Cumulative Voting**:
Each voter is asked to distribute a fixed number of points, say ten,
among the candidates in any way they please. The candidate(s) with the
most total points wins the election.
**Score Voting (also called Range Voting)**:
The grades are a finite set of numbers. The ballots are an assignment
of grades to the candidates. The candidate(s) with the largest average
grade is declared the winner(s).
Cumulative Voting and Score Voting are similar. The important
difference is that Cumulative Voting requires that the sum of the
grades assigned to the candidates by each voter is the same. The next
procedure, proposed by Balinski and Laraki 2010 (cf. Bassett and
Persky 1999 and
the discussion of this method at rangevoting.org),
selects the candidate(s) with the largest *median* grade rather
than the largest mean grade.
**Majority Judgement**:
The grades are a finite set of numbers (cf. discussion of common grading languages).
The ballots are an assignment of grades to the candidates. The candidate(s)
with the largest median grade is(are) declared the winner(s). See
Balinski and Laraki 2007 and 2010 for further refinements of this voting method
that use different methods for breaking ties when there are multiple candidates
with the largest median grade.
I conclude this section with an example that illustrates Score Voting
and Majority Judgement. Suppose that there are 3 candidates \(\{A, B,
C\}\), 5 grades \(\{0,1,2,3,4\}\) (with the assumption that the larger
the number, the higher the grade), and 5 voters. The table below
describes an election scenario. The candidates are listed in the first row.
Each row describes an assignment of grades to a candidate by a set of voters.
| # Voters | Grade (0-4) for \(A\) | Grade (0-4) for \(B\) | Grade (0-4) for \(C\) |
| --- | --- | --- | --- |
| 1 | 4 | 3 | 1 |
| 1 | 4 | 3 | 2 |
| 1 | 2 | 0 | 3 |
| 1 | 2 | 3 | 4 |
| 1 | 1 | 0 | 2 |
| Mean: | 2.6 | 1.8 | 2.4 |
| Median: | 2 | 3 | 2 |
The bottom two rows give the mean and median grade for each
candidate. Candidate \(A\) is the score voting winner with the greatest
mean grade, and candidate \(B\) is the majority judgement winner with
the greatest median grade.
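The two ways of computing group grades can be checked directly; the sketch below tallies the table's grades using Python's `statistics` module, with the aggregation function (mean or median) passed in as a parameter.

```python
from statistics import mean, median

# The grades from the table: one list per candidate, one entry per voter.
grades = {
    "A": [4, 4, 2, 2, 1],
    "B": [3, 3, 0, 3, 0],
    "C": [1, 2, 3, 4, 2],
}

def grading_winners(grades, aggregate):
    """Winner(s) under a grading method: `aggregate` is mean for
    Score Voting and median for Majority Judgement."""
    group = {c: aggregate(gs) for c, gs in grades.items()}
    best = max(group.values())
    return {c for c, g in group.items() if g == best}
```

As in the table, `grading_winners(grades, mean)` returns `{'A'}` and `grading_winners(grades, median)` returns `{'B'}`.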
There are two types of debates about the voting methods introduced in this section.
The first concerns the choice of the *grading language* that voters use
to evaluate the candidates. Consult Balinski and Laraki 2010 and Morreau 2016 for an extensive discussion of the types of considerations that influence the choice of a grading language. Brams and Potthoff 2015 argue that two grades, as in Approval Voting, are best to avoid certain paradoxical outcomes. To illustrate, note that, in the above example, if the candidates are ranked by
the voters according to the grades that are assigned, then candidate
\(C\) is the Condorcet winner (since 3 voters assign higher grades to
\(C\) than to \(A\) or \(B\)). However, neither Score Voting nor Majority Judgement selects candidate \(C\).
The second type of debate concerns the method used to calculate the group grade for each candidate (i.e., whether to use the mean, as in Score Voting, or the median, as in Majority Judgement). One important issue is whether voters have an incentive to misrepresent their evaluations of the candidates. Consider the voter in the middle row of the table above, who assigns the grade of 2 to \(A\), 0 to \(B\), and 3 to \(C\). Suppose that these grades represent the voter's true evaluations of the candidates. If this voter increases the grade for \(C\) to 4 and decreases the grade for \(A\) to 1 (and the other voters do not change their grades), then the average grade for \(A\) becomes 2.4 and the average grade for \(C\) becomes 2.6, which better reflects the voter's true evaluations of the candidates (and results in \(C\) being elected according to Score Voting). Thus, this voter has an incentive to misrepresent her grades. Note that the median grades for the candidates do not change after this voter changes her grades.

Indeed, Balinski and Laraki 2010, chapter 10, argue that using the median to assign group grades to candidates encourages voters to submit grades that reflect their true evaluations of the candidates. The key idea of their argument is as follows: If a voter's true grade matches the median grade for a candidate, then the voter does not have an incentive to assign a different grade. If a voter's true grade is greater than the median grade for a candidate, then raising the grade will not change the candidate's grade, while lowering the voter's grade may result in the candidate receiving a grade that is lower than the voter's true evaluation. Similarly, if a voter's true grade is lower than the median grade for a candidate, then lowering the grade will not change the candidate's grade, while raising the voter's grade may result in the candidate receiving a grade that is higher than the voter's true evaluation.
Thus, if voters are focused on ensuring that the group grades for the candidates best reflect their true evaluations of the candidates, then voters do not have an incentive to misrepresent their grades. However, as pointed out in Felsenthal and Machover 2008 (Example 3.3), voters can manipulate the outcome of an election using Majority Judgement to ensure that a preferred candidate is elected (cf. the discussion of strategic voting in Section 4.1 and Section 3.3 of List 2013). Suppose that the voter in the middle row of the table assigns the grade of 4 to candidate \(A\), 0 to candidate \(B\), and 3 to candidate \(C\). Assuming the other voters do not change their grades, the Majority Judgement winner is now \(A\), which this voter ranks higher than the original Majority Judgement winner \(B\). Consult Balinski and Laraki 2010, 2014 and Edelman 2012b for arguments in favor of electing candidates with the greatest median grade, and Felsenthal and Machover 2008, Gehrlein and Lepelley 2003, and Laslier 2011 for arguments against electing candidates with the greatest median grade.
### 2.3 Quadratic Voting and Liquid Democracy
In this section, I briefly discuss two new approaches to voting that
do not fit nicely into the categories of voting methods introduced in
the previous sections. While both of these methods can be used to
select representatives, such as a president, the primary application
is a group of people voting directly on propositions, or referendums.
**Quadratic Voting**: When more than 50% of the voters support an
alternative, most voting methods will select that alternative. Indeed,
when there are only two alternatives, such as when voting for or
against a proposition, there are many arguments that identify majority
rule as the best and most stable group decision method (May 1952;
Maskin 1995). One well-known problem with always selecting the
majority winner is the so-called *tyranny of the majority*. A
complete discussion of this issue is beyond the scope of this article.
The main problem from the point of view of the analysis of voting
methods is that there may be situations in which a majority of the
voters weakly support a proposition while there is a sizable minority
of voters that have a strong preference against the proposition.
One way of dealing with this problem is to increase the quota required
to accept a proposition. However, this gives too much power to a small
group of voters. For instance, with Unanimity Rule a single voter can
block a proposal from being accepted. Arguably, a better solution is
to use ballots that allow voters to express something about their
intensity of preference for the alternatives. Setting aside issues
about interpersonal comparisons of utility (see, for instance, Hausman
1995), this is the benefit of using the voting methods discussed in
Section 2.2, such as Score Voting or Majority Judgement. These voting
methods assume that there is a fixed set of *grades* that the
voters use to express their intensity of preference. One challenge is
finding an appropriate set of grades for a population of voters. Too
few grades makes it harder for a sizable minority with strong
preferences to override the majority opinion, but too many grades
makes it easy for a vocal minority to overrule the majority opinion.
Using ideas from mechanism design (Groves and Ledyard 1977; Hylland and
Zeckhauser 1980), the economist E. Glen Weyl developed a voting method
called Quadratic Voting that mitigates some of the above issues
(Lalley and Weyl 2018a). The idea is to think of an election as a
market (Posner and Weyl, 2018, Chapter 2). Each voter can purchase
votes at a cost that is quadratic in the number of votes. For
instance, a voter must pay $25 for 5 votes (either in favor or against
a proposition). After the election, the money collected is distributed
on a *pro rata* basis to the voters. There are a variety of
economic arguments that justify why voters should pay \(v^2\) to
purchase \(v\) votes (Lalley and Weyl 2018b; Goeree and Zhang 2017).
See Posner and Weyl 2015 and 2017 for further discussion and a
vigorous defense of the use of Quadratic Voting in national elections.
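The quadratic cost schedule is easy to state in code. The sketch below is only a toy illustration of the two ingredients described above, the \(v^2\) price of \(v\) votes and an equal pro rata rebate of the money collected; it is not a faithful implementation of Lalley and Weyl's full mechanism.

```python
def quadratic_cost(votes):
    """Under Quadratic Voting, buying v votes costs v**2, so the
    marginal cost of each additional vote grows linearly."""
    return votes ** 2

def quadratic_referendum(purchases):
    """`purchases` maps each voter to a signed number of votes bought
    (positive = for the proposition, negative = against). Returns the
    net vote total and an equal pro rata rebate per voter. This is a
    simplified sketch, not Lalley and Weyl's full mechanism."""
    pot = sum(quadratic_cost(abs(v)) for v in purchases.values())
    return sum(purchases.values()), pot / len(purchases)

# Example (hypothetical): one voter with a strong preference buys 5
# votes for a proposition ($25); two others each buy 1 vote against
# ($1 each). The proposition passes 5 votes to 2, and the $27
# collected is returned to the three voters at $9 each.
```

This illustrates how an intense minority (or here, a single intense voter) can outweigh a larger group of weakly opposed voters, at a steeply increasing price.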
Consult Laurence and Sher 2017 for two arguments against the use of Quadratic Voting.
Both arguments are derived from the presence of wealth inequality. The first
argument is that it is unclear whether the Quadratic Voting decision really outperforms a decision using majority rule from the perspective of utilitarianism
(see Driver 2014 and Sinnott-Armstrong 2019 for overviews of utilitarianism).
The second argument is that any vote-buying mechanism will have a hard
time meeting a legitimacy requirement, familiar from the theory of democratic
institutions (cf. Fabienne 2017).
**Liquid Democracy**: Using Quadratic Voting, the voters' opinions
may end up being weighted differently: Voters that purchase more of a
voice have more influence over the election. There are other reasons
why some voters' opinions may have more weight than others when making
a decision about some issue. For instance, a voter may have been
elected to represent a constituency, or a voter may be recognized as
an expert on the issue under consideration. An alternative approach to
group decision making is *direct democracy* in which every
citizen is asked to vote on every political issue. Asking the citizens
to vote on *every* issue faces a number of challenges,
nicely explained by Green-Armytage (2015, pg. 191):
>
> Direct democracy without any option for representation is problematic.
> Even if it were possible for every citizen to learn everything they
> could possibly know about every political issue, people who did this
> would be able to do little else, and massive amounts of time would be
> wasted in duplicated effort. Or, if every citizen voted but most
> people did not take the time to learn about the issues, the results
> would be highly random and/or highly sensitive to overly simplistic
> public relations campaigns. Or, if only a few citizens voted,
> particular demographic and ideological groups would likely be
> under-represented.
>
One way to deal with some of the problems raised in the above quote is to
use *proxy voting*, in which voters can delegate their vote
on some issues (Miller 1969). Liquid Democracy is a form of proxy voting
in which voters can delegate their votes to other voters (ideally, to voters that are
well-informed about the issue under consideration). What distinguishes
Liquid Democracy from proxy voting is that proxies may further
delegate the votes entrusted to them. For example, suppose that there
is a vote to accept or reject a proposition. Each voter is given the
option to delegate their vote to another voter, called a proxy. The
proxies, in turn, are given the option to delegate their votes to yet
another voter. The voters that decide to not transfer their votes cast
a vote weighted by the number of voters who entrusted them as a proxy,
either directly or indirectly.
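The weighting of votes under Liquid Democracy can be sketched as follows, assuming the delegation graph contains no cycles (the possibility of delegation cycles is one of the issues taken up in the formal literature cited below).

```python
def casting_weights(delegations, voters):
    """Liquid Democracy vote weights. `delegations` maps a voter to
    their chosen proxy; voters absent from it vote directly. A direct
    voter's weight is the number of voters (including themselves)
    whose delegation chain terminates at them. Assumes the delegation
    graph has no cycles."""
    weights = {v: 0 for v in voters if v not in delegations}
    for v in voters:
        while v in delegations:   # follow the chain of proxies to its end
            v = delegations[v]
        weights[v] += 1
    return weights

# Hypothetical example: voters 1 and 2 delegate to voter 3, who in
# turn delegates to voter 4; voter 5 votes directly.
```

In this example, `casting_weights({1: 3, 2: 3, 3: 4}, [1, 2, 3, 4, 5])` returns `{4: 4, 5: 1}`: voter 4 casts a vote of weight 4 (her own plus the three entrusted to her, directly or indirectly), and voter 5 a vote of weight 1.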
While there has been some discussion of proxy voting in the political
science literature (Miller 1969; Alger 2006; Green-Armytage 2015),
most studies of Liquid Democracy can be found in the computer science
literature. A notable exception is Blum and Zuber 2016 that justifies
Liquid Democracy, understood as a procedure for democratic
decision-making, within normative democratic theory. An overview of
the origins of Liquid Democracy and pointers to other online
discussions can be found in Behrens 2017. Formal studies of Liquid
Democracy have focused on: the possibility of delegation cycles and
the relationship with the theory of judgement aggregation (Christoff
and Grossi 2017); the rationality of delegating votes (Bloembergen,
Grossi and Lackner 2018); the potential problems that arise when many
voters delegate votes to only a few voters (Kang et al. 2018; Golz et
al. 2018); and generalizations of Liquid Democracy beyond binary
choices (Brill and Talmon 2018; Zhang and Zhou 2017).
### 2.4 Criteria for Comparing Voting Methods
This section introduced different methods for making a group decision.
One striking fact about the voting methods discussed in this section
is that they can identify different winners given the same collection
of ballots. This raises an important question: How should we
*compare* the different voting methods? Can we argue that some
voting methods are better than others? There are a number of different
criteria that can be used to compare and contrast different voting
methods:
1. **Pragmatic concerns**: Is the procedure easy to *use*?
Is it *legal* to use a particular voting method for a national
or local election? The importance of "ease of use" should
not be underestimated: Despite its many flaws, plurality rule
(arguably the simplest voting procedure to use and understand) is, by
far, the most commonly used method (cf. the discussion by Levin and
Nalebuff 1995, p. 19). Furthermore, there are a variety
of considerations that go into selecting an appropriate voting
method for an institution (Edelman 2012a).
2. **Behavioral considerations**: Do the different procedures
*really* lead to different outcomes in practice? An interesting
strand of research, *behavioral social choice*, incorporates
empirical data about actual elections into the general theory of
voting (This is discussed briefly in Section 5. See Regenwetter *et
al*. 2006, for an extensive discussion).
3. **Information required from the voters**: What type of
information do the ballots convey? While ranking methods (e.g., Borda
Count) require the voter to compare *all* of the candidates, it
is often useful to ask the voters to report something about the
"intensities" of their preferences over the candidates. Of
course, there is a trade-off: Limiting what voters can express about
their opinions of the candidates often makes a procedure much easier
to use and understand. Also related to these issues is the work of
Brennan and Lomasky 1993 (among others) on *expressive voting*
(cf. Wodak 2019 and Aragones et al. 2011 for analyses along these lines
touching on issues raised in this article).
4. **Axiomatic characterization results and voting paradoxes**:
Much of the work in voting theory has focused on comparing and
contrasting voting procedures in terms of abstract principles that
they satisfy. The goal is to characterize the different voting
procedures in terms of *normative* principles of group decision
making. See Sections 3 and 4.2 for discussions.
## 3. Voting Paradoxes
In this section, I introduce and discuss a number of *voting
paradoxes* -- i.e., anomalies that highlight problems with
different voting methods. Consult Saari 1995 and Nurmi 1999 for
penetrating analyses that explain the underlying mathematics behind
the different voting paradoxes.
### 3.1 Condorcet's Paradox
A very common assumption is that a *rational* preference
ordering must be *transitive* (i.e., if \(A\) is preferred to
\(B\), and \(B\) is preferred to \(C\), then \(A\) must be preferred
to \(C\)). See the entry on preferences (Hansson and Grune-Yanoff
2009) for an extended discussion of the rationale behind this
assumption. Indeed, if a voter's preference ordering is not
transitive, for instance, allowing for cycles (e.g., an ordering of \(A, B, C\) with
\(A \succ B \succ C \succ A\), where \(X\succ Y\) means \(X\) is strictly preferred to \(Y\)), then there is no alternative that the voter can be said to actually support (for each
alternative, there is another alternative that the voter strictly prefers). Many
authors argue that voters with cyclic preference orderings have
inconsistent opinions about the candidates and should be
*ignored* by a voting method (in particular, Condorcet
forcefully argued this point). A key observation of Condorcet (which
has become known as the Condorcet Paradox) is that the majority ordering
may have cycles (even when all the voters submit *rankings* of the alternatives).
Condorcet's original example was more complicated, but the following
situation with three voters and three candidates illustrates the
phenomenon:
| # Voters | Ranking |
| --- | --- |
| 1 | \(A\s B\s C\) |
| 1 | \(B\s C\s A\) |
| 1 | \(C\s A\s B\) |
Note that we have:
* Candidate \(A\) beats candidate \(B\) in a one-on-one
election: 2 voters rank \(A\) above \(B\) compared to 1 voter ranking \(B\) above \(A\).
* Candidate \(B\) beats candidate \(C\) in a one-on-one
election: 2 voters rank \(B\) above \(C\) compared to 1 voter ranking \(C\) above \(B\).
* Candidate \(C\) beats candidate \(A\) in a one-on-one
election: 2 voters rank \(C\) above \(A\) compared to 1 voter ranking \(A\) above \(C\).
That is, there is a **majority cycle** \(A>\_M B >\_M C >\_M A\). This
means that there is no Condorcet winner. This simple, but fundamental
observation has been extensively studied (Gehrlein 2006; Schwartz
2018).
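A short sketch makes it easy to check for a Condorcet winner by computing the majority margin of every pairwise contest; on the profile above it reports that no Condorcet winner exists.

```python
def majority_margin(profile, x, y):
    """Number of voters ranking x above y, minus those ranking y above x.
    `profile` is a list of (count, ranking) pairs, rankings best-first."""
    return sum(count if ranking.index(x) < ranking.index(y) else -count
               for count, ranking in profile)

def condorcet_winner(profile):
    """The candidate that beats every other candidate in a one-on-one
    majority contest, or None if there is no such candidate."""
    candidates = profile[0][1]
    for x in candidates:
        if all(majority_margin(profile, x, y) > 0
               for y in candidates if y != x):
            return x
    return None

# Condorcet's paradox: each candidate beats one rival 2-1 and loses
# to another 2-1, so no candidate beats all the others.
profile = [(1, "ABC"), (1, "BCA"), (1, "CAB")]
```

Here `condorcet_winner(profile)` returns `None`, reflecting the majority cycle \(A>\_M B>\_M C>\_M A\).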
#### 3.1.1 Electing the Condorcet Winner
The Condorcet Paradox shows that there may not always be a Condorcet
winner in an election. However, one natural requirement for a voting
method is that if there is a Condorcet winner, then that candidate
should be elected. Voting methods that satisfy this property are
called **Condorcet consistent**. Many of the methods introduced
above are not Condorcet consistent. I already presented an example
showing that plurality rule is not Condorcet consistent (in fact,
plurality rule may even elect the Condorcet *loser*).
The example from Section 1 shows that Borda Count is not Condorcet
consistent. In fact, this is an instance of a general phenomenon that
Fishburn (1974) called **Condorcet's other paradox**. Consider the
following voting situation with 81 voters and three candidates from
Condorcet 1785.
| # Voters | Ranking |
| --- | --- |
| 30 | \(A\s B\s C\) |
| 1 | \(A\s C\s B\) |
| 29 | \(B\s A\s C\) |
| 10 | \(B\s C\s A\) |
| 10 | \(C\s A\s B\) |
| 1 | \(C\s B\s A\) |
The majority ordering is \(A >\_M B >\_M C\), so \(A\) is the Condorcet
winner. Using the Borda rule, we have:
\[\begin{align}
\BS(A) &= 2\times 31 + 1 \times 39 + 0 \times 11 = 101 \\
\BS(B) &= 2 \times 39 + 1 \times 31 + 0 \times 11 = 109 \\
\BS(C) &= 2 \times 11 + 1 \times 11 + 0 \times 59 = 33
\end{align}\]
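These scores can be reproduced with a short positional-scoring sketch; a line of algebra in the comments records why no weights with \(s\_1 \ge s\_2\) can elect \(A\).

```python
def positional_scores(profile, weights):
    """Score each candidate under a positional scoring rule: a
    candidate in position i on a ballot earns weights[i] points.
    `profile` is a list of (count, ranking) pairs, rankings best-first."""
    tally = {c: 0 for c in profile[0][1]}
    for count, ranking in profile:
        for position, candidate in enumerate(ranking):
            tally[candidate] += count * weights[position]
    return tally

# Condorcet's 81-voter example; Borda Count uses weights (2, 1, 0).
profile = [(30, "ABC"), (1, "ACB"), (29, "BAC"),
           (10, "BCA"), (10, "CAB"), (1, "CBA")]
borda = positional_scores(profile, (2, 1, 0))
# borda assigns A: 101, B: 109, C: 33, as computed above.
# For general weights (s1, s2, 0):
#   Score(A) - Score(B) = (31*s1 + 39*s2) - (39*s1 + 31*s2) = 8*(s2 - s1),
# so A outscores B only when s2 > s1.
```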
So, candidate \(B\) is the Borda winner. Condorcet pointed out
something more: The only way to elect candidate \(A\) using
*any* scoring method is to assign more points to candidates
ranked second than to candidates ranked first. Recall that a scoring
method for 3 candidates fixes weights \(s\_1\ge s\_2\ge s\_3\), where
\(s\_1\) points are assigned to candidates ranked 1st, \(s\_2\) points
are assigned to candidates ranked 2nd, and \(s\_3\) points are assigned
to candidates ranked last. To simplify the calculation, assume that
candidates ranked last receive 0 points (i.e., \(s\_3=0\)). Then, the
scores assigned to candidates \(A\) and \(B\) are:
\[\begin{align}
Score(A) &= s\_1 \times 31 + s\_2 \times 39 + 0 \times 11 \\
Score(B) &= s\_1 \times 39 + s\_2 \times 31 + 0 \times 11
\end{align}\]
So, in order for \(Score(A) > Score(B)\), we must have \((s\_1 \times
31 + s\_2 \times 39) > (s\_1 \times 39 + s\_2 \times 31)\), which implies
that \(s\_2 > s\_1\). But, of course, it is counterintuitive to give
more points for being ranked second than for being ranked first. Peter
Fishburn generalized this example as follows:
**Theorem** (Fishburn 1974).
For all \(m\ge 3\), there is some voting situation with a Condorcet
winner such that every scoring rule will have at least
\(m-2\) candidates with a greater score than the Condorcet winner.
So, no scoring rule is Condorcet consistent, but what about other
methods? A number of voting methods were devised specifically to
*guarantee* that a Condorcet winner will be elected, if one
exists. The examples below give a flavor of different types of
Condorcet consistent methods. (See Brams and Fishburn, 2002, and
Fishburn, 1977, for more examples and a discussion of
Condorcet consistent methods.)
**Condorcet's Rule**:
Each voter submits a ranking of the candidates. If there is a
Condorcet winner, then that candidate wins the election. Otherwise,
all candidates tie for the win.
**Black's Procedure**:
Each voter submits a ranking of the candidates. If there is a
Condorcet winner, then that candidate is the winner. Otherwise, use
Borda Count to determine the winners.
**Nanson's Method**:
Each voter submits a ranking of the candidates. Calculate the Borda
score for each candidate. The candidates with a Borda score below the
average of the Borda scores are eliminated. The Borda scores of the
candidates are re-calculated and the process continues until there is
only one candidate remaining. (See Niou, 1987, for a discussion of
this voting method.)
**Copeland's Rule**:
Each voter submits a ranking of the candidates. A *win-loss
record* for candidate \(B\) is calculated as follows:
\[
WL(B) = \#\{C\ \mid\ B >\_M C\} - \#\{C\ \mid\ C >\_M B\}
\]
The Copeland winner is the candidate that maximizes the win-loss
record.
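Copeland's Rule can be sketched by reusing the pairwise majority-margin computation; running it on Condorcet's 81-voter example confirms that it elects the Condorcet winner \(A\), even though \(B\) is the Borda winner.

```python
def copeland_winners(profile):
    """Copeland's Rule: elect the candidate(s) maximizing the number of
    one-on-one majority wins minus the number of one-on-one losses.
    `profile` is a list of (count, ranking) pairs, rankings best-first."""
    candidates = profile[0][1]

    def margin(x, y):
        return sum(count if ranking.index(x) < ranking.index(y) else -count
                   for count, ranking in profile)

    # True - False evaluates to 1, 0, or -1: a win, tie, or loss.
    win_loss = {x: sum((margin(x, y) > 0) - (margin(x, y) < 0)
                       for y in candidates if y != x)
                for x in candidates}
    best = max(win_loss.values())
    return {c for c, v in win_loss.items() if v == best}

# Condorcet's 81-voter example from above.
profile = [(30, "ABC"), (1, "ACB"), (29, "BAC"),
           (10, "BCA"), (10, "CAB"), (1, "CBA")]
```

Here `copeland_winners(profile)` returns `{'A'}`, since \(A\) wins both of her one-on-one contests.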
**Schwartz's Set Method**:
Each voter submits a ranking of the candidates. The winners are the
smallest set of candidates that are not beaten in a one-on-one
election by any candidate outside the set (Schwartz 1986).
**Dodgson's Method**:
Each voter submits a ranking of the candidates. For each candidate,
determine the fewest number of pairwise swaps in the voters' rankings
needed to make that candidate the Condorcet winner. The candidate(s)
with the fewest swaps is(are) declared the winner(s).
The last method was proposed by Charles Dodgson (better known by the
pseudonym Lewis Carroll). Interestingly, this is an example of a
procedure in which it is computationally difficult to compute the
winner (that is, the problem of calculating the winner is
NP-complete). See Bartholdi *et al*. 1989 for a discussion.
These voting methods (and the other Condorcet consistent methods)
guarantee that a Condorcet winner, if one exists, will be elected.
But, *should* a Condorcet winner be elected? Many people argue
that there is something amiss with a voting method that does not
always elect a Condorcet winner (if one exists). The idea is that a
Condorcet winner best reflects the *overall group opinion* and is
stable in the sense that it will defeat any challenger in a one-on-one
contest using Majority Rule. The most persuasive argument that the
Condorcet winner should not always be elected comes from the work of
Donald Saari (1995, 2001). Consider again Condorcet's example of 81
voters.
| # Voters | Ranking |
| --- | --- |
| 30 | \(A\s B\s C\) |
| 1 | \(A\s C\s B\) |
| 29 | \(B\s A\s C\) |
| 10 | \(B\s C\s A\) |
| 10 | \(C\s A\s B\) |
| 1 | \(C\s B\s A\) |
This is another example that shows that Borda's method need not elect
the Condorcet winner. The majority ordering is
\[
A >\_M B >\_M C,
\]
while the ranking given by the Borda score is
\[
B >\_{\Borda} A >\_{\Borda} C.
\]
However, there is an argument that candidate \(B\) is the best choice
for this electorate. Saari's central observation is to note that the
81 voters can be divided into three groups:
| Group | # Voters | Ranking |
| --- | --- | --- |
| Group 1 | 10 | \(A\s B\s C\) |
| Group 1 | 10 | \(B\s C\s A\) |
| Group 1 | 10 | \(C\s A\s B\) |
| Group 2 | 1 | \(A\s C\s B\) |
| Group 2 | 1 | \(C\s B\s A\) |
| Group 2 | 1 | \(B\s A\s C\) |
| Group 3 | 20 | \(A\s B\s C\) |
| Group 3 | 28 | \(B\s A\s C\) |
Groups 1 and 2 constitute majority cycles with the voters evenly
distributed among the three possible rankings. Such profiles are
called **Condorcet components**. These profiles form a
perfect symmetry among the rankings. So, within each of these groups,
it is natural to assume that the voters' opinions cancel each other out; therefore, the decision
should depend only on the voters in group 3. In group 3, candidate
\(B\) is the clear winner.
Balinski and Laraki (2010, pgs. 74-83) have an interesting spin on
Saari's argument. Let \(V\) be a ranking voting method (i.e., a voting
method that requires voters to rank the alternatives). Say that \(V\)
**cancels properly** if for all profiles \(\bP\), if \(V\)
selects \(A\) as a winner in \(\bP\), then \(V\) selects \(A\) as
a winner in any profile \(\bP+\bC\), where \(\bC\) is a
Condorcet component and \(\bP+\bC\) is the profile that
contains all the rankings from \(\bP\) and \(\bC\). Balinski
and Laraki (2010, pg. 77) prove that there is no Condorcet consistent
voting method that cancels properly. (See the discussion of the
multiple districts paradox in Section 3.3 for a proof of a closely
related result.)
### 3.2 Failures of Monotonicity
A voting method is **monotonic** provided that receiving more
support from the voters is always better for a candidate. There are
different ways to make this idea precise (see Fishburn, 1982, Sanver
and Zwicker, 2012, and Felsenthal and Tideman, 2013). For instance,
moving up in the rankings should not adversely affect a
candidate's chances to win an election. It is easy to see that
Plurality Rule is monotonic in this sense: The more voters that rank a
candidate first, the better chance the candidate has to
win. Surprisingly, there are voting methods that do not satisfy this
natural property. The most well-known example is Plurality with
Runoff. Consider the two scenarios below. Note that the only
difference between them is the ranking of the fourth group of
voters. This group of two voters ranks \(B\) above \(A\) above \(C\)
in scenario 1 and swaps \(B\) and \(A\) in scenario 2 (so, \(A\) is
now their top-ranked candidate; \(B\) is ranked second; and \(C\) is
still ranked third).
| | | |
| --- | --- | --- |
| # Voters | *Scenario 1* Ranking | *Scenario 2* Ranking |
| 6 | \(A\s B\s C\) | \(A\s B\s C\) |
| 5 | \(C\s A\s B\) | \(C\s A\s B\) |
| 4 | \(B\s C\s A\) | \(B\s C\s A\) |
| 2 | \(\bB\s \bA\s C\) | \(\bA\s \bB\s C\) |
| *Scenario 1*: Candidate \(A\) is the Plurality with Runoff winner |
| *Scenario 2*: Candidate \(C\) is the Plurality with Runoff winner |
In scenario 1, candidates \(A\) and \(B\) both have a plurality score
of 6 while candidate \(C\) has a plurality score of 5. So, \(A\) and
\(B\) move on to the runoff election. Assuming the voters do not
change their rankings, the 5 voters that rank \(C\) transfer their
support to candidate \(A\), giving her a total of 11 to win the runoff
election. However, in scenario 2, even after moving up in the
rankings of the fourth group (\(A\) is now ranked first by this
group), candidate \(A\) does *not* win this election. In fact,
by trying to give more support to the winner of the election in
scenario 1, rather than solidifying \(A\)'s win, the last
group's least-preferred candidate ended up winning the election!
The problem arises because in scenario 2, candidates \(A\) and \(B\)
are swapped in the last group's ranking. This means that
\(A\)'s plurality score increases by 2 and \(B\)'s
plurality score decreases by 2. As a consequence, \(A\) and \(C\) move
on to the runoff election rather than \(A\) and \(B\). Candidate
\(C\) wins the runoff election with 9 voters that rank \(C\) above
\(A\) compared to 8 voters that rank \(A\) above \(C\).
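A short script makes the failure of monotonicity concrete. The implementation below is a minimal sketch of Plurality with Runoff; the profile encoding and the alphabetical tie-break in the first round are my own assumptions (the tie-break does not affect this example, since the same two candidates advance either way).

```python
def plurality_with_runoff(profile):
    # profile: list of (ranking, count) pairs; rankings are best-to-worst.
    candidates = sorted(set(profile[0][0]))
    firsts = {c: sum(n for r, n in profile if r[0] == c) for c in candidates}
    # Top two plurality scorers advance (alphabetical tie-break).
    a, b = sorted(candidates, key=lambda c: (-firsts[c], c))[:2]
    a_votes = sum(n for r, n in profile if r.index(a) < r.index(b))
    b_votes = sum(n for r, n in profile if r.index(b) < r.index(a))
    return a if a_votes > b_votes else b

scenario1 = [("ABC", 6), ("CAB", 5), ("BCA", 4), ("BAC", 2)]
scenario2 = [("ABC", 6), ("CAB", 5), ("BCA", 4), ("ABC", 2)]
print(plurality_with_runoff(scenario1))  # A
print(plurality_with_runoff(scenario2))  # C
```

Moving \(A\) to the top of the last group's ranking changes the winner from \(A\) to \(C\).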
The above example is surprising since it shows that, when using
Plurality with Runoff, it may not always be beneficial for a candidate
to move up in some of the voters' rankings. Other voting methods
that violate monotonicity include Coombs Rule, Hare Rule, Dodgson's
Method and Nanson's Method. See Felsenthal and Nurmi 2017 for further
discussion of voting methods that are not monotonic.
### 3.3 Variable Population Paradoxes
In this section, I discuss two related paradoxes that involve changes
to the population of voters.
**No-Show Paradox**: One way that a candidate may receive
"more support" is to have more voters show up to an
election that support them. Voting methods that do not satisfy this
version of monotonicity are said to be susceptible to the **no-show
paradox** (Fishburn and Brams 1983). Suppose that there are 3
candidates and 11 voters with the following rankings:
| | |
| --- | --- |
| # Voters | Ranking |
| 4 | \(A\s B\s C\) |
| 3 | \(B\s C\s A\) |
| 1 | \(C\s A\s B\) |
| 3 | \(C\s B\s A\) |
| Candidate \(C\) is the Plurality with Runoff winner |
In the first round, candidates \(A\) and \(C\) are both ranked first
by 4 voters while \(B\) is ranked first by only 3 voters. So, \(A\)
and \(C\) move to the runoff round. In this round, the voters in the
second column transfer their votes to candidate \(C\), so candidate
\(C\) is the winner beating \(A\) 7-4. Suppose that 2 voters in the
first group do not show up to the election:
| | |
| --- | --- |
| # Voters | Ranking |
| \(\mathbf{2}\) | \(A\s B\s C\) |
| 3 | \(B\s C\s A\) |
| 1 | \(C\s A\s B\) |
| 3 | \(C\s B\s A\) |
| Candidate \(B\) is the Plurality with Runoff winner |
In this election, candidate \(A\) has the lowest plurality score in
the first round, so candidates \(B\) and \(C\) move to the runoff
round. The first group's votes are transferred to \(B\), so \(B\) is
the winner beating \(C\) 5-4. Since the 2 voters that did not show up
to this election rank \(B\) above \(C\), they prefer the outcome of
the second election in which they did not participate!
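The no-show calculation can be replayed in code. This sketch again uses a bare-bones Plurality with Runoff (my own encoding, with an alphabetical tie-break in the first round, which does not matter here).

```python
def plurality_with_runoff(profile):
    # profile: list of (ranking, count); rankings best-to-worst.
    candidates = sorted(set(profile[0][0]))
    firsts = {c: sum(n for r, n in profile if r[0] == c) for c in candidates}
    a, b = sorted(candidates, key=lambda c: (-firsts[c], c))[:2]
    wins = lambda x, y: sum(n for r, n in profile if r.index(x) < r.index(y))
    return a if wins(a, b) > wins(b, a) else b

full    = [("ABC", 4), ("BCA", 3), ("CAB", 1), ("CBA", 3)]
abstain = [("ABC", 2), ("BCA", 3), ("CAB", 1), ("CBA", 3)]
print(plurality_with_runoff(full))     # C wins when everyone votes
print(plurality_with_runoff(abstain))  # B wins when 2 A-voters stay home
```

The two voters who stay home rank \(B\) above \(C\), so abstaining gets them a better outcome than voting sincerely.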
Plurality with Runoff is not the only voting method that is
susceptible to the no-show paradox. The Coombs Rule, Hare Rule and
Majority Judgement (using the tie-breaking mechanism from Balinski and Laraki 2010)
are all susceptible to the no-show paradox. It turns out that always
electing a Condorcet winner, if one exists, makes a voting method
susceptible to the above failure of monotonicity.
**Theorem** (Moulin 1988).
If there are four or more candidates, then every Condorcet consistent
voting method is susceptible to the no-show paradox.
See Perez 2001, Campbell and Kelly 2002, Jimeno et al.
2009, Duddy 2014, Brandt et al. 2017, 2019, and Nunez and Sanver 2017
for further discussions and generalizations of this result.
**Multiple Districts Paradox**: Suppose that a population is
divided into districts. If a candidate wins each of the districts, one
would expect that candidate to win the election over the entire
population of voters (assuming that the two districts divide the set of voters
into disjoint sets). This is certainly true for Plurality Rule: If a
candidate is ranked first by the most voters in each of
the districts, then that candidate will also be ranked first by
the most voters over the entire population. Interestingly, this is
not true for all voting methods (Fishburn and Brams 1983). The example
below illustrates the paradox for Coombs Rule.
| | | |
| --- | --- | --- |
| | # Voters | Ranking |
| District 1 | 3 | \(A\s B\s C\) |
| | 3 | \(B\s C\s A\) |
| | 3 | \(C\s A\s B\) |
| | 1 | \(C\s B\s A\) |
| District 2 | 2 | \(A\s B\s C\) |
| | 3 | \(B\s A\s C\) |
| District 1: Candidate \(B\) is the Coombs winner |
| District 2: Candidate \(B\) is the Coombs winner |
Candidate \(B\) wins both districts:
**District 1**: There are a total of 10 voters in this district.
None of the candidates are ranked first by 6 or more voters, so
candidate \(A\), who is ranked last by 4 voters (compared to 3 voters
ranking each of \(C\) and \(B\) last), is eliminated.
In the second round, candidate \(B\) wins the election since 6 voters rank \(B\) above \(C\) and 4 voters rank \(C\) above \(B\).
**District 2**: There are a total of 5 voters in this district.
Candidate \(B\) is ranked first by a strict majority of voters, so
\(B\) wins the election.
Combining the two districts gives the following table:
| | | |
| --- | --- | --- |
| | # Voters | Ranking |
| Districts 1 + 2 | 5 | \(A\s B\s C\) |
| | 3 | \(B\s C\s A\) |
| | 3 | \(C\s A\s B\) |
| | 1 | \(C\s B\s A\) |
| | 3 | \(B\s A\s C\) |
| Candidate \(A\) is the Coombs winner |
There are 15 total voters in the combined districts. None of the
candidates are ranked first by 8 or more of the voters. Candidate
\(C\) receives the most last-place votes, so is eliminated in the
first round. In the second round, candidate \(A\) beats candidate
\(B\) by 1 vote (8 voters rank \(A\) above \(B\) and 7 voters rank
\(B\) above \(A\)), and so is declared the winner. Thus, even though
\(B\) wins both districts, candidate \(A\) wins the election when the
districts are combined.
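The district calculations can be checked with a short implementation of Coombs Rule (a minimal sketch; the profile encoding and the alphabetical tie-break for eliminations are my own assumptions).

```python
def coombs(profile):
    # profile: list of (ranking, count); rankings best-to-worst.
    remaining = sorted(set(profile[0][0]))
    total = sum(n for _, n in profile)
    while len(remaining) > 1:
        restrict = lambda r: [c for c in r if c in remaining]
        firsts = {c: sum(n for r, n in profile if restrict(r)[0] == c)
                  for c in remaining}
        leader = max(remaining, key=lambda c: firsts[c])
        if firsts[leader] * 2 > total:   # strict majority wins outright
            return leader
        lasts = {c: sum(n for r, n in profile if restrict(r)[-1] == c)
                 for c in remaining}
        # Otherwise, eliminate the candidate with the most last-place votes.
        remaining.remove(max(remaining, key=lambda c: lasts[c]))
    return remaining[0]

district1 = [("ABC", 3), ("BCA", 3), ("CAB", 3), ("CBA", 1)]
district2 = [("ABC", 2), ("BAC", 3)]
print(coombs(district1), coombs(district2))  # B wins each district...
print(coombs(district1 + district2))         # ...but A wins the combined election
```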
The other voting methods that are susceptible to the
multiple-districts paradox include Plurality with Runoff, the Hare
Rule, and Majority Judgement. Note that these methods are also
susceptible to the no-show paradox. As is the case with the no-show
paradox, every Condorcet consistent voting method is susceptible to
the multiple districts paradox (see Zwicker, 2016, Proposition 2.5). I
sketch the proof of this from Zwicker 2016 (pg. 40) since it adds to
the discussion at the end of Section 3.1 about whether the Condorcet
winner should be elected.
Suppose that \(V\) is a voting method that always selects the
Condorcet winner (if one exists) and that \(V\) is not susceptible to
the multiple-districts paradox. This means that if a candidate \(X\)
is among the winners according to \(V\) in each of two districts, then
\(X\) must be among the winners according to \(V\) in the combined
districts. Consider the following two districts.
| | | |
| --- | --- | --- |
| | # Voters | Ranking |
| District 1 | 2 | \(A\s B\s C\) |
| | 2 | \(B\s C\s A\) |
| | 2 | \(C\s A\s B\) |
| District 2 | 1 | \(A\s B\s C\) |
| | 2 | \(B\s A\s C\) |
Note that in district 2 candidate \(B\) is the Condorcet winner, so
must be the only winner according to \(V\). In district 1, there are
no Condorcet winners. If candidate \(B\) is among the winners
according to \(V\), then, in order to not be susceptible to the
multiple districts paradox, \(B\) must be among the winners in the
combined districts. In fact, since \(B\) is the only winner in
district 2, \(B\) must be the only winner in the combined districts.
However, in the combined districts, candidate \(A\) is the Condorcet
winner, so must be the (unique) winner according to \(V\). This is a
contradiction, so \(B\) cannot be among the winners according to \(V\)
in district 1. A similar argument shows that neither \(A\) nor \(C\)
can be among the winners according to \(V\) in district 1 by swapping
\(A\) and \(B\) in the first case and \(B\) with \(C\) in the second
case in the rankings of the voters in district 2. Since \(V\) must
assign at least one winner to every profile, this is a contradiction;
and so, \(V\) is susceptible to the multiple districts paradox.
One last comment about this paradox: It is an example of a more
general phenomenon known as Simpson's Paradox (Malinas and Bigelow
2009). See Saari (2001, Section 4.2) for a discussion of Simpson's
Paradox in the context of voting theory.
### 3.4 The Multiple Elections Paradox
The paradox discussed in this section, first introduced by Brams,
Kilgour and Zwicker (1998), has a somewhat different structure from
the paradoxes discussed above. Voters are taking part in a
*referendum*, where they are asked their opinion directly about
various propositions (cf. the discussion of Quadratic Voting and Liquid Democracy
in Section 2.3). So, voters must select either "yes"
(Y) or "no" (N) for each proposition. Suppose that there
are 13 voters who cast the following votes for the three propositions (so
voters can cast one of eight possible votes):
| | |
| --- | --- |
| # Voters | Propositions |
| 1 | YYY |
| 1 | YYN |
| 1 | YNY |
| 3 | YNN |
| 1 | NYY |
| 3 | NYN |
| 3 | NNY |
| 0 | NNN |
When the votes are tallied for each proposition separately, the
outcome is N for each proposition (N wins 7-6 for all three
propositions). Putting this information together, this means that NNN
is the outcome of this election. However, there is *no support*
for this outcome in this population of voters. This raises an important
question about what outcome reflects the group opinion: Viewing each proposition
separately, there is clear support for N on each proposition; however,
there is no support for the entire package of N for all propositions.
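Tallying the referendum proposition by proposition is straightforward to verify; the encoding below is my own.

```python
# Votes on three propositions: e.g., "YNN" means yes on the first
# proposition and no on the other two.
votes = {"YYY": 1, "YYN": 1, "YNY": 1, "YNN": 3,
         "NYY": 1, "NYN": 3, "NNY": 3, "NNN": 0}

outcome = ""
for i in range(3):
    yes = sum(n for v, n in votes.items() if v[i] == "Y")
    outcome += "Y" if yes > sum(votes.values()) - yes else "N"

print(outcome)         # NNN: each proposition fails 7-6
print(votes[outcome])  # 0: yet no voter cast the ballot NNN
```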
Brams et al. (1998, pg. 234) nicely summarise the issue as follows:
>
> The paradox does not just highlight problems of aggregation and
> packaging, however, but strikes at the core of social
> choice--both what it means and how to uncover it. In our view,
> the paradox shows there may be a clash between two different meanings
> of social choice, leaving unsettled the best way to uncover what this
> elusive quantity is.
>
See Scarsini 1998, Lacy and Niou 2000, Xia et al. 2007, and Lang and Xia 2009 for further discussion of this paradox.
A similar issue is raised by **Anscombe's paradox** (Anscombe
1976), in which:
>
> It is possible for a majority of voters to be on the losing side of a
> majority of issues.
>
This phenomenon is illustrated by the following example with five
voters voting on three different issues (the voters either vote
'yes' or 'no' on the different issues).
| | | | |
| --- | --- | --- | --- |
| | Issue 1 | Issue 2 | Issue 3 |
| Voter 1 | yes | yes | no |
| Voter 2 | no | no | no |
| Voter 3 | no | yes | yes |
| Voter 4 | yes | no | yes |
| Voter 5 | yes | no | yes |
| Majority: | yes | no | yes |
However, a majority of the voters (voters 1, 2 and 3) do *not*
support the majority outcome on a majority of the issues (note that
voter 1 does not support the majority outcome on issues 2 and 3; voter
2 does not support the majority outcome on issues 1 and 3; and voter 3
does not support the majority outcome on issues 1 and 2)!
The issue is more interesting when the voters do not vote directly on
the issues, but on candidates that take positions on the different
issues. Suppose there are two candidates \(A\) and \(B\) who take the
following positions on the three issues:
| | | | |
| --- | --- | --- | --- |
| | Issue 1 | Issue 2 | Issue 3 |
| Candidate \(A\) | yes | no | yes |
| Candidate \(B\) | no | yes | no |
Candidate \(A\) takes the majority position, agreeing with a majority
of the voters on each issue, and candidate \(B\) takes the opposite,
minority position. Under the natural assumption that voters will vote
for the candidate who agrees with their position on a majority of the
issues, candidate \(B\) will win the election (each of the voters 1, 2
and 3 agree with \(B\) on two of the three issues, so \(B\) wins the
election 3-2)! This version of the paradox is known as
**Ostrogorski's Paradox** (Ostrogorski 1902). See Kelly 1989; Rae
and Daudt 1976; Wagner 1983, 1984; and Saari 2001, Section 4.6, for
analyses of this paradox, and Pigozzi 2005 for the relationship with
the judgement aggregation literature (List 2013, Section 5).
## 4. Topics in Voting Theory
### 4.1 Strategizing
In the discussion above, I have assumed that voters select ballots
*sincerely*. That is, the voters are simply trying to
communicate their opinions about the candidates under the constraints
of the chosen voting method. However, in many contexts, it makes sense to
assume that voters choose *strategically*. One need only look to recent
U.S. elections to see concrete examples of strategic voting. The most
often cited example is the 2000 U.S. election: Many voters who ranked
third-party candidate Ralph Nader first voted for their second choice
(typically Al Gore). A detailed overview of the literature on
strategic voting is beyond the scope of this article (see Taylor 2005 and
Section 3.3 of List 2013 for discussions and pointers to the relevant literature; also see
Poundstone 2008 for an entertaining and informative discussion of the
occurrence of this phenomenon in many actual elections). I will
explain the main issues, focusing on specific voting rules.
There are two general types of manipulation that can be studied in the
context of voting. The first is manipulation by a moderator or outside
party that has the authority to set the agenda or select the voting
method that will be used. So, the outcome of an election is not
manipulated from within by unhappy voters, but, rather, it is
*controlled* by an outside authority figure. To illustrate this
type of control, consider a population with three voters whose
rankings of four candidates are given in the table below:
| | |
| --- | --- |
| # Voters | Ranking |
| 1 | \(B\s D\s C\s A\) |
| 1 | \(A\s B\s D\s C\) |
| 1 | \(C\s A\s B\s D\) |
Note that everyone prefers candidate \(B\) over candidate \(D\).
Nonetheless, a moderator can ask the right questions so that candidate
\(D\) ends up being elected. The moderator proceeds as follows: First,
ask the voters if they prefer candidate \(A\) or candidate \(B\).
Since the voters prefer \(A\) to \(B\) by a margin of 2 to 1, the
moderator declares that candidate \(B\) is no longer in the running.
The moderator then asks voters to choose between candidate \(A\) and
candidate \(C\). Candidate \(C\) wins this election 2-1, so
candidate \(A\) is removed. Finally, in the last round the moderator
asks voters to choose between candidates \(C\) and \(D\).
Candidate \(D\) wins this election 2-1 and is declared the
winner.
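The moderator's agenda can be simulated as a sequence of pairwise majority votes (a minimal sketch with my own helper names and encoding):

```python
def pairwise_winner(profile, x, y):
    # Majority vote between x and y; rankings are best-to-worst.
    x_votes = sum(n for r, n in profile if r.index(x) < r.index(y))
    y_votes = sum(n for r, n in profile if r.index(y) < r.index(x))
    return x if x_votes > y_votes else y

profile = [("BDCA", 1), ("ABDC", 1), ("CABD", 1)]
survivor = pairwise_winner(profile, "A", "B")       # A beats B 2-1
survivor = pairwise_winner(profile, survivor, "C")  # C beats A 2-1
survivor = pairwise_winner(profile, survivor, "D")  # D beats C 2-1
print(survivor)  # D, even though every voter ranks B above D
```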
A second type of manipulation focuses on how the voters themselves can
manipulate the outcome of an election by *misrepresenting*
their preferences. Consider the following two election scenarios
with 5 voters and 4 candidates:
| | | |
| --- | --- | --- |
| # Voters | *Scenario 1* Ranking | *Scenario 2* Ranking |
| 1 | \(C\s D\s B\s A\) | \(C\s D\s B\s A\) |
| 1 | \(B\s A\s C\s D\) | \(B\s A\s C\s D\) |
| 1 | \(A\s \bC\s \bB\s \bD\) | \(A\s \bB\s \bD\s \bC\) |
| 1 | \(A\s C\s D\s B\) | \(A\s C\s D\s B\) |
| 1 | \(D\s C\s A\s B\) | \(D\s C\s A\s B\) |
| *Scenario 1*: Candidate \(C\) is the Borda winner (\(\BS(A)=9, \BS(B)=5, \BS(C)=10\), and \(\BS(D)=6\)) |
| *Scenario 2*: Candidate \(A\) is the Borda winner
(\(\BS(A)=9, \BS(B)=6, \BS(C)=8\), and \(\BS(D)=7\)) |
The only difference between the two election scenarios is that the third voter
changed the ranking of the bottom three candidates. In election scenario 1, the third
voter has candidate \(A\) ranked first, then \(C\) ranked second, \(B\) ranked third
and \(D\) ranked last. In election scenario 2, this voter still has \(A\) ranked
first, but ranks \(B\) second, \(D\) third and \(C\) last. In election scenario 1, candidate \(C\) is the Borda Count winner (the Borda scores are \(\BS(A)=9, \BS(B)=5, \BS(C)=10\), and \(\BS(D)=6\)). In the election scenario 2, candidate \(A\) is
the Borda Count winner (the Borda scores are \(\BS(A)=9, \BS(B)=6, \BS(C)=8\), and \(\BS(D)=7\)).
According to her ranking in election scenario 1, this voter prefers the outcome in election scenario 2 (candidate \(A\), the Borda winner in election scenario 2, is ranked above candidate \(C\), the Borda winner in election scenario 1). So, if we assume that
election scenario 1 represents the "true" preferences of the
electorate, it is in the interest of the third voter to misrepresent
her preferences as in election scenario 2. This is an instance of a general result known as the
**Gibbard-Satterthwaite Theorem** (Gibbard 1973; Satterthwaite
1975): Under natural assumptions, there is no voting method that
*guarantees* that voters will choose their ballots sincerely
(for a precise statement of this theorem
see Theorem 3.1.2 from Taylor 2005 or Section 3.3 of List 2013).
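The Borda scores in the two scenarios are easy to recompute (a minimal sketch; the profile encoding is my own):

```python
def borda_scores(profile):
    # profile: list of (ranking, count); a ranking of m candidates
    # awards m-1, m-2, ..., 0 points.
    scores = {c: 0 for c in profile[0][0]}
    for ranking, n in profile:
        for points, c in enumerate(reversed(ranking)):
            scores[c] += points * n
    return scores

sincere     = [("CDBA", 1), ("BACD", 1), ("ACBD", 1), ("ACDB", 1), ("DCAB", 1)]
misreported = [("CDBA", 1), ("BACD", 1), ("ABDC", 1), ("ACDB", 1), ("DCAB", 1)]
print(borda_scores(sincere))      # C wins with 10 points
print(borda_scores(misreported))  # A wins with 9 points
```

By ranking \(B\) and \(D\) above \(C\), the third voter lowers \(C\)'s score enough to make her favorite, \(A\), the winner.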
### 4.2 Characterization Results
Much of the literature on voting theory (and, more generally, social
choice theory) is focused on so-called *axiomatic characterization
results*. The main goal is to characterize different voting
methods in terms of abstract principles of collective decision making.
See Pauly 2008 and Endriss 2011 for interesting discussions of
axiomatic characterization results from a logician's point-of-view.
Consult List 2013 and Gaertner 2006 for introductions to
the vast literature on axiomatic characterizations in social choice theory.
In this article, I focus on a few key axioms and results and how they
relate to the voting methods and paradoxes discussed above. I start with
three core principles.
**Anonymity**:
The names of the voters do not matter: If two
voters swap their ballots, then the outcome of the election is
unaffected.
**Neutrality**:
The names of the candidates, or alternatives, do not
matter: If two candidates are exchanged in every ballot, then the
outcome of the election changes accordingly.
**Universal Domain**:
There are no restrictions on the voter's
choice of ballots. In other words, no profile of ballots can be
ignored by a voting method. One way to make this precise is to require
that voting methods are *total functions* on the set of all
profiles (recall that a profile is a sequence of ballots, one from
each voter).
These properties ensure that the outcome of an election depends only
on the voters' ballots, with all the voters and candidates being treated equally.
Other properties are intended to rule out some of the paradoxes and
anomalies discussed above. In section 4.1, there is an example of a
situation in which a candidate is elected, even though *all*
the voters prefer a different candidate. The next principle rules out
such situations:
**Unanimity** (also called the **Pareto Principle**):
If candidate
\(A\) is ranked above candidate \(B\) by *all* voters, then
candidate \(B\) should not win the election.
These are natural properties to impose on any voting method. A
surprising consequence of these properties is that they rule out
another natural property that one may want to impose: Say that a
voting method is **resolute** if the method always selects one
winner (i.e., there are no ties). Suppose that \(V\) is a voting
method that requires voters to rank the candidates and that there are
at least 3 candidates and enough voters to form a Condorcet
component (a profile generating a majority cycle with voters evenly
distributed among the different rankings). First, consider the situation when
there are exactly 3 candidates (in this case, we do not need to assume Unanimity).
Divide the set of voters into
three groups of size \(n\) and consider the Condorcet component:
| | |
| --- | --- |
| # Voters | Ranking |
| \(n\) | \(A\s B\s C\) |
| \(n\) | \(B\s C\s A\) |
| \(n\) | \(C\s A\s B\) |
By Universal Domain and resoluteness, \(V\) must select exactly one of
\(A\), \(B\), or \(C\) as the winner. Assume that \(V\) selects \(A\)
as the winner (the argument is similar when \(V\) selects one of the other candidates).
Now, consider the profile in which every voter swaps candidate \(A\)
and \(B\) in their rankings:
| | |
| --- | --- |
| # Voters | Ranking |
| \(n\) | \(B\s A\s C\) |
| \(n\) | \(A\s C\s B\) |
| \(n\) | \(C\s B\s A\) |
By Neutrality and Universal Domain, \(V\) must elect candidate \(B\) in this election scenario. Now, consider the profile in which every voter in the above election scenario swaps candidates \(B\) and \(C\):
| | |
| --- | --- |
| # Voters | Ranking |
| \(n\) | \(C\s A\s B\) |
| \(n\) | \(A\s B\s C\) |
| \(n\) | \(B\s C\s A\) |
By Neutrality and Universal Domain, \(V\) must elect candidate \(C\)
in this election scenario. Notice that this last election scenario
can be generated by permuting the voters in the first election
scenario (to generate the last election scenario from the first
election scenario, move the first group of voters to the 2nd position,
the 2nd group of voters to the 3rd position and the 3rd group of
voters to the first position). But this contradicts Anonymity since
this requires \(V\) to elect the same candidate in the first and third
election scenario. To extend this result to more than 3 candidates,
consider a profile in which candidates \(A\), \(B\), and \(C\) are all
ranked above any other candidate and the restriction to these three
candidates forms a Condorcet component. If \(V\) satisfies Unanimity,
then no candidate except \(A\), \(B\) or \(C\) can be elected. Then,
the above argument shows that \(V\) cannot satisfy Resoluteness,
Universal Domain, Neutrality, and Anonymity. That is, there are no
Resolute voting methods that satisfy Universal Domain, Anonymity,
Neutrality, and Unanimity for 3 or more candidates (note that I have
assumed that the number of voters is a multiple of 3, see Moulin 1983
for the full proof).
Section 3.2 discussed examples in which candidates end up losing an
election as a result of more support from some of the voters. There
are many ways to state properties that require a voting method to be
*monotonic*. The following strong version (called **Positive
Responsiveness** in the literature) is used to characterize majority
rule when there are only two candidates:
**Positive Responsiveness**:
If candidate \(A\) is a winner or
tied for the win and moves up in some of the voters' rankings, then
candidate \(A\) is the unique winner.
I can now state our first characterization result. Note that in all of
the examples discussed above, it is crucial that there are three or
more candidates (for example, stating Condorcet's paradox requires there
to be three or more candidates). When there are only two
candidates, or alternatives, Majority Rule (choose the alternative ranked
first by more than 50% of the voters) can be singled out as "best":
**Theorem** (May 1952).
A voting method for choosing between two candidates satisfies
Neutrality, Anonymity, Unanimity and Positive Responsiveness if and only if the
method is majority rule.
See May 1952 for a precise statement of this theorem and Asan and
Sanver 2002, Maskin 1995, and Woeginger 2003 for
alternative characterizations of majority rule.
A key assumption in the proof of May's theorem and subsequent results is the
restriction to voting on two alternatives. When there are only two
alternatives, the definition of a ballot can be simplified since a
ranking of two alternatives boils down to selecting the alternative
that is ranked first. The above characterizations of Majority
Rule work in a more general setting since they also allow
voters to *abstain* (which is ambiguous between not voting
and being indifferent between the alternatives). So, if the alternatives
are \(\{A,B\}\), then there are three possible ballots: selecting \(A\),
selecting \(B\), or abstaining (which is treated as selecting both \(A\) and \(B\)).
A natural question is whether there are May-style characterization theorems
for more than two alternatives. A crucial issue is that rankings of more than
two alternatives are much more informative than selecting an alternative or abstaining. By restricting the information required
from a voter to selecting one of the alternatives or abstaining,
Goodin and List 2006 prove that the axioms used in May's Theorem characterize
Plurality Rule when there are more than two alternatives. They also show that a
minor modification of the axioms characterizes Approval Voting when voters are allowed to
select more than one alternative.
Note that focusing on voting methods that limit the information required from
the voters to selecting one or more of the alternatives hides all the interesting
phenomena discussed in the previous sections, such as the existence of a Condorcet paradox.
Returning to the study of voting methods that require voters to rank the alternatives,
the most important characterization result is Ken Arrow's celebrated impossibility
theorem (1963). Arrow showed that there is no *social welfare function* (a social
welfare function maps the voters' rankings (possibly allowing ties) to
a single social ranking) satisfying universal domain, unanimity,
non-dictatorship (there is no voter \(d\) such that for all profiles,
if \(d\) ranks \(A\) above \(B\) in the profile, then the social
ordering ranks \(A\) above \(B\)) and the following key property:
**Independence of Irrelevant Alternatives**:
The social ranking
(higher, lower, or indifferent) of two candidates \(A\) and \(B\)
depends only on the relative rankings of \(A\) and \(B\) for each
voter.
This means that if the voters' rankings of two candidates \(A\) and
\(B\) are the same in two different election scenarios, then the
social rankings of \(A\) and \(B\) must be the same. This is a very
strong property that has been extensively criticized (see Gaertner,
2006, for pointers to the relevant literature, and Cato, 2014, for a
discussion of generalizations of this property). It is beyond the
scope of this article to go into detail about the proof and the
ramifications of Arrow's theorem (see Morreau, 2014, for this
discussion), but I note that many of the voting methods we have
discussed do not satisfy the above property. A striking example of a
voting method that does not satisfy Independence of Irrelevant
Alternatives is Borda Count. Consider the following two election
scenarios:
| | | |
| --- | --- | --- |
| # Voters | *Scenario 1* Ranking | *Scenario 2* Ranking |
| 3 | \(A\s B\s C\s \bX\) | \(A\s B\s C\s \bX\) |
| 2 | \(B\s C\s A\s \bX\) | \(B\s C\s \bX\s A\) |
| 2 | \(C\s A\s B\s \bX\) | \(C\s \bX\s A\s B\) |
| *Scenario 1*: The Borda ranking is \(A >\_{\Borda} B >\_{\Borda} C >\_{\Borda} X\) (\(\BS(A)=15\), \(\BS(B)=14\), \(\BS(C)=13\), and \(\BS(X)=0\)) |
| *Scenario 2*: The Borda ranking is \(C >\_{\Borda} B >\_{\Borda} A >\_{\Borda} X\) (\(\BS(A)=11\), \(\BS(B)=12\), \(\BS(C)=13\), and \(\BS(X)=6\)) |
Notice that the relative rankings of candidates \(A\), \(B\) and \(C\)
are the same in both election scenarios. In the election scenario 2, the
ranking of candidate \(X\), that is uniformly ranked in last place in
election scenario 1, is changed. The ranking according to the
Borda score of the candidates in election scenario 1 puts \(A\) first with 15
points, \(B\) second with 14 points, \(C\) third with 13 points, and
\(X\) last with 0 points. In election scenario 2, the ranking of \(A\), \(B\)
and \(C\) is reversed: Candidate \(C\) is first with 13 points;
candidate \(B\) is second with 12 points; candidate \(A\) is third
with 11 points; and candidate \(X\) is last with 6 points. So, even
though the relative rankings of candidates \(A\), \(B\) and \(C\) do
not differ in the two election scenarios, the position of candidate \(X\)
in the voters' rankings reverses the Borda rankings of these candidates.
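This failure of Independence of Irrelevant Alternatives is easy to reproduce (a minimal sketch; the profile encoding is my own):

```python
def borda_scores(profile):
    # profile: list of (ranking, count); a ranking of m candidates
    # awards m-1, m-2, ..., 0 points.
    scores = {c: 0 for c in profile[0][0]}
    for ranking, n in profile:
        for points, c in enumerate(reversed(ranking)):
            scores[c] += points * n
    return scores

scenario1 = [("ABCX", 3), ("BCAX", 2), ("CABX", 2)]
scenario2 = [("ABCX", 3), ("BCXA", 2), ("CXAB", 2)]
print(borda_scores(scenario1))  # A: 15, B: 14, C: 13, X: 0
print(borda_scores(scenario2))  # A: 11, B: 12, C: 13, X: 6
# The voters' relative rankings of A, B, and C are identical in the
# two scenarios, yet moving X reverses the Borda ranking of A, B, C.
```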
In Section 3.3, it was noted that a number of methods (including all
Condorcet consistent methods) are susceptible to the multiple
districts paradox. An example of a method that is not susceptible to
the multiple districts paradox is Plurality Rule: If a candidate
receives the most first place votes in two different districts, then
that candidate must receive the most first place votes in the combined
districts. More generally, no scoring rule is susceptible to the
multiple districts paradox. This property is called reinforcement:
**Reinforcement**:
Suppose that \(N\_1\) and \(N\_2\) are
disjoint sets of voters facing the same set of candidates. Further,
suppose that \(W\_1\) is the set of winners for the population \(N\_1\),
and \(W\_2\) is the set of winners for the population \(N\_2\). If there
is at least one candidate that wins both elections, then the winner(s)
for the entire population (including voters from both \(N\_1\) and
\(N\_2\)) is the set of candidates that are in both \(W\_1\) and \(W\_2\)
(i.e., the winners for the entire population is \(W\_1\cap W\_2\)).
The reinforcement property explicitly rules out the multiple-districts
paradox (so, candidates that win all sub-elections are guaranteed to
win the full election). In order to characterize all scoring rules,
one additional technical property is needed:
**Continuity**:
Suppose that a group of voters \(N\_1\) elects a
candidate \(A\) and a disjoint group of voters \(N\_2\) elects a
different candidate \(B\). Then there must be some number \(m\) such
that the population consisting of the subgroup \(N\_2\) together with
\(m\) copies of \(N\_1\) will elect \(A\).
We then have:
**Theorem** (Young 1975).
Suppose that \(V\) is a voting method that requires voters to rank the
candidates. Then, \(V\) satisfies Anonymity, Neutrality, Reinforcement
and Continuity if and only if the method is a scoring rule.
See Merlin 2003 and Chebotarev and Shamis 1998 for surveys of other
characterizations of scoring rules. Additional axioms single out Borda
Count among all scoring methods (Young 1974; Gardenfors 1973; Nitzan
and Rubinstein 1981). In fact, Saari has argued that "any fault
or paradox admitted by Borda's method also must be admitted by all
other positional voting methods" (Saari 1989, pg. 454). For
example, it is often remarked that Borda Count (and all scoring rules)
can be easily manipulated by the voters. Saari (1995, Section 5.3.1)
shows that among all scoring rules Borda Count is the least susceptible
to manipulation (in the sense that it has the fewest profiles where a
small percentage of voters can manipulate the outcome).
I have glossed over an important detail of Young's characterization of
scoring rules. Note that the reinforcement property refers to the
behavior of a voting method on different populations of voters. To
make this precise, the formal definition of a voting method must allow for
domains that include profiles (i.e., sequences of ballots) of different
lengths. To do this, it is convenient to assume that the domain of a
voting method is an anonymized profile: Given a set of ballots
\(\mathcal{B}\), an anonymous profile is a function
\(\pi:\mathcal{B}\rightarrow\mathbb{N}\). Let \(\Pi\) be the set of
all anonymous profiles. A **variable domain voting method** assigns
a non-empty set of candidates to each anonymous profile--i.e., it is a function
\(V:\Pi\rightarrow \wp(X)-\emptyset\). Of course, this builds the
property of Anonymity into the definition of a voting method. For this
reason, Young (1975) does not need to state Anonymity as a
characterizing property of scoring rules.
Young's axioms identify scoring rules out of the set of all functions
defined from ballots that are rankings of candidates. In order to
characterize the voting methods from Section 2.2, we need to change
the set of ballots. For example, in order to characterize Approval
Voting, the set of ballots \(\mathcal{B}\) is the set of non-empty
subsets of the set of candidates--i.e.,
\(\mathcal{B}=\wp(X)-\emptyset\) (selecting the ballot \(X\)
consisting of all candidates means that the voter *abstains*).
Two additional axioms are needed to characterize Approval Voting:
**Faithfulness**:
If there is exactly one voter in the population,
then the winners are the candidates chosen by that voter.
**Cancellation**:
If all candidates receive the same number of
votes (i.e., they are elements of the same number of ballots) from the
participating voters, then every candidate is a winner.
We then have:
**Theorem** (Fishburn 1978b; Alós-Ferrer 2006).
A variable domain voting method where the ballots are non-empty sets
of candidates is Approval Voting if and only if it satisfies
Faithfulness, Cancellation, and Reinforcement.
Note that Approval Voting satisfies Neutrality even though it is not
listed as one of the characterizing properties in the above
theorem. This is because Alós-Ferrer (2006) showed that Neutrality is
a consequence of Faithfulness, Cancellation and Reinforcement. See
Fishburn 1978a and Baigent and Xu 1991 for alternative
characterizations of Approval Voting, and Xu 2010 for a survey of the
characterizations of Approval Voting (cf. the characterization of
Approval Voting from Goodin and List 2006).
Myerson (1995) introduced a general framework for characterizing
*abstract scoring rules* that include Borda Count and Approval
Voting as examples. The key idea is to think of a ballot, called a
**signal** or a **vote**, as a function from candidates to a set
\(\mathcal{V}\), where \(\mathcal{V}\) is a set of numbers. That is,
the set of ballots is a subset of \(\mathcal{V}^X\) (the set of functions
from \(X\) to \(\mathcal{V}\)). Then, an anonymous profile of signals
assigns a score to each candidate in \(X\) by summing the numbers
assigned to that candidate by each voter. This allows us to define voting methods
by specifying the set of ballots:
* Plurality Rule: The ballots are functions assigning 0 or 1 to the
candidates such that exactly one candidate is assigned 1: \(\{v\ |\
v\in \{0,1\}^X\) and there is an \(A\in X\) such that \(v(A)=1\) and
for all \(B\), if \(B\ne A\), then \(v(B)=0\}\)
* Approval Voting: The ballots are functions assigning 0 or 1 to
the candidates: \(\{v\ |\ v\in \{0,1\}^X \}\)
* Borda Count: The ballots are functions assigning numbers from the
set \(\{\#X-1, \#X-2,\ldots,0\}\) such that each candidate is assigned
exactly one of the numbers: \(\{v\ |\ v\in\{\# X - 1, \# X - 2, \ldots,
0\}^X\) such that \(v\) is a bijection\(\}\)
* Range Voting: The ballots are assignments of real numbers between
0 and 1 to candidates: \([0,1]^X = \{v \ |\ v:X\rightarrow [0,1] \}\)
* Cumulative Voting: The ballots are assignments of real numbers
between 0 and 1 to candidates such that the assignments sum to 1: \(
\{v \ |\ v\in [0,1]^X\) and \(\sum\_{A\in X} v(A)=1\}\)
* Formal Utilitarian: The ballots are assignments of real numbers
to candidates: \(\mathbb{R}^X = \{v \ |\ v:X\rightarrow\mathbb{R}\}\).
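The ballot sets above can be made concrete with a short sketch (my own illustration; the names are not from Myerson's paper) that represents each signal as a function, here a Python dict, from candidates to numbers, and sums the numbers to score each candidate:

```python
def abstract_score_winners(profile):
    """An abstract scoring rule: each ballot maps candidates to numbers;
    a candidate's score is the sum of the numbers it receives."""
    totals = {}
    for ballot in profile:
        for cand, value in ballot.items():
            totals[cand] = totals.get(cand, 0) + value
    top = max(totals.values())
    return {c for c, s in totals.items() if s == top}

# Approval ballots: unrestricted 0/1 assignments to the candidates
approval_profile = [
    {"A": 1, "B": 1, "C": 0},
    {"A": 1, "B": 0, "C": 0},
    {"A": 0, "B": 1, "C": 1},
]
winners = abstract_score_winners(approval_profile)
# A and B tie with 2 approvals each, so both win
```

Restricting the set of admissible ballots (exactly one 1 for Plurality, a bijection onto the Borda weights, assignments summing to 1 for Cumulative Voting, and so on) recovers each of the methods listed above from this single scoring procedure.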
Myerson (1995) showed that an abstract voting rule is an abstract
scoring rule if and only if it satisfies Reinforcement, Universal
Domain (i.e. it is defined for all anonymous profiles), a version of
the Neutrality property (adapted to the more abstract setting), and
the Continuity property, which is called **Overwhelming Majority**.
Pivato (2013) generalizes this result, and Gaertner and Xu (2012)
provide a related characterization result (using different
properties). Pivato (2014) characterizes Formal Utilitarian and Range
Voting within the class of abstract scoring rules, and Mace (2018)
extends this approach to cover a wider class of grading voting methods
(including Majority Judgement).
### 4.3 Voting to Track the Truth
The voting methods discussed above have been judged on
*procedural* grounds. This "proceduralist approach to
collective decision making" is defined by Coleman and Ferejohn
(1986, p. 7) as one that "identifies a set of ideals with which
any collective decision-making procedure ought to comply. ... [A]
process of collective decision making would be more or less
justifiable depending on the extent to which it satisfies them."
The authors add that a distinguishing feature of proceduralism is that
"what justifies a [collective] decision-making procedure is
strictly a necessary property of the procedure -- one entailed by
the definition of the procedure alone." Indeed, the
characterization theorems discussed in the previous section can be
viewed as an implementation of this idea (cf. Riker 1982). The general
view is to analyze voting methods in terms of "fairness
criteria" that ensure that a given method is sensitive to
*all* of the voters' opinions in the right way.
However, one may not be interested only in whether a collective
decision was arrived at "in the right way," but in whether
or not the collective decision is *correct*. This
*epistemic* approach to voting is nicely explained by Joshua
Cohen (1986, p. 34):
>
> An epistemic interpretation of voting has three main elements: (1) an
> independent standard of correct decisions -- that is, an account
> of justice or of the common good that is independent of current
> consensus and the outcome of votes; (2) a cognitive account of voting
> -- that is, the view that voting expresses beliefs about what the
> correct policies are according to the independent standard, not
> personal preferences for policies; and (3) an account of decision
> making as a process of the adjustment of beliefs, adjustments that are
> undertaken in part in light of the evidence about the correct answer
> that is provided by the beliefs of others.
>
>
Under this interpretation of voting, a given method is judged on how
well it "tracks the truth" of some objective fact (the
truth of which is independent of the method being used). A
comprehensive comparison of these two approaches to voting touches on
a number of issues surrounding the justification of democracy (cf.
Christiano 2008); however, I will not focus on these broader issues
here. Instead, I briefly discuss an analysis of Majority Rule that
takes this epistemic approach.
The most well-known analysis comes from the writings of Condorcet
(1785). The following theorem, which is attributed to Condorcet and
was first proved formally by Laplace, shows that if there are only two
options, then majority rule is, in fact, the best procedure from an
epistemic point of view. This is interesting because it also shows
that a proceduralist analysis and an epistemic analysis both single
out Majority Rule as the "best" voting method when there
are only two candidates.
Assume that there are \(n\) voters that have to decide between two
alternatives. Exactly one of these alternatives is (objectively)
"correct" or "better." The typical example
here is a jury deciding whether or not a defendant is guilty. The two
assumptions of the Condorcet jury theorem are:
**Independence**:
The voters' opinions are probabilistically
independent (so, the probability that two or more voters are correct
is the product of the probability that each individual voter is
correct).
**Voter Competence**:
The probability that a voter makes the
correct decision is greater than 1/2 (and this probability is the same
for all voters, though this is not crucial).
See Dietrich 2008 for a critical discussion of these two assumptions.
The classic theorem is:
**Condorcet Jury Theorem**.
Suppose that Independence and Voter Competence are both satisfied.
Then, as the group size increases, the probability that the majority
chooses the correct option increases and converges to certainty.
See Nitzan 2010 (part III) and Dietrich and Spiekermann 2013 for modern
expositions of this theorem, and Goodin and Spiekermann 2018 for
implications for the theory of democracy.
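The content of the theorem can be illustrated by computing the majority's accuracy exactly from the binomial distribution; the competence value 0.6 and the group sizes below are arbitrary illustrative choices:

```python
from math import comb

def majority_correct_prob(n, p):
    """Probability that a majority of n independent voters, each correct
    with probability p, picks the correct option (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With individual competence p = 0.6, the majority's accuracy grows
# with the group size and approaches certainty
probs = [majority_correct_prob(n, 0.6) for n in (1, 11, 51, 201)]
assert probs == sorted(probs)   # monotonically increasing
assert probs[-1] > 0.99         # close to certainty at n = 201
```

Even a modest individual competence of 0.6 thus yields a highly reliable majority once the group is large, which is the force of the theorem.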
Condorcet envisioned that the above argument could be adapted to
voting situations with more than two alternatives. Young (1975, 1988, 1995)
was the first to fully work out this
idea (cf. List and Goodin 2001 who generalize the Condorcet Jury
Theorem to more than two alternatives in a different framework).
He showed (among other things) that the Borda Count can be
viewed as the *maximum likelihood estimator* for identifying
the *best* candidate. Conitzer and Sandholm (2005), Conitzer et
al. (2009), Xia et al. (2010), and Xia (2016) take these ideas further
by classifying different voting methods according to whether or not
the methods can be viewed as a *maximum likelihood estimator*
(for a noise model). The most general results along these lines can be
found in Pivato 2013 which contains a series of results showing when
voting methods can be interpreted as different kinds of statistical
'estimators'.
### 4.4 Computational Social Choice
One of the most active and exciting areas of research that is focused,
in part, on the study of voting methods and voting paradoxes is
*computational social choice*. This is an interdisciplinary
research area that uses ideas and techniques from theoretical computer
science and artificial intelligence to provide new perspectives and to
ask new questions about methods for making group decisions; and to use
voting methods in computational domains, such as recommendation
systems, information retrieval, and crowdsourcing. It is beyond the
scope of this article to survey this entire research area. Readers are
encouraged to consult the *Handbook of Computational Social
Choice* (Brandt et al. 2016) for an overview of this field (cf.
also Endriss 2017). In the remainder of this section, I briefly
highlight some work from this research area related to issues
discussed in this article.
Section 4.1 discussed election scenarios in which voters choose their
ballots strategically and briefly introduced the Gibbard-Satterthwaite
Theorem. This theorem shows that every voting method satisfying
natural properties has profiles in which there is some voter, called a
**manipulator**, that can achieve a better outcome by selecting a
ballot that misrepresents her preferences. Importantly, in order to
successfully manipulate an election, the manipulator must not only
know which voting method is being used but also how the other members
of society are voting. Although there is some debate about whether
manipulation in this sense is in fact a problem (Dowding and van Hees
2008; Conitzer and Walsh, 2016, Section 6.2), there is interest in
mechanisms that incentivize voters to report their
"truthful" preferences. In a seminal paper, Bartholdi et
al. (1989) argue that the complexity of computing which ballot will
lead to a preferred outcome for the manipulator may provide a barrier
to voting insincerely. See Faliszewski and Procaccia 2010, Faliszewski
et al. 2010, Walsh 2011, Brandt et al. 2013, and Conitzer and Walsh
2016 for surveys of the literature on this and related questions, such
as the complexity of determining the winner given a voting method
and the complexity of determining which voter or voters should be
*bribed* to change their vote to achieve a given outcome.
One of the most interesting lines of research in computational social
choice is to use techniques and ideas from AI and theoretical computer
science to design new voting methods. The main idea is to think of
voting methods as solutions to an optimization problem. Consider the
space of all rankings of the alternatives \(X\). Given a profile of
rankings, the voting problem is to find an "optimal" group
ranking (cf. the discussion of *distance-based
rationalizations* of voting methods from Elkind et al. 2015). What
counts as an "optimal" group ranking depends on
assumptions about the type of the decision that the group is making.
One assumption is that the voters have real-valued **utilities**
for each candidate, but are only able to report rankings of the
alternatives (it is assumed that the rankings represent the utility
functions). The voting problem is to identify the candidate that
maximizes the (expected) social welfare (the average of the voters'
utilities), given the partial information about the voters'
utilities--i.e., the profile of rankings of the candidates. See
Pivato 2015 for a discussion of this approach to voting and Boutilier
et al. 2015 for algorithms that solve different versions of this
problem. A second assumption is that there is an objectively correct
ranking of the alternatives and the voters' rankings are noisy
estimates of this ground truth. This way of thinking about the voting
problem was introduced by Condorcet and discussed in Section 4.3.
Procaccia et al. (2016) import ideas from the theory of
error-correcting codes to develop an interesting new approach to
aggregate rankings viewed as noisy estimates of some ground truth.
## 5. Concluding Remarks
### 5.1 From Theory to Practice
As with any mathematical analysis of social phenomena, questions
abound about the "real-life" implications of the
theoretical analysis of the voting methods given above. The main
question is whether the voting paradoxes are simply features of the
formal framework used to represent an election scenario or
formalizations of real-life phenomena. This raises a number of subtle
issues about the scope of mathematical modeling in the social
sciences, many of which fall outside the scope of this article. I
conclude with a brief discussion of two questions that shed some light
on how one should interpret the above analysis.
**How *likely* is a Condorcet Paradox or any of the other
voting paradoxes?** There are two ways to approach this question.
The first is to calculate the probability that a majority cycle will
occur in an election scenario. There is a sizable literature devoted
to analytically deriving the probability of a majority cycle occurring
in election scenarios of varying sizes (see Gehrlein 2006, and
Regenwetter *et al*. 2006, for overviews of this literature).
The calculations depend on assumptions about the distribution of
rankings among the voters. One distribution that is
typically used is the so-called **impartial culture**, where each
ranking is possible and occurs with equal probability. For
example, if there are three candidates, and it is assumed that the
voters' ballots are rankings of the candidates, then each possible ranking
can occur with probability 1/6. Under this assumption,
the probability of a majority cycle occurring has been calculated (see
Gehrlein 2006, for details). Riker (1982, p. 122) has a table of the
relevant calculations. Two observations about this data: First, as the
number of candidates and voters increases, the probability of a
majority cycle increases to certainty. Second, for a fixed number of
candidates, the probability of a majority cycle still increases,
though not necessarily to certainty (the number of voters is the
independent variable here). For example, if there are five candidates
and seven voters, then the probability of a majority cycle is 21.5
percent. This probability increases to 25.1 percent as the number of
voters increases to infinity (keeping the number of candidates fixed)
and to 100 percent as the number of candidates increases to infinity
(keeping the number of voters fixed). Prima facie, this result
suggests that we should expect to see instances of the Condorcet and
related paradoxes in large elections. Of course, this interpretation
takes it for granted that the impartial culture is a realistic
assumption. Many authors have noted that the impartial culture is a
significant idealization that almost certainly does not occur in
real-life elections. Tsetlin et al. (2003) go even further arguing
that the impartial culture is a worst-case scenario in the sense that
*any* deviation results in lower probabilities of a majority
cycle (see Regenwetter *et al*. 2006, for a complete discussion
of this issue, and List and Goodin 2001, Appendix 3, for a related result).
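For readers who want to experiment with such calculations, here is a rough Monte Carlo sketch (my own illustration; the sample sizes are arbitrary) estimating the probability of a majority cycle for three candidates and 25 voters under the impartial culture:

```python
import random
from itertools import permutations

def has_majority_cycle(profile):
    """True iff the strict majority relation on A, B, C is cyclic."""
    cands = ["A", "B", "C"]
    def beats(x, y):
        wins = sum(r.index(x) < r.index(y) for r in profile)
        return wins > len(profile) - wins
    # With three candidates and an odd number of voters, all majorities
    # are strict, so a cycle exists iff no candidate beats both others
    return not any(all(beats(c, d) for d in cands if d != c)
                   for c in cands)

rankings = list(permutations(["A", "B", "C"]))  # impartial culture support
random.seed(0)
trials = 20000
cycles = sum(
    has_majority_cycle([random.choice(rankings) for _ in range(25)])
    for _ in range(trials)
)
estimate = cycles / trials  # roughly 8-9 percent for this election size
```

The estimate lands near the analytically derived values for three candidates reported in the literature cited above.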
A second way to argue that the above theoretical observations are
robust is to find supporting empirical evidence. For instance, is
there evidence that majority cycles have occurred in actual elections?
While Riker (1982) offers a number of intriguing examples, the most
comprehensive analysis of the empirical evidence for majority cycles
is provided by Mackie (2003, especially Chapters 14 and 15). The
conclusion is that, in striking contrast to the probabilistic analysis
referenced above, majority cycles typically have not occurred in
actual elections. However, this literature has not reached a consensus
about this issue (cf. Riker 1982): The problem is that the available
data typically does not include voters' opinions about *all*
pairwise comparisons of candidates, which are needed to determine if
there is a majority cycle. So, this information must be
*inferred* (for example, by using statistical methods) from the
given data.
A related line of research focuses on the influence of factors,
such as polls (Reijngoud and Endriss 2012),
social networks (Santoro and Beck 2017, Stirling 2016) and deliberation
among the voters (List 2018), on the profiles of ballots that are actually
realized in an election. For instance, List et al. 2013 present evidence
suggesting that deliberation reduces the probability of a Condorcet cycle occurring.
**How do the different voting methods compare in actual elections?** In this article, I have analyzed voting methods under highly
idealized assumptions. But, in the end, we are interested in a very
practical question: Which method should a group adopt? Of
course, any answer to this question will depend on many factors that
go beyond the abstract analysis given above (cf. Edelman 2012a). An interesting line of
research focuses on incorporating *empirical evidence* into the
general theory of voting. Evidence can come in the form of a computer
simulation, a detailed analysis of a particular voting method in
real-life elections (for example, see Brams 2008, Chapter 1, which
analyzes Approval voting in practice), or as *in situ*
experiments in which voters are asked to fill in additional ballots
during an actual election (Laslier 2010, 2011).
The most striking results can be found in the work of
Michael Regenwetter and his colleagues. They have analyzed datasets
from a variety of elections, showing that many of the usual voting
methods that are considered irreconcilable (e.g., Plurality Rule, Borda
Count and the Condorcet consistent methods from Section 3.1.1) are, in fact, in
perfect agreement. This suggests that the "theoretical
literature may promote overly pessimistic views about the likelihood
of consensus among consensus methods" (Regenwetter *et
al*. 2009, p. 840). See Regenwetter *et al*. 2006 for an
introduction to the methods used in these analyses and Regenwetter
*et al*. 2009 for the current state-of-the-art.
### 5.2 Further Reading
My objective in this article has been to introduce different voting methods
and to highlight key results and issues that facilitate comparisons
between the voting methods. To dive more into the details of the
topics introduced in this article, see Saari 2001, 2008, Nurmi 1998,
Brams and Fishburn 2002, Zwicker 2012, and the collection of articles
in Felsenthal and Machover 2012. Some important topics related to the
study of voting methods not discussed in this article include:
* Saari's influential geometric approach to study of voting methods
and paradoxes (Saari 1995);
* The study of measures of *voting power*: the probability
that a single voter is decisive in an election (Gelman et al. 2002;
Felsenthal and Machover 1998);
* The study of *probabilistic* voting methods: a voting
method in which the output is a lottery over the set of candidates
rather than a ranking or set of candidates (Brandt 2017);
* Questions about whether it is rational to vote in an election
(Brennan 2016); and
* The study of methods for ensuring fair and proportional
representation (Balinski and Young 1982; Pukelsheim 2017).
Finally, consult List 2013 and Morreau 2014 for a discussion of
broader issues in the theory of social choice.
## 1. Life
This philosopher's family name was "Wang," his personal
name was "Shouren," and his "courtesy name"
was
"Bo-an."[1]
However, he is normally known today as
"Wang Yangming," based on a nickname he adopted when he
was living in the Yangming Grotto of Kuaiji Mountain. Born in 1472
near Hangzhou in what is now Zhejiang Province, Wang was the son of a
successful official. As such, he would have received a fairly
conventional education, with a focus on the Four Books of the
Confucian tradition: the *Analects* (the sayings of Confucius
and his immediate disciples), the *Great Learning* (believed to
consist of an opening statement by Confucius with a commentary on it
by his leading disciple, Zengzi), the *Mean* (attributed to
Zisi, the grandson of Confucius, who was also a student of Zengzi),
and the *Mengzi* (the sayings and dialogues of Mencius, a
student of Zisi). The young Wang would have literally committed these
classics to memory, along with the commentaries on them by the master
of orthodox Confucianism, Zhu Xi (1130-1200). The study of
these classics-cum-commentary was thought to be morally
edifying; however, people also studied them in order to pass the civil
service examinations, which were the primary route to government
power, and with it wealth and prestige. At the age of seventeen
(1489), Wang had a conversation with a Daoist priest that left him
deeply intrigued with this alternative philosophical system and way of
life. Wang was also attracted to Buddhism, and remained torn between
Daoism, Buddhism and Confucianism for much of his early life. Whereas
Confucianism emphasizes our ethical obligations to others, especially
family members, and public service in government, the Daoism and
Buddhism of Wang's era encouraged people to overcome their attachment
to the physical world. Wang continued the serious study of Zhu Xi's
interpretation of Confucianism, but was disillusioned by an experience
in which he and a friend made a determined effort to apply what they
took to be Zhu Xi's method for achieving sagehood:
>
> ...my friend Qian and I discussed the idea that to become a sage
> or a worthy one must investigate all the things in the world. But how
> can a person have such tremendous energy? I therefore pointed to the
> bamboos in front of the pavilion and told him to investigate them and
> see. Day and night Qian went ahead trying to investigate to the
> utmost the Pattern of the bamboos. He exhausted his mind and
> thoughts, and on the third day he was tired out and took sick. At
> first I said that it was because his energy and strength were
> insufficient. Therefore I myself went to try to investigate to the
> utmost. From morning till night, I was unable to find the Pattern of
> the bamboos. On the seventh day I also became sick because I thought
> too hard. In consequence we sighed to each other and said that it was
> impossible to be a sage or a worthy, for we do not have the tremendous
> energy to investigate things that they have. (Translation modified
> from Chan 1963, 249)
>
As we shall see (Section 2, below), it is unclear whether Wang and
his friend were correctly applying what Zhu Xi meant by "the
investigation of things." However, Wang's experience of finding
it impractical to seek for the Pattern of the universe in external
things left a deep impression on him, and influenced the later course
of his philosophy.
Wang continued to study Daoism as well as Buddhism, but also showed
a keen interest in military techniques and the craft of writing
elegant compositions. Meanwhile, he progressed through the various
levels of the civil service examinations, finally passing the highest
level in 1499. After this, Wang had a meteoric rise in the
government, including distinguished service in offices overseeing
public works, criminal prosecution, and the examination system.
During this period, Wang began to express disdain for overly refined
literary compositions like those he had produced in his earlier years.
Wang would later criticize those who "waste their time competing
with one another writing flowery compositions in order to win acclaim
in their age, and...no longer comprehend conduct that honors what
is fundamental, esteems what is real, reverts to simplicity, and
returns to purity" (Tiwald and Van Norden 2014, 275).
In addition, Wang started to turn his back on Daoism and Buddhism,
which he came to regard as socially irresponsible:
"...simply because they did not understand what it is to
rest in the ultimate good and instead exerted their selfish minds
toward the achievement of excessively lofty goals, [Buddhists and
Daoists] were lost in vagaries, illusions, emptiness, and stillness
and had nothing to do with the family, state or world" (Tiwald
and Van Norden 2014, 244, gloss mine). Nonetheless, Wang continued to
show an intimate familiarity with Daoist and Buddhist literature and
concepts throughout his career.
A life-changing event for Wang occurred in 1506. A eunuch
who had assumed illegitimate influence at court had several able
officials imprisoned for opposing him. Wang wrote a
"memorial" to the emperor in protest. The eunuch
responded by having Wang publicly beaten and exiled to an
insignificant position in a semi-civilized part of what is now
Guizhou Province. Wang had to face considerable physical and
psychological hardship in this post, but through these challenges he
achieved a deep philosophical awakening (1508), which he later
expressed in a poem he wrote for his students:
>
> Everyone has within an unerring compass;
> The root and source of
> the myriad transformations lies in the mind.
> I laugh when I
> think that, earlier, I saw things the other way around;
>
> Following branches and leaves, I searched outside! (Ivanhoe 2009, 181)
>
In other words, looking outside oneself for moral truth, as he and
his friend Qian had tried to do when studying the bamboos, was
ignoring the root of moral insight, which is one's own innate
understanding.
Fortunately for Wang, in the year when his term of service in
Guizhou was over (1510), the eunuch who had Wang beaten and banished
was himself executed. Wang's official career quickly returned to its
stratospheric level of achievement, with high-ranking posts and
exceptional achievements in both civil and military positions. This
inevitably led to opposition from the faction-ridden court. Wang
was even accused of conspiring with the leader of a rebellion that
Wang had himself put down.
Wang had begun to attract devoted disciples even before his exile
to Guizhou, and they gradually compiled the *Record for
Practice* (the anthology of his sayings, correspondence, and
dialogues that is one of our primary sources for Wang's
philosophy). It is reflective of Wang's philosophy that the
discussions recorded in this work occurred in the midst of his active
life in public affairs. Near the end of his life, Wang was called
upon to suppress yet another rebellion (1527). The night before he
left, one of his disciples recorded the "Inquiry on the
*Great Learning*," which was intended as a primer of
Wang's philosophy for new disciples. Wang put down the rebellion, but
his health had been declining for several years, and he died soon
afterward (1529).
On his deathbed, Wang said, " 'This mind'
is luminous and bright. What more is there to
say?"[2]
## 2. Intellectual Context
During Wang's lifetime, the dominant intellectual movement was
Neo-Confucianism (in Chinese, *Daoxue*, or
the Learning of the Way). Neo-Confucianism traces its origins
to Han Yu and Li Ao in the late Tang dynasty (618-906), but it
only came to intellectual maturity in the Song and Southern Song
dynasties (960-1279), with the theorizing of Zhou Dunyi, Zhang
Zai, Cheng Yi, his brother Cheng Hao, and Zhu Xi.
Neo-Confucianism originally developed as a Confucian reaction
against Buddhism. Ironically, though, the Neo-Confucians were
deeply influenced by Buddhism and adopted many key Buddhist concepts,
including the notions that the diverse phenomena of the universe are
manifestations of some underlying unity, and that selfishness is the
fundamental vice.
Zhu Xi synthesized the thought of earlier Neo-Confucians, and
his interpretation of the Four Books became the basis of the civil
service examinations (see Section 1, above). Zhu held that everything
in existence has two aspects, Pattern and *qi*. Pattern
(*li*, also translated "principle") is the
underlying structure of the universe. *Qi* is the
spatio-temporal stuff that constitutes concrete objects. The
complete Pattern is fully present in each entity in the universe;
however, entities are individuated by having distinct allotments of
*qi* (variously translated as "ether,"
"psychophysical stuff," "vital fluid,"
"material force," and others). In addition to having
spatio-temporal location, *qi* has qualitative
distinctions, described metaphorically in terms of degrees of
"clarity" and "turbidity." The more
"clear" the *qi*, the greater the extent to which
the Pattern manifests. This explains how speciation is possible. For
example, plants have more "clear" *qi* than do
rocks, which is reflected in the more complex repertoire of
interactions plants have with their environments.
Neo-Confucian metaphysics, ethics, and philosophical
psychology are systematically connected. Humans have the most clear
*qi* of any being. However, some humans have especially clear
*qi*, and as a result manifest the virtues, while others have
comparatively turbid *qi*, and therefore are prone to vice.
The state of one's *qi* is not fixed, though. Through ethical
cultivation, any human can clarify his *qi*, and thereby become
more virtuous. Similarly, through laxness, one can allow one's
*qi* to morally deteriorate. Because the Pattern is fully
present in each human, everyone has complete innate ethical knowledge.
However, due to the obscurations of *qi*, the manifestations of
this knowledge are sporadic and inconsistent. Thus, a king who shows
benevolence when he takes pity on an ox being led to slaughter and
spares it may fail to show benevolence in his treatment of his own
subjects (*Mengzi* 1A7). Similarly, a person might manifest
righteousness when he disdains to even consider an illicit sexual
relationship; however, the same person might see nothing shameful
about flattering the worst traits and most foolish policies of a ruler
whom he serves (*Mengzi* 3B3). Because humans all share the
same Pattern, we are parts of a potentially harmonious whole. In some
sense, we form "one body" with other humans and the rest
of the universe. Consequently, the fundamental human virtue is
benevolence, which involves recognition of our shared nature with
others.[3]
Correspondingly, the fundamental vice is selfishness. Cheng Hao uses
a medical metaphor to explain the relationship between ethics and
personal identity:
>
> A medical text describes numbness of the hands and feet as being
> "unfeeling." This expression describes it perfectly.
> Benevolent people regard Heaven, Earth, and the myriad things as one
> Substance. Nothing is not oneself. If you recognize something as
> yourself, there are no limits to how far [your compassion] will go.
> But if you do not identify something as part of yourself, you will
> have nothing to do with it. This is like how, when hands and feet are
> "unfeeling," the *qi* does not flow and one does
> not identify them as part of oneself. (Tiwald and Van Norden 2014,
> 201)
>
Just as those whose hands and feet are "unfeeling" are
not bothered by injuries to their own limbs, so do those who are
ethically "unfeeling" fail to show concern for other
humans. In both cases, one fails to act appropriately because one
fails to recognize that something is a part of oneself.
Neo-Confucians regarded the ancient philosopher Mencius (4th
century BCE) as having presented an especially incisive interpretation
of Confucianism. They referred to him as the "Second
Sage," second in importance only to Confucius himself, and often
appealed to his vocabulary, arguments, and
examples.[4]
In particular,
Neo-Confucians adopted Mencius's view that human nature is good,
and that wrongdoing is the result of humans ignoring the promptings of
their innate virtues. Physical desires are one of the primary causes
for the failure to heed one's innate moral sense. There is nothing
intrinsically immoral about desires for fine food, sex, wealth, etc.,
but we will be led to wrongdoing if we pursue them without giving
thought to the well-being of others (*Mengzi* 6A15).
Neo-Confucians also adopted Mencius's list of four cardinal
virtues: benevolence, righteousness, wisdom, and propriety.
Benevolence is manifested in the emotion of compassion, which involves
sympathizing with the suffering of others and acting appropriately on
that sympathy. For instance, a benevolent ruler could never be
indifferent to the well-being of his subjects, and would work
tirelessly to alleviate their suffering. Righteousness is manifested
in the emotion of disdain or shame at the thought of doing what is
dishonorable, especially in the face of temptations of wealth or
physical desire. For example, a righteous person would not cheat at a
game, or accept a bribe. Wisdom is manifested in approval or
disapproval that reflects sound judgment about means-end
deliberation and about the character of others. Thus, a wise official
would know which policies to support and which to oppose, and would be
inventive in devising solutions to complex problems. Propriety is
manifested in respect or deference toward elders and legitimate
authorities, particularly as expressed in ceremonies or etiquette.
For instance, a person with propriety would willingly defer to elder
family members most of the time, but would be motivated to serve a
guest before his elder brother.
The preceding views are shared, in broad outline, by all
Neo-Confucian philosophers. The primary issue they debated was
the proper method of ethical cultivation. In other words, how is it
possible for us to consistently and reliably access and act on our
innate ethical knowledge? Other disagreements among them (on
seemingly recondite issues of metaphysics and the interpretation of
the Confucian classics) can often be traced to more fundamental
disagreements over ethical cultivation. For Zhu Xi, almost all humans
are born with *qi* that is so turbid that it is impossible for
them to consistently know what is virtuous and act accordingly without
external assistance. The remedy is carefully studying the classics
written by the ancient sages, ideally under the guidance of a wise
teacher. Although the classics were written in response to concrete
historical situations, their words allow us to grasp the Pattern at a
certain level of abstraction from its instantiations. However, Zhu Xi
states that merely understanding the Pattern is insufficient for the
exercise of virtue. One must achieve Sincerity
(*cheng*), a continual awareness of moral knowledge in
the face of
temptations.[5]
In his own lifetime, Zhu Xi's leading critic was Lu Xiangshan
(1139-1193). Lu argued that, because the Pattern is fully
present in the mind of each human, it is not necessary to engage in an
intellectually demanding process of study in order to recover one's
moral knowledge:
>
> Righteousness and Pattern are in the minds of human beings. As a
> matter of fact, these are what Heaven has endowed us with, and they
> can never be effaced or eliminated [from our minds]. If one becomes
> obsessed with [desires for] things and reaches the point where one
> violates Pattern and transgresses righteousness, usually this is
> simply because one fails to reflect upon these things [i.e., the
> righteousness and Pattern that lie within one]. If one truly is able
> to turn back and reflect upon these things, then what is right and
> wrong and what one should cleave to and what one should subtly reject
> will begin to stir, separate, become clear, and leave one resolute and
> without doubts. (Tiwald and Van Norden 2014, 251-52, glosses in
> original translation)
>
In emphasizing the innate human capacity for moral judgment, Lu
adopted the phrase "pure knowing" from *Mengzi*
7A15. In its original context, Mencius writes that "Among babes
in arms there are none that do not know to love their parents,"
and describes this as an instance of "pure knowing," which
humans "know without pondering" (Tiwald and Van Norden
2014, 218). Lu explains:
>
> Pure knowing lies within human beings; although some people become
> mired in dissolution, pure knowing still remains undiminished and
> enduring [within them]. ... Truly, if they can turn back and seek
> after it, then, without needing to make a concerted effort, what is
> right and wrong, what is fine and foul, will become exceedingly clear,
> and they will decide for themselves what to like and dislike, what to
> pursue and what to abandon. (Tiwald and Van Norden 2014, 252)
>
Although there were always Confucians like Lu who disagreed with
Zhu Xi, the latter's interpretation became dominant after the
government sponsored it as the official interpretation for the civil
service examinations. By the time of Wang Yangming, Zhu Xi's approach
had ossified into a stale orthodoxy that careerists mindlessly
parroted in an effort to pass the examinations.
## 3. Unity of Knowing and Acting
Some aspects of Wang's philosophy can be understood as refining or
drawing out the full implications of Lu Xiangshan's critique of Zhu
Xi. Like Lu, Wang stressed that the Pattern is fully present in the
mind of every person: "The mind *is* Pattern. Is there
any affair outside the mind? Is there any Pattern outside the
mind?" (Tiwald and Van Norden 2014,
264)[6]
As a result, the
theoretical study of ethics is unnecessary. All a moral agent needs
to do is exercise "pure knowing." This view may seem naive
at first glance. However, this is partially due to the different foci
of Chinese and Western ethics. Metaethics and normative ethical
theory are substantial parts of Western ethics, but less central to
Chinese ethics. In addition, recent Western ethics manifests an
interest in abstract ethical dilemmas that is perhaps disproportionate
to the relevance of these quandaries in practice. In contrast, the
focus of much Chinese ethics, particularly Confucianism, is applied
ethical cultivation. Consequently, Lu and Wang were primarily
interested in having a genuine positive effect on the ethical lives of
their followers. There is some plausibility to the claim that (even
in our own complex, multicultural intellectual context) most of us
know what our basic ethical obligations are. For example, as a
teacher, I have an obligation to grade assignments promptly and
fairly. As a colleague, I have an obligation to take my turn serving
as chair. As a parent, I have an obligation to make time to share my
children's interests. Due to selfish desires, it is tempting to
procrastinate in grading essays, or try to evade departmental service,
or ignore the emotional needs of one's children. However, it is not
as if it is genuinely difficult to figure out that these are our
obligations.
Wang's most distinctive and well-known doctrine is the unity of
knowing and acting (*zhi xing he yi*).
In order to grasp the significance of this doctrine, consider a
student who is being questioned by a college honor code panel after
being caught plagiarizing an essay. If the honor code panel asked the
student whether he knew that plagiarism was wrong, it would not be
surprising for the student to reply, "Yes, I knew it was wrong
to plagiarize my essay, but I wanted so much to get an easy A that I
gave in to temptation." Western philosophers would describe this
as a case of *akrasia*, or weakness of will. Zhu Xi, along
with many Western philosophers like Aristotle, would acknowledge that
the student had correctly described his psychological state. However,
Wang Yangming would deny that the student actually knew that
plagiarism was wrong: "There never have been people who know but
do not act. Those who 'know' but do not act simply do not
yet know" (Tiwald and Van Norden 2014, 267). One of Wang's
primary claims is that merely verbal assent is inadequate to
demonstrate actual knowledge: "One cannot say he knows filial
piety or brotherly respect simply because he knows how to *say*
something filial or brotherly. Knowing pain offers another good
example. One must have experienced pain oneself in order to know
pain." Wang also adduces cold and hunger as things that one must
experience in order to know. We might object that Wang's example
shows, at most, that someone must have had an experience of goodness
or evil *at some point in his life* in order to know what it
means for something to be good or evil. This falls short of the
conclusion that Wang needs, though, for just as I can know what you
mean when you say "I am hungry" even if I am not currently
motivated to eat, so might I know that "plagiarism is
wrong," even if I am not motivated to avoid plagiarism myself in
this circumstance.
A more promising line of argument for Wang is his appropriation of an
example from the *Great Learning* (Commentary 6), which
suggests that loving the good is "like loving a lovely
sight," while hating evil is "like hating a hateful
odor" (Tiwald and Van Norden 2014, 191). To recognize an odor
as disgusting (a kind of knowing) is to be repulsed by it (a
motivation), which leads to avoiding the odor or eliminating its
source (actions). The Chinese phrase rendered "loving a lovely
sight" (*hao hao se*) has connotations
of finding someone sexually
attractive.[7]
To regard someone as sexually attractive (a
kind of cognition) is to be drawn toward them (a kind of motivation
that can lead to action). To this, one might object that Wang's
examples show, at most, that knowing something is good or bad requires
that one have *some* level of appropriate motivation. Even if
we grant the relevance of Wang's examples, it is possible to have an
intrinsic motivation that does not result in action. Recognizing that
someone is sexually attractive certainly does not always result in
pursuing an assignation, for example.
Perhaps Wang's most compelling line of argument is a pragmatic one
about the aim or purpose behind discussing knowing and acting. He
allows that it may be useful and legitimate to discuss knowing
separately from acting or acting separately from knowing, but only in
order to address the failings of specific kinds of individuals. On
the one hand, "there is a type of person in the world who
foolishly acts upon impulse without engaging in the slightest thought
or reflection. Because they always act blindly and recklessly, it is
necessary to talk to them about knowing...," without
emphasizing acting. On the other hand, "[t]here is also a type
of person who is vague and irresolute; they engage in speculation
while suspended in a vacuum and are unwilling to apply themselves to
any concrete actions" (268). These latter people benefit from
advice that emphasizes action, without necessarily discussing
knowledge. However, those in Wang's era who distinguish knowledge and
action (and here he has in mind those who follow the orthodox
philosophy of Zhu Xi) "separate knowing and acting into two
distinct tasks to perform and think that one must first know and only
then can one act." As a result, they become nothing more than
pedantic bookworms, who study ethics without ever living up to its
ideals or trying to achieve positive change in the world around them.
Wang concludes, "My current teaching regarding the unity of
knowing and acting is a medicine directed precisely at this
disease."
## 4. Interpretation of the *Great Learning*
In the standard Confucian curriculum of Wang's era, the *Great
Learning* was the first of the Four Books that students were
assigned, and Zhu Xi's commentary on it often made a lasting
impression on them. In the opening of the *Great Learning*,
Confucius describes the steps in self-cultivation:
>
> The ancients who desired to enlighten the enlightened Virtue of the
> world would first put their states in order. Those who desired to put
> their states in order would first regulate their families. Those who
> desired to regulate their families would first cultivate their
> selves. Those who desired to cultivate their selves would first
> correct their minds. Those who desired to correct their minds would
> first make their thoughts have Sincerity. Those who desired to make
> their thoughts have Sincerity would first extend their knowledge.
> Extending knowledge lies in *ge wu*. (Translation
> slightly modified from Tiwald and Van Norden 2014, 188-189)
>
*Ge wu* is left unexplained in the *Great Learning.*
However, Zhu Xi, following the interpretation of Cheng Yi, claims
that
>
> ...what is meant by "extending knowledge lies in *ge
> wu*" is that desiring to extend my knowledge lies in
> encountering things and exhaustively investigating their Pattern. In
> general, the human mind is sentient and never fails to have knowledge,
> while the things of the world never fail to have the Pattern. It is
> only because the Pattern is not yet exhaustively investigated that
> knowledge is not fully fathomed. Consequently, at the beginning of
> education in the Great Learning, the learner must be made to encounter
> the things of the world, and never fail to follow the Pattern that one
> already knows and further exhaust it, seeking to arrive at the
> farthest points. When one has exerted effort for a long time, one
> day, like something suddenly cracking open, one will know in a manner
> that binds it all together. (Translation slightly modified from Tiwald
> and Van Norden 2014, 191)
>
Following Zhu Xi's interpretations, most translators today render
*ge wu* as "investigating things." Wang and his
friend Qian were trying to "investigate things" in this
manner when they stared attentively at the bamboos, struggling to
grasp their underlying Pattern. It is questionable, though, whether
this is what Zhu Xi had in mind. Zhu certainly thought the Pattern
was present in everything, and could potentially be investigated in
even the most mundane of objects. However, he emphasized that the
best method to learn about the Pattern was through studying the
classic texts of Confucianism, particularly the Four Books.
As important as learning was for Zhu Xi, he stressed that it was
possible to know what is right and wrong yet not act on it:
"Knowledge and action always need each other. It's like how
eyes cannot walk without feet, but feet cannot see without eyes. If
we discuss them in terms of their sequence, knowledge comes first.
But if we discuss them in terms of importance, action is what is
important" (Tiwald and Van Norden 2014, 180-81). What
connects knowledge and action, on Zhu Xi's account, is Sincerity.
Sincerity (*cheng*) is a subtle and multifaceted notion
in Neo-Confucian thought. However, the *Great Learning*
focuses on one key aspect of it: "What is meant by 'making
thoughts have Sincerity' is to let there be no self-deception.
It is like hating a hateful odor, or loving a lovely sight. This is
called not being conflicted" (Tiwald and Van Norden 2014, 191).
In other words, humans engage in wrongdoing through a sort of
self-deception in which they allow themselves to ignore the
promptings of their moral sense, and become motivated solely by their
physical desires for sex, food, wealth, etc. Once one knows what is
right and wrong, one must then make an effort to keep that knowledge
present to one's consciousness, so that it becomes motivationally
efficacious. As Zhu Xi explains, "...those who desire to
cultivate themselves, when they know to do good in order to avoid the
bad, must then genuinely make an effort and forbid
self-deception, making their hatred of the [ethically] hateful
be like their hating a hateful odor, and their loving what is good
like their loving a lovely sight" (Tiwald and Van Norden 2014,
192).
Wang challenged almost every aspect of Zhu Xi's interpretation of
the *Great Learning.* He regarded Zhu Xi's account as not just
theoretically mistaken but dangerously misleading for those seeking to
improve themselves ethically. However, Wang recognized that students
who had memorized Zhu Xi's interpretation to prepare for the civil
service examinations would have difficulty understanding the text any
other way. Consequently, when he accepted a new disciple, Wang often
began by explaining his alternative interpretation of the *Great
Learning*, and invited students to ask questions. Wang's approach
is preserved in a brief but densely argued work, "Questions on
the *Great Learning*." Let's begin at the apparent
foundation of the *Great Learning*'s program: *ge wu.*
Whereas Zhu Xi argues that *ge wu* is literally "reaching
things," meaning to intellectually grasp the Pattern in things
and situations, Wang argues that *ge wu* means
"rectifying things," including both one's own thoughts and
the objects of those thoughts. For Zhu Xi, *ge wu* is
primarily about gaining knowledge, while for Wang it is about
motivation and
action.[8]
The fact that the *Great Learning* explicates *ge wu*
in terms of "extending knowledge" might seem to support
Zhu Xi's interpretation. However, Wang offers a plausible alternative
explanation: "To fully extend one's knowledge is not like the
so-called 'filling out' of what one knows that later
scholars [like Zhu Xi] talk about. It is simply to extend fully the
'pure knowing' of my own mind. Pure knowing is what
[Mencius] was talking about when he said that all humans have
'the mind of approval and disapproval.' The mind of
approval and disapproval 'knows without pondering' and is
'capable without learning' " (Tiwald and Van
Norden 2014, 248, glosses
mine).[9]
Wang is suggesting that "extending
knowledge" refers simply to exercising our innate faculty of
moral awareness, which Mencius refers to as "pure
knowing." As David S. Nivison explains, "We might say that
Wang's 'extending' of knowledge is more like extending
one's arm than, say, extending one's vocabulary" (Nivison 1996a,
225).
According to Zhu Xi, the opening of the *Great Learning*
lists a series of steps that are, at least to some extent, temporally
distinct. Wang demurs:
>
> While one can say that there is an ordering of first and last in this
> sequence of spiritual training, the training itself is a unified whole
> that cannot be divided into any ordering of first and last. While
> this sequence of spiritual training cannot be divided into any
> ordering of first and last, only when every aspect of its practice is
> highly refined can one be sure that it will not be deficient in the
> slightest degree. (Tiwald and Van Norden 2014, 250)
>
Wang's argument here is similar to the one he made about the verbal
distinction between knowing and acting (Section 3, above). He claimed
that the ancient sages recognized that knowing and acting were
ultimately one thing, but sometimes discussed them separately for
pedagogic purposes, to help those who underemphasized one aspect of
this unity. Similarly, Wang suggests, the *Great Learning*
uses multiple terms to describe various aspects of the unified
exercise of moral agency, but does not mean to suggest that these are
actually distinct temporal stages:
>
> ...we can say that "self," "mind,"
> "thoughts," "knowledge," and
> "things" describe the sequence of spiritual training.
> While each has its own place, in reality they are but a single thing.
> We can say that "*ge wu*," "extending,"
> "making Sincere," "correcting," and
> "cultivating" describe the spiritual training used in the
> course of this sequence. While each has its own name, in reality they
> are but a single affair. What do we mean by "self"? It
> is the way we refer to the physical operations of the mind. What do
> we mean by "mind"? It is the way we refer to the luminous
> and intelligent master of the person. (Translation slightly modified
> from Tiwald and Van Norden 2014, 247)
>
Wang even interprets the key metaphor of the *Great
Learning* in a very different way from Zhu Xi. For Zhu Xi, hating
evil "like hating a hateful odor," and loving good
"like loving a lovely sight" are goals that we must aspire
to, but can only achieve after an arduous process of ethical
cultivation. For Wang Yangming, these phrases describe what our
attitudes toward good and evil can and should be at the very inception
of ethical cultivation:
>
> ...the *Great Learning* gives us examples of true knowing
> and acting, saying it is "like loving a lovely sight" or
> "hating a hateful odor." Seeing a lovely sight is a case
> of knowing, while loving a lovely sight is a case of acting. As soon
> as one sees that lovely sight, one naturally loves it. It is not as
> if you first see it and only then, intentionally, you decide to love
> it. Smelling a hateful odor is a case of knowing, while hating a
> hateful odor is a case of acting. As soon as one smells that hateful
> odor, one naturally hates it. It is not as if you first smell it and
> only then, intentionally, you decide to hate it. Consider the case of
> a person with a stuffed-up nose. Even if he sees a malodorous
> object right in front of him, the smell does not reach him, and so he
> does not hate it. This is simply not to know the hateful
> odor. (Tiwald and Van Norden 2014, 267)
>
In summary, Zhu Xi and Wang agree that the *Great Learning*
is an authoritative statement on ethical cultivation, expressing the
wisdom of the ancient sages. However, for Zhu Xi, it is analogous to
a recipe, with distinct steps that must be performed in order. For
Wang, the *Great Learning* is analogous to a description of a
painting, in which shading, coloring, composition, perspective and
other factors are aspects of a unified effect.
## 5. Metaphysics
Wang was not primarily interested in theoretical issues. However,
some of his comments suggest a subtle metaphysical view that supports
his conception of ethics. This metaphysics is phrased in terms of the
Pattern/*qi* framework (see Section 2, above), and also makes
use of the Substance/Function distinction. Substance
(*ti*), literally body, is an entity in itself, while
Function is its characteristic or appropriate activity or
manifestation: a lamp is Substance, its light is function; an eye is
Substance, seeing is its Function; water is Substance, waves are its
Function. The Substance/Function distinction goes back to the Daoists
of the Han dynasty (202 BCE-220 CE) and became central among
Chinese Buddhists before being picked up by Neo-Confucians.
Part of the attraction of this vocabulary is that it gives
philosophers drawn to a sort of monism the ability to distinguish
between two aspects of what they regard as ultimately a unity.
Wang argues that the human mind is identical with the Pattern of
the universe, and as such it forms "one body"
(*yi ti*, one Substance) with "Heaven, Earth,
and the myriad creatures" of the world. The difference between
the virtuous and the vicious is that the former recognize that their
minds form one body with everything else, while the latter,
"because of the space between their own physical form and those
of others, regard themselves as separate" (Tiwald and Van Norden
2014, 241). As evidence for the claim that the minds of all humans
form one body with the rest of the universe, Wang appeals to a thought
experiment first formulated by Mencius: "when they see a child
[about to] fall into a well, [humans] cannot avoid having a mind of
alarm and compassion for the child" (Tiwald and Van Norden 2014,
241-242; *Mengzi* 2A6). Wang then anticipates a series
of objections, and offers further thought experiments to motivate the
conclusion that only some underlying metaphysical identity between
humans and other things can account for the broad range of our
reactions to them:
>
> Someone might object that this response is because the child belongs
> to the same species. But when they hear the anguished cries or see
> the frightened appearance of birds or beasts, they cannot avoid a
> sense of being unable to bear it. ... Someone might object that
> this response is because birds and beasts are sentient creatures. But
> when they see grass or trees uprooted and torn apart, they cannot
> avoid feeling a sense of sympathy and distress. ... Someone might
> object that this response is because grass and trees have life and
> vitality. But when they see tiles and stones broken and destroyed,
> they cannot avoid feeling a sense of concern and regret. ... This
> shows that the benevolence that forms one body [with Heaven, Earth,
> and the myriad creatures] is something that even the minds of petty
> people possess. (Tiwald and Van Norden 2014, 242)
>
Wang's first three thought experiments seem fairly compelling at
first glance. Almost all humans would, at least as a first instinct,
have "alarm and compassion" if suddenly confronted with
the sight of a child about to fall into a well. In addition, humans
often do show pity for the suffering of non-human animals.
Finally, the fact that humans maintain public parks and personal
gardens shows some kind of concern for plants.
It is not a decisive objection to Wang's view that humans often
fail to manifest benevolence to other humans, non-human animals,
or plants. Wang is arguing that all humans manifest these responses
sometimes (and that this is best explained by his favored
metaphysics). Like all Neo-Confucians, Wang readily
acknowledges that selfish desires frequently block the manifestation
of our shared nature. However, there are three lines of objection to
Wang's view that are harder to dismiss. (1) There is considerable
empirical evidence that some humans never manifest *any*
compassion for the suffering of others. In the technical literature
of psychology, these people are a subset of those identified as having
"Antisocial Personality Disorder" as defined in
DSM-V.[10]
(2) Wang asserts that when humans "see tiles and stones broken
and destroyed, they cannot avoid feeling a sense of concern and
regret." This claim is important for Wang's argument, because he
takes this reaction to be evidence for the conclusion that our minds
are ultimately "one Substance" with *everything* in
the universe, not merely with members of our species, or other
sentient creatures, or other living things. We can perhaps motivate
Wang's intuition by considering how we might react if we saw that
someone had spray painted graffiti on Half Dome in Yosemite National
Park. The defacement of this scenic beauty would probably provoke
sadness in those of us with an eye for natural beauty. However, it is
certainly not obvious that everyone manifests even sporadic concern
for "tiles and stones," which is what he needs for his
conclusion. (3) Even if we grant Wang his intuitions, it is not clear
that the particular metaphysics he appeals to is the best explanation
of those intuitions. As Darwin himself suggested, our compassion for
other humans can be explained in evolutionary terms. In addition, the
"biophilia hypothesis" (Wilson 1984) provides an
evolutionary explanation for the human fondness for other animals and
plants. There is no obvious evolutionary explanation for why
humans seem engaged by non-living natural beauty, like mountain
peaks, but, as we have seen, it is questionable how common this trait
is.
## 6. Influence
Wang is regarded, along with Lu Xiangshan, as one of the founders
of the Lu-Wang School of Neo-Confucianism, or the School
of Mind. This is one of the two major wings of
Neo-Confucianism, along with the Cheng-Zhu School (named
after Cheng Yi and Zhu Xi), or School of Pattern. He has frequently
been an inspiration for critics of the orthodox Cheng-Zhu
School, not just in China but also in Japan. In Japan, his philosophy
is referred to as *Oyomeigaku*, and its major
adherents included Nakae Toju (1608-1648) and Kumazawa
Banzan (1619-1691). Wang's thought was also an inspiration for
some of the leaders of the Meiji Restoration (1868), which began
Japan's rapid
modernization.[11]
In China, Confucianism underwent a significant shift during the
Qing dynasty (1644-1911) with the development of the Evidential
Research movement. Evidential Research emphasized carefully
documented and tightly argued work on concrete issues of philology,
history, and even mathematics and civil engineering. Such scholars
generally looked with disdain on "the Song-Ming
Confucians" (including both Zhu Xi and Wang Yangming), whom they
accused of producing "empty words" that could not be
substantiated.[12]
One of the few Evidential Research
scholars with a serious interest in ethics was Dai Zhen
(1724-1777). However, he too was critical of both the
Cheng-Zhu and the Lu-Wang schools. The three major prongs
of Dai's critique of Neo-Confucians like Wang were that they (1)
encouraged people to treat their subjective opinions as the
deliverances of some infallible moral sense, (2) projected
Buddhist-inspired concepts back onto the ancient Confucian
classics, and (3) ignored the ethical value of ordinary physical
desires.
One of the major trends in contemporary Chinese philosophy is
"New Confucianism." New Confucianism is a movement to
adapt Confucianism to modern thought, showing how it is consistent
with democracy and modern science. New Confucianism is distinct from
what we in the West call Neo-Confucianism, but it adopts many
Neo-Confucian concepts, in particular the view that humans share
a trans-personal nature which is constituted by the universal
Pattern. Many New Confucians agree with Mou Zongsan (1909-1995)
that Wang had a deeper and more orthodox understanding of Confucianism
than did Zhu
Xi.[13]
Wang's philosophy is of considerable intrinsic interest, because of
the ingenuity of his arguments, the systematicity of his views, and
the precision of his textual exegesis. Beyond that, Wang's work has
the potential to inform contemporary ethics. Although his particular
metaphysics may not be appealing, many of his ideas can be
naturalized. It may be hard to believe that everything is unified by
a shared, underlying Pattern, but it does seem plausible that we are
deeply dependent upon one another and upon our natural environment for
our survival and our identities. I am a husband, a father, a teacher,
and a researcher, but only because I have a wife, children, students,
and colleagues. In some sense, we do form "one body" with
others, and Wang provides provocative ideas about how we should
respond to this insight. In addition, Wang's fundamental criticism of
Zhu Xi's approach, that it produces pedants who only study and talk
about ethics, rather than people who strive to actually *be*
ethical, has considerable contemporary relevance, particularly given
the empirical evidence that our current practices of ethical education
have little positive effect on ethical behavior (Schwitzgebel and Rust
2014; and cf. Schwitzgebel 2013 (Other Internet Resources)).
## 1. Traditionalists and Revisionists
Contemporary just war theory is dominated by two camps: traditionalist
and
revisionist.[3]
The traditionalists might as readily be called legalists. Their views
on the morality of war are substantially led by international law,
especially the law of armed conflict. They aim to provide those laws
with morally defensible foundations. States (and only states) are
permitted to go to war only for national defence, defence of other
states, or to intervene to avert "crimes that shock the moral
conscience of mankind" (Walzer 2006: 107). Civilians may not be
targeted in war, but all combatants, whatever they are fighting for,
are morally permitted to target one another, even when doing so
foreseeably harms some civilians (so long as it does not do so
excessively).[4]
Revisionists question the moral standing of states and the
permissibility of national defence, argue for expanded permissions for
humanitarian intervention, problematise civilian immunity, and contend
that combatants fighting for wrongful aims cannot do anything right,
besides lay down their weapons.
Most revisionists are *moral* revisionists only: they deny that
the contemporary law of armed conflict is intrinsically morally
justified, but believe, mostly for pragmatic reasons, that it need not
be substantially changed. Some, however, are both morally and legally
revisionist. And even moral revisionists' disagreement with the
traditionalists is hardly merely academic: most believe that, faced with a
clash between what is morally and what is legally permitted or
prohibited, individuals should follow their conscience rather than the
law.[5]
The traditionalist view received its most prominent exposition in the
same year that it was decisively codified in international law, in the
First Additional Protocol to the Geneva Conventions. Michael
Walzer's *Just and Unjust Wars*, first published in 1977,
has been extraordinarily influential among philosophers, political
scientists, international lawyers, and military practitioners. Among
its key contributions were its defence of central traditionalist
positions on national defence, humanitarian intervention,
discrimination, and combatant equality.
Early revisionists challenged Walzer's views on national defence
(Luban 1980a) and humanitarian intervention (Luban 1980b). Revisionist
criticism of combatant equality and discrimination followed (Holmes
1989; McMahan 1994; Norman 1995). Since then there has been an
explosion of revisionist rebuttals of Walzer (for example Rodin 2002;
McMahan 2004b; McPherson 2004; Arneson 2006; Fabre 2009; McMahan 2009;
Fabre 2012).
Concurrently, many philosophers welcomed Walzer's conclusions,
but rejected his arguments. They have accordingly sought firmer
foundations for broadly traditionalist positions on national defence
(Benbaji 2014; Moore 2014), humanitarian intervention (Coady 2002),
discrimination (Rodin 2008b; Dill and Shue 2012; Lazar 2015c), and
especially combatant equality (Zohar 1993; Kutz 2005; Benbaji 2008;
Shue 2008; Steinhoff 2008; Emerton and Handfield 2009; Benbaji
2011).
We will delve deeper into these debates in what follows. First,
though, some methodological groundwork. Traditionalists and
revisionists alike often rely on methodological or second-order
premises, to the extent that one might think that the first-order
questions are really just proxy battles through which they work out
their deeper disagreements (Lazar and Valentini forthcoming).
## 2. How Should We Think about the Morality of War?
### 2.1 Historical vs Contemporary Just War Theory
For the sake of concision this entry discusses only contemporary analytical
philosophers working on war. Readers are directed to the excellent work
of philosophers and intellectual historians such as Greg Reichberg,
Pablo Kalmanovitz, Daniel Schwartz, and Rory Cox to gain further
insights about historical just war theory (see, in particular, Cox
2016; Kalmanovitz 2016; Reichberg 2016; Schwartz 2016).
### 2.2 Institutions and Actions
Within contemporary analytical philosophy, there are two different
ways in which moral and political philosophers think about war (Lazar
and Valentini forthcoming). On the first, *institutionalist*,
approach, philosophers' primary goal is to establish what the
institutions regulating war should be. In particular, we should
prescribe morally justified laws of war. We then tell individuals and
groups that they ought to follow those laws. On the second approach,
we should focus first on the moral reasons that apply directly to
individual and group actions, without the mediating factor of
institutions. We tell individuals and groups to act as their moral
reasons dictate. Since this approach focuses not on the institutions
that govern our interactions, but on those interactions themselves, we
will call it the "interactional"
approach.[6]
In general, the institutionalist approach is favoured by indirect
consequentialists and contractualists. Indirect consequentialists
believe these institutions are justified just in case they will in
fact have better long-run results than any feasible alternative
institutions (see Mavrodes 1975; Dill and Shue 2012; Shue 2013;
Waldron 2016). Contractualists believe these institutions ground or
reflect either an actual or a hypothetical contract among states
and/or their citizens, which specifies the terms of their interaction
in war (see Benbaji 2008, 2011, 2014; Statman 2014).
Non-contractualist deontologists and direct- or act-consequentialists
tend to prefer the interactional approach. Their central question is:
what moral reasons bear directly on the permissibility of killing in
war? This focus on killing might seem myopic--war involves much
more violence and destruction than the killing alone. However,
this is typically just a heuristic device: since we generally think of
killing as the most presumptively wrongful kind of harm, whatever
arguments one identifies that justify killing are likely also to
justify lesser wrongs. And if the killing that war involves cannot be
justified, then we should endorse pacifism.
Any normative theory of war should pay attention both to what the laws
of war should be, and to what we morally ought to do. These are two
distinct but equally important questions. And together they raise a
third: what ought we to do all things considered, for
example when law and morality conflict? Too much recent just war
theory has focused on arguing that philosophical attention should be
reserved for one of the first two of these questions (Buchanan 2006;
Shue 2008, 2010; Rodin 2011b). Not enough has concentrated on the
third (though see McMahan 2008; Lazar 2012a).
Although this entry touches on the first question, it focuses on the
second. Addressing the first requires detailed empirical research and
pragmatic political speculation, both of which are beyond my remit
here. Addressing the third takes us too deep into the minutiae of
contemporary just war theory for an encyclopaedia entry.
What's more, even institutionalists need some answer to the
second question--and so some account of the interactional
morality of war. Rule-consequentialists need an account of the good
(bad) that they are hoping that the ideal laws of war will maximise
(minimise) in the long run. This means, for example, deciding whether
to aim to minimise all harm, or only to minimise wrongful harm. The
latter course is much more plausible--we wouldn't want laws
of war that, for example, licensed genocide just in case doing so
leads to fewer deaths overall. But to follow this course, we need to
know which harms are (extra-institutionally) wrongful. Similarly,
contractualists typically acknowledge various constraints on the kinds
of rules that could form the basis of a legitimate contract, which,
again, we cannot work out without thinking about the
extra-institutional morality of war (Benbaji 2011).
### 2.3 Overarching Disputes in Contemporary Analytical Just War Theory
Even within interactional just war theory, several second-order
disagreements underlie first-order disputes. First: when thinking
about the ethics of war, what kinds of cases should we use to test our
intuitions and our principles? We can start by thinking about actual
wars and realistic wartime scenarios, paying attention to
international affairs and military history. Or, more clinically, we
can construct hypothetical cases to isolate variables and test their
impact on our intuitions.
Some early revisionists relied heavily on highly artificial cases
(e.g., McMahan 1994; Rodin 2002). They were criticized for this by
traditionalists, who generally use more empirically informed examples
(Walzer 2006). But one's standpoint on the substantive questions
at issue between traditionalists and revisionists need not be
predetermined by one's methodology. Revisionists can pay close
attention to actual conflicts (e.g., Fabre 2012). Traditionalists can
use artificial hypotheticals (e.g., Emerton and Handfield 2009; Lazar
2013).
Abstraction forestalls unhelpful disputes over historical details. It
also reduces bias--we are inclined to view actual conflicts
through the lens of our own political allegiances. But it also has
costs. We should be proportionately less confident of our intuitions
the more removed the test case is from our lived experience.
Philosophers' scenarios involving mind-control, armed
pedestrians, trolleys, meteorites, and incredibly complicated causal
sequences are pure exercises in imagination. How can we trust our
judgements about such cases more than we trust our views on actual,
realistic scenarios? What's more, abandoning the harrowing
experience of war for sanitized hypothetical cases might be not merely
epistemically unsound, but also disrespectful of the victims of war.
Lastly, cleaned-up examples often omit morally relevant
details--for instance, assuming that everyone has all the
information relevant to their choice, rather than acknowledging the
"fog of war", and making no allowances for fear or
trauma.
Artificial hypotheticals have their place, but any conclusions they
support must be tested against the messy reality of war. What's
more, our intuitive judgements should be the starting-point for
investigation, rather than its end.
The second divide is related to the first. *Reductivists* think
that killing in war must be justified by the same properties that
justify killing outside of war. *Non-reductivists*, sometimes
called *exceptionalists*, think that some properties justify
killing in war that do not justify killing outside of
war.[7]
Most exceptionalists think that specific features of killing in war
make it morally different from killing in ordinary life--for
example, the scale of the conflict, widespread and egregious
non-compliance with fundamental moral norms, the political interests
at stake, the acute uncertainty, the existence of the law of armed
conflict, or the fact that the parties to the conflict are organized
groups. A paradigm reductivist, by contrast, might argue
that justified wars are mere aggregates of justified acts of
individual self- and other-defence (see Rodin 2002; McMahan 2004a).
Reductivists are much more likely to use far-fetched hypothetical
cases, since they think there is nothing special about warfare. The
opposite is true for exceptionalists. Walzer's first critics
relied on reductivist premises to undermine the principles of national
defence (Luban 1980a; Rodin 2002), discrimination (Holmes 1989;
McMahan 1994), and combatant equality (Holmes 1989; McMahan 1994).
Many traditionalists replied by rejecting reductivism, arguing that
there is something special about war that justifies a divergence from
the kinds of judgements that are appropriate to other kinds of
conflict (Zohar 1993; Kutz 2005; Benbaji 2008; Dill and Shue 2012).
Again, some philosophers buck these overarching trends (for
reductivist traditionalist arguments, see e.g., Emerton and Handfield
2009; Lazar 2015c; Haque 2017; for non-reductivist revisionist
arguments, see e.g., Ryan 2016).
The debate between reductivism and exceptionalism is
overblown--the concept of "war" is vague, and while
typical wars involve properties that are not instantiated in typical
conflicts outside of war, we can always come up with far-fetched
hypotheticals that don't involve those properties, which we
wouldn't call "wars". But this masks a deeper
methodological disagreement: when thinking about the morality of war,
should we start by thinking about war, or by thinking about the
permissible use of force outside of war? Should we model justified
killing in war on justified killing outside of war? Or, in focusing on
the justification of killing in war, might we then discover that there
are some non-canonical cases of permissible killing outside of war? My
own view is that thinking about justified killing outside of war has
its place, but must be complemented by thinking about war
directly.
Next, we can distinguish between individualists and collectivists; and
we can subdivide them further into *evaluative* and
*descriptive* categories. Evaluative individualists think that
a collective's moral significance is wholly reducible to its
contribution to the well-being of the individuals who compose it.
Evaluative collectivists think that collectives can matter
independently of how they contribute to individual well-being.
Descriptive individualists think that any act that might appear to be
collective is reducible to component acts by individuals. Descriptive
collectivists deny this, thinking that some acts are irreducibly
collective.[8]
Again, the dialectic of contemporary just war theory involves
revisionists first arguing that we cannot vindicate traditionalist
positions on descriptively and evaluatively individualist grounds,
with some traditionalists then responding by rejecting descriptive
(Kutz 2005; Walzer 2006; Lazar 2012b) and evaluative individualism
(Zohar 1993). And again there are outliers--individualist
traditionalists (e.g., Emerton and Handfield 2009) and collectivist
revisionists (e.g., Bazargan 2013).
Unlike the reductivist/exceptionalist divide, the
individualist/collectivist split cannot be resolved by thinking about
the morality of war on its own. War is a useful test case for theories
of collective action and the value of collectives, but no more than
that. Intuitions about war are no substitute for a theory of
collective action. Perhaps some collectives have value beyond their
contribution to the well-being of their members. For example, they
might instantiate justice, or solidarity, which can be impersonally
valuable (Temkin 1993). It is doubtful, however, that groups have
*interests* independent from the well-being of their members.
On the descriptive side, even if we can reduce collective actions to
the actions of individual members, this probably involves such
complicated contortions that we should seriously question whether it
is worth doing (Lazar 2012b).
### 2.4 Dividing up the Subject Matter
Traditionally, just war theorists divide their enquiry into reflection
on the resort to war--*jus ad bellum*--and conduct in
war--*jus in bello*. More recently, they have added an
account of permissible action post-war, or *jus post bellum*.
Others suggest an independent focus on war exit, which they have
variously called *jus ex bello* and *jus terminatio*
(Moellendorf 2008; Rodin 2008a). These Latin labels, though
unfortunately obscurantist, serve as a useful shorthand. When we refer
to *ad bellum* justice, we mean to evaluate the permissibility
of the war as a whole. This is particularly salient when deciding to
launch the war. But it is also crucial for the decision to continue
fighting. *Jus ex bello*, then, fits within *jus ad
bellum*. The *jus in bello* denotes the permissibility of
particular actions that compose the war, short of the war as a
whole.
### 2.5 The Decisive Role of Necessity and Proportionality
Traditional just war theory construes *jus ad bellum* and
*jus in bello* as sets of principles, satisfying which is
necessary and sufficient for a war's being permissible. *Jus
ad bellum* typically comprises the following six principles:
1. Just Cause: the war is an attempt to avert the right kind of
injury.
2. Legitimate Authority: the war is fought by an entity that has the
authority to fight such wars.
3. Right Intention: that entity intends to achieve the just cause,
rather than using it as an excuse to achieve some wrongful end.
4. Reasonable Prospects of Success: the war is sufficiently likely
to achieve its aims.
5. Proportionality: the morally weighted goods achieved by the war
outweigh the morally weighted bads that it will cause.
6. Last Resort (Necessity): there is no other less harmful way to
achieve the just cause.
Typically the *jus in bello* list comprises:
1. Discrimination: belligerents must always distinguish between
military objectives and civilians, and intentionally attack only
military objectives.
2. Proportionality: foreseen but unintended harms must be
proportionate to the military advantage achieved.
3. Necessity: the least harmful means feasible must be used.
These all matter to the ethics of war, and will be addressed below.
However, it is unhelpful to view them as a checklist of necessary and
sufficient conditions. Properly understood, only proportionality and
necessity (in the guise of last resort in the *jus ad bellum*)
are necessary conditions for a war, or an act in a war, to be
permissible: no matter how badly a war fails the other criteria, it
might still be the least awful of one's alternatives, and so satisfy
the necessity and proportionality constraints, thereby remaining
permissible.
To get an intuitive grasp on necessity and proportionality, note that
if someone threatens my life, then killing her would be proportionate;
but if I could stop her by knocking her out, then killing her would be
unnecessary, and so impermissible. The necessity and proportionality
constraints have the same root: with few exceptions (perhaps when it
is deserved), harm is intrinsically bad. Harms (and indeed all bads)
that we cause must therefore be justified by some positive reason that
counts in their favour--such as good achieved or evil averted
(Lazar 2012a). Both the necessity and proportionality constraints
involve comparing the bads caused by an action with the goods that it
achieves. They differ only in the kinds of options they compare.
The use of force is proportionate when the harm done is
counterbalanced by the good achieved in averting a threat. To
determine this, we typically compare the candidate course of action
with what would happen if we allowed the threat to eventuate.
Of course, in most cases we will have more than one means of averting
or mitigating the threat. And a harmful option can be permissible only
if *all* the harm that it involves is justified by a
corresponding good achieved. If some alternative would as successfully
avert the threat, but cause less harm, then the more harmful option is
impermissible, because it involves unnecessary harm.
Where an option *O* aims to avert a threat *T*, we determine
*O*'s necessity by comparing it with all the other options
that will either mitigate or avert *T*. We determine its
proportionality by comparing it with the harm suffered if *T*
should come about. The only difference between the proportionality and
necessity constraints is that the former involves comparing
one's action with a very specific counterfactual
scenario--in which we don't act to avert the
threat--while the latter involves comparing it with all of one's
available options that have some prospect of averting or mitigating
the threat. In my view, we should simply expand this so that the
necessity constraint compares all of one's available options bar none.
Then proportionality would essentially involve comparing each option
with the alternative of doing nothing, while necessity would involve
comparing all options (including doing nothing) in terms of their
respective balances of goods and bads. On this approach, necessity
would subsume proportionality. But this is a technical point with
little substantive payoff.
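The comparative structure just described can be pictured in a small sketch. This is purely illustrative: the option names, the numerical "harm" and "good" scores, and the comparison rules are my own stand-ins, not part of any just war theorist's formal apparatus.

```python
# Illustrative sketch: necessity and proportionality as comparisons
# among options, where "doing nothing" is itself an option.
# The harm/good numbers are invented stand-ins for morally weighted
# bads and goods, not real moral quantities.

def net_value(option):
    """Morally weighted goods minus morally weighted bads."""
    return option["good"] - option["harm"]

def proportionate(option, do_nothing):
    # Proportionality compares the option with the specific
    # counterfactual of not acting to avert the threat.
    return net_value(option) > net_value(do_nothing)

def necessary(option, all_options):
    # An option is unnecessary if some alternative does at least as
    # well overall while causing less harm.
    return not any(
        o is not option
        and net_value(o) >= net_value(option)
        and o["harm"] < option["harm"]
        for o in all_options
    )

do_nothing = {"name": "do nothing", "harm": 100, "good": 0}
lethal = {"name": "kill attacker", "harm": 80, "good": 100}
nonlethal = {"name": "knock out attacker", "harm": 10, "good": 100}
options = [do_nothing, lethal, nonlethal]

# Killing is proportionate (better than letting the threat eventuate)...
assert proportionate(lethal, do_nothing)
# ...but unnecessary, since a less harmful option does as well.
assert not necessary(lethal, options)
assert necessary(nonlethal, options)
```

Note how, once "doing nothing" is included among the compared options, the necessity test already does the work of the proportionality test: this is the sense in which necessity would subsume proportionality on the expanded proposal.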
More substantively, necessity and proportionality judgements concern
consequences, and yet they are typically made *ex ante*, before
we know what the results of our actions will be. They must therefore
be modified to take this uncertainty into account. The most obvious
solution is simply to refer to *expected* threats and
*expected* harms, where the expected harm of an option *O*
is the probability-weighted average of the harms that might result if
I take *O*, and the expected threat is the probability-weighted
average of the consequences of doing nothing to prevent the
threat--allowing for the possibility that the threat might not
eventuate at all (Lazar 2012b). We would also have to factor in the
options' probability of averting the threat. This simple move
obscures a number of important and undertheorised issues that we
cannot discuss in detail here. For now, we must simply note that
proportionality and necessity must be appropriately indexed to the
agent's uncertainty.
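The expectational proposal can be stated schematically. The notation here is mine, introduced only for illustration: suppose taking option \(O\) could issue in harms \(h_1,\ldots,h_n\) with probabilities \(p_1,\ldots,p_n\), and doing nothing could issue in outcomes \(t_0,\ldots,t_m\) with probabilities \(q_0,\ldots,q_m\).

```latex
% Expected harm of option O (notation introduced for illustration):
\[
  \mathbb{E}[H(O)] = \sum_{i=1}^{n} p_i \, h_i
\]
% Expected threat: the probability-weighted average of the outcomes of
% doing nothing, where outcome t_0 = 0 (with probability q_0) covers
% the case in which the threat fails to eventuate at all:
\[
  \mathbb{E}[T] = \sum_{j=0}^{m} q_j \, t_j, \qquad t_0 = 0
\]
```

On this proposal, proportionality would compare \(\mathbb{E}[H(O)]\) with \(\mathbb{E}[T]\), further discounted by each option's probability of actually averting the threat.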
Necessity and proportionality judgements involve weighing harms
inflicted and threats averted, indeed all relevant goods and bads. The
simplest way to proceed would be to aggregate the harms to individual
people on each side, and call the act proportionate just in case it
averts more harm than it causes, and necessary just in case no
alternative involves less harm. But few deontologists, and indeed few
non-philosophers, think in this naively aggregative way. Instead we
should weight harms (etc.) according to factors such as whether the
agent is directly responsible for them, and whether they are intended
or merely
foreseen.[9]
Many also think that we can, perhaps even must, give greater
importance in our deliberations to our loved ones (for example) than
to those tied to us only by the common bonds of humanity (Hurka 2007;
Lazar 2013; for criticism, see Lefkowitz 2009). Similarly, we might justifiably
prioritise defending our own state's sovereignty and territorial
integrity, even when doing so would not be impartially
best.[10]
Only when all these and other factors (many of which are discussed
below) are taken into consideration can we form defensible conclusions
about which options are necessary and proportionate.
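One crude way to picture such morally weighted aggregation, as opposed to naive aggregation, is the following sketch. The multipliers and factor names are invented for illustration; nothing in the literature fixes these numbers.

```python
# Crude illustration of morally weighted aggregation: harms count for
# more or less depending on factors such as whether they are intended
# or merely foreseen, and whether the victim is liable to be harmed.
# All multipliers are invented stand-ins.

def weighted_harm(magnitude, intended, victim_liable):
    weight = 1.0
    if intended:
        weight *= 2.0   # intended harms weigh more than merely foreseen ones
    if victim_liable:
        weight *= 0.25  # harms to those liable to be harmed weigh less
    return magnitude * weight

# A foreseen-but-unintended harm to a non-liable civilian...
civilian = weighted_harm(10, intended=False, victim_liable=False)
# ...can outweigh an intended harm of the same size to a liable attacker.
attacker = weighted_harm(10, intended=True, victim_liable=True)
assert civilian > attacker
```

The point of the sketch is only that two harms of equal magnitude can contribute very unequally to proportionality and necessity calculations once these weighting factors are taken into account.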
The other elements of the ethics of war contribute to the evaluation
of proportionality and necessity, in one (or more) of three ways:
identifying positive reasons in favour of fighting; delineating the
negative reasons against fighting; or as staging-posts on the way to
judgements of necessity and proportionality.
Given the gravity of the decision to go to war, only very serious
threats can give us just cause to fight. So if just cause is
satisfied, then you have weighty positive reasons to fight. Lacking
just cause does not *in itself* aggravate the badness of
fighting, but does make it less likely that the people killed in
pursuing one's war aims will be liable to be killed (more on
this below, and see McMahan 2005a), which makes killing very hard to
justify. Even if having a just cause is not strictly speaking a
necessary condition for warfare to be permissible, the absence of a
just cause makes it very difficult for a war to satisfy
proportionality.
If legitimate authority is satisfied then additional positive reasons
count in favour of fighting (see below). If it is not satisfied, then
this adds an additional reason against fighting, which must be
overcome for fighting to be proportionate.
The "reasonable prospects of success" criterion is a
surmountable hurdle on the way to proportionality. Typically, if a war
lacks reasonable prospects of success, then it will be
disproportionate, since wars always involve causing significant harms,
and if those harms are likely to be pointless then they are unlikely
to be justified. But of course sometimes one's likelihood of
victory is very low, and yet fighting is still the best available
option, and so necessary and proportionate. Having reasonable
prospects of success matters only for the same reasons that necessity
and proportionality matter. If necessity and proportionality are
satisfied, then the reasonable prospects of success standard is
irrelevant.
Right intention may also be irrelevant, but insofar as it matters its
absence would be a reason against fighting; having the right intention
does not give a positive reason to fight.
Lastly, discrimination is crucial to establishing proportionality and
necessity, because it tells us how to weigh the lives taken in
war.
## 3. *Jus ad Bellum*
### 3.1 Just Cause
Wars destroy lives and environments. In the eight years following the
Iraq invasion in 2003, half a million deaths were either directly or
indirectly caused by the war (Hagopian et al. 2013). Direct casualties
from the Second World War numbered over 60 million, about 3 per cent
of the world's population. War's environmental costs are
less commonly researched, but are obviously also extraordinary (Austin
and Bruch 2000). Armed forces use fuels in Olympian quantities: in the
years from 2000-2013, the US Department of Defense accounted for
around 80% of US federal government energy usage, between 0.75 and 1
quadrillion BTUs per year--a little less than *all* the
energy use that year in Denmark and Bulgaria, a little more than
Slovakia and Serbia (Energy Information Administration 2015a,b). They
also directly and indirectly destroy habitats and natural
resources--consider, for example, the Gulf war oil spill (El-Baz
and Makharita 1994). For both our planet and its inhabitants, wars are
truly among the very worst things we can do.
War can be necessary and proportionate only if it serves an end worth
all this death and destruction. Hence the importance of having a just
cause. And hence too the widespread belief that just causes are few
and far between. Indeed, traditional just war theory recognizes only
two kinds of justification for war: national defence (of one's
own state or of an ally) and humanitarian intervention. What's
more, humanitarian intervention is permissible only to avert the very
gravest of tragedies--"crimes that shock the moral
conscience of mankind" (Walzer 2006: 107).
Walzer argued that states' claims to sovereignty and territorial
integrity are grounded in the human rights of their citizens, in three
ways. First, states ensure individual security. Rights to life and
liberty have value "only if they also have dimension"
(Walzer 2006: 58), which they derive from states'
borders--"within that world, men and women... are safe
from attack; once the lines are crossed, safety is gone" (Walzer
2006: 57). Second, states protect a common life, made by their
citizens over centuries of interaction. If the common life of a
political community is valued by its citizens, then it is worth
fighting for. Third, they have also formed a political association, an
organic social contract, whereby individuals have, over time and in
informal ways, conceded aspects of their liberty to the community, to
secure greater freedom for all.
These arguments for national defence are double-edged. They help
explain why wars of national defence are permissible, but they also make
justifying humanitarian intervention harder. One can in principle
successfully conclude a war in defence of oneself or one's allies without any lasting
damage to the political sovereignty or territorial integrity of any of
the contending parties. In Walzer's view, humanitarian
interventions, in which one typically defends people against their own
state, necessarily undermine political sovereignty and territorial
integrity. So they must meet a higher burden of justification.
Walzer's traditionalist stances on national defence and
humanitarian intervention met heavy criticism. Early sceptics (Doppelt
1978; Beitz 1980; Luban 1980a) challenged Walzer's appeal to the
value of collective freedom, noting that in diverse political
communities freedom for the majority can mean oppression for the
minority (see also Caney 2006). In modern states, can we even speak of
a single common life? Even if we can, do wars really threaten it,
besides in extreme cases? And even if our common life and culture were
threatened, would their defence really justify killing innocent
people?
Critics also excoriated Walzer's appeal to individual rights
(see especially Wasserstrom 1978; Luban 1980b). They questioned the
normative purchase of his metaphor of the organic social contract (if
hypothetical contracts aren't worth the paper they're not
written on, then what are metaphorical contracts worth?). They
challenged his claim that states guarantee individual security: most
obviously, when humanitarian intervention seems warranted, the state
is typically the greatest threat to its members.
David Rodin (2002) advanced the quintessentially reductivist critique
of Walzer, showing that his attempt to ground state defence in
individual defensive rights could not succeed. He popularized the
"bloodless invasion objection" to this argument for
national defensive rights. Suppose an unjustly aggressing army would
secure its objectives without loss of life if only the victim state
offers no resistance (the 2001 invasion of Afghanistan, and the 2003
invasion of Iraq, arguably meet this description, as might some of
Russia's territorial expansions). If the right of national
defence is grounded in states' members' rights to
security, then in these cases there would be no right of national
defence, because their lives are at risk only if the victim state
fights back. And yet we typically do think it permissible to fight
against annexation and regime change.
By undermining the value of sovereignty, revisionists lowered the bar
against intervening militarily in other states. Often these arguments
were directly linked: some think that if states cannot protect the
security of their members, then they lack any rights to sovereignty
that a military intervention could undermine (Shue
1997).[11]
Caney (2005) argues that military intervention could be permissible
were it to serve individual human rights better than non-intervention.
Others countenance so-called "redistributive wars", fought
on behalf of the global poor to force rich states to address the
widespread violations of fundamental human rights caused by their
economic policies (Luban 1980b; Fabre 2012; Lippert-Rasmussen 2013;
Overland 2013).
Other philosophers, equally unpersuaded by Walzer's arguments,
nonetheless reject a substantively revisionist take on just cause. If
the individual self-defence-based view of *jus ad bellum*
cannot justify lethal defence against "lesser aggression",
then we *could* follow Rodin (2014), and argue for radically
revisionist conclusions about just cause; or we could instead reject
the individual self-defence-based approach to justifying killing in
war (Emerton and Handfield 2014; Lazar 2014).
Some think we can solve the "problem of lesser aggression"
by invoking the importance of deterrence, as well as the impossibility
of knowing for sure that aggression will be bloodless (Fabre 2014).
Others think that we must take proper account of people's
interest in having a democratically elected, or at least home-grown,
government, to justify national defence. On one popular account,
although no individual could permissibly kill to protect her own
"political interests", when enough people are threatened,
their aggregated interests justify going to war (Hurka 2007; Frowe
2014). Counterintuitively, this means that more populous states have,
other things equal, more expansive rights of national defence.
However, perhaps states have a group right to national defence, which
requires only that a sufficient number of individuals have the
relevant political interests--any excess over the threshold is
morally irrelevant. Many already think about national
self-determination in this way: the population of the group seeking
independence has to be sufficiently large before we take their claim
seriously, but differences above that threshold matter much less
(Margalit and Raz 1990).
The revisionist take on humanitarian intervention might also have some
troubling results. If sovereignty and territorial integrity matter
little, then shouldn't we use military force more often? As Kutz
(2014) has argued, revisionist views on national defence might license
the kind of military adventurism that went so badly wrong in Iraq,
where states have so little regard for sovereignty that they go to war
to improve the domestic political institutions of their
adversaries.
We can resolve this worry in one of two ways. First, recall just how
infrequently military intervention succeeds. Since it so often not
only fails, but actually makes things worse, we should use it only
when the ongoing crimes are so severe that we would take any risk to
try to stop them.
Second, perhaps the political interests underpinning the state's
right to national defence are not simply interests in being part of an
ideal liberal democracy, but in being governed by, very broadly,
members of one's own nation, or perhaps even an interest in
collective self-determination. This may take us back to Walzer's
"romance of the nation-state", but people clearly do care
about something like this. Unless we want to restrict rights of
national defence to liberal democracies alone (bearing in mind how few
of them there are in the world), we have to recognize that our
political interests are not all exclusively liberal-democratic.
What of redistributive wars? Too often arguments on this topic
artfully distinguish between just cause and other conditions of
*jus ad bellum* (Fabre 2012). Even when used by powerful states
against weak adversaries, military force is rarely a moral triumph. It
tends to cause more problems than it solves. Redistributive wars, as
fought on behalf of the "global poor" against the
"global rich", would obviously fail to achieve their
objectives; indeed, they would radically exacerbate the suffering of
those they aim to help. So they would be disproportionate, and cannot
satisfy the necessity constraint. The theoretical point that, in
principle, not only national defence and humanitarian intervention
could give just causes for war is sound. But this example is in
practice irrelevant (for a robust critique of redistributive
wars, see Benbaji 2014).
And yet, given the likely path of climate change, the future might see
resource wars grow in salience. As powerful states find themselves
lacking crucial resources, held by other states, we might find that
military attack is the best available means to secure these resources,
and save lives. Perhaps in some such circumstances resource wars could
be a realistic option.
### 3.2 Just Peace
The goods and bads relevant to *ad bellum* proportionality and
necessity extend far beyond the armistice. This is obvious, but has
recently received much-needed emphasis, both among philosophers and in
the broader public debate sparked by the conflicts in Iraq and
Afghanistan (Bass 2004; Coady 2008; May 2012). Achieving your just
cause is not enough. The aftermath of the war must also be
sufficiently tolerable if the war is to be proportionate, all things
considered. It is an open question how far into the future we have to
look to assess the morally relevant consequences of conflict.
### 3.3 Legitimate Authority
Historically, just war theory has been dominated by statists. Most
branches of the tradition have had some version of a
"legitimate", "proper" or "right"
authority constraint, construed as a necessary condition for a war to
be *ad bellum*
just.[12]
In practice, this means that sovereigns and states have rights that
non-state actors lack. International law gives only states rights of
national defence and bestows "combatant rights" primarily
on the soldiers of states. Although Walzer said little about
legitimate authority, his arguments all assume that states have a
special moral standing that non-state actors lack.
The traditionalist, then, says it matters that the body fighting the
war have the appropriate authority to do so. Some think that authority
is grounded in the overall legitimacy of the state. Others think that
overall legitimacy is irrelevant--what matters is whether the
body fighting the war is authorized to do so by the polity that it
represents (Lazar forthcoming-b). Either way, states are much more
likely to satisfy the legitimate authority condition than non-state
actors.
Revisionists push back: relying on reductivist premises, they argue
that killing in war is justified by the protection of individual
rights, and our licence to defend our rights need not be mediated
through state institutions. Either we should disregard the legitimate
authority condition or we should see it as something that non-state
actors can, in fact, fulfil (Fabre 2008; Finlay 2010; Schwenkenbecher
2013).
Overall, state legitimacy definitely seems relevant for some questions
in war (Estlund 2007; Renzo 2013). But authorization is more
fundamental. Ideally, the body fighting the war should be authorized
to do so by the institutions of a constitutional democracy. Looser
forms of authorization are clearly possible; even a state that is not
overall legitimate might nonetheless be authorized by its polity to
fight wars of national defence.
Authorization of this kind matters to *jus ad bellum* in two
ways. First, fighting a war without authorization constitutes an
additional wrong, which has to be weighed against the goods that
fighting will bring about, and must pass the proportionality and
necessity tests. When a government involves its polity in a war, it
uses the resources of the community at large, as well as its name, and
exposes it to both moral and prudential risks (Lazar forthcoming-b).
Doing this unauthorized is obviously deeply morally problematic. Any
form of undemocratic decision-making by governments is objectionable;
taking decisions of this magnitude without the population's
granting you the right to do so is especially wrong.
Second, authorization can allow the government to act on positive
reasons for fighting that would otherwise be unavailable. Consider the
claim that wars of national defence are in part justified by the
political interests of the citizens of the defending
state--interests, for example, in democratic participation or in
collective self-determination. A government may defend these
aggregated political interests only if it is authorized to do so.
Otherwise fighting would contravene the very interests in
self-determination that it is supposed to protect. But if it is
authorized, then that additional set of reasons supports fighting.
As a result, democratic states enjoy somewhat more expansive war
rights than non-democratic states and non-state movements. The latter
two groups cannot often claim the same degree of authorization as
democratic states. Although this might not vindicate the current bias
in international law towards states, it does suggest that it
corresponds to something more than the naked self-interest of the
framers of international law--which were, of course, states. This
obviously has significant implications for civil wars (see Parry
2016).
### 3.4 Proportionality
The central task of the proportionality constraint, recall, is to
identify reasons that tell in favour of fighting and those that tell
against it. Much of the latter task is reserved for the discussion of
*jus in bello* below, since it concerns weighing lives in
war.
Among the goods that help make a war proportionate, we have already
considered those in the just cause and others connected to just peace
and legitimate authority. Additionally, many also think that
proportionality can be swayed by reasonable partiality towards
one's own state and co-citizens. Think back to the political
interests that help justify national defence. If we were wholly
impartial, then we should choose the course that will best realise
people's political interests overall. So if fighting the
defensive war would undermine the political interests of the adversary
state's citizens more than not fighting would undermine those of our
own, then we
should refuse to fight. But this is not how we typically think about
the permission to resort to war: we are typically entitled to be
somewhat partial towards the political interests of our
co-citizens.
Some propose further constraints on what goods can count towards the
proportionality of a war. McMahan and McKim (1993) argued that
benefits like economic progress cannot make an otherwise
disproportionate war proportionate. This is probably true in practice,
but perhaps not in principle--that would require a kind of
lexical priority between lives taken and economic benefits, and
lexical priorities are notoriously hard to defend. After all, economic
progress saves lives.
Some goods lack weight in *ad bellum* proportionality, not
because they are lexically inferior to other values at stake, but
because they are conditional in particular ways. Soldiers have
conditional obligations to fulfil their roles, grounded in their
contracts, oaths, and their co-citizens' legitimate
expectations. That carrying out an operation fulfils my oath gives me
a reason to perform that operation, which has to be weighed in the
proportionality calculation (Lazar 2015b). But these reasons cannot
contribute to *ad bellum* proportionality in the same way,
because they are conditional on the war as a whole being fought.
Political leaders cannot plausibly say: "were it not for all the
oaths that would be fulfilled by fighting, this war would be
disproportionate". This is because fighting counts as fulfilling
those oaths only if the political leader decides to take her armed
forces to war.
Another reason to differentiate between proportionality *ad
bellum* and *in bello* is that the relevant comparators
change for the two kinds of assessment. In a loose sense, we determine
proportionality by asking whether some option is better than doing
nothing. The comparator for assessing the war as a whole, then, is
*not fighting at all*, ending the war as a whole. That option
is not available when considering particular actions within the
war--one can only decide whether or not to perform this
particular action.
### 3.5 Last Resort (Necessity)
Are pre-emptive wars, fought in anticipation of an imminent enemy
attack, permissible? What of preventive wars, in which the assault
occurs prior to the enemy having any realistic plan of attack (see, in
general, Shue and Rodin 2007)? Neoconservatives have recently argued,
superficially plausibly, that the criterion of last resort can be
satisfied long before the enemy finally launches an attack (see
President 2002). The right answer here is boringly familiar. In
principle, of course this is possible. But, in practice, we almost
always overestimate the likelihood of success from military means and
overlook the unintended consequences of our actions. International law
must therefore retain its restrictions, to deter the kind of
overzealous implementation of the last-resort principle that we saw in
the 2003 invasion of Iraq (Buchanan and Keohane 2004; Luban 2004).
Another frequently discussed question: what does the
"last" in last resort really mean? The idea is simple, and
is identical to *in bello* necessity. Going to war must be
compared with the alternative available strategies for dealing with
the enemy (which also includes the various ways in which we could
submit). Going to war is literally a last resort when no other
available means has any prospect of averting the threat. But our
circumstances are not often this straitened. Other options always have
*some* chance of success. So if you have a diplomatic
alternative to war, which is less harmful than going to war, and is at
least as likely to avert the threat, then going to war is not a last
resort. If the diplomatic alternative is less harmful, as well as less
likely to avert the threat, then the question is whether the reduction
in expected harm is great enough for us to be required to accept the
reduction in likelihood of averting the threat. If not, then war is
your last
resort.[13]
## 4. *Jus in Bello*
### 4.1 Walzer and his Critics
The traditionalist *jus in bello*, as reflected in
international law, holds that conduct in war must satisfy three
principles:
1. Discrimination: Targeting noncombatants is
impermissible.[14]
2. Proportionality: Collaterally harming noncombatants (that is,
harming them foreseeably, but unintendedly) is permissible only if the
harms are proportionate to the goals the attack is intended to
achieve.[15]
3. Necessity: Collaterally harming noncombatants is permissible only
if, in the pursuit of one's military objectives, the least
harmful means feasible are
chosen.[16]
These principles divide the possible victims of war into two classes:
combatants and noncombatants. They place no constraints on killing
combatants.[17]
But--outside of "supreme emergencies", rare
circumstances in which intentionally killing noncombatants is
necessary to avert an unconscionable threat--noncombatants may be
killed only unintendedly and, even then, only if the harm they suffer
is necessary and proportionate to the intended goals of the
attack.[18]
Obviously, then, much hangs on what makes one a combatant. This entry adopts a
conservative definition. Combatants are (most) members of the
organized armed forces of a group that is at war, as well as others
who directly participate in hostilities or have a continuous combat
function (for discussion, see Haque 2017). Noncombatants are
not combatants. There are, of course, many hard cases, especially in
asymmetric wars, but they are not considered here. "Soldier" is
used interchangeably with "combatant" and
"civilian" interchangeably with
"noncombatant".
Both traditionalist just war theory and international law explicitly
license fighting in accordance with these constraints, regardless of
one's objectives. In other words, they endorse:
Combatant Equality: Soldiers who satisfy Discrimination,
Proportionality, and Necessity fight permissibly, regardless of what
they are fighting for.
[19]
We discuss Proportionality and Necessity below; for now let us
concentrate on Michael Walzer's influential argument for
Discrimination and Combatant Equality, which has proved very
controversial.
Individual human beings enjoy fundamental rights to life and liberty,
which prohibit others from harming them in certain ways. Since
fighting wars obviously involves depriving others of life and liberty,
according to Walzer, it can be permissible only if each of the victims
has, "through some act of his own ... surrendered or lost
his rights" (Walzer 2006: 135). He then claims that,
"simply by fighting", all combatants "have lost
their title to life and liberty" (Walzer 2006: 136). First,
merely by posing a threat to me, a person alienates himself from me,
and from our common humanity, and so himself becomes a legitimate
target of lethal force (Walzer 2006: 142). Second, by participating in
the armed forces, a combatant has "allowed himself to be made
into a dangerous man" (Walzer 2006: 145), and thus surrendered
his rights. By contrast, noncombatants are "men and women with
rights, and... they cannot be used for some military purpose,
even if it is a legitimate purpose" (Walzer 2006: 137). This
introduces the concept of liability into the debate, which we need to
define carefully. On most accounts, that a person is liable to be
killed means that she is not wronged by being killed. Often this is
understood, as it was in Walzer, in terms of rights: everyone starts
out with a right to life, but that right can be forfeited or lost,
such that one can be killed without that right being violated or
infringed. Walzer and his critics all agreed that killing a person
intentionally is permissible only if either she has lost the
protection of her right to life, or if the good achieved thereby is
very great indeed, enough that, though she is wronged, it is not all
things considered wrong to kill her. Her right is permissibly
infringed. Walzer and his critics believe that such cases are very
rare in war, arising only when the alternative to intentionally
violating people's right to life is an imminent catastrophe on
the order of Nazi victory in Europe (this is an example of a supreme emergency).
These simple building blocks give us both Discrimination and Combatant
Equality--the former, because noncombatants, in virtue of
retaining their rights, are not legitimate objects of attack; the
latter, because all combatants lose their rights, regardless of what
they are fighting for: hence, as long as they attack only enemy
combatants, they fight legitimately, because they do not violate
anyone's rights.
These arguments have faced withering criticism. The simplest objection
against Combatant Equality brings it into conflict with
Proportionality (McMahan 1994; Rodin 2002; Hurka 2005). Unintended
noncombatant deaths are permissible only if proportionate to the
military objective sought. This means the objective is worth that much
innocent suffering. But military objectives are merely means to an
end. Their worth depends on how valuable the end is. How many innocent
deaths would be proportionate to Al Shabab's successfully
gaining control of Mogadishu now or to Iraq's capturing Kuwaiti
territory and oil reserves in 1991? In each case the answer is
obvious: none.
Proportionality is about weighing the evil inflicted against the evil
averted (Lee 2012). But the military success of unjust combatants does
not avert evil, it is itself evil. Evil intentionally inflicted can
only add to, not counterbalance, unintended evils. Combatant Equality
cannot be true.
Other arguments against Combatant Equality focus on Walzer's
account of how one loses the right to life. They typically start by
accepting his premise that permissible killing in war does not violate
the rights of the victims against being killed, at least for
intentional
killing.[20]
This contrasts with the view that sometimes people's rights to
life can be overridden, so war can be permissible despite infringing
people's rights. Walzer's critics then show that his
account of how we lose our right to life is simply not plausible.
Merely posing a threat to others--even a lethal threat--is
not sufficient to warrant the loss of one's fundamental rights,
because sometimes one threatens others' lives for very good
reasons (McMahan 1994). The soldiers of the Kurdish Peshmerga,
heroically fighting to rescue Yazidis from ISIL's genocidal
attacks, do not thereby lose their rights not to be killed by their
adversaries. Posing threats to others in the pursuit of a just aim,
where those others are actively trying to thwart that just aim, cannot
void or vitiate one's fundamental natural rights against being
harmed by those very people. The consent-based argument is equally
implausible as a general defence for Combatant Equality. Unjust
combatants have something to gain from waiving their rights against
lethal attack, if doing so causes just combatants to effect the same
waiver. And on most views, many unjust combatants have nothing to
lose, since by participating in an unjust war they have at least
weakened if not lost those rights already. Just combatants, by
contrast, have something to lose, and nothing to gain. So why would
combatants fighting for a just cause consent to be harmed by their
adversaries, in the pursuit of an unjust end?
Walzer's case for Combatant Equality rests on showing that just
combatants lose their rights to life. His critics have shown that his
arguments to this end fail. So Combatant Equality is false. But they
have shown more than this. Inspired by Walzer to look at the
conditions under which we lose our rights to life, his critics have
made theoretical advances that place other central tenets of *jus
in bello* in jeopardy. They argued, *contra* Walzer, that
posing a threat is not sufficient for liability to be killed (McMahan
1994, 2009). But they also showed that posing the threat oneself is
not necessary for liability either. This is more controversial, but
revisionists have long argued that liability is grounded, in war as
elsewhere, in one's responsibility for contributing to a
wrongful threat. The US president, for example, is responsible for a
drone strike she orders, even though she does not fire the
weapon herself.
As many have noted, this argument undermines Discrimination (McMahan
1994; Arneson 2006; Fabre 2012; Frowe 2014). In many states,
noncombatants play an important role in the resort to military force.
In modern industrialized countries, as much as 25 per cent of the
population works in war-related industries (Downes 2006: 157-8;
see also Gross 2010: 159; Valentino et al. 2010: 351); we provide the
belligerents with crucial financial and other services; we support and
sustain the soldiers who do the fighting; we pay our taxes and in
democracies we vote. Our contributions to the state's capacity
over time give it the strength and support to concentrate on
war.[21]
If the state's war is unjust, then many noncombatants are
responsible for contributing to wrongful threats. If that is enough
for them to lose their rights to life, then they are permissible
targets.
McMahan (2011a) has sought to avert this troubling implication of his
arguments by contending that almost all noncombatants on the unjust
side (unjust noncombatants) are less responsible than all unjust
combatants. But this involves applying a double standard: it talks up
the responsibility of combatants while talking down that of
noncombatants, and it misconstrues a central element in his own account
of liability to be killed. On his view, a person is liable to be killed
in self- or other-defence in virtue of being, of those able to bear an
unavoidable and indivisible harm, the one who is most responsible for
this situation coming about (McMahan 2002, 2005b). Even if
noncombatants are only *minimally* responsible for their
states' unjust wars--that is, they are not blameworthy,
they merely voluntarily acted in a way that foreseeably contributed to
this result--on McMahan's view this is enough to make them
liable to be killed, if doing so is necessary to save the lives of
wholly innocent combatants and noncombatants on the just side (see
especially McMahan 2009: 225).
One response is to reject this comparative account of how
responsibility determines liability, and argue for a non-comparative
approach, according to which one's degree of responsibility must
be great enough to warrant such a severe derogation from one's
fundamental rights. But if we do this, we must surely concede that
many combatants on the unjust side are not sufficiently responsible
for unjustified threats to be liable to be killed. Whether through
fear, disgust, principle or ineptitude, many combatants are wholly
ineffective in war, and contribute little or nothing to threats posed
by their side. The much-cited research of S. L. A. Marshall claimed
that only 15-25 per cent of Allied soldiers in the Second World
War who could have fired their weapons did so (Marshall 1978). Most
soldiers have a natural aversion to killing, which even intensive
psychological training may not overcome (Grossman 1995). Many
contribute no more to unjustified threats than do noncombatants. They
also lack the "*mens rea*" that might make
liability appropriate in the absence of a significant causal
contribution. They are not often blameworthy. The loss of their right
to life is not a fitting response to their conduct.
If Walzer is right that in war, outside of supreme emergencies, we may
intentionally kill only people who are liable to be killed, and if a
significant proportion of unjust combatants and noncombatants are
responsible to the same degree as one another for unjustified threats,
and if liability is determined by responsibility, then we must decide
between two unpalatable alternatives. If we set a high threshold of
responsibility for liability, to ensure that noncombatants are not
liable to be killed, then we will also exempt many combatants from
liability. In ordinary wars, which do not involve supreme emergencies,
intentionally killing such non-liable combatants would be
impermissible. This moves us towards a kind of pacifism--though
warfare can in principle be justified, it is so hard to fight without
intentionally killing the non-liable that in practice we must be
pacifists (May 2015). But if we set the threshold of responsibility
low, ensuring that all unjust combatants are liable, then many
noncombatants will be liable too, thus rendering them permissible
targets and seriously undermining Discrimination. We are torn between
pacifism on the one hand, and realism on the other. This is the
"responsibility dilemma" for just war theory (Lazar
2010).
### 4.2 Killing Combatants
Just war theory has meaning only if we can explain why killing some
combatants in war is allowed, but we are not thereby licensed to kill
everyone in the enemy state. Here the competing forces of realism and
pacifism are at their most compelling. It is unsurprising, therefore,
that so much recent work has focused on this topic. We cannot do
justice to all the arguments here, but will instead consider three kinds of
response: all-out revisionist; moderate traditionalist; and all-out
traditionalist.
The first camp faces two challenges: to justify intentionally killing
apparently non-liable unjust combatants; but to do this without
reopening the door to Combatant Equality, or indeed further
undermining Discrimination. Their main move is to argue that, despite
appearances, all and only unjust combatants are in fact liable to be
killed.
McMahan argues that liability to be killed need not, in fact,
presuppose responsibility for an unjustified threat. Instead, unjust
combatants' responsibility for just combatants' reasonable
beliefs that they are liable may be enough to ground forfeiture of their rights
(McMahan 2011a). Some argue that combatants' responsibility for
being in the wrong place at the wrong time is enough (likening them to
voluntary human
shields).[22]
More radically still, some philosophers abandon the insistence on
individual responsibility, arguing that unjust combatants are
collectively responsible for contributing to unjustified threats, even
if they are individually ineffective (or even counterproductive) (Kamm
2004; Bazargan 2013).
Lazar (forthcoming-a) suggests these arguments are
unpersuasive. Complicity might be relevant to the costs one is
required to bear in war, but most liberals will baulk at the idea of
losing one's right to life in virtue of things that other people
did. And if combatants can be complicitously liable for what their
comrades-in-arms did, then why shouldn't noncombatants be
complicitously liable also?
Blameworthy responsibility for other people's false beliefs does
seem relevant to the ethics of self- and other-defence. That said,
consider an idiot who pretends to be a suicide bomber as a prank, and
is shot by a police officer (Ferzan 2005; McMahan 2005c). Is killing
him objectively permissible? It seems doubtful. The officer's justified
belief that the prankster posed a threat clearly diminishes the
wrongfulness of killing him (Lazar 2015a). And certainly the
prankster's fault excuses the officer of any guilt. But killing
the prankster still seems objectively wrong. Even if someone's
blameworthy responsibility for false beliefs could make killing him
objectively permissible, most philosophers agree that many unjust
combatants are not to blame for the injustice of their wars (McMahan
1994; Lazar 2010). And it is *much* less plausible that
blameless responsibility for beliefs can make one a permissible
target. Even if it did, this would count in favour of moderate
Combatant Equality, since most just combatants are also blamelessly
responsible for unjust combatants' reasonable beliefs that they
are liable to be killed.
Moderate traditionalists think we can avoid the realist and pacifist
horns of the responsibility dilemma only by conceding a moderate form
of Combatant Equality. The argument proceeds in three stages. First,
endorse a non-comparative, high threshold of responsibility for
liability, such that most noncombatants in most conflicts are not
responsible enough to be liable to be killed. This helps explain why
killing civilians in war is so hard to justify. Of course, it also
entails that many combatants will be innocent too. The second step,
then, is to defend the principle of *Moral Distinction*,
according to which killing civilians is worse than killing soldiers.
This is obviously true if the soldiers are liable and the civilians
are not. But the challenge is to show that killing *non-liable*
civilians is worse than killing *non-liable* soldiers. If we
can do that, then the permissibility of intentionally killing
non-liable soldiers does not entail that intentionally killing
non-liable noncombatants is permissible. Of course, one might still
argue that, even if Moral Distinction is true, we should endorse
pacifism. But, and this is the third stage, the less seriously
wrongful an act is, the lesser the good that must be realised by
performing it for the act to be all things considered permissible. If
intentionally killing innocent combatants is not the worst kind of
killing one can do, then the good that must be realised for it to be
all things considered permissible is smaller than for, say,
intentionally killing innocent civilians, which philosophers
tend to think can be permissible only in a supreme emergency. This
could mean that intentionally killing innocent soldiers is permissible
even in the ordinary circumstances of war.
Warfare can be justified, then, by a combination of liability and
lesser evil grounds. Some unjust combatants lose their rights not to
be killed. Others' rights can be overridden without that
implying that unjust noncombatants' rights may be overridden
too. We can reject the pacifist horn of the responsibility dilemma.
But a moderate Combatant Equality is likely to be true: since killing
innocent combatants is not the worst kind of killing, it is
correspondingly easier for unjust combatants to justify using lethal
force (at least against just combatants). This increases the range of
cases in which they can satisfy Discrimination, Proportionality, and
Necessity, and so fight permissibly.
Much hangs, then, on the arguments for Moral Distinction. Some focus
on why killing innocent noncombatants is especially wrongful; others
on why killing innocent combatants is not so bad. This section
considers the second kind of argument, returning to the first in the
next section.
The revisionists' arguments mentioned above might not ground
liability, but perhaps they do provide *some* reason to prefer
harming combatants. Combatants can better avoid harm than
noncombatants. Combatants surely do have *somewhat* greater
responsibilities to bear costs to avert the wrongful actions of their
comrades-in-arms than do noncombatants. And the readiness of most
combatants to fight--regardless of whether their cause is
just--likely means that even just combatants have somewhat
muddied status relative to noncombatants. They conform to their
opponents' rights only by accident. They have weaker grounds for
complaint when they are wrongfully killed than do noncombatants, who
more robustly respect the rights of others (on robustness and respect,
see Pettit 2015).
Additionally, when combatants kill other combatants, they typically
believe that they are doing so permissibly. Most often they believe
that their cause is just, and that this is a legitimate means to bring
it about. But, insofar as they are lawful combatants, they will also
believe that international law constrains their actions, so that by
fighting in accordance with it they are acting permissibly. Lazar
(2015c) argues that killing people when you know that doing so is
objectively wrong is more seriously objectionable than doing so when
you reasonably believe that you are acting permissibly.
The consent-based argument for Combatant Equality fails because of its
empirical, not its normative premise. If combatants in fact waived
their rights not to be killed by their adversaries, even when fighting
a just war, then that would clearly affect their adversaries'
reasons for action, reducing the wrongfulness of killing anyone who
had waived that right. The problem is that they have not waived their
rights not to be killed. However, they often do offer a more limited
implicit waiver of their rights. The purpose of having armed forces,
and the intention of many who serve in them, is to protect civilians
from the predations of war. This means both countering threats to and
drawing fire away from them. Combatants interpose themselves between
the enemy and their civilian compatriots, and fight on their
compatriots' behalf. If they abide by the laws of war, they
clearly distinguish themselves from the civilian population, wearing a
uniform and carrying their weapons openly. They implicitly say to
their adversaries: "you ought to put down your weapons. But if
you are going to fight, then *fight us*". This
constitutes a limited waiver of their rights against harm. Like a full
waiver, it alters the reasons confronting their
adversaries--under these circumstances, other things equal it is
worse to kill the noncombatants. Of course, in most cases unjust
combatants ought simply to stop fighting. But this conditional waiver
of their opponents' rights means that, if they are not going to
put down arms, they do better to target combatants than
noncombatants.
Of course, one might think that in virtue of their altruistic
self-sacrifice, just combatants are actually the *least*
deserving of the harms of war (Tadros 2014). But, first, warfare is
not a means for ensuring that people get their just deserts. More
importantly, given that their altruism is specifically intended to
draw fire away from their compatriot noncombatants, it would be
perverse to treat this as a reason to do precisely what they are
trying to prevent.
These arguments and others suggest that killing innocent combatants is
not the worst kind of killing one can do. It might therefore be all
things considered permissible in the ordinary circumstances of war,
provided enough good is achieved thereby. If unjust combatants attack
only just combatants, and if they achieve some valuable objective by
doing so--defence of their comrades, their co-citizens, or their
territory--they therefore might fight permissibly, even though
they violate the just combatants' rights (Kamm 2004; Hurka 2005;
Kamm 2005; Steinhoff 2008; Lazar 2013). At least, it is more plausible
that they can fight permissibly than if we regarded every just
combatant's death as equivalent to the worst kind of murder.
This does not vindicate Combatant Equality--it simply shows that,
more often than one might think, unjust combatants can fight
permissibly. Add to that the fact that all wars are morally
heterogeneous, involving just and unjust phases (Bazargan 2013), and
we quickly see that even if Combatant Equality in the laws of war
lacks fundamental moral foundations, it is a sensible approximation of
the truth.
Some philosophers, however, seek a more robust defence of Combatant
Equality. The three most prominent lines are institutionalist. A
contractualist argument (Benbaji 2008, 2011) starts by observing that
states (and their populations) need disciplined armies for the
purposes of national defence. If soldiers always had to decide for
themselves whether a particular war was just, many states could not
raise armies when they need to. They would be unable to deter
aggression. All states, and all people, benefit from an arrangement
whereby individual combatants waive their rights not to be killed by
one another--allowing them to obey their state's commands
without second-guessing every deployment. Combatants tacitly consent
to waive their rights in this way, given common knowledge that
fighting in accordance with the laws of war involves such a waiver.
Moreover, their assent is "morally effective" because it
is consistent with a fair and optimal contract among states.
International law does appear to change the moral standing of
combatants. If you join the armed forces of a state, you know that, at
international law, you thereby become a legitimate target in armed
conflict. This has to be relevant to the wrongfulness of harming you,
even if you are fighting for a just cause. But Benbaji's
argument is more ambitious than this. He thinks that soldiers waive
their rights not to be killed by one another--not the limited,
conditional waiver described above, but an outright waiver, that
absolves their adversaries of any wrongdoing (though it does not so
absolve their military and political leaders).
The first problem with this proposal is that it rests on contentious
empirical speculation about whether soldiers in fact consent in this
way. But setting that aside, second, it is radically statist, implying
that international law simply doesn't apply to asymmetric
conflicts between states and non-state actors, since the latter are
not part of the appropriate conventions. This gives international law
shallow foundations, which fail to support the visceral outrage that
breaches of international law typically evoke. It also suggests that
states that either don't ratify major articles of international
law, or that withdraw from agreements, can escape its strictures. This
seems mistaken. Third, we typically regard waivers of fundamental
rights as reversible when new information comes to light. Why
shouldn't just combatants be allowed to withdraw their
rights-waiver when they are fighting a just war? Many regard the right
to life as inalienable; even if we deny this, we must surely doubt
whether you can alienate it once and for all, under conditions of
inadequate information. Additionally, suppose that you want to join
the armed forces only to fight a specific just war (McMahan 2011b).
Why should you waive your rights against harm in this case, given that
you plan only to fight now? Fourth, and most seriously, even if
Benbaji's argument explained why killing combatants in war is
permissible regardless of the cause you are serving, it cannot explain
why unintentionally killing noncombatants as a side-effect of
one's actions is permissible. By joining the armed forces of
their state, soldiers at least do *something* that implies
their consent to the regime of international law that structures that
role. But noncombatants do not consent to this regime. Soldiers
fighting for unjust causes will inevitably kill many innocent
civilians. If those deaths cannot be rendered proportionate, then
Combatant Equality does not hold.
The second institutionalist argument starts from the belief that we
have a duty to obey the law of our legitimate state. This gives unjust
combatants, ordered to fight an unjust war, some reason to obey those
orders. We can ground this in different ways. Estlund (2007) argues
that the duty to obey orders derives from the epistemic authority of
the state--it is more likely than an individual soldier to know
whether this war is just (see Renzo 2013 for criticism); Cheyney Ryan
(2011) emphasizes the democratic source of the state's
authority, as well as the crucial importance of maintaining civilian
control of the military. These are genuine moral
reasons that should weigh in soldiers' deliberations. But are
they really weighty enough to ground Combatant Equality? It seems doubtful.
They cannot systematically override unjust combatants'
obligations not to kill innocent people. This point stands regardless
of whether these reasons weigh in the balance, or are exclusionary
reasons that block others from being considered (Raz 1985). The rights
of innocent people not to be killed are the weightiest, most
fundamental rights around. For some other reason to outweigh them, or
exclude them from deliberation, it would have to be extremely
powerful. Combatants' obligations to obey orders simply are not
weighty enough--as everyone recognises with respect to obedience
to unlawful *in bello* commands (McMahan 2009: 66ff).
Like the first argument, the third institutionalist argument grounds
Combatant Equality in its long-term results. But instead of focusing
on states' ability to defend themselves, it emphasizes the
importance of limiting the horrors of war, given that we know that people
deceive themselves about the justice of their cause (Shue 2008, 2010;
Dill and Shue 2012; Shue 2013; Waldron 2016). Since combatants and
their leaders almost always believe themselves to be in the right, any
injunction to unjust combatants to lay down their arms would simply be
ignored, while any additional permissions to harm noncombatants would
be abused by both sides. In almost all wars, it is *sufficient*
to achieve military victory that you target only combatants. If doing
this will minimize wrongful deaths in the long run, we should enjoin
that all sides, regardless of their aims, respect Discrimination.
Additionally, while it is extremely difficult to secure international
agreement even about what in fact constitutes a just cause for war
(witness the controversy over the Rome statute on crimes of aggression, which took many years of negotiation before diplomats agreed an uneasy compromise), the traditionalist principles
of *jus in bello* already have broad international support.
They are hard-won concessions that we should abandon only if we are
sure that the new regime will be an improvement (Roberts 2008).
Although this argument is plausible, it doesn't address the same
question as the act-focused arguments that preceded it. One thing we
can ask is: given a particular situation, what ought we to do? How
ought soldiers to act in Afghanistan, or Mali, or Syria, or Somalia?
And when we ask this question, we shouldn't start by assuming
that we or they will obviously fail to comply with any exacting moral
standards that we might propose (Lazar 2012a; Lazar and Valentini
forthcoming). When considering our own actions, and those of people over whom
we have influence, we should select from all the available options,
not rule some out because we know ourselves to be too immoral to take
them. When designing institutions and laws, on the other hand, of
course we should think about how people are likely to respond to them.
We need to answer both kinds of questions: what *really* ought
I to do? And what should the laws be, given my and others'
predictable frailty?
A moderate Combatant Equality, then, is the likely consequence of
avoiding the pacifist horn of the responsibility dilemma. To show that
killing in war is permissible, we need to show that intentionally
killing innocent combatants is not as seriously wrongful as
intentionally killing innocent noncombatants. And if killing innocent
combatants is not the worst kind of killing, it can more plausibly be
justified by the goods achieved in ordinary wars, outside of supreme
emergencies. On this view, contrary to the views of both Walzer and
his critics, much of the intended killing in justified wars is
permissible not because the targets are liable to be killed, but
because infringing their rights is a permissible lesser evil. But this
principle applies regardless of whether you are on the just or the
unjust side. This in turn increases the range of cases in which
combatants fighting on the unjust side will be able to fight
permissibly: instead of needing to achieve some good comparable to
averting a supreme emergency in order to justify infringing the rights
of just combatants, they need only achieve more prosaic kinds of
goods, since these are not the worst kinds of rights infringements. So
unjust combatants' associative duties to protect one another and
their compatriots, their duties to obey their legitimate governments,
and other such considerations, can sometimes make intentionally
killing just combatants a permissible lesser evil, and unintentionally
killing noncombatants proportionate. This means that the existing laws
of war are a closer approximation of combatants' true moral
obligations than many revisionists think. Nonetheless, much of the
killing done by unjust combatants in war is still objectively
wrong.
### 4.3 Sparing Civilians
The middle path in just war theory depends on showing that killing
civilians is worse than killing soldiers. This section discusses
arguments to explain why killing civilians is distinctly
objectionable. We discuss the significance of *intentional*
killing when considering proportionality, below.
These arguments are discussed at great length in Lazar (2015c), and
are presented only briefly here. They rest on a key point: Moral
Distinction says that killing civilians is worse than killing
soldiers. It *does not* say that killing civilians is worse
than killing soldiers, *other things equal*. Lazar
holds that stronger principle but does not think that the intrinsic
differences between killing civilians and killing soldiers--the
properties that are *necessarily* instantiated in those two
kinds of killings--are weighty enough to provide Moral
Distinction with the kind of normative force needed to protect
noncombatants in war. That protection depends on mobilising multiple
foundations for Moral Distinction, which include many properties that
are *contingently but consistently* instantiated in acts that
kill civilians and kill soldiers, which make killing civilians worse.
We cannot ground Moral Distinction in any one of these properties
alone, since each is susceptible to counterexamples. But when they are
all taken together, they justify a relatively sharp line between
harming noncombatants and harming combatants. There are, of course,
hard cases, but these must be decided by appealing to the salient
underlying properties rather than to the mere fact of membership in
one group or the other.
First, at least deliberately killing civilians in war usually fails
even the most relaxed interpretation of the necessity constraint. This
is not always true--killing is necessary if it is effective at
achieving your objective, and no other effective options are
available. Killing civilians sometimes meets this description. It is
often effective: the blockade of Germany helped end the first world
war, though it may have caused as many as half a million civilian
deaths; Russian targeting of civilians in Chechnya reduced Russian
combatant casualties (Lyall 2009); Taliban anti-civilian tactics have
been effective in Afghanistan. And these attacks are often the last
recourse of groups at war (Valentino 2004); when all other options
have failed or become too costly, targeting civilians is relatively
easy to do. Indeed, as recent terrorist attacks have shown (Mumbai and
Paris, for example), fewer than ten motivated gunmen with basic
weaponry can bring the world's most vibrant cities screeching to
a halt. So, killing civilians *can* satisfy the necessity
constraint. Nonetheless, attacks on civilians are often wholly wanton,
and there is a special contempt expressed in killing innocent people
either wantonly or for its own sake. At least if you have
*some* strategic goal in sight, you might believe that
something is at stake that outweighs the innocent lives taken. Those
who kill civilians pointlessly express their total disregard for their
victims in doing so.
Second, even when killing civilians is effective, it is usually so
*opportunistically* (Quinn 1989; Frowe 2008; Quong 2009; Tadros
2011). That is, the civilians' suffering is used as a means to
compel their compatriots and their leaders to end their war. Sieges
and aerial bombardments of civilian population centres seek to break
the will of the population and of their government. Combatants, by
contrast, are almost always killed *eliminatively*--their
deaths are not used to derive a benefit that could not be had without
using them in this way; instead they are killed to solve a problem
that they themselves pose. This too seems relevant to the relative
wrongfulness of these kinds of attacks. Of course, at the strategic
level every death is intended as a message to the enemy leadership,
that the costs of continuing to fight outweigh the benefits. But at
the tactical level, where the actual killing takes place, soldiers
typically kill soldiers eliminatively, while they kill civilians
opportunistically. If this difference is morally important, as many
think, and if acts that kill civilians are opportunistic much more
often than are acts that kill soldiers, then acts that kill civilians
are, in general, worse than acts that kill soldiers. This lends
further support to Moral Distinction.
Third, as already noted above, the agent's beliefs can affect
the objective seriousness of her act of killing. Killing someone when
you have solid grounds to think that doing so is objectively
permissible wrongs that person less seriously than when your epistemic
basis for harming them is weaker. More precisely, killing an innocent
person is more seriously wrongful the more reason the killer had to
believe that she was not liable to be killed (Lazar 2015a).
Last, in ordinary thinking about the morality of war, the two
properties most commonly cited to explain the distinctive wrongfulness
of harming civilians, after their innocence, are their vulnerability
and their defencelessness. Lazar (2015c) suspects that the duties to
protect the vulnerable and not to harm the defenceless are almost as
basic as the duty not to harm the innocent. (Note that these duties
apply only when their object is morally innocent.) Obviously, on any
plausible analysis, civilians are more vulnerable and defenceless than
soldiers, so if killing innocent people who are more vulnerable and
defenceless is worse than killing those who are less so, then killing
civilians is worse than killing soldiers.
Undoubtedly soldiers are often vulnerable too--one thinks of
the "Highway of Death", in Iraq 1991, when American forces
destroyed multiple armoured divisions of the Iraqi army, which were
completely unprotected (many of the personnel in those divisions
escaped into the desert). But this example just shows that killing
soldiers, when they are vulnerable and defenceless, is harder to
justify than when they are not. Provided the empirical claim that
soldiers are less vulnerable and defenceless than civilians is true,
this simply supports the case for Moral Distinction.
### 4.4 Proportionality
Holding the principle of Moral Distinction allows one to escape the
realist and pacifist horns of the responsibility dilemma, while still
giving responsibility its due. Even revisionists who deny moderate
Combatant Equality could endorse Moral Distinction, and thereby
retain the very plausible insight that it is worse to kill just
noncombatants than to kill just combatants. And, if they are to
account for most people's considered judgements about war, even
pacifists need some account of why killing civilians is worse than
killing soldiers.
However, Moral Distinction is not Discrimination. It is a comparative
claim, and it says nothing about intentions. Discrimination, by
contrast, prohibits intentionally attacking noncombatants, except in
supreme emergencies. It is the counterpart of Proportionality, which
places a much weaker bar on unintentionally killing noncombatants.
Only a terrible crisis could make it permissible to intentionally
attack noncombatants. But the ordinary goods achieved in individual
battles can justify unintentional killing. What justifies this radical
distinction?
This is one of the oldest questions in normative ethics (though for
the recent debate, see Quinn 1989; Rickless 1997; McIntyre 2001;
Delaney 2006; Thomson 2008; Tadros 2015). On most accounts, those who
intend harm to their victims show them a more objectionable kind of disrespect
than those who unavoidably harm them as a side-effect. Perhaps the
best case for the significance of intentions is, first, in a general
argument that mental states are relevant to objective permissibility
(Christopher 1998; see also Tadros 2011). And second, we need a rich
and unified theoretical account of the specific mental states that
matter in this way, into which intentions fit. It may be that the
special prohibition of intentional attacks on civilians overstates the
moral truth. Intentions do matter. Other things equal, intentional
killings are worse than unintended killings (though some unintended
killings that are wholly negligent or indifferent to the victim are
nearly as bad as intentional killings). But the difference between
them is not categorical. It cannot sustain the contrast between a
near-absolute prohibition on one hand, and a sweeping permission on
the other.
Of course, this is precisely the kind of nuance that would be
disastrous if implemented in international law or if internalized as a
norm by combatants. Weighing lives in war is informationally
incredibly demanding. Soldiers need a principle they can apply.
Discrimination is that principle. It is not *merely* a rule of
thumb, since it entails something that is morally
grounded--killing civilians is worse than killing soldiers. But
it is *also* a rule of thumb, because it draws a starker
contrast between intended and unintended killing than is intrinsically
morally justified.
As already noted, proportionality and necessity contain within them
almost every other question in the ethics of war; we now consider two
further points.
First, proportionality in international law is markedly different from
the version of the principle that first-order moral theory supports.
At law, an act of war is proportionate insofar as the harm to
civilians is not excessive in relation to the concrete and direct
military advantage realized thereby. As noted above, in first-order
moral terms, this is unintelligible. But there might be a better
institutional argument for this neutral conception of proportionality.
Proportionality calculations involve many substantive value
judgements--for example, about the significance of moral status,
intentions, risk, vulnerability, defencelessness, and so on. These are
all highly controversial topics. Reasonable disagreement abounds. Many
liberals think that coercive laws should be justified in terms that
others can reasonably accept, rather than depending on controversial
elements of one's overarching moral theory (Rawls 1996: 217).
The law of armed conflict is coercive; violation constitutes a war
crime, for which one can be punished. Of course, a more complex law
would not be justiciable, but we also have principled grounds for not
basing international law on controversial contemporary disputes in
just war theory. Perhaps the current standard can be endorsed from
within a wider range of overarching moral theories than could anything
closer to the truth.
Second, setting aside the law and focusing again on morality, many
think that responsibility is crucial to thinking about
proportionality, in the following way. Suppose the Free Syrian Army
(FSA) launches an assault on Raqqa, stronghold of ISIL. They predict
that they will cause a number of civilian casualties in their assault,
but that this is only because ISIL has chosen to operate from within a
civilian area, forcing people to be "involuntary human
shields". Some think that ISIL's responsibility for
putting those civilians at risk allows the FSA to give those
civilians' lives less weight in their deliberations than would
be appropriate if ISIL had not used them as human shields (Walzer
2009; Keinon 2014).
But one could also consider the following: Even if ISIL is primarily
at fault for using civilians as cover, why should this mean that those
civilians enjoy weaker protections against being harmed? We typically
think that one should only lose or forfeit one's rights through
*one's own* actions. But on this argument, civilians
enjoy weaker protections against being killed through no fault or
choice of their own. Some might think that more permissive standards
apply for involuntary human shields because of the additional value of
deterring people from taking advantage of morality in this kind of way
(Smilansky 2010; Keinon 2014). But that argument seems oddly circular:
we punish people for taking advantage of our moral restraint by not
showing moral restraint. What's more, this changes the act from
one that foreseeably kills civilians as an unavoidable side-effect of
countering the military threat to one that kills those civilians as a
means to deter future abuses. This instrumentalizes them in a way that
makes harming them still harder to justify.
### 4.5 Necessity
The foregoing considerations are all also relevant to necessity. They
allow us to weigh the harms at stake, so that we can determine whether
the morally weighted harm inflicted can be reduced at a reasonable
cost to the agents. The basic structure of necessity is the same
*in bello* as it is *ad bellum*, though obviously the
same differences in substance arise as for proportionality. Some
reasons apply only to *in bello* necessity judgements, not to
*ad bellum* ones, because they are conditional on the
background assumption that the war as a whole will continue. This
means that we cannot reach judgements of the necessity of the war as a
whole by simply aggregating our judgements about the individual
actions that together constitute the war.
For example, *in bello* one of the central questions when
applying the necessity principle is: how much risk to our own troops
are we required to bear in order to minimize harms to the innocent?
Some option can be necessary simply in virtue of the fact that it
saves some of our combatants' lives. *Ad bellum*,
evaluating the war as a whole, we must of course consider the risk to
our own combatants. But we do so in a different way--we ask
whether the goods achieved by the war as a whole will justify putting
our combatants at risk. We don't then count among the goods
achieved by the war the fact that multiple actions within the war will
save the lives of individual combatants. We cannot count averting
threats that will arise only if we decide to go to war among the goods
that justify the decision to go to war.
This relates directly to the largely ignored requirement in
international law that combatants must
>
> take all feasible precautions in the choice of means and methods of
> attack with a view to avoiding, and in any event to minimizing,
> incidental loss of civilian life, injury to civilians and damage to
> civilian objects. (Additional Protocol I to the Geneva Conventions, Article 57(2)(a)(ii))
>
This has deep moral foundations: combatants in war are morally
*required* to reduce the risk to innocents until doing so
further would involve an unreasonably high cost to them, which they
cannot be required to bear. Working out when that point is reached
involves thinking through: soldiers' role-obligations to assume
risks; the difference between doing harm to civilians and allowing it
to happen to oneself or one's comrades-in-arms; the importance
of associative duties to protect one's comrades; and all the
considerations already adduced in favour of Moral Distinction. This
calculus is very hard to perform. My own view is that combatants ought
to give significant priority to the lives of civilians (Walzer and
Margalit 2009; McMahan 2010b). This is in stark contrast to existing
practice (Luban 2014).
## 5. The Future of Just War Theory
Much recent work has used either traditionalist or revisionist just
war theory to consider new developments in the practice of warfare,
especially the use of drones, and the possible development of
autonomous weapons systems. Others have focused on the ethics of
non-state conflicts, and asymmetric wars. Very few contemporary wars
fit the nation-state model of the mid-twentieth century, and conflicts
involving non-state actors raise interesting questions for legitimate
authority and the principle of Discrimination in particular (Parry
2016). A third development, provoked by the terrible failure to plan
ahead in Iraq and Afghanistan, is the wave of reflection on the
aftermath of war. This topic, *jus post bellum*, is addressed
separately.
As to the philosophical foundations of just war theory: the
traditionalist and revisionist positions are now well staked out. But
the really interesting questions that remain to be answered should be
approached without thinking in terms of that split. Most notably,
*political* philosophers may have something more to contribute
to the just war theory debate. It would be interesting, too, to think
with a more open mind about the institutions of international law
(nobody has yet vindicated the claim that the law of armed conflict
has authority, for example), and also about the role of the military
within nation-states, outside of wartime (Ryan 2016).
The collective dimensions of warfare could be more fully
explored. Several philosophers have considered how soldiers "act
together" when they fight (Zohar 1993; Kutz 2005; Bazargan
2013). But few have reflected on whether group agency is present and
morally relevant in war. And yet it is superficially very natural to
discuss wars in these terms, especially in evaluating the war as a
whole. When the British parliament debated in late 2015 whether to
join the war against ISIL in Syria and Iraq, undoubtedly each MP was
thinking also about what *she* ought to do. But most of them
were asking themselves what *the United Kingdom* ought to
do. This group action might be wholly reducible to the individual
actions of which it is composed. But this still raises interesting
questions: in particular, how should I justify my actions, as an
individual who is acting on behalf of the group? Must I appeal only to
reasons that apply to me? Or can I act on reasons that apply to the
group's other members or to the group as a whole? And can I
assess the permissibility of my actions without assessing the group
action of which they are part? Despite the prominence of collectivist
thinking in war, discussion of war's group morality is very much
in its infancy.
## 1. Life
James Ward was born in Hull, Yorkshire, on 27th January 1843, the
son of James Ward, senior, a merchant with some scientific and
philosophical ambition (in a book entitled *God, Man and the
Bible*, he tried to give a scientific explanation of the Flood).
His father's enterprises were disastrous, however, so that lack
of resources was to be a constant source of anxiety for the family,
which included an uneducated, hard-working, deeply religious mother,
six sisters and a brother (another son died).
After one of his father's bankruptcies, Ward remained for two
years without schooling or occupation. At the age of thirteen and a
half, the boy was left free to occupy his time as he wished. In the
small country town where his family had moved, Waterloo near Liverpool,
he developed a spirit of solitude and a love of nature. A
'Memoir' written by the philosopher's daughter, Olwen
Ward Campbell, enables us to get a glimpse of his life in these early
years: "Waterloo was only a village in those days, and wild
stretches of sandhills lay beyond it for many miles. Here he used to
wander for hours on end, alone with his thoughts and the sea
birds" (1927: 7).
In addition to the family's economic difficulties,
Ward's youth was shaped by the narrow Calvinistic faith in which
he was raised. In 1863 he entered Spring Hill College (later
incorporated in Mansfield College, Oxford) with the aim of becoming a
minister, but became increasingly skeptical of the doctrines he was
expected to preach. In the middle of his religious crisis, Ward moved
to Germany for a temporary period of study. In Göttingen, he
attended lectures by one of the most influential philosophers of the
time, Rudolf Hermann Lotze. It is probably from Lotze that Ward gained
a first substantial appreciation of the philosophy of Leibniz, whose
theory of monads was later to provide the basis of his own
metaphysics.
Upon returning to England, Ward made a sincere yet vain attempt to
retain his faith and accepted a pastorate in Cambridge. His spiritual
crisis reached its climax and the point of no return in 1872, when he
definitively abandoned the project of becoming a minister and applied
as a non-collegiate student at Cambridge. One year later, he won a
scholarship in Moral Science at Trinity College.
Important lessons were learned in the anguish of these vicissitudes
and deliberations. In a letter to a friend, Ward writes: "I am
coming to see more clearly every day that man is only half free, or
rather that his freedom is not what I once thought it. It is not the
power to choose anything, but only the power to choose between
alternatives offered, and what these shall be circumstances determine
quite as much as will" (59). Surely, this insight owes more to the
struggle of existence than to abstract speculation. Interestingly,
Ward's mature philosophy comprises a defense of human freedom and
sustains the idea that--independently of circumstances outside of
our control--we always retain a capacity for responsible
action.
Ward made the best out of the difficult challenge of beginning a new
life at the age of thirty. In 1874 he obtained a first in his Tripos;
one year later, he won a fellowship with a dissertation entitled
'The Relation of Physiology to Psychology,' part of which
was published in the first issue of *Mind* in 1876. These
successes marked the beginning of a distinguished career. He first
became known for his article on 'Psychology' (1886) in the
*Encyclopaedia Britannica*, in which he criticised Mill and
Bain's associationism, and in the years 1896-98 he delivered the
prestigious Gifford Lectures, later published under the title
*Naturalism and Agnosticism* (1899). In this book, Ward attacked
the various forms of scientific materialism that were current in the
second half of the nineteenth century. The lectures were well received
by his contemporaries and there was a widespread agreement that Ward
had been effective in his refutation. According to Alfred Edward
Taylor, who reviewed the book in *Mind* in 1900, "one may
assert without much fear of contradiction that Prof. Ward's
Gifford Lectures are the philosophical book of the last year"
(Taylor 1900: 244).
G. E. Moore and Bertrand Russell were among Ward's students at
Cambridge, where he had been elected to the new Chair of Mental
Philosophy and Logic in 1897. Moore has left a vivid testimony of his
personality and teaching style; as he recalls,
>
>
> We sat around a table in his rooms at Trinity, and Ward had a large
> notebook open on the table before him. But he did not read his
> lectures; he talked; and, while he talked, he was obviously thinking
> hard about the subject he was talking of and searching for the best way
> of putting what he wanted to convey. (Moore 1942: 17)
Moore describes Ward as a melancholy personality, overwhelmed by the
difficulties of philosophical thinking--'*Das Denken ist
schwer*' ('thinking is hard') was one of his favorite sayings (16). And while
Moore was also impressed by Ward's 'extreme sincerity and
conscientiousness' (18), Russell singles him out as one for whom
he had a 'great personal affection' (Russell 1946: 10).
Ward's reputation as a philosopher must have been very high, for he
was invited to deliver a second series of Gifford Lectures. These
were held in the years 1907-1910 and published in 1911
as *The Realm of Ends or Pluralism and Theism*; it is in this
book that he provides the most complete exposition of his idealistic
metaphysics.
## 2. Metaphysics
Ward reached his metaphysics--a theory of interacting
monads--by way of criticism of the two main philosophical
tendencies of his day, scientific materialism and
'absolute' (or 'monistic') idealism. The
critique of materialism is developed at considerable length in the
first series of Gifford Lectures, *Naturalism and Agnosticism*.
Ward observes here that the view of reality as a system of inert
material particles subject to strict deterministic laws cannot explain
the world's contingency, a fundamental aspect of experienced
reality that needs to be taken at face value. Moreover, materialism
fails to provide an adequate ontology for biology, for it cannot
explain the emergence of life from lifeless matter. The doctrine is
also inconsistent with the latest development of physics, which is
moving away from the seventeenth-century Democritean conception of the
atom as an inert particle--a microscopic, indivisible
'thing.'
Ward's overall conclusion is that materialism is so beset with
difficulties that the real question is not whether it is true, but why
it has come to be believed in the first place. Ward traces the origin
of this doctrine to a tendency to confound abstractions with concrete
realities. At the beginning of inquiry, the scientist is faced with a
concrete whole of experience, but takes notice only of some aspects of
it. This is wholly justified, but errors are likely to be committed if
he overlooks the richness of the empirical basis from which his notions
are derived. He is then apt to "confound his descriptive
apparatus with the actual phenomena it is devised to describe"
(1899: Vol. 1, 81). Materialism is the metaphysics of the scientist
lacking understanding of his own procedure.
These reflections led Ward to the idealistic conclusion that reality
must be interpreted 'in terms of Mind.' Absolute
idealism--the then widespread view that reality is a cosmic
consciousness or single 'Experience'--will not do, for
Ward regards it as explanatorily vacuous. The theory that the world of
finite things is but the 'appearance' of the One does
nothing to make the nature of the world more intelligible. Why does the
One appear in the way it does? And why does it have to appear at all?
These questions find no answer in the works of great monistic thinkers
such as Spinoza and Hegel, nor do their British followers fare any better
in this respect. Ward substantiates this claim with a quotation from
Bradley's *Appearance and Reality* (1893), where he admits
that the 'fact of fragmentariness,' that is, why the One
appears in the form of a multiplicity, cannot be explained.
The upshot of these two parallel lines of argument--the
critique of materialism and Absolute idealism--is that some form
of pluralistic idealism must be true. And it is only natural at this
point to turn to Leibniz, and especially so for a thinker so well
acquainted with German philosophy. According to Ward, however,
Leibniz's metaphysics needs to be amended in one fundamental
respect:
>
>
> ...the well-known *Monadology* of Leibniz may be taken
> as the type, to which all modern attempts to construct a pluralistic
> philosophy more or less conform. But the theology on which Leibniz from
> the outset strove to found his *Monadology*, is, in the first
> instance at all events, set aside; and in particular his famous
> doctrine of pre-established harmony is rejected altogether. (1911:
> 53-54)
Ward's chief objection is that pre-established harmony fails to
leave any room for contingency and genuine novelty. In
Leibniz's system, evolution can only be interpreted as
'preformation,' the gradual unfolding of what in a
compressed form is already present at the beginning. This is
inconsistent with a correct understanding of evolution, which involves
'epigenesis'--the coming into being of unexpected
facts.
Interestingly, the same criticism is advanced against philosophies
such as Hegel's that conceive of natural history as the
'externalization' of a primordial 'Idea.' Ward
notices, however, that Hegel's description of nature--as
'a bacchantic God' (148), and as providing 'the
spectacle of a contingency that runs out into endless detail'
(139)--is insightful and wholly in line with his own; indeed, he
goes so far as to say that 'there is a strong undercurrent of
[Leibnizian] pluralism running through the whole of Hegel's
philosophy.' (159)
Part of Leibniz's motivation for holding to pre-established
harmony was the alleged impossibility of making sense of
*transeunt* causation--that is, causation understood as a
direct relation between distinct monads (as opposed to
*immanent* causation, which holds between successive stages of a
monad's life). If direct interaction is impossible, how can one
account for the correlations between things except by assuming that they
had been established by a supreme Architect? In the hands of Leibniz,
this surprising view as to the nature of causation becomes the
foundation of an original proof of God's existence.
Ward attacks Leibniz's reasoning at its very basis. Causal
interaction was understood by Leibniz in terms of the doctrine of
*physical influx*. On that model, direct causal interaction is
rejected because properties cannot exist except as inhering in
substances; in passing from one substance to another, however, the
communicated property would have to exist in a detached condition,
which seems absurd. Could this simple piece of reasoning suffice to
reject what we seem to experience all the time, namely our power to act
directly upon external realities and to be immediately affected by
them? In his answer to Leibniz, Ward remarks that the theory of
*physical influx* categorizes the terms of a causal relation as
substances in which qualities inhere; on this theory, even the monads
are therefore conceived as if they were things. This involves a special
sort of category-mistake: 'Monads'--Ward
says--"are conative, that is are feeling and striving
subjects or persons in the widest sense, not inert particles or
things" (260).
In other words, since monads are not 'things' but
'subjects,' the argument against direct monadic
interaction is blatantly invalid. As Ward puts it: "If the
Leibnizian assumption, that there are no beings devoid of perception
and spontaneity... is otherwise sound, then the objections to
transeunt action between things become irrelevant" (219). Hence,
in a polemical reversal of Leibniz's famous metaphor, he concludes
that 'all monads have windows' (260).
## 3. Natural Philosophy
If pre-established harmony fails to account for the world's
contingency, it must be asked whether Ward's theory of
interacting monads can explain nature's order and regularity. The
element of contingency that pervades the world is immediately accounted
for by the assumption that the ultimate constituents of reality are
'living' subjects, but how do orderly processes emerge?
Ward formulates this question as follows: "Can we conceive such
an interaction of spontaneous agents..., taking on the appearance
of mechanism?" (1927: 239).
The solution consists in interpreting nature after a social analogy.
The following thought-experiment illustrates the idea: "Let us
imagine a great multitude of human beings, varying in tastes and
endowments as widely as human beings are known to do, and let us
suppose this multitude suddenly to find themselves, as Adam and Eve
did, in an ample Paradise enriched sufficiently with diverse natural
resources to make the attainment of high civilization possible"
(1911: 54). It is obvious to Ward that such individuals would
gradually achieve a hierarchical, specialized form of social
organization. To a hypothetical external observer, the initial phases
of the new life of the selected individuals will appear utterly
chaotic, but in due course order and regularity would emerge:
"...in place of an incoherent multitude, all seemingly
acting at random, we should have a social and economic organization,
every member of which had his appropriate place and function"
(55-56).
It is after this fashion that nature should be understood. Consider
an apparently inert object such as a piece of rock. This is not an
aggregate of material atoms, but a society of active subjects. Starting
out from a state of mutual isolation, Ward speculates, a plurality of
monads might end up finding a satisfactory *modus vivendi*.
These monads might then continue to stick together--either because
they are unwilling to break out of the social structures in which they
are embedded, or because they have become unable to do so. To clarify
this concept, Ward appeals to an ingenious simile:
>
>
> ...as there are some individuals who are restless, enterprising
> and inventing to the end of their days, so there are others who early
> become supine and contented, the slaves of custom and let well alone.
> The simpler their standard of well-being and the less differentiated
> their environment the more monotonous their behavior will be and the
> more inert they will appear. (60)
Thus, monads might get imprisoned within a form of life in the same
way in which persons might get imprisoned in their habits. While the
more energetic monads will try to escape routine, others will acquiesce
in it. It is the passivity and repetitiveness of the monad's
behavior that generates 'the appearance of mechanism.'
In spite of the rejection of pre-established harmony, this
conception still owes much to Leibniz's teaching. On Ward's
view, the physical objects apprehended in ordinary experience are the
appearance to us of a structured conglomerate, or societal whole, of
living subjects (1927: 239). Such appearances are not illusions. Our
immediate sensations fail to reproduce the object's real internal
constitution, yet they do point toward realities actually existing
independently of the perceiving subject. Ward's metaphysical
idealism thus comes with a form of epistemological realism, while his
conception of every-day physical objects is virtually identical with
Leibniz's notion of *phenomena bene fundata*.
Stated in Kantian terminology, physical objects are the appearances to
us of things whose *noumenal* being or intrinsic nature is
mental or experiential. According to Ward (1911: 392), Kant too was
forcefully drawn to this position, even in his critical phase and in
spite of his notorious claim that *noumena* lie outside the
categories of the understanding and sensible intuition.
Ward's social account of the natural world provides the
foundation for an original account of natural laws that he conceived of
as *statistical*, *evolved* and *transient*. In
the first place, in a universe that is ultimately social, natural laws
must be akin to economic or anthropological laws; as such, they
hold for groups of monads rather than for individual ones and are
statistical generalizations rather than eternally fixed decrees.
Secondly, natural laws are to be regarded as the products of evolution
because they record the habitual behavior of groups of socialized
monads, and habits have to be acquired or 'learned' over
time. Thirdly, it is a historical platitude that societies emerge and
decay. Since the natural laws dominating a given epoch depend upon that
epoch's social order, natural laws cannot be regarded as
immutable; like all finite realities, they too are likely to change in
the course of cosmic history.
Another notable Cambridge thinker and Leibnizian scholar, Charles
Dunbar Broad, nicely summarized Ward's conception with the
following words:
>
>
> ...it seems quite impossible to explain the higher types of
> mental fact materialistically, whilst it does not seem impossible to
> regard physical and chemical laws as statistical uniformities about
> very large collections of very stupid minds. (Broad 1975: 169)
Ward's account might be profitably compared with a similar one
provided by Charles Sanders Peirce in his important essay of 1891,
'The Architecture of Theories.' Here, Peirce advocates the
view that "the only intelligible theory of the universe is that
of objective idealism, that matter is effete mind, inveterate habits
becoming physical laws" (153). Furthermore, he contends that
"the only possible way of accounting for the laws of nature and
for uniformity in general is to suppose them results of evolution. This
supposes them not to be absolute, not to be obeyed precisely. It makes
an element of indeterminacy, spontaneity, or absolute chance in
nature" (148).
Despite the analogies, a major point of disagreement is
Peirce's *tychism*--the doctrine hinted at in the
just quoted passage that 'absolute chance' (*tyche*)
is of the essence of the universe. The disagreement is not as to
whether there is a level of reality that escapes external
determination--a striking anticipation of Heisenberg's
indeterminacy principle that both Peirce and Ward endorse. The question
is as to how that most fundamental level is to be understood. In a
passage that can only be interpreted as a reply to Peirce, Ward
writes:
>
>
> Some pluralists, very ill advisedly as I think, have identified this
> element [contingency, what is unpredictable in the world] with pure
> chance and even proposed to elevate it to the place of a guiding
> principle under the title of 'tychism' [...] But every
> act of a conative agent is determined by--what may, in a wide
> sense, be called--a motive, and motivation is incompatible with
> chance, though in the concrete it be not reducible to law. (1911:
> 76)
This critique rests upon the argument that--since monads are
'subjects' in the full sense of the term--they cannot
act at random; specifically, monads act either in order to secure
self-preservation or to achieve self-realization. This doctrine is
phenomenologically well-grounded. Our own psychic life provides us with
an immediate grasp of the inner life of a monadic creature, and it is
clear to Ward that we do experience ourselves as motivated in this
way.
In sum, for Ward it is the 'teleological' that grounds
the 'mechanical.' A monad always aims at self-preservation
and self-realization; this is the simplest way in which the
monad's *conatus*--or will to live--manifests
itself. At the same time, self-preservation and self-realization are
two fundamental metaphysical parameters, the defining features of each
monad's 'character.' If the need for
self-preservation predominates, the monad will acquiesce in its
acquired status. These are Broad's 'stupid minds,' and they
represent the conservative element in nature. *Per contra*,
'intelligent' monads actively seek self-realization. Their
importance in the cosmic scheme cannot be overestimated; ultimately,
it is their craving for novelty that powers evolution and prevents
history from coming to a halt.
## 4. Philosophy of Mind
Ward's theory of monads is a radical form of panpsychism; on
this view, experience is not merely ubiquitous--it is all there
truly is. In the last decades of the nineteenth century, the debate on
panpsychism was launched by the publication in 1878 of William Kingdon
Clifford's 'On the Nature of Things-in-Themselves.'
This essay was perfectly attuned to the *Zeitgeist*, since it
linked panpsychism with evolutionary theory. Clifford appeals to the
non-emergence argument: the transition from inert matter to conscious
beings would be unintelligible, a kind of unfathomable *creatio ex
nihilo*; hence, we are forced to conclude that
'experience' is an intrinsic, aboriginal feature of
matter.
Ward apparently agrees with the general drift of this argument. In
an essay significantly entitled 'Mechanism and Morals,' he
explains that one need not fear the theory of evolution; this does not
degrade either consciousness or human life, but leads to a
spiritualized view of matter:
>
>
> It is interesting... to notice that in the support which it
> lends to pampsychist [*sic*] views the theory of evolution seems
> likely to have an effect on science the precise opposite of that which
> it exercised at first. That was a leveling down, this will be a
> leveling up. At first it appeared as if man were only to be linked with
> the ape, now it would seem that the atom, if reality at all, may be
> linked with man. (1927: 247)
Ward's endorsement of Clifford's evolutionary argument,
however, points towards a tension in his philosophy. If evolution means
'epigenesis'--the coming into being of what was not
potentially present at the beginning--it is not clear why Ward
should be worried by the generation of mind out of matter. Conscious
existence might well be one of the novelties evolution is capable of
producing.
Ward does, at any rate, vehemently reject Clifford's specific
version of panpsychism. Clifford speculated that each atom of matter
was associated with a quantum of experience, a small piece of
'mind-stuff.' This piece of 'mind-stuff' was
conceived as an atom of experience, amounting to something less than a
complete thought or feeling. Clifford also held that thoughts and
feelings could be constituted simply by way of combination. "When
molecules are so combined as to form the brain and nervous system of a
vertebrate," he wrote, "the corresponding
elements of mind-stuff are so combined as to form some kind of
consciousness." Analogously, "when matter takes the complex
form of a living human brain, the corresponding mind-stuff takes the
form of a human consciousness, having intelligence and volition"
(65). In this way, Clifford believed, it would be possible to explain
why the genesis of complex material structures in the course of
evolution is accompanied by the parallel emergence of higher forms of
sentience and mental activity.
The theory is best regarded as a form of psycho-physical
parallelism; this is, at all events, how Ward interprets it in
*Naturalism and Agnosticism*, just before subjecting
'Clifford's wild speculations concerning mind-stuff'
(1899: Vol. 2, 12) to a devastating critique.
Ward criticizes Clifford's theory on three main
counts. (1) In the first place, the whole theory presupposes a
point-to-point correspondence between matter and mind. Clifford's
conception of an atom, however, is obsolete: 'if the speculations
of Lord Kelvin and others are to be accepted, and the prime atom itself
is a state of motion in a primitive homogeneous medium [the ether],
what is the mental equivalent of this primordial medium?' (114)
In other words, Clifford's theory couples material atoms with
simple ideas. But if the atom is not a simple particle but a state of
an underlying substance, it is not so easy to see what precisely in our
consciousness could correspond to it.
(2) Secondly, Clifford's views--'a maze of
psychological barbarism' (15)--totally misrepresent the
nature of human consciousness. It is inconceivable how there could be
experiences apart from subjects of experiences or larger wholes of
consciousness, such as the smallest pieces of mind-dust would have to
be: "Nobody bent on psychological precision would speak of ideas
as either conscious or intelligent, but still less would he speak of
ideas existing in isolation apart from, and prior to, a consciousness
and intelligence" (15-16). Clifford fails to recognize the
fundamental psychological fact of the unity of consciousness; the
psychical field is not made up of simple sensations in the way in which
a mosaic-picture is made up of little stones.
(3) Lastly, the mind-dust theory faces its own problem of emergence.
Why should the transition from inert matter to consciousness be
regarded as more mysterious than the transition from little pieces of
consciousness to the single, unified consciousness of a human being?
"Allowing that it [mind-dust] is not mind, he [Clifford] makes no
attempt to show how from such dust a living mind could ever
spring" (15). Thus, Clifford's theory has no explanatory
advantage over competitor accounts of the mind-body relation; there is
still a 'gap' that needs to be closed.
One easily sees that Ward's theory of monads does not face any
of these difficulties. (1) The correlation between the psychical and
the physical is not a one-to-one correspondence between discrete
physical atoms and discrete psychical atoms, but is explained in terms
of a Leibnizian theory of 'confused perception.' (2) There
are no 'subjectless' experiences, but all experienced
contents are 'owned' by the monads, each of which is a
genuine unity that contains distinct perceptual
contents--Leibniz's '*petites
perceptions*' (1911: 256)--but is not literally
'made up' of them. The relationship between the mind and
the monads in the body, Ward also remarks in this connection (196), is
not that of a whole to its parts but that of a dominant to its
subordinate monads. (3) And lastly, there are no mysterious transitions
in Ward's theory. The growth of the mind--from the inchoate
experiences of infants to the relatively sophisticated experiences of
adult human beings--shows that 'lower' states are
superseded by 'higher' ones thanks to intersubjective,
social intercourse.
Hence, Ward sums up his discussion by claiming the superiority of a
Leibnizian approach over Clifford's mind-stuff theory:
>
>
> Had he [Clifford] followed Leibniz instead, he could have speculated
> as to simple minds to his heart's content, but would never have
> imagined that absurdity, 'a piece of mind-stuff,' to which
> his fearless and logical interpretation of atomistic psychology had led
> him; he would never have imagined that ... mind ... could be
> described in terms that have a meaning only when applied to the
> complexity of material structure. (1899: Vol. 2, 16)
Nevertheless, Ward's theory does leave several questions open.
In the first place, if our conscious apprehension of the external world
is mediated by our direct apprehension of the monads constituting the
body and more specifically the brain--as Ward contends (1911:
257-258)--why are we not aware of our neurons? Secondly,
Ward's theory escapes the composition-problem, but this is now
replaced by the problem of monadic interaction. How does the dominant
monad influence the lower ones and rule over them? Thirdly, there is a
question as to the nature of space. Granted that direct monadic
interaction is real, where does it occur? Interaction would seem to
require a common dimension or medium in order to be possible at all;
what kind of dimension could host spiritual beings such as the monads
are supposed to be?
Disappointingly enough, on all these crucial issues Ward has nothing
to say. In the absence of a satisfactory treatment of these topics,
however, his theory becomes intrinsically unstable, as further
reflection might show that the positions just reached will have to be
abandoned.
## 5. Natural Theology
As in the rest of his philosophy, evolutionary theory also looms large
in Ward's natural theology. The existence of God, Ward frankly
recognizes, cannot be proved. However, a theory of interacting monads
cannot guarantee that the universe will keep moving towards a state of
increasing harmony and cohesion: "without such a spiritual
continuity as theism alone seems able to ensure," Ward remarks,
"it looks as if a pluralistic world were condemned to a
Sisyphean task" (215). At the same time, the worldly monads need
the leadership of a supreme agent if harmony is to become widespread
across the entire universe. Accordingly, Ward's main problem is to
understand how God can be related to the world of monads and guide
them towards the realization of societies of ever-increasing
perfection.
It is clear to Ward that there must be an ontological gap between
God and the monads. God cannot be a *primus inter
pares*--only a monad among other monads (as Leibniz's
language sometimes suggests). Since God is the world's *ratio
essendi*--its ontological foundation, rather than its creator
in time--He must be regarded as altogether transcendent. At the
same time, since God has to lure the evolutionary process, He cannot
merely maintain the world in existence, but must somehow actively enter
into it; His relation to the world must therefore involve an aspect of
immanence. This gives rise to a dilemma: how could God be both immanent
and transcendent? Ward can do no better than to provide an analogy in
which God is compared with a Cosmic Artist:
>
>
> We may discern perhaps a faint and distant analogy... in what
> we are wont to style the creations of genius... the immortal works
> of art, the things of beauty that are a joy forever, we regard
> as... the spontaneous output of productive imagination, of a free
> spirit that embodies itself in its work, lives in it and loves it.
> (238-239)
Needless to say, this analogy is too 'faint and distant'
to be of any real help.
Ward strikes an original note in his attempt to accommodate the
reality of God with epigenesis. Since he believes that novelties could
only be generated by the interplay of genuinely free agents, he
conceives of God as limiting himself in the exercise of his power, so
as to let the monads free: "Unless creators are created,"
he says, "nothing is really created" (437). Having
renounced omnipotence, God cannot have knowledge of a future he no
longer controls. This means that the notion of divine
omniscience--which in the case of Leibniz takes the form of the
doctrine of complete concepts--must be abandoned. God lacks
knowledge of future contingents; nevertheless, he is aware of all the
possibilities open to a monad at any moment, so that the future will
never come to him as unexpected: "God, who knows both tendencies
and possibilities completely, is beyond surprise and his purpose
beyond frustration" (479). Clearly, such a God cannot be the
perfect, immutable God of Leibniz and of classical theism, but is a
developing God whose knowledge of the world increases with the
unfolding of cosmic history. How does such a God lure the world
towards greater perfection? This question too is left unanswered;
always candid, Ward acknowledges that God's "*modus
operandi*...is to us inscrutable" (479).
## 6. Legacy
Ward's metaphysics has had very little impact upon later
Anglo-American philosophy, but it was not without influence in the
Cambridge philosophical world in the first two or three decades of the
twentieth century. Bertrand Russell explicitly recognizes his
indebtedness to his teacher in his scholarly study of 1900, *A
Critical Examination of the Philosophy of Leibniz*. Although he
complained that Ward's philosophy was 'dull and
antiquated' in a letter to Ottoline Morrell (Griffin 1991: 39), it
is arguable that the influence ran deeper, as Russell cultivated an
interest in those questions at the verge between natural philosophy,
philosophy of mind and metaphysics that so much mattered to Ward. This
becomes clear as soon as one considers that he is not solely the
celebrated author of 'On Denoting' (1905) and *The
Principles of Mathematics* (1903), but also of *The Analysis of
Mind* (1921) and *The Analysis of Matter* (1927).
Other Cambridge thinkers who might have been influenced by Ward are
John Ellis McTaggart, C. D. Broad, and Alfred North Whitehead. In
*The Nature of Existence* (1921/1927) McTaggart developed a
system of idealistic metaphysics that bears some resemblance to
Leibniz's theory of monads. Broad says in his
'Autobiography' (1959) that his idealist professor had
'little influence' on his thought (50), yet he favorably
(and acutely) reviewed Ward's *The Realm of Ends* and
devoted an important study to Leibniz's philosophy. Arguably,
the greatest ascertained influence was exerted on Alfred North
Whitehead; the system of metaphysics that he expounded in *Process
and Reality* (1929) incorporates all the crucial ideas of
Ward's philosophy of nature, while his conception of the
God-World relationship is a way of bringing to completion Ward's
fragmentary account. (Basile 2009: 61-62, 142-43) One
might reasonably expect that, outside of Cambridge, his philosophy
should have been taken into serious consideration by thinkers with an
interest in *Naturphilosophie*, and especially so by a
philosopher usually referred to as an 'idealist' such as
Robin G. Collingwood, but there seems to be no documentary evidence of
this. American process philosopher and theologian Charles Hartshorne,
whose speculative system approximates to Whitehead's, displays
in print knowledge of Ward's works (1963: 19).
Ward's philosophy is little known today and scholarly
discussions of his thought are scanty. In the widely read *A
Hundred Years of Philosophy*, John Passmore compares it
unfavourably with McTaggart's: "In McTaggart's case,
the difficulty is to give a summary account of a highly intricate
pattern of argument; in Ward's case, to decide what he really
meant to say on the issues of central philosophical importance"
(82). In an age in love with logical technicalities, this is like
putting a heavy stone on the philosopher's grave.
Passmore's judgment is not entirely justified; the essentials
of Ward's philosophical *credo* are clear enough. The
reasons for the oblivion into which his thought has fallen are historical
and have much to do with a general shift of philosophical concerns, as
he belonged to a philosophical world that was swept away *as a
whole* by the mounting tide of linguistic philosophy--a world
in which other towering figures were, just to mention a few names,
forgotten but truly exceptional personalities like Francis Herbert
Bradley, Bernard Bosanquet, Samuel Alexander, and George Santayana. It
is no mere accident that Ward's chair was inherited by Moore, and
then by Wittgenstein. Nevertheless, it would be a mistake to regard his
philosophy as little more than a relic from the past. The question
concerning the mind's place in nature is still at the very centre of
philosophical debate; panpsychism has been recently recognized as a
hypothesis worth contemplating by acclaimed philosophers such as Nagel
(1979) and Chalmers (1996); more surprisingly perhaps, the idea that
the riddle of the universe might be solved by a theory of interacting
monads has resurfaced in recent 'revisionary' work by Galen
Strawson (2006: 274).
At a deeper level, Ward's thought is admirable for his sincere
attachment to the ideal of philosophical speculation as a non-dogmatic
source of orientation. His belief in the creative powers and
'freedom' of the monad--which he defended from
Peircian tychism, scientistic determinism and traditional
theism--implicitly grounds an ethics of responsibility, while his
evolutionary theism (a form of *meliorism*, rather than of
*optimism*) inspires a confident yet not naive attitude towards
life and its challenges. Surely, Ward's own
vicissitudes--from debilitating poverty to public recognition as
Cambridge Professor of Mental Philosophy--are reflected in his
grand vision of reality as comprising an indefinite number of conative
subjects obstinately striving to achieve self-realization and social
harmonization, sustained in this only, as he was wont to say, by
'faith in light to come' (1927: 66). |
## 1. Introduction
Arthur has the measles and stays in a closed environment with his
sister Mafalda. If Mafalda ends up contracting the measles herself
because of her staying in close contact with Arthur, there must be
something--the measles virus--which is transmitted from
Arthur to Mafalda in virtue of a relation--the relation of
staying in close contact with one another--instantiated by the
two siblings. Epistemologists have been devoting their attention to
the fact that *epistemic* properties--like being justified
or known--are often transmitted in a similar way. An example (not
discussed in this entry) is the transmission of knowledge via
testimony. A different but equally important phenomenon (which
occupies center stage in this entry) is the transmission of epistemic
justification across an inference or argument from one proposition to
another.
Consider proposition \(q\) constituting the conclusion of an argument
having proposition \(p\) as premise (where \(p\) can be a set or
conjunction of propositions). If \(p\) is justified for a subject
\(s\) by evidence \(e\), this justification may transmit to \(q\) if
\(s\) is aware of the entailment between \(p\) and \(q\). When this
happens, \(q\) is justified for \(s\) *in virtue of* her
justification for \(p\) based on \(e\) and her knowledge of the
inferential relation from \(p\) to \(q\). Consider this example from
Wright (2002):
*Toadstool*
\(\rE\_1\).
Three hours ago, Jones inadvertently consumed a large risotto of
Boletus Satana.
\(\rP\_1\).
Jones has absorbed a lethal quantity of the toxins that toadstool
contains.
Therefore:
\(\rQ\_1\).
Jones will shortly die.
Here \(s\) can deductively infer
\(\rQ\_1\)
from
\(\rP\_1\)
given background information.[1]
Suppose \(s\) acquires justification for \(\rP\_1\) by learning
\(\rE\_1\).
\(s\) will also acquire justification for \(\rQ\_1\) in virtue of her
knowledge of the inferential relation from \(\rP\_1\) to \(\rQ\_1\) and
her justification for
\(\rP\_1\).[2]
Thus, \(s\)'s justification for \(\rP\_1\) will transmit to
\(\rQ\_1\).
It is widely recognized that epistemic justification sometimes
*fails* to transmit across an inference or argument. In
interesting cases of transmission failure, \(s\) has justification for
believing \(p\) and knows the inferential link between \(p\) and
\(q\), but \(s\) has no justification for believing \(q\) *in
virtue of* her justification for \(p\) and her knowledge of the
inferential link. Here is an example from Wright (2003). Suppose
\(s\)'s background information entails that Jessica and Jocelyn
are indistinguishable twins. Consider this possible reasoning:
*Twins*
\(\rE\_2\).
This girl looks just like Jessica.
\(\rP\_2\).
This girl is actually Jessica.
Therefore:
\(\rQ\_2\).
This girl is not Jocelyn.
In *Twins*
\(\rE\_2\)
can give \(s\) justification for believing
\(\rP\_2\)
only if \(s\) has *independent* justification for believing
\(\rQ\_2\)
in the first instance. Suppose that \(s\) does have independent
justification for believing \(\rQ\_2\), and imagine that \(s\) learns
\(\rE\_2\). In this case \(s\) will acquire justification for believing
\(\rP\_2\) from \(\rE\_2\). But it is intuitive that \(s\) will acquire
no justification for \(\rQ\_2\) *in virtue of* her justification
for believing \(\rP\_2\) based on \(\rE\_2\) and her knowledge of the
inferential link between \(\rP\_2\) and \(\rQ\_2\). So, *Twins*
instantiates transmission failure when \(\rQ\_2\) is independently
justified.
An argument incapable of transmitting to its conclusion a
*specific* justification for its premise(s)--e.g., a
justification based on evidence \(e\)--may be able to transmit to
the same conclusion a different justification for its
premise(s)--e.g., one based on different evidence \(e^\*\).
Replace, for instance,
\(\rE\_2\)
with
\(\rE\_2^\*\).
This girl's passport certifies she is Jessica.
in *Twins*.
\(\rE\_2^\*\)
can provide \(s\) with justification for believing
\(\rP\_2\)
even if \(s\) has no independent justification for
\(\rQ\_2\).
Suppose then that \(s\) has no independent justification for
\(\rQ\_2\), and that she acquires \(\rE\_2^\*\). It is intuitive that
\(s\) will acquire justification from \(\rE\_2^\*\) for \(\rP\_2\) that
does transmit to \(\rQ\_2\). Now the inference from \(\rP\_2\) to
\(\rQ\_2\) instantiates epistemic transmission.
Although many of the epistemologists participating in the debate on
epistemic transmission and transmission failure speak of transmission
of *warrant*, rather than *justification*, almost all of
them use the term 'warrant' to refer to some kind of
epistemic justification (in
Sect. 3.3
we will however consider an account of warrant transmission failure
in which 'warrant' is interpreted differently). Most
epistemologists investigating epistemic transmission and transmission
failure--e.g., Wright (2011, 2007, 2004, 2003, 2002 and 1985),
Davies (2003, 2000 and 1998), Dretske (2005), Pryor (2004), Moretti
(2012) and Moretti & Piazza (2013)--broadly identify the
epistemic property capable of being transmitted with
*propositional*
justification.[3]
Only a few authors explicitly focus on transmission of
*doxastic* justification--e.g., Silins (2005), Davies
(2009) and Tucker (2010a and 2010b
[OIR]).
In this entry we follow the majority in discussing transmission and
transmission failure of justification as phenomena primarily
pertaining to propositional justification. (See however the supplement
on
Transmission of Propositional Justification *versus* Transmission of Doxastic Justification.)
Epistemologists typically concentrate on transmission of
(propositional or doxastic) justification across *deductively
valid* (given background information) arguments. The fact that
justification can transmit across deduction is crucial for our
cognitive processes because it makes the advancement of
knowledge--or justified belief--through deductive reasoning
possible. Suppose evidence \(e\) gives you justification for believing
hypothesis \(p\), and you know that \(p\) entails another proposition
\(q\) that you haven't directly checked. If the justification
you have for believing \(p\) transmits to its unchecked prediction
\(q\) through the entailment, you acquire justification for believing
\(q\) too.
Epistemologists may analyze epistemic transmission across
*inductive* (or *ampliative*) inferences too. Yet this
topic has received much less attention in the literature on epistemic
transmission. (See however interesting remarks in Tucker 2010a.)
In the remaining part of this entry we will focus on transmission and
transmission failure of propositional justification across deductive
inference. Unless differently specified, by 'epistemic
justification' or 'justification' we always mean
'propositional justification'.
## 2. Epistemic Transmission
As noted above, \(s\)'s justification for \(p\) based on evidence \(e\)
transmits across entailment from \(p\) to \(p\)'s consequence
\(q\) whenever \(q\) is justified for \(s\) in virtue of \(s\)'s
justification for \(p\) based on \(e\) and her knowledge of
\(q\)'s deducibility from \(p\). This initial characterization
can be distilled into three conditions individually necessary and
jointly sufficient for epistemic transmission:
>
>
> \(s\)'s justification for \(p\) based on \(e\) transmits to
> \(p\)'s logical consequence \(q\) if and only if:
>
>
>
> (C1)
> \(s\) has justification for believing \(p\) based on \(e\),
> (C2)
> \(s\) knows that \(q\) is deducible from \(p\),
> (C3)
> \(s\) has justification for believing \(q\) *in virtue of*
> the satisfaction of
> (C1)
> and
> (C2).
>
>
Condition
(C3)
is crucial for distinguishing *transmission* of justification
across (known) entailment from *closure* of justification
across (known) entailment. Saying that \(s\)'s justification for
believing \(p\) is closed under \(p\)'s (known) entailment to
\(q\) is saying that:
>
>
> If
>
>
>
> (C1)
> \(s\) has justification for believing \(p\) and
> (C2)
> \(s\) knows that \(p\) entails \(q\), then
> (C3\(^{\textrm{c}}\))
> \(s\) has justification for believing \(q\).
>
>
One can coherently accept the above principle--known as the
*principle of epistemic closure*--but
deny a corresponding *principle of epistemic transmission*,
cashed out in terms of the conditions outlined above:
>
>
> If
>
>
>
> (C1)
> \(s\) has justification for believing \(p\) and
> (C2)
> \(s\) knows that \(p\) entails \(q\), then
> (C3)
> \(s\) has justification for believing \(q\) *in virtue of*
> the satisfaction of (C1) and (C2).
>
>
The *principle of epistemic closure* just requires that when
\(s\) has justification for believing \(p\) and knows that \(q\) is
deducible from \(p\), \(s\) have justification for believing \(q\). In
addition, the *principle of epistemic transmission* requires
this justification to be had in virtue of her having justification for
\(p\) and knowing that \(p\) entails \(q\) (for further discussion see
Tucker 2010b
[OIR]).
Another important distinction is the one between the *principle of
epistemic transmission* and a different principle of transmission
discussed by Pritchard (2012a) under the label of *evidential
transmission principle*. According to it,
>
>
> If \(s\) perceptually knows that \(p\) in virtue of evidence set
> \(e\), and \(s\) competently deduces \(q\) from \(p\) (thereby coming
> to believe that \(q\) while retaining her knowledge that \(p)\), then
> \(s\) knows that \(q\), where that knowledge is sufficiently supported
> by \(e\). (Pritchard 2012a: 75)
>
>
>
Pritchard's principle, to begin with, concerns (perceptual)
knowledge and not propositional justification. Moreover, it deserves
emphasis that there are inferences that apparently satisfy
Pritchard's principle but fail to satisfy the *principle of
epistemic transmission*. Consider for instance the following
triad:
*Zebra*
\(\rE\_3\).
The animals in the pen look like zebras.
\(\rP\_3\).
The animals in the pen are zebras.
Therefore:
\(\rQ\_3\).
The animals in the pen are not mules cleverly disguised to look
like zebras.
According to Pritchard, the evidence set of a normal zoo-goer standing
before the zebra enclosure typically includes, above and beyond
\(\rE\_3\),
the background proposition that
\((\rE\_B)\)
to disguise mules like zebras would be very costly and
time-consuming, would bring no comparable benefit and would be
relatively easy to unmask.
Suppose \(s\) knows
\(\rP\_3\)
on the basis of her evidence set, and competently deduces
\(\rQ\_3\)
from \(\rP\_3\) thereby coming to believe \(\rQ\_3\). Pritchard's
*evidential transmission principle* apparently accommodates the
fact that \(s\) thereby comes to know \(\rQ\_3\). For her evidence
set--because of its inclusion of
\(\rE\_B\)--sufficiently
supports \(\rQ\_3\). But the *principle of epistemic
transmission* is not satisfied. Although
(C1\(\_{\textit{Zebra}}\))
\(s\) has justification for \(\rP\_3\) and
(C2\(\_{\textit{Zebra}}\))
she knows that \(\rP\_3\) entails \(\rQ\_3\),
she has justification for believing \(\rQ\_3\) in virtue of, not
(C1\(\_{\textit{Zebra}}\)) and (C2\(\_{\textit{Zebra}}\)),
but the independent
support lent to it by \(\rE\_B\).
Condition
(C3)
suffices to distinguish the notion of epistemic transmission from
different notions in the neighborhood. However, as it stands, it is
still unsuitable for the purpose of completely characterizing epistemic
transmission. The problem is that there are cases in which it is
intuitive that the justification for \(p\) based on \(e\) transmits to
\(q\) even if (C3), strictly speaking, is *not* satisfied.
These cases can be described as situations in which *only part*
of the justification that \(s\) has for \(q\) is based on her
justification for \(p\) and her knowledge of the entailment from \(p\)
to \(q\). Consider this example. Suppose you are traveling on a train
heading to Edinburgh. At 16:00, as you enter Newcastle upon Tyne, you
spot the train station sign. Then, at 16:05, the ticket controller
tells you that you are not yet in Scotland. Now consider the following
reasoning:
*Journey*
\(\rE\_4\).
At 16:05 the ticket controller tells you that you are not yet in
Scotland.
\(\rP\_4\).
You are not yet in Scotland.
Therefore:
\(\rQ\_4\).
You are not yet in Edinburgh.
As you learn
\(\rE\_4\),
given suitable background information, you get justification for
\(\rP\_4\);
moreover, to the extent to which you know that not being in Scotland
is sufficient for not being in Edinburgh, you also acquire via
transmission justification for
\(\rQ\_4\).
This additional justification is transmitted irrespective of the fact
that you *already* have justification for \(\rQ\_4\), acquired
by spotting the train station sign 'Newcastle upon Tyne'.
If
(C3)
were read as requiring that the *whole* of the justification
available for a proposition \(q\) were had in virtue of the
satisfaction of
(C1)
and
(C2),
cases like these would become invisible.
A way to deal with this complication is to amend the tripartite
analysis of epistemic transmission by turning
(C3)
into (C3\(^{+}\)), saying that *at least part* of the
justification that \(s\) has for \(q\) has been achieved by her in
virtue of the satisfaction of
(C1)
and
(C2).
Let us say that a justification for \(q\) is an *additional*
justification for \(q\) whenever it is not a *first-time*
justification for it. Condition (C3\(^{+}\)) can be reformulated as
this disjunction:
(C3\(^{+}\))
\(s\) has *first-time* justification for \(q\) in virtue
of the satisfaction of
(C1)
and
(C2),
or
\(s\) has an *additional* justification for \(q\) in virtue
of the satisfaction of
(C1)
and
(C2).
Much of the extant literature on epistemic transmission concentrates
on examples of transmission of first-time justification. These
examples include
*Toadstool*.
We have seen, however, that what intuitively transmits in other cases
is simply additional justification. Epistemologists have identified at
least two--possibly overlapping--kinds of additional
justification (cf. Moretti & Piazza 2013).
One is what can be called *independent* justification because
it appears--intuitively--independent of the original
justification for \(q\). This notion of justification can probably be
sharpened by appealing to counterfactual analysis. Suppose
\(s\)'s justification for \(p\) based on \(e\) transmits to
\(p\)'s logical consequence \(q\). This justification
transmitted to \(q\) is an additional *independent*
justification just in case these three conditions are met:
(IN1)
\(s\) was already justified in believing \(q\) before acquiring
\(e\),
(IN2)
as \(s\) acquires \(e\), \(s\) is still justified in believing \(q\),
and
(IN3)
if \(s\) had not been antecedently justified in believing \(q\),
upon learning \(e\), \(s\) would have acquired via transmission a
first-time justification for believing \(q\).
*Journey*
instantiates transmission of justification by meeting
(IN1),
(IN2), and
(IN3).
Thus, *Journey* exemplifies a case of transmission of
additional independent justification.
Consider now that justification for a proposition or belief normally
comes in degrees of strength. The second kind of additional
justification can be characterized as *quantitatively
strengthening* justification. Suppose again that \(s\)'s
justification for \(p\) based on \(e\) transmits to \(p\)'s
consequence \(q\). This justification transmitted to \(q\) is an
additional *quantitatively strengthening* justification just in
case these two conditions are satisfied:
(STR1)
\(s\) was already justified in believing \(q\) before acquiring
\(e\), and
(STR2)
as \(s\) acquires \(e\), the strength of \(s\)'s
justification for believing \(q\) increases.
Here is an example from Moretti (2012). Your background information
says that only one ticket out of 5,000 of a fair lottery has been
bought by a person born in 1970, and that all other tickets have been
bought by older or younger people. Consider now this reasoning:
*Lottery*
\(\rE\_5\).
The lottery winner's passport certifies she was born in
1980.
\(\rP\_5\).
The lottery's winner was born in 1980.
Therefore:
\(\rQ\_5\).
The lottery's winner was not born in 1970.
Given its high chance,
\(\rQ\_5\)
is already justified on your background information alone. It is
intuitive that as you learn
\(\rE\_5\),
you acquire an additional quantitatively strengthening justification
for \(\rQ\_5\) via transmission. For your justification for
\(\rP\_5\)
transmitted to \(\rQ\_5\) is intuitively quantitatively stronger than
your initial justification for \(\rQ\_5\).
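The arithmetic behind *Lottery* can be made explicit with a toy probabilistic sketch. The 0.9999 figure for the support that \(\rE\_5\) lends \(\rP\_5\) is an illustrative assumption, not something fixed by the example; the point is only that, since \(\rP\_5\) entails \(\rQ\_5\), whatever probability the evidence confers on \(\rP\_5\) is a lower bound on the probability of \(\rQ\_5\):

```python
from fractions import Fraction

# Background: exactly 1 of 5,000 ticket-holders was born in 1970.
TICKETS = 5000
prior_q5 = Fraction(TICKETS - 1, TICKETS)  # P(Q5) on background alone

# Hypothetical assumption (not from the text): the passport evidence E5
# makes P5 ("the winner was born in 1980") probable to degree 0.9999.
posterior_p5 = Fraction(9999, 10000)

# Since P5 entails Q5, P(Q5 | E5) is at least P(P5 | E5).
posterior_q5_lower_bound = posterior_p5

print(float(prior_q5))                      # prior support for Q5: 0.9998
print(float(posterior_q5_lower_bound))      # lower bound after E5: 0.9999
print(posterior_q5_lower_bound > prior_q5)  # True: support is strengthened
```

On these assumed numbers the transmitted support (at least 0.9999) exceeds the prior support for \(\rQ\_5\) (0.9998), which is exactly the quantitative strengthening at issue.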
In many cases when \(q\) receives via transmission from \(p\) an
additional independent justification, \(q\) also receives a
quantitatively strengthening justification. This is not always true
though. For there seem to be cases in which an additional independent
justification transmitted from \(p\) to \(q\) intuitively
*lessens* an antecedent justification for \(q\) (cf. Wright
2011).
An interesting question is whether it is true that as \(q\) gets via
transmission from \(p\) an additional quantitatively strengthening
justification, \(q\) also gets an independent justification. This
seems true in some cases--for instance in
*Lottery*.
It is controversial, however, whether it is the case that
*whenever* \(q\) gets via transmission a quantitatively
strengthening justification, \(q\) also gets an independent
justification. Wright (2011) and Moretti & Piazza (2013) describe
two examples in which a subject allegedly receives via transmission a
quantitatively strengthening but not independent additional
justification.
To summarize, *additional* justification comes in two species
at least: *independent* justification and *quantitatively
strengthening* justification. This enables us to lay down three
specifications of the general condition
(C3\(^{+}\))
necessary for justification transmission, each of which represents a
condition necessary for the transmission of one particular type of
justification. Let's call these specifications
(C3\(^{\textrm{ft}}\)),
(C3\(^{\textrm{ai}}\)), and
(C3\(^{\textrm{aqs}}\)).
(C3\(^{\textrm{ft}}\))
\(s\) has *first time* justification for \(q\) in virtue
of the satisfaction of
(C1)
and
(C2).
(C3\(^{\textrm{ai}}\))
\(s\) has *additional independent* justification for \(q\)
in virtue of the satisfaction of
(C1)
and
(C2).
(C3\(^{\textrm{aqs}}\))
\(s\) has *additional quantitatively strengthening*
justification for \(q\) in virtue of the satisfaction of
(C1)
and
(C2).
Transmission of first-time justification makes the advancement of
justified belief through deductive reasoning possible. Yet the
acquisition of first-time justification for \(q\) isn't the sole
possible improvement of one's epistemic position relative to
\(q\) that one could expect from transmission of justification.
Moretti & Piazza (2013), for instance, describe a variety of
different ways in which \(s\)'s epistemic standing toward a
proposition can be improved upon acquiring an additional justification
for it via transmission.
We conclude this section with a note about *transmissivity as
resolving doubts*. Let us say that \(s\) *doubts* \(q\)
just in case \(s\) either *disbelieves* or *withholds*
belief about \(q\), namely, refrains from both believing and
disbelieving \(q\) *after deciding about* \(q\). \(s\)'s
doubting \(q\) should be distinguished from \(s\)'s being
*open minded* about \(q\) and from \(s\)'s having
*no* doxastic attitude whatsoever towards \(q\) (cf. Tucker
2010a). Let us say that a deductively valid argument from \(p\) to
\(q\) is able to resolve doubt about its conclusion just in case it is
possible for \(s\) to be rationally moved from doubting \(q\) to
believing \(q\) solely in virtue of grasping the argument from \(p\)
to \(q\) and the evidence offered for \(p\).
Some authors (e.g., Davies 1998, 2003, 2004, 2009; McLaughlin 2000;
Wright 2002, 2003, 2007) have proposed that we should conceive of an
argument's epistemic transmissivity in a way that is very
closely related or even identical to the argument's ability to
resolve doubt about its conclusion. Whereas some of these authors have
eventually conceded that epistemic transmissivity cannot be defined as
ability to resolve doubt (e.g., Wright 2011), others have attempted to
articulate this view in full (see mainly Davies 2003, 2004, 2009).
However, there is nowadays wide agreement in the literature that the
property of being a transmissive argument *doesn't*
coincide with the one of being an argument able to resolve doubt about
its conclusion (see for example, Beebee 2001; Coliva 2010; Markie
2005; Pryor 2004; Bergmann 2004, 2006; White 2006; Silins 2005; Tucker
2010a). A good reason to think so is that whereas the property of
being transmissive appears to be a genuinely *epistemic*
property of an argument, the one of resolving doubt seems to be only a
*dialectical* feature of it, which varies with the audience
whose doubt the argument is used to address.
## 3. Failure of Epistemic Transmission
### 3.1 Trivial Transmission Failure *versus* Non-Transmissivity
It is acknowledged that justification sometimes fails to transmit
across known entailment (the acknowledgment dates back at least to
Wright 1985). Moreover, it is no overstatement to say that the
literature has investigated transmission failure more extensively than
transmission of justification. As we have seen, justification based on
\(e\) transmits from \(p\) to \(q\) across the entailment if and only
if
(C1)
\(s\) has justification for \(p\) based on \(e\),
(C2)
\(s\) knows that \(q\) is deducible from \(p\), and
(C3\(^{+}\))
at least part of \(s\)'s justification for \(q\) is based
on the satisfaction of (C1) and (C2).
It follows from this characterization that *no* justification
based on *e transmits* from \(p\) to \(q\) across the
entailment if
(C1),
(C2), or
(C3\(^{+}\))
are not satisfied. These are cases of transmission failure.
Some philosophers have argued that knowledge and justification are not
always *closed* under competent (single-premise or
multiple-premise) deduction. In the recent literature, the explanation
of closure failure has often been essayed in terms of
*agglomeration of epistemic risk*. This type of explanation is
less controversial when applied to multi-premise deduction. An example
of it concerning justification is the deduction from a high number
\(n\) of premises, each of which specifies that a different ticket in
a fair lottery won't win, which are individually justified even
though each premise is *somewhat* risky, to the conclusion that
none of these \(n\) tickets will win, which is *too risky* to
be justified. (For more controversial cases in which knowledge or
justification closure would fail across single-premise deduction
because of risk accumulation, see for example Lasonen-Aarnio 2008 and
Schechter 2013; for responses, see for instance Smith 2013 and 2016.)
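The risk-agglomeration point admits of a simple probabilistic sketch. The lottery sizes below are illustrative assumptions (the text fixes no numbers): each single-ticket premise carries a risk of only \(1/T\), yet the conjunction of \(n\) such premises carries a risk of \(n/T\), which can be made as large as one likes:

```python
from fractions import Fraction

# A fair lottery with T tickets and exactly one winner; consider n premises,
# each saying of a different ticket that it will not win.
T, n = 10_000, 9_000

# Each individual premise is highly probable (risk only 1/T):
premise_prob = Fraction(T - 1, T)      # 0.9999

# The conjunction "none of these n tickets wins" holds only if the winner
# lies among the remaining T - n tickets, so the small risks agglomerate:
conjunction_prob = Fraction(T - n, T)  # 0.1

print(float(premise_prob))      # 0.9999 -- each premise individually justified
print(float(conjunction_prob))  # 0.1 -- the deduced conjunction is too risky
```

The numbers are arbitrary, but the pattern is general: individually negligible risks sum across the premises, so a conclusion competently deduced from many justified premises can itself fall below any plausible threshold for justification.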
These cases of failure of epistemic closure can be taken to also
involve failure of justification *transmission*. For instance,
it can be argued that even if
(C1\(^{\*}\))
\(s\) has justification for each of the \(n\) premises stating
that a ticket won't win, and
(C2\(^{\*}\))
\(s\) knows that it follows from these premises that none of the
\(n\) tickets will win,
\(s\) has no justification--and, therefore, no justification in
virtue of the satisfaction of
(C1\(^{\*}\))
and
(C2\(^{\*}\))--for
believing that none of the \(n\) tickets will win. In these cases,
transmission failure appears to be a consequence of closure failure,
and it is therefore natural to treat the former simply as a
symptomatic manifestation of the latter. For this reason, we will
follow the current literature on transmission failure in not
discussing these cases. (For an analysis of closure failure, see the
entry on
epistemic closure,
especially section 6.)
The current literature treats transmission failure as a self-standing
phenomenon, in the sense that it focuses on cases in which
transmission failure is not imputed--and thus not considered
reducible--to an underlying failure of closure. For the rest of
this entry, we shall follow suit and investigate transmission failure
in this pure or genuine form.
The most trivial cases of transmission failure are such that
(C3\(^{+}\))
is unsatisfied because
(C1)
or
(C2)
are unsatisfied, but it is also true that (C3\(^{+}\))
would have been satisfied if both
(C1)
and
(C2)
had been satisfied (cf. Tucker 2010b
[OIR]).
It deserves emphasis that these cases involve arguments that
aren't unsuitable for the purpose of transmitting justification
depending on evidence \(e\): had the epistemic circumstances been
congenial to the satisfaction of
(C1)
and
(C2),
these arguments would have transmitted the justification based on
\(e\) from \(p\) to \(q\). As we could put the point, these arguments
are *transmissive* of the justification depending on
\(e\).[4]
Cases of transmission failure of a more interesting variety are those
in which, regardless of the validity of closure of justification,
(C3\(^{+}\))
isn't satisfied because it could not have been satisfied, no
matter whether or not
(C1)
and
(C2)
have been satisfied. These cases concern arguments
*non-transmissive* of justification depending on a given
evidence, i.e., arguments incapable of transmitting justification
depending on that evidence under any epistemic circumstance. An
example in point is
*Twins*.
Suppose first that \(s\) has independent justification for
\(\rQ\_2\).
In those circumstances
(C1\(\_{\textit{Twins}}\))
\(s\) has justification from
\(\rE\_2\)
for
\(\rP\_2\).
Furthermore
(C2\(\_{\textit{Twins}}\))
\(s\) does know that
\(\rP\_2\)
entails
\(\rQ\_2\).
However, \(s\) has no justification for \(\rQ\_2\) in virtue of
the satisfaction of
(C1\(\_{\textit{Twins}}\))
and
(C2\(\_{\textit{Twins}}\)).
So
(C3\(^{+}\))
is not met. Suppose now that \(s\) has no independent justification
for \(\rQ\_2\). Then, it isn't the case that
(C1\(\_{\textit{Twins}}\))
\(s\) has justification from
\(\rE\_2\)
for
\(\rP\_2\).
So,
(C3\(^{+}\))
is not met either. Since there is no other possibility--either
\(s\) has independent justification for
\(\rQ\_2\)
or she doesn't--it cannot be the case that \(s\)'s
belief that \(\rQ\_2\) is justified *in virtue of* her
justification for
\(\rP\_2\)
from
\(\rE\_2\)
and her knowledge that \(\rP\_2\) entails \(\rQ\_2\).
Note that none of the cases of transmission failure of justification
considered above entails failure of epistemic closure. For in none of
these cases is it true that \(s\) has justification for believing
\(p\), knows that \(p\) entails \(q\), and yet fails to have
justification for believing \(q\).
Unsurprisingly, the epistemologists contributing to the literature on
transmission failure have principally devoted their attention to cases
involving non-transmissive arguments. Epistemologists have endeavored
to identify conditions whose satisfaction suffices to make an argument
*non-transmissive* of justification based on a given evidence.
The next section reviews the most influential of these proposals.
### 3.2 Varieties of Non-Transmissive Arguments
Some non-transmissive arguments *explicitly* feature their
conclusion among their premises. Suppose \(p\) is justified for \(s\)
by \(e\), and consider the premise-circular argument that deduces
\(p\) from itself. This argument cannot satisfy
(C3\(^{+}\))
even if
(C1)
and
(C2)
are satisfied. The reason is that no part of \(s\)'s
justification for \(p\) can be acquired *in virtue of*, among
other things, the satisfaction of
(C2).
For if
(C1)
is satisfied, \(p\) is justified for \(s\) by \(e\)
*independently* of \(s\)'s knowledge of \(p\)'s
self-entailment, thus not in virtue of
(C2).
Non-transmissive arguments are not necessarily premise-circular. A
different source of non-transmissivity instantiating a subtler form of
circularity is the dependence of evidential relations on background or
collateral information. This type of dependence is a rather familiar
phenomenon: the boiling of a kettle gives one justification for
believing that the temperature of the liquid inside is approximately
100°C *only if* one knows that the liquid is water and that
atmospheric pressure is that of sea level. It doesn't if
one knows that the kettle is on top of a high mountain, or that the
liquid is, say, sulfuric acid.
Wright argues, for instance, that the following epistemic set-up,
which he calls the *information-dependence template*, suffices
for an argument's inability to transmit justification.
>
>
> A body of evidence, \(e\), is an information-dependent justification
> for a particular proposition \(p\) if whether \(e\) justifies \(p\)
> depends on what one has by way of collateral information, \(i\).
> [...] Such a relationship is always liable to generate examples
> of transmission failure: it will do so just when the particular \(e,
> p\), and \(i\) have the feature that needed elements of the relevant
> \(i\) are themselves entailed by \(p\) (together perhaps with other
> warranted premises). In that case, any warrant supplied by \(e\) for
> \(p\) will not be transmissible to those elements of \(i\). (Wright
> 2003: 59, edited.)
>
>
>
The claim that \(s\)'s justification from \(e\) for \(p\)
requires \(s\) to have background information \(i\) is customarily
understood as equivalent (in this context) to the claim that
\(s\)'s justification from \(e\) for \(p\) depends on some type
of independent justification for believing or accepting
\(i\).[5]
Instantiating the *information-dependence template* appears
sufficient for an argument's inability to transmit
*first-time* justification. Consider again this triad:
*Twins*
\(\rE\_2\).
This girl looks just like Jessica.
\(\rP\_2\).
This girl is actually Jessica.
Therefore:
\(\rQ\_2\).
This girl is not Jocelyn.
Suppose \(s\)'s background information entails that Jessica and
Jocelyn are indistinguishable twins. Imagine that \(s\) acquires
\(\rE\_2\).
It is intuitive that \(\rE\_2\) could justify
\(\rP\_2\)
for \(s\) only if \(s\) had *independent* justification for
believing
\(\rQ\_2\)
in the first instance. Thus, *Twins* instantiates the
*information-dependence template*. Note that \(s\) acquires
first-time justification for \(\rQ\_2\) in *Twins* only if
(C1\(\_{\textit{Twins}}\))
\(\rE\_2\)
gives her justification for
\(\rP\_2\),
(C2\(\_{\textit{Twins}}\))
\(s\) knows that
\(\rP\_2\)
entails
\(\rQ\_2\)
and
(C3\(^{\textrm{ft}}\_{\textit{Twins}}\))
\(s\) acquires first-time justification for believing
\(\rQ\_2\)
*in virtue of*
(C1\(\_{\textit{Twins}}\))
and
(C2\(\_{\textit{Twins}}\)).
The satisfaction of
(C3\(^{\textrm{ft}}\_{\textit{Twins}}\))
requires \(s\)'s justification for believing
\(\rQ\_2\)
*not* to be independent of \(s\)'s justification from
\(\rE\_2\)
for
\(\rP\_2\).
However, if
(C1\(\_{\textit{Twins}}\))
is true, the *information-dependence template* requires
\(s\) to have justification for believing \(\rQ\_2\)
*independently* of \(s\)'s justification from \(\rE\_2\)
for \(\rP\_2\). Thus, when the *information-dependence template*
is instantiated,
(C1\(\_{\textit{Twins}}\))
and (C3\(^{\textrm{ft}}\_{\textit{Twins}}\)) cannot be satisfied at
once. In general, no argument satisfying this template together with a
given evidence will be transmissive of first-time justification based
on that evidence.
One may wonder whether a deductive argument from \(p\) to \(q\)
instantiating the *information-dependence template* is unable
to transmit *additional* justification for \(q\). The answer
seems affirmative when the additional justification is
*independent* justification. Suppose the
*information-dependence template* is instantiated such that
\(s\)'s justification for \(p\) from \(e\) depends on
\(s\)'s independent justification for \(q\). Note that \(s\)
acquires additional independent justification for \(q\) only if
(C1)
\(e\) gives her justification for \(p\),
(C2)
\(s\) knows that \(p\) entails \(q\), and
(C3\(^{\textrm{ai}}\))
\(s\) acquires an additional independent justification in virtue
of (C1) and (C2).
This additional justification is independent of \(s\)'s
antecedent justification for \(q\) only if, in particular, condition
(IN3)
of the characterization of additional independent justification is
satisfied. (IN3) says that if \(s\) had not been antecedently
justified in believing \(q\), upon learning \(e, s\) would have
acquired via transmission a first-time justification for believing
\(q\). (IN3) entails that if \(q\) were not antecedently justified for
\(s, e\) would still justify \(p\) for \(s\). Hence, the satisfaction
of (IN3) is incompatible with the instantiation of the
*information-dependence template*, which entails that if \(s\)
had no antecedent justification for \(q, e\) would *not*
justify \(p\) for \(s\). The instantiation of the
*information-dependence template* then precludes transmission
of independent justification.
As suggested by Wright (2007), the instantiation of the
*information-dependence template* might also appear sufficient
for an argument's inability to transmit *additional
quantitatively strengthening* justification. This claim might
appear intuitively plausible: perhaps it is reasonable that if the
justification from \(e\) for \(p\) depends on independent
justification for another proposition \(q\), the strength of the
justification available for \(q\) sets an upper bound to the strength
of the justification possibly supplied by \(e\) for \(p\). However,
the examples by Wright (2011) and Moretti & Piazza (2013)
mentioned in
Sect. 2
appear to undermine this intuition. For they involve arguments that
instantiate the *information-dependence template* yet seem to
transmit quantitatively strengthening justification to their
conclusions. (Alspector-Kelly 2015 suggests that these arguments are a
symptom that Wright's explanation of non-transmissivity is
overall inadequate.)
Some authors have attempted Bayesian formalizations of the
*information-dependence template* (see the supplement on
Bayesian Formalizations of the *Information-Dependence Template*).
Furthermore, Coliva (2012) has proposed a variant of the same
template. In accordance with the *information-dependence
template*, \(s\)'s justification from \(e\) for \(p\) fails
to transmit to \(p\)'s consequence \(q\) whenever \(s\)'s
possessing that justification for \(p\) requires \(s\)'s
independent justification for \(q\). According to Coliva's
variant, \(s\)'s justification from \(e\) for \(p\) fails to
transmit to \(q\) whenever \(s\)'s possessing the latter
justification for \(p\) requires \(s\)'s independent
*assumption* of \(q\), *whether this assumption is justified
or not*. Pryor (2012: §VII) can be read as pressing
objections against Coliva's template, which are addressed in
Coliva (2012).
We have seen that non-transmissivity may depend on premise-circularity
or reliance on collateral information. There is at least a third
possibility: an argument can be non-transmissive of the justification
for its premise(s) based on given evidence because the evidence
justifies *directly* the conclusion--i.e.,
*independently* of the argument itself (cf. Davies 2009). In
this case the argument instantiates *indirectness*, for
\(s\)'s going through the argument would result in nothing but
an indirect (and unneeded) detour for justifying its conclusion. If
\(e\) directly justifies \(q\), no part of the justification for \(q\)
is based on, among other things, \(s\)'s knowledge of the
inferential relation between \(p\) and \(q\). So
(C3\(^{+}\))
is unfulfilled whether or not
(C1)
and
(C2)
are fulfilled. Here is an example from Wright (2002):
*Soccer*
\(\rE\_6\).
Jones has just kicked the ball between the white posts.
\(\rP\_6\).
Jones has just scored a goal.
Therefore:
\(\rQ\_6\).
A game of soccer is taking place.
Suppose \(s\) learns evidence
\(\rE\_6\).
On ordinary background information,
(C1\(\_{\textit{Socc}}\))
\(\rE\_6\)
justifies
\(\rP\_6\)
for \(s\),
(C2\(\_{\textit{Socc}}\))
\(s\) knows that
\(\rP\_6\)
entails
\(\rQ\_6\),
and
\(\rE\_6\)
also justifies
\(\rQ\_6\)
for \(s\).
It seems false, however, that
\(\rQ\_6\)
is justified for \(s\) *in virtue of* the satisfaction of
(C1\(\_{\textit{Socc}}\))
and
(C2\(\_{\textit{Socc}}\)).
Quite the reverse, \(\rQ\_6\) seems justified for \(s\) by
\(\rE\_6\)
independently of the satisfaction of (C1\(\_{\textit{Socc}}\)) and
(C2\(\_{\textit{Socc}}\)). This is so because in the (imagined) actual
scenario it seems true that \(s\) would still possess a justification
for \(\rQ\_6\) based on \(\rE\_6\) even if \(\rE\_6\) did *not*
justify
\(\rP\_6\)
for \(s\). In fact, suppose \(s\) had noticed that the referee's
assistant raised her flag to signal Jones's off-side. Against
this altered background information, \(\rE\_6\) would no longer justify
\(\rP\_6\) for \(s\) but it would still justify \(\rQ\_6\) for \(s\).
Thus, *Soccer* is non-transmissive of the justification
depending on \(\rE\_6\). In general, no argument instantiating
*indirectness* in relation to some evidence is transmissive of
justification based on that evidence.
The *information-dependence template* and *indirectness*
are diagnostics for a deductive argument's inability to transmit
the justification for its premise(s) \(p\) based on evidence \(e\) to
its conclusion \(q\), where \(e\) is conceived of as a
*believed* proposition capable of supplying
*inferential* and (typically) *fallible* justification
for \(p\). (Note that even though \(e\) is conceived of as a belief,
the collateral information \(i\), which is part of the
*template*, doesn't need to be believed on some views.)
The justification for a proposition \(p\) might come in other forms.
For instance, it has been proposed that a proposition \(p\) about the
perceivable environment around the subject \(s\) can be justified by
the fact that \(s\) *sees* that \(p\) (where
'*sees*' is taken to be factive). In this case,
\(s\) is claimed to attain a kind of *non-inferential* and
*infallible* justification for believing \(p\). This view has
been explored by *epistemological disjunctivists* (see for
instance McDowell 1982, 1994 and 2008, and Pritchard 2007, 2008, 2009,
2011 and 2012a).
One might find it intuitively plausible that when \(s\) sees that \(p,
s\) attains non-inferential and infallible justification for believing
\(p\) that doesn't rely on \(s\)'s background information.
Since this justification for believing \(p\) would be *a
fortiori* unconstrained by \(s\)'s independent justification
for believing any consequence \(q\) of \(p\), in these cases the
information-dependence template could not possibly be instantiated.
Therefore, one might be tempted to conclude that when \(s\) sees that
\(p, s\) acquires a justification that typically transmits to the
propositions that \(s\) knows to be deducible from \(p\) (cf. Wright
2002). Pritchard (2009, 2012a) comes very close to endorsing this view
explicitly.
The latter contention has not gone unchallenged. Suppose one accepts
a notion of epistemic justification with *internalist*
resonances saying that a factor \(J\) is relevant to \(s\)'s
justification only if \(s\) is able to determine, by reflection alone,
whether \(J\) is or is not realized. On this notion, \(s\)'s
seeing that \(p\) cannot provide \(s\) with justification for
believing \(p\) unless \(s\) can rationally claim that she is
*seeing* that \(p\) upon reflection alone. Seeing that \(p\),
however, is *subjectively* indistinguishable from hallucinating
that \(p\), or from being in some other delusional state in which it
merely *seems* to \(s\) that she is seeing that \(p\). Hence,
one may find it compelling that \(s\) can claim by reflection alone
that she's seeing that \(p\) only if \(s\) has an independent
reason for ruling out that it merely seems to her that she does (cf.
Wright 2011). If this is true, for many deductive arguments from \(p\)
to \(q, s\) won't be able to acquire non-inferential and
infallible justification for believing \(p\) of the type described by
the disjunctivist and transmit it to \(q\). This will happen whenever
\(q\) is the logical negation of a proposition ascribing to \(s\) some
delusionary mental state in which it merely seems to her that she
directly perceives that \(p\).
Wright's *disjunctive template* is meant to be a
diagnostic of transmission failure of non-inferential justification
when epistemic justification is conceived of in the internalist
fashion suggested
above.[6]
According to Wright (2000), for any propositions \(p, q\) and \(r\)
and subject \(s\), the *disjunctive template* is instantiated
whenever:
(D1)
\(p\) entails \(q\);
(D2)
\(s\)'s justification for \(p\) consists in \(s\)'s
being in a state subjectively indistinguishable from a state in which
\(r\) would be true;
(D3)
\(r\) is incompatible with \(p\);
(D4)
\(r\) would be true if \(q\) were false.
To see how this template works, consider again the following
triad:
*Zebra*
\(\rE\_3\).
The animals in the pen look like zebras.
\(\rP\_3\).
The animals in the pen are zebras.
Therefore:
\(\rQ\_3\).
The animals in the pen are not mules cleverly disguised to look
like zebras.
The justification from
\(\rE\_3\)
for
\(\rP\_3\)
arguably fails to transmit across the inference from \(\rP\_3\) to
\(\rQ\_3\)
because of the satisfaction of the *information-dependence
template*. For it seems true that \(s\) could acquire a
justification for believing \(\rP\_3\) on the basis of \(\rE\_3\) only
if \(s\) had an independent justification for believing \(\rQ\_3\). Now
suppose that \(s\)'s justification for \(\rP\_3\) is based on,
not \(\rE\_3\) but \(s\)'s *seeing* that \(\rP\_3\).
Let's call *Zebra\** the corresponding variant of
*Zebra*. Given the *non-inferential* nature of the
justification considered for \(\rP\_3\), *Zebra\** could not
instantiate the *information-dependence template*. However, it
is easy to check that *Zebra\** instantiates the *disjunctive
template*. To begin with,
(D1\(\_{\textit{Zebra}}\))
\(\rP\_3\)
entails
\(\rQ\_3\);
(D2\(\_{\textit{Zebra}}\))
\(s\)'s justification for believing
\(\rP\_3\)
is constituted by \(s\)'s seeing that \(\rP\_3\), which is
subjectively indistinguishable from the state that \(s\) would be in
if it were true that
\(\rR\_{\textit{Zebra}}\):
the animals in the pen are mules cleverly disguised to look like
zebras;
(D3\(\_{\textit{Zebra}}\))
\(\rR\_{\textit{Zebra}}\)
is incompatible with
\(\rP\_3\);
and, trivially,
(D4\(\_{\textit{Zebra}}\))
if
\(\rQ\_3\)
were false
\(\rR\_{\textit{Zebra}}\)
would be true.
Since *Zebra\** instantiates the *disjunctive template*,
it is non-transmissive of at least *first-time* justification.
In fact note that \(s\) acquires first-time justification for
\(\rQ\_3\)
in this case if and only if
(C1\(\_{\textit{Zebra}}\))
\(s\) has justification for
\(\rP\_3\)
based on seeing that \(\rP\_3\),
(C2\(\_{\textit{Zebra}}\))
\(s\) knows that
\(\rP\_3\)
entails
\(\rQ\_3\),
and
(C3\(^{\textrm{ft}}\_{\textit{Zebra}}\))
\(s\) acquires first-time justification for believing
\(\rQ\_3\)
in virtue of
(C1\(\_{\textit{Zebra}}\))
and
(C2\(\_{\textit{Zebra}}\)).
Also note that
(C3\(^{\textrm{ft}}\_{\textit{Zebra}}\))
requires \(s\)'s justification for believing
\(\rQ\_3\)
*not* to be independent of \(s\)'s justification for
\(\rP\_3\)
based on seeing that \(\rP\_3\). However, if
(C1\(\_{\textit{Zebra}}\))
is true, the *disjunctive template* requires \(s\) to have
justification for believing \(\rQ\_3\) *independent* of
\(s\)'s justification for \(\rP\_3\) based on her seeing that
\(\rP\_3\). (For if \(s\) could not independently exclude that
\(\rQ\_3\) is false, given (D4), (D3) and (D2), \(s\) could not exclude
that the incompatible alternative \(\rR\_{\textit{Zebra}}\) to
\(\rP\_3\), which \(s\) cannot subjectively distinguish from \(\rP\_3\)
on the ground of her seeing that \(\rP\_3\), is true.) Thus, when the
*disjunctive template* is instantiated,
(C1\(\_{\textit{Zebra}}\))
and (C3\(^{\textrm{ft}}\_{\textit{Zebra}}\)) cannot be satisfied at
once.
The disjunctive template has been criticized by McLaughlin (2003) on
the ground that the template is instantiated whenever the
justification for \(p\) is fallible, i.e. compatible with
\(p\)'s falsity. Here is an example from Brown (2004). Take this
deductive argument:
*Fox*
\(\rP\_7\).
The animal in the garbage is a fox.
Therefore:
\(\rQ\_7\).
The animal in the garbage is not a cat.
Suppose \(s\) has a fallible justification for believing
\(\rP\_7\)
based on \(s\)'s *experience* as if the animal in the
garbage is a fox. Take now \(\rR\_{\textit{fox}}\) to be
\(\rP\_7\)'s logical negation. Since the justification that \(s\)
has for \(\rP\_7\) is fallible, condition
(D2)
above is met *by default*. As one can easily check, conditions
(D1),
(D3), and
(D4)
are also met. So,
*Fox*
instantiates the *disjunctive template*. Yet it is intuitive
that *Fox* does transmit justification to its conclusion.
One could respond to McLaughlin that his objection is misplaced
because the *disjunctive template* is meant to apply to
*infallible*, and not fallible, justification. A more
interesting response to McLaughlin is to refine some condition listed
in the *disjunctive template* to block McLaughlin's
argument while letting this template account for transmission failure
of both fallible and infallible justification. Wright (2011) for
instance suggests replacing
(D3)
with the following
condition:[7]
(D3\(^{\*}\))
\(r\) is incompatible with some presupposition of the cognitive
project of obtaining justification for \(p\) in the relevant
fashion.
According to Wright's (2011) characterization, a presupposition
of a cognitive project is any condition such that doubting it before
carrying out the project would rationally commit one to doubting the
significance or competence of the project irrespective of its
outcome.[8]
For a wide class of cognitive projects, examples of these
presuppositions include: the normal and proper functioning of the
relevant cognitive faculties, the reliability of utilized instruments,
the obtaining of the circumstances congenial to the proposed method of
investigation, the soundness of relevant principles of inference
utilized in developing and collating one's results, and so
on.
With
(D3\(^{\*}\))
in the place of
(D3),
*Fox* no longer instantiates the *disjunctive
template*. For the truth of \(\rR\_{\textit{fox}}\), stating that
the animal in the garbage is not a fox, appears to jeopardize no
presupposition of the cognitive project of obtaining perceptual
justification for
\(\rP\_7\).
Thus (D3\(^{\*}\)) is not fulfilled. On the other hand, arguments that
intuitively don't transmit do satisfy (D3\(^{\*}\)). Take
*Zebra\**. In this case \(\rR\_{\textit{Zebra}}\) states that the
animals in the pen are mules cleverly disguised to look like zebras.
Since \(\rR\_{\textit{Zebra}}\) entails that conditions are unsuitable
for attaining perceptual justification for believing
\(\rP\_3\),
\(\rR\_{\textit{Zebra}}\) looks incompatible with a presupposition of
the cognitive project of obtaining perceptual justification for
\(\rP\_3\). Thus, \(\rR\_{\textit{Zebra}}\) does satisfy (D3\(^{\*}\)) in
this case.
### 3.3 Non-Standard Accounts
In this section we outline two interesting branches in the literature
on transmission failure of justification and warrant across valid
inference. We start with Smith's (2009) non-standard account of
non-transmissivity of justification. Then, we present
Alspector-Kelly's (2015) account of non-transmissivity of
Plantinga's warrant.
According to Smith (2009), epistemic justification requires
*reliability*. \(s\)'s belief that \(p\), held on the
basis of \(s\)'s belief that \(e\), is reliable in Smith's
sense just in case \(p\) is true in all possible worlds in which \(e\) is
true and that are as *normal* (from the perspective of the actual world)
as the truth of \(e\) permits. Consider
*Zebra*
again.
*Zebra*
\(\rE\_3\).
The animals in the pen look like zebras.
\(\rP\_3\).
The animals in the pen are zebras.
Therefore:
\(\rQ\_3\).
The animals in the pen are not mules cleverly disguised to look
like zebras.
\(s\)'s belief that
\(\rP\_3\)
is reliable in Smith's sense when it is based on \(s\)'s
belief that
\(\rE\_3\).
(Disguising mules to make them look like zebras is certainly an
*abnormal* practice.) Thus, none of the possible \(\rE\_3\)-worlds that
are as normal as the truth of \(\rE\_3\) permits is a world in
which the animals in the pen are cleverly disguised mules. Rather,
they are all \(\rP\_3\)-worlds--i.e., worlds in which the animals in
the pen *are* zebras.
Smith describes two ways in which a belief can possess this property
of being reliable. One is that the belief that \(p\) possesses it in
virtue of its modal relationship with its basis \(e\). In this case, \(e\) is a
contributing reliable basis. Another possibility is when it is the
content of the belief that \(p\), rather than the belief's modal
relationship with its basis \(e\), that guarantees by itself the
belief's reliability. In this case, \(e\) is a non-contributing
reliable basis. An example of the first kind is \(s\)'s belief
that
\(\rP\_3\),
which is reliable because of its modal relationship with
\(\rE\_3\).
There are obviously many sufficiently normal worlds in which
\(\rP\_3\) is false, but no sufficiently normal world in which
\(\rP\_3\) is false and \(\rE\_3\) true. An example of the second kind
is \(s\)'s belief that
\(\rQ\_3\)
as based on \(\rE\_3\). It is this belief's content, and not its
modal relationship with \(\rE\_3\), that guarantees its reliability. As
Smith puts it, there are no sufficiently normal worlds in which
\(\rE\_3\) is true and \(\rQ\_3\) is false, but this is simply because
there are no sufficiently normal worlds in which \(\rQ\_3\) is
false.
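The contrast between a contributing and a non-contributing reliable basis can be sketched in a toy finite model. This is only an illustrative sketch: the three worlds, their normalcy ranking, and the propositions stipulated to hold at them are assumptions introduced here, not part of Smith's text.

```python
# Toy model of Smith's normalcy-based reliability for the Zebra example.
# Each world records which propositions hold there; lower rank = more normal
# from the perspective of the actual world. The worlds and ranks are stipulated.
worlds = [
    {"rank": 0, "E3": True,  "P3": True,  "Q3": True},   # zebras in the pen
    {"rank": 0, "E3": False, "P3": False, "Q3": True},   # horses in the pen
    {"rank": 2, "E3": True,  "P3": False, "Q3": False},  # disguised mules (abnormal)
]

def reliable(basis, belief):
    """A belief is reliably held on a basis iff the belief is true in every
    maximally normal world in which the basis is true."""
    e_worlds = [w for w in worlds if w[basis]]
    if not e_worlds:
        return True
    min_rank = min(w["rank"] for w in e_worlds)
    return all(w[belief] for w in e_worlds if w["rank"] == min_rank)

def content_alone_suffices(belief):
    """The basis is non-contributing when the belief is already true in every
    maximally normal world, regardless of the basis."""
    min_rank = min(w["rank"] for w in worlds)
    return all(w[belief] for w in worlds if w["rank"] == min_rank)

# E3 is a *contributing* basis for P3: P3 can fail in sufficiently normal
# worlds, but not in sufficiently normal E3-worlds.
assert reliable("E3", "P3") and not content_alone_suffices("P3")
# E3 is a *non-contributing* basis for Q3: Q3 already holds in all
# sufficiently normal worlds, whether or not E3 does.
assert reliable("E3", "Q3") and content_alone_suffices("Q3")
```

On this sketch, both beliefs come out reliable on the basis of \(\rE\_3\), but for \(\rQ\_3\) the belief's content does all the work, mirroring Smith's diagnosis of why *Zebra* is non-transmissive.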
According to Smith, a deductive inference from \(p\) to \(q\) fails to
transmit to \(q\) justification relative to \(p\)'s basis \(e\),
if \(e\) is a contributing reliable basis for believing \(p\) but is a
non-contributing reliable basis for believing \(q\). In this case the
inference fails to *explain* \(q\)'s reliability: if
\(s\) deduced \(q\) from \(p\), she would reliably believe
\(q\), but not--not even in part--in virtue of having
inferred \(q\) from \(p\) (as held on the basis of \(e)\).
*Zebra* fails to transmit to
\(\rQ\_3\)
justification relative to
\(\rP\_3\)'s
basis
\(\rE\_3\)
in this
sense.[9]
Let's turn to Alspector-Kelly (2015). As we have seen,
Wright's analysis of failure of warrant transmission interprets
the epistemic good whose transmission is in question as, roughly, the
same as epistemic
justification.[10]
By so doing, Wright departs from Plantinga's (1993a, 1993b)
influential understanding of warrant as the epistemic quality that
(whatever it is) suffices to turn true belief into knowledge. One
might wonder, however, if there are deductive arguments incapable of
transmitting warrant in Plantinga's sense. (Hereafter we use
'warrant' only to refer to Plantinga's warrant.)
Alspector-Kelly answers this question affirmatively contending that
certain arguments cannot transmit warrant because the cognitive
project of establishing their conclusion via inferring it from their
premise is *procedurally self-defeating.*
Alspector-Kelly follows Wright (2012) in characterizing a cognitive
project as a pair of a question and a procedure that one executes to
answer the question. An *enabling condition* of a cognitive
project is, for Alspector-Kelly, any proposition such that, unless it
is true, one cannot learn the answer to its defining question by
executing the procedure associated with it. That a given object \(o\)
is illuminated, for example, is an enabling condition of the cognitive
project of determining its color by looking at it. For one cannot
learn by sight that \(o\) is of a given color unless \(o\) is
illuminated.
Enabling conditions can be *opaque*. An enabling condition of a
cognitive project is opaque, relative to some actual or possible
result, if it is the case that whenever the execution of the
associated procedure yields this result, it would have produced the
same result had the condition been unfulfilled. The enabling condition
that \(o\) be illuminated of the cognitive project just considered is
not opaque, since looking at \(o\) never produces the same response
about \(o\)'s color when \(o\) is illuminated and when it
isn't. (In the second case, it produces *no*
response.)
Now consider the cognitive project of establishing by sight whether
(\(\rP\_3)\)
the animals enclosed in the pen are zebras. An enabling condition of
this project states that
(\(\rQ\_3)\)
the animals in the pen are not mules disguised to look like zebras.
For one couldn't learn whether \(\rP\_3\) is true by looking, if
\(\rQ\_3\) were false. \(\rQ\_3\) is opaque with respect to the possible
outcome that \(\rP\_3\). In fact, suppose \(\rQ\_3\) is satisfied
because the pen contains (undisguised) zebras. In this case, looking
at them will attest they are zebras, and the execution of the
procedure associated with this project will yield the outcome that
\(\rP\_3\). But this is exactly the outcome that would be generated by
the execution of the same procedure if \(\rQ\_3\) were *not*
satisfied. In this case too, looking at the animals would produce the
response that \(\rP\_3\).
We can now elucidate the notion of a *procedurally
self-defeating* cognitive project. The distinguishing feature of
any project of this type is that it seeks to answer the question
whether an *opaque* enabling condition--call it
'\(q\)'--of the project itself is fulfilled. A
project of this type has the defect that it necessarily produces the
response that \(q\) is fulfilled when \(q\) is unfulfilled. Given
this, it is intuitive that it cannot yield warrant to believe
\(q\).
As an example of a procedurally self-defeating project, consider
trying to answer the question whether your informant is sincere by
asking *her* this very question. Your informant's being
sincere is an enabling condition of the very project you are carrying
out. If your informant were not sincere, you couldn't learn
anything from her. This condition is also opaque with respect to the
possible response that she is sincere. For your informant would
respond that she is sincere both in case she is so and in case she
isn't. Since the execution of this project is guaranteed to
yield the result that your informant is sincere when she isn't,
it is intuitive that it cannot yield warrant to believe that the
informant is sincere.
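The self-defeating structure of the informant case can be sketched in a few lines. The informant's answering behavior is a stipulation introduced here for illustration:

```python
# Toy model of an opaque enabling condition: asking an informant whether
# she is sincere. A sincere informant answers truthfully; an insincere
# one lies (a stipulation for the sketch).

def ask_informant(sincere):
    """Execute the procedure: ask the informant 'Are you sincere?'."""
    truth = sincere                    # the true answer to the question asked
    return truth if sincere else not truth

# The enabling condition (sincerity) is opaque relative to the outcome
# 'she is sincere': the procedure yields that outcome whether or not the
# condition is fulfilled, so the project is procedurally self-defeating.
assert ask_informant(sincere=True) is True
assert ask_informant(sincere=False) is True
```

Since the same affirmative outcome is produced in both cases, executing the procedure cannot yield warrant for believing the informant sincere, which is Alspector-Kelly's point.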
Alspector-Kelly contends that arguments like *Zebra*
don't transmit warrant because the cognitive project of
determining inferentially whether their conclusion is true is
procedurally self-defeating in the sense advertised. Let's apply
the explanation to *Zebra*.
Imagine you carry out *Zebra*. This initially commits you to
executing the project of establishing whether
\(\rP\_3\)
by looking. Suppose you get
\(\rE\_3\)
and hence \(\rP\_3\) as an outcome. As we have seen,
\(\rQ\_3\)
is an opaque enabling condition relative to the outcome \(\rP\_3\) of
the cognitive project you have executed. Imagine that now, having
received \(\rP\_3\) as a response to your first question, you embark on
the second project which carrying out *Zebra* commits you to:
establishing whether \(\rQ\_3\) is true by inference from \(\rP\_3\).
This project appears doomed.
\(\rQ\_3\)
is an enabling condition of the initial project of determining
whether
\(\rP\_3\),
which is opaque with respect to the very *premise* \(\rP\_3\).
It follows from this that \(\rQ\_3\) is also an enabling condition of
the second project. For if \(\rQ\_3\) were unfulfilled, you
couldn't learn \(\rP\_3\) and, *a fortiori,* you
couldn't learn anything else--\(\rQ\_3\) included--by
inference from \(\rP\_3\). Another consequence is that \(\rQ\_3\) is an
enabling condition of the second project that is *opaque*
relative to the very outcome that \(\rQ\_3\). Suppose first that
\(\rQ\_3\) is true and that, having visually inspected the pen, you
have verified \(\rP\_3\). You then execute the inferential procedure
associated with the second project and produce the outcome that
\(\rQ\_3\). Now consider the case in which \(\rQ\_3\) is false. Looking
into the pen still generates the outcome that \(\rP\_3\) is true. Thus,
when you execute the procedure associated with the second project, you
still infer that \(\rQ\_3\) is true. Since you would get the result
that \(\rQ\_3\) is true even if \(\rQ\_3\) were false, the second
project is procedurally self-defeating. In conclusion, carrying out
*Zebra* commits you to executing a project that cannot generate
warrant for believing its conclusion \(\rQ\_3\). This explanation of
non-transmissivity generalizes to all structurally similar
arguments.
## 4. Applications
The notions of transmissive and non-transmissive argument, above and
beyond being investigated for their own sake, have been put to work in
relation to specific philosophical problems and issues. An important
problem is whether Moore's infamous proof of an external world
is successful, and whether structurally similar Moorean
arguments directed against the perceptual skeptic and--more
recently--the non-believer are successful too. Another issue pertains
to the solution of the McKinsey paradox. A third issue concerns
Boghossian's (2001, 2003) explanation of our logical knowledge
via implicit definitions, criticized as resting on a non-transmissive
argument schema (see for instance Ebert 2005 and Jenkins 2008). As the
debate focusing on the last topic is only at an early stage of its
development, it is preferable to concentrate on the first two, which
will be reviewed in respectively
Sect. 4.1
and
Sect. 4.2
below.
### 4.1 Moorean Proofs, Perceptual Justification and Religious Justification
Much of the contemporary debate on Moore's proof of an external
world (see Moore 1939) is interwoven with the topic of epistemic
transmission and its failure. Moore's proof can be reconstructed
as follows:
*Moore*
\(\rE\_8\).
My experience is in all respects as of a hand held up in front of
my face.
\(\rP\_8\).
Here is a hand.
Therefore:
\(\rQ\_8\).
There is a material world (since any hand is a material object
existing in space).
Evidence
\(\rE\_8\)
in *Moore* is constituted by a proposition *believed*
by \(s\). One might suggest, however, that this is a misinterpretation
of Moore's proof (and variants of it that we shall consider
shortly). One might argue that what is meant to give \(s\)
justification for believing
\(\rP\_8\)
is \(s\)'s *experience* of a hand. Nevertheless, most of
the epistemologists participating in this debate implicitly or
explicitly assume that one's experience as if \(p\) and
one's belief that one has an experience as if \(p\) have the
same justifying power (cf. White 2006 and Silins 2007).
Many philosophers find Moore's proof unsuccessful. Philosophers
have proposed explanations of this impression according to which
*Moore* is non-transmissive in some of the senses described in
Sect. 3
(see mainly Wright 1985, 2002, 2007 and
2011).[11] A different explanation is that Moore's proof does
transmit justification but is dialectically ineffective (see mainly
Pryor 2004).
According to Wright, there exist *cornerstone propositions* (or
simply *cornerstones*), where \(c\) is a cornerstone for an
area of discourse \(d\) just in case for any proposition \(p\)
belonging to \(d, p\) could not be justified for any subject \(s\) if
\(s\) had no independent justification for accepting \(c\) (see mainly
Wright
2004).[12]
Cornerstones for the area of discourse about perceivable things are
for instance the *logical negations of skeptical conjectures*,
such as the proposition that one's experiences are nothing but
one's hallucinations caused by a Cartesian demon or the Matrix.
Wright contends that the conclusion
\(\rQ\_8\)
of *Moore* is also a cornerstone for the area of discourse
about perceivable things. Adapting terminology introduced by Pryor
(2004), Wright's conception of the architecture of perceptual
justification thus treats \(\rQ\_8\) *conservatively* with
respect to any perceptual hypothesis \(p\): if \(s\) had no
independent justification for \(\rQ\_8\), no proposition \(e\)
describing an apparent perception could supply \(s\) with *prima
facie* justification for any perceptual hypothesis \(p\). It
follows from this that *Moore* instantiates the
*information-dependence template* considered in
Sect. 3.2.
For \(\rQ\_8\) is part of the collateral information which \(s\) needs
independent justification for if \(s\) is to receive some
justification for
\(\rP\_8\)
from
\(\rE\_8\).
Hence *Moore* is non-transmissive (see mainly Wright 2002).
Note that the thesis that Moore's proof is epistemologically
useless because non-transmissive in this sense is compatible with the
claim that by learning \(\rE\_8, s\) does acquire a justification for
believing \(\rP\_8\). For instance, Wright (2004) contends that we all
have a special kind of *non-evidential* justification, which he
calls *entitlement*, for accepting \(\rQ\_8\) as well as other
cornerstones in
general.[13]
So, by learning \(\rE\_8\) we do acquire justification for \(\rP\_8\).
Wright's analysis of Moore's proof and Wright's
conservatism have mostly been criticized in conjunction with his
theory of entitlement. A presentation of these objections is beyond
the scope of this entry. (See however Davies 2004; Pritchard 2005;
Jenkins 2007. For a defense of Wright's views see for instance
Neta 2007 and Wright 2014.)
As anticipated, other philosophers contend that Moore's proof
does transmit justification and that its ineffectiveness has a
different explanation. An important conception of the architecture of
perceptual justification, called *dogmatism* in Pryor (2000,
2004), embraces a generalized form of *liberalism* about
perceptual justification. This form of liberalism is opposed to
Wright's *conservatism*, and claims that to have
*prima facie* perceptual justification for believing \(p\) from
an apparent perception that \(p, s\) doesn't need independent
justification for believing the negation of skeptical conjectures or
non-perceiving hypotheses like \(\notQ\_8\). This is so, for the
dogmatist, because our experiences give us *immediate* and
*prima facie* justification for believing their contents.
Saying that perceptual justification is immediate is saying that it
doesn't presuppose--not even in part--justification
for anything else. Saying that justification is *prima facie* is saying
that it can be defeated by additional evidence. Our perceptual
justification would be defeated, for example, by evidence that a
relevant non-perceiving hypothesis is true or just as probable as its
negation. For instance, \(s\)'s perceptual justification for
\(\rP\_8\) would be defeated by evidence that \(\notQ\_8\) is true, or
that
\(\rQ\_8\)
and \(\notQ\_8\) are equally probable. On this point the dogmatist and
Wright do agree. They disagree on whether \(s\)'s perceptual
justification for
\(\rP\_8\)
requires independent justification for believing or accepting
\(\rQ\_8\). The dogmatist denies that \(s\) needs that independent
justification. Thus, according to the dogmatist, Moore's proof
*transmits* the perceptual justification available for its
premise to the
conclusion.[14]
The dogmatist argues (or may argue), however, that Moore's proof
is *dialectically* flawed (cf. Pryor 2004). The contention is
that Moore's proof is unsuccessful because it is useless for the
purpose of convincing the idealist or the external world skeptic that
there is an external (material) world. In short, neither the idealist
nor the global skeptic believes that there is an external world. Since
they don't believe
\(\rQ\_8\),
they are rationally required to distrust any *perceptual*
evidence offered in favor of
\(\rP\_8\)
in the first instance. For this reason they both will reject
Moore's proof as one based on an unjustified
premise.[15]
Moretti (2014) suggests that the dogmatist could alternatively
contend that Moore's proof is non-transmissive because
\(s\)'s experience of a hand gives \(s\) *immediate*
justification for believing both the premise \(\rP\_8\) and the
*conclusion* \(\rQ\_8\) of the proof at once. According to this
diagnosis, Moore's proof is *epistemically* flawed
because it instantiates a variant of *indirectness* (in which
the evidence is an *experience* of a hand). Pryor's
analysis of Moore's proof has principally been criticized in
conjunction with his liberalism in epistemology of perception. (See
Cohen 2002, 2005; Schiffer 2004; Wright 2007; White 2006; Siegel &
Silins 2015.)
Some authors (e.g., Wright 2003; Pryor 2004; White 2006; Silins 2007;
Neta 2010) have investigated whether certain *variants* of
*Moore* are transmissive of justification. These arguments start from a
premise like
\(\rP\_8\),
describing a (supposed) perceivable state of affairs of the external
world, and deduce from it the logical negation of a relevant
*skeptical conjecture*. Consider for example the variant of
*Moore* that starts from \(\rP\_8\) and replaces
\(\rQ\_8\)
with:
\(\rQ\_8^\*\).
It is not the case that I am a handless brain in a vat fed with
the hallucination of a hand held up in front of my face.
Let's call *Moore\** this variant of
*Moore*.
While dogmatists *à la* Pryor argue that
*Moore\** is transmissive but dialectically flawed (cf. Pryor
2004), conservatives *à la* Wright contend that it is
non-transmissive (cf. Wright 2007). Although it remains very
controversial whether or not *Moore* is transmissive,
epistemologists have found some *prima facie* reason to think
that arguments like *Moore\** are non-transmissive.
An important difference between *Moore* and *Moore\** is
this: whereas the logical negation of
\(\rQ\_8\)
does *not* explain the evidential statement
\(\rE\_8\)
("My experience is in all respects as of a hand held up in
front of my face") adduced in support of
\(\rP\_8\),
the logical negation of
\(\rQ\_8^\*\)--\(\notQ\_8^\*\)--somewhat
*explains* \(\rE\_8\). Since \(\notQ\_8^\*\) provides a potential
explanation of \(\rE\_8\), it is intuitive that \(\rE\_8\) is
*evidence* (perhaps very weak) for believing \(\notQ\_8^\*\). It
is easy to conclude from this that \(s\) cannot acquire justification
for believing \(\rQ\_8^\*\) via transmission through *Moore\**
upon learning \(\rE\_8\). For it is intuitive that if this were the
case, \(\rE\_8\) should count as evidence for \(\rQ\_8^\*\). But this is
impossible: one and the same proposition cannot simultaneously be
evidence for a hypothesis and its logical negation. By formalizing
intuitions of this type, White (2006) has put forward a simple
Bayesian argument to the effect that *Moore\** and similar
variants of *Moore* are not transmissive of
justification.[16]
(See Silins 2007 for discussion. For responses to White, see
Weatherson 2007; Kung 2010; Moretti 2015.)
From the above analysis it is easy to conclude that the
*information-dependence template* is satisfied by
*Moore\** and similar proofs. In fact note that if
\(\rE\_8\)
is evidence for both
\(\rP\_8\)
and \(\notQ\_8^\*\), it seems correct to say that \(s\) can acquire a
justification for believing \(\rP\_8\) only if \(s\) has independent
justification for *disbelieving* \(\notQ\_8^\*\) and thus
*believing* \(\rQ\_8^\*\). Since \(\notQ\_8^\*\) counts as a
*non-perceiving* hypothesis for Pryor, this gives us a reason
to doubt dogmatism (cf. White
2006).[17]
Coliva (2012, 2015) defends a view--which she baptizes
*moderatism*--that aims to be a middle way between
Wright's conservatism and Pryor's dogmatism. The moderate
contends--against the conservative--that cornerstones cannot
be justified and that to possess perceptual justification \(s\) needs
no justification for accepting any cornerstones. On the other hand,
the moderate claims--against the dogmatist--that to possess
perceptual justification \(s\) needs to *assume* (without
justification) relevant
cornerstones.[18]
By relying on her variant of the *information-dependence
template* described in
Sect. 3.2,
Coliva concludes that neither *Moore* nor any proof like
*Moore\** is transmissive. (For a critical discussion of
moderatism see Avnur 2017; Baghramian 2017; Millar 2017; Volpe 2017;
Coliva 2017.)
Epistemological disjunctivists like McDowell and Pritchard have argued
that in paradigmatic cases of perceptual knowledge, what is meant to
give \(s\) justification for believing
\(\rP\_8\)
is, not
\(\rE\_8\),
but \(s\)'s factive state of seeing that \(\rP\_8\). This seems
to have consequences for the question whether *Moore\**
transmits propositional justification. Pritchard explicitly defends
the claim that when \(s\) sees that \(\rP\_8\), thereby learning that
\(\rP\_8, s\) can come to know by inference from \(\rP\_8\) the negation
of any skeptical hypothesis inconsistent with \(\rP\_8\), like
\(\rQ\_8^\*\)
(cf. Pritchard 2012a: 129-30). This may encourage the belief
that, for Pritchard, *Moore\** can transmit the justification
for \(\rP\_8\), based on *s'*s seeing that \(\rP\_8\), to
\(\rQ\_8^\*\) (see Lockhart 2018 for an explicit argument to this
effect). This claim must however be handled with some care.
As we have seen in
Sect. 2,
Pritchard contends that when one knows \(p\) on the basis of evidence
\(e\), one can know \(p\)'s consequence \(q\) by inference from
\(p\) only if \(e\) sufficiently supports \(q\). For Pritchard this
condition is met when \(s\)'s support for believing
\(\rP\_8\)
is constituted by \(s\)'s seeing that \(\rP\_8, s\)'s
epistemic situation is objectively good (i.e., \(s\)'s cognitive
faculties are working properly in a cooperative environment) and the
skeptical hypothesis ruled out by
\(\rQ\_8^\*\)
has not been epistemically motivated. For, in this case, \(s\) has a
reflectively accessible *factive* support for believing
\(\rP\_8\) that *entails*--and so sufficiently
supports--\(\rQ\_8^\*\). Thus, in this case, nothing stands in the
way of \(s\) competently deducing \(\rQ\_8^\*\) from \(\rP\_8\), thereby
coming to know \(\rQ\_8^\*\) on the basis of \(\rP\_8\).
If upon *deducing* one proposition from another, \(s\) comes to
justifiably believe
\(\rQ\_8^\*\)
for the first time, the inference from
\(\rP\_8\)
to \(\rQ\_8^\*\) presumably transmits *doxastic* justification.
(See the supplement on
Transmission of Propositional Justification *versus* Transmission of Doxastic Justification.)
It is more dubious, however, that when \(s\)'s support for
\(\rP\_8\) is constituted by \(s\)'s seeing that \(\rP\_8\),
*Moore\** is also transmissive of *propositional*
justification. For instance, one might contend that *Moore\** is
non-transmissive because it instantiates the *disjunctive
template* described in
Sect. 3.2
(cf. Wright 2002). To start with, \(\rP\_8\) entails \(\rQ\_8^\*\), so
(D1)
is satisfied. Let \(\rR^{\*}\_{\textit{Moore}}\) be the proposition
that this is no hand but \(s\) is a victim of a hallucination of a hand
held up before her face. Since \(\rR^{\*}\_{\textit{Moore}}\) would be
true if \(\rQ\_8^\*\) were false,
(D4)
is also satisfied. Furthermore, take the grounds of \(s\)'s
justification for \(\rP\_8\) to be \(s\)'s seeing that \(\rP\_8\).
Since this experience is, for \(s\), indistinguishable from a state
in which \(\rR^{\*}\_{\textit{Moore}}\) is true,
(D2)
is also satisfied. Finally, it might be argued that the proposition
that one is not hallucinating is a presupposition of the cognitive
project of learning about one's environment through perception.
It follows that \(\rR^{\*}\_{\textit{Moore}}\) is incompatible with a
presupposition of \(s\)'s cognitive project of learning about
one's environment through perception. Thus
(D3\(^{\*}\))
appears fulfilled too. So, *Moore\** won't transmit the
justification that \(s\) has for \(\rP\_8\) to \(\rQ\_8^\*\).
To resist this conclusion, a disjunctivist might insist that
*Moore\**, relative to \(s\)'s support for
\(\rP\_8\)
supplied by \(s\)'s seeing that \(\rP\_8\), doesn't always
instantiate the *disjunctive template* because
(D2)
isn't necessarily fulfilled (cf. Lockhart 2018). By invoking a
distinction drawn by Pritchard (2012a), one might contend that (D2)
isn't necessarily fulfilled because \(s\), though unable to
introspectively discriminate seeing that \(\rP\_8\) from hallucinating
it, may have evidence that favors the hypothesis that she is in the
first state, which makes the state reflectively accessible. For
Pritchard, this happens--as we have seen--when \(s\) sees
that \(\rP\_8\) in good epistemic conditions and the skeptical
conjecture that \(s\) is having a hallucination of a hand hasn't
been epistemically motivated. In this case, \(s\) can come to know
\(\rQ\_8^\*\)
by inference from \(\rP\_8\) even if she is unable to introspectively
discriminate one situation from the other.
Pritchard's thesis that, in good epistemic conditions,
\(s\)'s factive support for believing
\(\rP\_8\)
coinciding with \(s\)'s seeing that \(\rP\_8\) is reflectively
accessible is controversial (cf. Piazza 2016 and Lockhart 2018). Since
this thesis is essential to the contention that *Moore\** may
not instantiate the *disjunctive template* and may thus be
transmissive of propositional justification, the latter contention is
also controversial.
So far we have focused on perceptual justification. Now let's
switch to *religious* justification. Shaw (2019) aims to
motivate a view in religious epistemology--he contends that the
'theist in the street', though unaware of the traditional
arguments for theism, can find independent rational support for
believing that God exists through proofs for His existence. These
proofs are based on religious experiences and parallel in structure
Moore's proof of an external world interpreted as having the
premise supported by an experience.
A Moorean proof for the existence of God is one that deduces a belief
that God exists from what Alston (1991) calls a 'manifestation
belief' (M-belief, for short). An M-belief is a belief based
non-inferentially on a corresponding mystical experience. Typical
M-beliefs are about God's acts toward a given subject at a given
time: for example, beliefs about God's bringing comfort to me,
reproving me for some wrongdoing, or demonstrating His love for me,
and so on. Here is a Moorean proof for the existence of God:
*God*
\(\rP\_9\).
God is comforting me just now.
Therefore:
\(\rQ\_9\).
God exists.
Note that the 'good' case in which my experience that
\(\rP\_9\)
is veridical has a corresponding, subjectively indistinguishable,
'bad' case, in which it only seems to me that \(\rP\_9\)
because I'm suffering from a delusional religious experience. A
concern is, therefore, that *God* may instantiate the
*disjunctive template*. In this case, my experience as of
\(\rP\_9\) could actually justify \(\rP\_9\) only if I had independent
justification for
\(\rQ\_9\)
(cf. Pritchard 2012b). If so, *God* isn't
transmissive.
Shaw (2019) concedes that
*God*
is dialectically ineffective. To defend its transmissivity, he
appeals to religious epistemological disjunctivism, which says that an
M-belief that \(p\) can enjoy infallible rational support in virtue of
one's *pneuming* that \(p\), where this mental state is
both factive and accessible on reflection. Shaw intends
'pneuming that \(p\)' to stand as a kind of
religious-perceptual analogue to 'seeing that \(p\)' (cf.
Shaw 2016, 2019). When it comes to *God*, my belief that
\(\rP\_9\) is supposed to be justified by my pneuming that God is
comforting me just now. As we have seen in the case of
*Moore\**, a worry is that conceiving of the justification for
\(\rP\_9\) along epistemological disjunctivist lines may not suffice to
exempt *God* from the charge of instantiating the
*disjunctive template* and being thus non-transmissive.
### 4.2 McKinsey's Paradox
McKinsey (1991, 2002, 2003, 2007) has offered a reductio argument for
the incompatibility of first-person privileged access to mental
content and externalism about mental content. The privileged access
thesis roughly says that it is necessarily true that if \(s\) is
thinking that \(x\), then \(s\) can in principle know *a
priori* (or in a non-empirical way) that she is thinking that
\(x\). Externalism about mental content roughly says that predicates
of the form 'is thinking that \(x\)'--e.g., 'is
thinking that water is wet'--express properties that are
*wide*, in the sense that possession of these properties by
\(s\) logically or conceptually implies the existence of relevant
contingent objects external to \(s\)'s mind--e.g., water.
McKinsey argues that \(s\) may reason along these lines:
*Water*
\(\rP\_{10}\).
I am thinking that water is wet.
\(\rP\_{11}\).
If I am thinking that water is wet, then I have (or my linguistic
community has) been embedded in an environment that contains
water.
Therefore:
\(\rQ\_{10}\).
I have (or my linguistic community has) been embedded in an
environment that contains water.
*Water* produces an absurdity. If the privileged access thesis
is true, \(s\) knows
\(\rP\_{10}\)
*non-empirically*. If semantic externalism is true, \(s\)
knows
\(\rP\_{11}\)
*a priori* by conceptual analysis. Since \(\rP\_{10}\) and
\(\rP\_{11}\) do entail
\(\rQ\_{10}\)
and knowledge is presumably closed under known entailment, \(s\)
knows \(\rQ\_{10}\)--which is an *empirical*
proposition--by simply competently deducing it from \(\rP\_{10}\)
and \(\rP\_{11}\) and without conducting any *empirical
investigation*. Since this is absurd, McKinsey concludes that the
privileged access thesis or semantic externalism must be false.
One way to resist McKinsey's *incompatibilist* conclusion
that the privileged access thesis and externalism about mental content
cannot be true together is to argue that
*Water*
is non-transmissive. Since knowledge is presumably closed under known
entailment, it remains true that \(s\) cannot know
\(\rP\_{10}\)
and
\(\rP\_{11}\)
while failing to know
\(\rQ\_{10}\).
However, McKinsey's paradox originates from the stronger
conclusion--motivated by the claim that *Water* is a
deductively valid argument featuring premises knowable
non-empirically--that \(s\), by running *Water*, could
know *non-empirically* the *empirical* proposition
\(\rQ\_{10}\) that she or members of her community have had contact
with water. This is precisely what could not happen if *Water*
is non-transmissive: in this case \(s\) couldn't learn
\(\rQ\_{10}\) on the basis of her non-empirical justification for
\(\rP\_{10}\) and \(\rP\_{11}\), and her knowledge of the entailment
between \(\rP\_{10}\), \(\rP\_{11}\), and \(\rQ\_{10}\) (see mainly
Wright 2000, 2003,
2011).[19]
A first possibility to defend the thesis that *Water* is
non-transmissive is to argue that it instantiates the *disjunctive
template* (considered in
Sect. 3.2)
(cf. Wright 2000, 2003, 2011). If *Water* is non-transmissive,
\(s\) could acquire a justification for, or knowledge of,
\(\rP\_{10}\)
and
\(\rP\_{11}\)
only if \(s\) had an *independent* justification for, or
knowledge of,
\(\rQ\_{10}\).
(And to avoid the absurd result that McKinsey recoils from, this
independent justification should be empirical.) If this diagnosis is
correct, one need not deny \(\rP\_{10}\) or \(\rP\_{11}\) to reject the
intuitively false claim that \(s\) could know the empirical
proposition \(\rQ\_{10}\) in virtue of only non-empirical
knowledge.
To substantiate the thesis that *Water* instantiates the
*disjunctive template* one should first emphasize that the kind
of externalism about mental content underlying \(\rP\_{10}\) is
compatible with the possibility that \(s\) suffers from an illusion of
content. Were this to happen with \(\rP\_{10}, s\) would seem to
introspect that she believes that water is wet whereas there is
nothing like that content to be believed by \(s\) in the first
instance. Consider:
\(\rR\_{\textit{water}}\).
'water' refers to *no* natural kind so that
there is *no* content expressed by the sentence 'water is
wet'.
\(s\)'s state of having introspective justification for
believing
\(\rP\_{10}\)
is arguably subjectively indistinguishable from a situation in which
\(\rR\_{\textit{water}}\) is true. Thus condition
(D2)
of the *disjunctive template* is met. Moreover,
\(\rR\_{\textit{water}}\) appears incompatible with an obvious
presupposition of \(s\)'s cognitive project of attaining
introspective justification for believing \(\rP\_{10}\), at least if
the content that water is wet embedded in \(\rP\_{10}\) is constrained
by \(s\) or her linguistic community having been in contact with
water. Thus condition
(D3\(^{\*}\))
is also met. Furthermore, \(\rP\_{10}\) entails \(\rQ\_{10}\) (when
\(\rP\_{11}\) is in background information). Hence, condition
(D1)
is fulfilled. If one could also show that
(D4)
is satisfied in *Water*, in the sense that if \(\rQ\_{10}\)
were false \(\rR\_{\textit{water}}\) would be true, one would have
shown that the *disjunctive template* is satisfied by
*Water*. Wright (2000) takes (D4) to be fulfilled and concludes
that the *disjunctive template* is satisfied by
*Water*.
Unfortunately, the claim that
(D4)
is satisfied in *Water* cannot easily be vindicated (cf.
Wright 2003). Condition (D4) is satisfied in *Water* only if it
is true that if \(s\) (or \(s\)'s linguistic community) had not
been embedded in an environment that contains *water*, the term
'water' would have referred to no natural kind. This is
true only if the closest possible world \(w\) in which this
counterfactual's antecedent is true is like Boghossian's
(1997) *Dry Earth*--namely, a world where no one has ever
had any contact with any kind of watery stuff, and all apparent
contacts with it are always due to multi-sensory hallucination. If
\(w\) is not *Dry Earth*, but Putnam's *Twin
Earth*, however, the counterfactual turns out false, as in this
possible world people usually have contact with some other watery
stuff that they call 'water'. So, in this world
'water' refers to a natural kind, though not to
H2O. Since determining which of *Dry Earth* or
*Twin Earth* is modally closer to the actual world (supposing
\(s\) is in the actual world)--and so determining whether (D4) is
satisfied in *Water*--is a potentially elusive task, the
claim that *Water* instantiates the *disjunctive
template* appears less than fully
warranted.[20]
An alternative dissolution of McKinsey's paradox--also based on a
diagnosis of non-transmissivity--seems to be available if one
considers the proposition that
\(\rQ\_{11}\).
\(s\) (or \(s\)'s linguistic community) has been embedded
in an environment containing some *watery substance* (cf.
Wright 2003 and 2011).
This alternative strategy assumes that
\(\rQ\_{11}\),
rather than
\(\rQ\_{10}\),
is a presupposition of \(s\)'s cognitive project of attaining
introspective justification for
\(\rP\_{10}\).
Even if *Water* doesn't instantiate the *disjunctive
template*, a new diagnosis of what's wrong with
McKinsey's paradox could rest on the claim that the different
argument yielded by expanding *Water* with
\(\rQ\_{11}\)
as conclusion--call it *Water\**--instantiates the
*disjunctive template*. If
\(\rR\_{\textit{water}}\)
is the same proposition as above, it is easy to see that
*Water\** satisfies conditions
(D4),
(D2), and
(D3\(^{\*}\))
of the *disjunctive template*. Furthermore
(D1)
is satisfied at least in the sense that it seems *a priori*
that
\(\rP\_{10}\)
via
\(\rQ\_{10}\)
entails
\(\rQ\_{11}\)
(if
\(\rP\_{11}\)
is in background information) (cf. Wright 2011). On this novel
diagnosis of non-transmissivity, what would be paradoxical is that
\(s\) could earn justification for \(\rQ\_{11}\) in virtue of her
non-empirical justification for \(\rP\_{10}\) and \(\rP\_{11}\) and her
knowledge of the *a priori* link from \(\rP\_{10}\),
\(\rP\_{11}\) via \(\rQ\_{10}\) to \(\rQ\_{11}\). If one follows
Wright's (2003)
suggestion[21]
that \(s\) is *entitled* to accept \(\rQ\_{11}\)--namely,
the presupposition that there is a watery substance that provides
'water' with its extension--*Water* becomes
*innocuously* transmissive, and the apparent paradox
surrounding *Water* vanishes. This is so at least if one grants
that it is *a priori* that water is the watery stuff of our
actual acquaintance, once it is presupposed that there is any watery
stuff of our actual acquaintance. For useful criticism of responses of
this type to McKinsey's paradox see Sainsbury (2000), Pritchard
(2002), Brown (2003, 2004), McLaughlin (2003), McKinsey (2003), and
Kallestrup (2011).
## 1. Life and Career
Watsuji Tetsuro (1889-1960) was one of a small group of
philosophers in Japan during the twentieth century who brought
Japanese philosophy to the attention of the world. In terms of his
influence, exemplary scholarship, and originality he ranks with
Nishida Kitaro, Tanabe Hajime, and Nishitani Keiji. The latter
three were all members of the so-called Kyoto School, and while
Watsuji is not usually thought of as being a member of this school,
the influence and tone of his work clearly shows him to be a
like-minded thinker.
The Kyoto School, of which Nishida was the pioneering founder, is so
identified because of its common focus: Nishida's important
work, East/West comparative philosophy, and an on-going attempt to
give expression to Japanese ideas and concepts by means of the clarity
afforded by Western philosophical tools and techniques. Whereas the
general emphasis of the Kyoto School is on epistemology, metaphysics
and logic, Watsuji's primary focus came to be ethics, although
his earlier studies of Schopenhauer, Nietzsche, and Kierkegaard ranged
far beyond the ethical. And while his impressive output (an original
nineteen volume collected works has been expanded by Yuasa Yasuo
(1925-2005), his former student who became one of Japan's
leading contemporary philosophers, to twenty-seven volumes) includes
wide-ranging studies in the history of both Eastern and Western
philosophy, his focus was not on Nishida and Tanabe, but on
reconstructing the origins of Japanese culture more or less from the
ground up. Hence, while many of his ideas, e.g. the centrality of the
concept of 'nothingness' or 'emptiness,' and
dialectical contradiction, show the influence of the Kyoto School
thinkers, his own creative approach led him to formulate them
differently, and to apply them to ethical, political, and cultural
issues. Nevertheless, given the overlap of concepts, it would be a
mistake to exclude him from some sort of honorary membership in the
Kyoto School (see the entry on the
Kyoto School).
And given that there never was an actual 'school' at all,
it is enough to say that Watsuji was an active force in Japanese
philosophy in the twentieth century, along with Nishida, Tanabe, and
later on, Nishitani.
Watsuji was born in 1889, the second son of a physician, in Himeji
City, in Hyogo Prefecture. He entered the prestigious First Higher
School in Tokyo (now Tokyo University) in 1906, graduating in 1909.
Later that same year, he entered Tokyo Imperial University, where his
specialization was philosophy, in the Faculty of Literature. He
married Takase Teru in 1912, and a daughter, Kyoko was born to
them in 1914.
As a student at Himeji Middle School, he displayed a passion for
literature, especially Western literature, and "is said to have
been fired with the ambition to become a poet like Byron"
(Furukawa 1961, 217). He wrote stories and plays, and was coeditor of
a literary magazine. When he entered the First Higher School in Tokyo,
even though he had decided to pursue philosophy seriously, he remained
"as deeply immersed in Byron as ever and attracted chiefly to
things literary and dramatic" (Furukawa 1961, 218). He is said
to have had several excellent teachers at this school, and his
interests were considerably expanded in the arts and in literature.
The Headmaster, Nitobe Inazo, was of particular importance. James
Kodera writes that "Nitobe's book on *Bushido, The
Soul of Japan*, began to awaken Tetsuro not only to the
Eastern heritage but also to the study of ethics" (Kodera 1987,
6). Still, while this early glimpse into the forgotten depths of his
own culture may have planted the seeds of later inquiries and
insights, Watsuji gained sustenance and insight at this time in his
academic career from his reading in Western Romanticism and
Individualism.
Without a doubt, the most important influence in Watsuji's early
years was the brilliant novelist Natsume Soseki, considered a
foremost interpreter of modern Japan. James Kodera writes that Watsuji
found in Soseki "a human being struggling with the human
condition in the particularities of early modern Japan, suddenly
exposed to the West and yet struggling to sustain his own
identity" (Kodera 1987, 6). Soseki was beginning to abandon
his unqualified admiration of Western individualism, and was embarking
on a critique of both individualism and the modern culture of the
Western world. It was not only a move away from the values which he
had come to associate with Western culture, but a move back to the
values of his own Japanese cultural tradition. Watsuji never met
Soseki while at the University of Tokyo, but did meet him in 1913
and became a member of a study group that met at Soseki's
home. Soseki died three years later, when Watsuji was 27, and
upon his death, Watsuji began to compose a lengthy reminiscence of
him. His reflections were published in 1918 (in his *Guzo
saiko*), and they marked Watsuji's own transformation
from advocate of Western ways to critic of the West, turning toward a
reconsideration of Japanese and Eastern cultural resources. It is
perhaps telling that in a series that Soseki wrote for a popular
Tokyo newspaper during 1912-1913, "Soseki depicted
the plight of the modern individual as one of painful loneliness and
helplessness" (Kodera 1987, 6). He saw egoism as the source of
the plight, and has Ichiro, the hero of the serialized
novel *Wayfarer*, conclude that "there is no bridge
leading from one man to another; loneliness, loneliness, thou [are]
mine home" (Kodera 1987, 6). A solution to the modern
predicament of estrangement, loneliness and helplessness came to be
found, for Watsuji, in our social interconnections. What modern
Western society was losing with great rapidity was still evident in
Japanese society, where individuality was tempered with a strong
social consciousness, and this more balanced sense of self could be
found in the earliest of Japanese cultural documents.
Soseki's proposed solution was to "follow Heaven,
forsake the self" (Kodera 1987, 6), but Watsuji's more
humanistic remedy for an egoism which inevitably led to estrangement,
was to reinvest in one's social interconnectedness. We are not
only by nature social beings, but we inevitably come into the world
already in relationship: with our language and culture, traditions and
expectations, parents, caregivers, and teachers. It is a myth of
abstraction that we come into the world as isolated egos.
Watsuji graduated from Tokyo University in 1912, but not without a
frantic effort to write a second thesis in a very short span of time,
because the topic of his first thesis was deemed unacceptable.
Furukawa Tetsushi notes that "at the time the atmosphere in the
Faculty of Philosophy was inimical to the study of a poet-philosopher
like Nietzsche. Consequently, Watsuji's 'Nietzschean
Studies' was rejected as a suitable graduation thesis"
(Furukawa 1961, 219). In its place he was obliged to substitute a
second thesis on Schopenhauer, which he entitled
"Schopenhauer's Pessimism and Theory of Salvation."
This thesis "was presented only just in time" for him to
graduate (Furukawa 1961, 219). Both theses were eventually published.
Watsuji enrolled in the graduate school of Tokyo Imperial University
in 1912, the same year that he completed his undergraduate studies.
His studies of Schopenhauer and Nietzsche in 1912, and of Kierkegaard
in 1915 provide ample evidence of his interest in and competence in
Western philosophy. At the same time, he continued to study the
Romantic poets, Byron, Shelley, Tennyson and Keats, being torn between
his literary and philosophical interests. In any case, perhaps because
his literary attempts were "complete failures," he gave
"up literary creation and devoted all his exertions to
the writing of critical essays and philosophical treatises"
(Furukawa 1961, 218). It was not long before he was in demand as a
teacher, first as a lecturer at Toyo University in 1920, an
instructor at Hosei University in 1922, an instructor at
Keio University in 1922-23, and at the Tsuda Eigaku-juku
from 1922-24 (Dilworth et al. 1998, 221). But his real break came in
1925, when Nishida Kitaro and Hatano Seiichi offered him the
position of lecturer in the Philosophy Department of the Faculty of
Literature at Kyoto Imperial University, where he was to take on the
responsibility for the courses in ethics. This put him at the center
of the developing Kyoto School philosophy. As was the custom at the
time with promising young scholars, he was sent to Germany in 1927 on
a three-year scholarship. His reflections on that sojourn in Europe
became the substance of his highly successful book,
*Fudo*, which has been translated into English as
*Climate and Culture*. In fact, Watsuji spent only fourteen
months in Europe, being forced to return to Japan in the summer of
1928 because of the death of his father. In 1929, he also took on a
part-time position at Ryukoku University, and in 1931, he became
a professor at Kyoto Imperial University. In 1934, he was appointed
professor in the Faculty of Literature at Tokyo Imperial University,
and he continued to hold this important post until his retirement in
1949. Perhaps part of the reason that he is so often viewed as someone
on the periphery of the Kyoto School philosophic tradition has to do
with his work in Tokyo, geographically removed from the center of
discussion and dialogue in Kyoto.
One cannot help but be impressed by the extent of his published
output, as well as by its remarkable diversity, spanning literature,
the arts, philosophy, cultural theory, as well as Japanese, Chinese,
Indian, and Western traditions. In addition to his studies of
Schopenhauer (1912), Nietzsche (1913), and Kierkegaard (1915), in
1919, he published *Koji junrei* (*Pilgrimages to the
Ancient Temples in Nara*), his study of the temples and artistic
treasures of Nara, a work that became exceptionally popular. In 1920
came *Nihon kodai bunka* (*Ancient Japanese Culture*), a
study of Japanese antiquity, including Japan's most ancient
writings, the *Kojiki* and *Nihongi*. In 1925, he published the
first volume of *Nihon seishinshi kenkyu* (*A Study of
the History of the Japanese Spirit*), with the second volume
appearing in 1935. This study contained his investigation of
Dogen's work (in *Shamon Dogen* [*The Monk
Dogen*]), and it can be said that it was Watsuji who
single-handedly brought Dogen's work out of nearly total
obscurity, into the forefront of philosophical discussion. Also, in
1925, he published *Kirisutokyo no bunkashiteki igi*
(*The Significance of Primitive Christianity in Cultural
History*), followed by *Genshi Bukkyo no jissen
tetsugaku* (*The Practical Philosophy of Primitive
Buddhism*) in 1927.
Returning from Europe in 1928, he continued at Kyoto Imperial
University until his appointment as professor at Tokyo Imperial
University in 1934. In 1929, he edited Dogen's
*Shobogenzo zuimonki*. Watsuji received his
degree of Doctor of Letters, based on his *The Practical Philosophy
of Primitive Buddhism* in 1932. He wrote as well *A Critique of
Homer* at about this time, a work that was not published until
1946. Also in 1932 he wrote *Porisuteki ningen no rinrigaku*
(*The Ethics of the Man of the Greek Polis*), which was not
published until 1948. In 1936, he completed his *Koshi*
(*Confucius*). In 1935, he completed the important *Ethics as the
Study of Man* (*Ningen no gaku to shite no rinrigaku*), to
be followed by his three volume expansion of his views on ethics,
*Rinrigaku* (*Ethics*) appearing successively in 1937,
1942, and 1949.
In 1938, he published *Jinkaku to jinruisei* (*Personality
and Humanity*), and in 1943, he published his two volume
*Sonno shiso to sono dento* (*The Idea of
Reverence for the Emperor and the Imperial Tradition*). This
latter publication is one of the works for which Watsuji was branded a
right-wing, reactionary thinker. In 1944, he published a volume of two
essays, *Nihon no shindo* (*The Way of the Japanese [Loyal]
Subject*) and *Amerika no kokuminsei* (*The Character of the
American People*), and in 1948, *The Symbol of National Unity*
(*Kokumin togo no shocho*). His last
publications were the best-selling *Sakoku--Nihon no
higeki* (*A Closed Country--The Tragedy of Japan*) in
1950, *Uzumoreta Nihon* (*Buried Japan*) in 1951,
*Nihon rinri shisoshi* (*History of Japanese Ethical
Thought*) in two volumes in 1953, *Katsura rikyu: seisaku
katei no kosatsu* (*The Katsura Imperial Villa:
Reflections on the Process of Its Construction*) in 1955, and
*Nihon geijutsu kenkyu* (*A Study of Japanese
Art*), also published in 1955.
By any standard, this is an impressive array of major publications,
several of them extremely influential both within the world of
scholarship and among the general public. One cannot but be
impressed by the breadth of Watsuji's interests, by the depth of
his scholarship, and by his ability as a remarkably clear and graceful
writer. Yet his *Fudo* (*Climate and Culture*),
and his studies in ethics, particularly his *Rinrigaku* stand
out as his two most influential publications.
## 2. The Philosophy of Watsuji
The foundations of Watsuji's thought were the extensive studies
in Western philosophy that he engaged in during his earlier years, up
until 1917 or 1918, followed by his extensive studies in Japanese and
Far Eastern philosophy and culture. Upon his return from Europe in
1928, and as a direct result of being among the very first to read
Martin Heidegger's *Sein und Zeit*, Watsuji began work on
*Fudo* (*Fudo ningen-gakuteki kosatsu*),
translated into English as *Climate and Culture*.
'*Fudo*' means "wind and
earth...the natural environment of a given land" (Watsuji
1988 [1961], 1). We are all inescapably environed by our land, its
geography and topography, its climate and weather patterns,
temperature and humidity, soils and oceans, its flora and fauna, and
so on, in addition to the resultant human styles of living, related
artifacts, architecture, food choices, and clothing. This is but a
partial list, but even this sketchy list makes clear that Watsuji is
calling attention to the many ways in which our environment, taken in
the broad sense, shapes who we are from birth to
death. Heidegger's emphasis was on time and the individual, and
too little, according to Watsuji, on space and the social dimensions
of human beings. When we expand our sense of climate to include not
only the natural geographic setting of a people and the region's
weather patterns, but also the social environment of family,
community, society at large, lifestyle, and even the technological
apparatus that supports community survival and interaction, then we
begin to glimpse what Watsuji had in mind by climate, and to see how a
mutuality of influence runs from human to environment and from
environment to human being, allowing for the continued evolution of
both. Climate is the entire interconnected network of influences that
together create an entire people's attitudes and
values. "History and nature," remarks Yuasa Yasuo,
"like man's mind and body, are in an inseparable
relationship" (Yuasa 1996, 168). Culture is that mutuality of
influence, recorded over eons of time past, which continues to affect
the cultural present of a people. Who we are is not simply what we
think, or what we choose as individuals in our aloneness, but is also
the result of the climatic space into which we are born, live, love,
and die.
Even before his travels in Europe (1927-28), Watsuji was
convinced that one's environment was central in shaping persons
and cultures. In *Guzo Saiku* (*Revival of Idols*),
published in 1918, he had concluded that "all inquiries into the
culture of Japan must in their final reduction go back to the study of
her nature" (Furukawa 1961, 214). From about 1918 on,
Watsuji's focus became the articulation of what it is that
constitutes the Japanese spirit. *Ancient Japanese Culture*,
which he wrote in 1920, is an attempt to revitalize Japan's
oldest Chronicles (the *Kojiki* and *Nihongi*) using
modern literary techniques as well as newly available archaeological
evidence. He treated these collections of ancient stories, legends,
poems, songs, and myths as literature, rather than as sacred
scripture. Quoting from the *Kojiki* the imaginative story of
creation, he then glosses this rich account: "'When the
land was still young and as a piece of floating grease, drifting about
as does a jellyfish, there came into existence a god, issuing from
what grew up like a reed-bud...' [is] a superb...image
of a piece of grease floating about without definite form like a thin,
muddy substance far thicker than water, yet not solid, and of the
image of a soft jellyfish with a formless form drifting on the water,
and of the image of exuberant life of a reed-bud sprouting powerfully
out of the muddy water of a swampy marsh. There is no other
description that I know of that so graphically depicts the state of
the world before creation" (Furukawa 1961, 221-22). It is
a concrete and graphic depiction of the formlessness out of which all
things arose, and is perhaps an early attempt at talk of nothingness,
a central idea of the Kyoto School, and later on for Watsuji as well.
Not only can one discern his early interest in the importance of
nothingness in his thinking, but there is also indication that he was
sensitive to the non-dual immediacy of experience, which Nishida
described as *pure experience* (see Section 2.1 of the entry on
Nishida Kitaro).
As though discovering the roots of Nishida's pure experience,
he writes that "the ancient poets, whose feelings still retain a
virgin simplicity as a single undivided experience, are not yet
troubled by this division of the subjective and the objective"
(as found in Japan's oldest collection of poems, the
*Man'yoshu*) (Furukawa 1961, 224). Natural
beauty, for the poet, is as yet an undivided experience, a pure
experience which is prior to the subject/object dichotomy, a total
immersion in the moment without thought or reflection, in an ecstasy
of feeling. In his Preface to *Revival of Idols* he warns that
he intends not "a mere 'revival of the old,'"
but rather what he wished to achieve "is nothing more or nothing
less than to advance such life as lives in the everlasting New"
(Furukawa 1961, 227). To revive the old, then, is to cause it to shine
anew, but in the light of contemporary issues and concerns. It is to
revive its meaning for us, here and now, and not merely to show that
it had meaning in the past. As part of one's cultural climate,
the past inevitably operates still in the present of every Japanese.
It was this still active element that Watsuji sought to uncover, and
to express in such a way as to allow others to share in the present
infused with the past in the consciousness of their own lives, rather
than in some unconscious and only partially developed way. Similarly,
he attempted to reveal the relevance of primitive Christianity,
primitive Buddhism, and Confucianism as cultural inheritances which
continue to shape people in various 'climates' around the
world. He made explicit what was implicit, and he sought to revitalize
the active ingredients of cultural traditions for right
'now'.
In the Preface to *Climate and Culture*, Watsuji states that
when we come to consider both the natural and the human cultural
climate in which we find ourselves, we render both nature and culture
as "already objectified, with the result that we find ourselves
examining the relation between object and object, and there is no link
with subjective human existence" (Watsuji 1961, v). Watsuji
distinguishes his interpretation of *fudo*,
"literally 'Wind and Earth' in Japanese"
(Watsuji 1961, 1) from what he maintained was its then conventional
understanding simply as a term used for natural environment --
something that is a resource to be used, or an object separate from or
merely alongside our being-in-the-world. The phenomena of climate must
be seen "as expressions of subjective human existence and not of
natural environment" (Watsuji 1961, v). He explains this using
the example of the phenomenon of *cold*, a single climatic
feature. Ordinarily, we think of 'us' and
'coldness' as objectively distinct and separate from us.
Phenomenologically, however, we only come to know that it is cold
after we actually feel it as cold. Coldness does not press upon us
from the outside; rather, we are already out in the cold. As Heidegger
emphasized, we '*ex-istere*' outside of ourselves,
and in this case, in the cold. It is not the cold which is outside of
us, but we who are already out in the cold. And we feel this cold in
common with other people. We all talk about the weather. To
*existere*, then, means that we experience the cold with other
'I's. We experience coldness within ourselves, with
others, and "in relation to the soil, the topographic and scenic
features and so on of a given land" (Watsuji 1961, 5). In a
telling, and poetically adept passage, he writes that a cold wind may
be experienced as a sharp mountain blast, or a dry wind sweeping
through a city at the end of winter, or "the spring breeze may
be one which blows off cherry blossoms or which caresses the
waves" (Watsuji 1961, 5). All weather is as much
'subjective' as it is 'objective.'
Because of the cold, we must decide upon sources of heat for our
houses, design and create appropriate clothing (for each of the
seasons and conditions), seek proper ventilation, defend ourselves and
our houses against special conditions (floods, monsoon rains,
typhoons, tornadoes, earthquakes, volcanic eruptions, tsunamis, etc.),
counteract excessive humidity in some way, and we must learn how to
grow our food and eat in ways compatible with the climatic conditions,
and our capabilities of farming and gathering. We apprehend ourselves
in climate, revealing ourselves to ourselves as both social and
historical beings. Here is the crux of Watsuji's insight, and of
his criticism of Heidegger's *Sein und Zeit*: to
emphasize our being in time is "to discover human existence on
the level only of individual consciousness" (Watsuji 1961, 9).
As temporal beings, we can exist alone, in isolative reflection. On
the other hand, if we recognize the 'dual character' of
human beings as existing in both time and space, as both individuals
and social beings, "then it is immediately clear that space must
be regarded as linked with time" (Watsuji 1961, 9). Space is
inextricably linked with time, and the individual and social aspects
of ourselves are inextricably linked as well, and our history and
culture are linked to our climate. And while change is constant, and
hence all structures are continuously evolving, this evolution is
inextricably linked to our history, traditions, and cultural forms of
expression. We discover ourselves in climate, and it is because of
climate that we have come to express ourselves as we have:
"climate...is the agent by which human life is
objectivised" (Watsuji 1961, 14). In order to convey Watsuji's
notion of *fudo* more faithfully, Augustin Berque suggests
"milieu" as a translation preferable to "climate."
*Milieu* more accurately captures the mutual
co-constituting at the heart of *fudo*. For Watsuji, the
notion of *fudo* is supposed to suggest that the spatial,
environmental, and collective aspects of human existence are all
intertwined (Berque, 1994, 498).
Climate, or milieu, serves as the always present background to what
becomes the foreground focus for Watsuji, the study of Japanese ethics
in practice and in theory. Ethics is the study of the ways in which
men and women, adults and children, the rulers and those ruled, have
come to deal with each other in their specific climatic conditions.
Ethics is the pattern of proper and effective social interaction.
## 3. Ethics
Watsuji's objection to individualistic ethics, which he
associated with virtually all Western thinkers to some degree, is that
it loses touch with the vast network of interconnections that serves
to make us human. We are individuals inescapably immersed in the
space/time world, together with others. Individual persons, if
conceived of in isolation from their various social contexts, do not
and cannot exist except as abstractions. Our way of being in the world
is an expression of countless people and countless actions performed
in a particular 'climate,' which together have shaped us
as we are. Indeed a human being is a unified structure of past,
present, and future; each of us is an intersection of past and future,
in the present 'now.' There is no possibility of the
isolation of the ego, and yet many write as though there were. They
are able to make a case for it, in part because they ignore the
spatiality of *ningen* (human being), focussing on
*ningen*'s temporality. Watsuji believed that it was far
more difficult to consider a human being as strictly an individual,
when thought of as a being in space. Spatially, we move in a common
field, and that field is cultural in that it is criss-crossed by roads
and paths, and even by forms of communication such as messenger
services, postal routes, newspapers, flyers, broadcasts over great
distances, all in addition to everyday polite conversation. Watsuji
makes much of the legend of the isolated and hopelessly marooned
Robinson Crusoe, for even Robinson Crusoe continued to be culturally
connected, continuing to speak an inherited language, and improvising
housing, food, and clothing based on past social experiences, and
continuing to hope for rescue at the hands of unknown others. Watsuji
rejects all such 'desert island' constructions as mere
abstractions. Thomas Hobbes imagined a state of nature in which we are
radically discrete individuals, at a time before significant social
interconnections have been established. Watsuji counters that we are
inescapably born into social relationships, beginning with one's
mother, and one's caregivers. Our very beginnings are etched by
the relational interconnections which keep us alive, educate us, and
initiate us into the proper ways of social interaction.
At the center of Watsuji's study of Japanese ethics is his
analysis of the human person, in Japanese, *ningen*. In his
*Rinrigaku*, he affirms that ethics is, in the final analysis,
the study of human persons. Offering an etymological analysis, as he
does so often, he displays the important complexity in the meaning of
*ningen*. *Ningen* is composed of two characters,
*nin*, meaning 'person' or 'human
being,' and *gen*, meaning 'space' or
'between.' He cautions that it is imperative to recognize
that a human being is not just an individual, but is also a member of
many social groupings. We are individuals, and yet we are not just
individuals, for we are also social beings; and we are social beings,
but we are not just social beings, for we are also individuals. Many
who interpret Watsuji forget the importance which he gave to this
balanced and dual-nature of a human being. They read the words, but
then go on to argue that he really gives priority to the
collectivistic or social aspect of what it means to be a human being.
That such an imbalance often occurs in Japanese society may be the
reason for this conclusion. Yet it does not fit Watsuji's
theoretical position, which is that we are, at one and the same time,
both individual and social. In *A Study of the History of the
Japanese Spirit* (1935) Watsuji cautions that "...the
communion between man and man does not mean their becoming merely one.
It is only through the fact that men are unique individuals that a
cooperation between 'man and man' can be realized"
(Watsuji 1935, 112). The tension between one's individual and
one's social nature must not be slackened, or else the one is
likely to overwhelm the other. He makes this point even clearer in
discussing the creation of *renga* poetry, in the same volume.
*Renga* poems are not created by a single individual but by a
group of poets, with each individual verse linked to the next, and
each verse the creation of a single individual, and yet each must
cohere with the 'poetic sphere' as a whole. Watsuji
concludes, "if there are self-centered persons in the company, a
certain 'distortion' will be felt and group spirit itself
will not be produced. When there are people, who, lacking
individuality, are influenced only by others' suggestions, a
certain 'lack of power' will be felt, and a creative
enthusiasm will not appear. It is by means of attaining to Nothingness
while each remains individual to the last, or in other words, by means
of movements based on the great Void by persons each of whom has
attained his own fulfilment, that the company will be complete and
interest for creativity will be roused" (Watsuji 1935, 113).
Individuality is not, and must not be lost, else the balance is
destroyed, and creativity will not effectively arise. What is required
is that we become selfless, no longer self-centered, and open to the
communal sense of the whole group or society. It is a sense of
individuality that is aware of social, public interconnections.
One expresses one's individuality by *negating* the
social group or by rebelling against various social expectations or
requirements. To be an individual demands that one negate the
supremacy of the group. On the other hand, to envision oneself as a
member of a group is to negate one's individuality. But is this
an instance of poor logic? One can remain an individual and as such
join as many groups as one wishes. Or one can think of oneself as an
individual and yet as a parent, a worker, an artist, a theatre goer,
and so forth. Watsuji understood this, but his argument is that it is
possible to think in such ways only if one has already granted logical
priority to the individual *qua* individual. Whatever group one
belongs to, one belongs to it as an individual, and this individuality
is not quenchable, except through death, or inauthenticity.
Nevertheless, Watsuji's conception of what he calls the
'negation of negation' has a quite different, and perhaps
deeper emphasis. To extricate ourselves from one or another
socio-cultural inheritance, perhaps the acceptance of the Shinto
faith, one has to rebel against this socio-cultural form by affirming
one's individuality in such a way as to negate its overt
influence on oneself. This is to negate an aspect of one's
history by affirming one's individuality. But the second
negation occurs when one becomes a truly ethical human being, and one
negates one's individual separateness by abandoning one's
individual independence from others. What we have now is a forgetting
of the self, as Dogen urged ("to study the way is to study
the self, to study the self is to forget the self, to forget the self
is to become enlightened by all things"), which yields a
'selfless' morality. To be truly human is not the
asserting of one's individuality, but an annihilation of
self-centeredness such that one is now identified with others in a
nondualistic merging of self and others. Benevolence or compassion
results from this selfless identification. This is our authentic
'home ground,' and it rekindles our awareness of our true
and original nature. This home ground he calls
'nothingness,' about which more will be said below.
Watsuji's analysis of *gen* is of equal interest. He
makes much of the notion of 'betweenness,' or
'relatedness.' He traces *gen* (*ken*) back
to its earlier form, *aida* or *aidagara*, which refers
to the space or place in which people are located, and in which the
various crossroads of relational interconnection are established.
Watsuji's now famous former student, Yuasa Yasuo, observes that
"this betweenness consists of the various human relationships of
our life-world. To put it simply, it is the network which provides
humanity with a social meaning, for example, one's being an
inhabitant of this or that town or a member of a certain business
firm. To live as a person means...to exist in such
betweenness" (Yuasa 1987, 37). As individuals, we are private
beings, but as social beings we are public beings. We enter the world
already within a network of relationships and obligations. Each of us
is a nexus of pathways and roads, and our betweenness is already
etched by the natural and cultural climate that we inherit and live
our lives within. With the social aspect of self being more
foregrounded in Japan than in the West (especially at the time Watsuji
was writing), it is imperative, therefore, that one know how to
navigate these relational waters successfully, appropriately, and with
relative ease and assurance. The study of these relational
navigational patterns--between the individual and the family,
self and society, as well as one's relationship to the
milieu--is the study of ethics.
Watsuji usually writes of *ningen sonzai*, and *sonzai*
(existence) is composed of two characters, *son* (which means
to preserve, to sustain over time), and *zai* (to stay in
place, and in this case, to persevere in one's relationships).
*Ningen sonzai*, then, refers to human nature as individual yet
social, private as well as public, with our coming together in
relationship occurring in the betweenness between us, which
relationships we preserve and nourish to the fullest. Ethics has to do
with the ways in which we, as human beings, respect, preserve, and
persevere in the vast complexity of interconnections which etch
themselves upon us as individuals, thereby forming our natures as
social selves, and providing the necessary foundation for the creation
of cooperative and workable societies.
The Japanese word for ethics is *rinri*, which is composed of
two characters, *rin* and *ri*. *Rin* means
'fellows,' 'company,' and specifically refers
to a system of relations guiding human association. *Ri* means
'reason,' or 'principle,' the rational
ordering of human relationships. These principles are what make it
possible for human beings to live in a cooperative community. Watsuji
refers to the ancient Confucian patterns of human interaction as
between parent and child, lord and vassal, husband and wife, young and
old, and friend and friend. Presumably, one also acquires a sense of
the appropriate and ethical in all other relationships as one grows to
maturity in society. If enacted properly these relationships, which
occur in the betweenness between us, serve as the oil which lubricates
interaction with others in such a way as to minimize abrasive
occurrences, and to maximize smooth and positive relationships. One
can think of the betweenness between each of us as a *basho*,
an empty space, in which we can either reach out to the other in order
to create a relationship of positive value, or to shrink back, or to
lash out, making a bad situation worse. The space is pure potential,
and what we do with it depends on the degree to which we can encounter
the other in a fruitful and appropriate manner in that betweenness.
Nevertheless, every encounter is already etched with the cultural
traditions of genuine encounter: ideally, positive expectation, good
will, open-heartedness, cheerfulness, sincerity, fellow-feeling, and
availability. Ethics "consists of the laws of social
existence" writes Watsuji (Watsuji 1996, 11).
## 4. Emptiness
The annihilation of the self, as the negation of negation
"constitutes the basis of every selfless morality since ancient
times," asserts Watsuji (Watsuji 1996, 225). The negating of the
group or society, and the emptying of the individual in
Watsuji's sense of the negation of each by the other pole of
*ningen*, makes evident that both are ultimately
'empty,' causing one to reflect upon that which is
ultimate, and at the base of both one's individuality and the
groups with which one associates. The losing of self is a returning to
one's authenticity, to one's home ground as that source
from which all things derive, and by which they are sustained. It is
the abandonment of the self as independent which paves the way for the
nondual relation between the self and others that terminates in the
activity of benevolence and compassion through a unification of minds.
The ethics of benevolence is the development of the capacity to
embrace others as oneself or, more precisely, to forget one's
self such that the distinction between the self and other does not
arise in this nondualistic awareness. One has now abandoned
one's self, one's individuality, and become the authentic
ground of the self and the other as the realization of absolute
totality. Ethics is now a matter of spontaneous compassion,
spontaneous caring, and concern for the whole. This is the birth of
selfless morality, for which the only counterparts in the West are the
mystical traditions and perhaps some forms of religiosity in which it
is God who moves in us, and not we ourselves. The double negation
referred to earlier whereby the individual is negated by the group
aspect of self, and the group aspect is in turn negated by the
individual aspect, is not to be taken as a complete negation that
obliterates that which is negated. The negated are preserved, else
there would be no true self-contradiction. This robust sense of the
importance of self-contradiction shares much with the more developed
sense of the identity of self-contradiction about which Nishida said
so much (see the entry on
Nishida Kitaro).
What is stressed by both thinkers is that some judgments of logical
contradiction are at best penultimate judgments, which may point us in
the direction of a more comprehensive and accurate understanding of
our own experience. Watsuji refers to '*wakaru*'
(to understand), which is derived from '*wakeru*'
(to divide); and in order to understand, one must already have
presupposed something whole, that is to say, a system or unity. For
example, in self-reflection, we make our own self, *other*. Yet
the distinction reveals the original unity, for the self is other as
objectified and of course is divided from the originally unified self.
To think of a thing is to distinguish it from something else, and yet
in order to make such a distinction, the two must already have had
something in common. Thus, to emphasize the contradiction is to plunge
into the world as many; to emphasize the context, or background, or
matrix is to plunge into the world as one. Readers will be familiar
with the logical formulation, often encountered in Zen but ubiquitous
in Buddhism generally, that *A* is *A*; and *A*
is not-*A*; therefore, *A* is *A*. There is a
double negation in evidence here: an individual is an individual, and
yet an individual is not individual unless one stands opposed to other
individuals. That an individual stands opposed to others means that
one is related to others as a member of a group or groups. Because an
individual is a member of a group, one is both a member as an
individual, and an otherwise isolated individual as a member of a
group. We are both, in mutual interactive negation, and as such we are
determined by the group or community, and yet we ourselves determine
and shape the group or community. As such we are living
self-contradictions, and, therefore, living identities of
self-contradiction.
Morality, for Watsuji, is a coming back to authentic unity through an
initial opposition between the self and other, and then a
re-establishing of betweenness between self and other, ideally
culminating in a nondualistic connection between the self and others
that actually negates any trace of difference or opposition in the
emptiness of the home ground. This is the negation of negation, and it
occurs in both time and space. It is not simply a matter of
enlightenment as a private, individual experience, calling one to
awareness of the interconnectedness of all things. Rather, it is a
spatio-temporal series of interconnected actions, occurring in the
betweenness between us, which leads us to an awareness of betweenness
that ultimately eliminates the self and other, but of course, only
from within a nondualist perspective. Dualistically comprehended, both
the self and other are preserved. As the Zen saying goes, "not
one, not two." As Taigen Dan Leighton explains,
"nonduality is not about transcending the duality of form and
emptiness. This deeper nonduality is not the opposite of duality, but
the synthesis of duality and nonduality, with both included, and both
seen as ultimately not separate, but as integrated" (Leighton
2004, 35). For Watsuji, emptiness is what makes the subject-object
relation, or better, the relation both inherent in *ningen* and
between *ningen* possible. What is left is betweenness itself
in which human actions occur. In this sense, betweenness is emptiness,
and emptiness is betweenness. Betweenness is the place where
compassion arises and is acted out selflessly in the spatio-temporal
theatre of the world. It is that which makes possible the variety of
relationships of which human beings are capable. Watsuji's
*ningen* views each person's ethical identity as
integrally related to that of others and extends it beyond merely
human relations. Self and other, self and nature, are viewed as
inseparable from ethical identity and are related to each other,
rather than opposed to or mutually exclusive. In Watsuji's
*ningen* we find this nondualism lived out in ways that extend
ethical identity beyond relations between human beings to encompass
the world in which we live -- it is part of *milieu* or
*fudo*.
## 5. The State
Watsuji's theory of the state, and his vocal support of the
Emperor system, garnered considerable criticism after the Second World
War. LaFleur maintains that Watsuji's detractors were
"dominant in Japanese intellectual life from 1945 to
approximately 1975," while his admirers became "newly
articulate since the decade of the 1980s" (LaFleur 1994, 453).
The former group considered his position to be a dangerous one. He
argued that the culmination of the double negation, which he conceived
of as a single movement, was the restoration of the absolute totality.
In other words, while the individual negates the group in order to be
an authentic and independent individual, the second negation is to
abandon one's independence as an individual in the full
realization of totality. It is a 'surrender' to totality,
moving one beyond the myriad specific groups to the one total and
absolute whole. It would seem natural that this ultimate wholeness
would be the home ground of nothingness, and in a way it is. But
Watsuji argues that it is the state that takes on the authority of
totality. It is the highest and best social structure thus far. The
political implications of this position could easily result in a
totalitarian state ethics. Indeed, Watsuji extolled the superiority of
traditional Japanese culture because of its emphasis on self-negation,
including the making of the ultimate sacrifice if required by the
Emperor. In *America's National Character* (1944), he
contrasted this willingness to an assumed selfishness or egocentrism
found in the West, together with a utilitarian ethic of expediency,
which he felt was rarely able to commit to self-surrender in aid of
the state. What he saw as most exemplary in the Japanese way of life
was the Bushido ideal of "the absolute negativity of the
subject" (Odin 1996, 67), through which the totality of the
whole is able to be achieved. There is no doubt that Watsuji's
position could easily be interpreted as a totalitarian state ethics.
Yet, insofar as Watsuji's analysis of Japanese ethics is an
account of how the Japanese do actually act in the world, then it is
little surprise that the Japanese errors of excess which culminated in
the fascism of the Second World War period should be found somehow
implicit and possible in Watsuji's acute presentation of
Japanese cultural history. Perhaps the fault to be found lies not in
his analysis per se, but rather in his all too sanguine collapsing of
the descriptive and the prescriptive. That the Japanese
way-in-the-world might include a totalitarian seed is something which
demands a normative warning. Surely this is not what should be
applauded as an aspect of the alleged superiority of Japanese culture,
nor should Bushido in and of itself be taken as a blameless path to
the highest of ethical achievements. The willingness to be loyal,
whatever the rights and wrongs of the situation might be, in order to
remain loyal to one's Lord, however evil or foolhardy he might
be, is not an adequate or rational position, and it is surely not
laudable ethically. It is, perhaps, the way the samurai saw
themselves, as martial servants loyal to death. But the
'is' here is clearly not a moral 'ought'.
What Watsuji adds to this picture which makes it extremely difficult
to condemn him too harshly, and possibly to condemn him at all, is his
adamant insistence in the third volume of *Rinrigaku* that no
one nation has charted the correct political and cultural path, and
that the diversity of each is to be both encouraged and respected. He
writes of unity in diversity, and rejects the idea that any nation has
the right to culturally assimilate another. Each nation is shaped by
its particular geography, climate, culture and history, and the
resultant diversity is both to be protected and appreciated, and the
notion of a universal state is, therefore, but an unwanted and
dangerous delusion. We must know our own traditions and cherish
them, but we must not extol them as superior out of our ignorance of
other ways and cultural traditions. In fact, such ignorance, resulting
from Japan's unwise self-imposed isolationism, Watsuji saw as
the most tragic flaw in Japan's own history, and a cause of the
Second World War. Japan knew so little of the outside world in the
late nineteenth and early twentieth centuries, that it exaggerated its
own worth and power, and vastly underestimated the importance of
political and diplomatic involvement in the happenings of the rest of
the world. Nationalism must not express itself at the expense of
internationalism, and internationalism must not establish itself at
the expense of nationalism. Here is another pair of seeming
contradictions to be held together by the unity of mutual interaction;
the one modifies the other, and that tension is not to be resolved.
Internationalism must be a unity of independent and distinctive
nations.
Five months before the end of the Second World War, Watsuji organized
a study group to re-think the Tokugawa Period (1600-1868), which
resulted in his popular work, *Closed Nation: Japan's
Tragedy* (*Sakoku*), published in 1950. Some critics
reacted by dubbing this work "Watsuji's Tragedy"
(LaFleur 2001, 1), yet LaFleur insists that *Sakoku*, together
with a collection of other contemporaneous writings, provides a
serious and important insight into Watsuji's later philosophical
position. His focal thesis was that the tragedy of the Japanese
involvement in the Second World War was a direct result of
Japan's policy of national seclusion. Isolationism took Japan
out of the events of the world's activities for two centuries,
allowing the West to outstrip Japan in terms of science and
technology. But even more important is Watsuji's insistence that
the nationalists and militarists should have seen that Japan was in no
position to win the war since both "men and materiel were in
increasingly short supply" (LaFleur 2001, 3). Not that Watsuji
opposed the war, but the point he was making was that the tragedy came
to be because the earlier isolationist attitude had been revived from
1940 on. Japan was out of touch with the strengths and determination
of its adversaries, and seemingly convinced that the character of the
Japanese people would ensure success. Furthermore, coupled with this
short-term return to a head-in-the-sand isolationism of the 1940s was
Watsuji's insistence that throughout most of Japan's
history, the Japanese people had demonstrated an intense curiosity and
a robust desire to learn, and in particular to learn from cultures
quite different from their own: "No matter how far we go back in
Japanese culture we will not find an age in which evidence of
admiration of foreign cultures is not to be found" (Watsuji
1998, 250). Japan was withdrawing inwards at precisely the time when
the West's expansionism and imperialism was gathering steam.
LaFleur goes so far as to suggest that in a 1943 lecture to Navy
officials, Watsuji attempted to warn those in charge of the dangerous
course they continued to pursue. LaFleur speculates that this is why
Watsuji felt no need to recant after the War had ended. The salient
point of all of this is that the instances of isolationism in Japanese
history are exceptions which run counter to what Watsuji saw as the
dominant tendency of the Japanese to both welcome and encourage
outside influence. Japan's national character throughout the
bulk of its history displays a remarkable openness to and interest in
other cultures, and a steadfast desire to learn from those cultures.
He was advocating a return to this more 'normal' attitude
towards the outside world, and not a return to Japanese nationalism or
chauvinism. Watsuji presents us with a complex perspective.
## 6. Religion
Watsuji's interest in religion was as a social phenomenon, that
is, in religion as an aspect of the cultural environment. And yet in
the Watsuji scholarship there remains disagreement about how religion
-- Buddhism in particular -- is to be read in his work
(Sevilla 2016, 608-609). William LaFleur remarks, for example, that
Watsuji "embraces a religious solution...philosophically
and methodologically" (LaFleur 1978, 238), while David Dilworth
contends that "Watsuji's position was not essentially a
Buddhistic or religious one" (Dilworth 1974, 17). While there is
disagreement on LaFleur's claim that "it was in the
Buddhist notion of emptiness that Watsuji found the principle that
gives his system...coherence" (LaFleur 1978, 239), it
cannot be denied that when Watsuji writes of emptiness as that final
place or context in which all distinctions disappear, or empty, and
yet from which they emerge, that this notion is at least partly rooted
in his study of Buddhist philosophy and culture, and the concept of
emptiness in particular. The Buddhist notion of
*pratitya-samutpada*, or 'dependent
co-origination' (or 'relationality,'
'conditioned co-production,' 'dependent
co-arising,' 'co-dependent origination') implies
that everything is 'empty,' that is to say, "that
everything is deprived of its substantiality, nothing exists
independently, everything is related to everything else, nothing ranks
as a first cause, and even the self is but a delusory
construction" (Abe 1985, 153). The delusion of independent
individuality can be overcome by recognizing our radical relational
interconnectedness. At the same time, even this negation must be
emptied or negated, hence our radical relational interconnectedness is
possible only because true individuals have created a network in the
betweenness between them. The result is a selfless awareness of that
totality beyond all limited, social totalities, namely the emptiness
or nothingness at the bottom of all things, whether individual or
group. LaFleur summarises Watsuji's position: "the social
side of human existence 'empties' the individuated side of
any possible priority or autonomy and the individuated side does
something exactly the same and exactly proportionate to and for the
social side. There can be no doubt, I think, that for Watsuji,
emptiness is the key to it all" (LaFleur 1978, 250).
Watsuji's emptiness is more a recognition of underlying
relatedness, and manifests as a place, a *basho*, that is the
dynamic and creative origination of all relationships and all networks
of interactions. However, it is important to keep in mind that even
when Watsuji uses religious language (as in "it is a religious
ecstasy of the great emptiness" (LaFleur 1978, 249)), that
ecstasy is not a mystical-like insight into one's union with God
or the Absolute or a recognition of floating in
'other-power,' but a relational union of those persons
involved in some communal forms, eventually culminating in the state.
It is an ethical, or political, or communal ecstasy and not a
religious identification with a transcendent Other--and as much as
we hear the influence of Buddhism, we also hear, for example, echoes
of the Hegelian concept of sublation (Sevilla 2016, 626). Watsuji
gives no evidence of deep religiosity but expresses a profound and
sometimes ecstatic ethical humanism, one which is, nonetheless,
significantly Buddhist in conception.
In denying the reality of the subject/object distinction, and
affirming the emptiness of all things, Watsuji was able to argue that
as the individual negates or rebels against society, thereby emptying
society of an unchanging objective status, and as the second negation
establishes the totality of society by emptying the individual, it
reduces the individual ego to emptiness as well. Neither of the dual
characteristics of *ningen* is unchanging or ultimately
real. What is ultimate is the emptiness which is revealed as their
basis, for all things are empty, and yet, once this truth is realized,
it is as individuals and societies that nothingness is expressed and
revealed. Emptiness negates itself, or empties itself as the beings of
this world. And it does this in the emptiness of betweenness, the
empty space within which relations between individuals and societies
form and continue to re-form. The state, as the culminating social
organization thus far achieved, is the form of forms,
"transcending all the other levels of social
organization...giving to each of those protected forms a proper
place" (LaFleur 1994, 457). The state is, ultimately, "the
moral systematiser of all those organizations" (LaFleur 1994,
457). However, the problem of the proper relation of the individual to
society emerges, for Watsuji goes on to argue that "the state
subsumes within itself all these forms of private life and continually
turns them into the form of the public domain" (LaFleur 1994,
457). In attempting to move away from the selfishness of egoistic
action, Watsuji has given primacy to the state over individual and
group rights. As LaFleur states, "by valorizing the state as
that entity that has the moral authority--in fact the moral
obligation--to 'turn private things into the public domain,'
Watsuji provides a smooth rationale for a totalizing state"
(LaFleur 1994, 458). Nevertheless, it must be kept in mind that his
intent was not to advocate tyranny or fascism, but to seek out an
ethical and social theory whereby human beings as human beings could
interact easily and fruitfully in the space between them, creating as
a result a society, and a world-wide association of societies which
selflessly recognized the value of the individual and the crucial
importance of the well-being of the whole. His solution may have been
inadequate, and open to unwelcome misuse. His analysis of
'betweenness' shows it to be communality, and communality
as a mutuality wherein each individual may affect every other
individual and thereby affect the community or communities; and the
community, as an historical expression of the whole may affect each
individual. Ideally, what would result would be an enlightened sense
of our interconnectedness with all human beings, regardless of race,
color, religion, or creed, and a selfless, compassionate capacity to
identify with others as though they were oneself. By reintroducing a
vivid sense of our communitarian interconnectedness, and our spatial
and bodily place in the betweenness between us, where we meet, love,
and strive to live ethical lives together, Watsuji provides an ethical
and political theory which might well prove to be helpful both to
non-Japanese societies, and to a modern Japan itself which is torn
between what it was, and what it is becoming.
## 1. Hare on the Impossibility of Weakness of Will
Let us commence our examination of contemporary discussions of this
issue in appropriately Socratic vein, with an account that gives
expression to and builds on many of the intuitions that lead us to be
sceptical about reports like (3) above. For the moral philosopher
R. M. Hare--as for Socrates--it is impossible for
a person to do one thing if he genuinely and in the fullest sense
holds that he ought instead to do something else. (If, that
is--to echo the earlier quote from Socrates--he
"believes that there is another possible course of action,
better than the one he is following.") This certainly seems to
constitute a denial of the possibility of akratic or weak-willed
action. In Hare's case it is a consequence of the general account of
the nature of evaluative judgments which he defends (Hare 1952;
see also Hare 1963).
Hare is much impressed by what we vaguely referred to above as the
"special character" of evaluative judgments: judgments,
that is, such as that one course of action is *better* than
another, or that one *ought* to do a certain thing. Such
evaluative judgments seem to have properties that differentiate them
from merely "descriptive" judgments such as that one
thing is more expensive than another, or rounder than another
(Hare 1952, p. 111). Evaluative judgments seem, in
particular, to bear a special connection to *action* which no
purely descriptive judgment possesses. Hare's analysis, then, takes
off from something like the data we rehearsed earlier. Hare goes on
to develop these data in the following way. He begins by identifying,
as the fundamental distinctive feature of evaluative
judgments--that which lends them a special character--that
evaluative judgments are intended to *guide conduct*. (See,
e.g., Hare 1952, p. 1; p. 29; p. 46; p. 125;
p. 127; p. 142, pp. 171-2; Hare 1963,
p. 67; p. 70.) The special function of evaluative judgments
is to be action-guiding: that is, if you will, what evaluative
judgments are *for*. Hare then puts a more precise gloss on
what it is for a judgment to "guide conduct": an
action-guiding judgment is one which entails an answer to the
practical question "What shall I do?" (Hare 1952,
p. 29; see Hare 1963, p. 54 for the terminology "practical
question").[3]
What is it that an action-guiding judgment must entail? That is, what
constitutes an answer to the question "What shall I do?"
Hare holds that no (descriptive) statement can constitute an answer
to such a question (Hare 1952, p. 46). Rather, such a
question is answered by a first-person *command* or
*imperative* (Hare 1952, p. 79), which could be
verbally expressed as "Let me do *a*"
(Hare 1963, p. 55).
To recap the argument thus far: it is the function of evaluative
judgments like "I ought to do *a*" to guide
conduct. Guiding conduct means entailing an answer to the question
"What shall I do?" An answer to that question will take
the form "Let me do *a*," where this is a
first-person command or imperative. Therefore evaluative judgments
entail such first-person imperatives (Hare 1952, p. 192).
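The chain of steps just recapped can be set out schematically (a
reconstruction for convenience, not Hare's own notation):

```latex
\begin{enumerate}
  \item The function of an evaluative judgment (``I ought to do $a$'')
        is to guide conduct. \hfill [premise]
  \item A judgment guides conduct only if it entails an answer to the
        practical question ``What shall I do?'' \hfill [premise]
  \item No descriptive statement answers that question; only the
        first-person imperative ``Let me do $a$'' does. \hfill [premise]
  \item Hence ``I ought to do $a$'' entails the imperative
        ``Let me do $a$.'' \hfill [from 1--3]
\end{enumerate}
```

Each later move in Hare's argument then trades on the logic of
entailment applied to step 4.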
Now in general, if judgment *J*1 entails judgment
*J*2, then assenting to *J*1 must
involve assenting to *J*2: someone who professed to
assent to *J*1 but who disclaimed
*J*2 would be held not to have spoken correctly
when he claimed to assent to *J*1 (Hare 1952,
p. 172). So assenting to an evaluative judgment like "I
ought to do *a*" involves assenting to the first-person
command "Let me do *a*" (Hare 1952,
pp. 168-9). We should inquire, then, what exactly is
involved in sincerely assenting to a first-person command or
imperative of this type. Just as sincere assent to a statement
involves *believing* that statement, sincere assent to an
imperative addressed to ourselves involves *doing* the thing
in question:
>
> It is a tautology to say that we cannot sincerely assent to a
> ... command addressed to ourselves, and *at the same
> time* not perform it, if now is the occasion for performing it
> and it is in our (physical and psychological) power to do so.
> (Hare 1952, p. 20)
>
So: provided it is within my power to do *a* now, if I do not
do *a* now it follows that I do not genuinely judge that I
ought to do *a* now. Thus, as Hare states at the very opening
of his book, a person's evaluative judgments are infallibly revealed
by his actions and choices:
>
> If we were to ask of a person 'What are his moral
> principles?' the way in which we could be most sure of a true
> answer would be by studying what he *did*.... It would
> be when ... he was faced with choices or decisions between
> alternative courses of action, between alternative answers to the
> question 'What shall I do?', that he would reveal in
> what principles of conduct he really believed. (Hare 1952,
> p. 1)
>
Note that Hare is not simply saying that a person's actions are the
most reliable source of evidence as to his evaluative judgments, or
that if a person did *b* the most likely hypothesis is that he
judged *b* to be the best thing to do. Hare is saying, rather,
that it *follows* from a person's having done *b* that
he judged *b* best from among the options open to him at the
time. On this view, then, akratic or weak-willed actions as we have
understood them are impossible. There could not be a case in which
someone genuinely and in the fullest sense held that he ought to do
*a* now (where *a* was within his power) and yet did
*b*. On Hare's view, "it becomes analytic to say that
everyone always does what he thinks he ought to [if physically and
psychologically able]" (Hare 1952, p. 169).
But *does* everyone always do what he thinks he ought to, when
he is physically and psychologically able? It may seem that this is
simply not always the case (even if it is *usually* the case).
Have you, dear reader, *never* failed to get up off the couch
and turn off the TV when you judged it was really time to start
grading those papers? Have you *never* had one or two more
drinks than you thought best on balance? Have you *never*
deliberately pursued a sexual liaison which you viewed as an overall
bad idea? In short, have you *never* acted in a way which
departed from your overall evaluation of your options? If so, let me
be the first to congratulate you on your fortitude. While weak-willed
action does seem somehow puzzling, or defective in some important
way, *it does nonetheless seem to happen*.
For Hare, however, any apparent case of *akrasia* must in fact
be one in which the agent is actually *unable* to do
*a*, or one in which the agent does not genuinely evaluate
*a* as better--even if he says he
does.[4]
As an example of the first kind of case Hare cites Medea
(Hare 1963, pp. 78-9), who (he contends) is powerless,
literally helpless, in the face of the strong emotions and desires
roiling her: she is truly *unable* (psychologically) to resist
the temptations besieging her. A typical example of the second kind
of case, on the other hand, would be one in which the agent is
actually using the evaluative term "good" or
"ought" only in what Hare calls an
"inverted-commas" sense (Hare 1952, p. 120;
pp. 124-6; pp. 164-5; pp. 167-171).
In such cases, when the agent says (while doing *b*) "I
know I really ought to do *a*," he means only that most
people--or, at any rate, the people whose opinions on such
matters are generally regarded as authoritative--would say he
ought to do *a*. As Hare notes (Hare 1952, p. 124),
to believe this is not to make an evaluative judgment oneself;
rather, it is to allude to the value-judgments of other people. Such
an agent does not himself assess the course of action he fails to
follow as better than the one he selects, even if other people
would.
No doubt there are cases of the two types Hare describes; but they do
not seem to exhaust the field. We can grant that there is the odd
murderer, overcome by irresistible homicidal urges but horrified at
what she is doing. But surely not every case that we might be tempted
to describe as one of acting contrary to one's better judgment
involves irresistible psychic forces. Consider, for example, the
following case memorably put by J. L. Austin:
>
> I am very partial to ice cream, and a bombe is served divided into
> segments corresponding one to one with the persons at High Table: I
> am tempted to help myself to two segments and do so, thus succumbing
> to temptation and even conceivably (but why necessarily?) going
> against my principles. But do I lose control of myself? Do I raven,
> do I snatch the morsels from the dish and wolf them down, impervious
> to the consternation of my colleagues? Not a bit of it. We often
> succumb to temptation with calm and even with finesse.
> (Austin 1956/7, p. 198)
>
(I might add that it also seems doubtful that irresistible psychic
forces kept you on the couch watching TV while those papers were
waiting.) As for the "inverted-commas" case, this too
surely happens: people do sometimes pay lip service to conventional
standards which they themselves do not really accept. But again, it
seems highly doubtful that this is true of all seeming cases of
weak-willed action. It seems depressingly possible to select and
implement one course of action while *genuinely believing*
that it is an overall worse choice than some other option open to
you.
Has something gone wrong? We started with the
unexceptionable-sounding thought that moral and evaluative judgments
are intended to guide conduct; we arrived at a blanket denial of the
possibility of akratic action which fits ill with observed facts. But
if we are disinclined to follow Hare this far we should ask what the
alternative is, for it may be even worse. For Hare, the answer is
clear: our only other option is to repudiate the idea that moral and
other evaluative judgments have a special character or nature, namely
that of being action-guiding. For we should recall that Hare presents
all his subsequent conclusions as simply following, through a series
of steps, from that initial thought. "The reason why actions
are in a peculiar way revelatory of moral principles is that the
function of moral principles is to guide conduct," Hare
continues in the passage quoted earlier (Hare 1952, p. 1). For
Hare, then, the only way to escape his "Socratic"
conclusion about weakness of will would be to give up the idea that
evaluative judgments are intended to guide conduct, or to "have
[a] bearing upon our actions" (Hare 1963, p. 169; see
also Hare 1952, p. 46; p. 143; p. 163;
pp. 171-2; and Hare 1963, p. 70; p. 99).
The choices before us, then, as presented by Hare, are Hare's own
view, or one which assigns no distinctive role in action or practical
thought to evaluative judgments, treating them as just like any other
judgment. We might call the first of these an extreme version of
(judgment) *internalism.* (I use this polysemous label to
refer, here, to the idea that certain judgments have an internal or
necessary connection to motivation and to action.) By extension, we
might usefully follow Michael Bratman in calling the second type of
view "extreme externalism" (Bratman 1979,
pp. 158-9).
Extreme externalism also seems unsatisfactory, however. First, it
seems unable to explain why there should be anything perplexing or
problematical about action contrary to one's better judgment, why
there should be any philosophical problem about its possibility or
its analysis. On this kind of view, it seems, Joseph's choice ((3)
above) should strike us as no more puzzling than Julie's or Jimmy's
((1) or (2)). As Hare puts it:
>
> On the view that we are considering, there is nothing odder about
> thinking something the best thing to do in the circumstances, but
> not doing it, than there is about thinking a stone the roundest
> stone in the vicinity and not picking it up, but picking up some
> other stone instead.... There will be nothing that requires
> explanation if I choose to do what I think to be, say, the worst
> possible thing to do and leave undone what I think the best thing to
> do. (Hare 1963, pp. 68-9)
>
But our reactions to (1), (2), and (3) show that we *do* think
there is something peculiar about action contrary to one's better
judgment which renders such action hard to understand, or perhaps
even impossible. An extreme externalist view thus seems to
mischaracterize the status of akratic actions.
Perhaps even more importantly, however, extreme externalism has
dramatic implications for our understanding of intentional action in
general--not just weak-willed action. For such a view implies
that
>
> deliberation about what it would be best to do has no closer
> relation to practical reasoning than, say, deliberation about what
> it would be chic to do. If one happens to care about what it would
> be chic to do, then a consideration of this matter may play an
> important role in one's practical reasoning. If one does not care,
> it will be irrelevant. The case is the same with reasoning about
> what it would be best to do. (Bratman 1979, p. 158)
>
To adopt a general doctrine of this sort seems an awfully precipitous
response to the possibility of *akrasia*. For it seems
extremely plausible to assign to our overall evaluations of our
options an important role in our choices. Man is a rational animal,
the saying goes; that is--to offer one gloss on this
idea--we act on reasons, and in the light of our assessments of
the overall balance of reasons. When we engage in deliberation or
reasoning about what to do, we often proceed by thinking about the
reasons which favor our various options, and then bringing these
together into an overall assessment which is, precisely, intended to
guide our choice.
Or, as Bratman puts it, we very often reason about what it is
*best* to do as a way of settling the question of what
*to* do. (He calls this "evaluative practical
reasoning": Bratman 1979, p. 156.) "One's
evaluations [thus] play a crucial role in the reasoning underlying
full-blown action," Bratman holds (p. 170), and to be
forced to deny this would be in his view "too high a price to
pay" (p. 159). As Alfred Mele similarly puts it,
"there is a real danger that in attempting to make causal and
conceptual space for full-fledged akratic action one might commit
oneself to the rejection of genuine ties between evaluative judgment
and action" (1991, p. 34). But that would be to throw the
baby out with the bathwater. If we want to resist Hare's conclusions,
we must do so in a way which steers clear of the danger to which Mele
alerts us. We must navigate between the Scylla of extreme internalism
and the Charybdis of extreme externalism.
## 2. Davidson on the Possibility of Weakness of Will
This is just what Donald Davidson set out to do in a rich, elegant,
and incisive paper published in 1970 which has had a towering
influence on the subsequent literature. Davidson's treatment aims to
vindicate the possibility of weakness of will; to offer a novel
analysis of its nature; to clarify its status as a marginal, somehow
defective instance of agency which we rightly find dubiously
intelligible; and to do all this within the contours of a general
view of practical reasoning and intentional action which assigns a
central and special role to our evaluative judgments. Let us see how
he proposes to do these things.
First, Davidson offers the following general characterization of
weak-willed or incontinent
action:[5]
>
> In doing *b* an agent acts incontinently if and only
> if: (a) the agent does *b* intentionally;
> (b) the agent believes there is an alternative action
> *a* open to him; and (c) the agent judges that, all things
> considered, it would be better to do *a* than to do
> *b*.
>
We initially described weak-willed action as free, intentional action
contrary to the agent's better judgment; it may be useful to see how
Davidson's more precise definition matches up with that initial
characterization. Davidson's condition (a) requires that the action in
question be intentional.[6] Condition (b) seems intended to ensure that
the action in question is
free.[7]
Part (c) of Davidson's definition represents what we have called the
agent's "better judgment," that is, the overall
evaluation of his options contrary to which the incontinent agent
acts.
Davidson notes that "there is no proving such actions exist;
but it seems to me absolutely certain that they do"
(p. 29). Why, then, is there a persistent tendency, both in
philosophy and in ordinary thought, to deny that such actions are
possible? Davidson's diagnosis is that two plausible principles which
"derive their force from a very persuasive view of the nature
of intentional action and practical reasoning" (p. 31)
appear to entail that incontinence is impossible. He articulates
those two principles as follows (p. 23):
>
> **P1**. If an agent wants to do *a* more than he wants to
> do *b* and he believes himself free to do either *a*
> or *b*, then he will intentionally do *a* if he does
> either *a* or *b* intentionally.
>
>
>
> **P2**. If an agent judges that it would be better to do
> *a* than to do *b*, then he wants to do *a*
> more than he wants to do *b*.
>
>
>
P2, Davidson observes, "connects judgements of what it is
better to do with motivation or wanting" (p. 23); he adds
later that it "states a mild form of internalism"
(p. 26). Davidson is proposing, *contra* the extreme
externalist position, that our evaluative judgments about the merits
of the options we deem open to us are not motivationally inert. While
he admits that one could quibble or tinker with the formulation of P1
and P2 (pp. 23-4; p. 27; p. 31), he is confident
that they or something like them give expression to a powerfully
attractive picture of practical reasoning and intentional action, one
which assigns an important motivational role to the agent's
evaluative
judgments.[8]
The difficulty is, though, that P1 and P2--however
attractive--together imply that an agent never intentionally
does *b* when he judges that it would be better to do
*a* (if he takes himself to be free to do either). And this
certainly looks like a denial of the possibility of incontinent
action. No wonder, then, that so many have been tempted to say that
akratic action is impossible! Looking carefully, however, we can see
that P1 and P2 do *not* imply the impossibility of incontinent
actions *as Davidson has defined them*. For Davidson
characterizes the agent who incontinently does *b* as holding,
not that it would be better to do *a* than to do *b*,
but that it would be better, *all things considered*, to do
*a* than to do *b*. Is the "all things
considered" just a rhetorical flourish? Or does it mark a
genuine difference between these two judgments? If these are two
different judgments, and one can hold the latter without holding the
former, then incontinent action is possible *even if P1 and P2 are
true*.
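The entailment just described can be made fully explicit. The following Lean sketch (our gloss, not Davidson's own formalization) shows that P1 and P2 jointly rule out intentionally doing *b* against an *all-out* judgment in favor of *a*; the one premise we add beyond Davidson's text is that the agent cannot intentionally do both exclusive options. Replace the all-out judgment with a merely relational ATC judgment and the proof no longer goes through, since P2 links only all-out judgments to wanting:

```lean
variable {Agent Action : Type}

theorem no_AO_incontinence
    (wantsMore judgesBetterAO believesFree : Agent → Action → Action → Prop)
    (doesIntentionally : Agent → Action → Prop)
    (s : Agent) (a b : Action)
    -- P1: preference plus believed freedom settles which act is done intentionally
    (P1 : wantsMore s a b → believesFree s a b →
          (doesIntentionally s a ∨ doesIntentionally s b) → doesIntentionally s a)
    -- P2: an all-out better judgment brings wanting with it (mild internalism)
    (P2 : judgesBetterAO s a b → wantsMore s a b)
    -- added assumption (not in Davidson's text): a and b are exclusive options
    (exclusive : ¬ (doesIntentionally s a ∧ doesIntentionally s b))
    (hj : judgesBetterAO s a b)
    (hf : believesFree s a b)
    (hb : doesIntentionally s b) : False :=
  -- P2 turns the judgment into a preference; P1 then yields intentionally doing a,
  -- contradicting exclusivity given that b was done intentionally.
  exclusive ⟨P1 (P2 hj) hf (Or.inr hb), hb⟩
```

Since `judgesBetterAO` occurs only in P2's antecedent, weakening it to an ATC judgment that P2 does not cover blocks the derivation, which is exactly the room Davidson exploits.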
In the rest of his paper Davidson sets out to vindicate that very
possibility. The phrase "all things considered" is not,
as it might seem, merely a minor difference in wording that allows
weakness of will to get off on a technicality. Rather, that phrase
marks an important contrast in logical form to which we would need to
attend in any case in order properly to understand the structure of
practical reasoning. For that phrase indicates a judgment that is
*conditional* or *relational* rather than
*all-out* or *unconditional* in form; and that
difference is
crucial.[9]
We can better see the relational character of an
all-things-considered judgment if we first look at evaluative
judgments that play an important role in an earlier phase of
practical reasoning, the phase where we consider what reasons or
considerations favor doing *a* and what reasons or
considerations favor doing *b*. (For simplicity, imagine a
case in which an agent is choosing between only two mutually
incompatible options, *a* and *b*.) These *prima
facie* judgments, as Davidson terms them, take the form:
>
> **PF**: In light of *r*, *a* is *prima
> facie* better than *b*.
>
In this schema *r* refers to a consideration, say that
*a* would be relaxing, while *b* would be stressful. A
PF judgment of this kind thus identifies one respect in which
*a* is deemed superior to *b*, one perspective from
which *a* comes out on top.
We should pause to note three things about PF judgments. (a) A PF
judgment is not itself a conclusion in favor of the overall
superiority of *a*. Such "all-out" evaluative
judgments have a simpler logical form, namely:
>
> **AO**: *a* is better than *b*.
>
(b) Indeed, no conclusion of the form AO follows logically from any
PF judgment. (c) More strongly: the fact, taken by itself, that
someone has made a certain PF judgment does not even supply her with
sufficient grounds to draw the corresponding AO conclusion. For even
if she makes one PF judgment which favors *a* over *b*,
as in the case we imagined, she may *also* make *other*
PF judgments which favor *b* over *a* (say, when
*r* is the consideration that *b* would be lucrative,
while *a* would be expensive). We do not want to say in that
case that she has sufficient grounds to draw each of two incompatible
conclusions (that *a* is better than *b*, and that
*b* is better than *a*; these are incompatible provided
the better-than relation is asymmetric, as we assume here).
We have contrasted PF judgments with AO or "all-out"
evaluative judgments. PF judgments are relational in character: they
point out a *relation* which holds between the consideration
*r* and doing *a*. (We could call that relation the
"favoring" relation.) That relation is not such as to
permit us to "detach" (as Davidson puts it, p. 37)
an unconditional evaluative conclusion in favor of doing *a*
from PF and the supposition that *r* obtains. That is, we are
not to understand PF judgments as having the form of a material
conditional.
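The contrast can be put schematically (in notation of our own, not Davidson's): writing \(\mathrm{pf}(a \succ b;\, r)\) for the judgment that \(r\) favors \(a\) over \(b\), detachment fails for the favoring relation even though it would hold for a material conditional:

```latex
\[
\mathrm{pf}(a \succ b;\, r),\; r \ \not\vdash\ a \succ b
\qquad\text{whereas}\qquad
r \rightarrow (a \succ b),\; r \ \vdash\ a \succ b .
\]
```

On the left, the premises leave the all-out conclusion \(a \succ b\) open; on the right, modus ponens would force it.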
Davidson's innovative suggestion is that judgments with this PF
logical form are an appropriate way to model what happens in the
early stages of practical reasoning, where we rehearse reasons for
and against the options we are considering. And his stressing that no
such PF judgment commits the agent to an overall evaluative
conclusion in favor of *a* or *b* is useful in thinking
about a case like Julie's ((1) above). We described Julie as knowing
(and therefore believing) that *b* was more expensive than
*a*, but opting for *b* nonetheless. We can imagine,
then, that among the ingredients of Julie's practical reasoning was a
PF judgment like this:
>
> In light of the fact that *b* is more expensive than *a*,
> *a* is *prima facie* better than *b*.
>
But this PF judgment alone, as we have seen, does not commit her to
the overall judgment that *a* is better than *b*. For
she may also have made other PF judgments, such as
>
> In light of the fact that *b* would be much more
> gastronomically exciting than *a*, *b* is *prima
> facie* better than *a*.
>
But we would not then want to say Julie has sufficient grounds to
conclude that *a* is better than *b* and to conclude
that *b* is better than *a*. She does not have
sufficient grounds to embrace a contradiction; her premises all seem
consistent. So her various PF judgments, when considered separately,
must *not* each commit her to a corresponding overall
conclusion in favor of *a* or *b*.
Practical reasoning, Davidson suggests, starts from judgments like
these, each identifying one respect in which one of the options is
superior. But in order to make progress in our practical reasoning we
shall eventually need to consider how *a* compares to
*b* not just with respect to *one* consideration, but
in the light of several considerations taken together. That is, Julie
will eventually need to consider how to fill in the blanks in a PF
judgment like this:
>
> In light of the fact that *b* is more expensive
> than *a* *and* the fact that *b* would be much
> more gastronomically exciting than *a*, ... is *prima
> facie* better than ....
>
This PF judgment is more *comprehensive* than the ones we
attributed to Julie a moment ago, as it takes into account a broader
range of considerations. (I take the label
"comprehensive" from Lazar 1999.) Now in Julie's case we
can surmise how she filled in those blanks: with "*b* is
*prima facie* better than *a*." Julie's filling
in the blanks in that way can naturally be taken as expressing the
view that the much greater gastronomic excitement promised by
*b* *outweighs* or *overrides* *b*'s
inferiority to *a* from a strictly financial standpoint.
We can generalize our schema for PF judgments to account for the
possibility of relativizing our comparative assessment of *a*
and *b* not just to a single consideration, but to multiple
considerations taken together or as a body:
>
> **PFN**: In light of <*r*1, ...,
> *rn*>, *a* is *prima facie*
> better than *b*.
>
Notice that PFN judgments are still relational in form: they assert
that a relation (the "favoring" relation) holds between
the *set* of considerations <*r*1,
..., *rn*> and doing *a*. Indeed,
the relational character of a PFN judgment remains even if we make it
as comprehensive as we can: if we expand the set
<*r*1, ..., *rn*>
to incorporate *all* the considerations the agent deems
relevant to her decision. Following Davidson (p. 38), let us
give the label *e* to that set. So even the following
judgment:
>
> **ATC**: In light of *e*, *a* is *prima
> facie* better than *b*.
>
is a relational or conditional judgment and not an all-out conclusion
in favor of doing *a*. To make a judgment of the form ATC is
*not* to draw an overall conclusion in favor of doing
*a.*
We may be better able to see this by considering an analogy from
theoretical reason. Suppose Hercule Poirot has been called in to
investigate a murder. We can imagine him assessing bits of evidence
as he encounters them:
>
> In light of the fact that the murder weapon belongs to Colonel
> Mustard, Mustard looks guilty;
>
>
> In light of his having an alibi for the time of the murder, Mustard
> looks not guilty;
>
>
>
and so on. These are theoretical analogues of the PF judgments
relativized to single considerations which we looked at earlier.
However, Poirot will eventually need to consider how these various
bits of evidence add up; that is, he will eventually need to fill in
the blanks in a more comprehensive PFN judgment like this:
>
> In light of
> <*e*1, ..., *en*>,
> ... looks to be the guilty party,
>
where
<*e*1, ..., *en*>
is a set of bits of pertinent evidence. Notice, though, that no such
PFN judgment actually constitutes settling on a particular person as
the culprit. For even if we put in a maximally large
<*e*1, ..., *en*>
consisting of *all* the evidence Poirot has seen, and imagine
him thinking
>
> All the evidence I have seen points toward Colonel Mustard as the
> guilty party,
>
to make this observation is manifestly *not* to conclude that
Mustard is guilty.
In the same way, an ATC or all-things-considered judgment, although
comprehensive, is still relational in nature, and therefore distinct
from an AO judgment in favor of *a*. That is, it is possible to
make an ATC judgment in favor of *a* without making the
corresponding AO judgment in favor of *a.* (This is the
analogue of Poirot's position.) And this is the key to Davidson's
solution to the problem of how weakness of will is possible. For ATC
is, precisely, *the agent's better judgment* as Davidson
construes it in his definition of incontinent action. P1 and P2
together imply that an agent who reaches an AO conclusion in favor
of *a* will not intentionally do *b*. But the
incontinent agent never reaches such an AO conclusion. With respect
to *a*, he remains stuck at the Hercule Poirot stage: he sees
that the considerations he has rehearsed, taken as a body,
favor *a*, but he is unwilling or unable to make a commitment
to *a* as the thing to do.[10] He makes only a relational ATC judgment
in favor of *a*, contrary to which he then acts.
What should we say about an agent who does this? Returning to the
three features of *prima facie* or PF judgments which we noted
earlier, features (a) and (b) hold even of the special
subclass of PF judgments which are ATC judgments. Such judgments
neither are equivalent to, nor logically imply, any AO judgment. So
the incontinent agent who fails to draw the AO conclusion which
corresponds to his ATC conclusion, and to perform the corresponding
action, is not committing "a simple logical blunder"
(p. 40). Notably, he does not contradict himself. He does,
however, exhibit a defect in rationality, on Davidson's account. For
feature (c) of PF judgments in general does *not* hold of the
special subclass of such judgments which are ATC judgments. Drawing
an *ATC* conclusion in favor of *a* *does* give
one sufficient grounds to conclude that *a* is better *sans
phrase* and, indeed, to do
*a*. For Davidson proposes that the transition from an ATC
judgment in favor of *a* to the corresponding AO judgment, and
to doing *a*, is enjoined by a substantive principle of
rationality which he dubs "the principle of continence."
That principle tells us to "perform the action judged best on
the basis of all available relevant reasons" (p. 41); and
the incontinent agent violates this injunction. The principle of
continence thus substantiates the idea that "what is wrong is
that the incontinent man acts, and judges, irrationally, for this is
surely what we must say of a man who goes against his own best
judgement" (p. 41). He acts irrationally in virtue of
violating this substantive principle, obedience to which is a
necessary condition for rationality.
We must put this point about the irrationality of incontinence with
some care, however. For recall that an incontinent action must itself
be intentional, that is, done for a reason. The weak-willed agent,
then, has a reason for doing *b*, and does *b* for that
reason. What he lacks--and lacks by his own lights--is a
*sufficient* reason to do *b*, given all the
considerations that he takes to favor *a*. As Davidson puts
it, if we ask "what is the agent's reason for doing
[*b*] when he believes it would be better, all things
considered, to do another thing, then the answer must be: for this,
the agent has no reason" (p. 42). And this is so even
though he does have a reason for doing *b* (p. 42,
n. 25). Because the agent has, by his own lights, no adequate
reason for doing *b*, he cannot make sense of his own action:
"he recognizes, in his own intentional behaviour, something
essentially surd" (p. 42). So akratic action, while
*possible* on Davidson's account, is nonetheless necessarily
*irrational*; this is the sense in which it is a defective and
not fully intelligible instance of agency, despite being a very real
phenomenon.
## 3. The Debate After Davidson
### 3.1 Internalist and Externalist Strands
Davidson has certainly presented an arresting theory of practical
reasoning. But has he shown how weakness of the will is possible?
Most philosophers writing after him, while acknowledging his
pathbreaking work on the issue, think he has not. One principal
difficulty which subsequent theorists have seized on is that
Davidson's view can account for the possibility of action contrary to
one's better judgment *only if* one's better judgment is
construed merely as a conditional or *prima facie* judgment.
Davidson's P1 and P2 in fact rule out the possibility of free
intentional action contrary to an all-out or unconditional evaluative
judgment.[11]
But it seems that such cases exist. Michael Bratman, for instance,
introduces us to Sam, who, in a depressed state, is deep into a
bottle of wine, despite his acknowledged need for an early wake-up
and a clear head tomorrow (1979, p. 156). Sam's friend, stopping
by, says:
>
> Look here. Your reasons for abstaining seem clearly stronger than
> your reasons for drinking. So how can you have thought that it would
> be best to drink?
>
To which Sam replies:
>
> I don't think it would be best to drink. Do you think
> I'm stupid enough to think that, given how strong my reasons
> for abstaining are? I think it would be best to abstain. Still,
> I'm drinking.
>
Sam's case certainly seems possible as described. Davidson's view,
though, must reject it as impossible. Given his conduct, Sam can't
think it best to abstain; at most, he thinks it all-things-considered
best to abstain, a very different kettle of fish. But this seems
false of Sam: there is no evidence that he has remained stuck at the
Hercule Poirot stage with respect to the superiority of abstaining.
He seems to have gone all the way to a judgment *sans phrase*
that abstaining would be better; and yet he drinks.
Ironically, this complaint makes Davidson out to be a bit like Hare.
Like Hare, Davidson subscribes to an internalist principle (P2) which
connects evaluative judgments with motivation and hence with action.
(Indeed, in light of the difficulty raised here, one might wonder if
Davidson is entitled to consider P2 a "mild" form of
internalism (p. 26).) As with Hare, this internalist commitment
rules out as impossible certain kinds of action contrary to one's
evaluative judgment. Now Davidson, like Hare, does accept the
possibility of certain phenomena in this neighborhood; but--as
with Hare--critics think the cases permitted by his analysis
simply do not exhaust the range of actual cases of weakness of will.
The phenomenon seems to run one step ahead of our attempts to make
room for it.
Those writing after Davidson have tended to focus, then, on the
question of the possibility and rational status of action contrary to
one's *unconditional* better
judgment.[12]
Naturally, different theorists have plotted different courses through
these shoals. Some tack more to the internalist side, wishing to
preserve a strong internal connection between evaluation and action
even at the risk of denying or seeming to deny the possibility of
akratic action (or at least some understandings of it). Examples of
some post-Davidson treatments which share a broadly internalist
emphasis, even if they feature different flavors of internalism, are
those of Bratman (1979), Buss (1997), Tenenbaum (1999; see also 2007,
ch. 7), and Stroud (2003). The main danger for such approaches is that
in seeking to preserve and defend a certain picture of the primordial
role of evaluative thought in rational action--a picture critics
are likely to dismiss as too rationalistic--such theorists may be
led to reject common phenomena which ought properly to have
constrained their more abstract theories. (See the opening of Wiggins
1979 for a forceful articulation of this criticism.)
Other theorists, by contrast, are more drawn toward the externalist
shoreline. They emphasize the motivational importance of factors
other than the agent's evaluative judgment and the divergences that
can result between an agent's evaluation of her options and her
motivation to act. They are thus disinclined to posit any strong,
necessary link between evaluative judgment and action. Michael
Stocker, for instance, argues that the philosophical tradition has
been led astray in assuming that evaluation dictates motivation.
"Motivation and evaluation do not stand in a simple and direct
relation to each other, as so often supposed," he writes.
Rather, "their interrelations are mediated by large arrays of
complex psychic structures, such as mood, energy, and interest"
(1979, pp. 738-9). Similarly, Alfred Mele proposes as a
fundamental and general truth--and one that underlies the
possibility of *akrasia*--that "the motivational
force of a want may be out of line with the agent's evaluation of the
object of that want" (1987, p. 37). Mele goes on to offer
several different reasons why the two can come apart: for example,
rewards perceived as *proximate* can exert a motivational
influence disproportionate to the value the agent reflectively
attaches to them (1987, ch. 6). Such wants may function as
strong *causes* even if the agent takes them to constitute
weak *reasons*.
With respect to these questions, the challenge sketched at the end of
Section 1 above remains in full force. What is required is a view
which successfully navigates between the Scylla of an extreme
internalism about evaluative judgment which would preclude the
possibility of weakness of will, and the Charybdis of an extreme
externalism which would deny any privileged role to evaluative
judgment in practical reasoning or rational action. For one's
verdict about *akrasia* will in general be closely connected to
one's more general views of action, practical reasoning,
rationality, and evaluative judgment--as was certainly true of
Davidson.
Views that downplay the role of evaluative judgment in action and
hence tack more toward the externalist side of the channel may more
easily be able to accept the possibility and indeed the actuality of
weakness of will. But they are subject to their own challenges. For
example, suppose we follow Mele's image of *akrasia* and posit
that a certain agent is caused to do *x* by motivation to do
*x* which is dramatically out of kilter with her assessment of
the merits of doing *x*. In what sense, then, is her doing
*x* free, intentional, and uncompelled? Such an agent might
seem rather to be at the mercy of a motivational force which is, from
her point of view, utterly alien. Thus, worries about distinguishing
*akrasia* from compulsion come back in full force in
connection with proposals like these. (See
fn. 7
above for relevant references; Buss and Tenenbaum press these worries
against accounts like Mele's in particular.) Moreover, there is the
danger, for accounts of this more externalist stripe, of taking too
much of the mystery out of weakness of will. Even if akratic action
is possible and indeed actual, it remains a puzzling, marginal,
somehow defective instance of agency, one that we rightly find not
fully intelligible. Views that do not assign a privileged place in
rational deliberation and action to the agent's overall assessment of
her options risk making akratic action seem no more problematic than
Julie's or Jimmy's decisions, or Hare's agent who fails to pick up
the roundest stone in the vicinity.
### 3.2 Weakness of Will as Potentially Rational
The "externalist turn" toward downplaying the role of an
agent's better judgment and emphasizing other psychic factors instead
is connected to a second way in which some theorists writing after
Davidson have dissented from his analysis. Davidson, as we saw,
viewed akratic action as possible, but irrational. The weak-willed
agent acts contrary to what she herself takes to be the balance of
reasons; her choice is thus unreasonable by her own lights. On this
picture, incontinent action is a paradigm case of practical
irrationality. Many other theorists have agreed with Davidson on this
score and have taken *akrasia* to be perhaps the clearest
example of practical irrationality. But some writers (notably Audi
1990, McIntyre 1990, and Arpaly 2000) have questioned whether akratic
action *is* necessarily irrational. Perhaps we ought to leave
room, not just for the *possibility* of akratic action, but
for the potential *rationality* of akratic action.
The irrationality which is held necessarily to attach to akratic
action derives from the discrepancy between what the agent judges to
be the best (or better) thing to do, and what she does. That is, her
action is faulted as irrational in virtue of not conforming to her
better judgment. But--ask these critics--what if her better
judgment is itself faulty? There is nothing magical about an agent's
better judgment that ensures that it is correct, or even warranted;
like any other judgment, it can be in error, or even unjustified.
(Recall that by "better judgment" we meant, all along,
only "a judgment as to which course of action is better,"
not "a *superior* judgment.") Where the agent's
better judgment is itself defective, in doing what she deems herself
to have insufficient reason to do, the agent may actually be doing
what she has most reason to do. "Even though the akratic agent
does not believe that she is doing what she has most reason to do, it
may nevertheless be the case that the course of action that she is
pursuing is the one that she has ... most reason to
pursue" (McIntyre 1990, p. 385). In that sense the
akratic agent may be wiser than her own better judgment.
How, concretely, could an agent's better judgment go astray in
this way? Perhaps her survey of what she took to be the relevant
considerations did not include, or did not attach sufficient weight
to, what were in fact significant reasons in favor of one of the
possible courses of action. She may have overlooked these, or
(wrongly) deemed them not to be reasons, or failed to appreciate
their full force; and in that case her judgment of what it is best to
do will be incorrect. Consider, for example, Jonathan Bennett's
Huckleberry Finn (Bennett 1974, discussed in
McIntyre 1990), who akratically fails to turn in his slave
friend Jim to the authorities. Huck's judgment that he ought to do
so, however, was based primarily on what he took to be the force of
Miss Watson's property rights; it ignored his powerful feelings of
friendship and affection for Jim, as well as other highly relevant
factors. His "better judgment" was thus not in fact a
very comprehensive judgment; it did not take into account the full
range of relevant considerations.
Or consider Emily, who has always thought it best that she pursue a
Ph.D. in chemistry (Arpaly 2000, p. 504). When she revisits
the issue, as she does periodically, she discounts her increasing
feelings of restlessness, sadness, and lack of motivation as she
proceeds in the program, and concludes that she ought to persevere.
But in fact she has very good reasons to quit the program--her
talents are not well suited to a career in chemistry, and the people
who are thriving in the program are very different from her. If she
impulsively, akratically quits the program, purely on the basis of
her feelings, Emily is in fact doing just what she ought to
do.[13]
That her action conflicts with her better judgment does not
significantly impugn its rationality, given all the considerations
that *do* support her quitting the program. "A theory of
rationality should not assume that there is something special about
an agent's best judgment. An agent's best judgment is just another
belief" (Arpaly 2000, p. 512). Emily's action
conflicts, then, with one belief she has; but it coheres with many
more of her beliefs and desires overall. So even though she may find
her own action inexplicable or "surd," she is in fact
acting rationally, although she does not know it. *Contra*
Davidson, "we can ... act rationally just when we cannot
make any sense of our actions" (Arpaly 2000, p. 513).
It is unclear, however, whether these arguments and examples are
likely to sway those who take *akrasia* to be a paradigm of
practical irrationality. These dissenters stress the *substantive
merits* of the course of action the akratic agent follows. But
traditionalists may say that is beside the point: however well things
turn out, the practical thinking of the akratic agent still exhibits
a *procedural defect*. Someone who flouts her own conclusion
about where the balance of reasons lies is *ipso facto* not
reasoning well. Even if the action she performs is in fact supported
by the balance of reasons, she does not think it is, and that is
enough to show her practical reasoning to be faulty. The defenders of
the traditional conception of *akrasia* as irrational thus
wish to grant special rational authority (in this procedural sense)
to the agent's better judgment, even if they admit that such a
judgment can be substantively incorrect. By contrast, the dissenters
"[do] not believe best judgments have any privileged
role" (Arpaly 2000, p. 513). We see again the
contrast between "internalist" and
"externalist" tendencies in the debates over weakness of
will.
### 3.3 Changing the Subject
A final revisionist strand now emerging in the literature takes the
agent's better judgment even farther out of the picture. In an
outstandingly lucid and stimulating essay published in 1999 (see also
his 2009, ch. 4), Richard Holton argued that weakness of will is not
action contrary to one's better judgment at all. The literature has
gone astray in understanding weakness of will in this way; weakness of
will is actually quite a different phenomenon, in which the agent's
better judgment plays no
role.[14]
For Holton, when ordinary people speak of weakness of will they have
in mind a certain kind of failure to act on one's
*intentions*. What matters for weakness of will, then, is not whether
you deem another course of action superior at the time of action. It
is whether you are abandoning an intention you previously formed.
Weakness of will as the untutored understand it is not
*akrasia* (if we reserve that term for action contrary to
one's better judgment), but rather a certain kind of failure to stick to
one's plans.[15]
This understanding of weakness of will changes the
subject in two ways. First, the state of the agent with which the
weak-willed action is in conflict is not an evaluative judgment (as
in *akrasia*) but a different kind of state, namely an
intention. Second, it is not essential that there be
*synchronic* conflict, as *akrasia* demands. You must
act contrary to your *present* better judgment in order to
exhibit *akrasia*; conflict with a *previous* better
judgment does not indicate *akrasia*, but merely a change of
mind. However, you can exhibit weakness of will as Holton understands
it simply by abandoning a previously formed intention.
Of course not all cases of abandoning or failing to act on a
previously formed intention count as weakness of will. I intend
to run five miles tomorrow evening. If I break my leg tomorrow
morning and fail to run five miles tomorrow evening, I will not have
exhibited weakness of will. How can we characterize
*which* failures to act on a previously formed intention count
as weakness of will? Holton's answer has two parts. First, he says,
there is an irreducible normative dimension to the question whether
someone's abandoning of an intention constituted weakness of will
(Holton 1999,
p. 259).[16]
That is, there is no purely
descriptive criterion (such as whether her action conflicted with her
better judgment) which is sufficient for weakness of will; in order
to decide whether a given case was an instance of weakness of will we
must consider normative questions, such as whether it was
*reasonable* for the agent to have abandoned or revised that
intention, or whether she *should* have done so. In the case
of my broken leg, for instance, it was clearly reasonable for me to
abandon my intention; that is why I could not be charged with
weakness of will in that case.
Second, says Holton, we need to attend to an important subclass of
our intentions to do something at a future time, namely
*contrary-inclination-defeating intentions*, or, as he
later terms them (Holton 2003), *resolutions*.
Resolutions are intentions that are formed precisely in order to
insulate one against contrary inclinations one expects to feel when
the time comes. Thus one reason I might form an intention on Monday
to run five miles on Tuesday--as opposed to leaving the issue
open until Tuesday, for decision then--is to reduce the effect
of feelings of lassitude to which I fear I may be subject when
Tuesday rolls around. Then suppose Tuesday rolls around; I am indeed
prey to feelings of lassitude; and I decide as a result not to run.
*Now* I can be charged with weakness of will. Weakness of will
involves, specifically, a failure to act on a *resolution*;
this is sufficient to differentiate weakness of will from mere change
of mind and even from caprice (which is a *different* species
of unreasonable intention revision, according to Holton).
As a later paper by Alison McIntyre shows (McIntyre 2006),
understanding weakness of will in this way casts a fresh light on the
issue of its rational status. The weak-willed agent abandons a
resolution because of a contrary inclination of exactly the type
which the resolution was expressly designed to defeat. Therefore, as
McIntyre underlines, weak-willed action always involves a procedural
rational
defect:[17]
a technique of self-management has been deployed but has failed
(McIntyre 2006, p. 296). To that extent we have grounds to
criticize weak-willed action simply in virtue of the second of the
ways in which Holton wishes to distinguish weakness of will from a
mere change of mind, without even resolving the potentially murky
issue of whether the agent was *reasonable* in abandoning her
intention.
McIntyre holds, however, that it would be overstating the case to say
that because weakness of will involves this procedural defect, it is
always irrational (McIntyre 2006, p. 290;
pp. 298-9; p. 302). She proposes rather that
practical rationality has multiple facets and aims, and that failure
in one respect or along one dimension does not automatically justify
the especially severe form of rational criticism which we intend by
the term "irrational." For example, consider an agent who
succumbs to contrary inclination of exactly the type expected when
the time comes to act on a truly *stupid* resolution. (Holton
gives the example of resolving to go without water for two days just
to see what it feels like: Holton 2003, p. 42.) There will
indeed be a blemish on this agent's rational scorecard if he
eventually gives in and drinks: he will have failed in his attempt at
self-management. But wouldn't it be rationally far *worse* for
him to stick to his silly resolution no matter what the cost?
We can also re-examine the issue of the rationality of
*akrasia* in light of this analysis of weakness of will; for
we can distinguish between akratic and non-akratic cases of the
latter. As McIntyre points out, resolutions typically rest on
judgments about what it is best that one do at a (future) time
*t*. If an agent fails to act on a previously formed
resolution to do *a* at *t*, thus exhibiting weakness
of will, we can distinguish the case in which he still endorses at
*t* the judgment that it is best that he do *a* at
*t* (even though he does not do it) from the case in which he
abandons that judgment as well as his resolution. In the latter,
non-akratic type of case, the agent in effect rationalizes his
failure to live up to his resolution by deciding that it is not after
all best that he do *a* at *t*. McIntyre points out
that the traditional view that *akrasia* is always irrational
seems to give us a perverse incentive to rationalize, since in that
case we escape the grave charge of practical irrationality, being
left only with the procedural practical defect present in all cases
of weakness of will (McIntyre 2006, p. 291). But this seems
implausible: are the two sub-cases so radically different in their
rational status? Indeed, she argues, if anything, akratic weakness of
will is typically rationally *preferable* to rationalizing
weakness of will (McIntyre 2006, p. 287; pp. 309ff.).
"In the presence of powerful contrary inclinations that bring
about a failure to be resolute," she writes, "resisting
rationalization and remaining clearheaded about one's reasons to act
can constitute a modest accomplishment" (McIntyre 2006,
p. 311). Have we witnessed the transformation of
*akrasia* from impossible, to irrational, to downright
admirable?
### 3.4 Recent Developments
#### 3.4.1 Epistemic Akrasia
One focus of renewed attention in recent years has been (so-called)
epistemic akrasia. An epistemically akratic agent holds beliefs of the
form "P, but my evidence doesn't support P." In an
influential discussion, Sophie Horowitz posits an analogy between
epistemic akrasia and practical akrasia: "Just as an akratic
agent acts in a way she believes she ought not act, an epistemically
akratic agent believes something that she believes is unsupported by
her evidence" (Horowitz 2014, p. 718).
Is epistemic akrasia possible? Some philosophers who are happy to
countenance practical akrasia have answered in the negative with
respect to its putative doxastic analogue (these include Hurley 1989,
Adler 2002, and Owens 2002). One argument for this denial focuses on
the notion of doxastic control. David Owens argues that we lack the
requisite sort of doxastic control required for our beliefs to be
formed freely and deliberately, "either in accordance with our
judgement about what we should believe or against those
judgements" (Owens 2002, p. 395). But such control, Owens
argues, would be necessary in order for epistemic akrasia to be
possible.
A second argument for the impossibility of epistemic akrasia
(discussed in Adler 2002) turns on what is seen as an important
disanalogy between epistemic and practical reasoning. Conflict between
two incompatible beliefs weakens one or both beliefs (since both
can't be true), whereas conflict between two desires that
can't both be satisfied need not weaken either desire. A
practically akratic agent who has formed an all things considered
judgment about what to do may thus still be in the grip of a desire to
do something else. But this, Adler says, has no analogue in the
epistemic realm: one can't remain in the grip of a belief if one
views the evidence for an opposing belief as decisive. Thus, practical
akrasia is "motivationally intelligible" in a way that
epistemic akrasia is not (Adler 2002, p. 8). Neil Levy responds to
this concern by arguing that while beliefs in some domains fit this
model, the disanalogy Adler points to does not hold in domains where
there is room for "ongoing, rational controversy,"
including philosophy (Levy 2004, p. 156). One may form a belief in
favor of a philosophical view while nonetheless feeling the pull of
incompatible views, just as one may retain a desire that one judges
all things considered one shouldn't satisfy. Though Levy only
claims that Adler's impossibility argument fails, Ribeiro (2011)
uses similar considerations to motivate the claim that epistemic
akrasia is not only possible, but actual.
Some philosophers who think there could be an epistemic version of
akrasia have raised questions about how closely it would parallel
practical akrasia. Daniel Greco (2014), for instance, argues that
while a divergence between one's moral emotions and one's
urges might play an important role in understanding some cases of
practical akrasia, there are no corresponding epistemic emotions that
would help to illuminate epistemic akrasia (Greco 2014,
p. 207). Practical and epistemic varieties of akrasia nonetheless have
in common that they involve a kind of fragmentation or inner conflict;
Greco uses this characterization to support the idea that both
epistemic and practical akrasia are always irrational. (Feldman 2005
also sees epistemic akrasia as a paradigm of irrationality.)
Just as the rational status of practical akrasia has become contested
(see section 3.2 above), however, some theorists have now argued that
epistemic akrasia could in fact be rational. For example, Coates
(2012), Weatherson (2019), Wedgwood (2011), and Lasonen-Aarnio (2014)
have argued for the rational permissibility of some beliefs of the
characteristic akratic form "*P*, but my evidence doesn't
support *P*." This discussion has been shaped by a growing
interest in higher-order evidence--that is, evidence about what
one's evidence supports. The notion of higher-order evidence
complicates the picture of belief at work in arguments like
Adler's (2002) above. Defenders of the rationality of epistemic
akrasia have argued that beliefs of the form "*P*, but my evidence
doesn't support *P*" can be rationally permissible when one
has misleading higher-order evidence. In such cases, they contend, a
person could have good grounds both for believing that *P* and for
believing that her evidence doesn't support the conclusion that
*P*.
For instance, Horowitz describes a case of epistemic akrasia involving
a detective, Sam, who stays up all night trying to identify a
thief. He knows the evidence he is working with is good, and concludes
on the basis of this evidence that the thief was Lucy. He calls his
partner and tells her that he's cracked the case, only to have
her remind him that his reasoning while sleepy is often poor. Sam
hadn't thought about his previous track record, but he believes
his partner that he's often wrong about what his evidence
supports when he's tired (Horowitz 2014, p. 719).
Defenders would say that Sam could be rational in believing both that
Lucy was the thief, and in believing that his evidence doesn't
support that claim. Against this view, Horowitz (2014) maintains that
higher-order evidence should affect our first-order attitudes whenever
we expect our evidence to be truth-guiding (which rationally we almost
always should); the conjunction involved in an akratic belief is thus
rationally unstable (Horowitz 2014, p. 740). (See also Lasonen-Aarnio
2020 for the argument that epistemically akratic subjects, while
sometimes rational, nonetheless manifest bad dispositions).
#### 3.4.2 Addiction as Akrasia?
There has also been growing interest in the question of whether
addiction is perhaps best understood as a form of *akrasia*.
Building on the arguments of Mele (2002), Nick Heather (2016) contends
that addiction shares the core features of akratic action and should be
understood as a special kind of *akrasia*, one in which agents
consistently act against both their present judgments and their prior
resolutions (Heather 2016, pp. 133-4). Heather thus accepts
Holton's (1999) distinction between *akrasia* and weakness
of will, but argues that addiction paradigmatically involves both
halves of this distinction.
In contrast, Edmund Henden (2016) argues against classifying
addiction as a form of weakness of will. He thinks that the
phenomenology of addiction tells against such an assimilation.
Addiction often involves habitual behavior, and relapses are often
triggered by environmental cues that the addict is not conscious of at
the time of action. Addicts may continue to take drugs even when they
don't find it pleasurable to do so, and while retaining a strong
sense of the disvalue of this course of action (Henden 2016, p. 122).
In these respects, Henden contends that addiction is unlike giving in to
temptation, as paradigm examples of *akrasia* are often
described.
Henden also notes that weakness of will seems rationally
criticizable in a way that addiction does not, which suggests they
can't be the same phenomenon. Many addicts sincerely try very
hard to abstain from using drugs. Though their effort may be
insufficient for them to succeed in abstaining, it may nonetheless be
sufficient relative to ordinary standards. That is, if someone who was
not an addict made the same effort with respect to similar endeavors,
it would be reasonable to expect her to succeed. The challenges that
face addicts thus seem much more demanding than the challenges that
face the average practical agent, including those prone to
*akrasia* (Henden 2016, pp. 123-4).
## 1. Life and Career
Maximilian Carl Emil "Max" Weber (1864-1920) was
born in the Prussian city of Erfurt to a family of notable heritage.
His father, Max Sr., came from a Westphalian family of merchants and
industrialists in the textile business and went on to become a lawyer
and National Liberal parliamentarian in Wilhelmine politics. His
mother, Helene, came from the Fallenstein and Souchay families, both
of the long illustrious Huguenot line, which had for generations
produced public servants and academicians. His younger brother,
Alfred, was an influential political economist and sociologist, too.
Evidently, Max Weber was brought up in a prosperous, cosmopolitan, and
cultivated family milieu that was well-plugged into the political,
social, and cultural establishment of the German
*Bürgertum* [Roth 2000]. Also, his parents represented
two, often conflicting, poles of identity between which their eldest
son would struggle throughout his life--worldly statesmanship
and ascetic scholarship.
Educated mainly at the universities of Heidelberg and Berlin, Weber
was trained in law, eventually writing his dissertation on medieval
trading companies under Levin Goldschmidt and Rudolf von Gneist (and
examined by Theodor Mommsen) and *Habilitationsschrift* on
Roman law and agrarian history under August Meitzen. While
contemplating a career in legal practice and public service, he
received an important research commission from the *Verein für
Sozialpolitik* (the leading social science association under
Gustav Schmoller's leadership) and produced the so-called East
Elbian Report on the displacement of the German agrarian workers in
East Prussia by Polish migrant labourers. Greeted upon publication with
high acclaim and political controversy, this early success led to his
first university appointment at Freiburg in 1894 to be followed by a
prestigious professorship in political economy at Heidelberg two years
later. Weber and his wife Marianne, an intellectual in her own right
and early women's rights activist, soon found themselves at the
center of the vibrant intellectual and cultural life of Heidelberg.
The so-called "Weber Circle" attracted such intellectual
luminaries as Georg Jellinek, Ernst Troeltsch, and Werner Sombart and
later a number of younger scholars including Marc Bloch, Robert
Michels, and György Lukács. Weber was also active in
public life as he continued to play an important role as a Young Turk
in the *Verein* and maintain a close association with the
liberal *Evangelisch-soziale Kongress* (especially with the
leader of its younger generation, Friedrich Naumann). It was during
this time that he solidified his reputation as a brilliant political
economist and outspoken public intellectual.
All these fruitful years came to an abrupt halt in 1897 when Weber
collapsed with a nervous breakdown shortly after his father's
sudden death (precipitated by a confrontation with Weber) [Radkau
2011, 53-69]. His routine as a teacher and scholar was
interrupted so badly that he eventually withdrew from regular teaching
duties in 1903, to which he would not return until 1919. Although
severely compromised and unable to write as prolifically as before, he
still managed to immerse himself in the study of various philosophical
and religious topics. This period saw a new direction in his
scholarship as the publication of miscellaneous methodological essays
as well as *The Protestant Ethic and the Spirit of Capitalism*
(1904-1905) testifies. Also noteworthy about this period is his
extensive trip to America in 1904, which left an indelible trace in
his understanding of modernity in general [Scaff 2011].
After this stint essentially as a private scholar, he slowly resumed
his participation in various academic and public activities. With
Edgar Jaffe and Sombart, he took over editorial control of the
*Archiv für Sozialwissenschaft und Sozialpolitik*,
turning it into a leading social science journal of the day as well as
his new institutional platform. In 1909, he co-founded the
*Deutsche Gesellschaft für Soziologie*, in part as a
result of his growing unease with the *Verein*'s
conservative politics and lack of methodological discipline, becoming
its first treasurer (he would resign from it in 1912, though). This
period of his life, until interrupted by the outbreak of the First
World War in 1914, brought the pinnacles of his achievements as he
worked intensely in two areas--the comparative sociology of
world religions and his contributions to the *Grundriss der
Sozialökonomik* (to be published posthumously as *Economy
and Society*). Along with the major methodological essays that he
drafted during this time, these works would become mainly responsible
for Weber's enduring reputation as one of the founding fathers
of modern social science.
With the onset of the First World War, Weber's involvement in
public life took an unexpected turn. At first a fervent patriotic
supporter of the war, as virtually all German intellectuals of the
time were, he grew disillusioned with the German war policies,
eventually refashioning himself as one of the most vocal critics of
the Kaiser government in a time of war. As a public intellectual, he
issued private reports to government leaders and wrote journalistic
pieces to warn against the Belgian annexation policy and the unlimited
submarine warfare, which, as the war deepened, evolved into a call for
overall democratization of the authoritarian state
(*Obrigkeitsstaat*) that was Wilhelmine Germany. By 1917, Weber
was campaigning vigorously for a wholesale constitutional reform for
post-war Germany, including the introduction of universal suffrage and
the empowerment of parliament.
When defeat came in 1918, Germany found in Weber a public intellectual
leader, even possibly a future statesman, with unscathed liberal
credentials who was well-positioned to influence the course of
post-war reconstruction. He was invited to join the draft board of the
Weimar Constitution as well as the German delegation to Versailles;
albeit in vain, he even ran for a parliamentary seat on the liberal
Democratic Party ticket. In those capacities, however, he opposed the
German Revolution (all too sensibly) and the Versailles Treaty (all too
quixotically) alike, putting himself in an unsustainable position that
defied the partisan alignments of the day. By all accounts, his
political activities bore little fruit, except his advocacy for a
robust plebiscitary presidency in the Weimar Constitution.
Frustrated with day-to-day politics, he turned to his scholarly
pursuits with renewed vigour. In 1919, he briefly taught in turn at
the universities of Vienna (*General Economic History* was an
outcome of this experience) and Munich (where he gave the much-lauded
lectures, *Science as a Vocation* and *Politics as a
Vocation*), while compiling his scattered writings on religion in
the form of the massive three-volume *Gesammelte Aufsätze zur
Religionssoziologie* [*GARS* hereafter]. All these
reinvigorated scholarly activities came to an end in 1920 when he died
suddenly of pneumonia in Munich (likely due to the Spanish flu). Max
Weber was fifty-six years old.
## 2. Philosophical Influences
Putting Weber in the context of philosophical tradition proper is not
an easy task. For all the astonishing variety of identities that can
be ascribed to him as a scholar, he was certainly no philosopher at
least in the narrow sense of the term. His reputation as a Solonic
legislator of modern social science also tends to cloud our
appreciation of the extent to which his ideas were embedded in the
intellectual context of the time. Broadly speaking, Weber's
philosophical worldview, if not coherent philosophy, was informed by
the deep crisis of the Enlightenment project in fin-de-siècle
Europe, which was characterized by the intellectual revolt against
positivist reason, a celebration of subjective will and intuition, and
a neo-Romantic longing for spiritual wholesomeness [Hughes 1977]. In
other words, Weber belonged to a generation of self-claimed epigones
who had to struggle with the legacies of Darwin, Marx, and Nietzsche.
As such, the philosophical backdrop to his thoughts will be outlined
here along two axes -- epistemology and ethics.
### 2.1 Knowledge: Neo-Kantianism
Weber encountered the pan-European cultural crisis of his time mainly
as filtered through the jargon of German Historicism [Beiser 2011].
His early training in law had exposed him to the sharp divide between
the reigning Labandian legal positivism and the historical
jurisprudence championed by Otto von Gierke (one of his teachers at
Berlin); in his later incarnation as a political economist, he was
keenly interested in the heated "strife over methods"
(*Methodenstreit*) between the positivist economic methodology
of Carl Menger and the historical economics of Schmoller (his mentor
during the early days). Arguably, however, it was not until Weber grew
acquainted with the Baden or Southwestern School of Neo-Kantians,
especially through Wilhelm Windelband, Emil Lask, and Heinrich Rickert
(his one-time colleague at Freiburg), that he found a rich conceptual
template suitable for the clearer elaboration of his own
epistemological position.
Briefly, in opposition to a Hegelian emanationist epistemology,
Neo-Kantians shared the Kantian dichotomy between reality and concept.
Not an emanent derivative of concepts as Hegel posited, reality is
irrational and incomprehensible, and the concept, only an abstract
construction of our mind. Nor is the concept a matter of will,
intuition, and subjective consciousness as Wilhelm Dilthey posited.
According to Hermann Cohen, one of the early Neo-Kantians, concept
formation is fundamentally a cognitive process, which cannot but be
rational as Kant held. If our cognition is logical and all reality
exists within cognition, then only a reality that we can comprehend in
the form of knowledge is rational--metaphysics is thereby
reduced to epistemology, and Being to logic. As such, the process of
concept formation both in the natural (*Natur*-) and the
cultural-historical sciences (*Geisteswissenschaften*) has to
be universal as well as abstract, not different in kind but in their
subject matters. The latter is only different in dealing with the
question of values in addition to logical relationships.
For Windelband, however, the difference between the two kinds of
knowledge has to do with its aim and method as well.
Cultural-historical knowledge is not concerned with a phenomenon
because of what it shares with other phenomena, but rather because of
its own definitive qualities. For values, which form its proper
subject, are radically subjective, concrete and individualistic.
Unlike the "nomothetic" knowledge that natural science
seeks, what matters in historical science is not a universal law-like
causality, but an understanding of the particular way in which an
individual ascribes values to certain events and institutions or takes
a position towards the general cultural values of his/her time under a
unique, never-to-be-repeated constellation of historical
circumstances. Therefore, cultural-historical science seeks
"idiographic" knowledge; it aims to understand the
particular, concrete and irrational "historical
individual" *with* inescapably universal, abstract, and
rational concepts. Turning irrational reality into rational concept,
it does not simply paint (*abbilden*) a picture of reality but
transforms (*umbilden*) it. Occupying the gray area between
irrational reality and rational concept, then, its question became
twofold for the Neo-Kantians. One is in what way we can understand the
irreducibly subjective values held by the historical actors in an
objective fashion, and the other, by what criteria we can select a
certain historical phenomenon as opposed to another as historically
significant subject matter worthy of our attention. In short, the
issue was not only the values to be comprehended by the seeker of
historical knowledge, but also his/her own values, which are no less
subjective. Value-judgment (*Werturteil*) as well as value
(*Wert*) became a keen issue.
According to Rickert's definitive elaboration, value-judgment
precedes values. He posits that the "in-dividual," as
opposed to mere "individual," phenomenon can be isolated
as a discrete subject of our historical inquiry when we ascribe
certain subjective values to the singular coherence and indivisibility
that are responsible for its uniqueness. In his theory of
value-relation (*Wertbeziehung*), Rickert argues that relating
historical objects to values can still retain objective validity when
it is based on a series of explicitly formulated conceptual
distinctions. They are to be made firmly between the
investigator's values and those of the historical actor under
investigation, between personal or private values and general cultural
values of the time, and between subjective value-judgment and
objective value-relations. In so positing, however, Rickert is making
two highly questionable assumptions. One is that there are certain
values in every culture that are universally accepted within that
culture as valid, and the other, that a historian free of bias must
agree on what these values are. Just as natural science must assume
"unconditionally and universally valid laws of nature,"
so, too, cultural-historical science must assume that there are
"unconditionally and universally valid values." If so, an
"in-dividual" historical event has to be reduced to an
"individual" manifestation of the objective process of
history, a conclusion that essentially implies that Rickert returned
to the German Idealist faith in the meaningfulness of history and the
objective validity of the diverse values to be found in history. An
empirical study in historical science, in the end, cannot do without a
metaphysics of history. Bridging irrational reality and rational
concept in historical science, or overcoming *hiatus
irrationalis* (à la Emil Lask) without recourse to a
metaphysics of history still remained a problem as acutely as before.
While accepting the broadly neo-Kantian conceptual template as Rickert
elaborated it, Weber's methodological writings would turn mostly
on this issue.
### 2.2 Ethics: Kant and Nietzsche
German Idealism seems to have exerted another enduring influence on
Weber, discernible in his ethical worldview more than in his
epistemological position. This was the strand of Idealist discourse in
which a broadly Kantian ethic and its Nietzschean interlocution figure
prominently.
The way in which Weber understood Kant seems to have come through the
conceptual template set by moral psychology and philosophical
anthropology. In conscious opposition to the utilitarian-naturalistic
justification of modern individualism, Kant viewed moral action as
principled and self-disciplined while expressive of genuine freedom
and autonomy. On this Kantian view, freedom and autonomy are to be
found in the instrumental control of the self and the world
(objectification) according to a law formulated solely from within
(subjectification). Furthermore, such a paradoxical compound is made
possible by an internalization or willful acceptance of a
transcendental rational principle, which saves it from falling prey to
the hedonistic subjectification that Kant found in Enlightenment
naturalism and which he so detested. Kant in this regard follows
Rousseau in condemning utilitarianism; instrumental-rational control
of the world in the service of our desires and needs just degenerates
into organized egoism. In order to prevent it, mere freedom of choice
based on elective will (*Willkür*) has to be replaced by
the exercise of purely rational will (*Wille)* [Taylor 1989,
364]. The so-called "inward turn" is thus the crucial
benchmark of autonomous moral agency for Kant, but its basis has been
fundamentally altered; it should be done with the purpose of serving a
higher end, that is, the universal law of reason. A willful
self-transformation is demanded now in the service of a higher law
based on reason, or an "ultimate value" in Weber's
parlance.
Weber's understanding of this Kantian ethical template was
strongly tinged by the Protestant theological debate taking place in
the Germany of his time between (orthodox Lutheran) Albrecht Ritschl
and Matthias Schneckenburger (of Calvinist persuasion), a context with
which Weber became acquainted through his Heidelberg colleague,
Troeltsch. Suffice it to note in this connection that Weber's
sharp critique of Ritschl's Lutheran communitarianism seems
reflective of his broadly Kantian preoccupation with radically
subjective individualism and the methodical transformation of the self
[Graf 1995].
All in all, one might say that "the preoccupations of Kant and
of Weber are really the same. One was a philosopher and the other a
sociologist, but there... the difference ends" [Gellner
1974, 184]. That which also ends, however, is Weber's
subscription to a Kantian ethic of duty when it comes to the
possibility of a universal law of reason. Weber was keenly aware of
the fact that the Kantian linkage between growing self-consciousness,
the possibility of universal law, and principled and thus free action
had been irrevocably severed. Kant managed to preserve the precarious
identification of non-arbitrary action and subjective freedom by
asserting such a linkage, which Weber believed to be unsustainable in
his allegedly Nietzschean age.
According to Nietzsche, "will to truth" cannot be content
with the metaphysical construction of a grand metanarrative, whether
it be monotheistic religion or modern science, and growing
self-consciousness, or "intellectualization" à la
Weber, can lead only to a radical skepticism, value relativism, or,
even worse, nihilism. According to such a Historicist diagnosis of
modernity that culminates in the "death of God," the
alternative seems to be either a radical self-assertion and
self-creation that runs the risk of being arbitrary (as in Nietzsche)
or a complete desertion of the modern ideal of self-autonomous freedom
(as in early Foucault). If the first approach leads to a radical
divinization of humanity, one possible extension of modern humanism,
the second leads inexorably to a "dedivinization" of
humanity, a postmodern antihumanism [Vattimo 1988, 31-47].
Seen in this light, Weber's ethical sensibility is built on a
firm rejection of a Nietzschean divinization and Foucaultian resignation
alike, both of which are radically at odds with the Kantian ethic of
duty. In other words, Weber's ethical project can be described
as a search for non-arbitrary freedom (his Kantian side) in what he
perceived as an increasingly post-metaphysical world (his Nietzschean
side). According to Paul Honigsheim, Weber's ethic is that of
"tragedy" and "nevertheless" [Honigsheim 2013,
113]. This deep tension between the Kantian moral imperatives and a
Nietzschean diagnosis of the modern cultural world is apparently what
gives such a darkly tragic and agnostic shade to Weber's ethical
worldview.
## 3. History
### 3.1 Rationalization as a Thematic Unity
Weber's main contribution as such, nonetheless, lies neither in
epistemology nor in ethics. Although they deeply informed his thoughts
to an extent still under-appreciated, his main concern lay elsewhere.
He was after all one of the founding fathers of modern social science.
Beyond the recognition, however, that Weber is not simply a
sociologist par excellence as Talcott Parsons's
quasi-Durkheimian interpretation made him out to be, identifying an
*idée maîtresse* throughout his disparate oeuvre
has been debated ever since his own days and is still far from
settled. *Economy and Society*, his alleged *magnum
opus*, was a posthumous publication based upon his widow's
editorship, the thematic architectonic of which is unlikely to be
reconstructed beyond doubt even after its recent reissuing under the
rubric of *Max Weber Gesamtausgabe* [*MWG* hereafter].
*GARS* forms a more coherent whole since its editorial edifice
was the work of Weber himself; and yet, its relationship to his other
sociologies of, for instance, law, city, music, domination, and
economy, remains controvertible. Accordingly, his overarching theme
has also been variously surmised as a developmental history of Western
rationalism (Wolfgang Schluchter), the universal history of
rationalist culture (Friedrich Tenbruck), or simply the
*Menschentum* as it emerges and degenerates in modern rational
society (Wilhelm Hennis). The first depicts Weber as a
comparative-historical sociologist; the second, a latter-day Idealist
historian of culture reminiscent of Jacob Burckhardt; and the third, a
political philosopher on a par with Machiavelli, Hobbes, and Rousseau.
Important as they are for in-house Weber scholarship, however, these
philological disputes need not hamper our attempt to grasp the gist of
his ideas. Suffice it for us to recognize that, albeit with varying
degrees of emphasis, these different interpretations all converge on
the thematic centrality of rationality, rationalism, and
rationalization in making sense of Weber.
At the outset, what immediately strikes a student of Weber's
rationalization thesis is its seeming irreversibility and
Eurocentrism. The apocalyptic imagery of the "iron cage"
that haunts the concluding pages of the *Protestant Ethic* is
commonly taken to reflect his fatalism about the inexorable unfolding
of rationalization and its culmination in the complete loss of freedom
and meaning in the modern world. The "Author's
Introduction" (*Vorbemerkung* to *GARS*) also
contains oft-quoted passages that allegedly disclose Weber's
belief in the unique singularity of Western civilization's
achievement in the direction of rationalization, or lack thereof in
other parts of the world. For example:
>
> A child of modern European civilization (*Kulturwelt*) who
> studies problems of universal history shall inevitably and justifiably
> raise the question (*Fragestellung*): what combination of
> circumstances have led to the fact that in the West, and here only,
> cultural phenomena have appeared which - at least as *we*
> like to think - came to have *universal* significance and
> validity [Weber 1920/1992, 13: translation altered]?
>
Taken together, then, the rationalization process as Weber narrated it
seems quite akin to a metahistorical teleology that irrevocably sets
the West apart from and indeed above the East.
At the same time, nonetheless, Weber adamantly denied the possibility
of a universal law of history in his methodological essays. Even
within the same pages of the *Vorbemerkung*, he said,
"rationalizations of the most varied character have existed in
various departments of life and in all areas of culture"
[*Ibid.*, 26]. He also made clear that his study of various
forms of world religions was to be taken for its heuristic value
rather than as "complete analyses of cultures, however
brief" [*Ibid.*, 27]. It was meant as a
comparative-conceptual platform on which to erect the edifying
features of rationalization in the West. If merely a heuristic device
and not a universal law of progress, then, what is rationalization and
whence comes his uncompromisingly dystopian vision?
### 3.2 Calculability, Predictability, and World-Mastery
Roughly put, taking place in all areas of human life from religion and
law to music and architecture, rationalization means a historical
drive towards a world in which "one can, in principle, master
all things by calculation" [Weber 1919/1946, 139]. For instance,
modern capitalism is a rational mode of economic life because it
depends on a calculable process of production. This search for exact
calculability underpins such institutional innovations as monetary
accounting (especially double-entry bookkeeping), centralization of
production control, separation of workers from the means of
production, supply of formally free labour, disciplined control on the
factory floor, and other features that make modern capitalism
*qualitatively* different from all other modes of organizing
economic life. The enhanced calculability of the production process is
also buttressed by that in non-economic spheres such as law and
administration. Legal formalism and bureaucratic management reinforce
the elements of predictability in the sociopolitical environment that
surrounds industrial capitalism by introducing formal
equality of citizenship, a rule-bound legislation of legal norms, an
autonomous judiciary, and a depoliticized professional bureaucracy.
Further, all this calculability and predictability in political,
social, and economic spheres was not possible without changes of
values in ethics, religion, psychology, and culture. Institutional
rationalization was, in other words, predicated upon the rise of a
peculiarly rational type of personality, or a "person of
vocation" (*Berufsmensch*) as outlined in the
*Protestant Ethic*. The outcome of this complex interplay of
ideas and interests was modern rational Western civilization with its
enormous material and cultural capacity for relentless
world-mastery.
### 3.3 Knowledge, Impersonality, and Control
On a more analytical plateau, all these disparate processes of
rationalization can be summarized as increasing knowledge, growing
impersonality, and enhanced control [Brubaker 1991, 32-35].
First, knowledge. Rational action in one very general sense
presupposes knowledge. It requires some knowledge of the ideational
and material circumstances in which our action is embedded, since to
act rationally is to act on the basis of conscious reflection about
the probable consequences of action. As such, the knowledge that
underpins a rational action is of a causal nature conceived in terms
of means-ends relationships, aspiring towards a systematic, logically
interconnected whole. Modern scientific and technological knowledge is
a culmination of this process that Weber called intellectualization,
in the course of which, the germinating grounds of human knowledge in
the past, such as religion, theology, and metaphysics, were slowly
pushed back to the realm of the superstitious, mystical, or simply
irrational. It is only in modern Western civilization, according to
Weber, that this gradual process of disenchantment
(*Entzauberung*) has reached its radical conclusion.
Second, impersonality. Rationalization, according to Weber, entails
objectification (*Versachlichung*). Industrial capitalism, for
one, reduces workers to sheer numbers in an accounting book,
completely free from the fetters of tradition and non-economic
considerations, and so does the market relationship vis-à-vis
buyers and sellers. For another, having abandoned the principle of
Khadi justice (i.e., personalized *ad hoc* adjudication),
modern law and administration also rule in strict accordance with the
systematic formal codes and *sine ira et studio*, that is,
"without anger or passion." Again, Weber found the seed of
objectification not in material interests alone, but in the Puritan
vocational ethic (*Berufsethik*) and the life conduct that it
inspired, which was predicated upon a disenchanted monotheistic
theodicy that reduced humans to mere tools of God's providence.
Ironically, for Weber, modern inward subjectivity was born once we
lost any inherent value *qua* humans and became thoroughly
objectified vis-à-vis God in the course of the Reformation.
Modern individuals are subjectified and objectified all at once.
Third, control. Pervasive in Weber's view of rationalization is
the increasing control in social and material life. Scientific and
technical rationalization has greatly improved both the human capacity
for a mastery over nature and institutionalized discipline
*via* bureaucratic administration, legal formalism, and
industrial capitalism. The calculable, disciplined control over humans
was, again, an unintended consequence of the Puritan ethic of rigorous
self-discipline and self-control, or what Weber called
"innerworldly asceticism (*innerweltliche Askese*)."
Here again, Weber saw the irony that a modern
individual citizen equipped with inviolable rights was born as a part
of the rational, disciplinary ethos that increasingly penetrated into
every aspect of social life.
## 4. Modernity
### 4.1 The "Iron Cage" and Value-fragmentation
Thus seen, rationalization as Weber postulated it is anything but an
unequivocal historical phenomenon. As already pointed out, first,
Weber viewed it as a process taking place in disparate fields of human
life with a logic of each field's own and varying directions;
"each one of these fields may be rationalized in terms of very
different ultimate values and ends, and what is rational from one
point of view may well be irrational from another" [Weber
1920/1992, 27]. Second, and more important, its ethical ramification
for Weber is deeply ambivalent. To use his own dichotomy, the
formal-procedural rationality (*Zweckrationalität*) to
which Western rationalization tends does not necessarily go with a
substantive-value rationality (*Wertrationalität*). On the
one hand, exact calculability and predictability in the social
environment that formal rationalization has brought about dramatically
enhances individual freedom by helping individuals understand and
navigate through the complex web of practice and institutions in order
to realize the ends of their own choice. On the other hand, freedom
and agency are seriously curtailed by the same force in history when
individuals are reduced to a "cog in a machine," or
trapped in an "iron cage" that formal rationalization has
spawned with irresistible efficiency and at the expense of substantive
rationality. Thus, his famous lament in the *Protestant
Ethic*:
>
> No one knows who will live in this cage (*Gehäuse*) in the
> future, or whether at the end of this tremendous development entirely
> new prophets will arise, or there will be a great rebirth of old ideas
> and ideals, or, if neither, mechanized petrification, embellished with
> a sort of convulsive self-importance. For the "last man"
> (*letzten Menschen*) of this cultural development, it might
> well be truly said: "Specialist without spirit, sensualist
> without heart; this nullity imagines that it has attained a level of
> humanity (*Menschentums*) never before achieved" [Weber
> 1904-05/1992, 182: translation altered].
>
Third, Weber envisions the future of rationalization not only in terms
of "mechanized petrification," but also of a chaotic, even
atrophic, inundation of subjective values. In other words, the
bureaucratic "iron cage" is only one side of the modernity
that rationalization has brought about; the other is a
"polytheism" of value-fragmentation. At the apex of
rationalization, we moderns have suddenly found ourselves living
"as did the ancients when their world was not yet disenchanted
of its gods and demons" [Weber 1919/1946, 148]. Modern society
is, Weber seems to say, once again enchanted as a result of
disenchantment. How did this happen and with what consequences?
### 4.2 Reenchantment *via* Disenchantment
In point of fact, Weber's rationalization thesis can be
understood with richer nuance when we approach it as, for lack of
better terms, a dialectics of disenchantment and reenchantment rather
than as a one-sided, unilinear process of secularization.
Disenchantment had ushered in monotheistic religions in the West. In
practice, this means that *ad hoc* maxims for life-conduct had
been gradually displaced by a unified total system of meaning and
value, which historically culminated in the Puritan ethic of vocation.
Here, the irony was that disenchantment was an ongoing process
nonetheless. Disenchantment in its second phase pushed aside
monotheistic religion as something irrational, thus delegitimating it
as a unifying worldview in the modern secular world.
Modern science, which was singularly responsible for this late
development, was initially welcomed as a surrogate system of orderly
value-creation, as Weber found in the convictions of Bacon (science as
"the road to *true* nature") and Descartes (as
"the road to the *true* god") [Weber 1919/1946,
142]. For Weber, nevertheless, modern science is a deeply nihilistic
enterprise in which any scientific achievement worthy of the name
*must* "ask to be surpassed and made obsolete" in a
process "that is in principle *ad infinitum*," at
which point, "we come to the *problem of the meaning* of
science." He went on to ask: "For it is simply not
self-evident that something which is subject to such a law is in
itself meaningful and rational. Why should one do something which in
reality never comes to an end and never can?" [*Ibid*.,
138: translation altered]. In short, modern science has relentlessly
dismantled all other sources of value-creation, in the course of which
its own meaning has also been dissipated beyond repair. The result is
the "*Götterdämmerung* of all evaluative
perspectives" including its own [Weber 1904/1949, 86].
Irretrievably gone as a result is a unifying worldview, be it
religious or scientific, and what ensues is its fragmentation into
incompatible value spheres. Weber, for instance, observed:
"since Nietzsche, we realize that something can be beautiful,
not only in spite of the aspect in which it is not good, but rather in
that very aspect" [Weber 1919/1946, 148]. That is to say,
aesthetic values now stand in irreconcilable antagonism to religious
values, transforming "value judgments (*Werturteile*)
into judgments of taste (*Geschmacksurteile*) by which what is
morally reprehensible becomes merely what is tasteless" [Weber
1915/1946, 342].
Weber is, then, *not* envisioning a peaceful dissolution of the
grand metanarratives of monotheistic religion and universal science
into a series of local narratives and the consequent modern pluralist
culture in which different cultural practices follow their own
immanent logic. His vision of polytheistic reenchantment is rather
that of an incommensurable value-fragmentation into a plurality of
alternative metanarratives, each of which claims to answer the same
metaphysical questions that religion and science strove to cope with
in their own ways. The slow death of God has reached its apogee in the
return of gods and demons who "strive to gain power over our
lives and again ... resume their eternal struggle with one
another" [Weber 1919/1946, 149].
Seen this way, it makes sense that Weber's rationalization
thesis concludes with two strikingly dissimilar prophecies - one
is the imminent iron cage of bureaucratic petrification and the other,
the Hellenistic pluralism of warring deities. The modern world has
come to be monotheistic and polytheistic all at once. What seems to
underlie this seemingly self-contradictory imagery of modernity is the
problem of modern humanity (*Menschentum*) and its loss of
freedom and moral agency. Disenchantment has created a world with no
objectively ascertainable ground for one's conviction. Under the
circumstances, according to Weber, a modern individual tends to act
only on one's own aesthetic impulse and arbitrary convictions
that cannot be communicated in the eventuality; the majority of those
who cannot even act on their convictions, or the "last men who
invented happiness" à la Nietzsche, lead the life of a
"cog in a machine." Whether the problem of modernity is
accounted for in terms of a permeation of objective, instrumental
rationality or of a purposeless agitation of subjective values, Weber
viewed these two images as constituting a single problem insofar as
they contributed to the inertia of modern individuals who fail to take
principled moral action. The "sensualists without heart"
and "specialists without spirit" indeed formed two faces
of the same coin that may be called the *disempowerment of the
modern self*.
### 4.3 Modernity *contra* Modernization
Once things were different, Weber claimed. An unflinching sense of
conviction that relied on nothing but one's innermost
personality once issued in a highly methodical and disciplined conduct
of everyday life - or, simply, life as a duty. Born in the
crucible of the Reformation, this archetypal modern subjectivity drew
its strength solely from within in the sense that one's
principle of action was determined by one's own psychological
need to gain self-affirmation. Also, the way in which this deeply
introspective subjectivity was practiced, that is, in self-mastery,
entailed a highly rational and radically methodical attitude towards
one's inner self and the outer, objective world. Transforming
the self into an integrated personality and mastering the world with
tireless energy, subjective value and objective rationality once
formed "one unbroken whole" [Weber 1910/1978, 319]. Weber
calls the agent of this unity the "person of vocation"
(*Berufsmensch*) in his religious writings,
"personality" (*Persönlichkeit*) in the
methodological essays, "genuine politician"
(*Berufspolitiker*) in the political writings, and
"charismatic individual" in *Economy and Society*.
The much-celebrated Protestant Ethic thesis was indeed a genealogical
reconstruction of this idiosyncratic moral agency in modern times
[Goldman 1992].
Once different, too, was the mode of society constituted by and in
turn constitutive of this type of moral agency. Weber's social
imagination revealed its keenest sense of irony when he traced the
root of the cohesive integration, intense socialization, and severe
communal discipline of sect-like associations to the isolated and
introspective subjectivity of the Puritan person of vocation. The
irony was that the self-absorbed, anxiety-ridden and even antisocial
virtues of the person of vocation could be sustained only in the thick
disciplinary milieu of small-scale associational life. Membership in
exclusive voluntary associational life is open, and it is such
membership, or "achieved quality," that guarantees the
ethical qualities of the individuals with whom one interacts.
"The old 'sect spirit' holds sway with relentless
effect in the intrinsic nature of such associations," Weber
observed, for the sect was the first mass organization to combine
individual agency and social discipline in such a systematic way.
Weber thus claimed that "the ascetic conventicles and sects
... formed one of the most important foundations of modern
individualism" [Weber 1920/1946, 321]. It seems clear that what
Weber was trying to outline here is an archetypical form of social
organization that can empower individual moral agency by sustaining
group disciplinary dynamism, a kind of pluralistically organized
social life we would now call a "civil society" [Kim 2007,
57-94].
To summarize, the irony with which Weber accounted for rationalization
was driven by the deepening tension between *modernity* and
*modernization*. Weber's problem with modernity
originates from the fact that it required a historically unique
constellation of cultural values and social institutions, and yet,
modernization has effectively undermined the cultural basis for modern
individualism and its germinating ground of disciplinary society,
which together had given the original impetus to modernity. The modern
project has fallen victim to its own success, and in peril is the
individual moral agency and freedom. Under the late modern
circumstances characterized by the "iron cage" and
"warring deities," then, Weber's question becomes:
"How is it at all possible to salvage *any remnants* of
'individual' freedom of movement *in any sense*
given this all-powerful trend" [Weber 1918/1994, 159]?
## 5. Knowledge
Such an appreciation of Weber's main problematic, which
culminates in the question of modern individual freedom, may help shed
light on some of the controversial aspects of Weber's
methodology. In accounting for his methodological claims, it needs to
be borne in mind that Weber was not at all interested in writing a
systematic epistemological treatise in order to put an end to the
"strife over methods" (*Methodenstreit*) of his
time between historicism and positivism. His ambition was much more
modest and pragmatic. Just as "the person who attempted to walk
by constantly applying anatomical knowledge would be in danger of
stumbling" [Weber 1906/1949, 115; translation altered], so can
methodology be a kind of knowledge that may supply a rule of thumb,
codified *a posteriori*, for what historians and social
scientists do, but it could never substitute for the skills they use
in their research practice. Instead, Weber's attempt to mediate
historicism and positivism was meant to aid an actual researcher make
a *practical value-judgment* that is fair and acceptable in the
face of the plethora of subjective values that one encounters when
selecting and processing historical data. After all, the questions
that drove his methodological reflections were what it means to
practice science in the modern polytheistic world and how one can do
science with a sense of vocation. In his own words, "the
capacity to distinguish between empirical knowledge and
value-judgments, and the fulfillment of the scientific duty to see the
factual truth as well as the practical duty to stand up for our own
ideals constitute the program to which we wish to adhere with ever
increasing firmness" [Weber 1904/1949, 58]. Sheldon Wolin thus
concludes that Weber "formulated the idea of methodology to
serve, not simply as a guide to investigation but as a moral practice
and a mode of political action" [Wolin 1981, 414]. In short,
Weber's methodology was as ethical as it was
epistemological.
### 5.1 Understanding (*Verstehen*)
Building on the Neo-Kantian nominalism outlined above [2.1], thus,
Weber's contribution to methodology turned mostly on the
question of objectivity and the role of subjective values in
historical and cultural concept formation. On the one hand, he
followed Windelband in positing that historical and cultural knowledge
is categorically distinct from natural scientific knowledge. Action
that is the subject of any social scientific inquiry is clearly
different from mere behaviour. While behaviour can be accounted for
without reference to inner motives and thus can be reduced to mere
aggregate numbers, making it possible to establish positivistic
regularities, and even laws, of collective behaviour, an action can
only be interpreted because it is based on a radically subjective
attribution of meaning and values to what one does. What a social
scientist seeks to understand is this subjective dimension of human
conduct as it relates to others. On the other hand, an
understanding (*Verstehen*) in this subjective sense is not
anchored in a non-cognitive empathy or intuitive appreciation that is
arational by nature; it can gain objective validity when the meanings
and values to be comprehended are explained causally, that is, as a
means to an end. A teleological contextualization of an action in the
means-end nexus is indeed the precondition for a causal explanation
that can be objectively ascertained. So far, Weber is not essentially
in disagreement with Rickert.
From Weber's perspective, however, the problem that
Rickert's formulation raised was the objectivity of the end to
which an action is held to be oriented. As pointed out [2.1 above],
Rickert in the end had to rely on a certain transhistorical,
transcultural criterion in order to account for the purpose of an
action, an assumption that cannot be warranted in Weber's view.
To be consistent with the Neo-Kantian presuppositions, instead, the
ends themselves have to be conceived of as no less subjective.
Imputing an end to an action is of a fictional nature in the sense
that it is not free from the subjective value-judgment that conditions
the researcher's thematization of a certain subject matter out
of "an infinite multiplicity of successively and coexistently
emerging and disappearing events" [Weber 1904/1949, 72].
Although a counterfactual analysis might aid in stabilizing the
process of causal imputation, it cannot do away completely with the
subjective nature of the researcher's perspective.
In the end, the kind of objective knowledge that historical and
cultural sciences may achieve is precariously limited. An action can
be interpreted with objective validity only at the level of means, not
ends. An end, however, even a "self-evident" one, is
irreducibly subjective, thus defying an objective understanding; it
can only be reconstructed conceptually based on a researcher's
no less subjective values. Objectivity in historical and social
sciences is, then, not a goal that can be reached with the aid of a
correct method, but an ideal that must be striven for without a
promise of ultimate fulfillment. In this sense, one might say that the
so-called "value-freedom" (*Wertfreiheit*) is as
much a methodological principle for Weber as an ethical virtue that a
personality fit for modern science must possess.
### 5.2 Ideal Type
The methodology of "ideal type" (*Idealtypus*) is
another testimony to such a broadly ethical intention of Weber.
According to Weber's definition, "an ideal type is formed
by the one-sided *accentuation* of one or more points of
view" according to which "*concrete individual*
phenomena ... are arranged into a unified analytical
construct" (*Gedankenbild*); in its purely fictional
nature, it is a methodological "utopia [that] cannot be found
empirically anywhere in reality" [Weber 1904/1949, 90]. Keenly
aware of its fictional nature, the ideal type never seeks to claim its
validity in terms of a reproduction of or a correspondence with
reality. Its validity can be ascertained only in terms of adequacy,
which is too conveniently ignored by the proponents of positivism.
This does not mean, however, that objectivity, limited as it is, can
be gained by "weighing the various evaluations against one
another and making a 'statesman-like' compromise among
them" [Weber 1917/1949, 10], which is often proposed as a
solution by those sharing Weber's kind of methodological
perspectivism. Such a practice, which Weber calls
"syncretism," is not only impossible but also unethical,
for it avoids "the practical duty to stand up for our own
ideals" [Weber 1904/1949, 58].
According to Weber, a clear value commitment, no matter how
subjective, is both unavoidable *and* necessary. It is
*unavoidable*, for otherwise no meaningful knowledge can be
attained. Further, it is *necessary*, for otherwise the value
position of a researcher would not be foregrounded clearly and
admitted as such - not only to the readers of the research
outcome but also to the very researcher him/herself. In other words,
Weber's emphasis on "one-sidedness"
(*Einseitigkeit*) not only affirms the subjective nature of
scientific knowledge but also demands that the researcher be
*self-consciously* subjective. The ideal type is devised for
this purpose, for "only as an ideal type" can subjective
value - "that unfortunate child of misery of our
science" - "be given an unambiguous meaning"
[*Ibid*., 107]. Along with value-freedom, then, what the ideal
type methodology entails in ethical terms is, on the one hand, a
daring confrontation with the tragically subjective foundation of our
historical and social scientific knowledge and, on the other, a public
confession of one's own subjective value. Weber's
methodology in the end amounts to a call for the heroic
character-virtue of clear-sightedness and intellectual integrity that
together constitute a genuine person of science - a scientist
with a sense of vocation who has a passionate commitment to
one's own specialized research, yet is utterly "free of
illusions" [Löwith 1982, 38].
## 6. Politics and Ethics
Even more explicitly ethical than his methodology, Weber's
political project also discloses his entrenched preoccupation with the
willful resuscitation of certain character traits in modern society.
At the outset, it seems undeniable that Weber was a deeply liberal
political thinker, especially in the German context, which is not well
known for political liberalism. This means that his ultimate value as a
political thinker was locked on individual freedom, that "old,
general type of human ideals" [Weber 1895/1994, 19]. He was also
a *bourgeois* liberal, and self-consciously so, in a time of
great transformations that were undermining the social conditions
necessary to support classical liberal values and bourgeois
institutions, thereby compelling liberalism to search for a
fundamental reorientation. To that extent, he belongs to that
generation of liberal political thinkers in fin-de-siècle
Europe who clearly perceived the general crisis of liberalism and
sought to resolve it in their own liberal ways [Bellamy 1992,
157-216]. Weber's own way was to address the problem of
classical liberal characterology that was, in his view, being
progressively undermined by the indiscriminate bureaucratization of
modern society.
### 6.1 Domination and Legitimacy
Such an ethical subtext is legible even in Weber's stark realism
that permeates his political sociology - or, a sociology of
domination (*Herrschaftssoziologie*) as he called it [for the
academic use of this term in Weber's time, see Anter 2016,
3-23]. For instance, utterly devoid of moral qualities that many
of his contemporaries attributed to the state, it is defined all too
thinly as "a human community that (successfully) claims the
*monopoly of the legitimate use of physical force* within a
given territory" [Weber 1919/1994, 310]. With the same brevity,
he asserted that domination of the ruled by the ruler, or more
literally, "lordship" (*Herrschaft*), is an
immutable reality of political life even in a democratic state. That
is why, for Weber, an empirical study of politics cannot but be an
inquiry into the different modalities by which domination is
effectuated and sustained. All the while, he also maintained that a
domination worthy of sustained attention is about something far more
than the brute fact of subjugation and subservience. For "the
merely external fact of the order being obeyed is not sufficient to
signify domination in our sense; we cannot overlook the meaning of the
fact that the command is accepted as a *valid* norm"
[Weber 1921-22/1978, 946]. In other words, it has to be a
domination mediated through *justification* and
*interpretation* in which the ruler's claim to authority,
not mere threat of force or promise of benefits, is the reason for the
obedience, not mere compliance, by the ruled. This bipolar emphasis on
the factuality of coercive domination at the phenomenal level and the
essentially *noumenal* nature of power (à la Rainer
Forst) is what characterizes Weber's political realism [Forst
2012].
In terms of contemporary political realism, Weber seemed to hold that
the primary concern of politics is the establishment of an
*orderly* domination and its management within a given
territory rather than the realization of such pre- or extra-political
moral goals as justice (Kant) or freedom (Hegel) - thus the
brevity with which the state is defined above. Sharing this Hobbesian
outlook on politics, or what Bernard Williams calls the "First
Political Question" (FPQ), enables Weber to square his diagnosis
of agonistic value pluralism with an entrenched suspicion of
natural-law foundation of liberalism to sustain a democratic politics
that is uniquely his own [see 6.2 below]. He went beyond Ordorealism,
however, when an evaluative perspective on politics is advocated
without recourse to the moral commitments coming from outside the
political sphere. The making of a workable political order cannot be
authorized by virtue of its coming-into-being and has to satisfy what
Williams called the "Basic Legitimation Demand" (BLD) to
be an acceptable arrangement of social coordination. A legitimate
political order is an institutionalized modus vivendi for collective
life that "makes sense as an intelligible order" (MSIO) in
the eyes of the beholder [Williams 2005, 1-17]. Since such an
acceptance by those living under a particular arrangement depends on
the political morality animating that particular community, the
ruler's claim to authority can meet with success only when based
on a reasonable fit with the local mores, values, and cultures
[Cozzaglio and Greene 2019, 1025-26]. Like Machiavelli's
*Principe*, then, Weber's *Herren* do not behave
in a normless vacuum. They rule under certain political-normative
constraints that turn on the congruence between the way their
domination is *justified* and the way such a public
justification is *interpreted* as acceptable to the ruled.
Weber's concept of domination is as much noumenal as phenomenal.
To that extent, it is little wonder that his name figures not only
prominently but also uniquely in the pantheon of political realists
[Galston 2010].
From this nuanced realist premise, Weber famously moved on to identify
three ideal types of legitimate domination based on, respectively,
charisma, tradition, and legal rationality. Roughly, the first type of
legitimacy claim depends on how persuasively the leaders prove their
charismatic qualities, for which they receive personal devotions and
emotive followings from the ruled. The second kind of claim can be
made successfully when certain practices, customs, and mores are
institutionalized to (re)produce a stable pattern of domination over a
long duration of time. In contrast to these crucial dependences on
personality traits and the passage of time, the third type of
authority is unfettered by time, place, and other forms of contingency
as it derives its legitimacy from adherence to impersonal rules and
general principles that can only be found by suitable legal-rational
reasoning. It is, along with the traditional authority, a type of
domination that is inclined towards the status quo in ordinary times
as opposed to the charismatic authority that represents extraordinary,
disruptive, and transformative forces in history. Weber's fame
and influence as a political thinker are built most critically upon
this typology and the ways in which those ideal types are deployed in
his political writings.
Weber's sociology of domination has nonetheless been variously
suspected of embedded normative biases. For one, his theory of
legitimacy is seen as endorsing a cynical and unrealistic rejection of
universal morality in politics that makes it hard to pass an objective
and moral evaluative judgment on legitimacy claims, a charge that is
commonly leveled at political realism at large. Under Weber's
concept of legitimacy, anything goes, so to speak, as long as the
ruler goes along with the local political morality of the ruled
(provided that it is formed independently of any coercive or corrosive
interference by the ruler, thereby satisfying Williams's
"critical theory principle"). Read in conjunction with his
voluminous political writings, especially, it is criticized to this
day as harbouring or foreshadowing, among others, Bonapartist
caesarism, passive-revolutionary Fordist ideology, quasi-Fascist
elitism, and even proto-Nazism (especially with respect to his robust
nationalism and/or nihilistic celebration of power) [*inter
alia*, Strauss 1950; Marcuse in Stammer (ed.) 1971; Mommsen 1984;
Rehman 2013]. In addition to these politically heated charges,
Weber's typology also reveals a crucial lacuna even as an
empirical political sociology. That is to say, it allows only a
scant, or ambiguous, conceptual topos for democracy.
In fact, it seems as though Weber is unsure of the proper place of
democracy in his schema. At one point, democracy is deemed a
*fourth* type of legitimacy because it should be able to
embrace legitimacy *from below* whereas his three ideal types
all focus on that *from above* [Breuer in Schroeder (ed.) 1998,
2]. At other times, Weber seems to believe that democracy is simply
*non-legitimate*, rather than another type of legitimate
domination, because it aspires to an identity between the ruler and
the ruled (i.e., no domination at all); yet without assuming a
hierarchical and asymmetrical relationship of power, his concept of
legitimacy can hardly get off the ground. Thus, Weber could describe the
emergence of proto-democracy in the late medieval urban communes only
in terms of "revolutionary usurpation" [Weber
1921-22/1978, 1250], calling them the "first
*deliberately non-legitimate and revolutionary* political
association" [*ibid.*, 1302]. Too recalcitrant to fit
into his overall schema, in other words, these historical prototypes
of democracy simply fall *outside* of his typology of
domination as non- or not-legitimate at all.
Overlapping with, but still distinguishable from, these is yet another
way in which Weber conceptualized democracy, one that has to do with
charismatic legitimacy. The best example is the Puritan sect, in which
authority is
legitimated only on the grounds of a consensual order created
voluntarily by proven believers possessing their own quantum of
charismatic legitimating power. As a result of this political
corollary of the Protestant doctrine of universal priesthood, Puritan
sects could and did "insist upon 'direct democratic
administration' by the congregation" and thereby do away
with the hierarchical distinction between those ruling and those ruled
[*ibid.*, 1208]. In a secularized version of this group
dynamic, a democratic ballot would become the primary tool by which
the presumed charisma of individual lay citizens is aggregated and
transmitted to their elected leader, who becomes "the agent and
hence the servant of his voters, not their chosen master"
[*ibid.*, 1128]. Rather than an outright non-legitimate or
fourth type of domination, here, democracy comes across as an
extremely rare subset of a diffused and institutionalized form of
*charismatic* legitimacy.
### 6.2 Democracy, Partisanship, and Compromise
All in all, the irony is unmistakable. It seems as though one of the
most influential political thinkers of the twentieth century cannot
come to terms with the zeitgeist of his age, in which democracy, in
whatever form, shape, and shade, emerged as the only acceptable ground
for political legitimacy. This awkwardness is nowhere more apparent
than in his advocacy for "leadership democracy"
(*Fuhrerdemokratie*) during the constitutional politics of
post-WWI Germany.
If the genuine self-rule of the people is impossible, according to his
unsentimental outlook on democracy, the only choice is one between
leaderless and leadership democracy. When advocating a sweeping
democratization of defeated Germany, thus, Weber envisioned democracy
in Germany as a political marketplace in which strong charismatic
leaders can be identified and elected by winning votes in a free
competition, even battle, among themselves. Preserving and enhancing
this element of struggle in politics is important since it is only
through a dynamic electoral process that a national leadership strong
enough to control the otherwise omnipotent bureaucracy can be forged.
The primary concern for Weber in designing democratic institutions
has, in other words, less to do with the realization of democratic
ideals, such as freedom, equality, justice, or self-rule, than with
cultivation of certain character traits befitting a robust national
leadership. In its overriding preoccupation with the leadership
qualities, Weber's theory of democracy contains ominous streaks
that may vindicate Jurgen Habermas's infamous dictum that
Carl Schmitt, "the *Kronjurist* of the Third
Reich," was "a legitimate pupil of Weber's"
[Habermas in Stammer (ed.) 1971, 66].
For a fair and comprehensive assessment, however, it should also be
noted that Weber's leadership democracy is not
solely reliant upon the fortuitous personality traits of its leaders,
let alone a caesaristic dictator. "[A] genuine charisma is
radically different from the convenient presentation of the present
'divine right of king'... the very opposite is true
of the genuinely charismatic ruler, who is responsible to the
ruled" [1922/1978, 1114]. Such responsibility is conceivable
because charisma is attributed to a leader through a process that can
be described as "imputation" *from below* [Joosse
2014, 271]. In addition to the free electoral competition led by the
organized mass parties, Weber saw localized, yet public associational
life as a breeding ground for such an imputation of charisma. When
leaders are identified and selected at the level of, say, neighborhood
choral societies and bowling clubs [Weber 1910/2002], the alleged
authoritarian elitism of leadership democracy comes across as more
pluralistic in its valence, far from its usual identification with
demagogic dictatorship and unthinking mass following. Insofar as a
vibrant civil society functions as an effective medium for the
horizontal diffusion of charismatic qualities among lay people, his
notion of charismatic leadership can retain a strongly democratic
tone: indeed, he suggested associational pluralism as a
sociocultural ground for the political education of the lay citizenry
from which genuine leaders would hail. Weber's charismatic
leadership has to be "democratically manufactured" [Green
2008, 208], in short, and such a formative political project is
predicated upon a pluralistically organized civil society as well as
such liberal institutions as universal suffrage, free elections, and
organized parties.
In this respect, however, it should be noted that Weber's take
on civil society is crucially different from a
communitarian-Tocquevillean outlook, and this contrast can be cast
into sharper relief once put in terms of the contemporary democratic
theory of partisanship [cf., *inter alia*, Rosenblum 2008;
Muirhead 2014; White and Ypi 2016]. Like the contemporary advocates of
partisanship, Weber is critical of the conventional communitarian view
that simply equates civil society with voluntary associational life
itself. For not all voluntary associations are conducive to democracy;
some are in fact "bad" for its viability. Even in a
"good" civil society, those "associative
practice," or *Vergesellschaftung* in Weber's
parlance [Weber 1910/2002], may cultivate the kind of *civil*
virtues that regulate our private lives, but such social capital
cannot be automatically transferred to the public realm as a useful
set of *civic* virtues and skills for democratic politics.
Political capital can be acquired only by living through
*political* experiences daily. This realization led Weber as well as a growing
number of contemporary democratic theorists to converge on an
insistent call for the politicization of civil society in the form of
not less, but better partisanship, making his politics of civil
society crucially different from that of a communitarian-Tocquevillean
persuasion [see Kim in Hanke, Scaff & Whimster (eds.) 2020].
Also different from this intensely political civil society is a
liberal-Habermasian "public sphere," a
rational-communicative haven in which the open exchange and fair
deliberation of impartial opinions take place until reasonable
consensus emerges. By contrast, Weber's civil society is to be
an agonistic arena of organized rivalry, competition, and struggle on
behalf of the irreducibly *partial* claims between which
consensus - be that reasonable, overlapping, or bipartisan
- may not always be found. Given the incommensurable value
fragmentation of modern politics and society, Weber would
wholeheartedly embrace the so-called "circumstances of
politics" under which deep disagreements are reasonable
*and* permanent, agreeing that it is not necessarily a bad
thing for democracy as long as those "permanent
disagreements" remain peaceful [Waldron 1999]. From such an
agonistic perspective, the best that can be expected is some kind of
mixture of those partial claims - a *compromise* wherein
lies the true meaning of *political* virtue. That is to say,
although no "overlapping consensus" can be expected, it is
precisely because all partisan claims are so partial that a political
compromise can be made at least between *good* partisans. Being
neither too unprincipled (as in opportunistic power-seekers) nor too
principled (as in moral zealots), good partisan citizens welcome a
political compromise, despite their passionate value convictions,
because they know that some reasonable disagreements are permanent.
Then, the kind of political capital expected to be accumulated in a
good partisan civil society is a mixture of "principle and
pragmatism" [Muirhead 2014, 41-42] - a political
virtue much akin to Weber's syncretic ethics of conviction
(*Gesinnungsethik*) and responsibility
(*Verantwortungsethik*) [see 6.3 below].
Together, Weber's ethics also demand that the political leaders
and public citizenry combine unflinching commitments to higher causes
(which make them different from mere bureaucratic careerists) with
sober realism that no political claim, including their own, can
represent the whole truth (which makes them different from moral
purists and political romantics). This syncretic ethic is the ultimate
hallmark of those politicians with a sense of vocation who would fight
for their convictions with fierce determination yet not without a
"sense of pragmatic judgment"
(*Augenmass*) that a compromise is unavoidable between
incommensurable value positions, and all they can do in the end is to
take robust responsibility for the consequences, either intended or
unintended, of what *they* thought was a principled compromise.
This is why Weber said: "The politician must make compromises
... the scholar may not cover them (*Der Politiker
muss Kompromisse machen ... der Gelehrte darf sie nicht
decken*)" [MWG II/10, 983; also see Bruun 1972 (2007, 244)]. It is
this type of political virtue that Weber wants to instill at the
level of citizenship as well as leadership, and the site of this
political education is a pluralistically organized civil society in
which leaders and citizens can experience the dynamic and
institutionalized politicization (re)produced by partisan
politics.
### 6.3 Conviction and Responsibility
What exactly, then, are these two ethics of conviction and
responsibility that Weber wanted to foster through a
"'chronic' political schooling"
[Weber 1894/1994, 26]? According to the ethic of responsibility, on
the one hand, an action is given meaning only as a cause of an effect,
that is, only in terms of its causal relationship to the empirical
world. The virtue lies in an objective understanding of the possible
causal effect of an action and the calculated reorientation of the
elements of an action in such a way as to achieve a desired
consequence. An ethical question is thereby reduced to a question of
technically correct procedure, and free action consists of choosing
the correct means. By emphasizing the causality to which a free agent
subscribes, in short, Weber prescribes an ethical integrity between
action and consequences, instead of a Kantian emphasis on that between
action and intention.
According to the ethic of conviction, on the other hand, a free agent
should be able to choose autonomously not only the means, but also the
end; "this concept of personality finds its
'essence' in the constancy of its inner relation to
certain ultimate 'values' and 'meanings' of
life" [Weber 1903-06/1975, 192]. In this respect,
Weber's central problem arises from the recognition that the
kind of rationality applied in choosing a means cannot be used in
choosing an end. These two kinds of reasoning represent categorically
distinct modes of rationality, a boundary further reinforced by modern
value fragmentation. With no objectively ascertainable ground of
choice provided, then, a free agent has to create purpose *ex nihilo*:
"ultimately life as a whole, if it is not to be permitted to run
on as an event in nature but is instead to be consciously guided, is a
series of ultimate decisions through which the soul - as in
Plato - chooses its own fate" [Weber 1917/1949, 18]. This
ultimate decision and the Kantian integrity between intention and
action constitute the essence of what Weber calls an ethic of
conviction.
It is often held that the gulf between these two types of ethics is
unbridgeable for Weber. Demanding an unmitigated integrity between
one's ultimate value and political action, that is to say, the
*deontological* ethic of conviction cannot be reconciled with
that of responsibility which is *consequentialist* in essence.
In fact, Weber himself admitted the "abysmal contrast"
that separates the two. This frank admission, nevertheless, cannot be
taken to mean that he privileged the latter over the former as far as
political education is concerned.
Weber keenly recognized the deep tension between consequentialism and
deontology, but he still insisted that they should be forcefully
brought together. The former recognition only lends urgency to the
latter agenda. Resolving this analytical inconsistency in terms of
certain "ethical decrees" did not interest Weber. Instead,
he sought for a moral character that can manage this
"combination" with a sheer force of will. In fact, he also
called this synthetic ethic as that of responsibility without clearly
distinguishing it from the merely consequentialist ethic it sought to
overcome, thus creating an interpretive debate that continues to this
day [de Villiers 2018, 47-78]. Be that as it may, his advocacy
for this willful synthesis is incontrovertible, and he called such an
ethical character a "politician with a sense of vocation"
(*Berufspolitiker*) who combines a passionate conviction in
supra-mundane ideals that politics has to serve and a sober rational
calculation of its realizability in this mundane world. Weber thus
concluded: "the ethic of conviction and the ethic of
responsibility are not absolute opposites. They are complementary to
one another, and only in combination do they produce the true human
being who is *capable* of having a 'vocation for
politics'" [Weber 1919/1994, 368].
This synthetic political virtue seems not only hard to achieve, but
also to offer no promise of a felicitous ending. Weber's synthesis
demands a sober confrontation with the reality of politics, i.e., the
ever-presence of "physical forces" and all the unintended
consequences or collateral damages that come with the use of coercion.
Only then may it be brought under ethical control by a superhuman
deployment of passion and prudence, but, even so, Weber's
political superhuman (*Ubermensch*) cannot circumvent the
so-called "dirty-hands dilemma" [cf. Walzer 1973; Coady
2009]. For, even at the moment of triumph, the unrelenting grip of
responsibility would never let him or her disavow the guilt and
remorse for having employed the "physical forces," no
matter how beneficial or necessary. It is a tragic-heroic ethic of
"nevertheless (*dennoch*)" [see 2.2] and, as such,
Weber's "tragicism" goes beyond politics [Honigsheim
2013, 115]. *Science as a Vocation* is a self-evident case in
which the virtue of "value freedom" demands a scientist to
confront the modern epistemological predicament of incommensurable
value-fragmentation without succumbing to the nihilistic plethora of
subjective values by means of a disciplined and willful devotion to
the scholarly specialization and scientific objectivity [see 5.2].
From this ethical vantage point, *The Protestant Ethic and the
Spirit of Capitalism* may as well be re-titled *Labour as a
Vocation*. It was in this much earlier work
(1904-5) that Weber first outlined the basic contours of the
ethic of vocation (*Berufsethik*) and a person of vocation
(*Berufsmensch*) and the way those work practices emerged
historically in the course of the Reformation (and faded away
subsequently). The Calvinist doctrine of predestination amplified
the innermost anxiety over one's own salvation, but such a
subjective fear and trembling was channeled into a psychological
reservoir for the most disciplined and methodical life conduct
(*Lebensfuhrung*), or labour in a calling, that created the
"spirit" of capitalism. Paradoxically combining subjective
value commitments and objective rationality in the pursuit of those
goals, in short, the making and unmaking of the *Berufsmensch*
is where Weber's ethical preoccupations in politics, science,
and economy converge [cf. Hennis 1988].
In the end, Weber's project is not about formal analysis of
moral maxims, nor is it about substantive virtues that reflect some
kind of ontic telos. It is too formal or empty to be an Aristotelean
virtue ethics, and it is too concerned with moral character to be a
Kantian deontology narrowly understood. Weber's ethical
project, rather, aims at cultivating a character who can
willfully bring together these conflicting formal virtues to create
what he calls a "total personality"
(*Gesamtpersonlichkeit*). It culminates in an ethical
characterology or philosophical anthropology in which passion and
reason are properly ordered by sheer force of individual will. As
such, Weber's political virtue resides not simply in a
subjective intensity of value commitment nor in a detached
intellectual integrity and methodical purposefulness, but in their
willful combination in a unified soul. In this abiding preoccupation
with *statecraft-cum-soulcraft*, Weber was a moralist and
political educator who squarely belonged to the venerable tradition
that stretches back to the ancient Greeks down to Rousseau, Hegel, and
Mill.
## 7. Concluding Remarks
Seen this way, we find a remarkable consistency in Weber's
thought. Weber's main problematic turned on the question of
individual autonomy and freedom in an increasingly rationalized
society. His dystopian and pessimistic assessment of rationalization
drove him to search for solutions through politics and science, which
broadly converge on a certain *practice of the self*. What he
called the "person of vocation," first outlined famously
in *The Protestant Ethic*, provided a bedrock for his various
efforts to resuscitate a character who can willfully combine
unflinching conviction and methodical rationality even in a society
besieged by bureaucratic petrification and value fragmentation. It is
also in this entrenched preoccupation with an ethical characterology
under modern circumstances that we find the source of his enduring
influences on twentieth-century political and social thought.
On the left, Weber's articulation of the tension between
modernity and modernization found resounding echoes in the
"Dialectics of Enlightenment" thesis by Theodor Adorno and
Max Horkheimer; Lukacs's own critique of the perversion
of capitalist reason owes no less to Weber's problematization of
instrumental rationality on which is also built Habermas's
elaboration of communicative rationality as an alternative. Different
elements in Weber's political thought, e.g., intense political
struggle as an antidote to modern bureaucratic petrification,
leadership democracy and plebiscitary presidency, stark realist
outlook on democracy and power-politics, and value-freedom and
value-relativism in political ethics, were selected and critically
appropriated by such diverse thinkers on the right as Carl Schmitt,
Joseph Schumpeter, Leo Strauss, Hans Morgenthau, and Raymond Aron.
Even the postmodernist project of deconstructing the Enlightenment
subjectivity finds, as Michel Foucault does, a precursor in Weber. All
in all, across the vastly different ideological and methodological
spectrum, Max Weber's thought will continue to be a deep
reservoir of fresh inspiration as long as an individual's fate
under (post)modern circumstances does not lose its privileged place in
the political, social, cultural, and philosophical reflections of our
time.
## 1. Philosophical Development
Simone Weil was born in Paris on 3 February 1909. Her parents, both of
whom came from Jewish families, provided her with an assimilated,
secular, bourgeois French childhood that was cultured and comfortable.
Weil and her older brother Andre--himself a math prodigy,
a founding member of the Bourbaki group, and a distinguished mathematician at
the Princeton Institute for Advanced Study--studied at
prestigious Parisian schools. Weil's first philosophy teacher,
at the Lycee Victor-Duruy, was Rene Le Senne; it was he
who introduced her to the thesis--which she would
maintain--that contradiction is a theoretical obstacle generative
of nuanced, alert thinking. Beginning in October 1925, Weil studied at
Henri IV Lycee in preparation for the entrance exams of the
Ecole Normale Superieure. At Henri IV she studied under
the philosopher and essayist Emile-Auguste Chartier (known
pseudonymously as Alain), whose teacher was Jules Lagneau. Like Weil
at this time, Alain was agnostic. In his classes he emphasized
intellectual history: in philosophy this included Plato, Marcus
Aurelius, Descartes, Spinoza, and Kant, and in literature, Homer,
Aeschylus, Sophocles, and Euripides. Already sympathetic with the
downtrodden and critical of French society, she gained the theoretical
tools to levy critiques against her country and philosophical
tradition in Alain's class. There, employing paradox and
attention through the form of the essay (it is important to note that
none of her writings was published as a book in her lifetime), she
began intentionally developing what would become her distinct mode of
philosophizing. It is therefore arguable that she is part of the
Alain/Lagneau line of voluntarist, *spiritueliste* philosophy
in France.
In 1928 Weil began her studies at the Ecole Normale. She was
the only woman in her class, the first woman having been admitted in
1917. In 1929-1930 she worked on her dissertation on
knowledge and perception in Descartes, and having received her
*agregation* diploma, she served from late 1931 to
mid-1934 as a teacher at *lycees*. Throughout this
period, outside of her duties at each *lycee* where she
instructed professionally, Weil taught philosophy to, lobbied for, and
wrote on behalf of workers' groups; at times, moreover, she
herself joined in manual labor. In her early thinking she prized at
once the first-person perspective and radical skepticism of Descartes,
the class-based solidarity and materialist analysis of Marx, and the
moral absolutism and respect for the individual of Kant. Drawing from
each, her early work can be read as an attempt to provide, with a view
toward liberty, her own analysis of the fundamental causes of
oppression in society.
In early August 1932, Weil travelled to Germany in order to understand
better the conditions fostering Nazism. German trade unions, she wrote
to friends upon her return to France, were the single force in Germany
able to generate a revolution, but they were fully reformist. Long
periods of unemployment left many Germans without energy or esteem. At
best, she observed frankly, they could serve as a kind of dead weight
in a revolution. More specifically, by early 1933 she criticized the
tendency of social organizations to engender bureaucracy, which
elevated management and collective thinking over and against the
individual worker. Against this tendency, she advocated for
workers' understanding the physical labor they performed within
the context of the whole organizational apparatus. In
"Reflections Concerning the Causes of Liberty and Social
Oppression" (1934), Weil presented both a summation of her early
thought and a prefiguring of central elements in her thematic
trajectory. The essay employs a Marxian method of analysis that pays
attention to the oppressed, critiques her own position as an
intellectual, privileges manual labor, and demands precise and
unorthodox individual thinking that unites theory and practice against
collective cliches, propaganda, obfuscation, and
hyper-specialization. These ideas would provide a theoretical
framework for her idiosyncratic practice of philosophy. Near the end
of her life she wrote in a notebook: "Philosophy (including
problems of cognition, etc.) is *exclusively* an affair of
action and practice" (FLN 362).
On 20 June 1934, Weil applied for a sabbatical from teaching. She was
to spend a year working in Parisian factories as part of their most
oppressed group, unskilled female laborers. Weil's "year
of factory work" (which amounted, in actuality, to around 24
weeks of laboring) was not only important in the development of her
political philosophy but can also be seen as a turning point in her
slow religious evolution.
In Paris's factories, Weil began to see and to comprehend
firsthand the normalization of brutality in modern industry. There,
she wrote in her "Factory Journal", "[t]ime was an
intolerable burden" (FW 225) as modern factory work comprised
two elements principally: orders from superiors and, relatedly,
increased speeds of production. While the factory managers continued
to demand more, both fatigue and thinking (itself less likely under
such conditions) slowed work. As a result, Weil felt dehumanized.
Phenomenologically, her factory experience was less one of physical
suffering *per se*, and more one of humiliation. Weil was
surprised that this humiliation produced not rebellion but rather
fatigue and docility. She described her experience in factories as a
kind of "slavery". On a trip to Portugal in August 1935,
upon watching a procession to honor the patron saint of fishing
villagers, she had her first major contact with Christianity and wrote
that
>
>
> the conviction was suddenly borne in upon [her] that Christianity is
> pre-eminently the religion of slaves, that slaves cannot help
> belonging to it, and [she] among others. (1942 "Spiritual
> Autobiography" in WFG 21-38, 26)
>
>
>
In comparison with her pre-factory "Testament", we see
that in her "Factory Journal" Weil maintains the language
of liberty, but she moves terminologically from
"oppression" to "humiliation" and
"affliction". Thus her conception and description of
suffering thickened and became more personal at this time.
Weil participated in the 1936 Paris factory occupations and, moreover,
planned on returning to factory work. Her trajectory shifted, however,
with the advent of the Spanish Civil War. On the level of geopolitics,
she was critical of both civil and international war, and she approved
of France's decision not to intervene on the Republican side. On
the level of individual commitment, however, she obtained
journalist's credentials and joined an international anarchist
brigade. On 20 August 1936, Weil, clumsy and nearsighted, stepped in
a pot of boiling oil, severely burning her lower left leg and instep.
Only her parents could persuade her not to return to combat. By late
1936 Weil wrote against French colonization of Indochina, and by early
1937 she argued against French claims to Morocco and Tunisia. In April
1937 she travelled to Italy. Within the basilica Santa Maria degli
Angeli, inside the small twelfth-century Romanesque chapel where St.
Francis prayed, Weil had her second significant contact with
Christianity. As she would later describe in a letter,
"[S]omething stronger than I compelled me for the first time to
go down on my knees" (WFG 26).
From 1937-1938 Weil revisited her Marxian commitments, arguing
that there is a central contradiction in Marx's thought:
although she adhered to his method of analysis and demonstration that
the modern state is inherently oppressive--being that it is
composed of the army, police, and bureaucracy--she continued to
reject any positing of revolution as immanent or determined. Indeed,
in Weil's middle period, Marx's confidence in history
seemed to her a worse ground for judgment than Machiavelli's
emphasis on contingency.
During the week of Easter 1938, Weil visited the Benedictine abbey of
Solesmes from Palm Sunday to the following Tuesday. At Solesmes she
had her third contact with Christianity: suffering from headaches,
Weil found a joy so pure in Gregorian chant that, by analogy, she
gained an understanding
>
>
> of the possibility of living divine love in the midst of affliction.
> It goes without saying that in the course of these services the
> thought of the passion of Christ entered into [her] being once and for
> all. (WFG 26)
>
>
>
At Solesmes, she was also introduced to the seventeenth-century poet
George Herbert by a young Englishman she met there. She claimed to
feel Christ's presence while reciting Herbert's poem
"Love". As she fixed her full attention on the poem while
suffering from her most intense headache, Weil came to see that her
recitation had the virtue of prayer, saying, "Christ himself
came down and took possession of me" (WFG 27). Importantly, she
thought God "in his mercy" had prevented her from reading
the mystics until that point; therefore, she could not say that she
invented her unexpected contact with Christ (WFG 27). These events and
writings in 1936-1938 exemplify the mutually informing nature of
solidarity and spirituality in Weil's thought that began in
August 1935.
After the military alliance of Germany and Italy in May 1939, Weil
renounced her pacifism. It was not that she felt she was wrong in
holding such a position before, but rather that now, she argued,
France was no longer strong enough to remain generous or merely
defensive. Following the German Western offensive, she left Paris with
her family in June 1940, on the last train. They settled eventually
but temporarily in Marseilles, at the time the main gathering point
for those attempting to flee France, and where Weil would work with
the Resistance.
In Vichy France Weil took up a practice she had long sought, namely,
to apprentice herself to the life of agricultural laborers. In
addition, in Marseilles she was introduced to the Dominican priest
Joseph-Marie Perrin, who became a close friend as well as a spiritual
interlocutor, and through whom she began to consider the question of
baptism. In an effort to help Weil find a job as an agricultural
laborer, Perrin turned to his friend Gustave Thibon, a Catholic writer
who owned a farm in the Ardèche region. Thus in Fall 1941 Weil
worked in the grape harvest. Importantly, however, she was not treated
like the rest of the laborers; although she worked a full eight hours
per day, she resided and ate at the house of her employers. She
reportedly carried Plato's *Symposium* with her in the
vineyards and attempted to teach the text to her fellow laborers.
In 1942 Weil agreed to leave France in part so her parents would be in
a safe place (they would not leave without her, she knew), but
principally because she thought she might be more useful for
France's war effort if she were in another country. Thus she
went to New York via Morocco. In New York, as in Marseilles, she
filled notebook after notebook with philosophical, theological, and
mathematical considerations. New York, however, felt removed from the
sufferings of her native France; the Free French movement in London
felt one step closer to returning to France. In 1943 Weil was given a
small office at 19 Hill Street in London. From this room she would
write day and night for the next four months, sleeping around three
hours each night. Her output in this period totaled around 800 printed
pages, but she resigned from the Free French movement in late July
(Pétrement 1973 [1976: xx]).
Weil died Tuesday, 24 August 1943. Three days later, the coroner
pronounced her death a suicide--cardiac failure from
self-starvation and tuberculosis. The accounts provided by her
biographers tell a more complex story: Weil was aware that her fellow
country-men and women in the occupied territory had to live on minimal
food rations at this time, and she had insisted on the same for
herself, which exacerbated her physical illness to the point of death
(Von der Ruhr 2006: 18). On 30 August she was buried at
Ashford's New Cemetery between the Jewish and the Catholic
sections. Her grave was originally anonymous. For fifteen years
Ashford residents thought it was a pauper's.
## 2. Social-Political Philosophy
Always writing from the left, Weil continually revised her
social-political philosophy in light of the rapidly changing material
conditions in which she lived. However, she was consistent in her
acute attention to and theorizing from the situation of the oppressed
and marginalized in society.
Her early essays on politics, a number of which were posthumously
collected by Albert Camus in *Oppression et liberté*
(OL), include "Capital and the Worker" (1932),
"Prospects: Are We Heading for the Proletarian
Revolution?" (1933), "Reflections concerning Technology,
National Socialism, the U.S.S.R., and certain other matters"
(1933) and, most importantly, "Reflections concerning the Causes
of Liberty and Social Oppression" (1934 in OL 36-117). In
her early writings, Weil attempted to provide an analysis of the real
causes of oppression so as to inform militants in revolutionary
action. Her concern was that, without this analysis, a socially
enticing movement would lead only to superficial changes in the
appearance of the means of production, not to new and freer forms of
structural organization.
The division of labor or the existence of material privilege alone is
not a sufficient condition for Weil's concept of
"oppression". That is, she recognized that there are some
forms of social relations involving deference, hierarchy, and order
that are not necessarily oppressive. However, the intervention of the
struggle for power--which she, following Hobbes, sees as an
inexorable feature of human society--generates oppression.
> [P]ower [*puissance*] contains a sort of fatality which weighs
> as pitilessly on those who command as on those who obey; nay more, it
> is in so far as it enslaves the former that, through their agency, it
> presses down upon the latter ("Reflections concerning the Causes
> of Liberty and Social Oppression", OL 62).
Oppression, then, is a specific social organization that, as a
consequence of the essentially unstable struggle for power, and
principally the structure of labor, prevents the individual from
experiencing the world to the full extent of her or his
capability, a capability Weil describes as "methodical
thinking" (*pensée méthodique*). By
divorcing the understanding from the application of a method,
oppression, exercised through force, denies human beings direct
contact with reality. Also informing her sense of oppression is her
notion of "privilege", which includes not only money or
arms, but also a corpus of knowledge closed to the working masses
that, thereby, engenders a culture of specialists. Privilege thus
exacerbates oppression in modern societies, which are held together
not by shared goals, meaningful relations, or organically developing
communities, but through a "religion of power" (OL 69). In
this way, both power and prestige contribute to a modern reversal of
means and ends. That is, elements like money, technology, and
war--all properly speaking "means"--are, through
the workings of power, treated as "ends" worthy of
furtherance, enhancement, and multiplication without limit.
In her early social-political thought Weil testified to what she
called "a new species of oppression"
("Prospects", OL 9), namely, the bureaucracy of modern
industry. Both her anarchism and Marxian method of analysis influenced
how she problematized revolutionary struggle: the problem lies in
forming a social organization that does not engender bureaucracy, an
anonymous and institutionalized manifestation of force. The
oppressiveness of modern bureaucracy, to which she responded through
analysis and critique, includes cliched, official and
obscurantist language--the "caste privileges" of
intellectuals ("Technology, National Socialism, the
U.S.S.R", OL 34)--and a division of labor such that, in
the case of most workers, labor does not involve, but in fact
precludes, engaged thinking. Out of balance in this way, modern humans
live as cogs in a machine: less like thinking individuals, and more
like instruments crushed by "collectivities".
Collectivity--a concept that centrally comprises industry,
bureaucracy, and the state (the "bureaucratic organization
*par excellence*" [OL 109]), but that also includes
political parties, churches, and unions--by definition quashes
individual subjectivity.
Despite her critiques of oppression, prestige, and collectivity, in a
polemical argument Weil is also critical of "revolution",
which for her refers to an inversion of forces, the victory of the
weak over the powerful. This, she says, is "the equivalent of a
balance whose lighter scale were to go down" (OL 74).
"Revolution" in its colloquial or deterministic sense,
then, has itself become an "opiate", a word for which the
laboring masses die, but which lies empty. For Weil, real revolution
is precisely the re-organization of labor such that it subordinates
the laborers neither to management (as in bureaucracy) nor to
oppressive conditions (as in factory work). As she sees it, if
revolution is to have meaning, it is only as a regulative, and not a
positive, ideal (OL 53). In Weil's adaptation of this Kantian
notion, such a regulative ideal involves an attention to reality,
e.g., to the present political and working conditions, to human
conditions such as the struggle for power. Only in this way can it
provide a standard of analysis for action: the ideal allows for a
dialectical relation between a revolutionary alternative and present
praxis, which is always grounded in material conditions. Freedom,
then, is a unity of thought and action. It is, moreover--and not
unlike in Hegel's conceptualization--a condition of
balanced relations to and interdependence with others (who check our
sovereignty) and the world (which is limited and limiting). A freeing
mode of production, as opposed to an alienating and oppressing one,
would involve a meaningful relation to thinking and to others
throughout the course of labor. When workers understand both the
mechanical procedures and the efforts of other members of the
collectivity, the collectivity itself becomes subject to individuals,
i.e., means and ends are rightfully in a relation of equilibrium.
The ontology behind Weil's notion of free labor draws on the
Kantian emphasis on the individual as an end--especially, for
Weil, in her or his capacity for thinking (with Plato, for whom
knowing and doing are united). In addition, the teleology of labor in
Weil's philosophy corresponds with the thought of Hegel and
Marx, and is in opposition to Locke: in the individual's mixing
with the material world, her interest lies in how this promotes
liberty, rather than property. The uprooting Weil experienced in
factory work introduced a shift in her conception of freedom: as she
began to see the human condition as not just one of inexorable
struggle, but of slavery, her notion of freedom shifted from a
negative freedom from constraints to a positive freedom to obey. She
referred to the latter, a particular kind of relational freedom, as
"consent". She concluded her "inventory of modern
civilization" (OL 116) with a call to introduce play into the
bureaucratic machine and a call to think as individuals, denying the
"social idol" by "refusing to subordinate
one's own destiny to the course of history" and by taking
up "critical analysis" (OL 117). In sum, by 1934 Weil
pictured an ideal society, which she conceptualized as a regulative
impossibility in which manual labor, understood and performed by
thinking individuals, was a "pivot" toward liberty.
Weil's "Factory Journal" from 1934-1935
suggests a broader shift in her social-political philosophy. During
this period, her political pessimism deepened. In light of the
humiliating work she conducted in the factories, her post-factory
writings feature a terminological intensification: from
"humiliation" and "oppression" to
"affliction" (*malheur*), a concept informed by her
factory experience of embodied pain combined with psychological agony
and social degradation--and to which she would later add
spiritual distress.
In 1936 Weil advanced her political commitments in ways that
foreshadowed her later social-political thought. She continued to
eschew revolution and instead to work toward reform, namely, greater
equality in the factory through a shift from a structure of
subordination to one of collaboration. In addition, in response to the
outbreak of the Spanish Civil War, she argued against fascism while
favoring--against the Communist position--French
non-intervention. She was opposed to authoritarian logic, and she saw
going to war as a kind of surrender to the logic of power and prestige
inherent in fascism.
> One must choose between prestige and peace. And whether one claims to
> believe in the fatherland, democracy, or revolution, the policy of
> prestige means war. (1936 "Do We Have to Grease Our Combat
> Boots", FW 258)
Weil's development of the concepts of power and prestige
continued in her article "Let Us Not Start Another Trojan
War" (1936, subtitled, and translated into English as,
"The Power of Words" in SWA 238-258). Her thesis is
that a war's destructiveness is inversely proportional to the
substance of the official pretexts given for fighting it. Wars are
absurd, she argued
(against Clausewitz), because "*they are conflicts with no
definable objective*" (SWA 239). Like the phantom of Helen
that inspired ten years of fighting, ideologies (e.g., capitalism,
socialism, fascism), as well as capitalized words such as
"Nation" and "State", have taken on the role
of the phantom in the modern world. Power relies on
prestige--itself illusory and without limit because no nation
thinks it has enough or is sure of maintaining its imagined glory, and
therefore ever increases its means to wage war--so as to appear
absolute and invincible. In response to these forces, Weil prescribed
distinguishing between the imaginary and the real and, relatedly,
defining words properly and precisely. Taken together, these
prescriptions amount to a critique of ideology with its bloated
political rhetoric.
> [W]hen a word is properly defined it loses its capital letter and can
> no longer serve either as a banner or as a hostile slogan; it becomes
> simply a sign, helping us to grasp some concrete reality or concrete
> objective, or method of activity. To clarify thought, to discredit the
> intrinsically meaningless words, and to define the use of others by
> precise analysis--to do this, strange though it may appear, might
> be a way of saving human lives. (SWA 242)
Further, Weil desired a kind of equilibrium between forces instead of
an endless pursuit of an illusion of absolute stability and security.
Following Heraclitus, she saw struggle as a condition of life. What is
required of the thinking individual, in turn, is to distinguish
between worthwhile conflict, such as class struggle, and illusions of
prestige, which often serve as the foundation of war. In her middle
period, then, Weil maintained the contrariety between reality, limit,
and equilibrium on the one hand and imagination, limitlessness, and
collectivity on the other. In her 1937 "Note on Social
Democracy", she defined politics as follows:
> The material of the political art is the double perspective, ever
> shifting between the real conditions of social equilibrium and the
> movements of collective imagination. Collective imagination, whether
> of mass meetings or of meetings in evening dress, is never correctly
> related to the really decisive factors of a given social situation; it
> is always beside the point, or ahead of it, or behind it. (SE 152)
In her late period Weil provided an explication of the all-pervasive
and indiscriminate concept of "force" in the essay
"The *Iliad* or the Poem of Force" (1940 in SWA
182-215). It is important to note Weil's locus of
enunciation for this concept: temporally, after the fall of France;
spatially, from a position of exile marked by antisemitic
marginalization. For this reason the essay appeared in the December 1940
and January 1941 *Cahiers du Sud* under the anagrammatic
pseudonym Émile Novis, which Weil adopted to avoid antisemitic
confrontation and censorship. This essay was not only the most widely
read of Weil's pieces during her lifetime, but it was also her
first essay to appear in English, translated by Mary McCarthy and
published in Dwight Macdonald's magazine *Politics* in
1945 (November, pp. 321-330).
In her essay on the *Iliad*--a text Roberto Esposito calls
"a phenomenology of force" (Esposito 1996 [2017:
46])--Weil further develops an understanding of force initially
presented in her earlier, unfinished essay "Reflections on
Barbarism" (1939): "I believe that the concept of force
must be made central in any attempt to think clearly about human
relations" (quoted in Pétrement 1973 [1976: 361]). The
protagonist of the *Iliad*, Weil writes in an original reading,
is not Achilles or Hector, but force itself. Like her concept of
"power" in her early writings, "force" reifies
and dehumanizes no matter if one wields or undergoes it. Further,
force includes not only coercion, but also prestige, which is to say
that it has a social element. Two important implications follow.
First, on the level of the individual, each "personality"
(*la personne*) is informed by social values and thereby
features the operations of force through accidental characteristics,
such as the name of the family into which one is born or the embodied
features that are considered physically attractive at a certain place,
in a certain society, at a certain time. Second, on the level of the
collectivity, force can destroy not only bodies, but also values and
cultures, as is the case, Weil is at pains to point out, in French
colonialism. In her mid-to-late writings, then, she saw neither
Marx's notion of class nor Hegel's self-development of
*Geist,* but force itself, as the key to history. She presents
this concept as "the force that kills" and as a specific
kind of violence "that does not kill just yet", though
> [i]t will surely kill, it will possibly kill, or perhaps it merely
> hangs, poised and ready, over the head of the creature it *can*
> kill, at any moment, which is to say at every moment. (SWA
> 184-185)
Weil's concept of force, then, is also a development from Hobbes
and Hegel: it names that which renders the individual a slave.
It was in London (1942-1943), during her work for the Free
French, that Weil articulated her most robust late social-political
philosophy. Her concepts of "labor" and
"justice" thickened as she moved further toward
Christianity and--against her early emphasis on the
individual--toward the social. Weil's *pièces
d'occasion* from this time period include "Draft for a
Statement of Human Obligations" (1943), written in response to
de Gaulle's State Reform Commission in its drafting of a new
Declaration of the Rights of Man and Citizen, as well as "Note
on the General Suppression of Political Parties" (1943), written
because the Free French was considering recognizing political parties.
In this piece Weil argues for the complete abolition of political
parties. Drawing on Rousseau's concept of the general will, Weil
contends that political parties subdue the independent, individual
wills of which the general will is derivative and on which democracy
depends. Most important from this period was her major work *The
Need for Roots* (*L'enracinement*), which Weil called
her second "magnum opus" (SL 186), and which Albert Camus
published posthumously in 1949, with Gallimard, as the first of 11
volumes of Weil's work that he would promote.
Weil wrote additional essays in London, the most conceptually
important of which was "*La Personne et le
sacre*" (1942-1943, translated into English as
"Human Personality"), in which she critiques
"rights" as reliant on force and poses as counter-terms
"obligation" and "justice". She distinguishes
between two conceptions of justice: natural (hence social and
contingent) and supernatural (hence impersonal and eternal). *The
Need for Roots* (1943 NR) adds "needs of the soul" as
another counter-balance to rights. Overall, Weil presents not a
law-based or rights-based, but a compassion-based morality, involving
obligations to another that are discernible through attention,
centered on and evolving toward a supernatural justice that is not of
the world, but that can be in it. As a departure from Kant, and
through Plato, whom Weil "came to feel... was a
mystic" ("Spiritual Autobiography", WFG 28), the
basis of her sense of (supernatural) justice was not human
rationality, but a desire for the Good, which she believed all humans
share, even if at times they forget or deny this. Importantly, given
her critique of French colonialism, and despite her claim that
obligations to another must be indiscriminate, i.e., universal, she
did not want to universalize law in a Kantian fashion. Rather, her aim
was for cultures to continue their own traditions, for the goal of
rootedness (*l'enracinement*) is not to change cultural
values *per se*, but more precisely, to change how individuals
in those cultures read and orient themselves toward those values.
Weil's concept of "roots" is crucial to her late
political thought. With connotations of both vitality and
vulnerability, "roots" conceptualizes human society as
dynamic and living while attesting to the necessity of stability and
security if growth and flourishing are to occur on the level of the
individual. Beyond the organic metaphor on the level of the natural,
her concept of roots serves as a kind of bridge between the reality of
society and the ideal of supernatural justice. Roots do this by
manifesting human subjection to material and historical conditions,
including the need to participate in the life of a community, to feel
a sense of connection to a place, and to maintain temporal links,
e.g., to cultural history and to hopes for the future. In turn, a
rooted community allows for the development of the individual with a
view toward God or eternal values. As such, and *contra* her
early and middle maintenance of a critical distance from any
collectivity (or Great Beast, to use the Platonic metaphor she
frequently employs), in her late thought Weil sees roots as allowing
for a "new patriotism" based on compassion. The
establishing of roots (*l'enracinement*) enables multiple
relations to the world (e.g., on the level of the nation, the
organically developing community, the school) that at once nourish the
individual and the community.
In addition to war and colonization, in *The Need for Roots*
Weil points to money and to contemporary education as
self-perpetuating forces that uproot human life. The longest section
of the text describes modern uprootedness
(*déracinement*), occurring when the imagined modern
nation and money are the only binding forces in society, and she
characterizes this condition as a threat to the human soul. Modern
education is corrupted both by capitalism--such that it is
nothing except "a machine for producing diplomas, in other
words, jobs" (NR 118)--and by a Roman inheritance for
cultivating prestige with respect to the nation: "It is this
idolatry of self which they have bequeathed to us in the form of
patriotism" (NR 137). As an alternative to modern uprootedness,
Weil outlines a civilization based not on force, which turns a person
into a thing, but on free labor, which in its engagement with and
consent to necessary forces at play in the world, including time and
death, allows for direct contact with reality. Moreover, Weil's
writing in *The Need for Roots* is refracted through her
religious experience. In this later period, then, she no longer
conceptualizes labor in the mechanical terms of a "pivot"
as she did in her early writings; it is now the "spiritual
core" of "a well-ordered social life" (NR 295). (In
her late social-political philosophy, in fact, a spiritual revolution
is more important than an economic one.) *Contra* the Greeks,
who devalued physical work, her conception of labor serves to mediate
between the natural and the supernatural. In her late writings, labor
is fully inflected by her experiences with Christianity. As such,
labor's consent to and, moreover, working through natural forces
(e.g., gravity) is in fact consent to God, who created the natural
world. Labor's kenotic activity, as energy is expended daily, is
a kind of *imitatio Christi*.
Her social-political writings in London are markedly different from
her early writings as an anarchist informed principally by Descartes,
Marx, and Kant. While those influences remain, her later writings must
be read through the lens of her Christian Platonism. She suggests that
we must draw our spiritual life from our social environment. That is,
while spirituality is individual *vis-à-vis* God, this
spirituality occurs within a social context, namely, the collectivity,
and principally, the nation. This is a reversal from the critical
distance she had maintained from the collectivity, especially in her
early emphasis on the individual's methodical thinking.
## 3. Epistemology
Throughout her life, Weil argued that knowledge in and of the world
demanded rigorous, balanced thinking, even if that difficulty and
measurement led the thinker to near-impossible tasks. For her these
tasks of thinking included, for instance, her attempt to synthesize
perspectives of intricate Catholic doctrine on the threshold of the
Church with wisdom from various traditions such as ancient Greek
philosophy and tragedies, Hinduism, Buddhism, and Taoism. Following
Aeschylus, she believed knowledge was gained through suffering. Shaped
by her social-political and religious thought, Weil's
epistemology would change over time, especially in light of her
mystical experiences.
In her dissertation, Weil attempts to think with Descartes in order to
find foundational knowledge. Like Descartes, she argues for the
existence of self, God, and world. Her *cogito*, however, was a
decisive break from his: "I can, therefore I am" (*Je
puis, donc je suis*) (1929-1930 "Science and
Perception in Descartes", FW 59). The self has the power of
freedom, Weil argues, but something else--the omnipotent
God--makes the self realize that it is not all-powerful.
Self-knowledge, then, is capability always qualified by the
acknowledgment that one is not God. She maintained a kind of Cartesian
epistemology during her time as a teacher, early in her academic life.
In *Lectures on Philosophy* (1978 LP), a collection of lecture
notes taken by one of her students during the academic year
1933-1934 at a girls' *lycée* in Roanne, we
find that Weil is initially critical of sensations as grounds for
knowledge, and thereby critical of empiricist epistemology (LP
43-47).
From her early writings onward she was to problematize the imagination
as the "*folle imagination*", a barrier between
mind and reality, meaning that the human knower is kept from
things-in-themselves. Weil's epistemology, then, informed by her
initial studies of Descartes, would take on inflections of Kant and
Plato while positioning her against Aristotle. She was critical of any
sensation that universalizes one's reading of the world, and she
saw the imagination as thus extending the self, because it could not
help but filter phenomena through its own categories, wishes, and
desires, thereby reading the world on its own terms. Relatedly, Weil
would develop an intersubjective epistemology. Knowing the truth
requires not extending one's own limited perspective, but
suspending or abandoning it such that reality--including the
reality of the existence of others--could appear on its own
terms. This suspension involves a practice of epistemic humility and
openness to all ideas; intelligence for Weil demands the qualified use
of language, acknowledgement of degrees, proportions, contingencies,
and relations, as well as an ability to call the self into question.
These epistemic practices are part of a broader recognition that the
individual knower is limited.
The progression of Weil's epistemology can be seen first in her
conceptualization of contradiction. By *Lectures on
Philosophy*, she affirmed a sense of contradiction beyond the
logical conjunction "*a* is *b*" and
"*a* is not-*b*". Indeed, she argues that
contradiction can be a generative obstacle in that it requires the
mind to expand its thinking in order to transcend the obstacle.
Drawing on the mathematics of Eudoxus, she elaborates on this notion
to claim that incommensurables can be reconciled when set on a kind of
"higher plane". This is not, however, a type of Hegelian
synthesis that can be intellectually apprehended. Instead, the
contemplation of contradictions can lead the knower to a higher
contemplation of truth-as-mystery. Thus, in her late epistemology Weil
presents this concept of "mystery" as a certain kind of
contradiction in which incommensurables appear linked in an
incomprehensible unity.
Mystery, as a conceptualization of contradiction, carries theological,
or at least supernatural, implications. For instance, if contradiction
is understood through formal logic, then the existence of affliction
would seemingly prove the nonexistence of an omnipotent, wholly
benevolent God; however, contradiction, understood as mystery, can
itself serve as a mediation, allowing for the coexistence of
affliction and God, seen most prominently on the Cross. Indeed, Christ
is the religious solution to Weil's principal contradiction,
that between the necessary and the good. Moreover, through incarnation
that opens onto universality, Christ also manifests and solves the
contradiction of the individual and the collectivity. Thus, in a
modification of the Pythagorean idea of harmony, she claims that
Christ allows for "the just balance of contraries"
("Spiritual Autobiography", WFG 33). More broadly, in her
1941 "The Pythagorean Doctrine", she argues that
mathematics is a bridge between the natural and the eternal (or between
humans and God). That is, the Pythagoreans held an intellectual
solution to apparent natural contradictions. Inspired by Pythagoras,
she claimed that the very study of mathematics can be a means of
purification in light of the principles of proportion and the
necessary balancing of contraries--especially in geometry. She
saw in the Pythagorean legacy and spirit a link between their
mathematical insights and their distinctly religious project to
penetrate the mysteries of the cosmos.
As opposed to the suppression or dissolution of contradictions, as in
systematic philosophy, in Weil's value-centered philosophy
contradictions are to be presented honestly and tested on different
levels; for her, they are "the criterion of the real" and
correspond with the orientation of detachment (GG 98). For example,
Christ's imperative to "love your enemies" contains
a contradiction in value: love those who are detestable and who
threaten the vulnerability of loving. For Weil, submitting to this
union of contraries loosens one's attachments to particular,
ego-driven perspectives and enables a "well-developed
intellectual pluralism" (Springsted 2010: 97). She writes,
"An attachment to a particular thing can only be destroyed by an
attachment which is incompatible with it" (GG 101). With these
philosophical moves in mind, Robert Chenavier argues that she has not
a philosophy of perception, a phenomenology, but rather, borrowing
Gaston Bachelard's phrase, a "dynamology of
contradiction" (Chenavier 2009 [2012: 25]).
Overall, Weil's presentation of contradiction is more
Pythagorean or Platonic than it is Marxian: it takes up contradiction
not through resolution on the level of things, but through dialectics
on the level of thought, where mystery is the beginning and end-point
of thought (Springsted 2010: 97). In her unfinished essay "Is
There a Marxist Doctrine?", from her time in London, Weil
writes,
> Contradiction in matter is imaged by the clash of forces coming from
> different directions. Marx purely and simply attributed to social
> matter this movement towards the good through contradictions, which
> Plato described as being that of the thinking creature drawn upwards
> by the supernatural operation of grace. (OL 180)
She follows the Greek usage of "dialectics" to consider
"the virtue of contradiction as support for the soul drawn
upwards by grace" (or the good); Marx errs, she thought, in
coupling such a movement with "materialism" (OL 181).
A second central epistemological concept for Weil is
"reading" (*lecture*). Reading is a kind
of interpretation of what is presented to knowers by both their
physical sensations and their social conditions; therefore,
reading--as the reception and attribution of certain meanings in
the world--is always mediated. In turn, readings are mediated
through other readings, since our perception of meaning is undoubtedly
involved in and affected by an intersubjective web of interpretations.
Weil explains this through the metaphor, borrowed from Descartes, of a
blind man's stick. We can read a situation through attention to
another in order to expand our awareness and sensitivity, just as the
blind man enlarges his sensibility through the use of his stick. But
our readings of the world can also become more narrow and simplistic,
as for instance, when a context of violence and force tempts us to see
everyone we encounter as a potential threat. Moreover, readings are
not free from power dynamics and can become projects of imposition and
intervention; here her epistemology connects to her social-political
philosophy, specifically to her concept of force.
> We read, but also we are read by, others. Interferences in these
> readings. Forcing someone to read himself as we read him (slavery).
> Forcing others to read us as we read ourselves (conquest). (NB 43)
She connects reading to war and imagination: "War is a way of
imposing another reading of sensations, a pressure upon the
imagination of others" (NB 24). In her 1941 "Essay on the
Concept of Reading" (LPW 21-27) Weil elaborates,
"War, politics, eloquence, art, teaching, all action on others
essentially consists in changing what they read" (LPW 26). In
the same essay she develops "reading" in relation to the
aforementioned epistemological concepts of appearance, the empirical
world, and contradiction:
> [A]t each instant of our life we are gripped from the outside, as it
> were, by meanings that we ourselves read in appearances. That is why
> we can argue endlessly about the reality of the external world, since
> what we call the world are the meanings that we read; they are not
> real. But they seize us as if they were external; that is real. Why
> should we try to resolve this contradiction when the more important
> task of thought in this world is to define and contemplate insoluble
> contradictions, which, as Plato said, draw us upwards? (LPW 22)
It is important to note that for Weil, we are not simply passive in
our readings. That is, we can learn to change our readings of the
world or of others. An elevated transformation in reading, however,
demands an apprenticeship in loving God through the things of this
world--a kind of attentiveness that will also entail certain
bodily involvements, labors, postures, and experiences. Particular
readings result from particular ways of living. Ideally for her, we
would read the natural as illuminated by the supernatural. This
conceptualization of reading involves recognition on hierarchical
levels, as she explains in her notebooks:
> To read necessity behind sensation, to read order behind necessity, to
> read God behind order. We must love all facts, not for their
> consequences, but because in each fact God is there present. But that
> is tautological. To love all facts is nothing else than to read God in
> them. (NB 267)
Thus the world is known as a kind of text featuring several
significations on several stages, levels, or domains.
Weil's epistemology grounds her critique of modern science. In
*The Need for Roots* she advocates for a science conducted
"according to methods of mathematical precision, and at the same
time maintained in close relationship with religious faith" (NR
288). Through contemplation of the natural world via this kind of
science, the world could be read on multiple levels. The knower,
reading thus, would understand that the order of the world is the same
as a unity, but different on its myriad levels:
> with respect to God [it] is eternal Wisdom; with respect to the
> universe, perfect obedience; with respect to our love, beauty; with
> respect to our intelligence, balance of necessary relations; with
> respect to our flesh, brute force. (NR 288-289)
As in her social-political thought, in her epistemology Weil is a kind
of anti-modern. She sees modern science and epistemology as a project
of self-expansion that forgets limit and thinks the world should be
subject to human power and autonomy. Labor (especially physical labor,
such as farming), then, also assumes an epistemological role for her.
By heteronomously subjecting the individual to necessity on a daily
basis, it at once contradicts self-aggrandizement and allows for a
more balanced reading: the intelligence qualifies itself as it reads
necessary relations simultaneously on multiple levels. This reading on
different levels, and inflected through faith, amounts to a kind of
non-reading, in that it is detached, impersonal and impartial. Through
these predicates "reading" is connected to her social
thought, aesthetics, and religious philosophy. That is, in her later
thought Weil's epistemology relies on a time-out-of-mind
metaphysics for justification. Analogous to aesthetic taste, spiritual
discernment--God-given and graceful--allows the abdicated
self to read from a universal perspective at its most developed stage.
Thus, she argued, one can love equally and indiscriminately, just as
the sun shines or the rain falls without preference.
## 4. Ethics
Weil's central ethical concept is "attention"
(*l'attention*), which, though thematically and
practically present in her early writings, reached its robust
theoretical expression while she was in Marseilles in 1942. Attention
is a particular kind of ethical "turn" in her
conceptualization. Fundamentally, it is less a moral position or
specific practice and more an orientation that nevertheless requires
an arduous apprenticeship leading to a capacity of discernment on
multiple levels. Attention includes discerning what someone is going
through in her or his suffering, the particular protest made by
someone harmed, the social conditions that engender a climate for
suffering, and the fact that one is, by chance (*hasard*) at a
different moment, equally a subject of affliction.
Attention is directed not by will but by a particular kind of desire
without an object. It is not a "muscular effort" but a
"negative effort" (WFG 61), involving release of egoistic
projects and desires and a growing receptivity of the mind. For Weil,
as a Christian Platonist, the desire motivating attention is oriented
toward the mysterious good that "draws God down" (WFG 61).
In her essay "Reflections on the Right Use of School Studies
with a View to the Love of God" (1942 in WFG 57-65), Weil
takes prayer, defined as "the orientation of all the attention
of which the soul is capable toward God", as her point of
departure (WFG 57). She then describes a kind of vigilance in her
definition of attention:
> Attention consists of suspending our thought, leaving it detached
> [*disponible*], empty [*vide*], and ready to be
> penetrated by the object; it means holding in our minds, within the
> reach of this thought, but on the lower level and not in contact with
> it, the diverse knowledge we have acquired which we are forced to make
> use of. ... Above all our thought should be empty
> [*vide*], waiting [*en attente*], not seeking anything
> [*ne rien chercher*], but ready to receive in its naked truth
> the object that is to penetrate it. (WFG 62)
The French makes more clear the connection between attention
(*l'attention*) and waiting (*attente*). For Weil
the problem with searching, instead of waiting *en hupomene*
(in patient endurance),
is precisely that one is eager to fill the void characterizing
*attente*. As a result, one settles too quickly on some-thing:
a counterfeit, falsity, idol. Because in searching or willing the
imagination fills the void (*le vide*), it is crucial that
attention be characterized by suspension and detachment. Indeed, the
void by definition is empty (*vide*)--of idols, futural
self-projections, consolations that compensate un-thinking, and
attachments of collective and personal prestige. As such, its
acceptance marks individual fragility and destructibility, that is,
mortality. But this acceptance of death is the condition for the
possibility of the reception of grace. (As explained below in regard
to her religious philosophy, Weil's concept for the disposition
characterized by these features of attention, with obvious theological
resonances, is "decreation".)
In attention one renounces one's ego in order to receive the
world without the interference of one's limited and consumptive
perspective. This posture of self-emptying, a stripping away of the
"I" (*dépouillement*)--ultimately for
Weil an *imitatio Christi* in its kenosis--allows for an
impersonal but intersubjective ethics. Indeed, if the primary
orientation of attention is toward a mysterious and unknown God (often
experienced as a desire for the Good), the secondary disposition is
toward another person or persons, especially toward those going
through affliction.
> The soul empties itself [*se vide*] of all its own contents in
> order to receive into itself the being it is looking at, just as he
> is, in all his truth. Only he who is capable of attention can do this.
> (WFG 65)
Weil recognizes and problematizes the fact that the autonomous self
naturally imposes itself in its projects, as opposed to disposing
itself to the other; for this reason, attention is rare but is
required of any ethical disposition. The exemplary story of attention
for Weil is the parable of the Good Samaritan, in which, on her
reading, compassion is exchanged when one individual "turns his
attention toward" another individual, anonymous and afflicted
(WFG 90).
> The actions that follow are just the automatic effect of this moment
> of attention. The attention is creative. But at the moment when it is
> engaged it is a renunciation. This is true, at least, if it is pure.
> The man accepts to be diminished by concentrating on an expenditure of
> energy, which will not extend his own power but will only give
> existence to a being other than himself, who will exist independently
> of him. (1942 "Forms of the Implicit Love of God" in WFG
> 83-142, 90)
As such, attention not only gives human recognition and therefore
meaningful existence to another, but it also allows the individual
engaged in renunciation to take up a moral stance in response to her
or his desire for good.
It is important to distinguish Weil's ethics of attention from
canonical conceptualizations of ethics. Attention is not motivated by
a duty (although Weil thinks we are obligated to respond to
others' needs, whether of soul or body [see "Draft for a
Statement of Human Obligations" in SWA, esp. 224-225]) or
assessed by its consequences. In addition, through its sense of
*phronesis* (practical wisdom), which Weil assumed from
Aristotle through Marx, attention is arguably closer to virtue ethics
than it is to deontology or consequentialism. However, it is separated
from the tradition of virtue ethics in important ways: it often
emerges more spontaneously than virtue, which, for Aristotle, is
cultivated through habituation as it develops into a *hexis*;
Weil's emphasis on a "negative effort" suggests an
active-passive orientation that militates against Aristotle's
emphasis on activity (it is more a "turning" than a
"doing", more orientation than achievement); it is
*excessive* in its generosity, as opposed to being a mean,
e.g., liberality; its supernatural inspiration contrasts with
Aristotle's naturalism; it does not imply a teleology of
realizing one's own virtuous projects--in fact, it is a
suspension of one's own projects; finally, for Weil, Aristotle
lacks a sense of the impersonal good toward which attention is
oriented (and in this respect she is, again, inspired by Plato).
Weil treats the connections among attention, void, and love, relying
on her supernatural (Christian Platonic) metaphysics, in "Forms
of the Implicit Love of God".
> To empty ourselves [*Se vider*] of our false divinity, to deny
> ourselves, to give up being the center of the world in imagination, to
> discern that all points in the world are equally centers and that the
> true center is outside the world, this is to consent to the rule of
> mechanical necessity in matter and of free choice at the center of
> each soul. Such consent is love. The face of this love, which is
> turned toward thinking persons, is the love of our neighbor. (WFG
> 100)
Attention can be seen as love, for just as attention consents to the
existence of another, love requires the recognition of a reality
outside of the self, and thus de-centers the self and its
particularity. *Contra* the colloquial sense of love, it is
because we do not love personally--because "it is God in us who
loves them [*les malheureux*]" (WFG 93-94)--that
our love for others is "quite impersonal" and thereby
universal (WFG 130). Weil allows, however, "one legitimate
exception to the duty of loving universally", namely, friendship
("Last Thoughts", WFG 51). Friendship is "a personal
and human love which is pure and which enshrines an intimation and a
reflection of divine love" (WFG 131), a "supernatural
harmony, a union of opposites" (WFG 126). The opposites that
form the miraculous harmony are necessity/subordination (i.e., drawing
from Thucydides' Melian dialogue, she sees it as an
impossibility for one to want to preserve autonomy in both oneself and
another; in the world the stronger exerts force through will) and
liberty/equality (which is maintained through the desire of each
friend to preserve the consent of oneself and of the other, a consent
to be "two and not one" [WFG 135]). In other words, in
friendship a particular, self-founded reading of the other is not
forced. Distance is maintained, and in this way Weil's concept
of friendship advances her previous critiques of the *ethos* of
capitalism, bureaucracy, and colonialism--i.e., free consent of
all parties is the essential ingredient of all human relations that
are not degraded or abusive. Hence friendship is a model for ethics
more generally--even, *contra* her claim, universally.
> Friendship has something universal about it. It consists of loving a
> human being as we should like to be able to love each soul in
> particular of all those who go to make up the human race. (WFG
> 135)
Weil's ethics of attention informs her later social-political
philosophy and epistemology. In relation to her social-political
thought, attention suspends the centrality of the self to allow for
supernatural justice, which involves simultaneously turning attention
to God and to affliction. Justice and the love of the neighbor are not
distinct, for her.
> Only the absolute identification of justice and love makes the
> coexistence possible of compassion and gratitude on the one hand, and
> on the other, of respect for the dignity of affliction
> [*malheur*] in the afflicted--a respect felt by the
> sufferer himself [*les malheureux par lui-même*] and the
> others. (WFG 85)
Additionally, attention, in its consent to the autonomy of the other,
is an antidote to force. Important to Weil's epistemology,
attention militates against readings that are based on imagination,
unexamined perceptions, or functions of the collectivity (e.g.,
prestige). Attention here manifests as independent, detached thought.
In its religious valence involving obedience and consent, attention
also bears on Weil's epistemology in an additional way: it
suggests that knowing the reality of the world is less an individual
achievement or attainment of mastery and more a gift of grace. That
is, attention is openness to what cannot be predicted and to what
often takes us by surprise. In this way, attention resists the natural
tendency of humans to seek control and dominance over others. At stake
in this ethical mode, then, is the prevention of injustices that
result from projects of self-expansion, including the French
colonialism Weil criticized in her time.
## 5. Metaphysical and Religious Philosophy
Although the metaphysics of Weil's later thought was both
Christian and Platonic, and therefore graceful and supernatural, her
turn to God occurred not despite but, rather, because of her attention
to reality and contact with the world. It is not the case, then, that
her spiritual turn and "theological commitment"
(Springsted 2015: 1-2) severed her contact with her early
materialism, solidarities, or Marxian considerations; rather, her
spiritual turn occurred within this context, which would ground the
religious philosophy she would subsequently articulate.
In her late thought Weil presents an original creation theology. God,
as purely good, infinite, and eternal, withdrew (or reduced Godself)
so that something else (something less than fully good, finite, and
spatio-temporally determinate) could exist, namely, the universe.
Implicit in this outside-of-God universe is a contingency of forces.
She calls this principle of contingency, the "web of
determinations" (McCullough 2014: 124) contrary to God,
"necessity". Necessity is "the screen"
(*l'écran*) placed between God and creatures. Here
her creation metaphysics echoes Plato's distinction between the
necessary and the good in the *Timaeus*. Her Christian
inflection translates this to the "supreme contradiction"
between creature and creator; "it is Christ who represents the
unity of these contradictories" (NB 386).
Weil is clear that God together with the world is less than God alone, yet that
only heightens the meaning of God's abdication. That is, out of
love (for what would be the world and the creatures therein), God
decided to be lesser. Because existence--God's very denial
of Godself--is itself a mark of God's love, providence is
therefore not found in particular interventions of God, but understood
by recognizing the universe in all its contingency as the sum of
God's intentions. In this theology rests an implication for
creatures. If humans are to imitate God, then they must also renounce
their autonomy (including imagined "centeredness" in the
universe) and power out of love for God and therefore the world; she
calls this "decreation"
(*décréation*), which she describes paradoxically
by "passive activity" (WFG 126) and, drawing on her
readings of the *Bhagavad Gita*, "non-active
action" (NB 124).
Weil articulates her religious philosophy through a series of
distinctions--of oppositions or contradictions. It is important
to read the distinctions she makes not as positing dualisms, but as
suggesting contraries that are, unsynthesized, themselves mediations
through which the soul is drawn upward. Her concept of
"intermediaries"--or *metaxu*,
the Greek term she employs in
her notebooks--becomes explicit beginning in 1939. Through
*metaxu* God is indirectly present in the world--for
example, in beauty, cultural traditions, law, and labor--all of
which place us into contact with reality. In terms of her
periodization, her concept of mediation moved from relations of mind
and matter on the level of the natural (found in the early Weil) to a
mediation relating the natural intelligence to attention and love on
the level of the supernatural (at work in the late Weil), thus
encompassing her early view through a universal perspective.
Given that reality is itself *metaxu*, Weil's late
concept of mediation is more universal than the aforementioned
specific examples suggest. For her the real (*le réel*)
itself is an obstacle that represents contradiction, an obstacle felt,
say, in a difficult idea, in the presence of another, or in physical
labor; thus thought comes into contact with necessity and must
transform contradiction into correlation or mysterious and crucifying
relation, resulting in spiritual edification. Her explanation of this
mediation is based on her idiosyncratic cosmology, especially its
paradoxical claim that what is often painful reality, as distance from
God, is also, as intermediate, connection to God. She illustrates this
claim with the following metaphor:
> Two prisoners whose cells adjoin communicate with each other by
> knocking on the wall. The wall is the thing which separates them but
> it is also their means of communication. It is the same with us and
> God. Every separation is a link. (GG 145)
When developing her concept of the real, Weil is especially interested
in distinguishing it from the imaginary. The
imagination--problematized on epistemological grounds in her
early thought--is criticized once again in her religious
philosophy for its insidious tendency to pose false consolations that
at once invite idolatry or self-satisfaction, both of which obviate
real contemplation. This is why decreation, in which the individual
withdraws her or his "I" and personal perspective so as to
allow the real and others to give themselves, is crucial not only
spiritually, but also epistemologically and ethically. It is,
additionally and more radically, why Weil suggests that atheism can be
a kind of purification insofar as it negates religious consolation
that fills the void. For those for whom religion amounts to an
imagined "God who smiles on [them]" (GG 9), atheism
represents a necessary detachment. Crucially, for her, love is pure
not in the name of a personal God or its particular image, but only
when it is anonymous and universal.
The real leads us back to the aforementioned concept of
"necessity", for reality--essentially determinate,
limited, contingent, and conditional--is itself a "network
of necessity" (Veto 1971 [1994: 90]), such that necessity
is a reflection of reality. Moreover, like "power" in her
early period and "force" in her middle period,
Weil's concept of necessity includes not only the physical
forces of the created world, but also the social forces of human life.
Through necessity a sense of slavery remains in her thought, for
humans are ineluctably subject to necessity. In this way, enslavement
to forces outside our control is essentially woven into the human
condition.
Time, contrary to God as eternal, along with space and matter, is
first of all the most basic form of necessity. In its constant
reminder of distance from God, and in the experiences of enduring and
waiting, time is also painful. Weil's Christian Platonism comes
to light in the two most poignant metaphors she uses to refer to time,
namely, the (Platonic) Cave and the (Christian) Cross. In both cases,
time is the weight or the pull of necessity, through which the soul,
in any effort of the self, feels vulnerable, contingent, and
unavoidably subject to necessity's mediation here below
(*ici-bas*). Time, then, is both the Cave where the self
pursues its illusory goals of expansion into the future and the Cross
where necessity, a sign of God's love, pins the self, suffering
and mortal, to the world.
Importantly, by 1942 in New York, Weil's concept of time aligns
with Plato's against what she sees as Christian emphasis on
progress:
> Christianity was responsible for bringing this notion of progress,
> previously unknown, into the world; and this notion, become the bane
> of the modern world, has de-Christianized it. We must abandon the
> notion. We must get rid of our superstition of chronology in order to
> find Eternity. (LPr 29)
For Weil progress does not carry normative implications of
improvement, for the Good is eternal and non-existent (in that it is
neither spatial nor temporal); time must be consented to and suffered,
not fled. As for Christ on the cross, for creatures there is
redemption not from but through suffering. She thus presents a
supernatural use of, not a remedy for, affliction. One form of sin,
then, is an attempt to escape time, for only God is time-out-of-mind.
However, time, paradoxically, can also serve as *metaxu*. While
monotony is dreadful and fatiguing when it is in the form of a
pendulum's swinging or a factory's work, it is beautiful
as a reflection of eternity, in the form of a circle (which unites
being and becoming) or the sound of Gregorian chant. This beauty of
the world suggests, when read from the detached perspective, order
behind necessity, and God behind order.
A second form of necessity is "gravity"
(*pesanteur*), as distinct from supernatural
"grace". Gravity signifies the forces of the natural world
that subject all created beings physically, materially,
psychologically, and socially, and thus functions as a downward
"pull" on the attention, away from God and the afflicted.
"Grace", on the other hand, is a counter-balance, the
motivation by and goodness of God. Grace pierces the world of
necessity and serves to orient, harmonize, and balance, thus providing
a kind of "supernatural bread" for satiating the human
void. Grace, entering the empirical world, disposes one to be purified
by leaving the void open, waiting for a good that is real but that
could never "exist" in a material sense (i.e., as subject
to time, change, force, etc.). For Weil, natural/necessary gravity
(force) and supernatural/spiritual grace (justice) are the two
fundamental aspects of the created world, coming together most
prominently in the crucifixion. The shape of the cross itself reflects
this intersection of the horizontal (necessity) and the vertical
(grace).
Weil's concept of necessity bears on her late conceptualization
of the subject. She connects seeing oneself as central to the world to
seeing oneself as exempt from necessity. From this perspective, if
something were to befall oneself, then the world would cease to have
importance; therefore, the assertive and willful self concludes,
nothing could befall oneself. Affliction contradicts this perspective
and thus forcefully de-centers the self. Unlike her existentialist
contemporaries such as Sartre, Weil did not think human freedom
principally through agency; for her, humans are free not ontologically
as a presence-to-self but supernaturally through obedience and
consent. More than obedience, consent is the unity of necessity in
matter with freedom in creatures. A creature cannot *not* obey;
the only choice for the intelligent creature is to desire or not to
desire the good. To desire the good--and here her stoicism
through Marcus Aurelius and Spinoza emerges in her own appeal to
*amor fati*--is a disposition that implies a consent to
necessity and a love of the order of the world, both of which mean
accepting divine will. Consent, therefore, is a kind of reconciliation
in her dialectic between the necessary and the good. Consent does not
follow from effort or will; rather, it expresses an ontological
status, namely, decreation. In supernatural compassion one loves
*through* evil: through distance (space) and through monotony
(time), attending in a void and through the abdication of God. Thus,
in regard to contradiction and mediation, just as the intelligence
must grapple with mystery in Weil's epistemology, so too love
must be vulnerable and defenseless in the face of evil in her
metaphysical and religious philosophy.
## 6. Aesthetics
Weil's metaphysic sheds light on her aesthetic philosophy, which
is primarily Kantian and Platonic. For Weil beauty is a snare
(*à la* Homer) set by God, trapping the soul so that God
might enter it. Necessity presents itself not only in gravity, time,
and affliction, but also in beauty. The contact of impersonal good
with the faculty of sense is beauty; contact of evil with the faculty
of sense is ugliness and suffering--both are contact with the
real, necessary, and providential.
Weil is a realist in regard to aesthetics in that she uses the
language of being gripped or grasped by beauty, which weaves, as it
were, a link among mind, body, world, and universe. Woven through the
world, but beyond relying simply on the individual's mind or
senses, beauty, in this linking, lures and engenders awareness of
something outside of the self. In paradoxical terms, for Weil,
following Kant, the aesthetic experience can be characterized as a
disinterested interestedness; against Kant, her *telos* of such
experience is Platonic, namely, to orient the soul to the
contemplation of the good. Moreover, for Weil beauty is purposive only
because it is derivative of the good, i.e., the order of the world is
a function of God. In a revision of Kant's line that beauty is a
finality without an end (purposiveness without purpose), then, she
sees beauty not only as a kind of
feeling or presence of finality, but also as a gesture toward,
inclination to, or promise of supernatural, transcendental
goodness.
At the same time, Weil's concept of beauty is not only informed
by her Platonism and thus marked by eternity, but it is also inspired
by her experiences with Christ and hence features a kind of
incarnation: the ideal can become a reality in the world. As such,
beauty is a testament to and manifestation of the network of
inflexible necessity that is the natural world--a network that,
in some sense, the intelligence can grasp. It is in this way--not
as an ontological category but, rather, as a sensible
experience--that through beauty necessity becomes the object of
love (i.e., in Kantian terms, beauty is regulative, not constitutive).
Thus beauty is *metaxu* (an intermediary) attracting the soul
to God, and as *metaxu*, beauty serves as a "locus of
incommensurability" (Winch 1989: 173) between the fragile
contingency of time (change, becoming, death) and an eternal
reality.
Because beauty is the order of the world as necessity, strictly
speaking, "beauty" applies principally to the world as a
whole, and therefore consent to beauty must be total. Thus the love of
beauty functions as an "implicit love of God" [see WFG
99]. More specifically, on the level of the particular there are,
secondarily, types of beauty that nevertheless demonstrate balance,
order, proportion, and thus their divine provenance (e.g., those found
in nature, art, and science).
One's recognition of the beauty/order of the world has
implications, once again, for the subject. Because beauty is to be
contemplated at a distance and not consumed through the greedy will,
it trains the soul to be detached in the face of something
irreducible, and in this sense it is similar to affliction. Both
de-center the self and demand a posture of waiting (*attente*).
Contemplating beauty, then, means transcending the perspective of
one's own project. Because beauty, as external to self, is to be
consented to, it implies both that one's reality is limited and
that one does not want to change the object of her/his mode of
engagement. Furthermore, beauty has an element of the impersonal
coming into contact with a person. Real interaction with beauty is
decreative. True to this idea, Weil's aesthetic commitments are
reflected in her style: in her sharp prose she scrutinizes her own
thought while tending to exclude her own voice and avoid personal
references; thus she performs "the linguistic decreation of the
self" (Dargan 1999: 7).
## 7. Reception and Influence
Although the French post-structuralists who succeeded Weil did not
engage extensively with her thought, her concepts carry a legacy
through her contemporaries domestically and her heirs internationally
(see Rozelle-Stone 2017). In regard to her generation of French
thinkers, the influence of "attention" can be seen in the
writings of Maurice Blanchot; her Platonic sense of good, order, and
clarity was taken up--and rejected--by both Georges Bataille
and Emmanuel Levinas. Following Weil's generation, in his
younger years Jacques Derrida took interest in her mysticism and,
specifically, her purifying atheism, only to leave her behind almost
entirely in his later references (Baring 2011). It is possible that
Weil's limited influence on post-structuralists derives not
only from the fact that she was not influenced by Nietzsche and
Heidegger to the extent that the four aforementioned French thinkers
were, but also, quite simply, from the fact that she did not survive
World War II and hence did not write thereafter. It is also important to note that
throughout her life, Weil's position as a woman philosopher
contributed to *ad hominem* attacks against her person, not her
thought; she was often perceived as "psychologically cold"
as opposed to being engaged in "an ethical project with
different assumptions" (Nelson 2017: 9).
Across Europe and more recently, Weil's "negative
politics"--that is, turning away from institutions and
ideology and toward religious reflection--, in conjunction with
Michel Foucault's concept of biopolitics, has been taken up by
the political philosophies of Giorgio Agamben and Roberto Esposito
(Ricciardi 2009). In taking up her work, Agamben (who wrote his
dissertation on Weil's political thought and critique of
personhood, ideas that went on to shape *Homo Sacer* [see Agamben
2017]) and
Esposito rely on Weil's concepts of decreation, impersonality,
and force. Beyond the Continental scene, her Christian
Platonism--especially her concepts of the good, justice, void,
and attention--influenced Iris Murdoch's emphasis on the
good, metaphysics, and morality, and was thereby part of a recent
revival in virtue ethics (Crisp & Slote 1997). In addition, many
have noted the "spiritual kinship" that is apparent between the
religious and ethical philosophies of Weil and Ludwig Wittgenstein.
For example, the view that belief in God is not a matter of evidence,
logic, or proof is shared by these thinkers (Von der Ruhr 2006). The
legacy of Weil's writings on affliction and beauty in relation
to justice is also felt in Elaine Scarry's writings on
aesthetics (Scarry 1999). T. S. Eliot, who wrote the introduction to
*The Need for Roots*, cites Weil as an inspiration of his
literature, as do W. H. Auden, Czeslaw Milosz, Seamus Heaney, Flannery
O'Connor, Susan Sontag, and Anne Carson.
The Anglo-American secondary literature on Weil has emphasized her
concept of supernatural justice, including the philosophical tensions
that inform her materialism and mysticism (Winch 1989; Dietz 1988;
Bell 1993, 1998; Rhees 2000). Additional considerations treat her
Christian Platonism (Springsted 1983; Doering & Springsted 2004).
Recent English-language scholarship on Weil has included texts on her
concept of force (Doering 2010), her radicalism (Rozelle-Stone &
Stone 2010), and the relationship in her thought between science and
divinity (Morgan 2005), between suffering and trauma (Nelson 2017),
and between decreation and ethics (Cha 2017). Furthermore, her
concepts have influenced recent contributions to questions of identity
(Cameron 2007), political theology (Lloyd 2011), animality (Pick
2011), and international relations (Kinsella 2021). |
economic-justice | ## 1. Economics and Ethics
The role of ethics in economic theorizing is still a debated issue. In
spite of the reluctance of many economists to view normative issues as
part and parcel of their discipline, normative economics now
represents an impressive body of literature. One may, however, wonder
whether normative economics should not also be considered a part of
political philosophy.
### 1.1 Positive vs. Normative Economics
In the first half of the twentieth century, most leading economists
(Pigou, Hicks, Kaldor, Samuelson, Arrow etc.) devoted a significant
part of their research effort to normative issues, notably the
definition of criteria for the evaluation of public policies. The
situation is very different nowadays. "Economists do not devote
a great deal of time to investigating the values on which their
analyses are based. Welfare economics is not a subject which every
present-day student of economics is expected to study", writes
Atkinson (2001, p. 195), who regrets "the strange disappearance
of welfare economics". Normative economics itself may be partly
guilty for this state of affairs, in view of its repeated failure to
provide conclusive results and its long-lasting focus on impossibility
theorems (see
SS 4.1).
But there has also been a persistent ambiguity about the status of
normative propositions in economics. The subject matter of economics
and its close relation to policy advice make it virtually impossible
to avoid mingling with value judgments. Nonetheless, the desire to
separate positive statements from normative statements has often been
transformed into the illusion that economics could be a science only
by shunning the latter. Robbins (1932) has been influential in this
positivist move, in spite of a late clarification (Robbins 1981) that
his intention was not to disparage normative issues, but only to
clarify the normative status of (useful and necessary) interpersonal
comparisons of welfare. It is worth emphasizing that many results in
normative economics are mathematical theorems with a primary
analytical function. Endowing them with a normative content may be
confusing, because they are most useful in clarifying ethical values
and do not imply by themselves that these values must be endorsed.
"It is a legitimate exercise of economic analysis to examine the
consequences of various value judgments, whether or not they are
shared by the theorist." (Samuelson 1947, p. 220) The role of
ethical judgments in economics has received recent and valuable
scrutiny in Sen (1987), Hausman and McPherson (2006) and Mongin
(2001b).
### 1.2 Normative Economics and Political Philosophy
There have been many mutual influences between normative economics and
political philosophy. In particular, Rawls' difference principle
(Rawls 1971) has been instrumental in making economic analysis of
redistributive policies pay some attention to the maximin criterion,
which puts absolute priority on the worst-off, and not only to
sum-utilitarianism. (It has taken more time for economists to realize
that Rawls' difference principle applies to primary goods, not
utilities.) Conversely, many concepts used by political philosophers
come from various branches of normative economics (see below).
There are, however, differences in focus and in methodology. Political
philosophy tends to focus on the general issue of social justice,
whereas normative economics also covers microeconomic issues of
resource allocation and the evaluation of public policies in an unjust
society (although there is now philosophical work on non-ideal
theory). Political philosophy focuses on arguments and basic
principles, whereas normative economics is more concerned with the
effective ranking of social states than with the arguments underlying
a given ranking. The difference is thin in this respect, since the
axiomatic analysis in normative economics may be interpreted as
performing not only a logical decomposition of a given ranking or
principle, but also a clarification of the underlying basic principles
or arguments. But consider for instance the "leveling-down
objection" (Parfit 1995), which states that egalitarianism is
wrong because it considers that there is *something* good in
achieving equality by leveling down (even when the egalitarian ranking
says that, *all things considered*, leveling down is bad). This
kind of argument has to do with the reasons underlying a social
judgment, not with the content of the all things considered judgment
itself. It is hard to imagine if and how the leveling-down objection
could be incorporated in the models of normative economics. A final
difference between normative economics and political philosophy,
indeed, lies in conceptual tools. Normative economics uses the formal
apparatus of economics, which gives powerful means to derive
non-intuitive conclusions from simple arguments, although it also
deprives the analyst of the possibility of exploring issues that are
hard to formalize.
There are now several general surveys of normative economics, some of
which do also cover the intersection with political philosophy: Arrow,
Sen and Suzumura (1997, 2002, 2011), Fleurbaey (1996), Hausman and
McPherson (2006, rev. and augmented in Hausman et al. 2016), Kolm
(1996), Moulin (1988, 1995, 1998), Roemer (1996), Young (1994).
## 2. Inequality and Poverty Measurement
Focusing traditionally on income inequality and poverty, this field
has been first built on the assumption that there is a given
unidimensional measure of individual advantage. The gist of the
analysis is then about the distribution of this particular notion of
advantage.
More recently, partly due to the emergence of data about living
conditions, there has been growing interest in the measurement of
inequality and poverty when individual situations are described by a
multidimensional list of attributes or deprivations. This has
generated the field of "multidimensional inequality and poverty
measurement".
So far most of this literature has remained disconnected from the
welfare economics literature in which interpersonal comparisons of
well-being in relation to individual preferences are a key issue. The
typical multidimensional indices do not refer to well-being or
preferences. But the connection is being made and the welfare
economics literature will eventually blend with the inequality and
poverty measurement literature.
Excellent surveys of the unidimensional part of the theory include:
Chakravarty (1990, 2009), Cowell (2000), Dutta (2002), Lambert (1989),
Sen and Foster (1997), Silber (1999). The multidimensional approach is
surveyed or discussed in Weymark (2006), Chakravarty (2009), Decancq
and Lugo (2013), Aaberge and Brandolini (2015), Alkire et al. (2015),
Duclos and Tiberti (2016). The link between inequality and poverty
measurement and welfare economics is discussed in Decancq et al.
(2015) and in several chapters of Adler and Fleurbaey (2016). For a
comprehensive handbook on economic inequality, see Atkinson and
Bourguignon (2000, 2015).
### 2.1 Indices of Inequality and Poverty
The study of inequality and poverty indices started from a
statistical, pragmatic perspective, with such indices as the Gini
index of inequality or the poverty head count. Recent research has
provided two valuable insights. First, it is possible to relate
inequality indices to social welfare functions, so as to give
inequality indices a more transparent ethical content. The idea is
that an inequality index should not simply measure dispersion in a
descriptive way, but would gain in relevance if it measured the harm
to social welfare done by inequality. There is a simple method to
derive an inequality index from a social welfare function, due to Kolm
(1969) and popularized by Atkinson (1970) and Sen (1973). Consider a
social welfare function which is defined on distributions of income
and is symmetrical (i.e., permuting the income of two individuals
leaves social welfare unchanged). For any given unequal distribution
of income, one may compute the egalitarian distribution of income
which would yield the same social welfare as the unequal distribution.
This is called the "equally-distributed equivalent" (or
"equal-equivalent") distribution. If the social welfare
function is averse to inequality, the total amount of income in the
equal-equivalent distribution is less than in the unequal
distribution. In other words, the social welfare function condones
some sacrifice of total income in order to reach equality. This drop
in income, measured in proportion of the initial total income, may
serve as a valuable index of inequality. This index may also be used
in a picturesque decomposition of social welfare. Indeed, an ordinally
equivalent measure of social welfare is then total income (or average
income - it does not matter when the population is fixed) times
one minus the inequality index.
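The Kolm-Atkinson construction just described can be sketched in a few lines of Python. This is an illustrative sketch, not a definitive implementation: it assumes the standard isoelastic family of symmetric social welfare functions, with `epsilon` (a name chosen here) as the inequality aversion parameter.

```python
import math

def equal_equivalent(incomes, epsilon):
    """Equally-distributed equivalent income: the uniform income level
    yielding the same social welfare as the given distribution, for an
    isoelastic symmetric social welfare function with inequality
    aversion epsilon."""
    n = len(incomes)
    if epsilon == 1.0:
        # limiting case: the geometric mean
        return math.exp(sum(math.log(y) for y in incomes) / n)
    return (sum(y ** (1 - epsilon) for y in incomes) / n) ** (1 / (1 - epsilon))

def inequality_index(incomes, epsilon):
    """The 'ethical' inequality index: the drop in total income the
    social welfare function condones to reach equality, as a share of
    the initial total."""
    mean = sum(incomes) / len(incomes)
    return 1 - equal_equivalent(incomes, epsilon) / mean
```

With no aversion to inequality (`epsilon = 0`) the index is zero, and it rises as `epsilon` grows. The decomposition mentioned above holds by construction: average income times one minus the index equals the equal-equivalent income.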
This method of construction of an index of inequality, often referred
to as the *ethical approach* to inequality measurement, is most
useful when the argument of the social welfare function, and the
object of the measurement of inequality, is the distribution of
individual well-being (which may or may not be measured by income).
Then the social welfare function is indeed symmetrical (by requirement
of impartiality) and its aversion to inequality reflects its
underlying ethical principles. In other contexts, the method is more
problematic. Consider the case when social welfare depends on
individual well-being, and individual well-being depends on income
with some individual variability due to differential needs. Then
income equality may no longer be a valuable goal, because the needy
individuals may need more income than others. Using this method to
construct an index of inequality of *well-being* is fine, but
using it to construct an index of inequality of *incomes* would
be strange, although it would immediately reveal that income
inequality is not always bad (when it compensates for unequal needs).
Now consider the case when social welfare is the utilitarian sum of
individual utilities, and all individuals have the same strictly
concave utility function (strict concavity means that it displays a
decreasing marginal utility). Then using this method to construct an
index of income inequality is amenable to a different interpretation.
The index then does not reflect a principled aversion to inequality in
the social welfare function, since the social welfare function has no
aversion to inequality of utilities. It only reflects the consequence
of an empirical fact, the degree of concavity of individual utility
functions. To call this the ethical approach, in this context, seems a
misnomer.
In the field of multidimensional inequality or poverty measurement, a
key divide has separated the measures that evaluate the distribution
in every dimension (such as income, health, asset deprivation...)
before they aggregate over the dimensions, from the measures that
first evaluate individual situations, all dimensions included, before
aggregating over individuals. The latter has been praised (Decancq and
Lugo 2013) for being closer to the standard individualistic approach
in welfare economics, and for making it possible to have measures that
are sensitive to the correlation between disadvantages (e.g., the
positive correlation between income and health), and therefore more
accurately record the multidimensional suffering of the worst off. An
interesting feature of such measures is that a positive correlation
among attributes of advantage or disadvantage worsens inequalities
only if the elasticity of substitution between attributes, in the
measure of individual multidimensional advantage, is greater than the
inequality aversion in the social index. To illustrate, consider two
canonical distributions, with two individuals and two attributes:
(Perfect correlation) Ann: (1,1) and Bob: (2,2)
(Anti-correlation) Ann: (1,2) and Bob: (2,1)
If the attributes are deemed perfectly substitutable (meaning that
only the sum, potentially weighted, of the attributes matters for the
assessment of individual advantage), and the inequality aversion is
zero (meaning that only the sum of individual indexes of advantage
matters), the two distributions are considered equally good. But they
also appear equally good if there is no possible substitution (only
the value of the worst attribute matters) and inequality aversion is
infinite (only the worst-off individual matters). In contrast, the
first distribution appears worse if the attributes are substitutable
and inequality aversion is strong (the first distribution is then
unequal, unlike the second one); whereas it appears better if there is
no possible substitution and inequality aversion is weak (only one
individual has a bad attribute in the first distribution, whereas both
have one in the second).
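A minimal sketch of this comparison, using the two polar cases of each parameter (perfect substitutability versus none, and zero versus infinite inequality aversion; the function names are illustrative only):

```python
def advantage(bundle, substitutable):
    # individual advantage: plain sum of attributes if they are perfect
    # substitutes, value of the worst attribute if no substitution is allowed
    return sum(bundle) if substitutable else min(bundle)

def social_value(profile, substitutable, infinitely_averse):
    # social index: worst-off individual under infinite inequality
    # aversion, sum of individual advantages under zero aversion
    values = [advantage(bundle, substitutable) for bundle in profile]
    return min(values) if infinitely_averse else sum(values)

perfect_corr = [(1, 1), (2, 2)]   # Ann, Bob
anti_corr    = [(1, 2), (2, 1)]
```

Running the four combinations reproduces the verdicts in the text: the two distributions tie when both parameters sit at the same extreme, the correlated distribution is worse under substitutability with strong inequality aversion, and better under no substitution with weak aversion.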
The second valuable contribution of recent research in this field is
the development of an alternative ethical approach through the
axiomatic study of the properties of indices. The main ethical axioms
deal with transfers. The Pigou-Dalton principle of transfers says that
inequality decreases (or social welfare increases) when an even
transfer is made from a richer to a poorer individual without
reversing their pairwise ranking (although this may alter their
ranking relative to other individuals). Since this condition is about
even transfers, it is quite weak and other axioms have been proposed
in order to strengthen the priority of the worst-off. The principle of
diminishing transfers (Kolm 1976) says that a Pigou-Dalton transfer
has a greater impact the lower it occurs in the distribution. The
principle of proportional transfers (Fleurbaey and Michel 2001) says
that an inefficient transfer in which what the donor gives and what
the beneficiary receives are proportional to their initial positions
increases social welfare. Similar transfer axioms have been adapted to
the measurement of poverty. For instance, Sen (1976) proposed the
condition saying that poverty increases when an even transfer is made
from someone who is below the poverty line to a richer individual
(below or above the line). The other axioms with which the axiomatic
analysis has been made usually have a less obvious ethical appeal, and
relate to decomposability of indices, scale invariance and the like.
Characterization results have been obtained, which identify classes of
indices satisfying particular lists of axioms. The two ethical
approaches may be combined, when one takes as an axiom the condition
that the index be derived from a social welfare function with
particular features.
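The Pigou-Dalton principle can be checked numerically against any candidate index. For instance, the Gini index (sketched here through its mean-absolute-difference formula; the example distributions are hypothetical) falls under a rank-preserving even transfer from a richer to a poorer person:

```python
def gini(incomes):
    # Gini index as half the mean absolute difference between all
    # ordered pairs of incomes, relative to the mean income
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

before = [10, 20, 30, 40]
after  = [10, 25, 25, 40]   # even transfer of 5 from the third person to the second
```

Here `gini(after) < gini(before)`, as the principle of transfers requires, and the index is zero for a perfectly equal distribution.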
### 2.2 The Dominance Approach
The multiplicity of indices, even when a restriction to special
sub-classes may be justified by axiomatic characterization, raises a
serious problem for applications. How can one make sure that a
distribution is more or less unequal, or has more or less poverty,
than another without checking an infinite number of indices? Although
this may look like a purely practical issue, it has given rise to a
broad range of deep results, relating the statistical concept of
stochastic dominance to general properties of social welfare functions
and to the satisfaction of transfer axioms by inequality and poverty
indices. This approach, in particular, justifies the widespread use of
Lorenz curves in the empirical studies of inequality. The Lorenz curve
depicts the percentage of the total amount of whatever is measured,
income, wealth or well-being, possessed by any given percentage of the
poorest among the population. For instance, according to the Census
Bureau, in 2006 the poorest 20%'s share of total income was
3.7%, the poorest 40%'s share was 13.1%, the poorest 60%'s
share was 28.1%, the poorest 80%'s share was 50.6%, while the
top 5%'s share was 22.2%. This indicates that the Lorenz curve
is approximately as in the following figure.
![graph of lorenz curve](lorenz2006.png)
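The Lorenz curve is simply the graph of cumulative income shares against cumulative population shares. The Census figures quoted above give the following points, which, as any Lorenz curve must, lie below the diagonal and rise by non-decreasing increments (a small self-checking sketch):

```python
# Cumulative shares built from the 2006 Census figures quoted in the text
population_share = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
income_share     = [0.0, 0.037, 0.131, 0.281, 0.506, 1.0]

# below the diagonal: the poorest p% never hold more than p% of income
assert all(s <= p for p, s in zip(population_share, income_share))

# convexity: successive quintile increments are non-decreasing
increments = [b - a for a, b in zip(income_share, income_share[1:])]
assert increments == sorted(increments)
```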
Considerable progress has been made in the development of dominance
techniques of the Lorenz type, with extensions to multidimensional
inequality and to poverty measurement (Aaberge and Brandolini
2015).
### 2.3 Equality, Priority, Sufficiency
Philosophical interest in the measurement of inequality has recently
risen (Temkin 1993). Most of this philosophical literature, however,
tends to focus on defining the right foundations for an aversion to
inequality. In particular, Parfit (1995) proposes to give priority to
the worse-off not because of their relative position compared to the
better-off, but because and to the extent that they are badly off.
This probably corresponds to defining social welfare by an additively
separable social welfare function, with diminishing marginal social
utility (a social welfare function is additively separable when it is
the sum of separate terms, each of which depends only on one
individual's well-being). Interestingly, if egalitarianism is
defined in opposition to this "priority view" by the
feature that it relies on judgments of relative positions, this means
that egalitarian values cannot be correctly represented by a separable
social welfare function. This seems to raise the ethical stakes
concerning properties of decomposability of indices or separability of
social welfare functions, which are usually considered in economics
merely as convenient conditions simplifying the functional forms
(although separability may also be justified by the subsidiarity
principle, according to which unconcerned individuals need not have a
say in a decision). The content and importance of the distinction
between egalitarianism and prioritarianism remains a matter of debate
(see, among many others, Tungodden 2003, and the contributions in
Holtug and Lippert-Rasmussen 2007). It is also interesting to notice
that philosophers are often at ease working with the notion of social
welfare (or social good, or inequality) as a numerical quantity with
cardinal meaning, whereas economists typically restrict their
interpretation of social welfare or inequality to a purely ordinal
ranking of social states. Beside egalitarian and prioritarian
positions one must also mention the "sufficiency view",
defended e.g. by Frankfurt (1987) who argues that priority should be
given only to those below a certain threshold. One may consider that
this view supports the idea that poverty indices might summarize
everything that is relevant about social welfare.
## 3. Welfare Economics
Welfare economics is the traditional generic label of normative
economics, but, in spite of substantial variations between authors, it
now tends to be associated with a particular subcontinent of this
domain, maybe as a result of the development of
"non-welfarist" approaches and of approaches with a
broader scope, such as the theory of social choice.
Surveys on welfare economics in its restricted definition can be found
in Graaff (1957), Boadway and Bruce (1984), Chipman and Moore (1978),
Samuelson (1981). Many topics of welfare economics are addressed in
the more recent handbooks by Arrow, Sen and Suzumura (2002, 2011),
Atkinson and Bourguignon (2000, 2015) and Adler and Fleurbaey (2016).
Accessible introductions are given by (book-length) Adler (2019) and
(article-length) Fleurbaey (2019).
### 3.1 Old and New Welfare Economics
The proponents of a "new" welfare economics (Hicks,
Kaldor, Scitovsky) have distanced themselves from their predecessors
(Marshall, Pigou, Lerner) by abandoning the idea of making social
welfare judgments on the basis of interpersonal comparisons of
utility. Their problem was then that, in the absence of any kind of
interpersonal comparisons, the only principle on which to ground their
judgments was the Pareto principle, according to which a situation is
a global improvement if it is an improvement for every member of the
concerned population (there are variants of this principle depending
on how individual improvement is defined, in terms of preferences or
some notion of well-being, and depending on whether it is a strict
improvement for all members or some of them stay put). Since most
changes due to public policy hurt some subgroups for the benefit of
others, the Pareto principle remains generally silent. The need for a
less restrictive criterion of evaluation has led Kaldor (1939) and
Hicks (1939) to propose an extension of the Pareto principle through
compensation tests. According to Kaldor's criterion, a situation
is a global improvement if ex post the gainers could compensate the
losers. For Hicks' criterion, the condition is that ex ante the
losers could not compensate the gainers (a change from situation A to
situation B is approved by Hicks' criterion if the change from B
to A is not approved by Kaldor's criterion). These criteria are
much less partial than the Pareto principle, but they remain partial
(that is, they fail to rank many pairs of alternatives). This is not,
however, their main drawback. They have been criticized for two basic
flaws. First, for plausible definitions of how the compensation
transfers could be computed, these criteria may lead to inconsistent
social judgments: the same criterion may simultaneously declare that a
situation A is better than another situation B, and conversely.
Scitovsky (1941) has proposed to combine the two criteria, but this
does not prevent the occurrence of intransitive social judgments.
Second, the compensation tests have a dubious ethical value. If the
compensatory transfers are performed in Kaldor's criterion, then
the Pareto criterion alone suffices since after compensation everybody
gains. If the compensatory transfers are not performed, the losers
remain losers and the mere possibility of compensation is a meager
consolation to them. Such criteria are then typically biased in favor
of the rich whose willingness to pay is generally high (i.e., they are
willing to give a lot in order to obtain whatever they want, and
therefore they can easily compensate the losers; when they do not
actually pay the compensation, they can have their cake and eat it
too).
Cost-benefit analysis has more recently developed criteria which are
very similar and rely on the summation of willingness to pay across
the population. In spite of repeated criticism by specialists (Arrow
1951, Boadway and Bruce 1984, Sen 1979, Blackorby and Donaldson 1990),
practitioners of cost-benefit analysis and some branches of economic
theory (industrial organization, international economics) still
commonly rely on such criteria. More sophisticated variants of
cost-benefit analysis (Layard and Glaister 1994, Dreze and
Stern 1987) avoid these problems by relying on weighted sums of
willingness to pay or even on consistent social welfare functions.
Adler (2012) offers a comprehensive study of the foundations of the
social welfare function approach to cost-benefit analysis. Many
specialists of public economics (e.g. Stiglitz 1987) have considered
that the Pareto criterion was the core ethical principle on which
economists should buttress their social evaluations, and that they
should focus on denouncing all sources of inefficiency in social
organizations and public policies.
A subfield of welfare economics focused on the possibility of making
social welfare judgments on the basis of national income. An increase
in national income may reflect an increase in social welfare under
some stringent assumptions, most conspicuously the assumption that the
distribution of incomes is socially optimal. Although very
restrictive, this kind of result has a lasting influence, in theory
(international economics) and in practice (the salience of GDP growth
in policy discussions). There exists a school of social indicators
(see the *Social Indicators Research* journal) which fights
this influence and the number of alternative indicators (of happiness,
genuine progress, social health, economic well-being, etc.) has soared
in the last decades (see e.g. Miringoff and Miringoff 1999, Frey and
Stutzer 2002, Kahneman et al. 2004 and Gadrey and Jany-Catrice
2006).
Bergson (1938) and Samuelson (1947, 1981) occupy a special position,
which may be described as a third way between old and new welfare
economics. From the former, they retain the goal of making complete
and consistent social welfare judgments with the help of well-defined
social welfare functions. The formula
\(W(U\_1(x),\ldots,U\_n(x))\)
is often named a "Bergson-Samuelson social welfare
function" (\(x\) is the social state;
\(U\_i(x)\), for
\(i=1,\ldots,n\), is individual \(i\)'s
utility in this state). With the latter, however, they share the idea
that only ordinal non-comparable information should be retained about
individual preferences. This may seem contradictory with the formula
of the Bergson-Samuelson social welfare function, in which individual
utility functions appear, and there has been a controversy about the
possibility of constructing a Bergson-Samuelson social welfare
function on the sole basis of individual ordinal non-comparable
preferences (see in particular Arrow (1951), Kemp and Ng (1976),
Samuelson (1977, 1987), Sen (1986) and a recent discussion in
Fleurbaey and Mongin (2005)). Samuelson and his defenders are commonly
considered to have lost the contest, but it may also be argued that
their opponents have misunderstood them. Indeed, individual utility
functions in the
\(W(U\_1(x),\ldots,U\_n(x))\)
formula are, according to Bergson and Samuelson, to be constructed out
of individual preference orderings, on the basis of fairness
principles. The logical possibility of such a construction has been
repeatedly proven by Samuelson (1977), Pazner (1979), Mayston (1974,
1982). The fact that such a construction does not require any other
information than ordinal non-comparable preferences is indisputable.
Bergson and Samuelson acknowledged the need for interpersonal
comparisons, but considered that these could be done, in an ethically
relevant way, on the sole basis of non-comparable preference
orderings. They failed, however, to be more specific about the
fairness principles on which the construction could be justified. The
theory of fair allocation (see
SS 6)
may fill the gap.
### 3.2 Harsanyi's Theorems
Harsanyi may be viewed as the last representative of the old welfare
economics, to which he made a major contribution in the form of two
arguments. The first one is often called the "impartial observer
argument". An impartial observer should decide for society as if
she had an equal chance of becoming anyone in the considered
population. This is a risky situation in which the standard decision
criterion is expected utility. The computation of expected utility, in
this equal probability case, yields an arithmetic mean of the
utilities that the observer would have if she became anyone in the
population. Harsanyi (1953) considers this to be an argument in favor
of utilitarianism. The obvious weakness of the argument, however, is
that not all versions of utilitarianism would measure individual
utility in a way that may be entered in the computation of the
expected utility of the impartial observer. In other words, ask a
utilitarian to compute social welfare, and ask an impartial observer
to compute her expected utility. There is little reason to believe
that they will come up with similar conclusions, even though both
compute a sum or a mean. For instance, a very risk-averse impartial
observer may come arbitrarily close to the maximin criterion.
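The last point can be illustrated with a toy computation (the policies and numbers are hypothetical; `rho`, the observer's relative risk aversion, is a parameter foreign to the utilitarian calculation itself):

```python
import math

def observer_value(utilities, rho):
    """Expected transformed utility of an impartial observer who has an
    equal chance of occupying each position in society; rho is her
    relative risk aversion over positions (isoelastic transform)."""
    def v(u):
        return math.log(u) if rho == 1.0 else u ** (1 - rho) / (1 - rho)
    return sum(v(u) for u in utilities) / len(utilities)

policy_x = [1, 9]   # larger utility total, very unequal
policy_y = [4, 5]   # smaller total, nearly equal
```

A risk-neutral observer (`rho = 0`) ranks like sum-utilitarianism and prefers `policy_x`; a sufficiently risk-averse observer (for instance `rho = 5`) prefers `policy_y`, approaching the maximin verdict.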
This argument has triggered controversies, in particular with Rawls
(1974), about the soundness of the maximin criterion in the original
position, and with Sen (1977b). See Harsanyi (1976) and recent
analyses in Weymark (1991), Mongin (2001a). There is a related, but
different controversy about the consequences of the veil of ignorance
in Dworkin's hypothetical insurance scheme (Dworkin 2000).
Roemer (1985) argues that if individuals maximize their expected
utility on the insurance market, they insure against states in which
they have low marginal utility. If low marginal utility happens to be
the consequence of some handicaps, then the hypothetical market will
tax the disabled for the benefit of the others, a paradoxical but
typical consequence of utilitarian policies. It is indeed well known
that insurance markets have strange consequences when utilities are
state-dependent (that is, when the utility of income is affected by
random events). For a recent revival of this controversy, see Dworkin
(2002), Fleurbaey (2008) and Roemer (2002a).
Harsanyi's second argument, the "aggregation
theorem", is about a social planner who, facing risky prospects,
maximizes expected social welfare and wants to respect individual
preferences about prospects. Harsanyi (1955) shows that these two
conditions imply that social welfare must be a weighted sum of
individual utilities, and concludes that this is another argument in
favor of utilitarianism. Recent evaluation of this argument and its
consequences may be found in Broome (1991), Weymark (1991). In
particular, Broome uses the structure of this argument to conclude
that social good must be computed as the sum of individual goods,
although this does not preclude incorporating a good deal of
inequality aversion in the measurement of individual good. Diamond
(1967) has raised a famous objection against the idea that expected
utility is a good criterion for the social planner. This criterion
implies that if the social planner is indifferent between the
distributions of utilities, for two individuals, (1,0) and (0,1), then
he must also be indifferent between these two distributions and an
equal-probability lottery between them. This is paradoxical, since the
lottery seems ex ante better: it gives both individuals equal prospects.
Broome (1991) raises another puzzle. An even better
lottery would yield either (0,0) or (1,1) with equal probability. It
is better because ex ante it gives individuals the same prospects as
the previous lottery, and it is more egalitarian ex post. The problem
is that it seems quite hard to construct a social criterion which
ranks these four alternatives as suggested here. Defining social
welfare under uncertainty is still a matter of bafflement. See
Deschamps and Gevers (1977), Ben Porath, Gilboa and Schmeidler (1997),
Fleurbaey (2010). Things are even more difficult when probabilities
are subjective and individual beliefs may differ. Harsanyi's
aggregation theorem then transforms into an impossibility theorem. On
this, see in particular Hylland and Zeckhauser (1979), Mongin (1995),
Bradley (2005), Mongin and Pivato (2016, 2020), Dietrich (2021).
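Diamond's and Broome's examples can be made concrete with a small computation. In the sketch below, prospects are lists of (probability, outcome) pairs and the planner's criterion is expected total utility; the utility numbers follow the text, and everything else is illustrative.

```python
# A sketch of Diamond's and Broome's examples. Prospects are lists of
# (probability, (u1, u2)) pairs; the planner's criterion is expected total
# utility. Utility numbers follow the text.

def expected_sum(prospect):
    """Expected social welfare under the expected-utility sum criterion."""
    return sum(p * (u1 + u2) for p, (u1, u2) in prospect)

def ex_ante_utilities(prospect):
    """Each individual's expected utility before the draw."""
    return (sum(p * u1 for p, (u1, _) in prospect),
            sum(p * u2 for p, (_, u2) in prospect))

sure_1 = [(1.0, (1, 0))]                   # individual 1 gets the good outcome
sure_2 = [(1.0, (0, 1))]                   # individual 2 gets the good outcome
coin_a = [(0.5, (1, 0)), (0.5, (0, 1))]    # Diamond's fair lottery
coin_b = [(0.5, (0, 0)), (0.5, (1, 1))]    # Broome's lottery: equal ex post

for prospect in (sure_1, sure_2, coin_a, coin_b):
    print(expected_sum(prospect), ex_ante_utilities(prospect))
# Expected total utility is 1.0 in all four cases, so the planner is
# indifferent; yet only the two coin lotteries give equal ex-ante prospects
# (0.5, 0.5), and only coin_b also equalizes utilities in every state.
```

This makes visible why constructing a criterion that ranks the four prospects as Diamond and Broome suggest is hard: any purely welfarist expected-sum criterion collapses them.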
## 4. Social Choice
The theory of social choice in its modern incarnation originated in
Arrow's pioneering work. Arrow himself described it as a
'subversive' attempt to 'operationalize' the
Bergson-Samuelson approach (Arrow 1983, 26), but it could equally be
viewed as extending the tradition of earlier work on voting and
committee decisions, going back to Condorcet and Black. It has
developed into an immense literature, with many ramifications to a
variety of subfields and topics. The social choice framework is,
potentially, so general that one may think of using it to unify
normative economics. In a restrictive definition, however, social
choice is considered to deal with the problem of synthesizing
heterogeneous individual preferences into a consistent ranking.
Sometimes an even more restrictive notion of "Arrovian social
choice" is used to name works which faithfully adopt
Arrow's particular axioms.
There are many surveys of social choice theory, in broad and
restrictive senses: Arrow, Sen and Suzumura (1997, in particular chap.
3, 4, 7, 11, 15; 2002, in particular chap. 1, 2, 3, 4, 7, 10; 2011, in
particular chap. 13, 14, 17-20), Sen (1970, 1977a, 1986, 1999,
2009), Anand, Puppe and Pattanaik (2009). See also the entry on
social choice theory.
### 4.1 Arrow's Theorem
In an attempt to construct a consistent social ranking of a set of
alternatives on the basis of individual preferences over this set,
Arrow (1951) obtained: 1) an impossibility theorem; 2) a
generalization of the framework of welfare economics, covering all
collective decisions from political democracy and committee decisions
to market allocation; 3) an axiomatic method which set a standard of
rigor for any future endeavor.
The impossibility theorem roughly says that there is no general way to
rank a given set of (more than two) alternatives on the basis of (at
least two) individual preferences, if one wants to respect three
conditions: (Weak Pareto) unanimous preferences are always respected
(if everyone prefers A to B, then A is better than B); (Independence
of Irrelevant Alternatives) any subset of two alternatives must be
ranked on the sole basis of individual preferences over this subset;
(No-Dictatorship) no individual is a dictator in the sense that his
strict preferences are always obeyed by the ranking, no matter what
his own and the other individuals' preferences are. The
impossibility holds when one wants to cover a great variety of
possible profiles of individual preferences. When there is sufficient
homogeneity among preferences, for instance when alternatives differ
only in one dimension and individual preferences are based on the
distance of alternatives to their preferred alternative along this
dimension (think, for instance, of political options on the left-right
spectrum), then consistent methods exist (the majority rule, in this
example; Black 1958).
Arrow's result clearly extends the scope of analysis beyond the
traditional focus of welfare economics, and nicely illuminates the
difficulties of democratic voting procedures such as the Condorcet
paradox (consisting of the fact that majority rule may be
intransitive). The analysis of voting procedures is a wide domain. For
recent surveys, see e.g. Saari (2001) and Brams and Fishburn (2002),
as well as the entry on
Arrow's Theorem.
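The Condorcet paradox mentioned above can be verified directly. A minimal sketch, with three hypothetical voters:

```python
# A minimal sketch of the Condorcet paradox: with the three hypothetical
# voters below, pairwise majority voting cycles.

# Each voter's ranking, best alternative first.
voters = [["A", "B", "C"],
          ["B", "C", "A"],
          ["C", "A", "B"]]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, "beats", y, ":", majority_prefers(x, y))
# All three print True: A beats B, B beats C, and C beats A, so majority
# rule produces an intransitive social preference.
```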
This analysis reveals a deep tension between rules based on the
majority principle and rules which protect minorities by taking
account of preferences in a more extended way (see Pattanaik
2002).
Specialists of welfare economics once claimed that Arrow's
result had no bearing on economic allocation (e.g. Samuelson 1967),
and there is some ambiguity in Arrow (1951) about whether, in an
economic context, the best application of the theorem is about
individual self-centered *tastes* over personal consumptions,
in which case it is indeed relevant to welfare economics, or about
individual ethical *values* about general allocations. It is
now generally considered that the formal framework of social choice
can sensibly be applied to the Bergson-Samuelson problem of ranking
allocations on the basis of individual tastes. Applications of
Arrow's theorem to various economic contexts have been made (see
the surveys by Le Breton 1997, Le Breton and Weymark 2011).
### 4.2 The Informational Basis
Sen (1970a) proposes a further generalization of the social choice
framework, by permitting consideration of information about individual
utility functions, not only preferences. This enlargement is motivated
by the impossibility theorem, but also by the ethical relevance of
various kinds of data. Distributional issues obviously require
interpersonal comparisons of well-being. For instance, an egalitarian
evaluation of allocations needs a determination of who the worst-off
are. It is tempting to think of such comparisons in terms of
utilities. This has triggered an important body of literature which
has greatly clarified the meaning of various kinds of interpersonal
utility comparisons (of levels, differences, etc.) and the relation
between them and various social criteria (egalitarianism,
utilitarianism, etc.). This literature (esp. d'Aspremont and
Gevers 1977) has also provided an important formal analysis of the
concept of welfarism, showing that it contains two subcomponents. The
first one is the Paretian condition that an alternative is equivalent
to another when all individuals are indifferent between them. This
excludes using non-welfarist information about alternatives, but does
not exclude using non-welfarist information about individuals (one
individual may be favored because of a physical handicap). The second
one is an independence condition formulated in terms of utilities. It
may be called Independence of Irrelevant Utilities (Hammond 1987), and
says that the social ranking of any pair of alternatives must depend
only on utility levels at these two alternatives, so that a change in
the profile of utility functions which would leave the utility levels
unchanged at the two alternatives should not alter how they are
ranked. This excludes using non-welfarist information about
individuals, but does not exclude using non-welfarist information
about alternatives (one may be preferred because it has more freedom).
The combination of the two conditions excludes all non-welfare
information. Excellent surveys are given in d'Aspremont (1985),
d'Aspremont and Gevers (2002), Bossert and Weymark (2004),
Mongin and d'Aspremont (1998). In spite of the important
clarification made by this literature, the introduction of utility
functions essentially amounts to going back to old welfare economics,
after new welfare economics and authors such as Bergson, Samuelson and
Arrow failed to provide appealing solutions with data on consumer
tastes only.
A related issue is how the evaluation of individual well-being must be
made, or, equivalently, how interpersonal comparisons must be
performed. There is now a growing interest in exploring concrete ways
of measuring individual well-being, as illustrated by happiness
studies (section 7.5 below) and the handbook published by Adler and
Fleurbaey (2016). Welfare economics traditionally relied on
"utility", and the extended informational basis of social
choice is mostly formulated with utilities (although the use of
extended preference orderings is often shown to be formally
equivalent: for instance, saying that Jones is better-off than Smith
is equivalent to saying that it is better to be Jones than to be
Smith, in some social state). But utility functions may be given a
variety of substantial interpretations, so that the same formalism may
be used to discuss interpersonal comparisons of resources,
opportunities, capabilities and the like. In other words, one may
separate two issues: 1) whether one needs more information than
individual preference orderings in order to perform interpersonal
comparisons; 2) what kind of additional information is ethically
relevant (subjective utility or objective notions of opportunities,
etc.). The latter issue is directly related to philosophical
discussions about how well-being should be conceived and to the
"equality of what" debate. This debate, which originates
with Sen (1980), focuses on seeking the appropriate metric of
advantage that should be used by theories of justice that are
egalitarian or that give priority to the worst-off.
The former issue is still debated. Extending the informational basis
by introducing numerical indices of well-being (or equivalent extended
orderings) is not the only conceivable extension. Arrow's
impossibility is obtained with the condition of Independence of
Irrelevant Alternatives, which may be logically analyzed, when the
theorem is reformulated with utility functions as primitive data, as
the combination of Independence of Irrelevant Utilities (defined
above) with a condition of ordinal non-comparability, saying that the
ranking of two alternatives must depend only on individuals'
ordinal non-comparable preferences. Arrow's impossibility may be
avoided by relaxing the ordinal non-comparability condition, and this
is the above-described extension of the informational basis by relying
on utility functions. But Arrow's impossibility may also be
avoided by relaxing Independence of Irrelevant Utilities only. In
particular, it makes sense to rank alternatives on the basis of how
these alternatives are considered by individuals in comparison to
other alternatives. For instance, when considering a transfer
of consumption goods from Jones to Smith, it is not enough to know
that Jones is against it and Smith is in favor of it (this is the only
information usable under Arrow's condition). It is also relevant
to know if both consider that Jones has a better bundle, or not, which
involves considering other alternatives in which bundles are permuted,
for instance. In this vein, Hansson (1973) and Pazner (1979) have
proposed to weaken Arrow's axiom so as to make the ranking of
two alternatives depend on the indifference curves of individuals at
these two alternatives. In particular, Pazner relates this approach to
Samuelson's (Samuelson 1977) and concludes that the
Bergson-Samuelson social welfare function can indeed be constructed
consistently in this way. Interpersonal comparisons may be sensibly
made on the sole basis of indifference curves and therefore on the
sole basis of ordinal non-comparable preferences. This requires
broadening the concept of interpersonal comparisons in order to cover
all kinds of comparisons, not just utility comparisons (see Fleurbaey
and Hammond 2004).
The concept of informational basis itself need not be limited to
issues of interpersonal comparisons. Many conditions of equity,
efficiency, separability, responsibility, etc. bear on the kind and
quantity of information that is deemed relevant for the ranking of
alternatives. The theory of social choice gives a convenient framework
for a rigorous analysis of this issue.
### 4.3 Around Utilitarianism
The theory of social choice with utility functions has greatly
systematized our understanding of social welfare functions. For
instance, it has shown how to construct a continuum of intermediate
social welfare functions between sum-utilitarianism and the maximin
criterion (or its lexicographic refinement, the leximin criterion,
which ranks distributions of well-being by examining first the
worst-off position, then the position which is just above the
worst-off, and so on; for instance, the maximin is indifferent between
the three distributions (1,2,5), (1,3,5) and (1,3,6), whereas the
leximin ranks them in increasing order). Three other developments
around utilitarian social welfare functions are worth mentioning.
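The maximin/leximin comparison above can be sketched as follows, using the distributions given in the text:

```python
# A sketch of the maximin vs. leximin comparison, using the distributions
# (1,2,5), (1,3,5) and (1,3,6) from the text.

def maximin_key(dist):
    """Maximin looks only at the worst-off position."""
    return min(dist)

def leximin_key(dist):
    """Leximin compares positions from the worst-off upward, lexicographically."""
    return tuple(sorted(dist))

dists = [(1, 2, 5), (1, 3, 5), (1, 3, 6)]

print([maximin_key(d) for d in dists])   # [1, 1, 1]: maximin is indifferent
print(sorted(dists, key=leximin_key))
# [(1, 2, 5), (1, 3, 5), (1, 3, 6)]: leximin ranks them in increasing order,
# because (1,2,5) < (1,3,5) < (1,3,6) lexicographically after sorting.
```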
The first development is related to the application of theories of
equality of opportunity, and involves the construction of mixed social
welfare functions which combine utilitarianism and maximin. Suppose
that there is a double partition of the population, such that one
would like the social welfare function to display infinite inequality
aversion within subgroups of the first partition, and zero inequality
aversion within subgroups of the second partition. For instance,
subgroups of the first partition consist of equally deserving
individuals, for which one would like to obtain equality of outcomes,
whereas subgroups of the second partition consist of individuals who
have equal opportunities, so that inequalities among them do not
matter. Van de gaer (1993) proposes to apply average utilitarianism
within each subgroup of the second partition, and to apply the maximin
criterion to the vector of average utilities obtained in this way. In
other words, the average utilities measure the value of the
opportunity sets offered to individuals, and one applies the maximin
criterion to such values, in order to equalize the value of
opportunity sets. Roemer (1998) proposes to apply the maximin
criterion within each subgroup of the first partition, and then to
apply average utilitarianism to the vector of minimum utilities
obtained in this way. In other words, one tries to equalize outcomes
for equally deserving individuals first, and then applies a
utilitarian calculus. These may not be the only possible combinations
of utilitarianism and maximin, but they are given an axiomatic
justification which suggests that they are indeed salient, in Ooghe,
Schokkaert and Van de gaer (2007) and Fleurbaey (2008). For a survey
on the applications of Roemer's criterion, see Roemer (2002b)
and, more recently, Ferreira and Peragine (2016), Roemer and Trannoy
(2016), Ramos and Van de gaer (2016).
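The two mixed criteria can be sketched on a small hypothetical utility matrix, with rows for the subgroups of equally deserving individuals (the first partition) and columns for the subgroups with equal opportunities (the second partition); all numbers are illustrative.

```python
# A sketch of the two mixed criteria on a hypothetical 3x2 utility matrix:
# rows are subgroups of equally deserving individuals (first partition),
# columns are subgroups with equal opportunities (second partition).
# All numbers are illustrative.

utilities = [[1, 4],
             [2, 6],
             [5, 3]]

def van_de_gaer(u):
    """Maximin over the average utility of each opportunity (column) group."""
    columns = list(zip(*u))
    return min(sum(col) / len(col) for col in columns)

def roemer(u):
    """Average over the minimum utility within each deserving (row) group."""
    return sum(min(row) for row in u) / len(u)

print(van_de_gaer(utilities))  # min(8/3, 13/3) ~ 2.67: worst opportunity set
print(roemer(utilities))       # (1 + 2 + 3) / 3 = 2.0: average of row minima
```

On these numbers the two criteria disagree, which illustrates that they are genuinely different ways of combining utilitarianism and maximin.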
A second interesting development deals with intergenerational ethics.
With an infinite horizon, it is essentially impossible to combine the
Pareto criterion and anonymity (permuting the utilities of some
generations does not change social welfare) in a fully satisfactory
way, even when utilities are perfectly comparable. This is a similar
but more basic impossibility than Arrow's theorem. The intuition
of the problem can be given with the following simple example.
Consider the following sequence of utilities: (1,2,1,2,...).
Permute the utility of every odd period with the next one. One then
obtains (2,1,2,1,...). Then permute the utility of every even
period with the next one. This yields (2,2,1,2,...). This third
sequence Pareto-dominates the first one, even though it was obtained
only through simple permutations. This impossibility is now better
understood, and various results point to the "catching up"
criterion as the most reasonable extension of sum-utilitarianism to
the infinite horizon setting. This criterion, which does not rank all
alternatives, applies when the finite-horizon sums of utilities, for
two infinite sequences of utilities, are ranked in the same way for
all finite horizons above some finite time. Interestingly, this topic
has seen parallel and sometimes independent contributions by
economists and philosophers (see e.g. Lauwers and Liedekerke 1997,
Lauwers and Vallentyne 2003, Roemer and Suzumura 2007).
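The permutation argument can be checked on a finite prefix of the stream. A minimal sketch (the truncation introduces a boundary effect at the last period, which vanishes at infinity):

```python
# A sketch of the permutation argument on a finite prefix of the infinite
# stream (1,2,1,2,...). The truncation creates a boundary effect at the
# last period, which disappears at infinity.

N = 10
original = [1 if t % 2 == 0 else 2 for t in range(N)]   # (1,2,1,2,...)

def swap_pairs(seq, start):
    """Swap positions (start, start+1), (start+2, start+3), ..."""
    out = list(seq)
    for i in range(start, len(out) - 1, 2):
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

step1 = swap_pairs(original, 0)   # permute every odd period with the next one
step2 = swap_pairs(step1, 1)      # then every even period with the next one

print(step2[:6])   # [2, 2, 1, 2, 1, 2]: the stream (2,2,1,2,...) of the text
# Away from the truncation boundary, step2 weakly dominates the original and
# is strictly better in the first period -- yet it is a mere permutation.
print(all(a >= b for a, b in zip(step2[:-1], original[:-1])))   # True
print(step2[0] > original[0])                                   # True
```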
A third development worth mentioning has to do with population ethics.
Sum-utilitarianism appears to be overly populationist, since it
implies the "repugnant conclusion" (Parfit 1984) that we
should aim for an unhappy but sufficiently large population in
preference to a small and happy one. Conversely, average
utilitarianism is "Malthusian", preferring a happier
population, no matter how small, to a less happy one, no matter how
large. Here again there is an interesting tension, namely, between
accepting all individuals whose utility is greater than zero,
accepting equalization of utilities, and avoiding the "repugnant
conclusion". This tension is shown in this way. Start with a
given affluent population of any size. Add any number of individuals
with positive but almost zero utilities. This does not reduce social
welfare. Then equalize utilities, which again does not reduce social
welfare. One then obtains, compared to the initial population, a
larger population with lower utilities. One sees that these lower
utilities may be arbitrarily low, if the added individuals are
sufficiently numerous and have sufficiently low initial utilities,
thus yielding the repugnant conclusion (see Arrhenius 2000). Average
utilitarianism disvalues additional individuals whose utility is below
the average, which is very restrictive for affluent populations. A
less restrictive approach is that of critical-level utilitarianism,
which disvalues only individuals whose utility level is below some
fixed, low but positive threshold. For an extensive overview and a
defense of critical-level utilitarianism, see Blackorby, Bossert and
Donaldson (1997, 2005), as well as Broome (2004). For an original
proposal relying on a different social welfare function (inspired by
the Gini function), see Asheim and Zuber (2014).
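The contrast between the three criteria can be sketched numerically; the population sizes and utility levels below are illustrative.

```python
# A sketch comparing total, average and critical-level utilitarianism on
# the population argument of the text. All utility numbers are illustrative.

def total(u):
    return sum(u)

def average(u):
    return sum(u) / len(u)

def critical_level(u, c=0.5):
    """Each life counts only for its utility in excess of the threshold c."""
    return sum(x - c for x in u)

happy = [10.0] * 5          # small population of happy lives
huge = [0.25] * 10_000      # huge population of barely-positive lives

print(total(happy), total(huge))      # 50.0 vs 2500.0: total prefers 'huge'
print(average(happy), average(huge))  # 10.0 vs 0.25: average prefers 'happy'
print(critical_level(happy), critical_level(huge))
# 47.5 vs -2500.0: with the threshold at 0.5, critical-level utilitarianism
# also prefers the small happy population, avoiding the repugnant conclusion
# without disvaluing every life below the average.
```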
## 5. Bargaining and Cooperative Games
At the time when Arrow declared social choice to be impossible, Nash
(1950) published a possibility theorem for the bargaining problem,
which is the problem of finding an option acceptable to two parties,
among a subset of alternatives. Interestingly, Nash relied on the
axiomatic analysis just like Arrow, so that both can be given credit
for the introduction of this method in normative economics. In the
same decade, a similar contribution was made by Shapley (1953) to the
theory of cooperative games. The development of such approaches has
been impressive since then, but some questioning has emerged regarding
the ethical relevance of this theory to issues of distributive
justice.
### 5.1 Nash's and Other Solutions
Nash (1950) adopted a welfarist framework, in which alternatives are
described only by the utility levels they give to the two parties. His
solution consists in choosing the alternative which, in the feasible
set, maximizes the product of individual utility gains from the
disagreement point (this point is the fallback option when the parties
fail to reach an agreement). This solution is therefore related to a
particular social welfare function which is somehow intermediate
between sum-utilitarianism and the maximin criterion. Contrary to
these, however, it is invariant to independent changes in utility
zeros and scales, which means that it can be applied with utility
functions which are defined only up to an affine transformation (i.e.,
no difference is made between utility function *Ui*
and utility function *ai Ui* +
*bi*), such as Von Neumann-Morgenstern utility
functions. Nash uses this invariance property in his axiomatic
characterization of the solution. He also uses another property, which
holds for any solution maximizing a social welfare function, namely,
that removing non-selected options does not alter the choice.
This particular property is criticized in Kalai and Smorodinsky
(1975), because it makes the solution ignore the relative size of the
sacrifices made by the parties in order to reach a compromise. They
propose another solution, which consists of equalizing the
parties' sacrifice relative to the maximum gain they could
expect in the available set of options. This solution, contrary to
Nash's, guarantees that an enlargement of the set of options
that is favorable to one party never hurts this party in the ultimate
selection. It is very similar to Gauthier's (1986)
"minimax relative concession" solution. Many other
solutions to the bargaining problem have been proposed, but these two
are by far the most prominent.
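The two solutions can be sketched by grid search on a hypothetical bargaining problem with disagreement point (0,0); the curved frontier below is chosen so that the two solutions differ.

```python
# A grid-search sketch of the two bargaining solutions. The feasible set
# { (x, y) : 0 <= x, y <= 1 - x^2 } and the disagreement point (0, 0) are
# hypothetical; the curved frontier makes the two solutions differ.

STEPS = 100_000

def frontier(x):
    """Maximal utility for party 2 given utility x for party 1."""
    return 1.0 - x * x

grid = [i / STEPS for i in range(STEPS + 1)]

# Nash: maximize the product of utility gains along the frontier.
nash = max(grid, key=lambda x: x * frontier(x))

# Kalai-Smorodinsky: the best feasible point on the segment from the
# disagreement point to the "ideal" point (1, 1) of maximal individual
# gains, i.e. the largest t with (t, t) feasible.
ks = max(t for t in grid if t <= frontier(t))

print(round(nash, 3), round(frontier(nash), 3))   # ~0.577 and ~0.667
print(round(ks, 3), round(ks, 3))                 # ~0.618 for both parties
```

The Kalai-Smorodinsky point equalizes the two parties' gains relative to their best hopes, while the Nash point gives party 2 more, illustrating that the two axiomatic solutions select different compromises once the frontier is not linear.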
The relevance of bargaining theory to the theory of distributive
justice has been questioned. First, if the disagreement point is
defined, as it probably should, in relation to the relative strength
of parties in a "state of nature", then the scope for
redistributive solidarity is very limited. One obtains a theory of
"justice as mutual advantage" (Barry 1989, 1995) which is
not satisfactory at the bar of any minimal conception of impartiality
or equality. Second, the welfarist formal framework of the theory of
bargaining is poor in information (Roemer 1986b, 1996). Describing
alternatives only in terms of utility levels makes it impossible to
take account of basic physical features of allocations. For instance,
it is impossible to find out, from utility data alone, which of the
alternatives is a competitive equilibrium with equal shares. As
another illustration, Nash's and Kalai and Smorodinsky's
solutions both recommend allocating an indivisible prize by a
fifty-fifty lottery, whether the prize is symmetric (a one-dollar bill
for either party) or asymmetric (a one-dollar bill if party 1 wins,
ten dollars if party 2 wins).
Extensive surveys of bargaining theory can be found in Peters (1992),
Thomson (1999).
### 5.2 Axiomatic Bargaining and Cooperative Games
The basic theory of bargaining focuses on the two-party case, but it
can readily be extended to the case when a greater number of parties
are at the table. However, when there are more than two parties, it
becomes relevant to consider the possibility for subgroups
(coalitions) to reach separate agreements. Such considerations lead to
the broader theory of cooperative games.
This broader theory is, however, more developed for the relatively
easy case when coalition gains are like money prizes which can be
allocated arbitrarily among coalition members (the "transferable
utility case"). In this case, for the two-party bargaining
problem the Nash and Kalai-Smorodinsky solutions coincide and give
equal gains to the two parties. The Shapley value is a solution which
generalizes this to any number of parties and gives a party the
average value of the marginal contribution that this party brings to
all coalitions which it can join. In other words, it rewards the
parties in proportion to the increase in coalition gain that they
bring about by gathering with others.
Another important concept is the core. This notion generalizes the
idea that no rational party would accept an agreement that is less
favorable than the disagreement point. An allocation of the total
population prize is in the core if the total amount received by any
coalition is at least as great as the prize this coalition could
obtain on its own. Otherwise, obviously, the coalition has an
incentive to "block" the agreement. Interestingly, the
Shapley value is not always in the core, except for "convex
games", that is, games such that the marginal contribution of a
party to a coalition increases when the coalition is bigger.
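Both concepts can be sketched for a small transferable-utility game; the coalition worths below are illustrative.

```python
# A sketch: the Shapley value of a three-player transferable-utility game,
# and a check that it lies in the core. The coalition worths are
# illustrative and define a convex game, so membership is guaranteed.
from itertools import combinations, permutations
from math import factorial

players = (1, 2, 3)
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 2, frozenset({1, 3}): 2, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 6}

def shapley(players, v):
    """Average marginal contribution of each player over all arrival orders."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition |= {p}
    return {p: phi[p] / factorial(len(players)) for p in players}

def in_core(payoff, players, v):
    """Coalition rationality: no coalition gets less than its own worth.
    (The Shapley value distributes v(N) exactly, so feasibility holds.)"""
    return all(sum(payoff[p] for p in s) >= v[frozenset(s)] - 1e-9
               for r in range(1, len(players) + 1)
               for s in combinations(players, r))

phi = shapley(players, v)
print(phi)                        # symmetric game: each player gets 2.0
print(in_core(phi, players, v))   # True
```

The game is convex because a player's marginal contribution rises from 0 (joining alone) to 2 (joining a singleton) to 4 (completing the grand coalition), which is why the Shapley value is guaranteed to be in the core here.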
The basics of this theory are very well presented in Moulin (1988) and
Myerson (1991). Cooperative games are distinguished from
non-cooperative games by the fact that the players can commit to an
agreement, whereas in a non-cooperative game every player always seeks
his interest and never commits to a particular strategy. The central
concept of the theory of non-cooperative games is the Nash equilibrium
(every player chooses his best strategy, taking others'
strategies as given), which has nothing to do with Nash's
bargaining solution. It has been shown, however, that Nash's
bargaining solution can be obtained as the Nash equilibrium of a
non-cooperative bargaining game, in which players make offers
alternately and accept or reject the other's offer.
## 6. Fair allocation
The theory of fair allocation studies the allocation of resources in
economic models. The seminal contribution is Kolm (1972), where the
criterion of equity as no-envy is extensively analyzed with the
conceptual tools of general equilibrium theory. Later the theory
borrowed the axiomatic method from bargaining theory, and it now
covers a great variety of economic models and encompasses a variety of
fairness concepts.
There are several surveys of this theory: Thomson and Varian (1985),
Moulin and Thomson (1997), Maniquet (1999), Thomson (2011).
### 6.1 Equity as No-Envy
An allocation is envy-free if no individual would prefer having the
bundle of another. An egalitarian distribution in which everyone has
the same bundle is trivially envy-free, but is generally
Pareto-inefficient, which means that there exist other feasible
allocations that are better for some individuals and worse for none. A
competitive equilibrium with equal shares (i.e., equal budgets) is the
central example of a Pareto-efficient and envy-free allocation. It is
envy-free since all agents have the same budget options, so that
everyone could buy everyone's bundle. It is Pareto-efficient
because, by an important theorem of welfare economics, any perfectly
competitive equilibrium is Pareto-efficient (in the absence of asymmetric
information, externalities, and public goods). A non-technical
presentation of this theorem can be found in Hausman and McPherson
(2006, 5.2).
This concept of equity does not need any other information than
individual ordinal preferences. It is not welfarist, in the sense that
from utility data alone it is impossible to distinguish an envy-free
allocation from an allocation with envy. Moreover, an envy-free
allocation may be Pareto-indifferent (everyone is indifferent) to
another allocation that has envy. On the other hand, this concept is
strongly egalitarian, and it is quite natural to view it as capturing
the idea of equality of resources (Dworkin 2000). When resources are
multi-dimensional, for instance when there are several consumption
goods, and when individual preferences are heterogeneous, it is not
obvious how to define equality of resources, but the no-envy criterion
seems the best concept for this purpose. It guarantees that no
individual will consider that another has a better bundle than his. It
has been shown by Varian (1976) that, if preferences are sufficiently
diverse and numerous (a continuum), then the competitive equilibrium
with equal shares is the only Pareto-efficient and envy-free
allocation.
This concept can also be related to the idea of equality of
opportunities (Kolm 1996). An allocation is envy-free if and only if
the bundles granted to everyone could have been chosen by each
individual in the same opportunity set, such as, for instance, the set
containing all the bundles of the allocation under consideration.
Along this vein, the concept of no-envy can also be shown to have a
close connection with incentive considerations. A no-envy test is used
in the theory of optimal taxation in order to make sure that no one
would have an interest in lying about his preferences (Boadway and
Keen 2000). Consider the condition that, when an allocation is
selected and some individuals' preferences change so that their
bundle goes up in their own preference ranking, then the selected
allocation is still acceptable. A particular version of this condition
plays a central role in the theory of incentives, under the name of
Maskin monotonicity (see e.g. Jackson 2001), but it can also be given
an ethical meaning, in terms of neutrality with respect to changes in
preferences. Notice that envy-free allocations satisfy this condition,
since after such a change of preferences every individual's
bundle goes up in his ranking, thereby precluding any appearance of
envy. Conversely, it turns out that this condition implies that the
selected allocation must be envy-free, under the additional assumption
that, in any selected allocation, individuals with identical
preferences must have equivalent bundles. If one also requires the
selection to be Pareto-efficient, then one obtains a characterization
of the competitive equilibrium with equal shares (Gevers 1986).
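The no-envy test itself is straightforward to compute. A minimal sketch with two hypothetical Cobb-Douglas consumers dividing a (2, 2) endowment:

```python
# A sketch of the no-envy test with two hypothetical Cobb-Douglas consumers
# dividing a (2, 2) endowment. All numbers are illustrative.

def cobb_douglas(alpha):
    """Utility x^alpha * y^(1-alpha) over the two goods."""
    return lambda x, y: x ** alpha * y ** (1 - alpha)

# Two agents whose preferences differ in the weight placed on good x.
utilities = [cobb_douglas(0.25), cobb_douglas(0.75)]

def is_envy_free(allocation, utilities):
    """No individual strictly prefers another's bundle to his own."""
    return all(u(*allocation[i]) >= u(*allocation[j]) - 1e-9
               for i, u in enumerate(utilities)
               for j in range(len(allocation)))

# Competitive equilibrium with equal shares: at prices (1, 1) each agent
# spends the same budget of 2 according to his preference weights.
equal_budget = [(0.5, 1.5), (1.5, 0.5)]
print(is_envy_free(equal_budget, utilities))   # True

# A lopsided split of the same endowment fails the test.
lopsided = [(1.8, 1.8), (0.2, 0.2)]
print(is_envy_free(lopsided, utilities))       # False
```

Note that only the agents' ordinal preferences matter here: any monotonic transformation of the utility functions leaves the test's verdict unchanged, in line with the non-welfarist character of the criterion.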
### 6.2 Extensions
Kolm's (1972) seminal monograph focused on the simple problem of
distributing a bundle of non-produced commodities, and on equity as
no-envy. Other economic problems, and other fairness concepts, have
been studied later. Here is a non-exhaustive list of other economic
problems that have been analyzed: sharing labor and consumption in the
production of a consumption good; producing a public good and
allocating the contribution burden among individuals; distributing
indivisible commodities, with or without the possibility of making
monetary compensations; matching pairs of individuals (men-women,
employers-workers...); distributing compensations for
differential needs; rationing on the basis of claims; distributing a
divisible commodity when preferences are satiable. In the mainstream
of this theory, the problem is to select a good subset of allocations
under perfect knowledge of the characteristics of the population and
of the feasible set. There is also a branch which studies cost and
surplus sharing, when the only information available is the
quantities demanded or contributed by the population, and the cost or
surplus may be distributed as a function of these quantities only (see
Moulin 2002). The relevance of this literature for political
philosophers should not be underestimated. Even models which seem to
be devoted to narrow microeconomic allocation problems may turn out to
be quite relevant, and some models are addressing issues already
salient in political philosophy. This is the case in particular for
the model of production of a private good when individuals have
unequal skills, which is a rough description of a market economy, and
for the model of differential needs. Both models are especially
relevant for analyzing the issue of responsibility, talent and
handicap, which is now prominent in egalitarian theories of justice. A
survey of these two models is given in Fleurbaey and Maniquet (2011a),
and a monograph connecting the various relevant fields of economic
analysis to theories of responsibility-sensitive egalitarianism is in
Fleurbaey (2008).
Among the other concepts of fairness which have been introduced, two
families are important. The first family contains principles of
solidarity, which require individuals to be affected in the same way
(they all gain or all lose) by some external shock (change in
resources, technology, population size, population characteristics).
For instance, if resources or technology improve, then it is natural
to hope that everyone will benefit. The second family contains welfare
bounds, which provide guarantees to everyone against extreme
inequality. For instance, in the division of non-produced commodities,
it is very natural to require that nobody should be worse-off than at
the equal-split allocation (i.e. the allocation in which everyone gets
the per capita amount of resources).
Let us briefly describe some of the insights that are gained through
this theory and seem relevant to political philosophy. A very
important one is that there is a conflict between no-envy and
solidarity (Moulin and Thomson 1988, 1997). This conflict is well
illustrated by the fact that in a market economy, typically any change
in technology benefits some agents and hurts others, even when the
change is a pure progress which could benefit all. Solidarity
principles are not obeyed by allocation rules which pass the no-envy
test, and these principles point toward a different kind of
distribution, named "egalitarian-equivalence" by Pazner
and Schmeidler (1978). An allocation is egalitarian-equivalent when
everyone is indifferent between his bundle in this allocation and the
bundle he would have in an egalitarian economy defined in some simple
way. For instance, the egalitarian economy may be such that everyone
has the same bundle. In this case, an egalitarian-equivalent
allocation is such that everyone is indifferent between his bundle and
one particular bundle. In more sophisticated versions, the egalitarian
economy is such that everyone has the same budget set, in some
particular family of budget sets. Egalitarian-equivalence is a serious
alternative to no-envy for the definition of equality of resources,
and its superiority in terms of solidarity is quite significant, in
relation to the next point.
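As a toy illustration (not from the entry), the no-envy test can be checked mechanically once individual preferences are represented by utility functions. The Cobb-Douglas utilities below are hypothetical examples chosen purely for illustration:

```python
# Checking the no-envy test for an allocation of two divisible goods.
# An allocation is envy-free when no individual strictly prefers another's
# bundle to his own, judged by his OWN (self-centered) preferences.

def envy_free(allocation, utilities):
    """allocation: list of bundles; utilities: list of utility functions."""
    for i, u in enumerate(utilities):
        for j, bundle in enumerate(allocation):
            if j != i and u(bundle) > u(allocation[i]):
                return False  # individual i envies individual j
    return True

# Two agents, two goods; illustrative Cobb-Douglas preferences.
u1 = lambda b: b[0] ** 0.5 * b[1] ** 0.5   # cares equally about both goods
u2 = lambda b: b[0] ** 0.8 * b[1] ** 0.2   # cares mostly about good 1

equal_split = [(1.0, 1.0), (1.0, 1.0)]     # per capita share of (2, 2)
unequal = [(1.5, 0.5), (0.5, 1.5)]
print(envy_free(equal_split, [u1, u2]))    # True: equal split is trivially envy-free
print(envy_free(unequal, [u1, u2]))        # False: agent 2 prefers agent 1's bundle
```

Note that the check needs each agent's preferences over *permuted* bundles, not merely a ranking of the two candidate allocations, which is the informational point made below about Arrow's Independence condition.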
The second insight, indeed, is that no-envy itself is a combination of
conflicting principles (Fleurbaey and Maniquet 2011a, Fleurbaey 2008).
This conflict is made apparent in models with talents and handicaps.
For instance, Pazner and Schmeidler (1974) found that there may
not exist envy-free and Pareto-efficient allocations in the context of
production with unequal skills (when there are high-skilled
individuals who are strongly averse to labor). This results from an
incompatibility between a compensation principle saying that
individuals with identical preferences should have equivalent bundles
(suppressing inequalities due to skills), and a reward principle
saying that individuals with the same skills should not envy each
other (no preferential treatment on the basis of different
preferences). Both principles are a logical implication of the no-envy
test. This is obvious for the latter. For the former, notice that
no-envy among individuals with the same preferences means that they
must have bundles on the same indifference curve. Interestingly, the
compensation principle is a logical consequence of solidarity
principles and is therefore perfectly compatible with them. It is very
well satisfied by egalitarian-equivalent allocation rules. In
contrast, it is violated by Dworkin's hypothetical insurance
which applies the no-envy test behind a veil of ignorance (see Dworkin
2000, Fleurbaey 2008, and
§3.2).
A recent philosophical analysis of the no-envy approach has been
developed in Olson (2020). The relation between envy and theories of
justice is also scrutinized in the entry on
envy.
The theory of fair allocations contains many positive results about
the existence of fair allocations, for various fairness concepts, and
this stands in contrast to Arrow's impossibility theorem in the
theory of social choice. The difference between the two theories has
often been interpreted as due to the fact that they perform different
exercises (Sen 1986, Moulin and Thomson 1997). The theory of social
choice, it is said, seeks a ranking of all options, while the theory
of fair allocation focuses on the selection of a subset of
allocations. This explanation is not convincing, since selecting a
subset of fair allocations is formally equivalent to defining a
full-blown albeit coarse ranking, with "good" and
"bad" allocations. A more convincing explanation lies in
the fact that the information used in fairness criteria is richer than
allowed by Arrow's Independence of Irrelevant Alternatives
(Fleurbaey, Suzumura and Tadenuma 2002). For instance, in order to
check that an allocation is envy-free while another displays envy, it
is not enough to know how individuals rank these two allocations in
their preferences. One must know individual preferences over other
alternatives involving permutations of bundles (an envious individual
would prefer an allocation in which his bundle is permuted with one he
envies). In this vein, one discovers that it is possible to extend the
theory of fair allocation so as to construct fine-grained rankings of
all allocations. This is very useful for the discussion of public
policies in "second-best" settings, that is, in settings
where incentive constraints make it impossible to reach
Pareto-efficiency. With this extension, the theory of fair allocation
can be connected to the theory of optimal taxation (Maniquet 2007),
and becomes even more relevant to the political philosophy of
redistributive institutions (Fleurbaey 2007). It turns out that the
egalitarian-equivalence approach is very convenient for the definition
of fine-grained orderings of allocations, which provides an additional
argument in its favor. A detailed study of fair social orderings is
made in Fleurbaey and Maniquet (2011b).
## 7. Related topics
### 7.1 Freedom and Rights
Sen (1970b) and Gibbard (1974) propose, within the framework of social
choice, paradoxes showing that it may not be easy to rank alternatives
when some individuals have a special right to rank some alternatives
that differ only in matters belonging to their private sphere, and
when their preferences are sensitive to what happens in other
individuals' private spheres. For instance, as an illustration
of Gibbard's paradox, individuals have the right to choose the
color of their shirt, but, in terms of social ranking, should A and B
wear the same color or different colors, when A wants to imitate B and
B wants to have a different color? There is a huge literature on this
topic, and after Gaertner, Pattanaik and Suzumura (1992), who argue
that no matter what choice A and B make, their right to choose their
own shirt is respected, a good part of it examines how to describe
rights properly. The framework of game forms is an interesting
alternative to the social choice model. Recent surveys can be found in
Arrow, Sen and Suzumura (1997, vol. 2).
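Gibbard's shirt example can be rendered as a tiny game form. The preferences below are hypothetical stand-ins for "A wants to imitate B, B wants to differ"; enumerating the pure outcomes shows that none is stable, which is one way of seeing why a social ranking is hard to construct:

```python
from itertools import product

colours = ("white", "blue")

def happy_A(a, b):  # A wants to imitate B
    return a == b

def happy_B(a, b):  # B wants a different colour from A
    return a != b

def stable(a, b):
    """No player can gain by unilaterally switching shirt colour."""
    other = {"white": "blue", "blue": "white"}
    if happy_A(other[a], b) and not happy_A(a, b):
        return False  # A would switch
    if happy_B(a, other[b]) and not happy_B(a, b):
        return False  # B would switch
    return True

# Every one of the four pure outcomes is upset by someone's switch:
print([(a, b) for a, b in product(colours, colours) if stable(a, b)])  # []
```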
A related but different literature has focused on measuring the
freedom contained in a given menu from which a person may choose.
Inspired by Sen's (1985) remarks on how to value capabilities
(see section 7.2 below), this literature has offered characterizations
of various measures, starting from simple counting methods (Pattanaik
and Xu 1990) and later on including additional considerations such as
quality and diversity (e.g., Arlegi & Nieto 2001, Barbera et al.
2004). Surveys of this literature can be found in Dowding and van Hees
(2009) and Foster (2011).
Apart from this formal analysis of rights and measures of freedom,
economic theory is not very well connected to libertarian philosophy,
since economic models show that, outside the very specific context
of perfect competition with complete markets, perfect information, no
externalities and no public goods, the laissez-faire allocation is
typically inefficient and arbitrarily unequal. Therefore libertarian
philosophers do not find much help or inspiration in economic theory,
and there is little cross-fertilization in this area.
### 7.2 Capabilities
The capability approach, developed in Sen (1985, 1992), is a
particular response to the "equality of what" debate, and
is presented by Sen as the best way to think about the relevant
interpersonal comparisons to be made for evaluations of social
situations at the bar of distributive justice. It is often presented
as intermediate between resourcist and welfarist approaches, but it is
perhaps more accurate to present it as more general. A
"functioning" is any doing or being in the life of an
individual. A "capability set" is the set of functioning
vectors that an individual has access to. This approach has attracted
a lot of interest in particular because it makes it possible to take
into account all the relevant dimensions of life, in contrast with the
resourcist and welfarist approaches which can be criticized as too
narrow.
Being so general, the approach needs to be specified in order to
inspire original applications. The body of empirical literature that
takes inspiration from the capability approach is now substantial. As
noted in Robeyns (2006) and Schokkaert (2007b), in many
cases the empirical studies are essentially similar, except for
terminology, to the sociological studies of living conditions. But
there are more original applications, e.g., when an evaluation of
development programs that takes account of capabilities is contrasted
with cost-benefit analysis (Alkire 2002) or when a list of basic
capabilities is enshrined in a theory of what a just society should
provide to all citizens (Nussbaum 2000). More generally, all studies
which seek to incorporate multiple dimensions of quality of life into
the evaluation of individual and social situations can be considered,
broadly speaking, as pertaining to this approach.
Two central questions pervade the empirical applications. The first
concerns the distinction between capabilities and functionings. The
latter are easier to observe because individual achievements are more
accessible to the statistician than pure potentialities. There is also
the normative issue of whether the evaluation of individual situations
should be based on capabilities only, viewed as opportunity sets, or
should take account of achieved functionings as well. The second
central question is the index problem, which has also been raised
about Rawls' theory of primary goods. There are many dimensions
of functionings and capabilities and not all of them are equally
valuable. The definition of a proper system of weights has appeared
problematic in connection to the difficulties of social choice
theory.
Recent surveys on this approach and its applications can be found in
Alkire (2016), Basu and Lopez-Calva (2011), Kuklys (2005), Robeyns
(2006), Robeyns and Van der Veen (2007), Schokkaert (2009).
### 7.3 Marxism
Roemer (1982, 1986c) proposes a renewed economic analysis of Marxian
concepts, in particular exploitation. He shows that, even if the
theory of labor value is flawed as a causal theory of prices, it may
be consistently used in order to measure exploitation and analyze the
correlation between exploitation and the class status of individuals.
However, he considers that this concept of exploitation is ethically
not very appealing, since it roughly amounts to requiring individual
consumption to be proportional to labor, and he suggests a different
definition of exploitation, in terms of undue advantage due to unequal
distribution of some assets. This leads him eventually to merge this
line of analysis with the general stream of egalitarian theories of
justice. The idea that consumption should be proportional to labor has
also received some attention in the theory of fair allocation (Moulin
1990, Roemer & Silvestre 1993). See Roemer (1986a) for a
collection of philosophical and economic essays on Marxism. There has
been a recent wave of analysis of the concept of exploitation in
economics, in particular under the impetus of Veneziani and
Yoshihara (2015, forthcoming). See also Veneziani (2013), Fleurbaey
(2014) and Skillman (2014). For the references in philosophy, see the
entry on
exploitation.
### 7.4 Opinions
In normative economics, theorists have often been wary of relying on
concepts which are disconnected from the lay person's intuition.
Questionnaire surveys, usually performed among students, have indeed
given some disturbing results. Welfarist approaches have been
questioned by the results of Yaari and Bar Hillel (1984), the
Pigou-Dalton principle has been critically scrutinized by Amiel and
Cowell (1992), the principles of compensation and reward have obtained
mixed support in Schokkaert and Devooght (1998), etc. It is of course
debatable how much theorists can learn from such results (Bossert
1998).
Surveys of this questionnaire approach are available in Schokkaert and
Overlaet (1989), Amiel and Cowell (1999), Schokkaert (1999), Gaertner
and Schokkaert (2011). Philosophers have also performed similar
inquiries (Miller 1992).
### 7.5 Altruism and Reciprocity
It is standard in normative economics, as in political philosophy, to
evaluate individual well-being on the basis of self-centered
preferences, utility or advantage. Feelings of altruism, jealousy,
etc. are ignored in order not to make the allocation of resources
depend on the contingent distribution of benevolent and malevolent
feelings among the population (see e.g. Goodin 1986, Harsanyi 1977).
It may be worth mentioning here that the no-envy criterion discussed
above has nothing to do with interpersonal feelings, since it is
defined only with self-centered preferences. When an individual
"envies" another in this particular sense, he simply
prefers the other's consumption to his own, but no feeling is
involved (he might even not be aware of the existence of the other
individual).
But positive economics is quite relevantly interested in studying the
impact of individual feelings on behavior. Homo oeconomicus may
be rational without being narrowly focused on his own consumption. The
analysis of labor relations, strategic interactions, transfers within
the family, and generous gifts requires a more complex picture of human
relations (Fehr and Fischbacher 2002). Reciprocity, in particular,
seems to be a powerful source of motivation, leading individuals to
incur substantial costs in order to reward nice partners and punish
faulty partners (Fehr and Gachter 2000). For an extensive survey of
this branch of the economic literature, see Gerard-Varet, Kolm
and Mercier-Ythier (2004).
### 7.6 Happiness studies
The literature on happiness has surged in the last decade. The
findings are well summarized in many surveys (see in particular Diener
(1994, 2000), Diener et al. (1999), Frey and Stutzer (2002), Graham
(2009), Kahneman et al. (1999), Kahneman and Krueger (2006), Layard
(2005), Oswald (1997), Van Praag and Ferrer-i-Carbonell (2008)), and
reveal the main factors of happiness: personal temperament, health,
social connections (in particular being married and employed). The
impact of material wealth is debated, some arguing that it is more a
matter of relative position than of absolute comfort, at least above a
minimal level of affluence (Easterlin 1995, Clark et al. 2008), others
arguing that there is a positive (but logarithmic: doubling income
induces a constant increment in happiness) impact over the whole range
of observed living standards (Deaton 2008, Sacks et al. 2010).
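The logarithmic claim can be illustrated in a few lines: if reported happiness is affine in log income, doubling income yields the same increment at every income level. The coefficients below are placeholders, not estimates from the cited studies:

```python
import math

# Hypothetical happiness function h(y) = a + b*log(y); doubling income
# then adds the constant b*log(2) regardless of the starting level.
a, b = 4.0, 0.5

def happiness(income):
    return a + b * math.log(income)

for y in (1_000, 10_000, 100_000):
    print(round(happiness(2 * y) - happiness(y), 6))  # same increment each time
```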
A hotly debated question is what to make of this approach in welfare
economics. There is a wide variety of positions, from those who
propose to measure and maximize national happiness (Diener 2000,
Kahneman et al. 2004, Layard 2005) to those who firmly oppose this
idea on various grounds (Burchardt 2006, Nussbaum 2008, among others).
There seems to be a consensus on the idea that happiness studies
suggest a welcome shift of focus, in social evaluation, from purely
materialistic performances to a broader set of values. Above all, one
can consider that the traditional suspicion among economists about the
possibility of measuring subjective well-being is being assuaged by the
recent progress.
However, the fact that subjective well-being can be measured does not
imply that it ought to be taken as the metric of social evaluation.
Surprisingly, the literature on happiness refers very little to the
lively philosophical debates of the previous decades about welfarism,
and in particular the criticisms raised by Rawls (1982) and Sen (1985)
against utilitarianism. (Two exceptions are Burchardt (2006) and
Schokkaert (2007). Layard (2005) also mentions and quickly rebuts some
of the arguments against welfarism.) Nonetheless, one of the key
elements of that earlier debate, namely, the fact that subjective
adaptation is likely to hide objective inequalities, shows up in the
data, challenging happiness specialists. Subjective well-being seems
relatively immune in the long run to many aspects of objective
circumstances, individuals displaying a remarkable ability to adapt.
After most important life events, satisfaction returns to its usual
level and the various affects return to their usual frequency. If
subjective well-being is not so sensitive to objective circumstances,
should we stop caring about inequalities, safety, and
productivity?
These issues and the possible uses of subjective well-being data for
welfare analysis are discussed in detail in several chapters of Adler
and Fleurbaey (2016), in particular the chapters by Fujiwara and
Dolan, Bykvist, Haybron, Lucas, Graham, Clark, Decancq and
Neumann.
### 7.7 Animals
Normative economics, even more so than political and moral philosophy,
has traditionally been anthropocentric, but growing interest in animal
rights and animal interests is emerging, following the lead of
influential philosophers (see the entry on the
moral status of animals
and, e.g., Sunstein and Nussbaum 2004). Interestingly, specialized
studies in animal welfare, in biology, adopt similar methods as
welfare economics to estimate animal preferences and conceptualize
various approaches to their needs and their well-being (e.g., Appleby
et al. 2018). Many arguments being developed against the current
farming practices are still based on anthropocentric considerations
such as climate change and ecosystem services. But the idea of carving
a place in the social welfare function for animals, alongside their
fellow humans, is gaining ground (Johansson-Stenman 2018, Carlier and
Treich 2020, Espinosa and Treich 2021; for the opposite view, see
Eichner and Pethig 2006). Budolfson and Spears (2020) propose a way to
compute an inclusive utilitarian social welfare function based on a
distinction between well-being potential and relative realized
well-being.
## 1. Production Possibilities in Ramsey's Formulation
Ramsey's goal was practical: "How much of a nation's output should
it save for the future?" The demographic profile over time was taken by
him to be given, meaning that future numbers of people were seen as
exogenous and predictable. We were therefore to imagine that economic
policies have a negligible effect on reproductive behaviour (but see
Dasgupta, 1969, for a study of the joint population/saving problem,
using Classical Utilitarianism as the guiding principle). Parfit (1984)
christened choices involving the same demographic profile, "Same
Numbers Choices."
The ingredients of Ramsey's theory are individuals' lifetime
well-beings. Government House in his world maximizes the expected sum
of the lifetime well-beings of all who are here today and all who will
ever be born, subject to resource constraints. The optimum distribution
of lifetime well-beings across generations is derived from that
maximization exercise. Of course, the passage of time is not the same
as the advance of generations. An individual's lifetime well-being is
an aggregate of the flow of well-being she experiences, while
intergenerational well-being is an aggregate of the lifetime
well-beings of all who appear on the scene. It is doubtful that the two
aggregates should have the same functional form. On the other hand,
there is little evidence to suggest that we would be way off the mark
in assuming they do have the same form. As a matter of practical
ethics, it helps enormously to approximate by not distinguishing the
functional form of someone's well-being over time from that of
well-being across the generations. Ramsey adopted this short-cut.
People were also taken to be identical, so we may as well assume that
there is a single individual at each date. This move removes any
distinction between time and generations. An alternative interpretation
would have us imagine that the economy consists of a single dynasty,
where parents in each generation leave bequests for their children
(Meade 1966, adopted this interpretation). Ramsey also assumed,
probably because the mathematics is simpler, that time is a continuous
variable, not discrete.
Let \(t \ge 0\) denote time. In Ramsey's model there is no
uncertainty (but see Levhari and Srinivasan, 1969, for one of the first
of many extensions of the Ramsey model that incorporate uncertainty
about future possibilities). The economy is endowed with a *single,
non-depreciating* commodity that can be worked by labour to produce
output at each date (Gale 1967 and Brock 1973, were among the first
of many extensions of the Ramsey model that contain a heterogeneous
collection of capital goods). The economy is assumed to be closed to
international trade (opening the economy to trade involves only a minor
extension to Ramsey's model). That means some of the output can be
invested so as to add to the commodity's stock while the remainder can
be consumed immediately. We call the stock of the commodity that serves
to produce output, "capital." The problem is then to find the optimum
allocation of output at each date between consumption and
investment.
Ramsey assumed that work is unpleasant. But because including the
disutility of work in our account of his work here would add nothing of
substance, we suppose that labour supply is an exogenously given
constant (e.g., it is independent of the wages labour can demand). That
enables us to suppress the supply of labour in both production and the
factors affecting well-being.
If \(K\) is the stock of capital of the economy's one and only
commodity, output is taken to be \(F(K)\), where \(F(0) = 0\) (i.e.,
output is zero if there is no capital), \(dF(K)/dK \gt 0\) (i.e.,
the marginal product of capital is positive), and \(d^2 F(K)/dK^2
\le 0\) (i.e., the marginal product of \(K\) does not increase with
\(K\)). \(F(K)\) is a *flow* (production at a moment in time), in
contrast to \(K\), which is a *stock* (quantity of capital,
period). Notice also that output depends solely on the stock of
capital. No mention is made of possible improvements in the quality of
capital or labour. Thus, there is no prospect of technological
progress or accumulation of human capital in Ramsey's model (but
see Mirrlees 1967, for one of the first of many extensions of the
Ramsey model that include technological advances in production and
human capital formation); nor are there any natural resources in the
model (but see Dasgupta and Heal 1974, for one of the first of many
extensions of the Ramsey model that include natural capital in
production).
Let \(C(t)\) be consumption at \(t\). It is a
flow (units of consumption per moment). Similarly, we write
\(K(t)\) for the stock of capital at \(t\). As
\(dK(t)/dt\) is the rate of change in the
capital stock at \(t\), it is "net investment at \(t\),"
which too is a flow. And because the capital stock is assumed not to
depreciate, gross investment equals net investment.
In Ramsey's model anticipated output at each moment equals the sum
of intended investment and intended consumption. Intentions are always
realized. To put it in technical language, the economy is in
equilibrium at each moment, which is another way of saying that at each
moment intended saving equals intended investment. (The assumption
needs no explanation in a model with a single agent, but has real bite
in a world where savers are not the same agents as investors.) Capital
is assumed to be always fully deployed, and labour (which is hidden in
the production function \(F(K)\)) is taken to be fully
employed. Output at \(t\) is \(F(K(t))\).
It follows that the economy is driven by the dynamical equation
\[\tag{1}
\frac{dK(t)}{dt} = F(K(t)) - C(t)
\]
Equation (1) says that if consumption is \(C(t)\), investment is what
remains of output. So, Ramsey's problem can be cast equally as,
"How much of a nation's output should it consume?"
If consumption is less than output at \(t\) (i.e., \(C(t) \lt
F(K(t))\)), investment is positive (i.e., \(dK(t)/dt \gt 0\)) and the
stock of capital increases; but if consumption exceeds output at
\(t\), investment is negative, which means capital is eaten into and
the stock declines (i.e., \(dK(t)/dt \lt 0\)). We now imagine that
Government House is advised by a "socially-concerned
citizen," the person being someone who is trying to determine
the right balance between the economy's consumption and
investment at each date. We shall call that person the *decision
maker*, or DM. Ramsey imagined that DM is a
Classical-Utilitarian.
## 2. The Classical-Utilitarian Calculus
Classical Utilitarianism identifies the good as the expected sum of
well-being over time and across generations. Here is Sidgwick (1907:
414) on the matter:
>
>
> It seems ... clear that the time at which a man exists cannot
> affect the value of his happiness from the universal point of view;
> and that the interests of posterity must concern a Utilitarian as much
> as those of his contemporaries, except in so far as the effect of his
> actions on posterity - *and even the existence of human
> beings to be affected* - must necessarily be more
> uncertain. (Italics added)
To formalize this, we consider an arbitrary date \(t\) at which DM is
deliberating. Let \(\tau\) denote dates not earlier than \(t\) (i.e.,
\(\tau \ge t)\). Ramsey considered a deterministic, infinitely lived
world (but see Yaari, 1965, for the first of many extensions of the
Ramsey model that incorporate the risk of individual or societal
extinction). Well-being is assumed to be a numerical quantity. Let
\(U(t)\) be well-being at \(t\), and let \(V(t)\) be an aggregate
measure of the flow of well-being across time and generations, as
evaluated at time \(t\). Ramsey followed Sidgwick in assuming that
\[\tag{2}
V(t) = \int^{\infty}\_t[U(\tau)]d\tau
\]
\(V(t)\) is *intergenerational well-being* at \(t\). Because
Ramsey's world is deterministic, \(V(t)\) is also the expected
value of \(V(t)\). So Sidgwick's criterion is the \(V(t)\) in
equation (2).
Well-being at any given date is assumed to be a function solely of
consumption at that date. We therefore write \(U(t) =
U(C(t))\). Ramsey assumed that marginal well-being is positive (i.e.,
\(dU(C)/dC \gt 0)\) but diminishes with increasing consumption levels
(i.e., \(d^2 U(C)/dC^2 \lt 0)\). The latter property implies that
\(U(C)\) is a *strictly concave* function. (Edgeworth 1885,
had routinized the idea that marginal well-being declines with
increasing consumption.) Thus equation (2) can be written as
\[\tag{3}
V(t) = \int^{\infty}\_t [U(C(\tau))]d\tau
\]
Classical Utilitarianism, as reflected in equation (3), implies
that if \(U\) is a numerical measure of well-being, then so is
\(\alpha U+\beta\), where \(\alpha\) is a positive number and \(\beta\)
is a number of either sign. Formally, we say that \(U\) is unique
up to "positive affine transformations." We confirm presently that the
theory's recommendations are invariant under such transformations.
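The invariance claim is easy to check numerically over a fixed horizon; the square-root well-being function and the consumption streams below are illustrative assumptions:

```python
# Over a fixed horizon of n dates, replacing U by alpha*U + beta changes the
# total to alpha*sum(U) + n*beta, which preserves the ranking of streams.

def total(stream, U):
    return sum(U(c) for c in stream)

U = lambda c: c ** 0.5           # strictly concave, increasing
V = lambda c: 3.0 * U(c) + 7.0   # a positive affine transformation of U

streams = [(1, 1, 1, 1), (4, 0, 0, 0), (2, 2, 0, 0)]
rank_U = sorted(streams, key=lambda s: total(s, U))
rank_V = sorted(streams, key=lambda s: total(s, V))
print(rank_U == rank_V)  # True: the ranking is unchanged
```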
### 2.1 Zero Discounting of Future Well-Beings
In equation (3), future values of \(U\) are not discounted when
viewed from the present moment, \(t\). This particular move has
provoked more debate among economists and philosophers than any other
feature of Ramsey's theory of optimum saving. The debate has on
occasion been shriller than even we economists are used to (see in
particular Nordhaus 2007). At the risk of generalizing wildly,
economists have favoured the use of positive rates to discount future
well-beings (e.g., Arrow and Kurz 1970), whereas philosophers have
insisted that the well-being of future people should be given the same
weight as that of present people (e.g., Parfit 1984).
What would Classical Utilitarianism with positive discounting of
future well-beings look like? Let \(\delta \gt 0\) be the rate at which
it is deemed desirable to discount future well-beings (for simplicity
we take the discount rate to be constant). Then, in place of equations
(2)-(3), intergenerational well-being at \(t\), would read as
\[\tag{4}
\begin{align}
V(t) &= \int^{\infty}\_t [U(\tau)e^{-\delta(\tau -t)}]d\tau \\
&= \int^{\infty}\_t [U(C(\tau))e^{-\delta(\tau -t)}]d\tau, t \ge 0 \\
\end{align}\]
In equation (4), \(\delta\) is the "time discount rate" and
\(e^{-\delta}\) is the resulting "time discount factor."
\(\delta \gt 0\) implies \(e^{-\delta} \lt 1\). That
means \(e^{-\delta(\tau -t)}\) tends to
zero exponentially as \(\tau\) tends to infinity. In the latter
part of his paper Ramsey (1928: 553-555) did use equation (4) to study
the problem of optimum saving, but he did not approve of the
formulation. Instead, he wrote (p. 543) that to discount later
\(U\)'s in comparison with earlier ones is "... ethically
indefensible and arises merely from the weakness of the imagination."
In a book that inaugurated the formal study of economic development,
Harrod (1948: 40) followed suit by calling the practice a "...
polite expression for rapacity and the conquest of reason by
passion."
Strong words, but to some economists, the Ramsey-Harrod stricture in
a deterministic world reads like a Sunday pronouncement. Solow (1974a:
9) expressed this feeling exactly when he wrote, "In solemn conclave
assembled, so to speak, we ought to act as if the [discount rate on
future well-beings] were zero."
But the matter cannot be settled without a study of production and
consumption possibilities open to an economy. Consider the following
tension between two sets of considerations:
1. Low rates of consumption by generations sufficiently far into
the future would not be seen to be a bad thing by the current DM if
future well-beings were discounted at a positive rate. So today's DM
would recommend high consumption rates for now and the near future even
if that meant generations in the distant future would live in penury.
But if such a policy were followed, the demands of a further moral
requirement that DM may hold in addition to Classical Utilitarianism, namely,
"intergenerational equity," would not be met. Therefore we should
follow Ramsey and not discount future well-beings.
2. Write \(dF(K)/dK\) as \(F\_K\). From equation (1) it is simple to
deduce that \(F\_K\) is the rate of return on investment. In
Ramsey's economy \(F\_K \gt 0\), which means every unit of output
that is saved yields more than a unit of future consumption, other
things equal. For example, if DM were to reduce consumption at \(t\)
by a unit, the additional consumption that would be available in the
briefest of periods later - we write that as \(\Delta t\)
- without affecting consumption at any future date would be
\(1+[dF(K(t))/dK(t)]\Delta t\). The productivity of capital is thus
tied to the arrow of time, which creates a bias in favour of future
generations. This bias gives bite to the adage, "We can do
something for posterity, but what can posterity ever do for us?"
The thought inevitably arises that perhaps the bias should be
countered in DM's calculus if some attention were to be given by
her to intergenerational equity in realized well-being as a supplement
to Classical Utilitarianism. That in turn suggests that DM should
abandon Ramsey and discount future well-beings at a positive rate.
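The first-order calculation in point 2 can be verified with hypothetical numbers (the production function and values below are illustrative): forgoing one unit of consumption at \(t\) and consuming the proceeds a short interval later returns roughly one unit plus the marginal product earned in the meantime.

```python
# With F(K) = K**0.3, the extra consumption available Delta-t later from
# saving one unit is 1 + [F(K+1) - F(K)]*dt, which the text approximates
# to first order as 1 + F'(K)*dt.

F = lambda K: K ** 0.3
F_K = lambda K: 0.3 * K ** (-0.7)  # marginal product dF/dK

K, dt = 4.0, 0.01
payoff = 1.0 + (F(K + 1.0) - F(K)) * dt  # extra output from the extra unit
approx = 1.0 + F_K(K) * dt               # the first-order expression in the text
print(abs(payoff - approx) < 1e-3)       # True: the two agree to first order
```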
The force of each consideration has been demonstrated in the
economics literature. It has been shown in the context of a simple
model that if production requires produced capital and exhaustible
resources, then optimum consumption declines to zero in the long run if
future well-beings are discounted at a positive rate (Dasgupta and
Heal 1974), but increases indefinitely if we follow Ramsey in not
discounting future well-beings (Solow 1974b). The exercises tell us
that the long-run features of optimum saving policies depend on the
relative magnitudes of the rate at which future well-beings are
discounted and the long-term productivity of capital assets.
There is a more general point here, which was explored by Koopmans
(1960, 1965, 1967, 1972) in a remarkable set of publications on the
idea of economic development. In such complex exercises as those
involving consumption and investment over a long time horizon, it is
foolish to regard any ethical principle (e.g., Classical
Utilitarianism) as sacrosanct. One can never know in advance what it
may run up against. A more judicious tactic than Ramsey's would be to
to play off one set of ethical assumptions against another in
not-implausible worlds, see what their implications are for the
distribution of well-being across generations, and then appeal to our
intuitive senses before arguing over policy. Settling *ex ante*
whether to use a positive rate to discount future well-beings could be
a self-defeating
move.[1]
## 3. The Problem of Optimum Saving
Ramsey considered a world with an indefinite future. This could
appear to be an odd move, but it has a strong rationale. Suppose DM
were to choose a horizon of \(T\) years. As she doesn't know when
our world will end, she will want to specify the resources that should
be left behind at \(T\) in case the world doesn't terminate then.
But to find a justification for the amount to leave behind at
\(T\), DM will need an assessment of the world beyond \(T\).
That would, however, amount to including the world beyond \(T\).
And so on.
Denote a consumption stream from the present \((t = 0)\) to infinity
as \(\{C(t)\}.\) \(K(0) \gt 0\) circumscribes the economy; it is the
quantity of capital that society has inherited from the
past. Mathematicians would call \(K(0)\) an "initial
condition." The problem Ramsey set himself was to determine the
consumption stream \(\{C(t)\}\) from 0 to infinity that DM would
select if she were a Classical Utilitarian.
### 3.1 Undiscounted Utilitarianism
Call a consumption stream \(\{C(t)\}\) *feasible*
if it satisfies equation (1) with initial condition \(K(0)\). In
Ramsey's deterministic world the Classical Utilitarian formulation of
the problem of optimum national saving at date \(t = 0\) is
thus:
"From the set of all feasible consumption streams, find that
\(\{C(t)\}\) which maximizes
\[
V(0) = \int^{\infty}\_0 [U(C(t))]dt."
\]
We will call this optimization problem, *Ramsey Mark I*.
There is a serious difficulty with *Ramsey Mark I*: it is not
coherent. Infinite sums don't necessarily converge. For any
\(\{C(t)\}\) for which the infinite integral doesn't
converge, \(V(0)\) doesn't exist. If the integral is
non-convergent for every feasible consumption stream
\(\{C(t)\}\), the maximization problem is meaningless: One
cannot maximize something that appears to be a real-valued function
\(V(0)\) when in fact the function doesn't exist.
The force of this observation can be seen in
**Example 1** (attributed to David Gale)
Suppose as an extreme special case of the Ramsey economy,
\(F(K) = 0\) for all \(K \ge 0\). Then equation (1)
reduces to
\[\tag{5}
\frac{dK(t)}{dt} = - C(t)
\]
The economy described in equation (5) consists of a non-deteriorating
piece of cake, of size \(K(0) \gt 0\) at the initial date. It is
obvious that every consumption stream \(\{C(t)\}\) satisfying equation
(5) tends to zero in the long run. Formally, \(C(t) \rightarrow 0\) as
\(t \rightarrow \infty\).
Because the \(U\)-function is unique up to positive affine
transformations, we may without any loss of generality normalize it so
that \(U(0) \ne 0\). It is then obvious that for all feasible
\(\{C(t)\}\), \(V(0)\) in *Ramsey Mark I*
diverges to minus infinity if \(U(0) \lt 0\), but diverges to
plus infinity if \(U(0) \gt 0\). That an optimum policy does not
exist in the cake-eating model can be seen if we now recall that
\(U(C)\) has been assumed to be strictly concave. The
assumption implies that any non-egalitarian distribution of consumption
among the generations can be improved upon by a suitable
redistribution. The ideal distribution would be equal consumption for
all generations. The only consumption stream with the latter property
is \(C(t) = 0\) for all \(t\). But that's the worst
possible distribution. QED
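A numerical sketch of Example 1 can make the divergence vivid. Assume, purely for illustration, the strictly concave function \(U(C) = \sqrt{C} - 1\) (so that \(U(0) = -1 \lt 0\)) and the feasible stream \(C(t) = K(0)ae^{-at}\), which exactly exhausts the cake:

```python
import math

# Cake-eating economy (eq. 5): dK/dt = -C, with K(0) = 1.
# Illustrative feasible stream C(t) = K0 * a * exp(-a*t); its integral
# over [0, infinity) equals K0, so it exhausts exactly the initial cake.
# Illustrative strictly concave utility with U(0) = -1 < 0:
#   U(C) = sqrt(C) - 1.
K0, a = 1.0, 1.0

def C(t):
    return K0 * a * math.exp(-a * t)

def U(c):
    return math.sqrt(c) - 1.0

def V(T, n=200_000):
    """Midpoint Riemann sum approximating the integral of U(C(t)) over [0, T]."""
    dt = T / n
    return sum(U(C((i + 0.5) * dt)) for i in range(n)) * dt

# Total consumption up to t = 100 is (almost) the whole cake...
total = sum(C((i + 0.5) * 0.001) * 0.001 for i in range(100_000))
print(round(total, 4))   # close to K0 = 1

# ...yet the undiscounted well-being integral diverges to minus infinity:
for T in (10, 100, 1000):
    print(T, round(V(T), 1))
```

With this normalization \(V(T) = 2(1 - e^{-T/2}) - T\), so the partial integrals head to minus infinity roughly linearly in the horizon, as the argument in the text predicts.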
### 3.2 Re-normalizing Undiscounted Utilitarianism
The question arises whether there are circumstances in which there
is a best consumption stream even though \(V(0)\) does not
converge for all consumption streams. Ramsey formulated the question by
altering the way the saving problem is posed.
Imagine that well-being is bounded above no matter how large
consumption happens to be. Let \(U\) be the numerical measure of
well-being that DM chooses to work with. (All positive affine
transformations of \(U\) would be equally legitimate measures of
well-being.) Let \(B\) be the least upper bound of \(U\). Ramsey
christened it "Bliss". Because the rate of return on
investment \((F\_K)\) in his model is positive, consumption would grow
indefinitely and tend to infinity in the long run if saving rates were
suitably chosen. That means there are possible paths of economic
development in which \(U(C(t))\) tends to \(B\) in the long run. But
that implies there are possible paths of economic development in which
the short-fall of \(U(C(t))\) from \(B\) tends to zero in the long
run. If the short-fall tends to zero fast enough, the undiscounted
integral of the difference between \(U(C(t))\) and \(B\) would exist,
and DM could seek to maximize the modified integral. So we
have *Ramsey Mark II*, which reads as
"From the set of all feasible consumption streams, find that
\(\{C(t)\}\) which maximizes
\[
V(0) = \int^{\infty}\_0 [U(C(t))-B]dt."
\]
Notice that *Mark II* is a transformation of *Mark I*.
The transformation amounts to re-normalizing the optimality criterion.
Not only was the move from *Mark I* to *Mark II* on
Ramsey's part ingenious, it also displayed his moral integrity. It
would have been easy enough for him to ask DM instead to discount
future consumption and expand the range of circumstances in which
Utilitarianism provides an answer to the problem DM is attempting to
solve. He chose not to do that.
Ramsey's intuition in moving from *Mark I* to *Mark
II* was powerful, but in a paper that initiated the modern
literature on the Ramsey problem, Chakravarty (1962) observed that to
rely exclusively on the condition Ramsey had identified as being
*necessary* for a consumption stream to be the optimum (see
below) can lead to absurd results (see below, Sect. 4). In effect
Chakravarty observed that infinite integrals, even when cast in the
re-normalized form in *Ramsey Mark II*, don't necessarily
converge to finite values.
### 3.3 The Overtaking Criterion
What was needed was to de-link the question whether infinite
well-being integrals converge from the question whether optimum
consumption streams exist. That insight was provided by Koopmans (1965)
and von Weizsacker (1965). The latter author's re-statement of
the problem of optimum saving was as follows:
We say that the feasible consumption stream \(\{C^\*(t)\}\)
is *superior* to a feasible consumption stream \(\{C(t)\}\) if
there exists \(T \gt 0\) such that for all \(t \ge T\),
\[\tag{6}
\int^t\_0 [U(C^\*(s))]ds \ge \int^t\_0 [U(C(s))]ds
\]
We call \(\{C^\*(t)\}\) *optimum* if it is superior
to all other feasible consumption streams.
The condition that is represented in inequality (6) is known as the
*Overtaking Criterion* (OC), for that is what it is. OC avoids
asking whether the integrals on either side of inequality (6) converge
as \(t \rightarrow \infty\). If they do, OC reduces to Classical
Utilitarianism. But OC is able to respond to Ramsey's saving problem in
a wider class of situations. In his work Koopmans (1965) identified a
canonical economic model in which the \(U\)-function is bounded
above and in which *Ramsey Mark II* is equivalent to an
optimization problem that is posed in terms of OC.
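A small sketch of OC, with two made-up well-being paths chosen only for illustration: stream A yields a constant well-being of 1, while stream B yields 0 up to \(t = 1\) and 2 thereafter. Neither infinite integral converges, yet B's cumulative integral catches up with A's at \(t = 2\) and stays (weakly) ahead, so B is superior to A under inequality (6):

```python
def U_A(t):
    # stream A: constant well-being of 1
    return 1.0

def U_B(t):
    # stream B: 0 until t = 1, then 2 forever after
    return 0.0 if t < 1.0 else 2.0

def cumulative(U, t, n=10_000):
    """Midpoint Riemann sum of U over [0, t] (the partial integrals in eq. 6)."""
    dt = t / n
    return sum(U((i + 0.5) * dt) for i in range(n)) * dt

# Both partial integrals grow without bound as t -> infinity, but for
# all t >= T = 2 stream B's partial integral weakly exceeds stream A's:
# B "overtakes" A.
for t in (1.0, 2.0, 5.0, 50.0):
    print(t, round(cumulative(U_A, t), 2), round(cumulative(U_B, t), 2))
```

Here \(\int^t\_0 U\_A\,ds = t\) while \(\int^t\_0 U\_B\,ds = \max(0, 2(t-1))\), so the comparison never requires either infinite integral to exist, which is exactly the point of OC.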
What are we to make of the ethics of discounting the well-beings of
future generations? Ramsey (1928) began by dismissing it but then
studied it at the tail end of his paper. DM could of course justify
discounting future well-being if there is a possibility of future
extinction. Sidgwick (1907) himself noted that in the passage quoted
earlier. If Classical Utilitarianism is taken to commend the expected
sum of well-beings, then the "hazard rate" at date
\(t\) (i.e., the probability of extinction at date \(t\)
conditional on society surviving until \(t)\) would appear in the
expression for expected well-being as a discount rate for well-being at
\(t\). The question remains whether Classical Utilitarianism would
insist on zero-discounting of future utilities in a deterministic
world.
In a remarkable pair of works Koopmans (1960, 1972) exposed internal
contradictions in ethical reasoning in a deterministic world in both
*Ramsey Mark I* and *Ramsey Mark II*. He (and
subsequently Diamond, 1965) showed that if relatively weak normative
requirements are imposed on the concept of intergenerational well-being
in a deterministic world, equal treatment of the \(U\)-function
across generations has to be abandoned. We turn to that now.
### 3.4 Discounted Utilitarianism
It transpires the mathematics is a lot simpler if, instead of
assuming time is continuous, time is taken to be discrete. Thus we now
assume that \(t = 0,1,2,\ldots\) . Assume also that
intergenerational well-being at \(t = 0\) can be measured in terms
of a numerical function \(V\). The idea is to require the
function, which is defined on infinite well-being streams, to satisfy
properties that reflect ethical directives.
Let \(\{U(t)\}\) be an infinite well-being stream, that is, \(\{U(t)\}
= (U(0),U(1),\ldots ,U(t),\ldots)\). We say \(V(\{U(t)\})\)
is *continuous* if in an appropriate mathematical sense the
values of \(V\) for well-being streams \(\{U(t)\}\) that don't
differ much in the space of \(\{U(t)\}\)s are close to one another. A
further condition on the \(V\)-function that is ethically attractive
is "monotonicity". To define the notion let us say a
well-being stream is "superior" to another if no
generation enjoys less well-being along the former than along the
latter and if there is at least one generation that enjoys greater
well-being in the former than it does in the latter. We say that \(V\)
is *monotonic* if \(V\) is larger for a well-being stream than
it is for another if the former is superior to the latter.
Both properties are attractive. Lexicographic orderings
notwithstanding, there are no convincing arguments against continuity.
Of course Rawls (1972) placed priority rules and the lexicographic
orderings on the objects of interest in his conception of justice that
come with them at the centre of his theory, but that has proved to have
been one of his most contentious moves. The richness and depth of his
analysis would not be lessened if small tradeoffs were admitted between
the objects of justice. And it's hard to find reasons against
monotonicity. Even Rawls, whose work was so pointed toward distributive
justice, insisted on monotonicity.
But it can be shown that any \(V\)-function that satisfies
continuity and monotonicity must have generation discounting built into
it. It would seem the real numbers are not rich enough to accommodate
infinite well-being streams in a manner that respects continuity and
monotonicity while awarding the well-beings of all generations equal
weight. Proof of the proposition is in Diamond (1965), and was
attributed by the author to Menahem Yaari. So we now introduce positive
well-being discounting in the \(V\)-function and formulate
*Ramsey Mark III*.
Return once again to the formulation where time is continuous. As
previously, we say a consumption stream \(\{C(t)\}\) is
*feasible* if it satisfies equation (1) with an initial capital
stock of \(K(0)\). *Ramsey Mark III* (Ramsey 1928,
553-555) is then:
"From the set of all feasible consumption streams, find that
\(\{C(t)\}\) which maximizes
\[
V(0) = \int^{\infty}\_0 [U(C(t))e^{-\delta t}]dt, \delta \gt 0."
\]
In *Mark III* the discount rate \(\delta\) is a positive
constant. That means the corresponding discount factor
\(e^{-\delta}\) is less than 1. The latter in turn can be
shown to mean that in a wide range of economic models
\(e^{-\delta t}\) tends to zero at so fast a rate
that *Mark III* has an answer.
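The effect of a positive \(\delta\) can be seen in the simplest possible case, an assumed constant well-being stream \(U(C(t)) = 1\): the undiscounted integral of Mark I grows without bound with the horizon, while the discounted integral of Mark III converges to \(1/\delta\).

```python
import math

def V(delta, T, n=100_000):
    """Midpoint Riemann sum of exp(-delta*t)*U over [0, T], with U = 1."""
    dt = T / n
    return sum(math.exp(-delta * (i + 0.5) * dt) for i in range(n)) * dt

delta = 0.03   # an illustrative constant discount rate, 3% a year
for T in (10, 100, 1000):
    print(T, round(V(0.0, T), 2), round(V(delta, T), 4))
# The undiscounted column grows linearly with the horizon T; the
# discounted column approaches 1/delta = 33.33... as T grows.
```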
Let \(\{C^\*(t)\}\) be the solution of *Ramsey Mark
III*. Heuristically it is useful to imagine that there is a DM at
each date. The measure of intergenerational well-being for the DM at
date \(t\) is the \(V(t)\) of equation (4). Notice that the ethical
views of the successive DMs are congruent with one another. There is
thus no need for the DMs to draw up an "intergenerational
contract". The DM at every date will want to choose the level of
consumption it deems to be optimum, aware that succeeding DMs will
choose in accordance with what she had planned for them. In modern
game theoretic parlance, Ramsey's optimum consumption stream
\(\{C^\*(t)\}\) is a "non-cooperative" (Nash) equilibrium
among the DMs.
## 4. The Ramsey Rule and Its Ramifications
We now construct an informal version of the variational argument
Ramsey used for determining \(\{C^\*(t)\}\) in *Mark
III*. Loosely speaking, the DMs require the marginal rate of
ethically indifferent substitution between consumption at any two brief
periods of time to equal the marginal rate at which consumption can be
transformed between those same pair of brief periods of time. Their
equality (i.e., the right balance from among the
"desirables" and the "feasibles") is a
*necessary* property of an optimum consumption stream.
Ramsey constructed a mathematical expression of the property, but
did not look for conditions that, taken together, are both necessary
and *sufficient*. We will use a simple example, which is also in
his paper, to show how a sufficient condition can be obtained.
### 4.1 The Variational Argument
Write \(dU/dC = U\_C\) and \(d^2 U/dC^2 = U\_{CC}.\) Let \(\{C(t)\}\) be
a feasible consumption stream. We first deduce a formal expression for
the marginal rate of ethically indifferent substitution between
consumption at any two brief periods of time. Suppose the intention is
to reduce consumption at some future date \(t\) by a small quantity
\(\Delta C(t)\) and raise consumption at a nearby date \(t+\Delta t\)
while keeping consumption at all other dates the same as in
\(\{C(t)\}\). The loss in well-being that would follow from the move
is \(e^{-\delta t}U\_{C(t)}\Delta C(t)\). We now seek to
determine the percentage increase in consumption that would be
required at \(t+\Delta t\) if \(V(0)\) is to remain unchanged; because
that's the marginal rate of ethically indifferent substitution
between consumption at \(t\) and consumption at \(t+\Delta t\). Denote
that rate by \(\varrho(t)\). Then \(\varrho(t)\) must be the
percentage rate at which discounted marginal well-being declines at
\(t\). It also follows that \(\varrho(t)\) is the rate the DM at \(t =
0\) would use to discount a unit of consumption at \(t\) so as to
bring it to the present (because that's what is meant by the
percentage rate at which discounted marginal well-being declines at
\(t\) - for a formal demonstration, see Dasgupta, 2008). Some
economists call \(\varrho(t)\) the *consumption rate of
interest* (Little and Mirrlees 1974), others call it
the *social rate of discount* (Arrow and Kurz
1970). \(\varrho(t)\) is a fundamental object in social cost-benefit
analysis.
Let \(\Delta t\) be vanishingly small. Then, by definition
\[\tag{7}
\varrho(t) = - [d(e^{-\delta t}U\_{C(t)})/dt]/e^{-\delta t}U\_{C(t)}
\]
So as to simplify the notation let \(g(C(t))\) denote the percentage
rate of growth in \(C(t)\) (i.e. \(g(C(t)) = [dC(t)/dt]/C(t)\), which
can be negative), and let \(\sigma(C)\) denote the elasticity of
marginal well-being (i.e., \(\sigma(C) = -CU\_{CC}/U\_C \gt
0)\). Equation (7) then simplifies to
\[\tag{8}
\varrho(t) = \delta + \sigma(C(t))g(C(t))
\]
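Equation (8) can be checked numerically for an assumed iso-elastic case (\(U\_C = C^{-\sigma}\), with \(C(t)\) growing at a constant rate \(g\)): the decline rate of discounted marginal well-being computed by finite differences from equation (7) matches \(\delta + \sigma g\). The parameter values below are illustrative only.

```python
import math

delta, sigma, g = 0.02, 1.5, 0.03   # illustrative parameter values
C0 = 1.0

def C(t):
    return C0 * math.exp(g * t)      # consumption grows at constant rate g

def discounted_MU(t):
    # e^{-delta t} * U_C, with iso-elastic marginal well-being C^{-sigma}
    return math.exp(-delta * t) * C(t) ** (-sigma)

def rho(t, h=1e-6):
    # equation (7): minus the proportional rate of change of discounted MU,
    # approximated by a central finite difference
    d = (discounted_MU(t + h) - discounted_MU(t - h)) / (2 * h)
    return -d / discounted_MU(t)

t = 7.0
print(round(rho(t), 6))              # numerical, from eq. (7)
print(round(delta + sigma * g, 6))   # analytical, eq. (8): 0.065
```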
Because \(\{C^\*(t)\}\) is by assumption the optimum, no feasible
deviation from \(\{C^\*(t)\}\) can increase \(V(0)\). That means the
consumption rate of interest \((\varrho(t))\) must equal the social
rate of return on investment \((F\_{K(t)})\) at every \(t\). To see
why, suppose in some vanishingly small interval of time \(F\_{K(t)} \gt
\varrho(t)\). Then \(V(0)\) could be increased by consuming a unit
less at \(t\) and enjoying the return of \((1+F\_{K(t)})\) soon after.
Alternatively, if \(F\_{K(t)} \lt \varrho(t)\), then \(V(0)\) could be increased
by consuming a unit more at \(t\) and reducing consumption soon after
by an amount equal to the return \((1+F\_{K(t)})\). But that means
the consumption rate of interest \(\varrho(t)\) equals the social rate
of return \(F\_{K(t)}\) along \(\{C^\*(t)\}\) at every date. Using
equation (8) we have,
\[\tag{9}
\delta + \sigma(C(t))g(C(t)) = F\_{K(t)}
\]
Equation (9) is the Ramsey Rule. It is a necessary condition for
optimality in *Ramsey Mark III* and is unarguably the most
famous equation in intertemporal welfare economics. The rule is a
formal statement of the requirement on \(\{C^\*(t)\}\) that
the marginal rate of substitution between consumption at two nearby
dates (the left-hand side of eq. (9)) equals the marginal rate of
transformation between consumption at those same pair of nearby dates
(the right-hand side of eq. (9)). It is simple to confirm that equation
(9) is invariant under positive affine transformations of the
\(U\)-function.
### 4.2 Incompleteness in Ramsey's Analysis
Presently we will specify a \(U\)-function for which \(\sigma\) is
independent of \(C\). For the moment we merely suppose that
\(\sigma\) is constant. In that case the Ramsey Rule reads as
\[\tag{10}
\delta + \sigma g(C(t)) = F\_{K(t)}
\]
In *Ramsey Mark III*, \(K(0)\) is given as an inheritance from
the past. That means \(F\_{K(0)}\) is given as an initial condition, it
is not a choice for the DM at \(t = 0\). Moreover \(\delta\) and
\(\sigma\) are parameters, both reflecting ethical values. The DM can
therefore determine \(g(C(0))\) from equation (10). But that's
the optimum percentage rate of growth in consumption at the initial
date. The Ramsey Rule gives the DM an equation for determining the
initial growth rate of consumption, but it does not say what the
initial *level* of consumption ought to be. Below we show by
way of an example that there are an infinity of feasible consumption
paths satisfying the Ramsey Rule. It follows that the DM at \(t = 0\)
needs a further condition to determine \(C^\*(0)\).
**Example 2** (the linear economy)
Assume
\[\begin{align}
\tag{11a} F(K) &= \mu K, \mu \gt 0 \\
\tag{11b} U(C) &= - C^{-(\sigma -1)}, \sigma \gt 1
\end{align}\]
From equation (11a) it follows that \(F\_K = \mu\), which means the
rate of return on investment is constant. From equation (11b) it
follows that \(\sigma\) is the elasticity of marginal well-being.
Notice also that \(U(C) \rightarrow -\infty\) as \(C \rightarrow 0\)
and that, under the chosen normalization of the \(U\)-function, \(U(C)
\rightarrow 0\) as \(C \rightarrow \infty\). Using equation (11a) in
equation (1) yields,
\[\tag{12}
\frac{dK(t)}{dt} = \mu K(t) - C(t)
\]
Write \(m = (\mu -\delta)/\sigma\). Applying equations (11a-b)
to equation (10) reduces the Ramsey Rule to
\[\tag{13}
\frac{dC(t)}{dt} = [(\mu - \delta)/\sigma]C(t) = mC(t)
\]
Equation (13) says that if \(\mu \lt \delta\), then \(C(t)\) declines to 0 at
an exponential rate. Empirically, the plausible case to consider is
\(\mu \gt \delta\), which is what we shall do here. It means that the
rate of return on investment \((\mu)\) exceeds the rate at which time
is discounted \((\delta)\). And that in turn means \(m \gt 0\).
Integrating equation (13) yields
\[\tag{14}
C(t) = C(0)e^{mt}
\]
Equation (14) says \(C(t)\) grows exponentially at the
rate \(m\). We reconfirm a point that was made previously, that
although equation (14) reveals the rate of growth optimum consumption
at the initial date (i.e., \(t = 0)\), it doesn't reveal the
initial level of consumption (i.e., \(C(0)\)). That's the
indeterminacy in the Ramsey Rule.
The simplest way to determine the optimum initial consumption,
\(C^\*(0)\), is to observe from equation (14) that if
\(C^\*(t)\) grows indefinitely at the rate \(m\), then \(K(t)\)
should be required to grow at that same rate.
The reason is that if the growth rate of \(K(t)\) were
less than \(m\), capital would be eaten into, which means the
stock would be exhausted in finite time. The economy would then cease
to exist (\(V(0)\) would be minus infinity if the future
trajectory of the economy were thus). If on the other hand the
growth rate of \(K(t)\) were to exceed \(m\), there
would be over-accumulation of capital, in the sense that consumption
would be lower at every date than it needs be. The situation would
resemble one where DM throws away a part of the initial capital stock
\(K(0)\) and then settles on a saving behaviour that satisfies the
Ramsey Rule.
Exponential growth in our linear economy (eq. 11a) tells us that
the saving rate should be constant. Let us define the *saving
rate*, \(s\), as the proportion of output (GDP) that is
invested at each instant. Then equation (1) can be re-written as
\[\tag{15}
\frac{dK(t)}{dt} = s\mu K(t)
\]
Equation (15) says that intended saving equals intended investment.
Integrating equation (15) yields
\[\tag{16}
K(t) = K(0)e^{s\mu t}
\]
But we are insisting that both \(K(t)\) and \(C(t)\) should grow at
the same rate. Equations (14) and (16) therefore imply
\[\tag{17}
m = \frac{\mu -\delta}{\sigma} = s\mu
\]
The saving rate in equation (17) is the optimum. So we write it as
\(s^\*\). Thus
\[\tag{18}
s^\* = \frac{m}{\mu} = \frac{\mu -\delta}{\sigma\mu} \lt 1
\]
Equations (16)-(18) tell us that the optimum rate of growth of
consumption, \(g^\*\), is
\[\tag{19}
g^\* = \frac{\mu -\delta}{\sigma} \gt 0
\]
Notice also that if \(\delta = 0\), equation (18) reduces to
\[\tag{20}
s^\* = \frac{1}{\sigma}
\]
Equation (20) offers as elegant a simplified answer as there could
be to the question with which Ramsey started his paper.
### 4.3 The Transversality Condition
The linear technology (eq. 11a) and the iso-elastic
\(U\)-function (eq. 11b) allowed us to recognise immediately
that if a consumption stream satisfying the Ramsey Rule is to be the
optimum, both capital and consumption should grow at the same
exponential rate, \(m\). Identifying a sufficient condition for
optimality in more general models is a lot more difficult. What we need
is a condition on the long-run features of a consumption stream
satisfying the Ramsey Rule that can ensure it is the optimum. von
Weizsacker (1965) showed that the required condition relates to the
long-run behaviour of the social value of capital associated with that
consumption stream. We now formalize the condition.
Let \(U\) be the unit of account. Consider a consumption stream
\(\{C(t)\}\). It follows that \(U\_{C(t)}\) is the social worth of a
marginal unit of consumption. Write \(P(t)\) for \(U\_{C(t)}\). \(P(t)\)
is called the (spot) *accounting price* of consumption. Because
\(e^{-\delta t}P(t)\) is the discounted value of \(P(t)\), it is
called the present-value accounting price of consumption. If
\(\{C(t)\}\) satisfies the Ramsey Rule in *Mark III*,
\(e^{-\delta t}P(t)\) is also the present-value accounting price of a
unit of capital stock. von Weizsacker (1965) showed that a sufficient
condition for the optimality of \(\{C(t)\}\) is \(e^{-\delta
t}P(t)K(t) \rightarrow A\) as \(t \rightarrow \infty\), where \(A\) is
a (finite) non-negative number. In words, a necessary and sufficient
condition for \(\{C(t)\}\) to be the optimum is (i) that it satisfies
the Ramsey Rule, and (ii) that the present-value of the
economy's stock of capital is finite. Condition (ii), which is
widely known as the "transversality condition," eliminates
those feasible consumption streams that satisfy the Ramsey Rule but
along which there is excessive saving. A simple calculation confirms
that in Example 2 the transversality condition is satisfied if the
saving rate is \(s^\*\) (eq. 18).
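That calculation can be sketched directly. Along the optimum of Example 2, \(C(t)\) and \(K(t)\) both grow at rate \(m\), while \(P(t) = U\_{C(t)} = (\sigma - 1)C(t)^{-\sigma}\), so \(e^{-\delta t}P(t)K(t)\) is proportional to \(e^{-(\delta + (\sigma - 1)m)t}\), which tends to zero when \(\sigma \gt 1\), \(\delta \gt 0\), and \(m \gt 0\). A numerical check with illustrative parameters:

```python
import math

mu, delta, sigma = 0.05, 0.03, 2.0   # illustrative parameters, sigma > 1
m = (mu - delta) / sigma             # optimum growth rate (eq. 13)
s_star = m / mu                      # optimum saving rate (eq. 18)
K0 = 1.0
C0 = (1 - s_star) * mu * K0          # consumption is the unsaved output

def K(t): return K0 * math.exp(m * t)
def C(t): return C0 * math.exp(m * t)

def P(t):
    # U(C) = -C^{-(sigma-1)}  =>  U_C = (sigma - 1) * C^{-sigma}
    return (sigma - 1) * C(t) ** (-sigma)

# Present value of the capital stock along the optimum:
for t in (0, 50, 200, 1000):
    print(t, math.exp(-delta * t) * P(t) * K(t))
# The product decays like exp(-(delta + (sigma-1)*m)*t) and tends to 0,
# so the transversality condition holds with A = 0.
```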
### 4.4 Numerical Estimates of the Optimum Rate of Saving
Equation (18) says that \(s^\*\) is an increasing function of the
return on investment \((\mu)\), a decreasing function of the time rate of
discount \((\delta)\), and a decreasing function of the elasticity of
marginal well-being \((\sigma)\). Each of these properties is intuitively
obvious:
(1) The higher is the rate of return on investment \((\mu)\), the
greater is the gain to future generations from a marginal increase in
saving by initial generations. That says the optimum rate of saving
should be an increasing function of \(\mu\), other things equal.
(2) The larger is the value of the time discount rate \((\delta)\)
chosen by DM, the lower is the weight that she awards to the well-being
of future generations. That implies higher optimum consumption levels
for early generations (Sect. 2.1), which in turn implies that the
optimum rate of saving is lower, other things equal.
(3) As the return on investment is positive \((\mu \gt 0)\), the arrow
of time displays a bias in favour of future generations (Sect. 2.1).
But the larger is the chosen value of \(\sigma\), the more DM displays
concerns over equity in consumption across the generations. Therefore,
the larger is that concern, the higher is the optimum rate of
consumption to be enjoyed by initial generations. So we should expect
the optimum rate of saving to be a decreasing function of \(\sigma\),
other things equal.
It is instructive to consider stylized figures for the parameters on
the right-hand-sides of equations (18) and (19), respectively. Although
stylized, they are figures for the pair of ethical parameters \(\sigma\)
and \(\delta\) that economists who have written on the economics of
climate change have assumed in their work. To be sure, the welfare
economics of climate change has demanded more complicated models than
the model that is represented in equations (1) and (11a), but as we
confirm below, it has not offered any additional *theoretical*
insights. In what follows we take a year to be the unit of time and
assume that \(\mu = 0.05\) (i.e., 5% a year). Along the optimum, the
consumption rate of interest equals the rate of return on investment
(the Ramsey Rule), which means that the optimum consumption rate of
interest equals a constant 5% a year.
A figure of 5% a year for \(\mu\) implies a capital-output ratio
\((1/\mu)\) of 20 years, which is far higher than the estimates of
capital-output ratios from inter-industry studies that economists in
various parts of the world have arrived at (Behrman 2001); a
representative figure for \(1/\mu\) in that literature is 3 years. But
their estimates have been based on a definition of
"capital" that is confined to "produced"
capital, such as factories, roads, ports, and buildings. Human capital
(education, health, knowledge) is missing from them, as is natural
capital (ecosystems, sub-soil resources). Ramsey's model, as
encapsulated in equation (11a), embraces all forms of capital goods. No
doubt his formulation requires a heroic (read, impossible!) feat of
aggregation, but when all capital goods that enter production are taken
into account, we should expect an aggregate capital-output ratio (which
we should call the (inclusive) wealth-output ratio), to be a lot higher
than 3 years; perhaps even higher than 20 years (Arrow et al., 2012,
2013). Large categories of capital goods are absent from the national
economic accounts that inform economists' understanding of
production and consumption possibilities (Dasgupta, 2019). It would
thus seem there is still a long way to go before we can reach a good
approximation of what we should bequeath to our descendants.
**Example 3** (taken from the economics of climate
change)
We now turn our attention to the values of the two ethical parameters
in equation (11b) that were chosen by three economists in their study
of the economics of climate change.
\[\begin{align}
\tag\*{Cline (1992)} \sigma = 1.5 \quad &\text{and} \quad \delta = 0 \\
\tag\*{Nordhaus (1994)} \sigma = 1 \quad &\text{and} \quad \delta = 0.03 \text{ (3% a year)} \\
\tag\*{Stern (2007)} \sigma = 1 \quad &\text{and} \quad \delta = 0.001 \text{ (0.1% a year)}
\end{align}\]
(NB: \(\sigma = 1\) corresponds to the logarithmic well-being function,
that is, \(U(C) = \log C\), and can be obtained as
a limit of the functional form of \(U(C)\) in equation
(11b) as \(\sigma \rightarrow 1\).)
We impose those parameter values to find that the optimum saving
rate \(s^\*\) (eq. 18) and the optimum rate of growth of consumption
(eq. 19) are, in turn:
\[\begin{align}
\tag{21a} s^\* = 67\% \quad &\text{and} \quad g^\* = 3.3\% \text{ a year (Cline)} \\
\tag{21b} s^\* = 40\% \quad &\text{and} \quad g^\* = 2.0\% \text{ a year (Nordhaus)} \\
\tag{21c} s^\* = 98\% \quad &\text{and} \quad g^\* = 4.9\% \text{ a year (Stern)}
\end{align}\]
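The figures in (21a-c) follow directly from equations (18) and (19) with \(\mu = 0.05\):

```python
mu = 0.05  # assumed rate of return on investment, 5% a year

def optimum(sigma, delta):
    """Optimum saving rate (eq. 18) and consumption growth rate (eq. 19)."""
    s_star = (mu - delta) / (sigma * mu)
    g_star = (mu - delta) / sigma
    return s_star, g_star

for name, sigma, delta in [("Cline", 1.5, 0.0),
                           ("Nordhaus", 1.0, 0.03),
                           ("Stern", 1.0, 0.001)]:
    s, g = optimum(sigma, delta)
    print(f"{name}: s* = {100 * s:.0f}%, g* = {100 * g:.1f}% a year")
# Cline: s* = 67%, g* = 3.3% a year
# Nordhaus: s* = 40%, g* = 2.0% a year
# Stern: s* = 98%, g* = 4.9% a year
```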
### 4.5 Commentary
A national saving rate of 40% (eq. 21b) is no doubt high by the
standards of contemporary western economies, but there are countries
that in recent years have achieved 40-45% saving rates (China is
a prominent example). A figure of 67% for \(s^\*\) (eq. 21a) is higher
than the saving rate in any country, but is not beyond belief. The
truly outlandish figure is 98% (eq. 21c). It is outlandish
especially because the figure is the optimum saving rate no matter how
small \(K(0)\) happens to be. Admittedly, the model here
(eqs. 11a-b) is phenomenally stylized, but it does bring out
sharply the observation of Koopmans (1965), that it is foolish to
assume \(\delta = 0\) (or close to 0) without first checking its
possible consequences for the distribution of well-being across the
generations.
Equation (19) has shown that the optimum growth rate of consumption
is bounded above by \(\mu\), which explains why \(g^\*\) is less than
5% a year for each of the three parametric specifications we have
considered. The specifications come from three studies in the
welfare economics of global climate change, in which the authors worked
with models that are a lot more complex than Ramsey's. And yet
their findings are exactly what his formulation would point to
(Dasgupta 2008), namely, that other things equal, the lower is the
chosen value of \(\delta\) and/or the larger is the damage to future
well-being that is expected to be caused by global climate change, the
greater is the investment level DM should recommend to avert climate
change or soften the effects of that change on human well-being. The
often shrill debate (e.g., Nordhaus 2007) on the extent to which
global investment should be directed at reducing the deleterious
effects of climate change was spurred by differences in model
specification among climate-change economists.
The linear technology (eq. 11a) and the iso-elastic
\(U\)-function (eq. 11b), when taken together, have offered
deep insights even though we have restricted the discussion to
pen-and-paper calculations here. The functional forms are not
believable; nevertheless, Ramsey made use of them. His paper showed
that unbelievably simplified models, provided their construction is
backed by strong intuition, can illuminate questions that are
seemingly impossible to frame, let alone to answer
quantitatively. That has been Ramsey's enduring gift to
theoretical economics.
## 1. The Concept
Popular use of the term 'well-being' usually relates to
health. A doctor's surgery may run a 'Women's
Well-being Clinic', for example. Philosophical use is broader,
but related, and amounts to the notion of how well a person's
life is going for that person. A person's well-being is what is
'good for' them. Health, then, might be said to be a
constituent of my well-being, but it is not plausibly taken to be all
that matters for my well-being. One correlate term worth noting here
is 'self-interest': my self-interest is what is in the
interest of myself, and not others.
The philosophical use of the term also tends to encompass the
'negative' aspects of how a person's life goes for
them. So we may speak of the well-being of someone who is, and will
remain in, the most terrible agony: their well-being is negative, and
such that their life is worse for them than no life at all. The same
is true of closely allied terms, such as 'welfare', which
covers how a person is faring as a whole, whether well or badly, or
'happiness', which can be understood--as it sometimes
was by the classical utilitarians from Jeremy Bentham onwards, for
example--to be the balance between good and bad things in a
person's life. But note that philosophers also use such terms in
the more standard 'positive' way, speaking of
'ill-being', 'ill-faring', or, of course,
'unhappiness' to capture the negative aspects of
individuals' lives. Most philosophical discussion has been of
'goods' rather than 'bads', but recently more
interest has been shown in the latter (e.g. Kagan 2015; Bradford
2021).
'Happiness' is often used, in ordinary life, to refer to a
short-lived state of a person, frequently a feeling of contentment:
'You look happy today'; 'I'm very happy for
you'. Philosophically, its scope is more often wider,
encompassing a whole life. And in philosophy it is possible to speak
of the happiness of a person's life, or of their happy life,
even if that person was in fact usually pretty miserable. The point is
that some good things in their life made it a happy one, even though
they lacked contentment. But this usage is uncommon, and may cause
confusion.
Over the last few decades, so-called 'positive psychology'
has hugely increased the attention paid by psychologists and other
scientists to the notion of 'happiness'. Such happiness is
usually understood in terms of contentment or
'life-satisfaction', and is measured by means such as
self-reports or daily questionnaires. Is positive psychology about
well-being? As yet, conceptual distinctions are not sufficiently clear
within the discipline. But it is probably fair to say that many of
those involved, as researchers or as subjects, are assuming that
one's life goes well to the extent that one is contented with
it--that is, that some kind of hedonistic account of well-being
is correct. Some positive psychologists, however, explicitly reject
hedonistic theories in favour of Aristotelian or
'eudaimonist' accounts of well-being, which are a version
of the 'objective list' theory of well-being discussed
below. A leader in the field, Martin Seligman, for example, has
suggested that, rather than happiness, positive psychology should
concern itself with positive emotion, engagement, relationships,
meaning and accomplishment ('PERMA') (Seligman 2011).
When discussing the notion of what makes life good for the individual
living that life, it is preferable to use the term
'well-being' instead of 'happiness'. For we
want at least to allow conceptual space for the possibility that, for
example, the life of a plant may be 'good for' that plant.
And speaking of the happiness of a plant would be stretching language
too far. (An alternative here might be 'flourishing',
though this might be taken to bias the analysis of human well-being in
the direction of some kind of natural teleology.) In that respect, the
Greek word commonly translated 'happiness'
(*eudaimonia*) might be thought to be superior. But, in fact,
*eudaimonia* seems to have been restricted not only to
conscious beings, but to human beings: non-human animals cannot be
*eudaimon*. This is because *eudaimonia* suggests that
the gods, or fortune, have favoured one, and the idea that the gods
could care about non-humans would not have occurred to most
Greeks.
It is occasionally claimed that certain ancient ethical theories, such
as Aristotle's, result in the collapse of the very notion of
well-being. On Aristotle's view, if you are my friend, then my
well-being is closely bound up with yours. It might be tempting, then,
to say that 'your' well-being is 'part' of
mine, in which case the distinction between what is good for me and
what is good for others has broken down. But this temptation should be
resisted. Your well-being concerns how well your life goes for you,
and we can allow that my well-being depends on yours without
introducing the confusing notion that my well-being is constituted by
yours. There are signs in Aristotelian thought of an expansion of the
subject or owner of well-being. A friend is 'another
self', so that what benefits my friend benefits me. But this
should be taken either as a metaphorical expression of the dependence
claim, or as an identity claim which does not threaten the notion of
well-being: if you really are the same person as I am, then of course
what is good for you will be what is good for me, since there is no
longer any metaphysically significant distinction between you and
me.
Well-being is a kind of value, sometimes called 'prudential
value', to be distinguished from, for example, aesthetic value
or moral value. What marks it out is the notion of 'good
for'. The serenity of a Vermeer painting, for instance, is a
kind of goodness, but it is not 'good for' the painting.
It may be good for us to contemplate such serenity, but contemplating
serenity is not the same as the serenity itself. Likewise, my giving
money to a development charity may have moral value, that is, be
morally good. And the effects of my donation may be good for others.
But it remains an open question whether my being morally good is good
for me; and, if it is, its being good for me is still conceptually
distinct from its being morally good. A great deal of attention
has been paid in philosophy to the issue of moral
'normativity', less so to that of prudential normativity
(recent exceptions are Dorsey 2021; Fletcher 2021).
The most common view of well-being is 'invariabilism',
according to which there is a single account of well-being for all
individuals for whom life can go well or badly (Lin 2018). Some have
argued, however, that we should develop a variabilist view, according
to which, for example, there might be one theory of well-being for
adults and another for children (see Skelton 2015). According to
Benatar (2006), existence for any individual with well-being is always
of overall negative value. On well-being and death, see Bradley
(2006).
## 2. Moore's Challenge
There is something mysterious about the notion of 'good
for'. Consider a possible world that contains only a single
item: a stunning Vermeer painting. Leave aside any doubts you might
have about whether paintings can be good in a world without viewers,
and accept for the sake of argument that this painting has aesthetic
value in that world. It seems intuitively plausible to claim that the
value of this world is constituted solely by the aesthetic value of
the painting. But now consider a world which contains one individual
living a life that is good for them. How are we to describe the
relationship between the value of this world, and the value of the
life lived in it for the individual? Are we to say that the world has
a value at all? How can it, if the only value it contains is
'good for' as opposed to just 'good'? And yet
we surely do want to say that this world is better ('more
good') than some other empty world. Well, should we say that the
world is good, and is so because of the good it contains
'for' the individual? This fails to capture the idea that
there is in fact nothing of value in this world except what is good
for the individual.
Thoughts such as these led G.E. Moore to object to the very idea of
'good for' (Moore 1903, pp. 98-9). Moore argued that
the idea of 'my own good', which he saw as equivalent to
what is 'good for me', makes no sense. When I speak of,
say, pleasure as what is good for me, he claimed, I can mean only
either that the pleasure I get is good, or that my getting it is good.
Nothing is added by saying that the pleasure constitutes my good, or
is good for me.
But the distinctions I drew between different categories of value
above show that Moore's analysis of the claim that my own good
consists in pleasure is too narrow. Indeed Moore's argument
rests on the very assumption that it seeks to prove: that only the
notion of 'good' is necessary to make all the evaluative
judgements we might wish to make. The claim that it is good that I get
pleasure is, logically speaking, equivalent to the claim that the
world containing the single Vermeer is good. It is, so to speak,
'impersonal', and leaves out of account the special
feature of the value of well-being: that it is good for
individuals.
One way to respond both to Moore's challenge, and to the puzzles
above, is to try, when appropriate, to do without the notion of
'good' (see Kraut 2011) and make do with 'good
for', alongside the separate and non-evaluative notion of
reasons for action. Thus, the world containing the single individual
with a life worth living, might be said to contain nothing good
*per se*, but a life that is good for that individual. And this
fact may give us a reason to bring about such a world, given the
opportunity.
## 3. Scanlon's Challenge
Moore's book was published in Cambridge, England, at the
beginning of the twentieth century. At the end of the same century, a
book was published in Cambridge, Mass., which also posed some serious
challenges to the notion of well-being: *What We Owe to Each
Other*, by T.M. Scanlon.
Moore's ultimate aim in criticizing the idea of 'goodness
for' was to attack egoism. Likewise, Scanlon has an ulterior
motive in objecting to the notion of well-being--to attack
so-called 'teleological' or end-based theories of ethics,
in particular, utilitarianism, which in its standard form requires us
to maximize well-being. But in both cases the critiques stand
independently.
One immediately odd aspect of Scanlon's position that
'well-being' is an otiose notion in ethics is that he
himself seems to have a view on what well-being is. It involves, he
believes, among other things, success in one's rational aims,
and personal relations. But Scanlon claims that his view is not a
'theory of well-being', since a theory must explain what
unifies these different elements, and how they are to be compared.
And, he adds, no such theory is ever likely to be available, since
such matters depend so much on context.
Scanlon does, however, implicitly make a claim about what unites these
values: they are all constituents of well-being, as opposed to other
kinds of value, such as aesthetic or moral. Nor is it clear why
Scanlon's view of well-being could not be developed so as to
assist in making real-life choices between different values in
one's own life.
Scanlon suggests that we often make claims about what is good in our
lives without referring to the notion of well-being, and indeed that
it would often be odd to do so. For example, I might say, 'I
listen to Alison Krauss's music because I enjoy it', and
that will be sufficient. I do not need to go on to say, 'And
enjoyment adds to my well-being'.
But this latter claim sounds peculiar only because we already
*know* that enjoyment makes a person's life better for
them. And in some circumstances such a claim would anyway not be odd:
consider an argument with someone who claims that aesthetic experience
is worthless, or with an ascetic. Further, people do use the notion of
well-being in practical thinking. For example, if I am given the
opportunity to achieve something significant, which will involve
considerable discomfort over several years, I may consider whether,
from the point of view of my own well-being, the project is worth
pursuing.
Scanlon argues also that the notion of well-being, if it is to be
philosophically acceptable, ought to provide a 'sphere of
compensation'--a context in which it makes sense to say,
for example, that I am losing one good in my life for the sake of gain
over my life as a whole. And, he claims, there is no such sphere. For
Scanlon, giving up present comfort for the sake of future health
'feels like a sacrifice'.
But this does not chime with my own experience. When I donate blood,
this feels to me like a sacrifice, albeit a minor one. But when I
visit the dentist, it feels to me just as if I am weighing present
pains against potential future pains. And we can weigh different
components of well-being against one another. Consider a case in which
you are offered a job which is highly paid but many miles away from
your friends and family.
Scanlon denies that we need an account of well-being to understand
benevolence, since we do not have a general duty of benevolence, but
merely duties to benefit others in specific ways, such as to relieve
their pain. But, from the philosophical perspective, it may be quite
useful to use the heading of 'benevolence' in order to
group such duties. And, again, comparisons may be important: if I have
several *pro tanto* duties of benevolence, not all of which can
be fulfilled, I shall have to weigh the various benefits I can provide
against one another. And here the notion of well-being will again come
into play.
Further, if morality includes so-called 'imperfect' duties
to benefit others, that is, duties that allow the agent some
discretion as to when and how to assist, the lack of any overarching
conception of well-being is likely to make the fulfillment of such
duties problematic.
## 4. Theories of Well-being
### 4.1 Hedonism
On one view, human beings always act in pursuit of what they think
will give them the greatest balance of pleasure over pain. This is
'psychological hedonism', and will not be my concern here.
Rather, I intend to discuss 'evaluative hedonism' or
'prudential hedonism', according to which well-being
consists in the greatest balance of pleasure over pain.
This view was first, and perhaps most famously, expressed by Socrates
and Protagoras in the Platonic dialogue, *Protagoras* (Plato
1976 [C4 BCE], 351b-c). Jeremy Bentham, one of the most
well-known of the more recent hedonists, begins his *Introduction
to the Principles of Morals and Legislation* thus: 'Nature
has placed mankind under the governance of two sovereign masters,
*pain* and *pleasure*. It is for them alone to point out
what we ought to do'.
In answer to the question, 'What does well-being consist
in?', then, the hedonist will answer, 'The greatest
balance of pleasure over pain'. We might call this
*substantive hedonism*. A complete hedonist position will
involve also *explanatory hedonism*, which consists in an
answer to the following question: 'What *makes* pleasure
good, and pain bad?', that answer being, 'The pleasantness
of pleasure, and the painfulness of pain'. Consider a
substantive hedonist who believed that what makes pleasure good for us
is that it fulfills our nature. This theorist is not an explanatory
hedonist.
Hedonism--as is demonstrated by its ancient roots--has long
seemed an obviously plausible view. Well-being, what is good
*for* me, might be thought to be naturally linked to what seems
good *to* me, and pleasure does, to most people, seem good. And
how could anything else benefit me except in so far as I enjoy
it?
The simplest form of hedonism is Bentham's, according to which
the more pleasantness one can pack into one's life, the better
it will be, and the more painfulness one encounters, the worse it will
be. How do we measure the value of the two experiences? The two
central aspects of the respective experiences, according to Bentham,
are their duration, and their intensity.
Bentham tended to think of pleasure and pain as a kind of sensation,
as the notion of intensity might suggest. One problem with this kind
of hedonism, it has often been claimed, is that there does not appear
to be a single common strand of pleasantness running through all the
different experiences people enjoy, such as eating hamburgers, reading
Shakespeare, or playing water polo. Rather, it seems, there are
certain experiences we want to continue, and we might be prepared to
call these--for philosophical purposes--pleasures (even
though some of them, such as diving in a very deep and narrow cave,
for example, would not normally be described as pleasurable).
Hedonism could survive this objection merely by incorporating whatever
view of pleasure was thought to be plausible. A more serious objection
is to the evaluative stance of hedonism itself. Thomas Carlyle, for
example, described the hedonistic component of utilitarianism as the
'philosophy of swine', the point being that simple
hedonism places all pleasures on a par, whether they be the lowest
animal pleasures of sex or the highest of aesthetic appreciation. One
might make this point with a thought experiment. Imagine that you are
given the choice of living a very fulfilling human life, or that of a
barely sentient oyster, which experiences some very low-level
pleasure. Imagine also that the life of the oyster can be as long as
you like, whereas the human life will be of eighty years only. If
Bentham were right, there would have to be a length of oyster life
such that you would choose it in preference to the human. And yet many
say that they would choose the human life in preference to an oyster
life of any length.
Now this is not a knockdown argument against simple hedonism. Indeed
some people are ready to accept that at some length or other the
oyster life becomes preferable. But there is an alternative to simple
hedonism, outlined famously by J.S. Mill, using his distinction
(itself influenced by Plato's discussion of pleasure at the end
of his *Republic* (Plato 1992 [C4 BCE], 582d-583a)) between
'higher' and 'lower' pleasures (1863 [1998],
ch. 2). Mill added a third property to the two determinants of value
identified by Bentham, duration and intensity. To distinguish it from
these two 'quantitative' properties, Mill called his third
property 'quality'. The claim is that some pleasures, by
their very nature, are more valuable than others. For example, the
pleasure of reading Shakespeare, by its very nature, is more valuable
than any amount of basic animal pleasure. And we can see this, Mill
suggests, if we note that those who have experienced both types, and
are 'competent judges', will make their choices on this
basis.
A long-standing objection to Mill's move here has been to claim
that his position can no longer be described as hedonism proper (or
what I have called 'explanatory hedonism'). If higher
pleasures are higher because of their nature, that aspect of their
nature cannot be pleasantness, since that could be determined by
duration and intensity alone. And Mill anyway speaks of properties
such as 'nobility' as adding to the value of a pleasure.
Now it has to be admitted that Mill is sailing close to the wind here.
But there is logical space for a hedonist position which allows
properties such as nobility to determine pleasantness, and insists
that only pleasantness determines value. But one might well wonder how
nobility could affect pleasantness, and why Mill did not just come out
with the idea that nobility is itself a good-making property.
Above I noted the plausibility of the claim that nothing can benefit
me if I don't enjoy it. Some non-hedonists have denied this,
while accepting the so-called 'experience requirement' on
well-being. They suggest that what matters is valuable
consciousness, and consciousness can be valuable for non-hedonic
reasons (see Kraut 2018; Kriegel 2019). But there is a yet more weighty
objection both to hedonism and to the view that well-being
consists only in conscious states: the so-called 'experience
machine'. Imagine that I have a machine that I could plug you
into for the rest of your life. This machine would give you
experiences of whatever kind you thought most valuable or
enjoyable--writing a great novel, bringing about world peace,
attending an early Rolling Stones' gig. You would not know you
were on the machine, and there is no worry about its breaking down or
whatever. Would you plug in? Would it be wise, from the point of your
own well-being, to do so? Robert Nozick thinks it would be a big
mistake to plug in: 'We want to do certain things ... we
want to be a certain way ... plugging into an experience machine
limits us to a man-made reality' (Nozick 1974, p. 43).
One can make the machine sound more palatable, by allowing that
genuine choices can be made on it, that those plugged in have access
to a common 'virtual world' shared by other machine-users,
a world in which 'ordinary' communication is possible, and
so on. But this will not be enough for many anti-hedonists. A further
line of response begins from so-called 'externalism' in
the philosophy of mind, according to which the content of mental
states is determined by facts external to the experiencer of those
states. Thus, the experience of *really* writing a great novel
is quite different from that of *apparently* writing a great
novel, even though 'from the inside' they may be
indistinguishable. But this is once again sailing close to the wind.
If the world can affect the very content of my experience without my
being in a position to be aware of it, why should it not directly
affect the value of my experience?
The strongest tack for hedonists to take is to accept the apparent
force of the experience machine objection, but to insist that it rests
on 'common sense' intuitions, the place in our lives of
which may itself be justified by hedonism. This is to adopt a strategy
similar to that developed by 'two-level utilitarians' in
response to alleged counter-examples based on common-sense morality.
The hedonist will point out the so-called 'paradox of
hedonism', that pleasure is most effectively pursued indirectly.
If I consciously try to maximize my own pleasure, I will be unable to
immerse myself in those activities, such as reading or playing games,
which do give pleasure. And if we believe that those activities are
valuable independently of the pleasure we gain from engaging in them,
then we shall probably gain more pleasure overall.
These kinds of stand-off in moral philosophy are unfortunate, but
should not be brushed aside (for a balanced discussion of the
experience machine, see Lin 2016). They raise questions concerning the
epistemology of ethics, and the source and epistemic status of our
deepest ethical beliefs, which we are further from answering than many
would like to think. Certainly the current trend of dismissing
hedonism on the basis of a quick run-through of the experience machine
objection is not methodologically sound.
### 4.2 Desire Theories
The experience machine is one motivation for the adoption of a desire
theory (for a good introduction to the view, see Heathwood 2016;
2019). When you are on the machine, many of your central desires are
likely to remain unfulfilled. Take your desire to write a great novel.
You may believe that this is what you are doing, but in fact it is
just a hallucination. And what you want, the argument goes, is to
write a great novel, not the experience of writing a great novel.
Historically, however, the reason for the current dominance of desire
theories lies in the emergence of welfare economics. Pleasure and pain
are inside people's heads, and also hard to
measure--especially when we have to start weighing different
people's experiences against one another. So economists began to
see people's well-being as consisting in the satisfaction of
preferences or desires, the content of which could be revealed by the
choices of their possessors. This made possible the ranking of
preferences, the development of 'utility functions' for
individuals, and methods for assessing the value of
preference-satisfaction (using, for example, money as a standard).
The simplest version of a desire theory one might call the *present
desire* theory, according to which someone is made better off to
the extent that their current desires are fulfilled. This theory does
succeed in avoiding the experience machine objection. But it has
serious problems of its own. Consider the case of the *angry
adolescent*. This boy's mother tells him he cannot attend a
certain nightclub, so the boy holds a gun to his own head, wanting to
pull the trigger and retaliate against his mother. Recall that the
scope of theories of well-being should be the whole of a life. It is
implausible that the boy will make his life go as well as possible by
pulling the trigger. We might perhaps interpret the simple desire
theory as a theory of well-being-at-a-particular-time. But even
then it seems unsatisfactory. From whatever perspective, the boy would
be better off if he put the gun down.
We should move, then, to a *comprehensive desire* theory,
according to which what matters to a person's well-being is the
overall level of desire-satisfaction in their life as a whole. A
*summative* version of this theory suggests, straightforwardly
enough, that the more desire-fulfilment in a life the better. But it
runs into Derek Parfit's case of *addiction* (1984, p.
497). Imagine that you can start taking a highly addictive drug, which
will cause a very strong desire in you for the drug every morning.
Taking the drug will give you no pleasure; but not taking it will
cause you quite severe suffering. There will be no problem with the
availability of the drug, and it will cost you nothing. But what
reason do you have to take it?
A *global* version of the comprehensive theory ranks desires,
so that desires about the shape and content of one's life as a
whole are given some priority. So, if I prefer not to become a drug
addict, that will explain why it is better for me not to take
Parfit's drug. But now consider the case of the *orphan
monk*. This young man began training to be a monk at the earliest
age, and has lived a very sheltered life. He is now offered three
choices: he can remain as a monk, or become either a cook or a
gardener outside the monastery, at a grange. He has no conception of
the latter alternatives, so chooses to remain a monk. But surely it
might be possible that his life would be better for him were he to
live outside?
So we now have to move to an *informed desire* version of the
comprehensive theory (Sobel 1994). According to the informed desire
account, the best life is the one I would desire if I were fully
informed about all the (non-evaluative) facts. But now consider a case
made famous by John Rawls (1971: 432; see Stace 1944: 238):
the *grass-counter*. Imagine a brilliant Harvard mathematician,
fully informed about the options available to her, who develops an
overriding desire to count the blades of grass on the lawns of
Harvard. Like the experience machine, this case is another example of
philosophical 'bedrock'. Some will believe that, if she
really is informed, and not suffering from some neurosis, then the
life of grass-counting will be the best for her.
Note that on the informed desire view the subject must actually have
the desires in question for well-being to accrue to her. If it were
true of me that, were I fully informed, I would desire some object
for which at present I have no desire, giving me that object now would
not benefit me. Any theory claiming that it would amounts to an
objective list theory with a desire-based epistemology.
All these problem cases for desire theories appear to be symptoms of a
more general difficulty. Recall again the distinction between
substantive and formal theories of well-being. The former state the
constituents of well-being (such as pleasure), while the latter state
what makes these things good for people (pleasantness, for example).
Substantively, a desire theorist and a hedonist may agree on what
makes life good for people: pleasurable experiences. But formally they
will differ: the hedonist will refer to pleasantness as the
good-maker, while the desire theorist must refer to
desire-satisfaction. (It is worth pointing out here that if one
characterizes pleasure as an experience the subject wants to continue,
the distinction between hedonism and desire theories becomes quite
hard to pin down.)
The idea that desire-satisfaction is a 'good-making
property' is somewhat odd. As Aristotle says
(*Metaphysics*, 1072a, tr. Ross): 'desire is consequent
on opinion rather than opinion on desire'. In other words, we
desire things, such as writing a great novel, because we think those
things are independently good; we do not think they are good because
they will satisfy our desire for them.
### 4.3 Objective List Theories
The threefold distinction I am using between different theories of
well-being has become standard in contemporary ethics (Parfit 1984:
app. I). There are problems with it, however, as with many
classifications, since it can blind one to other ways of
characterizing views (see Kagan 1992; Hurka 2019). Objective list
theories are usually understood as theories which list items
constituting well-being that consist neither merely in pleasurable
experience nor in desire-satisfaction. Such items might include, for
example, knowledge or friendship. But it is worth remembering, for
example, that hedonism might be seen as one kind of 'list'
theory, and all list theories might then be opposed to desire theories
as a whole.
What should go on the list (Moore 2000)? It is important that every
good should be included. As Aristotle put it: 'We take what is
self-sufficient to be that which on its own makes life worthy of
choice and lacking in nothing. We think happiness to be such, and
indeed the thing most of all worth choosing, not counted as just one
thing among others' (*Nicomachean Ethics*, 1097b, tr.
Crisp). In other words, if you claim that well-being consists only in
friendship and pleasure, I can show your list to be unsatisfactory if
I can demonstrate that knowledge is also something that makes people
better off.
What is the 'good-maker', according to objective list
theorists? This depends on the theory. One, influenced by Aristotle
and recently developed by Thomas Hurka (1993; see Bradford 2017), is
*perfectionism*, according to which what makes things
constituents of well-being is their perfecting human nature. (On the
history of modern perfectionism, see Brink 2019.) If it is part
of human nature to acquire knowledge, for example, then a
perfectionist should claim that knowledge is a constituent of
well-being. But there is nothing to prevent an objective list
theorist's claiming that all that the items on her list have in
common is that each, in its own way, advances well-being.
How do we decide what goes on the list? All we can work on is the
deliverance of reflective judgement--intuition, if you like. But
one should not conclude from this that objective list theorists are,
because they are intuitionist, less satisfactory than the other two
theories. For those theories too can be based only on reflective
judgement. Nor should one think that intuitionism rules out argument.
Argument is one way to bring people to see the truth. Further, we
should remember that intuitions can be mistaken. Indeed, as suggested
above, this is the strongest line of defence available to hedonists:
to attempt to undermine the evidential weight of many of our natural
beliefs about what is good for people.
One common objection to objective list theories is that they are
elitist, since they appear to be claiming that certain things
are good for people, even if those people will not enjoy them, and do
not even want them. One strategy here might be to adopt a
'hybrid' account, according to which certain goods do
benefit people independently of pleasure and desire-satisfaction, but
only when they do in fact bring pleasure and/or satisfy desires.
Another would be to bite the bullet, and point out that a theory could
be both elitist and true.
It is also worth pointing out that objective list theories need not
involve any kind of objectionable authoritarianism or perfectionism.
First, one might wish to include autonomy on one's list,
claiming that the informed and reflective living of one's own
life for oneself itself constitutes a good. Second, and perhaps more
significantly, one might note that any theory of well-being in itself
has no direct moral implications. There is nothing logically to
prevent one's holding a highly elitist conception of
well-being alongside a strict liberal view that forbade paternalistic
interference of any kind with a person's own life (indeed, on
some interpretations, J.S. Mill's position is close to
this).
One not implausible view, if desire theories are indeed mistaken in
their reversal of the relation between desire and what is good, is
that the debate is really between hedonism and objective list
theories. And, as suggested above, what is most at stake here is the
issue of the epistemic adequacy of our beliefs about well-being. The
best way to resolve this matter would consist, in large part at least,
in returning once again to the experience machine objection, and
seeking to discover whether that objection really stands.
## 5. Well-being and Morality
### 5.1 Welfarism
Well-being obviously plays a central role in any moral theory. A
theory which said that it just does not matter would be given no
credence at all. Indeed, it is very tempting to think that well-being,
in some ultimate sense, is all that can matter morally. Consider, for
example, Joseph Raz's 'humanistic principle':
'the explanation and justification of the goodness or badness of
anything derives ultimately from its contribution, actual or possible,
to human life and its quality' (Raz 1986, p. 194). If we expand
this principle to cover non-human well-being, it might be read as
claiming that, ultimately speaking, the justificatory force of any
moral reason rests on well-being. This view is *welfarism*.
Act-utilitarians, who believe that the right action is that which
maximizes well-being overall, may attempt to use the intuitive
plausibility of welfarism to support their position, arguing that any
deviation from the maximization of well-being must be grounded on
something distinct from well-being, such as equality or rights. But
those defending equality may argue that egalitarians are concerned to
give priority to those who are worse off, and that we do see here a
link with concern for well-being. Likewise, those concerned with
rights may note that we have rights to certain goods, such as freedom,
or to the absence of 'bads', such as suffering (in the
case of the right not to be tortured, for example). In other words,
the interpretation of welfarism is itself a matter of dispute. But,
however it is understood, it does seem that welfarism poses a problem
for those who believe that morality can require actions which benefit
no one, and harm some, such as, for example, punishments intended to
give individuals what they deserve.
### 5.2 Well-being and Virtue
Ancient ethics was, in a sense, more concerned with well-being than a
good deal of modern ethics, the central question for many ancient
moral philosophers being, 'Which life is best for one?'.
The rationality of egoism--the view that my strongest reason is
always to advance my own well-being--was largely assumed. This
posed a problem. Morality is naturally thought to concern the
interests of others. So if egoism is correct, what reason do I have to
be moral?
One obvious strategy to adopt in defence of morality is to claim that
a person's well-being is in some sense constituted by their
virtue, or the exercise of virtue, and this strategy was adopted in
subtly different ways by the three greatest ancient philosophers,
Socrates, Plato, and Aristotle (for a modern defence of the view, see
Bloomfield (2014)). At one point in his writings, Plato appears to
allow for the rationality of moral self-sacrifice: the philosophers in
his famous 'cave' analogy in the *Republic*
(519-20) are required by morality to desist from contemplation
of the sun outside the cave, and to descend once again into the cave
to govern their fellow citizens. In the voluminous works of Aristotle,
however, there is no recommendation of sacrifice. Aristotle believed
that he could defend the virtuous choice as always being in the
interest of the individual. Note, however, that he need not be
described as an egoist in a strong sense--as someone who believes
that our only reasons for action are grounded in our own well-being.
For him, virtue both tends to advance the good of others, and (at
least when acted on) advances our own good. So Aristotle might well
have allowed that the well-being of others grounds reasons for me to
act. But these reasons will never come into conflict with reasons
grounded in my own individual well-being.
His primary argument is the notorious and perfectionist
'function argument', according to which the good for some
being is to be identified through attention to its
'function' or characteristic activity. The characteristic
activity of human beings is to exercise reason, and the good will lie
in exercising reason well--that is, in accordance with the
virtues. This argument, which is stated by Aristotle very briefly and
relies on assumptions from elsewhere in his philosophy and indeed that
of Plato, appears to conflate the two ideas of what is good for a
person, and what is morally good. I may agree that a
'good' example of humanity will be virtuous, but deny that
this person is doing what is best for them. Rather, I may insist,
reason requires one to advance one's own good, and this good
consists in, for example, pleasure, power, or honour. But much of
Aristotle's *Nicomachean Ethics* is taken up with
portraits of the life of the virtuous and the vicious, which supply
independent support for the claim that well-being is constituted by
virtue. In particular, it is worth noting the emphasis placed by
Aristotle on the value to a person of 'nobility' (*to
kalon*), a quasi-aesthetic value which those sensitive to such
qualities might not implausibly see as a constituent of well-being of
more worth than any other. In this respect, the good of virtue is, in
the Kantian sense, 'unconditional'. Yet, for Aristotle,
virtue or the 'good will' is not only morally good, but
good for the individual.
## 1. Life and Achievements
Hermann Weyl was born on 9 November 1885 in the small town of
Elmshorn near Hamburg. In 1904 he entered Göttingen University,
where his teachers included Hilbert, Klein and Minkowski. Weyl was
particularly impressed with Hilbert's lectures on number theory and
resolved to study everything he had written. Hilbert's work on
integral equations became the focus of Weyl's (1908) doctoral
dissertation, written under Hilbert's direction. In this and in
subsequent papers Weyl made important contributions to the theory of
self-adjoint operators. Virtually all of Weyl's many publications
during his stay in Göttingen until 1913 dealt with integral
equations and their applications.
After Weyl's (1910b) habilitation, he became a Privatdozent and was
thereby entitled to give lectures at the University of
Göttingen. Weyl chose to lecture on Riemann's theory of
algebraic functions during the winter semester of 1911-12.
These lectures became the basis of Weyl's (1913) first book *Die
Idee der Riemannschen Fläche* (*The Concept of a Riemann
Surface*). This work, in which function theory, geometry and
topology are unified, constitutes the first modern and comprehensive
treatment of Riemann surfaces. The work also contains the first
construction of an abstract manifold. Emphasizing that the 'points of
the manifold' can be quite arbitrary, Weyl based his definition of a
general two-dimensional manifold or surface on an extension of the
neighbourhood axioms that Hilbert (1902) had proposed for the
definition of a plane. The work is indicative of Weyl's exceptional
gift for harmoniously uniting into a coherent whole a patchwork of
distinct mathematical fields.
In 1913 Weyl was offered, and accepted, a professorship at the
Eidgenössische Technische Hochschule--ETH (Swiss Federal
Institute of Technology)--in Zürich. Weyl's years in
Zürich were extraordinarily productive and resulted in some of
his finest work, especially in the foundations of mathematics and
physics. When he arrived in Zürich in the fall of 1913, Einstein
and Grossmann were struggling to overcome a difficulty in their
effort to provide a coherent mathematical formulation of the general
theory of relativity. Like Hilbert, Weyl appreciated the importance
of a close relationship between mathematics and physics. It was
therefore only natural that Weyl should become interested in
Einstein's theory and the potential mathematical challenges it might
offer. Following the outbreak of the First World War, however, in May
1915 Weyl was called up for military service. But Weyl's academic
career was interrupted only briefly, since in 1916 he was exempted
from military duties for reasons of health. In the meantime Einstein
had accepted an offer from Berlin and had left Zürich in 1914.
Einstein's departure had weakened the theoretical physics program at
the ETH and (as reported by Frei and Stammbach (1992, 26)) the
administration hoped that Weyl's presence would alleviate the
situation. But Weyl needed no external prompting to work in, and to
teach, theoretical physics: his interest in the subject in general
and, above all, in the theory of relativity, gave him more than
sufficient motivation in that regard. Weyl decided to lecture on the
general theory of relativity in the summer semester of 1917, and
these lectures became the basis of his famous book
*Raum-Zeit-Materie* (*Space-Time-Matter*) of 1918.
During 1917-24, Weyl directed his energies equally to the
development of the mathematical and philosophical foundations of
relativity theory, and to the broader foundations of mathematics. It
is in these two areas that his philosophical erudition, nourished
from his youth, manifests itself most clearly. The year 1918, the
same year in which *Space-Time-Matter* appeared, also saw the
publication of *Das Kontinuum* (*The Continuum*), a
work in which Weyl constructs a new foundation for mathematical
analysis free of what he had come to see as fatal flaws in the
set-theoretic formulation of Cantor and Dedekind. Soon afterwards
Weyl embraced Brouwer's mathematical intuitionism; in the early 1920s
he published a number of papers elaborating on and defending the
intuitionistic standpoint in the foundations of mathematics.
It was also during the first years of the 1920s that Weyl came to
appreciate the power and utility of group theory, initially in
connection with his work on the solution to the Riemann-Helmholtz-Lie
problem of space. Weyl analyzed this problem, the
*Raumproblem*, in a series of articles and lectures during the
period 1921-23. Weyl (1949b, 400) noted that his interest in
the philosophical foundations of the general theory of relativity had
motivated his analysis of the representations and invariants of the
continuous groups:
> I can say that the wish to understand what really
> is the mathematical substance behind the formal apparatus of
> relativity theory led me to the study of the representations and
> invariants of groups; and my experience in this regard is probably not
> unique.
This newly acquired appreciation of group theory led Weyl to what he
himself considered his greatest work in mathematics, a general theory
of the representations and invariants of the classical Lie groups
(Weyl 1924a, 1924f, 1925, 1926a, 1926b, 1926c). Later Weyl (1939)
wrote a book, *The Classical Groups: Their Invariants and
Representations*, in which he returned to the theory of
invariants and representations of semisimple Lie groups. In this work
he realized his ambition "to derive the decisive results for
the most important of these groups by direct algebraic construction,
in particular for the full group of all non-singular linear
transformations and for the orthogonal group."
Weyl applied his work in group theory and his earlier work in
analysis and spectral theory to the new theory of quantum mechanics.
Weyl's mathematical analysis of the foundations of quantum mechanics
showed that regularities in a physical theory are most fruitfully
understood in terms of symmetry groups. Weyl's (1928) book
*Gruppentheorie und Quantenmechanik* (*Group Theory and
Quantum Mechanics*) deals not only with the theory of quantum
mechanics but also with relativistic quantum electrodynamics. In this
work Weyl also presented a very early analysis of discrete symmetries
which later stimulated Dirac to predict the existence of the positron
and the antiproton.
During his years in Zürich Weyl received, and turned down,
numerous offers of professorships by other
universities--including an invitation in 1923 to become Felix
Klein's successor at Göttingen. It was only in 1930 that he
finally accepted the call to become Hilbert's successor there. His
second stay in Göttingen was to be brief. Repelled by Nazism,
"deeply revolted," as he later wrote, "by the
shame which this regime had brought to the German name," he
left Germany in 1933 to accept an offer of permanent membership of
the newly founded Institute for Advanced Study in Princeton. Before
his departure for Princeton he published *The Open World*
(1932); his tenure there saw the publication of *Mind and
Nature* (1934), the aforementioned *The Classical Groups*
(1939), *Algebraic Theory of Numbers* (1940), *Meromorphic
Functions and Analytic Curves* (1943), *Philosophy of
Mathematics and Natural Science* (1949; an enlarged English
version of a 1927 work *Philosophie der Mathematik und
Naturwissenschaften*), and *Symmetry* (1952). In 1951 he
formally retired from the Institute, remaining as an emeritus member
until his death, spending half his time there and half in
Zürich. He died in Zürich suddenly, of a heart attack, on 9
December 1955.
## 2. Metaphysics
Weyl was first and foremost a mathematician, and certainly not a
"professional" philosopher. But as a German intellectual
of his time it was natural for him to regard philosophy as a pursuit
to be taken seriously. In Weyl's case, unusually even for a German
mathematician, it was idealist philosophy that from the beginning
played a significant role in his thought. Kant, Husserl, Fichte, and,
later, Leibniz, were at various stages major influences on Weyl's
philosophical thinking. As a schoolboy Weyl had been impressed by
Kant's *Critique of Pure Reason*. He was especially taken
with Kant's doctrine that space and time are not inherent in the
objects of the world, existing as such and independently of our
awareness, but are, rather, forms of our intuition. As he reports in
*Insight and Reflection* (Weyl 1955), his youthful enthusiasm for Kant
crumbled soon after he entered Göttingen University in 1904.
There he read Hilbert's *Foundations of Geometry*, a tour de force of
the axiomatic method, in comparison to which Kant's "bondage to
Euclidean geometry" now appeared to him naive. After this
philosophical reverse he lapsed into an indifferent positivism for a
while. But in 1912 he found a new and exciting source of
philosophical enlightenment in Husserl's
phenomenology.[1]
It was also at about this time that Fichte's metaphysical idealism
came to "capture his imagination." Although Weyl later
questioned idealist philosophy, and became dissatisfied with
phenomenology, he remained faithful throughout his life to the
primacy of intuition that he had first learned from Kant, and to the
irreducibility of individual consciousness that had been confirmed in
his view by Fichte and Husserl.
Weyl never provided a systematic account of his philosophical views,
and sorting out his overall philosophical position is no easy matter.
Despite the importance of intuition and individual consciousness in
Weyl's philosophical outlook, it would nevertheless be inexact to
describe his outlook as being that of a "pure" idealist,
since certain "realist" touches seem also to be present,
in his approach to physics, at least. His metaphysics appears to rest
on three elements, the first two of which may be considered
"idealist", and the third "realist": these
are, respectively, the Ego or "I", the (Conscious) Other
or "Thou", and the external or "objective"
world.
It is the first of these constituents, the Ego, to which Weyl
ascribes primacy. Indeed, in Weyl's words,
>
>
> The world exists only as met with by an ego, as one appearing to
> a consciousness; the consciousness in this function does not
> belong to the world, but stands out against the being as the
> sphere of vision, of meaning, of image, or however else one may call
> it. (Weyl 1934, 1)
The Ego alone has direct access to the given, that is, to the raw
materials of the existent which are presented to consciousness with
an immediacy at once inescapable and irreducible. The Ego is singular
in that, from its own standpoint, it is unique. But in an act of
self-reflection, through grasping (in Weyl's words) "what I am
for myself", the Ego comes to recognize that it has a
*function*, namely as "conscious-existing carrier of the
world of phenomena." It is then but a short step for the Ego to
transcend its singularity through the act of defining an
"ego" to be an entity performing that same function
*for itself*. That is, an ego is precisely what I am for
myself (in other words, what the Ego is for itself)--again a
"conscious-existing carrier of the world of
phenomena"--and yet *other* than myself.
"Thou" is the term the Ego uses to address, and so to
identify, an ego in this sense. "Thou" is thus the Ego
generalized, the Ego refracted through itself. The Ego grasps that it
exists within a world of Thous, that is, within a world of other Egos
similar to itself. While the Ego has, of necessity, no direct access
to any Thou, it can, through analogy and empathy, grasp what it is to
be Thou, a conscious being like oneself. By that very fact the Ego
recognizes in the Thou the same luminosity it sees in itself.
The relationship of the Ego with the external world, the realm of
"objective" reality, is of an entirely different nature.
There is no analogy that the Ego can draw--as it can with the
Thou--between itself and the external world, since that world
(presumably) lacks consciousness. The external world is radically
other, and opaque to the
Ego[2].
Like Kant's noumenal realm, the external world is outside the
immediacy of consciousness; it is, in a word, *transcendent*.
Since this transcendent world is not directly accessible to the Ego,
as far as the latter is concerned the existence of that world must
arise through *postulation*, "a matter of metaphysics,
not a judgment but an act of acknowledgment or
belief."[3]
Indeed, according to Weyl, it is not strictly necessary for the Ego
to postulate the existence of such a world, even given the existence
of a world of Thous:
> For as long as I do not proceed beyond what is given, or, more
> exactly, what is given at the moment, there is no need for the
> substructure of an objective world. Even if I include memory and in
> principle acknowledge it as valid testimony, if I furthermore accept
> as data the contents of the consciousness of others on equal terms
> with my own, thus opening myself to the mystery of intersubjective
> communication, I would still not have to proceed as we actually do,
> but might ask instead for the 'transformations' which
> mediate between the images of the several consciousnesses. Such a
> presentation would fit in with Leibniz's monadology. (Weyl 1949,
> 117.)
But once the existence of the transcendent world is postulated, its
opacity to the Ego can be partly overcome by constructing a
representation of it through the use of *symbols*, the
procedure called by Weyl *symbolic construction* (or
*constructive
cognition*)[4],
which he regarded as the cornerstone of scientific explanation.
He outlines the process as follows (Weyl 1934, 53):
1. Upon that which is given, certain reactions are performed by
which the given is in general brought together with other elements
capable of being varied arbitrarily. If the results to be read from
these reactions are found to be independent of the variable auxiliary
elements they are then introduced as attributes inherent in the things
themselves (even if we do not actually perform those reactions on
which their meaning rests, but only believe in the possibility of
their being performed).
2. By the introduction of symbols, the judgements are split up and a
part of the manipulations is made independent of the given and its
duration by being shifted onto the representing symbols which are time
resisting and simultaneously serve the purpose of preservation and
communication. Thereby the unrestricted handling of notions arises in
counterpoint to their application, ideas in a relatively independent
manner confront reality.
3. Symbols are not produced simply "according to demand"
wherever they correspond to actual occurrences, but they are embedded
into an ordered manifold of possibilities created by free construction
and open towards infinity. Only in this way may we contrive to predict
the future, for the future is not given actually.
Weyl's procedure thus amounts to the following. In step 1, a given
configuration is subjected to variation. One then identifies those
features of the configuration that remain unchanged under the
variation--the *invariant* features; these are in turn,
through a process of reification, deemed to be properties of an
unchanging substrate--the "things themselves". It is
precisely the invariance of such features that renders them (as well
as the "things themselves") capable of being represented
by the "time resisting" symbols Weyl introduces in step
2. As (written) symbols these are communicable without temporal
distortion and can be subjected to unrestricted manipulation without
degradation. It is the flexibility conferred thereby which enables
the use of symbols to be conformable with reality. Nevertheless (step
3) symbols are not haphazardly created in response to immediate
stimuli; they are introduced, rather, in a structured, yet freely
chosen manner which reflects the idea of an underlying
order--the "one real world"--about which not
everything is, or can be, known--it is, like the future,
"open towards infinity". Weyl observes that the
reification implicit in the procedure of symbolic construction leads
inevitably to its iteration, for "the transition from step to
step is made necessary by the fact that the objects at one step
reveal themselves as manifestations of a higher reality, the reality
of the next step" (Weyl (1934), 32-33). But in the end
"systematic scientific explanation will finally reverse the
order: first it will erect its symbolical world by itself, without
any reference, then, skipping all intermediate steps, try to describe
which symbolical configurations lead to which data of
consciousness" (*ibid*.). In this way the symbolic world
becomes (mistakenly) identified with the transcendent world.
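The idea behind step 1--that what remains invariant under arbitrary variation is ascribed to the "things themselves"--admits a toy illustration (the example is mine, not Weyl's). Here the arbitrary auxiliary element is a rotation angle: a point's coordinates vary with it, but its distance from the origin does not, and it is this invariant that the procedure would reify as a property of the object itself:

```python
import math

def rotate(point, theta):
    """Rotate a point in the plane about the origin by angle theta."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def norm(point):
    """Distance from the origin: the quantity invariant under rotation."""
    return math.hypot(*point)

p = (3.0, 4.0)
# The coordinates change with the arbitrary auxiliary element theta...
views = [rotate(p, t) for t in (0.0, 0.7, 2.1, 5.3)]
# ...but the norm reads the same from every "view", so it is
# attributed to the thing itself rather than to any presentation of it.
assert all(abs(norm(v) - norm(p)) < 1e-9 for v in views)
```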
It is symbolic construction which, in Weyl's vision, allows us access
to the "objective" world presumed to underpin our
immediate perceptions; indeed, Weyl holds that the objective world,
being beyond the grasp (the "lighted circle") of
intuition, can *only* be presented to us in symbolic
form[5].
We can see a double dependence on the Ego in Weyl's idea of symbolic
construction to get hold of an objective world beyond the mental. For
not only is that world "constructed" by the Ego, but the
materials of construction, the symbols themselves, as signs intended
to convey meaning, have no independent existence beyond their
graspability by a consciousness. By their very nature these symbols
cannot point directly to an external world (even given an unshakable
belief in the existence of that world) lying beyond consciousness.
Weyl's metaphysical triad thus reduces to what might be called a
*polarized dualism*, with the mental (I, Thou) as the primary,
independent pole and objective reality as a secondary, dependent
pole[6].
In Weyl's view mathematics simply lies--as it did for
Brouwer--within the Ego's "lighted circle of intuition" and
so is, in principle at least, completely presentable to that
intuition. But the nature of physics is more complicated. To the
extent that physics is linked to the transcendent world of objective
reality, it cannot arise as the direct object of intuition, but must,
like the transcendent world itself, be presented in symbolic form;
more exactly, as the result of a process of symbolic construction. It
is this which, in Weyl's vision, allows us access to the
"objective" world presumed to underpin our immediate perceptions.
Weyl's conviction that the objective world can only be presented to us
through symbolic construction may serve to explain his apparently
untroubled attitude towards the highly counterintuitive nature of
quantum theory. Indeed, the claims of numerous physicists that the
quantum microworld is accessible to us only through abstract
mathematical description provides a vindication of Weyl's thesis that
objective reality cannot be grasped directly, but only through the
mediation of symbols.
In his later years Weyl attempted to enlarge his metaphysical triad
(I, Thou, objective world) to a tetrad, by a process of completion,
as it were, to embrace the "godhead that lives in impenetrable
silence", the objective counterpart of the Ego, which had been
suggested to him by his study of Eckhart. But this effort was to remain
uncompleted.
During his long philosophical voyage Weyl stopped at a number of
ports of call: in his youth, Kantianism and positivism; then
Husserlian phenomenological idealism; later Brouwerian intuitionism
and finally a kind of theological existentialism. But apart from his
brief flirtation with positivism (itself, as he says, the result of a
disenchantment with Kant's "bondage to Euclidean
geometry"), Weyl's philosophical orientation remained in its
essence idealist (even granting the significant realist elements
mentioned above). Nevertheless, while he continued to acknowledge the
importance of phenomenology, his remarks in *Insight and
Reflection* indicate that he came to regard Husserl's doctrine as
lacking in two essential respects: first, it failed to give due
recognition to the (construction of the) transcendent external world,
with which Weyl, in his capacity as a natural scientist, was
concerned; secondly, and perhaps in Weyl's view even more seriously,
it failed to engage with the enigma of selfhood: the fact that I am
the person I am. Grappling with the first problem led Weyl to
identify symbolic construction as providing sole access to objective
reality, a position which brought him close to Cassirer in certain
respects; while the second problem seems to have led him to
existentialism and even, through his reading of Eckhart, to a kind of
religious mysticism.
## 3. Work in the foundations and philosophy of mathematics
Towards the end of his *Address on the Unity of Knowledge*,
delivered at the 1954 Columbia University bicentennial celebrations,
Weyl enumerates what he considers to be the essential constituents of
knowledge. At the top of his
list[7]
comes
>
>
> ...intuition, mind's ordinary act of seeing what is
> given to it. (Weyl 1954, 629)
In particular Weyl held to the view that intuition, or
*insight*--rather than *proof*--furnishes the
ultimate foundation of *mathematical* knowledge. Thus in his
*Das Kontinuum* of 1918 he says:
>
> In the Preface to Dedekind (1888) we read that "In science,
> whatever is provable must not be believed without proof." This
> remark is certainly characteristic of the way most mathematicians
> think. Nevertheless, it is a preposterous principle. As if such an
> indirect concatenation of grounds, call it a proof though we may, can
> awaken any "belief" apart from assuring ourselves through
> immediate insight that each individual step is correct. In all cases,
> this process of confirmation--and not the proof--remains
> the ultimate source from which knowledge derives its authority; it is
> the "experience of truth". (Weyl 1987, 119)
Weyl's idealism naturally inclined him to the view that the ultimate
basis of his own subject, mathematics, must be found in the
intuitively given as opposed to the transcendent. Nevertheless, he
recognized that it would be unreasonable to require all mathematical
knowledge to possess intuitive immediacy. In *Das Kontinuum*,
for example, he says:
>
> The states of affairs with which mathematics deals are, apart from the
> very simplest ones, so complicated that it is practically impossible
> to bring them into full givenness in consciousness and in this way to
> grasp them completely. (*Ibid*., 17)
Nevertheless, Weyl felt that this fact, inescapable as it might be,
could not justify extending the bounds of mathematics to embrace
notions, such as the actual infinite, which cannot be given fully in
intuition even in principle. He held, rather, that such extensions of
mathematics into the transcendent are warranted only by the fact that
mathematics plays an indispensable role in the physical sciences, in
which intuitive evidence is necessarily transcended. As he says in
*The Open
World*[8]:
> ... if mathematics is taken by itself, one should
> restrict oneself with Brouwer to the intuitively cognizable truths
> ... nothing compels us to go farther. But in the natural sciences
> we are in contact with a sphere which is impervious to intuitive
> evidence; here cognition necessarily becomes symbolical construction.
> Hence we need no longer demand that when mathematics is taken into the
> process of theoretical construction in physics it should be possible to
> set apart the mathematical element as a special domain in which all
> judgments are intuitively certain; from this higher standpoint which
> makes the whole of science appear as one unit, I consider Hilbert to be
> right. (Weyl 1932, 82).
In *Consistency in Mathematics* (1929), Weyl characterized the
mathematical method as
> the a priori construction of the possible in opposition to
> the a posteriori description of what is actually
> given.[9]
>
The problem of identifying the limits on constructing "the
possible" in this sense occupied Weyl a great deal. He was
particularly concerned with the concept of the mathematical
*infinite*, which he believed to elude
"construction" in the naive set-theoretical sense
[10].
Again to quote a passage from *Das Kontinuum:*
>
> No one can describe an infinite set other than by indicating
> properties characteristic of the elements of the set.... The
> notion that a set is a "gathering" brought together by
> infinitely many individual arbitrary acts of selection, assembled and
> then surveyed as a whole by consciousness, is nonsensical;
> "inexhaustibility" is essential to the infinite. (Weyl
> 1987, 23)
But still, as Weyl attests towards the end of *The Open
World*, "the demand for totality and the metaphysical
belief in reality inevitably compel the mind to represent the
infinite as closed being by symbolical construction". The
conception of the completed infinite, even if nonsensical, is
inescapable.
### 3.1 *Das Kontinuum*
Another mathematical "possible" to which Weyl gave a
great deal of thought is the *continuum*. During the period
1918-1921 he wrestled with the problem of providing the
mathematical continuum--the real number line--with a
logically sound formulation. Weyl had become increasingly critical of
the principles underlying the set-theoretic construction of the
mathematical continuum. He had come to believe that the whole
set-theoretical approach involved vicious
circles[11]
to such an extent that, as he says, "every cell (so to speak)
of this mighty organism is permeated by contradiction." In
*Das Kontinuum* he tries to overcome this by providing
analysis with a *predicative* formulation--not, as
Russell and Whitehead had attempted, by introducing a hierarchy of
logically ramified types, which Weyl seems to have regarded as
excessively complicated--but rather by confining the
comprehension principle to formulas whose bound variables range over
just the initial given entities (numbers). Accordingly he restricts
analysis to what can be done in terms of natural numbers with the aid
of three basic logical operations, together with the operation of
substitution and the process of "iteration", i.e.,
primitive recursion. Weyl recognized that the effect of this
restriction would be to render unprovable many of the central results
of classical analysis--e.g., the principle that any
bounded set of real numbers has a least upper
bound[12]--but
he was prepared to accept this as part of the price that must be paid
for the security of mathematics.
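Weyl's "iteration" is what is now called primitive recursion: arithmetic built up by repeatedly applying the successor operation. The following Python sketch is purely illustrative (nothing in it is drawn from *Das Kontinuum*); it shows how addition, multiplication, and exponentiation can each be obtained by iterating the previous operation.

```python
# Illustrative sketch: arithmetic by "iteration" (primitive recursion),
# starting from nothing but the successor function.

def iterate(f, n, x):
    """Apply f to x, n times -- the core of the iteration principle."""
    for _ in range(n):
        x = f(x)
    return x

succ = lambda n: n + 1

def add(m, n):
    return iterate(succ, n, m)                   # m + n: apply succ n times to m

def mul(m, n):
    return iterate(lambda k: add(k, m), n, 0)    # m * n: add m, n times, to 0

def power(m, n):
    return iterate(lambda k: mul(k, m), n, 1)    # m ** n: multiply by m, n times
```

Each definition uses only the previously defined operation and iteration, mirroring the way Weyl's restricted framework generates the arithmetically definable functions.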
As Weyl saw it, there is an unbridgeable gap between intuitively
given continua (e.g. those of space, time and motion) on the one
hand, and the "discrete" exact concepts of mathematics
(e.g. that of natural
number[13])
on the other. The presence of this chasm meant that the construction
of the mathematical continuum could not simply be "read
off" from intuition. It followed, in Weyl's view, that the
mathematical continuum must be treated as if it were an element of
the transcendent realm, and so, in the end, justified in the same way
as a physical theory. It was not enough that the mathematical theory
be *consistent*; it must also be *reasonable*.
*Das Kontinuum* embodies Weyl's attempt at formulating a
theory of the continuum which satisfies the first, and, as far as
possible, the second, of these requirements. In the following
passages from this work he acknowledges the difficulty of the task:
>
>
>
> ... the conceptual world of mathematics is so foreign to what
> the intuitive continuum presents to us that the demand for
> coincidence between the two must be dismissed as absurd. (Weyl 1987,
> 108)
>
>
>
>
> ... the continuity given to us immediately by intuition (in the
> flow of time and of motion) has yet to be grasped mathematically as a
> totality of discrete "stages" in accordance with that
> part of its content which can be conceptualized in an exact way.
> (*Ibid*.,
> 24)[14]
>
>
>
>
>
> Exact time- or space-points are not the ultimate, underlying atomic
> elements of the duration or extension given to us in experience. On
> the contrary, only reason, which thoroughly penetrates what is
> experientially given, is able to grasp these exact ideas. And only in
> the arithmetico-analytic concept of the real number belonging to the
> purely formal sphere do these ideas crystallize into full
> definiteness. (*Ibid*., 94)
>
>
>
>
> When our experience has turned into a real process in a real world
> and our phenomenal time has spread itself out over this world and
> assumed a cosmic dimension, we are not satisfied with replacing the
> continuum by the exact concept of the real number, in spite of the
> essential and undeniable inexactness arising from what is given.
> (*Ibid*., 93)
>
>
>
As these quotations show, Weyl had come to accept that it was in
principle impossible to furnish the continuum as presented to
intuition with an exact mathematical formulation: so, with
reluctance, he lowered his sights. In *Das Kontinuum* his goal
was, first and foremost, to establish the *consistency* of the
mathematical theory of the continuum by putting the
*arithmetical* notion of real number on a firm logical basis.
Once this had been achieved, he would then proceed to show that this
theory is *reasonable* by employing it as the foundation for a
plausible account of continuous process in the objective physical
world.[15]
In §6 of *Das Kontinuum* Weyl presents his conclusions as
to the relationship between the intuitive and mathematical
continua. He poses the question: Does the mathematical
framework he has erected provide an adequate representation of
physical or temporal continuity as it is *actually
experienced*? In posing this question we can see the continuing
influence of Husserl and phenomenological doctrine. Weyl begins his
investigation by noting that, according to his theory, if one asks
whether a given function is continuous, the answer is not fixed once
and for all, but is, rather, dependent on the extent of the domain of
real numbers which have been defined up to the point at which the
question is posed. Thus the continuity of a function must always
remain *provisional*; the possibility always exists that a
function deemed continuous *now* may, with the emergence of
"new" real numbers, turn out to be discontinuous *in
the future*.
[16]
To reveal the discrepancy between this formal account of continuity
based on real numbers and the properties of an intuitively given
continuum, Weyl next considers the experience of seeing a pencil
lying on a table before him throughout a certain time interval. The
position of the pencil during this interval may be taken as a
function of the time, and Weyl takes it as a fact of observation that
during the time interval in question this function is continuous and
that its values fall within a definite range. And so, he says,
>
> This observation entitles me to assert that during a certain period
> this pencil was on the table; and even if my right to do so is not
> absolute, it is nevertheless reasonable and well-grounded. It is
> obviously absurd to suppose that this right can be undermined by
> "an expansion of our principles of definition"--as if
> new moments of time, overlooked by my intuition could be added to this
> interval, moments in which the pencil was, perhaps, in the vicinity of
> Sirius or who knows where. If the temporal continuum can be
> represented by a variable which "ranges over" the real
> numbers, then it appears to be determined thereby how narrowly or
> widely we must understand the concept "real number" and
> the decision about this must not be entrusted to logical deliberations
> over principles of definition and the like. (Weyl 1987, 88)
To drive the point home, Weyl focuses attention on the fundamental
continuum of *immediately given phenomenal time*, that is, as
he characterizes it,
>
> ... to that constant form of my experiences of consciousness by
> virtue of which they appear to me to flow by successively. (By
> "experiences" I mean what I experience, exactly as I
> experience it. I do not mean real psychical or even physical processes
> which occur in a definite psychic-somatic individual, belong to a real
> world, and, perhaps, correspond to the direct experiences.)
> (*Ibid*., 88)
In order to correlate mathematical concepts with phenomenal time in
this sense Weyl grants the possibility of introducing a rigidly
punctate "now" and of identifying and exhibiting the
resulting temporal points. On the collection of these temporal points
is defined the relation of *earlier than* as well as a
congruence relation of *equality of temporal intervals*, the
basic constituents of a simple mathematical theory of time. Now Weyl
observes that the discrepancy between phenomenal time and the concept
of real number would vanish if the following pair of conditions could
be shown to be satisfied:
1. The immediate expression of the intuitive finding that during
a certain period I saw the pencil lying there were construed in such a
way that the phrase "during a certain period" was replaced
by "in every temporal point which falls within a certain time
span \(OE\)". [Weyl goes on to say parenthetically here that he admits
"that this no longer reproduces what is intuitively present, but
one will have to let it pass, *if it is really legitimate to
dissolve a period into temporal points*."]
2. If \(P\) is a temporal point, then the domain of rational
numbers to which \(l\) belongs if and only if there is a time
point \(L\) earlier than \(P\) such that
\[
OL = l \cdot OE
\]
can be constructed arithmetically in pure number theory on the basis
of our principles of definition, and is therefore a real number in
our sense. (*Ibid*., 89)
Condition 2 means that, if we take the time span \(OE\) as a
unit, then each temporal point \(P\) is correlated with a
definite real number. In an addendum Weyl also stipulates the
converse.
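Condition 2 in effect assigns to each temporal point the lower set of rationals it determines, i.e., a real number in the Dedekind-cut style. The following Python sketch is a hypothetical illustration (the function `cut_for` and the coordinate `p` are my notation, not Weyl's): taking the span \(OE\) as unit, a temporal point yields the predicate picking out those rationals \(l\) for which the point \(L\) with \(OL = l \cdot OE\) lies earlier than \(P\).

```python
# Hypothetical sketch of Weyl's condition 2: with the span OE as unit, a
# temporal point P determines the set of rationals l such that the point L
# with OL = l * OE is earlier than P -- in effect a Dedekind cut.
from fractions import Fraction

def cut_for(p):
    """Lower-set predicate of the temporal point with coordinate p."""
    return lambda l: l < p          # l ranges over the rationals

# e.g. the point one third of the way through the unit span OE:
lower = cut_for(Fraction(1, 3))
```

The point is only that the correlation is arithmetically definable; whether such a cut can be *constructed* from Weyl's principles of definition is exactly what condition 2 demands.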
But can temporal intuition itself provide evidence for the truth or
falsity of these two conditions? Weyl thinks not. In fact, he states
quite categorically that
>
> ... everything we are demanding here is obvious nonsense: to
> these questions, the intuition of time provides no answer--just
> as a man makes no reply to questions which clearly are addressed to
> him by mistake and, therefore, are unintelligible when addressed to
> him. (*Ibid*., 90)
The grounds for this assertion are by no means immediately evident,
but one gathers from the passages following it that Weyl regards the
experienced *continuous flow* of phenomenal time as
constituting an insuperable barrier to the whole enterprise of
representing the continuum as experienced in terms of individual
points, and even to the characterization of "individual
temporal point" itself. As he says,
> The view of a flow consisting of points and, therefore,
> also dissolving into points turns out to be mistaken: precisely what
> eludes us is the nature of the continuity, the flowing from point to
> point; in other words, the secret of how the continually enduring
> present can continually slip away into the receding past. Each one of
> us, at every moment, directly experiences the true character of this
> temporal continuity. But, because of the genuine primitiveness of
> phenomenal time, we cannot put our experiences into words. So we shall
> content ourselves with the following description. What I am conscious
> of is for me both a being-now and, in its essence, something which,
> with its temporal position, slips away. In this way there arises the
> persisting factual extent, something ever new which endures and changes
> in consciousness. (*Ibid*., 91-92)
Weyl sums up what he thinks can be affirmed about "objectively
presented time"--by which he presumably means
"phenomenal time described in an objective
manner"--in the following two assertions, which he claims
apply equally, *mutatis mutandis*, to every intuitively given
continuum, in particular, to the continuum of spatial extension.
(*Ibid*., 92):
1. An individual point in it is non-independent, i.e., is pure
nothingness when taken by itself, and exists only as a "point of
transition" (which, of course, can in no way be understood
mathematically);
2. It is due to the essence of time (and not to contingent
imperfections in our medium) that a fixed temporal point cannot be
exhibited in any way, that always only an approximate, never an exact
determination is possible.
The fact that single points in a true continuum "cannot be
exhibited" arises, Weyl asserts, from the fact that they are
not genuine individuals and so cannot be characterized by their
properties. In the physical world they are never defined absolutely,
but only in terms of a coordinate system, which, in an arresting
metaphor, Weyl describes as "the unavoidable residue of the
eradication of the ego." This metaphor, which Weyl was to
employ more than
once[17],
again reflects the continuing influence of phenomenological doctrine
in his thinking: here, the thesis that the existent is given in the
first instance as the contents of a consciousness.
### 3.2 Weyl and Brouwerian Intuitionism
By 1919 Weyl had come to embrace Brouwer's views on the intuitive
continuum. Given the idealism that always animated Weyl's thought,
this is not surprising, since Brouwer assigned the thinking subject a
central position in the creation of the mathematical
world[18].
In his early thinking Brouwer had held that the continuum is
presented to intuition as a whole, and that it is impossible to
construct all its points as individuals. But later he radically
transformed the concept of "point", endowing points with
sufficient fluidity to enable them to serve as generators of a
"true" continuum. This fluidity was achieved by admitting
as "points", not only fully defined discrete numbers such
as 1/9, \(e\), and the like--which have, so to speak,
already achieved "being"--but also
"numbers" which are in a perpetual state of
"becoming" in that the entries in their decimal (or
dyadic) expansions are the result of free acts of choice by a subject
operating throughout an indefinitely extended time. The resulting
choice sequences cannot be conceived as finished, completed objects:
at any moment only an initial segment is known. Thus Brouwer obtained
the mathematical continuum in a manner compatible with his belief in
the primordial intuition of time--that is, as an unfinished, in
fact unfinishable entity in a perpetual state of growth, a
"medium of free development". In Brouwer's vision, the
mathematical continuum is indeed "constructed", not,
however, by initially shattering, as did Cantor and Dedekind, an
intuitive continuum into isolated points, but rather by assembling it
from a complex of continually changing overlapping parts.
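Brouwer's choice sequences can be loosely pictured in code. The sketch below is an illustration only, not Brouwer's formalism: a generator stands in for the freely choosing subject, and at any moment only a finite initial segment of the sequence exists.

```python
# Illustrative sketch: a Brouwerian choice sequence as a generator of freely
# chosen binary digits.  The sequence is never a finished, completed object;
# all that can ever be inspected is an initial segment.
from itertools import islice
import random

def choice_sequence(seed=None):
    rng = random.Random(seed)       # stands in for the freely choosing subject
    while True:
        yield rng.choice([0, 1])    # each entry is a fresh act of choice

# All we can ever survey is an initial segment:
initial_segment = list(islice(choice_sequence(seed=0), 10))
```

The generator never terminates; any computation over it must make do with finite approximations, which is precisely the feature that makes the Brouwerian continuum an "unfinishable" medium of free becoming.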
Brouwer's impact looms large in Weyl's 1921 paper, *On the New
Foundational Crisis of Mathematics*. Here Weyl identifies two distinct
views of the continuum: "atomistic" or
"discrete"; and "continuous". In the first of
these the continuum is composed of individual real numbers which are
well-defined and can be sharply distinguished. Weyl describes his
earlier attempt at reconstructing analysis in *Das Kontinuum* as
atomistic in this sense:
>
> Existential questions concerning real numbers only become
> meaningful if we analyze the concept of real number in this
> extensionally determining and delimiting manner. Through this
> conceptual restriction, an ensemble of individual points is, so to
> speak, picked out from the fluid paste of the continuum. The continuum
> is broken up into isolated elements, and the flowing-into-each other of
> its parts is replaced by certain conceptual relations between these
> elements, based on the "larger-smaller" relationship. This
> is why I speak of the *atomistic* conception of the
> continuum. (Weyl 1921, 91)
By this time Weyl had repudiated atomistic theories of the continuum,
including that of *Das
Kontinuum*.[19]
While intuitive considerations, together with Brouwer's influence,
must certainly have fuelled Weyl's rejection of such theories, it
also had a logical basis. For Weyl had come to regard as meaningless
the formal procedure--employed in *Das Kontinuum*--of
negating universal and existential statements concerning real numbers
conceived as developing sequences or as sets of rationals. This had
the effect of undermining the whole basis on which his theory had
been erected, and at the same time rendered impossible the very
formulation of a "law of excluded middle" for such
statements. Thus Weyl found himself espousing a
position[20]
considerably more radical than that of Brouwer, for whom negations of
quantified statements had a perfectly clear constructive meaning,
under which the law of excluded middle is simply not generally
affirmable.
Of existential statements Weyl says:
> An existential statement--e.g., "there is an
> even number"--is not a judgement in the proper sense at
> all, which asserts a state of affairs; existential states of affairs
> are the empty invention of logicians. (Weyl 1921, 97)
Weyl termed such pseudostatements "judgment abstracts",
likening them, with typical literary flair, to "a piece of
paper which announces the presence of a treasure, without divulging
its location." Universal statements, although possessing
greater substance than existential ones, are still mere intimations
of judgments, "judgment instructions", for which Weyl
provides the following metaphorical description:
> If knowledge be compared to a fruit and the realization of
> that knowledge to the consumption of the fruit, then a universal
> statement is to be compared to a hard shell filled with fruit. It is,
> obviously, of some value, however, not as a shell by itself, but only
> for its content of fruit. It is of no use to me as long as I do not
> open it and actually take out a fruit and eat it. (*Ibid*.,
> 98)
Above and beyond the claims of logic, Weyl welcomed Brouwer's
construction of the continuum by means of sequences generated by free
acts of choice, thus identifying it as a "medium of free
Becoming" which "does not dissolve into a set of real
numbers as finished entities". Weyl felt that Brouwer, through
his doctrine of
Intuitionism[21],
had come closer than anyone else to bridging that "unbridgeable
chasm" between the intuitive and mathematical continua. In
particular, he found compelling the fact that the Brouwerian
continuum is not the union of two disjoint nonempty parts--that
it is, in a word, indecomposable. "A genuine continuum,"
Weyl says, "cannot be divided into separate
fragments."[22]
In later publications he expresses this more colourfully by quoting
Anaxagoras to the effect that a continuum "defies the chopping
off of its parts with a hatchet."
Weyl also agreed with Brouwer that all functions everywhere defined
on a continuum are continuous, but here certain subtle differences of
viewpoint emerge. Weyl contends that what mathematicians had taken to
be discontinuous functions actually consist of several continuous
functions defined on separated continua. In Weyl's view, for example,
the "discontinuous" function defined by
\(f(x) = 0\) for \(x \lt 0\) and
\(f(x) = 1\) for \(x \ge 0\) in fact consists of
the *two* functions with constant values 0 and 1 respectively
defined on the separated continua \(\{x: x \lt 0\}\) and
\(\{x: x \ge 0\}\). (The union of these two continua
fails to be the whole of the real continuum because of the failure of
the law of excluded middle: it is not the case that, for any real
number \(x\), either \(x \lt 0\) or \(x \ge 0\).)
Brouwer, on the other hand, had not dismissed the possibility that
discontinuous functions could be defined on proper parts of a
continuum, and still seems to have been searching for an appropriate
way of formulating this
idea.[23]
In particular, at that time Brouwer would probably have been inclined
to regard the above function \(f\) as a genuinely discontinuous
function defined on a proper part of the real continuum. For Weyl, it
seems to have been a self-evident fact that all functions defined on
a continuum are continuous, but this is because Weyl confines
attention to functions which turn out to be continuous by definition.
Brouwer's concept of function is less restrictive than Weyl's and it
is by no means immediately evident that such functions must always be
continuous.
Weyl defined real functions as mappings correlating each interval in
the choice sequence determining the argument with an interval in the
choice sequence determining the value "interval by
interval" as it were, the idea being that approximations to the
input of the function should lead effectively to corresponding
approximations to the value. Such functions are continuous by
definition. Brouwer, in contrast, considers real functions as
correlating choice sequences with choice sequences, and the
continuity of these is by no means obvious. The fact that Weyl
refused to grant (free) choice sequences--whose identity is in
no way predetermined--sufficient individuality to admit them as
arguments of functions betokens a commitment to the conception of the
continuum as a "medium of free Becoming" even deeper,
perhaps, than that of Brouwer.
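Weyl's "interval by interval" conception can be pictured with a small interval-arithmetic sketch. This is an assumption for illustration (the function `square_interval` and its handling of \(f(x) = x^2\) are my own example, not Weyl's formalism): a real function acts on rational intervals enclosing the argument, and narrowing the input interval narrows the output interval, so continuity holds by construction.

```python
# Illustrative sketch: a real function given "interval by interval".
# Each rational interval enclosing the argument is mapped to an interval
# enclosing the value, so better approximations of the input yield better
# approximations of the output -- continuity by construction.
from fractions import Fraction

def square_interval(lo, hi):
    """Image interval of f(x) = x^2 on [lo, hi]."""
    candidates = [lo * lo, hi * hi]
    if lo <= 0 <= hi:                # minimum of x^2 is attained at 0
        candidates.append(Fraction(0))
    return min(candidates), max(candidates)

# Narrowing the input interval narrows the output interval:
wide = square_interval(Fraction(-1), Fraction(2))          # encloses f on [-1, 2]
narrow = square_interval(Fraction(1, 2), Fraction(3, 2))   # tighter input, tighter output
```

Brouwer's functions, by contrast, act on choice sequences directly, and their continuity is a substantive theorem rather than a matter of definition.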
Since there were thus only minor differences between Weyl's and
Brouwer's accounts of the continuum, Weyl abandoned his earlier
attempt at the reconstruction of analysis and joined Brouwer. He
explains:
>
> I tried to find solid ground in the impending state of dissolution of
> the State of analysis (which is in preparation, although still only
> recognized by few) without forsaking the order on which it is founded,
> by carrying out its fundamental principle purely and honestly. And I
> believe I was successful--as far as this is possible. For
> *this order is itself untenable*, as I have now convinced
> myself, and Brouwer--that is the revolution!... It would
> have been wonderful had the old dispute led to the conclusion that the
> atomistic conception as well as the continuous one can be carried
> through. Instead the latter triumphs for good over the former. It is
> Brouwer to whom we owe the new solution of the continuum
> problem. History has destroyed again from within the provisional
> solution of Galilei and the founders of the differential and the
> integral calculus. (Weyl 1921, 98-99)
Weyl's initial enthusiasm for intuitionism seems later to have waned.
This may have been due to a growing belief on his part that the
mathematical sacrifices demanded by adherence to intuitionistic
doctrine (e.g., the abandonment of the least upper bound principle,
and other important results of classical analysis) would prove to be
intolerable to practicing mathematicians. Witness this passage from
*Philosophy of Mathematics and Natural Science*:
>
>
> Mathematics with Brouwer gains its highest intuitive clarity. He
> succeeds in developing the beginnings of analysis in a natural manner,
> all the time preserving the contact with intuition much more closely
> than had been done before. It cannot be denied, however, that in
> advancing to higher and more general theories the inapplicability of
> the simple laws of classical logic eventually results in an almost
> unbearable awkwardness. And the mathematician watches with pain the
> greater part of his towering edifice which he believed to be built of
> concrete blocks dissolve into mist before his eyes. (Weyl 1949,
> 54)
Nevertheless, it is likely that Weyl remained convinced to the end of
his days that intuitionism, despite its technical
"awkwardness", came closest, of all mathematical
approaches, to capturing the essence of the continuum.
### 3.3 Weyl and Hilbert
Weyl's espousal of the intuitionistic standpoint in the foundations
of mathematics in 1920-21 inevitably led to friction with his
old mentor Hilbert. Hilbert's conviction had long been that there
were in principle no limitations on the possibility of a full
scientific understanding of the natural world, and, analogously, in
the case of mathematics, that once a problem was posed with the
required precision, it was, at least in principle, soluble. In 1900
he was moved to respond to Emil du Bois-Reymond's famous declaration
concerning the limits of science, *ignoramus et ignorabimus* ("we
are ignorant and we shall remain ignorant"):
>
> We hear within us the perpetual call. There is the problem. Seek the
> solution. You can find it by pure reason, for in mathematics there is
> no
> ignorabimus.[24]
>
Hilbert was unalterably opposed to any restriction of mathematics
"by decree", an obstacle he had come up against in the
early stages of his career in the form of the anathematization, by the
influential nineteenth-century German mathematician Leopold Kronecker,
of all mathematics venturing beyond the finite. In
Brouwer's intuitionistic program--with its draconian
restrictions on what was admissible in mathematical argument, in
particular, its rejection of the law of excluded middle,
"pure" existence proofs, and virtually the whole of
Cantorian set theory--Hilbert saw the return of Kroneckerian
constraints on mathematics (and also, perhaps, a trace of du
Bois-Reymond's "ignorabimus") against which he had
struggled for so long. Small wonder, then, that Hilbert was upset
when Weyl joined the Brouwerian
camp.[25]
Hilbert's response was to develop an entirely new approach to the
foundations of mathematics with the ultimate goal of establishing
beyond doubt the consistency of the whole of classical mathematics,
including arithmetic, analysis, and Cantorian set theory. With the
attainment of that goal, classical mathematics would be placed
securely beyond the destructive reach of the intuitionists. The core
of Hilbert's program was the translation of the whole apparatus of
classical mathematical demonstration into a simple, finitistic
framework (which he called "metamathematics") involving
nothing more, in principle, than the straightforward manipulation of
symbols, taken in a purely formal sense, and devoid of further
meaning.[26]
Within metamathematics itself, Hilbert imposed a standard of
demonstrative evidence stricter even than that demanded by the
intuitionists, a form of finitism rivalling (ironically) that of
Kronecker. The demonstration of the consistency of classical
mathematics was then to be achieved by showing, within the
constraints of strict finitistic evidence insisted on by Hilbert,
that the formal metamathematical counterpart of a classical proof in
that system can never lead to an assertion evidently false, such as
\(0 = 1\).
Hilbert's program rested on the insight that, *au fond*, the
only part of mathematics whose reliability is entirely beyond
question is the *finitistic*, or *concrete* part: in
particular, finite manipulation of surveyable domains of distinct
objects including mathematical symbols presented as marks on paper.
Mathematical propositions referring only to concrete objects in this
sense Hilbert called *real*, *concrete*, or
*contentual* propositions, and all other mathematical
propositions he distinguished as possessing an *ideal*, or
*abstract* character. (Thus, for example, \(2 + 2 = 4\) would
count as a real proposition, while *there exists an odd perfect
number* would count as an ideal one.) Hilbert viewed ideal
propositions as akin to the ideal lines and points "at
infinity" of projective geometry. Just as the use of these does
not violate any truths of the "concrete" geometry of the
usual Cartesian plane, so he hoped to show that the use of ideal
propositions--even those of Cantorian set theory--would
never lead to falsehoods among the real propositions, that, in other
words, such use *would never contradict any self-evident fact
about concrete objects*. Establishing this by strictly concrete,
and so unimpeachable means was thus the central aim of Hilbert's
program. Hilbert may be seen to have followed Kant in attempting to
ground mathematics on the apprehension of spatiotemporal
configurations; but Hilbert restricted these configurations to
concrete signs (such as inscriptions on paper). Hilbert regarded
consistency as the touchstone of existence, and so for him the
important thing was the fact that no inconsistencies can arise within
the realm of concrete signs, since correct descriptions of concrete
objects are always mutually compatible. In particular, within the
realm of concrete signs, actual infinity cannot generate
inconsistencies since, again along with Kant, he held that this
concept cannot correspond to any concrete object. Hilbert's view
seems accordingly to have been that the formal soundness of
mathematics issues ultimately, not from a *logical* source,
but from a *concrete*
one[27],
in much the same way as the consistency of truly reported empirical
statements is guaranteed by the concreteness of the external
world[28].
Weyl soon grasped the significance of Hilbert's program, and came to
acknowledge its "immense significance and
scope"[29].
Whether that program could be successfully carried out was, of
course, still an open question. But independently of this issue Weyl
was concerned about what he saw as the loss of content resulting from
Hilbert's thoroughgoing formalization of mathematics. "Without
doubt," Weyl warns, "if mathematics is to remain a
serious cultural concern, then some *sense* must be attached
to Hilbert's game of formulae." Weyl thought that this sense
could only be supplied by "fusing" mathematics and
physics so that "the mathematical concepts of number, function,
etc. (or Hilbert's symbols) generally partake in the theoretical
construction of reality in the same way as the concepts of energy,
gravitation, electron,
etc."[30]
Indeed, in Weyl's view, "it is the function of mathematics to
be at the service of the natural sciences". But still:
>
> The propositions of theoretical physics... lack that feature
> which Brouwer demands of the propositions of mathematics, namely, that
> each should carry within itself its own intuitively comprehensible
> meaning.... Rather, what is tested by confronting theoretical
> physics with experience is the system as a whole. It seems that we
> have to differentiate between phenomenal knowledge or
> insight--such as is expressed in the statement: "This leaf
> (given to me in a present act of perception) has this green color
> (given to me in this same perception)"--and theoretical
> construction. Knowledge furnishes truth, its organ is
> "seeing" in the widest sense. Though subject to error, it
> is essentially definitive and unalterable. Theoretical construction
> seems to be bound only to one strictly formulable rational principle,
> concordance, which in mathematics, where the domain of sense data
> remains untouched, reduces to consistency; its organ is creative
> imagination. (Weyl 1949, 61-62)
Weyl points out that, just as in theoretical physics, Hilbert's
account of mathematics "already... goes beyond the bounds
of intuitively ascertainable states of affairs through... ideal
assumptions." (Weyl 1927, 484) If Hilbert's realm of
contentual or "real" propositions--the domain of
metamathematics--corresponds to that part of the world directly
accessible to what Weyl terms "insight" or
"phenomenal knowledge", then "serious"
mathematics--the mathematics that practicing mathematicians are
actually engaged in doing--corresponds to Hilbert's realm of
"ideal" propositions. Weyl regarded this realm as the
counterpart of the domain generated by "symbolic
construction", the transcendent world focussed on by
theoretical physics. Hence his memorable characterization:
>
> The set-theoretical approach is the stage of naive realism which is
> unaware of the transition from the given to the transcendent. Brouwer
> represents idealism, by demanding the reduction of all truth to the
> intuitively given. In [Hilbert's] formalism, finally, consciousness
> makes the attempt to "jump over its own shadow", to leave
> behind the stuff of the given, to represent the
> transcendent--but, how could it be otherwise?, only through the
> symbol. (Weyl 1949, 65-66)
In Weyl's eyes, Hilbert's approach embodied the "symbolic
representation of the transcendent, which demands to be
satisfied", and so he regarded its emergence as a natural
development. But by 1927 Weyl saw Hilbert's doctrine as beginning to
prevail over intuitionism, and in this an adumbration of *"a
decisive defeat of the philosophical attitude of pure
phenomenology*, which thus proves to be insufficient for the
understanding of creative science even in the area of cognition that
is most primal and most readily open to
evidence--mathematics."[31]
Since by this time Weyl had become convinced that "creative
science" must *necessarily* transcend what is
phenomenologically given, he had presumably already accepted that
pure phenomenology is incapable of accounting for theoretical
physics, let alone the whole of existence. But it must have been
painful for him to concede the analogous claim in the case of
*mathematics*. In 1932, he asserts: "If mathematics is
taken by itself, one should restrict oneself with Brouwer to the
intuitively cognizable truths ... nothing compels us to go
farther." If mathematics could be "taken by
itself", then there would be no need for it to justify its
practices by resorting to "symbolic construction", to
employ symbols which in themselves "signify
nothing"--nothing, at least, accessible to intuition. But,
unlike Brouwer, Weyl seems finally to have come to terms with the
idea that mathematics could not simply be "taken by
itself", that it has a larger role to play in the world beyond
its service as a paradigm, however pure, of subjective certainty.
The later impact of Gödel's incompleteness theorems on Hilbert's
program led Weyl to remark in
1949:[32]
>
> The ultimate foundations and the ultimate meaning of mathematics
> remain an open problem; we do not know in what direction it will find
> its solution, nor even whether a final objective answer can be
> expected at all. "Mathematizing" may well be a creative
> activity of man, like music, the products of which not only in form
> but also in substance defy complete objective rationalization. The
> undecisive outcome of Hilbert's bold enterprise cannot fail to affect
> the philosophical interpretation. (Weyl 1949, 219)
The fact that "Gödel has left us little hope that a
formalism wide enough to encompass classical mathematics will be
supported by a proof of consistency" seems to have led Weyl to
take a renewed interest in "axiomatic systems developed before
Hilbert without such ambitious dreams", for example Zermelo's
set theory, Russell's and Whitehead's ramified type theory and
Hilbert's own axiom systems for geometry (as well, possibly, as
Weyl's own system in *Das Kontinuum*, which he modestly fails
to mention). In one of his last papers, *Axiomatic Versus
Constructive Procedures in Mathematics*, written sometime after
1953, he saw the battle between Hilbertian formalism and Brouwerian
intuitionism in which he had participated in the 1920s as having
given way to a "dextrous blending" of the axiomatic
approach to mathematics championed by Bourbaki and the algebraists
(themselves mathematical descendants of Hilbert) with constructive
procedures associated with geometry and topology.
It seems appropriate to conclude this account of Weyl's work in the
foundations and philosophy of mathematics by allowing the man himself
to have the last word:
>
> This history should make one thing clear: we are less certain than
> ever about the ultimate foundations of (logic and) mathematics; like
> everybody and everything in the world today, we have our
> "crisis". We have had it for nearly fifty years. Outwardly
> it does not seem to hamper our daily work, and yet I for one confess
> that it has had a considerable practical influence on my mathematical
> life: it directed my interests to fields I considered relatively
> "safe", and it has been a constant drain on my enthusiasm
> and determination with which I pursued my research work. The
> experience is probably shared by other mathematicians who are not
> indifferent to what their scientific endeavours mean in the contexts
> of man's whole caring and knowing, suffering and creative existence in
> the world. (Weyl 1946, 13)
## 4. Contributions to the Foundations of Physics
### 4.1 Spacetime Geometries and Weyl's Unified Field Theory
Weyl's clarification of the role of coordinates, invariance or
symmetry principles, his important concept of gauge invariance, his
group-theoretic results concerning the uniqueness of the Pythagorean
form of the metric, his generalization of Levi-Civita's concept of
parallelism, his development of the geometry of paths, his discovery
of the causal-inertial method which prepared the way to empirically
determine the spacetime metric in a non-circular, non-conventional
manner, his deep analysis of the concept of motion and the role of
Mach's Principle, are but a few examples of his important
contributions to the philosophical and mathematical foundations of
modern spacetime theory.
Weyl's book, *Raum-Zeit-Materie*, beautifully exemplifies the
fruitful and harmonious interplay of mathematics, physics and
philosophy. Here Weyl aims at a mathematical and philosophical
elucidation of the problem of space and time in general. In the
preface to the great classical work of 1923, the fifth German
edition, after mentioning the importance of mathematics to his work,
Weyl says:
>
>
> Despite this, the book does not disavow its basic, philosophical
> orientation: its central focus is *conceptual analysis*;
> physics provides the experiential basis, mathematics the sharp
> tools. In this new edition, this tendency has been further
> strengthened; although the growth of speculation was trimmed, the
> supporting foundational ideas were more intuitively, more carefully
> and more completely developed and analyzed.
>
>
#### 4.1.1 Weyl's metric-independent construction of the symmetric linear connection
Extending and abstracting from Gauss's treatment of curved surfaces
in Euclidean space, Riemann constructed an infinitesimal geometry of
\(n\)-dimensional manifolds. The coordinate assignments
\(x^{k}(p)\) \([k \in \{1, \ldots ,n\}]\) of the points \(p\) in such an \(n\)-dimensional
Riemannian manifold are quite arbitrary, subject only to the
requirement of admitting arbitrary differentiable coordinate
transformations.[33]
Riemann's assumption that in an infinitesimal neighbourhood of a
point, Euclidean geometry and hence Pythagoras's theorem holds, finds
its formal expression in Riemann's equation
\[\tag{1}
ds^2 = \sum\_{i,j} g\_{ij}(x^{k}(p))dx^{i}dx^{j}\ [\text{where } g\_{ij}(x^{k}(p)) = g\_{ji}(x^{k}(p))]
\]
for the square of the length \(ds\) of an infinitesimal line
element that leads from the point \(p = x(p) =
(x^{1}(p), \ldots ,x^{n}(p))\) to an arbitrary infinitely
near point \(p' = x(p') = (x^{1}(p) + dx^{1}(p), \ldots ,x^{n}(p) + dx^{n}(p))\).
The assumption that Euclidean geometry holds in the infinitesimally
small means that the \(dx^{i}(p)\)
transform *linearly* under arbitrary coordinate
transformations. Using the *Einstein summation convention*[34],
equation (1) can be written simply as
\[\tag{2}
ds^{2} = g\_{ij}(x^{k}(p))dx^{i}dx^{j}.
\]
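As an illustrative aside (not part of the original text), the summation convention simply suppresses the explicit summation signs over repeated indices; a minimal numerical sketch, with an arbitrarily chosen sample metric and displacement:

```python
import numpy as np

# Illustrative only: evaluate ds^2 = g_ij dx^i dx^j for a sample symmetric
# metric and displacement, once with explicit sums and once letting einsum
# carry out the summation over the repeated indices i and j.
g = np.array([[1.0, 0.2],
              [0.2, 2.0]])          # a sample symmetric metric g_ij
dx = np.array([0.3, -0.1])          # a sample infinitesimal displacement dx^i

explicit = sum(g[i, j] * dx[i] * dx[j] for i in range(2) for j in range(2))
ds2 = np.einsum('ij,i,j->', g, dx, dx)
assert abs(ds2 - explicit) < 1e-9   # the two evaluations agree
```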
Riemann assumed the validity of the Pythagorean metric only in the
infinitely small. Riemannian geometry is essentially a geometry of
infinitely near points and conforms to the requirement that all laws
are to be formulated as *field* laws. Field laws are
*close-action-laws* which relate the field magnitudes only to
infinitesimally neighbouring points in
space.[35]
The value of some field magnitude at each point depends only on the
values of other field magnitudes in the infinitesimal neighbourhoods
of the corresponding points. The field magnitudes consist of partial
derivatives of position functions at some point, and this requires
the knowledge of the behavior of the position functions only with
respect to the neighbourhood of that point. To construct a field law,
only the behavior of the world in the infinitesimally small is
required.[36]
Riemann's ideas were brought to a concrete realization fifty years
later in Einstein's general theory of relativity. The basic idea
underlying the general theory of relativity was Einstein's
recognition that the metric field, which has such powerful real
effects on matter, cannot be a rigid once and for all given geometric
structure of the spacetime, but must itself be something real, that
not only has effects on matter, but is in turn also affected by
matter. Riemann had already suggested that analogous to the
electromagnetic field, the metric field reciprocally interacts with
matter. Einstein came to this idea of reciprocity between matter and
field independently of Riemann, and in the context of his theory of
general relativity, applied this principle of reciprocity to four
dimensional spacetime. Thus Einstein could adopt Riemann's
infinitesimal geometry with the important difference: given the
causal requirements of Einstein's theory of special relativity,
Riemann's quadratic form is not positive definite but indefinite; it
has signature
1.[37]
Weyl (1922a) says:
>
>
> All our considerations until now were based on the assumption, that
> the metric structure of space is something that is fixed and
> given. Riemann already pointed to another possibility which was
> realized through General Relativity. The metrical structure of the
> extensive medium of the external world is a field of physical reality,
> which is causally dependent on the state of matter.
>
>
And in another place Weyl (1918b) remarks:
>
>
> The metric is not a property of the world [spacetime] in itself,
> rather spacetime as a form of appearance is a completely formless
> four-dimensional continuum in the sense of analysis situs, but the
> metric expresses something real, something which exists in the world,
> which exerts through centrifugal and gravitational forces physical
> effects on matter, and whose state is conversely conditioned through
> the distribution and nature of matter.
>
>
After Einstein applied Riemannian geometry to his theory of general
relativity, Riemannian geometry became the focus of intense research.
In particular, G. Ricci and T. Levi-Civita's so-called *Absolute
Differential Calculus* developed and clarified the Riemannian
notions of an *affine connection* and *covariant
differentiation*. The decisive step in this development, however,
was T. Levi-Civita's discovery in 1917 of the concept of
*infinitesimal parallel vector displacement*, and the
fact that such parallel vector displacement is uniquely determined by
the metric field of Riemannian geometry. Levi-Civita's construction
of infinitesimal parallel transport on a manifold required the
process of embedding the manifold into a flat higher-dimensional
metric space. In 1918, Weyl generalized Levi-Civita's concept of
parallel transport by means of an *intrinsic* construction
that does not require the process of such an embedding, and is
therefore independent of a metric. Weyl's intrinsic construction
results in a *metric-independent, symmetric linear
connection*. Weyl simply referred to the latter as an
*affine connection*.[38]
Weyl defines what he means by an *affine connection* in the
following way: A point \(p\) on the manifold \(M\) is affinely
connected with its immediate neighborhood, if and only if for every
tangent vector \(v\_{p}\) at \(p\), a tangent
vector \(v\_{q}\) at \(q\) is determined to which
the tangent vector \(v\_{p}\) gives rise under
parallel displacement from \(p\) to the infinitesimally
neighboring point \(q\). This definition merely says that a
manifold is affinely connected if it admits the process of
infinitesimal parallel displacement of a vector.
Weyl's next definition characterizes the *essential nature* of
infinitesimal parallel displacement. The definition says that at any
arbitrary point of the manifold there exists a *geodesic
coordinate system* such that the components of any vector at that
point are not altered by an infinitesimal parallel displacement with
respect to it. This is a geometrical way of expressing Einstein's
requirement that the gravitational field can always be made to vanish
locally. According to Weyl (1923b, 115), it characterizes the
*nature* of an affine connection on the manifold. A manifold
which is an affine manifold is homogeneous in this sense. Moreover,
there are no manifolds whose affine structure is of a different
*nature*.
The transport of a tangent vector \(v\_{p}\) at
\(p\) to an infinitesimally nearby point \(q\) results in the
tangent vector \(v\_{q}\) at \(q\), namely,
\[\tag{3}
v\_{q} = v\_{p} + dv\_{p}.
\]
This infinitesimal tangent vector transport Weyl defines as
*infinitesimal parallel displacement* if and only if there
exists a coordinate system \(\overline{x}\), called a *geodesic coordinate system*
for the neighborhood of \(p\), relative to which the transported
tangent vector \(\overline{v}\_{q}\) at \(q\)
possesses the same components as the original tangent vector \(\overline{v}\_{p}\) at \(p\); that is,
\[\tag{4}
\overline{v}^{\,i}\_q - \overline{v}^{\,i}\_p = d\overline{v}^{\,i}\_p = 0.
\]
![figure](fig1.png)
Figure 1: Parallel transport in a geodesic coordinate system \(\overline{x}\)
For an arbitrary coordinate system \(x\) the components \(dv^{\,i}\_p\)
vanish whenever \(v^{\,i}\_p\) or \(dx^{\,i}\_p\) vanish. Consequently,
\(dv^{\,i}\_p\) is *bi-linear* in \(v^{\,i}\_p\) and
\(dx^{\,i}\_p\); that is,
\[\tag{5}
dv^{\,i}\_p = -\Gamma^{\,i}\_{jk}(x^i(p))v^{\,j}\_p dx^{\,k}\_p,
\]
where, in the case of four dimensions, the \(4^{3} = 64\)
coefficients \(\Gamma^{\,i}\_{jk} (x^{i}(p))\)
are coordinate functions, that is, functions of
\(x^{i}(p)\) \((i = 1, \ldots, 4)\),
and the minus sign is introduced to agree with convention.
![figure](fig2.png)
Figure 2: Parallel transport in an arbitrary coordinate system \(x\)
It is important to understand that there is no intrinsic notion of
infinitesimal parallel displacement on a differentiable manifold. A
notion of "parallelism" is not something that a manifold
would possess merely by virtue of being a smooth manifold; additional
structure has to be introduced which resides on the manifold and
which permits the notion of infinitesimal parallelism. A manifold is
an "affine manifold" \((M, \Gamma)\) if in addition to its
manifold structure (differential topological structure) it is also
endowed with an affine structure \(\Gamma\) that assigns to each of its
points 64 coefficients \(\Gamma^{i}\_{jk} (x^{i}(p))\)
satisfying the symmetry condition \(\Gamma^{i}\_{jk} (x^{i}(p))
= \Gamma^{i}\_{kj} (x^{i}(p))\).
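A quick combinatorial check (illustrative only) confirms the count: of the \(4^{3} = 64\) coefficients, the symmetry in the lower index pair leaves \(4 \times 10 = 40\) independent ones.

```python
from itertools import product

# In four dimensions a linear connection has 4^3 = 64 coefficients
# Gamma^i_{jk}; the symmetry Gamma^i_{jk} = Gamma^i_{kj} means only the
# index triples with j <= k are independent: 10 lower-index pairs for
# each of the 4 upper indices, hence 40.
n = 4
all_coeffs = [(i, j, k) for i, j, k in product(range(n), repeat=3)]
independent = [(i, j, k) for i, j, k in all_coeffs if j <= k]
print(len(all_coeffs), len(independent))   # 64 40
```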
An \(n\)-dimensional manifold \(M\), which is an affinely
connected manifold, Weyl (1918b) interprets physically as an
\(n\)-dimensional world (spacetime) filled with a gravitational
field. Weyl says, "...the affine connection appears in
physics as the *gravitational field*..." Since
there exists at each spacetime point a geodesic coordinate system in
which the components \(\Gamma^{i}\_{jk}\) of the symmetric linear
connection vanish, the gravitational field can be made to vanish at
each point of the manifold.
The *classical theory of physical geometry*, developed by
Helmholtz, Poincaré and Hilbert, regarded the concept of
"metric congruence" as the only basic relation of
geometry, and constructed physical geometry from this one notion
alone in terms of the relative positions and displacements of
physical congruence standards. Although Einstein's general theory of
relativity championed a *dynamical view of spacetime
geometry* that is very different from the *classical theory of
physical geometry*, Einstein initially approached the problem of
the structure of spacetime from the metrical point of view. It was
Weyl (1923b) who emphasized and developed the metric-independent
construction of the symmetric linear connection and who pointed out
the rationale for doing so. In both the non-relativistic and
relativistic contexts, it is the symmetric linear connection, and not
the metric, which plays the essential role in the formulation of all
physical laws that are expressed in terms of differential equations.
It is the symmetric linear connection that relates the state of a
system at a spacetime point to the states at neighboring spacetime
events and enters into the differentials of the corresponding
magnitudes. In both Newtonian physics and the theory of general
relativity, all dynamical laws presuppose the projective and affine
structure and hence the Law of Inertia. In fact, the whole of tensor
analysis with its covariant derivatives is based on the affine
concept of infinitesimal *parallel displacement* and *not* on the metric.
Weyl's metric independent construction not only led to a deeper
understanding of the mathematical characterization of gravity, it
also prepared the way for new constructions and generalizations in
differential geometry and the general theory of relativity. In
particular, it led to
1. The development of the *geometry of paths*, first introduced by Weyl in 1918.
2. Weyl's discovery of the *causal-inertial method* which prepared the way to empirically determine the spacetime metric in a non-circular, non-conventional manner.
3. Weyl's generalization of Riemannian geometry in his attempt to unify gravity and electromagnetism.
4. Weyl's introduction of the concept of gauge in the context of his attempt to unify gravity and electromagnetism.
For more detail on Weyl's metric independent construction of the
affine connection (linear symmetric connection), see
the supplement.
#### 4.1.2 Projective Geometry or the Geometry of Paths
Weyl's metric-independent construction of the affine structure led to
the development of differential *projective* geometries or the
*geometries of paths.* The interest in projective geometry is
in the *paths*, that is, in the continuous set of points of
the *image set of curves*, rather than in the possible
parameter descriptions of curves. A curve has one degree of freedom;
it depends on one parameter, and its image set or path is a
one-dimensional continuous set of points of the manifold. One
represents a *curve* on a manifold \(M\) as a smooth map
(i.e., \(C^{\infty})\) \(\gamma\) from some open interval
\(I = (-\varepsilon, \varepsilon)\) of the real line \(\mathbb{R}\) into
\(M\).
![figure](fig3.png)
Figure 3:
A curve on the manifold \(M\) is the smooth map
\(\gamma : I \subset \mathbb{R} \rightarrow M\)
It is important to understand that what one means by a
"curve" is the map (the parametric description) itself,
and *not* the set of its image points, the path. Consequently,
two curves are mathematically considered to be different curves if
they are given by different maps (different parameter descriptions),
*even if their image set, that is, their path, is the
same*. If we change a curve's parameter description we change the
curve but not its image set (its path), the points it passes through.
A path is therefore sometimes defined as an equivalence class of
curves under arbitrary parameter transformations. Hence, projective
geometry may be defined as an equivalence class of affine geometries.
A geodesic curve in *flat space* is a straight line. Its
tangent at one point is parallel to the tangent at previous or
subsequent points. A straight line in Euclidean space is the only
curve that parallel-transports its own tangent vector. This notion of
parallel transport of the tangent vector also characterizes geodesic
curves in *curved space*. That is, a curve \(\gamma\) in curved
space, which parallel-transports its own tangent vector along all of
its points, is called a *geodesic* curve. Given a manifold
with an affine structure and some arbitrary local coordinate system,
the coordinate functions (components) \(\gamma^{i}\) of a
geodesic curve \(\gamma\) satisfy the second-order non-linear
differential equations
\[\tag{6}
\frac{d^2\gamma^i}{ds^2} + \Gamma^{i}\_{jk} \frac{d\gamma^j}{ds} \frac{d\gamma^k}{ds} = 0.
\]
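As a concrete sketch (not from the text), equation (6) can be verified symbolically for the unit 2-sphere: computing the connection coefficients from the standard metric formula for the Christoffel symbols, the equator, parametrized by arc length, satisfies the geodesic equation.

```python
import sympy as sp

# Unit 2-sphere in coordinates (theta, phi): ds^2 = dtheta^2 + sin^2(theta) dphi^2
th, ph, s = sp.symbols('theta phi s', real=True)
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols of the metric connection,
# Gamma^i_{jk} = (1/2) g^{ir} (g_{rj,k} + g_{kr,j} - g_{jk,r})
Gamma = [[[sp.simplify(sum(
    sp.Rational(1, 2) * ginv[i, r]
    * (sp.diff(g[r, j], x[k]) + sp.diff(g[k, r], x[j]) - sp.diff(g[j, k], x[r]))
    for r in range(n)))
    for k in range(n)] for j in range(n)] for i in range(n)]

# The equator theta = pi/2, phi = s (arc-length parameter) solves equation (6)
curve = [sp.pi / 2, s]
dcurve = [sp.diff(c, s) for c in curve]
for i in range(n):
    lhs = sp.diff(curve[i], s, 2) + sum(
        Gamma[i][j][k].subs(th, sp.pi / 2) * dcurve[j] * dcurve[k]
        for j in range(n) for k in range(n))
    assert sp.simplify(lhs) == 0   # each component of (6) vanishes
```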
One may characterize the *projective geometry* \(\Pi\) on an
affine manifold either in terms of an equivalence class of geodesic
curves under arbitrary parameter
diffeomorphisms[39],
thereby eliminating all the parameter descriptions and hence all
possible notions of distance along the curves satisfying
(6),[40]
or one may take the process of *autoparallelism of directions* as fundamental in defining a projective structure.
![figure](fig4.png)
Figure 4:
A path \(\xi\) is an equivalence class [\(\gamma\)] of curves
under all parameter diffeomorphisms \(\mu: \mathbb{R} \rightarrow \mathbb{R};
\lambda \mapsto \mu(\lambda)\)
Weyl took the latter approach. According to Weyl, the infinitesimal
process of parallel displacements of vectors contains, as a special
case, the infinitesimal displacement of a *direction* into its
own *direction*. Such an infinitesimal autoparallelism of
*directions* is characteristic of the projective structure of
an affinely connected manifold.
**Infinitesimal Autoparallelism of a Direction:**
An *infinitesimal autoparallelism of a direction* \(R\) at an
arbitrary point \(p\) consists in the parallel displacement of \(R\)
at \(p\) to a neighbouring point \(p'\) which lies in the direction
\(R\) at \(p\).
A curve is geodesic if and only if its *tangent direction*
\(R\) experiences infinitesimal autoparallelism when moved
along all the points of the curve. This characterization of a
geodesic curve constitutes an abstraction from affine geometry.
Through this abstraction, a geodesic curve is definable exclusively
in terms of autoparallelism of tangent *directions*, and not
tangent *vectors*. Roughly speaking, an affine geometry is
essentially a projective geometry with the notion of distance defined
along the curves. By eliminating all possible notions of distance
along curves, or equivalently, all the parameter descriptions of the
curves, one abstracts the projective geometry from affine geometry.
As mentioned above, a projective geometry \(\Pi\) may be defined as an
equivalence class of affine geometries, that is, an equivalence class
of *projectively related* affine connections [\(\Gamma\)]. Weyl
presented the details of his approach to projective geometry, which
uses the notion of *autoparallelism of direction*, in a set of
lectures delivered in Barcelona and Madrid in the spring of 1922
(Weyl (1923a); see also Weyl (1921c)). Weyl began with the following
necessary and sufficient condition for the invariance of the
projective structure \(\Pi\) under a transformation
\(\Gamma \rightarrow \overline{\Gamma}\) of the affine
structure:
**Projective Transformation:**
A transformation \(\Gamma \rightarrow \overline{\Gamma}\)
preserves the projective structure \(\Pi\) of a
manifold with an affine structure \(\Gamma\), and is called a
*projective transformation*, if and only if
\[\tag{7}
(\overline{\Gamma} - \Gamma)^i\_{jk} v^{\,j}v^{\,k} \propto v^{\,i},
\]
where \(v^{\,i}\) is an arbitrary vector.
Weyl's definition says that a change in the affine structure of
the manifold \(M\) preserves the projective structure \(\Pi\) of \(M\)
if the vectors \(v^{i}\_{q}\) and \(\overline{v}^{i}\_{q}\) at \(q\)
that result from the vector \(v^{i}\_{p}\) at \(p\) by parallel
transport under \(\Gamma\) and \(\overline{\Gamma}\) respectively,
differ at most in length but not in
direction.[41]
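As an illustrative check (using a standard form for the change of connection that is not stated in the text), any transformation of the type \(\overline{\Gamma}^{\,i}\_{jk} = \Gamma^{i}\_{jk} + \delta^{i}\_{j}\psi\_{k} + \delta^{i}\_{k}\psi\_{j}\), for an arbitrary one-form \(\psi\), satisfies condition (7), since the contracted difference tensor is a multiple of \(v^{i}\).

```python
import sympy as sp

n = 2
psi = sp.Matrix(sp.symbols('psi1 psi2'))   # an arbitrary one-form psi_k
v = sp.Matrix(sp.symbols('v1 v2'))         # an arbitrary vector v^i
delta = sp.eye(n)                          # Kronecker delta

# Difference tensor D^i_{jk} = delta^i_j psi_k + delta^i_k psi_j
D = lambda i, j, k: delta[i, j] * psi[k] + delta[i, k] * psi[j]

# Contract with v^j v^k as in condition (7)
contracted = [sum(D(i, j, k) * v[j] * v[k] for j in range(n) for k in range(n))
              for i in range(n)]

# The result is 2 (psi . v) v^i, i.e. proportional to v^i as (7) requires
for i in range(n):
    assert sp.simplify(contracted[i] - 2 * (psi.T * v)[0] * v[i]) == 0
```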
A spacetime manifold \(M\) is a "projective manifold"
\((M, \Pi)\), if in addition to its manifold structure (differential
topological structure), it is also endowed with a projective structure
\(\Pi\) that assigns to each of its manifold points 64 coefficients
\(\Pi^{\,i}\_{jk}(x^{i}(p))\)
satisfying certain symmetry
conditions.[42]
These projective coefficients characterize the equivalence class
[\(\Gamma\)] of projectively equivalent connections, that is,
connections equivalent under the *projective transformation*
(7).
In physical spacetime the projective structure has an immediate
intuitive significance according to Weyl. The real world is a
non-empty spacetime filled with an inertial-gravitational field,
which Weyl calls the *guiding field*
(*Führungsfeld*)[43].
It is an indubitable fact, according to Weyl (1923a, 13), that a body
which is let free in a certain spacetime direction (time-like
direction) carries out a uniquely determined natural motion from
which it can only be diverted through an external force. The process
of autoparallelism of direction appears, thus, as the tendency of
persistence of the spacetime direction of a free particle whose
motion is governed by what Weyl calls the *guiding field* (*Führungsfeld*). This natural
motion occurs on the basis of an effective infinitesimal tendency of
persistence, that parallelly displaces the spacetime direction
\(R\) of a body at an arbitrary point \(p\) on its trajectory
to a neighbouring point \(p'\) that lies in the direction
\(R\) at \(p\).
If external forces exert themselves on a body, then a motion results
which is determined through the conflict between the tendency of
persistence due to the guiding field and the force. The tendency of
persistence of the guiding field is a type of constraining guidance,
that the inertial-gravitational field exerts on every body. Weyl
(1923b, 219) says:
>
>
> Galilei's inertial law shows, that there exists a type of
> constraining guidance in the *world* [spacetime] that imposes
> on a body that is let free in some definite world direction a unique
> natural motion from which it can only be diverted through external
> forces; this occurs on the basis of an effective infinitesimal
> tendency of persistence from point to point, that
> *auto-parallelly* transfers the world direction \(r\) of
> the body at an arbitrary point \(P\) to an infinitesimally close
> neighboring point \(P'\), that lies in the direction
> \(r\) at \(P\).
>
>
>
#### 4.1.3 Conformal Geometry, Weyl Geometry, and Weyl's Unified Field Theory
Shortly after the completion of the general theory of relativity in
1915, Einstein, Weyl, and others began to work on a unified field
theory. It was natural to assume at that
time[44]
that this task would only involve the unification of gravity and
electromagnetism. In Einstein's geometrization of gravity, the
Newtonian gravitational potential, and the Newtonian gravitational
force, are respectively replaced by the components of the metric
tensor \(g\_{ij}(x)\), and the components of
the symmetric linear connection \(\Gamma^{i}\_{jk}(x)\). In the general
theory of relativity the gravitational field is thus accounted for in
terms of the curvature of spacetime, but the electromagnetic field
remains completely unrelated to the spacetime geometry. Einstein's
mathematical formulation of his theory of general relativity does
not, however, provide room for the geometrization of the other long
range force field, the electromagnetic
field.[45]
It was therefore natural to ask whether nature's only two long range
fields of force have a common origin. Consequently, it was quite
natural to suggest that the electromagnetic field might also be
ascribed to some property of spacetime, instead of being merely
something embedded in spacetime. Since, however, the components
\(g\_{ij}(x)\) of the metric tensor are
already sufficiently determined by Einstein's field equations, this
would require setting up a more general differential geometry than
the one which underlies Einstein's theory, in order to make room for
incorporating electromagnetism into spacetime geometry. Such a
generalized differential geometry would describe both long range
forces, and a new theory based on this geometry would constitute a
unified field theory of electromagnetism and gravitation.
In 1918, Weyl proposed such a theory. In Weyl (1918a, 1919a), and in
the third edition (1920) of *Raum-Zeit-Materie*, Weyl
presented his ingenious attempt to unify gravitation and
electromagnetism by constructing a gauge-invariant geometry (see
below), or what he called a *purely infinitesimal
'metric' geometry*. Since the conformal structure
\(C\) (see below) of spacetime does not determine a unique
symmetric linear connection \(\Gamma\) but only an equivalence class
\(K = [\Gamma]\) of *conformally equivalent symmetric linear
connections*, Weyl was able to show that this degree of freedom
in a conformal structure of spacetime provides just enough room for
the geometrization of the electromagnetic potentials. The resulting
geometry, called a *Weyl geometry*, is an intermediate
geometric structure that lies between the conformal and Riemannian
structures.[46]
The metric tensor field that is locally described by
\[\tag{8}
ds^{2} = g\_{ij}(x(p))dx^{i}dx^{j},
\]
is characteristic of a Riemannian geometry. That geometry requires of
the symmetric linear connection \(\Gamma\) that the infinitesimal
parallel transport of a vector always preserves the length of the
vector. Therefore, the metric field in Riemannian geometry determines
a unique symmetric linear connection, a "metric
connection" that satisfies the length-preserving condition of
parallel transport. This means that the metric field, locally
represented by (8), is invariant under parallel transport. The
coefficients of this unique symmetric linear *metric* connection are given by
\[\tag{9}
\Gamma^i\_{jk} = \frac{1}{2} g^{ir}(g\_{rj,k} + g\_{kr,j} - g\_{jk,r}).
\]
If \(v\_{p}\) is a vector at \(p \in M\), its squared length is
\[\tag{10}
\lvert v\_p \rvert^2 = g\_{ij}(x(p))v^{\,i}\_p v^{\,j}\_p.
\]
Moreover, the angle between two vectors \(v\_{p}\)
and \(w\_{p}\) at \(p\in M\) is given by
\[\tag{11}
\cos \theta = \frac{g\_{ij}(x(p))v^{\,i}\_p w^{\,j}\_p}{\lvert v\_p \rvert \lvert w\_p \rvert}.
\]
While in Riemannian geometry the parallel transport of length is
*path independent*, that is, it is possible to compare the
lengths of any two vectors, even if they are located at two finitely
different points, a vector suffers a *path-dependent* change
in *direction* under parallel transport; that is, it is not
possible to define the angle between two vectors, located at
different points, in a path-independent way. Consequently, the angle
between two vectors at a given point is invariant under parallel
transport if and only if both vectors are transported *along the
same path*. In particular, a vector which is carried around a
closed circuit by a continual parallel displacement back to the
starting point, will have the same length, but will not in general
return to its initial direction.
![figure](fig5.png)
Figure 5: The parallel transport of a
vector by a two-dimensional creature, from \(A \rightarrow B
\rightarrow C \rightarrow A\) around a geodesic triangle on a
two-dimensional surface \(S^{2}\), ends up pointing in a different
direction upon returning to \(A\).
For a closed loop which circumscribes an infinitesimally small
portion of space, the rotation of the vector per unit area
constitutes the measure of the local curvature of space.
Consequently, whether or not finite parallel displacement of
direction is *integrable*, that is, path-independent, depends
on whether or not the curvature tensor vanishes.
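A numerical sketch (illustrative only, with an arbitrarily chosen loop) makes this concrete: integrating the transport equation \(dv^{i} = -\Gamma^{i}\_{jk}v^{\,j}dx^{k}\) around the latitude circle \(\theta = \pi/3\) of the unit 2-sphere preserves the vector's length, but the vector returns rotated by \(2\pi\cos\theta\), here a half-turn.

```python
import math

# Parallel transport around the latitude circle theta = pi/3 on the unit
# 2-sphere (metric ds^2 = dtheta^2 + sin^2(theta) dphi^2). Along the loop
# dtheta = 0, so dv^i/dphi = -Gamma^i_{j phi} v^j with
# Gamma^theta_{phi phi} = -sin(theta)cos(theta), Gamma^phi_{theta phi} = cot(theta).
theta0 = math.pi / 3

def rhs(v):
    vth, vph = v
    return (math.sin(theta0) * math.cos(theta0) * vph,
            -vth * math.cos(theta0) / math.sin(theta0))

def rk4_step(v, h):
    # one classical Runge-Kutta step for the transport equation
    k1 = rhs(v)
    k2 = rhs((v[0] + h / 2 * k1[0], v[1] + h / 2 * k1[1]))
    k3 = rhs((v[0] + h / 2 * k2[0], v[1] + h / 2 * k2[1]))
    k4 = rhs((v[0] + h * k3[0], v[1] + h * k3[1]))
    return (v[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

v = (1.0, 0.0)                 # unit vector pointing in the theta-direction
steps = 10000
h = 2 * math.pi / steps
for _ in range(steps):
    v = rk4_step(v, h)

# The length g_ij v^i v^j is preserved, but the vector comes back rotated
# by 2*pi*cos(theta0) = pi, i.e. pointing in the opposite direction.
length_sq = v[0] ** 2 + math.sin(theta0) ** 2 * v[1] ** 2
assert abs(length_sq - 1.0) < 1e-9
assert abs(v[0] + 1.0) < 1e-6 and abs(v[1]) < 1e-6
```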
According to Weyl, Riemannian geometry is not a pure or genuine
*infinitesimal* differential (metric) geometry, since it
permits the comparison of length at a finite distance. In his seminal
1918 paper entitled *Gravitation und Elektrizität*
(*Gravitation and Electricity*) Weyl (1918a) says:
>
>
> However, in the Riemannian geometry described above, there remains a
> last distant-geometric [ferngeometrisches] element--without any
> sound reason, as far as I can see; the only cause of this appears to
> be the development of Riemannian geometry from the theory of
> surfaces. The metric permits the comparison of length of two vectors
> not only at the same point, but also at any arbitrarily separated
> points. *A true near-geometry (Nahegeometrie), however, may
> recognize only a principle of transferring a length at a point to an
> infinitesimal neighbouring point*, and then it is no more
> reasonable to assume that the transfer of length from a point to a
> finitely distant point is integrable, than it was to assume that the
> transfer of direction is integrable.
>
>
>
Weyl wanted a metric geometry which would not permit *distance
comparison of length* between two vectors located at finitely
different points. In a *pure infinitesimal geometry*, Weyl
argued, if attention is restricted to a single point of the manifold,
then some standard of length or *gauge* must be chosen
arbitrarily before the lengths of vectors can be determined.
Therefore, all that is intrinsic to the notion of a pure
infinitesimal metric differential geometry is the ability to
determine the *ratios of the lengths* of any two vectors and
the angle between any two vectors, *at a point*. Such a pure
*infinitesimal* metric manifold must have at least a
*conformal* structure \(C\).
The defining characteristic of a conformal spacetime structure is
given by the equation
\[\tag{12}
0 = ds^2 = g\_{ij}(x(p))dx^i dx^{\,j},
\]
which determines the light cone at \(p\). A gauge transformation
of the metric is a map
\[
g\_{ij}(x(p)) \rightarrow \lambda(x(p))g\_{ij}(x(p))
= \overline{g}\_{ij}(x(p)),
\]
which preserves the metric up to a positive and smooth but otherwise
arbitrary scalar factor or gauge function
\(\lambda(x(p))\). In the case of a pseudo-Riemannian
structure such a gauge transformation leaves the light cones
unaltered. The angle between two vectors at \(p\) is given by
(11). Clearly, the gauge transformation
\(\overline{g}\_{ij}(x(p)) = \lambda(x(p))g\_{ij}(x(p))\)
is angle preserving, that is, conformal. Two metrics which are
related by a conformal gauge transformation are called
*conformally equivalent.* A conformal structure does not
determine the length of any one vector at a point. Only the relative
lengths, the ratio of lengths, of any two vectors
\(\frac{\lvert v\_p \rvert}{\lvert w\_p \rvert}\)
is determined.
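This invariance of ratios and angles can be verified symbolically (an illustrative sketch with sample numerical vectors and metric, not from the text): under \(g\_{ij} \rightarrow \lambda g\_{ij}\) with \(\lambda > 0\), each length scales by \(\sqrt{\lambda}\), so the cosine of the angle in (11) is unchanged.

```python
import sympy as sp

# Illustrative check: the angle formula (11) is invariant under the gauge
# transformation g -> lambda*g (lambda positive), although each individual
# length scales by sqrt(lambda). Metric and vectors are arbitrary samples.
lam = sp.symbols('lambda', positive=True)
g = sp.Matrix([[2, sp.Rational(1, 3)],
               [sp.Rational(1, 3), 1]])    # a sample symmetric metric
v = sp.Matrix([1, 2])                      # two sample vectors at a point
w = sp.Matrix([-1, 1])

def length(metric, u):
    return sp.sqrt((u.T * metric * u)[0])

def cos_angle(metric):
    return (v.T * metric * w)[0] / (length(metric, v) * length(metric, w))

# Lengths scale by sqrt(lambda); the angle does not change.
assert sp.simplify(length(lam * g, v) - sp.sqrt(lam) * length(g, v)) == 0
assert sp.simplify(cos_angle(lam * g) - cos_angle(g)) == 0
```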
Weyl exploited these features of the conformal structure, and
suggested that given a conformal structure, a *gauge* could be
chosen at each point in a smooth but otherwise arbitrary manner, such
that the metric (8) at any point of the manifold is conventional or
undetermined to the extent that the metric
\[\tag{13}
d\overline{s}^2 = \lambda(x(p))g\_{ij}(x(p))dx^{i}dx^{\,j}
\]
is equally valid.
However, a conformal structure by itself does not determine a unique
symmetric linear connection; it only determines an equivalence class
of conformally equivalent connections \(K = [\Gamma]\), namely,
connections which preserve the conformal structure \(C\) during
parallel transport. The difference between any two conformally
equivalent symmetric linear connections \(\overline{\Gamma}^{\,i}\_{jk}\),
\(\Gamma^i\_{jk} \in [\Gamma]\) is given by
\[\tag{14}
\overline{\Gamma}^{\,i}\_{jk} - \Gamma^i\_{jk}
= \frac{1}{2}(\delta^{\,i}\_j \theta\_k + \delta^{\,i}\_k \theta\_j
- g\_{jk}g^{ir}\theta\_{r}),
\]
where
\[\tag{15}
\theta\_{j}(x(p))dx^{\,j}
\]
is an arbitrary one-form field.
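A concrete instance of (14) arises when the two connections are the Levi-Civita connections of two conformally related metrics \(g\) and \(\overline{g} = \lambda g\): their difference has exactly the form (14) with \(\theta\_j = \partial\_j \log\lambda\). The following sympy sketch (my construction, not from the entry) verifies this for a simple choice of \(g\) and \(\lambda\):

```python
import sympy as sp

# Check: the Levi-Civita connections of g and g_bar = lam*g differ as in
# eq. (14) with theta_j = partial_j log(lam).

x, y = sp.symbols('x y')
coords = [x, y]
n = 2

g = sp.eye(2)                      # flat Euclidean metric
lam = sp.exp(x)                    # smooth positive gauge function
g_bar = lam * g
theta = [sp.diff(sp.log(lam), c) for c in coords]  # theta_j = d(log lam)/dx^j

def christoffel(metric):
    inv = metric.inv()
    Gamma = [[[0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Gamma[i][j][k] = sp.simplify(sum(
                    sp.Rational(1, 2) * inv[i, r] *
                    (sp.diff(metric[r, j], coords[k]) +
                     sp.diff(metric[k, r], coords[j]) -
                     sp.diff(metric[j, k], coords[r]))
                    for r in range(n)))
    return Gamma

G, G_bar = christoffel(g), christoffel(g_bar)
g_inv = g.inv()
for i in range(n):
    for j in range(n):
        for k in range(n):
            # right-hand side of eq. (14)
            rhs = sp.Rational(1, 2) * (
                (1 if i == j else 0) * theta[k] +
                (1 if i == k else 0) * theta[j] -
                g[j, k] * sum(g_inv[i, r] * theta[r] for r in range(n)))
            assert sp.simplify(G_bar[i][j][k] - G[i][j][k] - rhs) == 0
```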
Since the conformal structure determines only an equivalence class of
conformally equivalent symmetric linear connections \(K = [\Gamma]\),
the affine connection in this type of geometry is not uniquely
determined, and the parallel transport of vectors is not generally
well defined. Moreover, the ratio of the lengths of two vectors
located at different points is not determined even in a path-dependent
way. According to Weyl, it is a fundamental principle of infinitesimal
geometry that the metric structure on a manifold \(M\) determines a
unique affine structure on \(M\). As was pointed out earlier, this
principle is satisfied in Riemannian geometry where the metric
determines a unique symmetric linear connection, namely, the metric
connection according to (9). Evidently this fundamental principle of
infinitesimal geometry is not satisfied for a structure which is
merely a conformal structure, since the conformal structure only
determines an equivalence class of conformally equivalent symmetric
connections. Weyl showed that besides the conformal structure an
additional structure is required in order to determine a unique
symmetric linear connection from the equivalence class
\(K = [\Gamma]\) of conformally equivalent symmetric linear
connections. Weyl showed that this additional structure is provided by
the *length connection* or *gauge field* \(A\_{j}\) that
governs the *congruent displacement of lengths*. Weyl called
this additional structure the "metric connection" on a
manifold; however, we shall use the term "length
connection" instead, in order to avoid confusion with the modern
usage of the term "metric connection", which today denotes
the symmetric linear connection that is uniquely determined by a
Riemannian metric tensor according to (9).
**Weyl's Length Connection:**
A point \(p\) is *length connected* with its infinitesimal
neighborhood, if and only if for every length at \(p\), there is
determined at every point \(q\) infinitesimally close to \(p\) a
length to which the length at \(p\) gives rise when it
is *congruently* displaced from \(p\) to \(q\).
This definition merely says that a manifold is "length
connected" if it admits the process of infinitesimal congruent
displacement of length. The only condition imposed on the concept of
congruent displacement of length is the following:
**Congruent Displacement of Length:**
With respect to a choice of gauge for a neighborhood of \(p\), the
transport of a length \(l\_{p}\) at \(p\) to an infinitesimally
neighboring point \(q\) constitutes a
*congruent* displacement if and only if there exists a choice
of gauge for the neighborhood of \(p\) relative to which the
transported length
\(\overline{l}\_{q}\) has the same value as
\(\overline{l}\_{p}\); that is
\[\tag{16}
\overline{l}\_q - \overline{l}\_p = d\overline{l}\_p = 0.
\]
Weyl called such a gauge at \(p\) a *geodesic gauge* at
\(p\).[47]
Weyl's proof of the following theorem closely parallels the proof of
theorem A.3 in
the supplement on Weyl's metric independent construction of the affine connection.
**Theorem 4.1:**
If for every point \(p\) in a
neighborhood \(U\) of \(M\), there exists a choice of gauge
such that the change in an arbitrary length at \(p\) under
congruent displacement to an infinitesimally near point \(q\) is
given by
\[\tag{17}
d\overline{l}\_p = 0,
\]
then locally with respect to any other choice of gauge,
\[\tag{18}
dl = -lA\_j(x(p))dx^{\,j},
\]
and conversely.
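The forward direction of Theorem 4.1 amounts to a short gauge-change computation; the following sketch (my reconstruction, not Weyl's own derivation) fills it in:

```latex
% Let \bar{g} = \lambda g be the geodesic gauge at p, so that lengths
% rescale as \bar{l} = \lambda l, and congruent displacement gives
% d\bar{l}_p = 0. Then
\begin{align}
0 = d\bar{l}_p = d(\lambda l)_p
  &= \lambda\, dl_p + l_p\, d\lambda, \\
dl_p = -\,l_p\,\frac{d\lambda}{\lambda}
  &= -\,l_p\,\partial_j(\log\lambda)\,dx^{\,j}
   = -\,l_p\,A_j(x(p))\,dx^{\,j},
\end{align}
% which is eq. (18) with A_j = \partial_j \log\lambda. Conversely, given
% (18), a gauge with d\bar{l}_p = 0 can be chosen at any point p, since
% only the first-order condition \partial_j\log\lambda(p) = A_j(p) need
% be satisfied there.
```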
Making use of
\[\begin{align}
dv^{\,i}\_p &= -\Gamma^i\_{jk}(x(p))v^{\,j}\_pdx^k \\
l\_p &= g\_{ij}(x(p))v^{\,i}\_p v^{\,j}\_p \\
dl\_p &= -l\_p A\_j(x(p))dx^{\,j},
\end{align}\]
Weyl (1923a, 124-125) shows that the conformal structure
supplemented with the structure of a *length connection* or
*gauge field* \(A\_{j}(x)\) singles
out a unique connection from the equivalence class \(K = [\Gamma]\) of
conformally equivalent
connections.[48]
This unique connection, which is called the *Weyl connection*,
is given by
\[\begin{align}
\Gamma^{\,i}\_{jk} &= \frac{1}{2}g^{ir}(g\_{rj,k} + g\_{kr,j} - g\_{jk,r})
+\frac{1}{2} g^{ir}(g\_{rj}A\_{k} + g\_{kr}A\_{j} - g\_{jk}A\_{r}) \\
\tag{19}
&= \frac{1}{2}g^{ir}(g\_{rj,k} + g\_{kr,j} - g\_{jk,r})
+\frac{1}{2}(\delta^{\,i}\_j A\_{k} + \delta^{\,i}\_k A\_{j} - g\_{jk}g^{ir}A\_{r}),
\end{align}\]
which is analogous to (14). The first term of the Weyl connection is
identical to the metric connection (9) of Riemannian geometry,
whereas the second term represents what is new in a Weyl geometry.
The Weyl connection is invariant under the gauge transformation
\[\begin{align}
\overline{g}\_{ij}(x) &= e^{\theta(x)}g\_{ij}(x) \\
\tag{20}
\overline{A}\_j(x) &= A\_j(x) - \partial\_j \theta(x),
\end{align}\]
where the gauge function is \(\lambda(x) = e^{\theta(x)}\). Thus, a
conformal structure plus *length connection* or *gauge
field* \(A\_{j}(x)\) determines a *Weyl geometry* equipped with a unique *Weyl connection*. Therefore, the
fundamental principle of infinitesimal geometry also holds in a Weyl
geometry; that is, the metric structure of a Weyl geometry determines
a unique affine connection, namely, the Weyl connection.
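The gauge invariance of the Weyl connection (19) under (20) can be verified symbolically. The following sympy sketch (my construction, with an arbitrarily chosen metric, gauge field, and gauge function) builds the connection from \((g, A)\) and from \((\overline{g}, \overline{A})\) and confirms the two agree:

```python
import sympy as sp

# Check: the Weyl connection (19) built from (g, A) is unchanged under
# the gauge transformation (20): g -> e^theta g, A_j -> A_j - d_j theta.

x, y = sp.symbols('x y')
coords = [x, y]
n = 2

def weyl_connection(g, A):
    inv = g.inv()
    Gamma = [[[0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Levi-Civita part (first term of (19))
                lc = sum(sp.Rational(1, 2) * inv[i, r] *
                         (sp.diff(g[r, j], coords[k]) +
                          sp.diff(g[k, r], coords[j]) -
                          sp.diff(g[j, k], coords[r])) for r in range(n))
                # gauge-field part (second term of (19))
                extra = sp.Rational(1, 2) * (
                    (1 if i == j else 0) * A[k] +
                    (1 if i == k else 0) * A[j] -
                    g[j, k] * sum(inv[i, r] * A[r] for r in range(n)))
                Gamma[i][j][k] = sp.simplify(lc + extra)
    return Gamma

g = sp.diag(1, 1 + x**2)        # an arbitrary metric (my choice)
A = [x * y, sp.sin(x)]          # an arbitrary gauge field A_j (my choice)
theta = x**2 * y                # an arbitrary gauge function (my choice)

g_bar = sp.exp(theta) * g
A_bar = [A[j] - sp.diff(theta, coords[j]) for j in range(n)]

G = weyl_connection(g, A)
G_bar = weyl_connection(g_bar, A_bar)
for i in range(n):
    for j in range(n):
        for k in range(n):
            assert sp.simplify(G[i][j][k] - G_bar[i][j][k]) == 0
```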
In Weyl's physical interpretation of his purely infinitesimal metric
geometry (Weyl geometry), the gauge field
\(A\_{j}(x)\) is identified with the
electromagnetic four potential, and the electromagnetic field tensor
is given by
\[\tag{21}
F\_{jk}(x) = \partial\_{j} A\_{k}(x) - \partial\_{k} A\_{j}(x).
\]
A spacetime that is formally characterizable as a Weyl geometry would
not only have a *curvature of direction*
(*Richtungskrümmung*) but also a *curvature of length*
(*Streckenkrümmung*). Because of the latter property, the
congruent displacement of length would be non-integrable, that is,
path-dependent, in a Weyl geometry.
![figure](fig6.png)
Figure 6:
In a Weyl geometry parallel displacement of a vector along different
paths not only changes its direction but also its
length
Suppose physical spacetime corresponds to a Weyl geometry. Then two
identical clocks \(A\) and \(B\) at an event \(p\) with a
common unit of time, that is, a timelike vector of given length
\(l\_{p}\), which are separated and moved along
different world lines to an event \(q\), will not only differ with
respect to the elapsed time (first clock effect (i.e., relativistic
effect)), but in general the clocks will differ with respect to their
common unit of time (rate of ticking) at \(q\) (second clock
effect). That is, congruent time displacement in a Weyl geometry is
such that two congruent time intervals at \(p\) will not in
general be congruent at \(q\), when congruently displaced in
parallel along different world lines from \(p\) to \(q\), that
is, \(l^{A}\_{q} \ne l^{B}\_{q}\).
This means that a twin who travels to a distant star and then returns
to earth would not only discover that the other twin on earth had
aged much more, but also that all the clocks on earth tick at a
different rate. Hence, in the presence of a non-vanishing
electromagnetic field \(F\_{jk}(x)\) the
clock rates will not in general be the same; that is, there will be a
second clock effect in addition to the relativistic effect (first
clock effect). Thus, \(l^{A}\_{q} = l^{B}\_{q}\) if
and only if the curl of \(A\_{j}(x)\)
vanishes, that is, if and only if the electromagnetic field tensor
\(F\_{jk}(x)\) vanishes, namely,
\[
F\_{jk}(x) = \partial\_{j} A\_{k}(x) - \partial\_{k} A\_{j}(x) = 0.
\]
In that case the second term of the Weyl connection vanishes and (19)
reduces to the metric connection (9) of Riemannian geometry.
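Integrating (18) along a curve \(\gamma\) gives the transported length \(l(q) = l(p)\exp(-\int\_{\gamma} A\_j dx^{\,j})\), which is path independent exactly when \(F\_{jk}\) vanishes. The following numerical sketch (my construction, with arbitrarily chosen gauge fields) exhibits the second clock effect for a curl-carrying \(A\_j\) and its absence for a pure gradient:

```python
import numpy as np

# Transported length: l(q) = l(p) * exp(-∫_γ A_j dx^j); path independent
# iff F_jk = ∂_j A_k - ∂_k A_j = 0.

def transport_factor(A, path, steps=2000):
    # path: t in [0, 1] -> point in R^2; midpoint-rule line integral of A_j dx^j
    t = np.linspace(0.0, 1.0, steps + 1)
    pts = np.array([path(s) for s in t])
    mids = 0.5 * (pts[1:] + pts[:-1])
    dxs = np.diff(pts, axis=0)
    integral = sum(float(A(m) @ d) for m, d in zip(mids, dxs))
    return np.exp(-integral)

# Two paths from (0,0) to (1,1): across first, then up; and up first, then across.
via_x_first = lambda s: np.array([min(2 * s, 1.0), max(2 * s - 1.0, 0.0)])
via_y_first = lambda s: np.array([max(2 * s - 1.0, 0.0), min(2 * s, 1.0)])

# Case 1: A = (0, x) has F_xy = 1 != 0  ->  path-dependent transport
A_curl = lambda p: np.array([0.0, p[0]])
f1 = transport_factor(A_curl, via_x_first)
f2 = transport_factor(A_curl, via_y_first)
assert abs(f1 - np.exp(-1.0)) < 1e-3 and abs(f2 - 1.0) < 1e-3  # "second clock effect"

# Case 2: A = grad(theta), theta = x*y, has F_jk = 0  ->  integrable transport
A_grad = lambda p: np.array([p[1], p[0]])
g1 = transport_factor(A_grad, via_x_first)
g2 = transport_factor(A_grad, via_y_first)
assert abs(g1 - g2) < 1e-6          # same length along both paths
```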
In a Weyl geometry there are no ideal absolute "meter
sticks" or "clocks". For example, the rate at which
any clock measures time is a function of its history. However, as
Einstein pointed out in a *Nachtrag* (addendum) to Weyl
(1918a), it is precisely this situation which suggests that Weyl's
geometry conflicts with experience. In Weyl's geometry, the frequency
of the spectral lines of atomic clocks would depend on the location
and past histories of the atoms. But experience indicates otherwise.
The spectral lines are well-defined and sharp; they appear to be
independent of an atom's history. Atomic clocks define units of time,
and experience shows they are integrably transported. Thus, if we
assume that the atomic time and the gravitational standard time are
identical, and that the gravitational standard time is determined by
the Weyl geometry, then the electromagnetic field tensor is zero. But
if that is the case, then a Weyl geometry reduces to the standard
Riemannian geometry that underlies general relativity, since the
vanishing of Weyl's *Streckenkrümmung* (*length curvature*) is necessary and sufficient for the
existence of a Riemannian metric \(g\_{ij}\).
When
quantum theory was developed a few years later it became clear that
Weyl's theory was in conflict with experience in an even more
fundamental way since there is a direct relation between clock rates
and masses of particles in quantum theory. A particle with a certain
rest mass \(m\) possesses a natural frequency which is a function
of its rest mass, the speed of light \(c\), and Planck's constant
\(h\). This means that in a Weyl geometry not only clocks would
depend on their histories but also the masses of particles. For
example, if two protons have different histories then they would also
have different masses in a Weyl geometry. But this violates the
quantum mechanical principle that particles of the same kind--in
this case, protons--have to be exactly identical.
However, in 1918 it was still possible for Weyl to defend his theory
in the following way. In response to Einstein's criticism Weyl noted
that atoms, clocks and meter sticks are complex objects whose real
behavior in arbitrary gravitational and electromagnetic fields can
only be inferred from a dynamical theory of matter. Since no detailed
and reliable dynamical models were available at that time, Weyl could
argue that there is no reason to assume that, for example, clock
rates are correctly modelled by the length of a timelike vector. Weyl
(1919a, 67) said:
>
>
> At first glance it might be surprising that according to the purely
> close-action geometry, length transfer is non-integrable in the
> presence of an electromagnetic field. Does this not clearly contradict
> the behaviour of rigid bodies and clocks? The behaviour of these
> measurement instruments, however, is a physical process whose course
> is determined by natural laws and as such has nothing to do with the
> ideal process of 'congruent displacement of spacetime
> distance' that we employ in the mathematical construction of the
> spacetime geometry. The connection between the metric field and the
> behaviour of rigid rods and clocks is already very unclear in the
> theory of Special Relativity if one does not restrict oneself to
> quasi-stationary motion. Although these instruments play an
> indispensable role in praxis as indicators of the metric field, (for
> this purpose, simpler processes would be preferable, for example, the
> propagation of light waves), it is clearly incorrect
> to *define* the metric field through the data that are directly
> obtained from these instruments.
>
>
>
Weyl elaborated this idea by suggesting that the dynamical nature of
such time keeping systems was such that they
continually *adapt* to the spacetime structure in such a way
that their rates remain constant. He distinguished between quantities
that remain constant as a consequence of such *dynamical
adjustment*, and quantities that remain constant
by *persistence* because they are isolated and undisturbed. He
argued that all quantities that maintain a perfect constancy probably
do so as a result of *dynamical adjustment*. Weyl (1921a, 261)
expressed these ideas in the following way:
>
>
> What is the cause of this discrepancy between the idea of congruent
> transfer and the behaviour of measuring-rods and clocks? I
> differentiate between the determination of a magnitude in Nature by
> "persistence" (*Beharrung*) and by
> "adjustment" (*Einstellung*). I shall make the
> difference clear by the following illustration: We can give to the
> axis of a rotating top any arbitrary direction in space. This
> arbitrary original direction then determines for all time the
> direction of the axis of the top when left to itself, by means of a
> *tendency of persistence* which operates from moment to
> moment; the axis experiences at every instant a parallel
> displacement. The exact opposite is the case for a magnetic needle in
> a magnetic field. Its direction is determined at each instant
> independently of the condition of the system at other instants by the
> fact that, in virtue of its constitution, the system *adjusts* itself in an unequivocally determined manner to the field in
> which it is situated. *A priori* we have no ground for
> assuming as integrable a transfer which results purely from the
> tendency of persistence. ...Thus, although, for example,
> Maxwell's equations demand the conservational equation
> \(de\,/\,dt =0\) for the charge \(e\) of an
> electron, we are unable to understand from this fact why an electron,
> even after an indefinitely long time, always possesses an unaltered
> charge, and why the same charge \(e\) is associated with all
> electrons. This circumstance shows that the charge is not determined
> by persistence, but by adjustment, and that there can exist only
> *one* state of equilibrium of the negative electricity, to
> which the corpuscle adjusts itself afresh at every instant. For the
> same reason we can conclude the same thing for the spectral lines of
> atoms. The one thing common to atoms emitting the same frequency is
> their constitution, and not the agreement of their frequencies on the
> occasion of an encounter in the distant past. Similarly, the length
> of a measuring-rod is obviously determined by adjustment, for I could
> not give *this* measuring-rod in *this* field-position
> any other length arbitrarily (say double or treble length) in place
> of the length which it now possesses, in the manner in which I can at
> will pre-determine its direction. The theoretical possibility of a
> determination of length by adjustment is given as a consequence of
> the *world-curvature*, which arises from the metrical field
> according to a complicated mathematical law. As a result of its
> constitution, the measuring-rod assumes a length which possesses this
> or that value, *in relation to the radius of curvature of the
> field*.
>
>
>
Weyl's response to Einstein's criticism that a Weyl geometry
conflicts with experience, took advantage of the fact that the
underlying dynamical laws of matter which govern clocks and rigid
rods, were not known at that time. Weyl could thus argue that it is
at least *theoretically* possible that there exists an
underlying dynamics of matter, such that a Weyl geometry, according
to which length transfer is non-integrable, nonetheless coheres with
*observable* experience, according to which length transfer
*appears* to be integrable. However, as was clearly pointed
out by Wolfgang Pauli, Weyl's plausible defence comes at a
cost.[49]
Pauli (1921/1958, 196) argued that Weyl's defence of his theory
deprives it of its inherent convincing power from a physical point of
view.
>
>
> Weyl's present attitude to this problem is the
> following: *The ideal process of the congruent transference of
> world lengths ... has nothing to do with the real behaviour of
> measuring rods and clocks; the metric field must not be defined by
> means of information taken from these measuring instruments.* In
> this case the quantities \(g\_{ik}\) and \(\varphi\_{i}\) are, by
> definition, no longer observable, in contrast to the line elements of
> Einstein's theory. This relinquishment seems to have very
> serious consequences. While there now no longer exists a direct
> contradiction with experiment, the theory appears nevertheless to have
> been robbed of its inherent convincing power, from a physical point of
> view. For instance, the connexion between electromagnetism and world
> metric is not now essentially physical, but purely formal. For there
> is no longer an immediate connection between the electromagnetic
> phenomena and the behaviour of measuring rods and clocks. There is
> only an interrelation between the former and the ideal process which
> is mathematically defined as congruent transference of
> vectors. Besides, there exists only formal, and not physical, evidence
> for a connection between world metric and
> electricity.[50]
>
>
>
>
Pauli concluded his critical assessment of Weyl's theory with the
following statement:
>
> *Summarizing, we can say that Weyl's theory has not succeeded in
> getting any nearer to solving the problem of the structure
> of matter.* As will be argued in more detail ... there is,
> on the contrary, something to be said for the view that a solution of
> this problem cannot at all be found in this way.
>
>
>
It should be noted, however, that Weyl's defence of his theory
implicitly addresses an important methodological consideration
concerning the relation between theory and evidence. As Pauli puts it
above, according to Weyl "*the metric field must not be
defined by means of information taken from these measuring
instruments* [rigid rods and ideal clocks]". That is,
Weyl rejects Einstein's *operational* standpoint which gives
*operational significance* to the metric field in terms of the
*observable* behaviour of ideal rigid rods and ideal
clocks.[51]
Unlike light propagation and freely falling (spherically symmetric,
neutral) particles, rigid rods and ideal clocks are relativistically
ill-defined probative systems, and are thus unsuitable for the
determination of the inherent structures of spacetime postulated by
the theory of relativity. Weyl (1918a) clearly recognized this when
he said in response to Einstein's critique "because of the
problematic behaviour of yardsticks and clocks I have in my book
*Space-Time-Matter* restricted myself for the specific
measurement of the \(g\_{ik}\), exclusively to the
observation of the arrival of light signals." It is interesting
to note parenthetically that in the first edition of his book Weyl
thought that it was possible to have an intrinsic method of comparing
the lengths of arbitrary spacetime intervals with an interval between
two fiducial spacetime events, by using light signals only. It was
Lorentz who pointed out to Weyl that not only the world lines of
light rays but also the world lines of material bodies are required
for an intrinsic method of comparing lengths. Not only did Weyl
correct this mistake in subsequent editions, but already in 1921,
Weyl (1921c) discovered the causal-inertial method for determining
the spacetime metric (see §4.3) by proving an important theorem
that shows that the spacetime metric is already fully determined by
the inertial and causal structure of spacetime. Weyl (1949a, 103)
remarks "... therefore mensuration need not depend on
clocks and rigid bodies but ... light signals and mass points
moving under the influence of inertia alone will suffice." It
is clear that Weyl regarded the use of clocks and rigid rods as an
undesirable makeshift within the context of the special and general
theory. Since neither spatial nor temporal intervals are invariants
of spacetime, the invariant spacetime interval \(ds\) cannot be
directly ascertained by means of standard clocks and rigid rods. In
addition, the latter presuppose quantum theoretical principles for
their justification and therefore lie outside the relativistic
framework because the laws which govern their physical processes are
not
known.[52]
Weyl (1929c, 233) abandoned his unified field theory only with the
advent of the quantum theory of the electron. He did so because in
that theory a different kind of gauge invariance associated with
Dirac's theory of the electron was discovered, which more adequately
accounted for the conservation of electric charge. Weyl's
contributions to quantum mechanics, and his construction of a new
principle of gauge invariance, are discussed in
§4.5.3.[53]
Weyl's unified field theory was revived by Dirac (1973) in a slightly
modified form, which incorporated a real scalar field
\(\beta(x)\). Dirac also argued that the time intervals measured
by atomic clocks need not be identified with the lengths of timelike
vectors in the Weyl
geometry.[54]
### 4.2 The Riemann-Helmholtz-Lie Problem of Space
Prior to the works of Gauss, Grassmann and Riemann, the study of
geometry tended to emphasize the employment of empirical intuitions
and images of three-dimensional physical space. Physical space
was thought of as having definite metrical attributes. The task of
the geometer was to take physical mensuration devices in that space
and work with them.
Under the influence of Gauss and Grassmann, Riemann's great
philosophical contribution consisted in the demonstration that,
unlike the case of a discrete manifold, where the determination of a
set necessarily implies the determination of its quantity or cardinal
number, in the case of a continuous manifold, the concept of such a
manifold and of its continuity properties, can be separated from its
metrical structure. Using modern terminology, Riemann separated a
manifold's local differential topological structure from its metrical
structure. Thus Riemann's separation thesis gave rise to the
*space problem*, or as Weyl called it, *das
Raumproblem*: how can metric relations be determined on a
continuous manifold \(M\)?
A metric manifold is a manifold on which a distance function
\(f : M \times M \rightarrow \mathbb{R}\) is defined. Such a
distance function must satisfy the following minimal conditions: for
all \(p, q, r \in M\),
1. \(f(p, q) \ge 0\), and if \(f(p,q) = 0\), then \(p = q\),
2. \(f(p, q) = f(q,p)\), (symmetry)
3. \(f(p, q) + f(q,r) \ge f(p,r)\) (triangle inequality).
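For a concrete instance, the ordinary Euclidean distance on the plane satisfies all three conditions; a minimal check (my example, not from the entry):

```python
import itertools
import math

# Verify the three metric-space axioms for the Euclidean distance on R^2
# over a small set of sample points.

def d(p, q):
    return math.dist(p, q)  # Euclidean distance

pts = [(0.0, 0.0), (1.0, 0.0), (0.3, 2.0), (-1.5, 0.5)]
for p, q, r in itertools.product(pts, repeat=3):
    assert d(p, q) >= 0
    assert (d(p, q) == 0) == (p == q)              # non-degeneracy
    assert math.isclose(d(p, q), d(q, p))          # symmetry
    assert d(p, q) + d(q, r) >= d(p, r) - 1e-12    # triangle inequality
```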
In his famous inaugural lecture at Göttingen, entitled
*Über die Hypothesen, welche der Geometrie zu*
*Grunde liegen* (*About the hypotheses which lie at the
basis of geometry*), Riemann (1854) examined how metric relations
can be determined on a continuous manifold; that is, what specific
form \(f : M \times M \rightarrow \mathbb{R}\) should
have. Consider the coordinates \(x^{i}(p)\)
and \(x^{i} (p) + dx^{i} (p)\) of two neighboring
points \(p, q \in M\). The measure of the
distance \(ds = f(p,q)\) must
be some function \(F\_{p}\) at \(p\) of the
differential increments \(dx^{i}(p)\); that is,
\[\tag{22}
ds = F\_{p}(dx^{1}(p), \ldots ,dx^{n}(p)).
\]
Riemann states that \(F\_{p}\) should satisfy the
following requirements:
**Functional Homogeneity:**
If \(\lambda \gt 0\) and
\(ds = F\_{p}(dx(p))\), then
\[\tag{23}
\lambda ds = \lambda F\_{p}(dx(p)) = F\_{p}(\lambda dx(p)).
\]
**Sign Invariance:**
A change in sign of the
differentials should leave the value of \(ds\) invariant.
Sign invariance is satisfied by every positive homogeneous function
of degree \(2m\) \((m = 1, 2, 3, \ldots)\). In the simplest
case, \(m = 1\), the length element \(ds\) is the square
root of a homogeneous function of second degree, which can be
expressed in the standard form
\[\tag{24}
ds = \left[
\sum\_{i=1}^n (dx^{\,i}(p))^{2}
\right]^{\bfrac{1}{2}}.
\]
That is, at each point of \(M\) there exists a coordinate system
(defined up to an element of the orthogonal group \(O(n)\))
in which the square root of the homogeneous function of second degree
can be expressed in the above standard form. Riemann's well-known
general expression for the measure of length at \(p \in M\) with
respect to an arbitrary coordinate system is given by
\[\tag{25}
ds^{2} = g\_{ij}(x(p))dx^{i}(p)dx^{\,j}(p),
\]
where the components of the metric tensor satisfy the symmetry
condition \(g\_{ij} = g\_{ji}\).
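That any positive-definite quadratic form \(g\_{ij}dx^{i}dx^{\,j}\) can be brought to the standard form (24) by a linear, homogeneous transformation of the differentials can be checked concretely; the following sympy sketch (my construction) uses a Cholesky factorization \(g = LL^{T}\):

```python
import sympy as sp

# A positive-definite quadratic form g_ij dx^i dx^j reduces to the
# standard form (24), sum_i (dy^i)^2, under the linear change of
# differentials dy = L^T dx, where g = L * L^T (Cholesky factorization).

g = sp.Matrix([[4, 2], [2, 3]])      # symmetric, positive definite (my choice)
L = g.cholesky()                      # g = L * L.T
dx1, dx2 = sp.symbols('dx1 dx2')
dx = sp.Matrix([dx1, dx2])

ds2 = sp.expand((dx.T * g * dx)[0])   # g_ij dx^i dx^j
dy = L.T * dx                         # transformed differentials
ds2_standard = sp.expand(sum(c**2 for c in dy))
assert sp.simplify(ds2 - ds2_standard) == 0
```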
The assumption that \(ds^{2} = F^{2}\_{p}\) is a
quadratic differential form is not only the simplest one, but also
the preferred one for other important reasons. Riemann himself was
well aware of other possibilities; for example, the possibility that
\(ds\) could be the 4th root of a homogeneous polynomial of 4th
order in the differentials. But Riemann restricted himself to the
special case \(m = 1\) because he was pressed for time and because he
wanted to give specific geometric interpretations of his results. As
Weyl points out, Riemann's own answer to the space problem is
inadequate, since Riemann's mathematical justification for the
restriction to the Pythagorean case is not very compelling. The
first satisfactory justification of the Pythagorean form of the
Riemannian metric, although limited in scope because it presupposed
the full homogeneity of Euclidean space, was provided by the
investigations of Hermann von Helmholtz. Helmholtz diverged from
Riemann's analytic approach and made use merely of the fundamental
concept of geometry, namely, the concept of *congruent
mapping*, and characterized the geometric structure of space by
requiring of space the full homogeneity of Euclidean space. His
analysis was thereby restricted to the cases of constant positive,
zero, or negative curvature. Abstracting from our experience of the
movement of rigid bodies, Helmholtz was able to mathematically derive
Riemann's distance formula from a number of axioms about rigid body
motion in space. Helmholtz (1868) argued that Riemann's hypothesis
that the metric structure of space is determined locally by a
quadratic differential form, is really a consequence of the facts
(*Tatsachen*) of rigid-body motion.
Considering the general case of \(n\) dimensions, and using Lie
groups and Lie algebras, Sophus Lie (1886/1935, 1890a,b)
later developed and improved Helmholtz's justification. However, the
Helmholtz-Lie treatment of, and solution to, the problem of space,
lost its relevance with the arrival of Einstein's theory of general
relativity. As Weyl (1922b) points out, instead of a
three-dimensional continuum we must now consider a four-dimensional
continuum, the metric of which is not *positive definite* but
is given instead by an *indefinite* quadratic form. In
addition, Helmholtz's presupposition of metric homogeneity no longer
holds, since we are now dealing with an inhomogeneous metric field
that causally depends on the distribution of matter. Consequently,
Weyl provided a reformulation of the space problem that is compatible
with the causal and metric structures postulated by the theory of
general relativity. But Weyl went further. Such a reformulation
should not only incorporate Riemann's *infinitesimal*
standpoint, as required by Einstein's general theory, it should also
cohere with Weyl's requirements of a *pure infinitesimal geometry* developed earlier in the
context of Weyl's construction of a unified field theory.
More precisely, Weyl generalized the so-called
*Riemann-Helmholtz-Lie* problem of space in two ways: First,
he allowed for *indefinite metrics* in order to encompass the
general theory of relativity. Secondly, he considered metrics with
variable gauge \(\lambda(x(p))\) together with an
associated *length connection*, in order to obtain a
*purely* infinitesimal geometry. Thus each member of a general
class of geometries under consideration is locally determined
relative to a choice of variable gauge by two structural fields
(Strukturfelder): (1) a possibly indefinite *Finsler* metric
field[55]
\(F\_{p}(dx)\), and (2) a *length connection* that is
determined by a 1-form field \(\theta\_{i}dx^{i}\). Weyl's task
was to prove:
>
>
> If the geometry satisfies the *Postulate of Freedom*, (the
> nature of space imposes no restrictions on admissible metrical
> relations), and determines a unique, symmetric, linear connection
> \(\Gamma\), then the Finsler metric field
> \(F\_{p}(dx)\) must be a Riemannian metric field of
> some
> signature.[56]
>
>
>
>
In a Riemannian space the concept of parallel displacement is defined
by two conditions:
1. The components of a vector remain unchanged during an infinitesimal
parallel displacement in a suitably chosen coordinate system (geodesic
coordinates).[57]
This condition is satisfied if
\[\begin{matrix}
dv^{\,i}\_{p} = - \Gamma^{i}\_{jk} v^{\,j}\_{p} dx^{k}\_{p}
& \text{and} &
\Gamma^{i}\_{jk} = \Gamma^{i}\_{kj}\,.
\end{matrix}\]
2. The length of a vector remains unchanged during an infinitesimal
parallel displacement.
It follows from these conditions that a Riemannian space possesses a
definite symmetric linear connection--a symmetric linear
*metric*
connection[58]--which
is uniquely determined by the Pythagorean-Riemannian metric. Weyl
calls this:
**The Fundamental Postulate of Riemannian Geometry:**
Among the *possible* systems of parallel displacements of a
vector to infinitely near points, that is, among the *possible* sets of symmetric linear connection coefficients, there exists
one and only one set, and hence one and only one system of parallel
displacement, which is *length preserving*.
In his
lectures[59]
on the mathematical analysis of the problem of space delivered in
1922 at Barcelona and Madrid, Weyl sketched a proof demonstrating
that the following is also true:
**Uniqueness of the Pythagorean-Riemannian Metric:**
Among all the possible infinitesimal metrics that can be put on a
differentiable manifold, the Pythagorean-Riemannian metric is the only
type of metric that uniquely determines a symmetric linear
connection.
Weyl begins his proof with two natural assumptions. First, the
*nature* of the metric should be coordinate independent. If
\(ds\) is given by an expression
\(F\_{p}(dx^{1}, \ldots ,dx^{n})\) with respect to a given system
of coordinates, then with respect to another system of coordinates,
\(ds\) is given by a function that is related to
\(F\_{p}(dx^{1}, \ldots ,dx^{n})\) by a linear, homogeneous
transformation of its arguments \(dx^{i}\).
Second, it is reasonable to assume that the *nature* of the
metric is the same everywhere, in the sense that at every point of
the manifold, and with respect to every coordinate system for a
neighborhood of the point in question, \(ds\) is represented by an
element of the equivalence class \([F]\) of functions generated by
any one such function, say
\(F\_{p}(dx^{1}, \ldots ,dx^{n})\), by all linear, homogeneous
transformations of its arguments \(dx^{i}\).
For the case in which \(F\_{p}\) is Pythagorean in
form, namely the square root of a positive-definite quadratic form,
there exists just one possible equivalence class [\(F\)], because
every function that is the square root of a positive-definite
quadratic form can be transformed to the standard expression
\[\tag{26}
F = \left[(dx^{1})^{2} + \cdots + (dx^{n})^{2}\right]^{\bfrac{1}{2}}
\]
by means of a linear, homogeneous transformation.
To every possible equivalence class [\(F\)] of homogeneous
functions, there corresponds a *type* of metrical space. The
Pythagorean-Riemannian space, for which
\(F^{2}\_{p} = (dx^{1})^{2} + \cdots + (dx^{n})^{2}\), is one among
several types of possible metrical spaces. The problem, therefore, is
to single out the equivalence class \([F]\), where \(F\)
corresponds to
\(F^{2}\_{p} = (dx^{1})^{2} + \cdots + (dx^{n})^{2}\), from the other
possibilities, and to provide arguments for this preference.
By the term 'metric' Weyl means *any* infinitesimal *distance function*
\(F\_{p} \in [F]\), where the equivalence
class \([F]\) represents a *type* of metric structure or
metric field. Any such type of metric field structure has a
microsymmetry group \(G\_{p}\) at each \(p \in M\).
**Definition 4.1 (Microsymmetry Group)**
A microsymmetry of a structural field (Strukturfeld) at a point \(p
\in M\) is a local diffeomorphism that takes \(p \in M\) into \(p\)
and preserves the structural field at \(p \in M\). The microsymmetry
group of a field at \(p \in M\) is the group of its microsymmetries at
\(p \in M\) under the operation of composition.
A microsymmetry group \(G\_{p}\), at \(p \in M\), of a metric
structure, is a set of invertible, linear maps of the tangent space
\(T(M\_{p})\) onto itself, which preserve the infinitesimal distance
function at \(p \in M\). For every \(p \in M\), \(G\_{p}\) is isomorphic to
one and the same abstract group.
For a Riemannian type of metric structure the congruent linear maps
of the tangent space \(T(M\_{p})\) onto itself form
a group \(G\_{p}\) which is isomorphic to the
orthogonal group \(O(n)\). The Pythagorean-Riemannian
metric at \(p\) is therefore determined through the concrete
realization of the *orthogonal* group at \(p\) which leaves
the fundamental quadratic differential form at \(p\) invariant.
Thus the Pythagorean-Riemannian *type* of metric is
characterized by the abstract microsymmetry group \(O(n)\).
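The defining properties of such a microsymmetry group can be checked numerically. The following sketch (assuming NumPy; the orthogonal factor of a QR decomposition serves as a generic element of \(O(n)\)) verifies that orthogonal maps preserve the Pythagorean form and compose to orthogonal maps, i.e., that they form a group of congruent linear maps of the tangent space:

```python
import numpy as np

# A concrete microsymmetry at p: an orthogonal map of the tangent space.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # Q is an element of O(4)

assert np.allclose(Q.T @ Q, np.eye(4))             # Q is orthogonal
v = rng.standard_normal(4)
assert np.isclose(v @ v, (Q @ v) @ (Q @ v))        # preserves sum_i (dx^i)^2

# Composition of two microsymmetries is again a microsymmetry: O(n) is a group.
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose((Q @ Q2).T @ (Q @ Q2), np.eye(4))
```

Nothing here depends on the particular matrices chosen; the same identities hold for every element of \(O(n)\), which is what characterizes the Pythagorean-Riemannian type of metric.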
For a metric which is not of the Pythagorean-Riemannian metric type,
the abstract microsymmetry group \(G\_{p}\) will be
different from \(O(n)\) and will be some other subgroup of
\(GL(n)\). At each point of the manifold the microsymmetry
group will be a concrete realization of this subgroup of
\(GL(n)\). Weyl now states what he calls
**The Postulate of Freedom:**
If only the *nature* (of whatever type) of the metric is
specified, that is, if only the corresponding abstract microsymmetry
group \(G\_{p}\) is specified, and the metric in
question is otherwise left arbitrary, then the *mutual
orientations* of the corresponding microsymmetry groups at
different points are also left arbitrary.
Weyl emphasizes that the *Postulate of Freedom* provides the
general framework for a concise formulation of
**The Hypothesis of Dynamical Geometry:**
Whatever the nature or type of the metric may be--provided it
is the same everywhere--the variations in the *mutual
orientations* of the concrete microsymmetry groups from point to
point are causally determined by the material content that fills
space.
In contrast with Helmholtz's analysis, which presupposes the
homogeneity of space, the *Postulate of Freedom* replaces
Helmholtz's homogeneity requirement with the possibility of
subjecting the metric field to arbitrary, infinitesimal change.
*To assert this dynamical possibility does not require that the
nature of the metric be specified*.
Next, Weyl points out that what has been provided so far is merely an
explication of the concepts *metric*, *length
connection*, and *symmetric linear connection*. Some
claim which goes beyond conceptual analysis has to be made, according
to Weyl, in order to prove that among the various types of possible
metrical structures that can be put on a differentiable manifold
representing physical space, the Pythagorean-Riemannian form is
unique. Weyl suggests the following hypothesis:
**Weyl's Hypothesis:**
Whatever determination the essentially free length connection at some
point \(p\) of the manifold may realize with the points in its
infinitesimal neighborhood, there always exists among the possible
systems of parallel displacements of the tangent space
\(T(M\_{p})\), one and only one, which is at the same
time a system of infinitesimal *congruent transport*.
Weyl shows that this hypothesis does in fact single out metrics of
the Pythagorean-Riemannian type by proving the following theorem:
**Theorem 4.2**
If a specific length connection is such that it determines a unique
symmetric linear connection, then the metric must be of the
Pythagorean-Riemannian form (for some signature).
Thus the *Postulate of Freedom* and *Weyl's
Hypothesis* together entail the existence, at each \(p \in M\), of
a non-degenerate quadratic form that is unique up to a choice of gauge
at \(p \in M\), and that is invariant under the action of the
microsymmetry group \(G\_{p}\) that is isomorphic to an orthogonal
group of some signature.
This formulation suggests, according to Weyl, an intuitive contrast
between Euclidean 'distance-geometry' and the
'*near*-geometry' (*Nahegeometrie*) or
'*field*-geometry' of Riemann. Weyl (1949a, 88)
compared Euclidean 'distance-geometry' to a crystal
"built up from uniform unchangeable atoms in the rigid and
unchangeable arrangement of a lattice", and the latter
[Riemannian *field-geometry*] to a liquid, "consisting
of the same indiscernible unchangeable atoms, whose arrangement and
orientation, however, are mobile and yielding to forces acting upon
them."
The *nature* of the metric field, that is, the *nature* of the metric
everywhere, is the same and is, therefore,
absolutely determined. It reflects, according to Weyl, the *a
priori* structure of space or spacetime. In contrast, what is *a
posteriori*, that is, accidental and capable of continuous
change, being causally dependent on the material content that fills
space, are the mutual *orientations* of the metrics at
different points. Hence, the demarcation between the *a
priori* and the *a posteriori* has shifted, according
to Weyl: Euclidean geometry is still preserved for the infinitesimal
neighborhood of any given point, but the coordinate system in which
the metrical law assumes the standard form
\(ds^{2} =\sum^{n}\_{i=1}(dx^{i})^{2}\)
is in general different from place to place.
Weyl's *a priori* and *a posteriori* distinction must
not be confused with Kant's distinction. Weyl (1949a, 134) remarks:
"In the case of physical space it is possible to
counterdistinguish aprioristic and aposterioristic features in a
certain objective sense without, like Kant, referring to their
cognitive source or their cognitive character." Weyl makes the
same remark in (Weyl, 1922b, 266). See also the discussion in
§4.5.8.
In the context of his group-theoretical analysis, Weyl (1922b, p.
266) makes the following interesting and important statement:
>
>
> I remark from an epistemological point of view: it is not correct to
> say that space or the world [spacetime] is in itself, prior to any
> material content, merely a formless continuous manifold in the sense
> of analysis situs; the *nature* of the metric [its
> infinitesimal Pythagorean-Riemannian character] is characteristic of
> space in itself, only the mutual orientation of the metrics at the
> various points is contingent, a posteriori and dependent on the
> material content.
>
>
>
Within the context of general relativity, empty spacetime is
impossible, if 'empty' is understood to mean not merely
empty of all *matter* but also empty of all *fields*.
At another place, Weyl (1949a, Engl. edn, 172) says:
>
>
> Geometry unites organically with the field theory; space is not
> opposed to things (as it is in substance theory) like an empty vessel
> into which they are placed and which endows them with far-geometrical
> relationships. No empty space exists here; the assumption that the
> field omits a portion of the space is absurd.
>
>
>
According to Weyl, the metric field does not cease to exist in a world
devoid of matter but is in a state of rest: As a *rest* field
it would possess the property of *metric homogeneity*; the
mutual *orientations* of the *orthogonal* groups
characterizing the Pythagorean-Riemannian nature of the metric
everywhere would not differ from point to point. *This means that
in a matter-empty universe the metric is fixed. Consequently, the set
of congruence relations on spacetime is uniquely determined.*
Since the metric uniquely determines the symmetric linear connection,
the homogeneous *metric* field (rest field) determines
an *integrable* affine structure. Therefore, a flat Minkowski
spacetime consistent with the complete absence of matter is endowed
with an *integrable connection* and thus determines all
(hypothetical) free motions. According to Weyl, there exists in the
absence of matter a homogeneous metric field, a structural field
(*Strukturfeld*), which has the character of a *rest*
field, and which constitutes an all pervasive background that cannot
be eliminated. The structure of this *rest* field determines
the *extension* of the spacetime congruence relations and
determines Lorentz invariance. The *rest*
field possesses no net energy and makes no contribution to
curvature.
The contrast with Helmholtz and Lie is this: both of them require
homogeneity and isotropy for physical space. From a general
Riemannian standpoint, the latter characteristics are valid only for
a matter-empty universe. Such a universe is flat and Euclidean,
whereas a universe that contains matter is *inhomogeneous,
anisotropic and of variable curvature*.
It is important to note here that the validity of Weyl's assertion
that the metric field does not cease to exist but is in a state of
rest, has its source in the mathematical fact that the metric field
is a \(G\)-structure. A \(G\)-structure may be flat or non-flat; but a
\(G\)-structure can never vanish. Consequently, geometric fields
characterizable as \(G\)-structures, such as the projective, conformal,
affine and metric structures, do not
vanish.[60]
### 4.3 Weyl's Causal-Inertial Method for determining the Spacetime Metric
#### 4.3.1 Weyl's Field Ontology of Spacetime Geometry
Riemann searched for the most general type of an \(n\)-dimensional
manifold. On this manifold, Euclidean geometry turns out to be a
special case resulting from a certain form of the metric. Weyl takes
this general structure, the manifold structure, which has certain
continuity and order properties, as basic, but leaves the
determination of the other geometrical structures, such as the
projective, conformal, affine and metric structures, open. The
metrical axioms are no longer dictated, as they were for Kant, by
pure intuition. According to Weyl (1949a, 87), for Riemann the metric
is not, as it was for Kant, "part of the static homogeneous
form of phenomena, but of their ever changing material
content". Weyl (1931a, 338) says:
>
>
> We differentiate now between the amorphous continuum and its metrical
> structure. The first has retained its *a priori*
> character,[61]
> ... whereas the structural field [Strukturfeld] is completely
> subjected to the power-play of the world; being a real entity,
> Einstein prefers to call it the ether.
>
>
>
There is no indication in Riemann's work on gravitation and
electromagnetism that he anticipated the conceptual revolution
underlying Einstein's theory. However, Weyl's interpretation of
Riemann's work suggests that Riemann foresaw something like its
possibility in the following sense:
>
>
> By formally separating the post-topological structures such as the
> affine, projective, conformal and metric structures from the
> manifold, so that these structures are no longer rigidly tied to it,
> Riemann deprived them of their formal geometric rigidity and, on the
> basis of his infinitesimal geometric standpoint or
> "near-geometry", allowed for the possibility of
> interpreting them as mathematical representations of flexible,
> dynamical physical structural fields [Strukturfelder] on the manifold
> of spacetime, geometrical fields that reciprocally interact with
> matter.
>
>
>
Riemann's separation thesis together with his adoption of the
infinitesimal standpoint, were prerequisite steps for the development
of differential geometry as the mathematics of differentiable
geometric fields on manifolds. When interpreted physically, these
mathematical structures or geometrical fields correspond, as Weyl
says, to physical structural fields (Strukturfelder). Analogous to
the electromagnetic field, these structural fields act on matter and
are in turn acted on by matter. Weyl (1931a, 337) remarks:
>
>
> I now come to the crucial idea of the theory of General Relativity.
> *Whatever exerts as powerful and real effects as does the
> metric structure of the world, cannot be a rigid, once and for
> all, fixed geometrical structure of the world, but must
> itself be something real which not only exerts effects on
> matter but which in turn suffers them through matter.* Riemann
> already suggested for space the idea that the structural field, like
> the electromagnetic field, reciprocally interacts with matter.
>
>
>
Weyl (1931a, 338) continues:
>
>
> We already explained with the example of inertia, that the structural
> field [Strukturfeld] must, as a close-action [Nahewirkung], be
> understood infinitesimally. How this can occur with the metric
> structure of space, Riemann abstracted from Gauss's theory of curved
> surfaces.
>
>
>
The various geometrical fields are not "intrinsic" to the
manifold structure of spacetime. The manifold represents an amorphous
four-dimensional differentiable continuum in the sense of
*analysis situs* and has no properties besides those that fall
under the concept of a manifold.
The amorphous four-dimensional differentiable manifold possesses a
high degree of symmetry. Because of its homogeneity, all points are
alike; there are no objective geometric properties that enable one to
distinguish one point from another. This full homogeneity or symmetry
of space must be described by its group of *automorphisms*,
the one-to-one mappings of the point field onto itself which leave
all relations of objective significance between points undisturbed.
If a geometric object \(F\), that is a point set with a definite
relational structure is given, then those automorphisms of space that
leave \(F\) invariant, constitute a group and this group describes
exactly the symmetry which \(F\) possesses. For instance, to use
an example from Weyl (1938b) (see also Weyl (1949a, 72-73) and
Weyl (1952)): if
\(R(p\_{1},p\_{2},p\_{3})\) is a ternary relation asserting that
\(p\_{1},p\_{2},p\_{3}\)
lie on a straight line, then we require that any three points
satisfying \(R\) are mapped by an automorphism into
three other points
\(p\_{1}',p\_{2}',p\_{3}'\)
satisfying the same relation.
The group of automorphisms of the \(n\)-dimensional number space
contains only the identity map, since the elements of
\(\mathbb{R}^{n}\) are distinct individuals. It is
essentially for this reason that the real numbers are used for
coordinate descriptions. Whereas the continuum of real numbers
consists of individuals, the continua of space, time, and spacetime
are homogeneous. Spacetime points do not admit of an absolute
characterization; they can be distinguished, according to Weyl, only
by "a demonstrative act, by pointing and saying
here-now".
In a little book entitled *Riemanns geometrische Ideen, ihre
Auswirkung und ihre Verknüpfung mit
der Gruppentheorie*, published posthumously in 1988, Weyl (1988,
4-5) makes this interesting comment:
>
>
> Coordinates are introduced on the Mf [manifold] in the most direct
> way through the mapping onto the number space, in such a way, that
> all coordinates, which arise through one-to-one continuous
> transformations, are equally possible. With this the coordinate
> concept breaks loose from all special constructions to which it was
> bound earlier in geometry. In the language of relativity this means:
> The coordinates are not measured, their values are not read off from
> real measuring rods which react in a definite way to physical fields
> and the metrical structure, rather they are a priori placed in the
> world arbitrarily, in order to characterize those physical fields
> including the metric structure numerically. The metric structure
> becomes through this, so to speak, freed from space; it becomes an
> existing field within the remaining structure-less space. Through
> this, space as form of appearance contrasts more clearly with its
> real content: The content is measured after the form is arbitrarily
> related to coordinates.
>
>
>
By mapping a given spacetime homeomorphically onto the real number
space, providing, through the arbitrariness of the mapping, what Weyl
calls a qualitatively non-differentiated field of free
possibilities--the continuum of all possible
coincidences--we represent spacetime points by their coordinates
corresponding to some coordinate system. The four-dimensional
arithmetical space can be utilized as a four-dimensional schema for
the localization of events of all possible "here-nows".
Physical dynamical quantities in spacetime, such as the geometrical
structural fields on the four-dimensional spacetime continuum, are
describable as functions of a variable point which ranges over the
four-dimensional number space \(\mathbb{R}^{4}\). Instead of
thinking of the spacetime points as real substantival entities, and
any talk of fields as just a convenient way of describing geometrical
relations between points, one thinks of the geometrical fields such
as the projective, conformal-causal, affine and metric fields, as
real physical entities with dynamical properties, such as energy,
momentum and angular momentum, and the field points as mere
mathematical abstractions. Spacetime is not a medium in the sense of
the old ether concept. No ether in that sense exists here. Just as
the electromagnetic fields are not states of a medium but constitute
independent realities which are not reducible to anything else, so,
according to Weyl, the geometrical fields are independent irreducible
physical
fields.[62]
A class of geometric structural fields of a given type is
characterized by a particular Lie group. A geometric structural field
belonging to a given class has a microsymmetry group (see definition
4.1) at each point \(p \in M\) which is isomorphic to
the Lie group that is characteristic of the class. In relativity
theory, this microsymmetry group is isomorphic to the Lorentz group
and leaves invariant a pseudo-Riemannian metric of Lorentzian
signature.
The different types of geometric, structural fields may be
represented from a modern mathematical point of view as cross
sections of appropriate fiber bundles over the manifold \(M\);
that is, the amorphous manifold \(M\) has associated with it
various geometric fields in terms of a mapping of a certain kind
(called a cross section) from the manifold \(M\) to the
corresponding bundle space over
\(M\).[63]
In particular, Einstein's general theory of relativity postulates a
physical field, the metrical field, which, mathematically speaking,
may be characterized as a cross section of the bundle of
non-degenerate, second-order, symmetric, covariant tensors of Lorentz
signature over \(M\). Weyl (1931a, 336) says of this world
structure:
>
>
> However this structure is to be exactly and completely described and
> whatever its inner ground might be, all laws of nature show that it
> constitutes the most decisive influence on the evolution of physical
> events: the behavior of rigid bodies and clocks is almost exclusively
> determined through the metric structure, as is the pattern of the
> motion of a force-free mass point and the propagation of a light
> source. And only through these effects on the concrete natural
> processes can we recognize this structure.
>
>
>
The views of Weyl are diametrically opposed to geometrical
conventionalism and some forms of relationalism. According to Weyl,
we *discover* through the behavior of physical phenomena an
already determined metrical structure of spacetime. The metrical
relations of physical objects are determined by a physical field, the
metric field, which is represented by the second rank metric tensor
field. Contrary to geometric conventionalism, spacetime geometry is
not about rigid rods, ideal clocks, light rays or freely falling
particles, except in the derivative sense of providing information
about the physically real metric field which, according to Weyl, is as
physically real as is the electromagnetic field, and which determines
and explains the metrical behavior of congruence standards under
transport. The metrical field has physical *and* metrical
significance, and the metrical significance does not consist in the
mere articulation of relations obtaining between, say, rigid rods or
ideal clocks.
The special and general, as well as the non-relativistic spacetime
theories postulate various structural constraints which events are
held to satisfy. When interpreted physically, these mathematical
structures or constraints correspond to physical structural fields
(*Strukturfelder*). Analogous to the electromagnetic field,
these structural fields act on matter and are, within the context of
the general theory of relativity, in turn acted on by matter. An
\(n\)-dimensional manifold \(M\) whose sole properties are
those that fall under the concept of a manifold, Weyl (1918b)
physically interprets as an \(n\)-dimensional empty world, that
is, a world empty of both matter and fields. On the other hand, an
\(n\)-dimensional manifold \(M\) that is an affinely connected
manifold, Weyl physically interprets as an \(n\)-dimensional world
filled with a gravitational field, and an \(n\)-dimensional
manifold \(M\) endowed with a projective structure represents an
\(n\)-dimensional non-empty world filled with an
inertial-gravitational field, or what Weyl calls the *guiding
field* (*Führungsfeld*). In a
similar vein, an \(n\)-dimensional manifold \(M\) that
possesses a conformal structure of Lorentz type, represents a
non-empty \(n\)-dimensional world filled with a causal field.
Finally, an \(n\)-dimensional manifold \(M\) endowed with a
metrical structure may be interpreted physically as an
\(n\)-dimensional non-empty world filled with a metric field.
#### 4.3.2 The Causal and Inertial Field uniquely determine the Metric
The mathematical model of physical spacetime is the four-dimensional
pseudo-Riemannian manifold. Weyl (1921c) distinguished two
primitive substructures of that model, the *conformal* and
*projective* structures, and showed that the conformal
structure, modelling the *causal field* governing light
propagation, and the projective structure, modelling the inertial or
*guiding field* governing all free (fall) motions, uniquely
determine the metric. That is, Weyl (1921c) proved
**Theorem 4.3**
The projective and conformal structure of a metric space determine the
metric uniquely.
A metric \(g\) on a manifold determines a first-order conformal
structure on the manifold, namely, an equivalence class of
conformally related metrics
\[\tag{27}
[g] = \{\overline{g} \mid \overline{g} = e^{\theta} g \}.
\]
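The class (27) is exactly what fixes the infinitesimal light cones: a vector that is null for \(g\) is null for every conformally related metric \(e^{\theta} g\), since rescaling \(g\) rescales \(g(v,v)\) by a positive factor. A minimal numerical illustration, assuming NumPy and using the flat Minkowski metric purely as an example:

```python
import numpy as np

# Conformally related metrics share their null cones: if g(v, v) = 0,
# then (e^theta g)(v, v) = e^theta * 0 = 0 for any conformal factor.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # Minkowski metric, signature (-+++)
v = np.array([1.0, 1.0, 0.0, 0.0])          # a null (lightlike) vector

assert np.isclose(v @ eta @ v, 0.0)
for theta in (-2.0, 0.5, 3.0):
    gbar = np.exp(theta) * eta
    assert np.isclose(v @ gbar @ v, 0.0)    # still null in every conformal gauge
```

By contrast, the *lengths* of non-null vectors are rescaled, which is why the conformal structure alone determines causal relations but not the full metric.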
A metric \(g\) also uniquely determines a symmetric linear
connection \(\Gamma\) on the manifold. Under a conformal transformation
\[\tag{28}
g \rightarrow e^{\theta} g = \overline{g},
\]
the change of the components of the symmetric linear connection is
given by (14), that is,
\[\tag{29}
\Gamma^{i}\_{jk} \rightarrow \overline{\Gamma}^{i}\_{jk}
= \Gamma^{i}\_{jk} +
\frac{1}{2}(\delta^{i}\_{j}\theta\_{k} + \delta^{i}\_{k}\theta\_{j} - g\_{jk}g^{ir}\theta\_{r}).
\]
Thus the set of all arbitrary conformal transformations of the metric
induces an equivalence class \(K\) of conformally related
symmetric linear connections. This equivalence class \(K\)
constitutes a second-order conformal structure on the manifold and
the difference between any two connections in the equivalence class
is given by (29). Weyl shows that a conformal transformation (28)
preserves the projective structure, and hence is a projective
transformation (that is, a conformal transformation which also
satisfies (7)), if and only if \(\theta\_{j} = 0\), in
which case the conformal and projective structures are compatible.
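Formula (29) can be checked symbolically in a simple special case. The following sketch (assuming SymPy; the flat two-dimensional metric with vanishing connection is chosen only for brevity) computes the Christoffel symbols of \(\overline{g} = e^{\theta} g\) directly from the metric and compares them with the right-hand side of (29):

```python
import sympy as sp

# Coordinates and an arbitrary conformal factor theta(x, y).
x, y = sp.symbols('x y')
coords = [x, y]
theta = sp.Function('theta')(x, y)

# Start from the flat metric g = diag(1, 1), whose connection vanishes,
# and conformally rescale it: gbar = exp(theta) * g, as in (28).
g = sp.eye(2)
gbar = sp.exp(theta) * g
gbar_inv = gbar.inv()

def christoffel(metric, inv, coords):
    """Christoffel symbols Gamma^i_jk of a metric, from the standard formula."""
    n = len(coords)
    Gamma = [[[0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Gamma[i][j][k] = sp.simplify(sum(
                    sp.Rational(1, 2) * inv[i, l] *
                    (sp.diff(metric[l, k], coords[j])
                     + sp.diff(metric[j, l], coords[k])
                     - sp.diff(metric[j, k], coords[l]))
                    for l in range(n)))
    return Gamma

Gbar = christoffel(gbar, gbar_inv, coords)

# Right-hand side of (29), with Gamma = 0 for the flat metric:
# (1/2)(delta^i_j theta_k + delta^i_k theta_j - g_jk g^{ir} theta_r).
dtheta = [sp.diff(theta, c) for c in coords]
g_inv = g.inv()
for i in range(2):
    for j in range(2):
        for k in range(2):
            rhs = sp.Rational(1, 2) * (
                (1 if i == j else 0) * dtheta[k]
                + (1 if i == k else 0) * dtheta[j]
                - g[j, k] * sum(g_inv[i, r] * dtheta[r] for r in range(2)))
            assert sp.simplify(Gbar[i][j][k] - rhs) == 0
```

The check is not a proof of (29) in general, but it confirms the formula term by term for this case, e.g. \(\overline{\Gamma}^{1}\_{11} = \tfrac{1}{2}\theta\_{1}\) and \(\overline{\Gamma}^{1}\_{22} = -\tfrac{1}{2}\theta\_{1}\).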
Weyl remarks after the proof:
>
>
> If it is possible for us, in the real world, to discern causal
> propagation, and in particular light propagation, and if moreover, we
> are able to recognize and observe as such the motion of free mass
> points which follow the guiding field, then we are able to read off
> the metric field from this alone, without reliance on clocks and
> rigid rods.
>
>
>
Elsewhere, Weyl (1949a, 103) says:
>
>
> As a matter of fact it can be shown that the metrical structure of
> the world is already fully determined by its inertial and causal
> structure, that therefore mensuration need not depend on clocks and
> rigid bodies but that light signals and mass points moving under the
> influence of inertia alone will suffice.
>
>
>
The use of clocks and rigid rods is, within the context of either
theory, an undesirable makeshift for two reasons. First, since
neither spatial nor temporal intervals are invariants of the
four-dimensional spacetime of the special theory of relativity and
the general theory of relativity, the invariant spacetime interval
\(ds\) cannot be directly ascertained by means of standard clocks
and rigid rods. Second, the concepts of a rigid body and a periodic
system (such as pendulums or atomic clocks) are not fundamental or
theoretically self-sufficient, but involve assumptions that
presuppose quantum theoretical principles for their justification and
thus lie outside the present conceptual relativistic framework.
Therefore, methodological and ontological considerations decidedly
favor Weyl's causal-inertial method for determining the spacetime
metric.
From the physical point of view, Weyl emphasized the roles of light
propagation and free (fall) motion in revealing the conformal-causal
and the projective structures respectively. However, from the
mathematical point of view, Weyl did not use these two structures
directly in order to derive from them and their compatibility
relation, the metric field. Rather, Weyl regarded the metric and
affine structures as fundamental and showed that the conformal and
the projective structures respectively arise from those structures by
mathematical abstraction.
![figure](fig7.png)
Figure 7:
Weyl took the metric and affine structures as fundamental and showed
that the conformal and projective structures respectively arise from
them by mathematical abstraction.
#### 4.3.3 The Ehlers, Pirani, Schild Construction of the Causal-Inertial Method
Ehlers et al. (1972) generalized Weyl's causal-inertial method by
deriving the metric field *directly* from the conformal and
projective fields: they obtained a unique pseudo-Riemannian spacetime
metric solely as a consequence of a set of natural, physically
well-motivated, constructive, "geometry-free" axioms
concerning the incidence and differential properties of light
propagation and free (fall) motion. Ehlers, Pirani and Schild adopt
Reichenbach's (1924) term, *constructive axiomatics* to
describe the nature of their approach. The
"geometry-free" axioms express a few general qualitative
assumptions concerning free (fall) motion and light propagation,
assumptions that can be verified directly through experience in
a way that does not presuppose the full-blown edifice of the general
theory of relativity. From these axioms, the theoretical basis of the
theory is reconstructed step by step.
The constructive axiomatic approach to spacetime structure is roughly
this:
1. **Primitive Notions.** The constructive axiomatic
approach is based on a triple of sets
\[
\langle M, \mathcal{P}, \mathcal{L}\rangle
\]
of objects corresponding respectively to the notions of *events,
particle paths* and *light rays*, which are taken as
primitive. The set \(M\) of events is assumed to have a Hausdorff
topology with a countable basis in order to state local axioms
through the use of such terms as "neighborhood". Members
of the sets \(\mathcal{P} = \{P, Q, P\_{1}, Q\_{1}, \ldots \}\) and
\(\mathcal{L} = \{ L, N, L\_{1}, N\_{1}, \ldots \}\) are subsets of
\(M\) that represent the possible or actual paths of massive
particles and light rays in spacetime.
2. **Differential Structure.** The differential structure
is not presupposed; rather through the first few axioms a
differential-manifold structure is introduced on the set of events
\(M\) that is sufficient for the localization of events by means
of local coordinates, such as radar coordinates. Once \(M\) is
given a differential-manifold structure through the introduction of
local radar coordinates by means of particles and light rays (such
that any two radar coordinates are smoothly related to one another),
one can do calculus on \(M\) and one may speak of tangent and
direction spaces.
It is important to emphasize that the members of \(\mathcal{P}\)
represent possible or actual paths of *arbitrary* massive
particles that may have some internal structure such as higher order
gravitational and electromagnetic multipole moments and that may
therefore interact in complicated ways with various physical
fields. In order to constructively establish the projective structure
of spacetime, it is necessary to single out a subset of
\(\mathcal{P}\), namely \(\mathcal{P}\_{f}\), the set of possible or
actual paths of spherically symmetric, electrically neutral particles
(that is, the world lines of freely falling particles). However, the
set \(\mathcal{P}\_{f} \subset \mathcal{P}\), can be properly
characterized only after a coordinate system (differential structure)
is available. *Consequently, one must employ arbitrary particles in
the statement of those axioms that lead to the local
differential structure of spacetime.*
3. **Second-Order Conformal Structure.** The *Law of
Causality* asserts the existence of a unique first-order
conformal structure on spacetime (27), that is, a field of
infinitesimal light cones.
Only null one-directions are determined. Therefore no special choice
of parameters along light rays is determined by this structure. The
first-order conformal structure can be measured using only the local
differential-topological structure. Moreover, by a purely
mathematical process involving only differentiation, the first-order
conformal structure determines a second-order conformal structure,
namely, an equivalence class \(K\) of conformally related
symmetric linear connections.
4. **Projective Structure.** The motions of freely falling
particles governed by the guiding field reveal the geodesics of
spacetime, that is, the geodesics corresponding to an equivalence
class \(\Pi\) of projectively equivalent symmetric linear connections.
Only geodesic one-directions are determined, that is, no special
choice of parameters is involved in characterizing free fall motion.
5. **Compatibility between the Conformal and Projective
Structures.** That the conformal and projective structures are
compatible is suggested by high energy experiments, according to
Ehlers, Pirani and Schild: "A massive particle \((m \gt 0)\),
though always slower than a photon, can be made to chase a photon
arbitrarily closely." Ehlers, Pirani and Schild
therefore assume an axiom of compatibility between the conformal and
projective structures, and this leads to a *Weyl space*: If
the projective and conformal structures are compatible, then the
intersection
\[
\Pi \cap K = \Gamma\ \text{ (Weyl connection)}
\]
of the equivalence class \(K\) of conformally equivalent symmetric
linear connections, and the equivalence class \(\Pi\) of projectively
equivalent symmetric linear connections, contains a unique symmetric
linear connection, a *Weyl connection*. Thus light propagation
and free (fall) motion reveal on spacetime a unique Weyl connection
which determines the parallel transport of vectors, preserving their
timelike, null or spacelike character, and for any pair of non-null
vectors, the Weyl connection leaves invariant the ratio of their
lengths and the angle between them, provided the vectors are
transported along the same path.
6. **Pseudo-Riemannian Metric.** Since length transfer is
non-integrable (i.e., path-dependent) in a Weyl space, a Weyl
geometry reduces to a pseudo-Riemannian geometry if and only if
Weyl's *length-curvature*
(*Streckenkrummung*) tensor equals
zero, in which case the length of a vector is path-independent under
parallel transport, and there exists no second clock effect.
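The non-integrability of length transfer can be stated explicitly (standard Weyl-geometry formulas, supplied here for illustration and subject to the same sign conventions). Under parallel transport the length \(l\) of a vector changes according to
\[
dl = -l\, \varphi\_{\mu}\, dx^{\mu},
\qquad\text{so}\qquad
l \mapsto l\, \exp\Big({-}\oint\_C \varphi\_{\mu}\, dx^{\mu}\Big)
\]
around a closed curve \(C\). By Stokes' theorem the exponent vanishes for all closed curves exactly when the length-curvature \(f\_{\mu\nu} = \partial\_{\mu}\varphi\_{\nu} - \partial\_{\nu}\varphi\_{\mu}\) is zero; in that case \(\varphi\_{\mu}\) is locally a gradient and can be gauged away, reducing the Weyl geometry to a pseudo-Riemannian one.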
#### 4.3.4 The Philosophical Significance of the Causal-Inertial Method
Can it be argued that Ehlers, Pirani and Schild's generalization of
Weyl's causal-inertial method for determining the spacetime metric
constitutes a convention-free and, in relevant respects,
theory-independent body of evidence that can adjudicate
between spacetime geometries, and hence between spacetime theories
that postulate them? As Weyl showed, we can empirically determine the
metric field, provided certain epistemic conditions are satisfied,
that is, provided we can measure the conformal-causal structure, and
provided "we are able to recognize and observe as such the
motion of free mass points which follow the guiding field."
Criticisms of Ehlers, Pirani and Schild's constructive axiomatics
suggest that the causal-inertial method is not convention-free and
that it is ineffective epistemologically in providing a possible
solution to the controversy between geometrical realism and
conventionalism in favor of realism. Basically, all of the charges
laid against Ehlers, Pirani and Schild's constructive axiomatics
concentrate on the roles which massive particles play in their
construction. One of the constructive axioms employed by Ehlers,
Pirani and Schild, the projective axiom, is a statement of the
infinitesimal version of the Law of Inertia, the law of free (fall)
motion which contains Newton's first law of motion as a special case
in the absence of gravitation. Since Ehlers, Pirani and Schild do not
provide an independent, non-circular criterion by which to
characterize free (fall) motion, their approach has been charged with
circularity by philosophers such as Grünbaum (1973), Salmon
(1977), Sklar (1977) and Winnie (1977).
The problem is a familiar one: how to introduce a class of preferred
motions, that is, how to characterize that particular path structure
that would govern the motions of free particles \(\mathcal{P}\_{f}\),
that is, neutral, spherically symmetric, non-rotating test bodies,
while avoiding the circularity problem surrounding the notion of
a *free particle*: The only way of knowing when no forces act
on a body is by observing that it moves as a free particle along the
geodesics of spacetime. But how, without already knowing the geodesics
or the projective structure of spacetime is it possible to determine
which particles are free and which are not? And to determine the
projective structure of spacetime it is necessary to use free
particles.
Coleman and Korte (1980) have addressed these and related
difficulties by providing a non-conventional procedure for the
empirical determination of the projective
structure.[64]
It is worth emphasizing that Weyl's approach to differential
geometry, in which the affine, projective and conformal structures
are treated in their own right rather than as mere aspects of the
metric, was instrumental for his discovery of the non-circular and
non-conventional geodesic method for the empirical determination of
the spacetime metric. The old notion of a 'geodesic
path' had its inception in the context of classical
*metrical* geometry and 'geodesicity' was
characterized in terms of *extremal* paths of curves, which
presupposed a metric. It was Weyl's metric-independent construction
of the symmetric linear connection that led him to introduce the
geometry of paths and the metric-independent characterization of a
*geodesic* path in terms of the process of autoparallelism of
its tangent direction.
### 4.4 The Laws of Motion, Mach's Principle, and Weyl's Cosmological Postulate
Weyl provided a general conceptual/mathematical clarification of the
concept of motion that applies to any spacetime theory that is based
on a differential manifold. In particular, Weyl's penetrating
analysis shows that Einstein's understanding of the role and
significance of Mach's Principle for the general theory of relativity
and cosmology is actually inconsistent with the basic principles of
general relativity.
Weyl's major contribution to cosmology is known as "Weyl's
Hypothesis". The name was coined by Weyl (1926d) himself in an
article in the *Encyclopaedia Britannica*.[65]
According to Weyl's Postulate, the worldlines of all galaxies are
non-intersecting diverging geodesics that have a common origin in the
distant past. From this system of worldlines Weyl derived a common
cosmic time. On the basis of his postulate, Weyl (1923c, Appendix
III) was also the first to show that there is an approximately
*linear* relation between the redshift of galactic spectra and
distance. Weyl had basically discovered *Hubble's Law* six
years prior to Hubble's formulation of it in 1929. Another
contribution to cosmology is Weyl's (1919b) spherically symmetric
static exact solution to Einstein's
linearized[66]
field equations.
#### 4.4.1 The Laws of Motion and Mach's Principle
There are essentially two ways to understand Mach's Principle: (1)
Mach's Principle rejects the absolute character of the inertial
structure of spacetime, and (2) Mach's Principle rejects the inertial
structure of spacetime *per se*. Version (2) might be
characterized as *Leibnizian relativity* or *body
relationalism*; that is, one understands by relative motion the
motion of *bodies* with respect only to other observable
*bodies* or observable *bodily* reference frames. The
relative motion of a body with respect to absolute space or to the
inertial structure of space (Newton) or spacetime is ruled out on
epistemological and/or metaphysical grounds.
In the context of his general theory of relativity, what Einstein is
objecting to in Newtonian Mechanics, and by implication, the theory of
special relativity, is the *absolute* character of the inertial
structure; he is not asserting its *fictitious* character. That
is, the general theory of relativity incorporates Mach's
Principle as expressed in version (1) by treating the inertial
structure as dynamical and not as absolute.
However, Einstein also tried to extend and generalize the special
theory of relativity by incorporating version (2) of Mach's
Principle into the general theory of relativity. Einstein was deeply
influenced by Mach's empiricist programme and accepted
Mach's insistence on the primacy of observable facts of
experience: *only observable facts of experience may be invoked to
account for the phenomena of motion*. As a consequence, Einstein
restricted the concept of relative motions to relative motions between
bodies. Newton thought that the plane of Foucault's pendulum
remains aligned with respect to absolute space. Since the fixed stars
are at rest with respect to absolute space the plane of
Foucault's pendulum remains aligned to them as well, and rotates
relative to the earth. But according to Einstein, Newton's
intermediary notion of absolute space is as questionable as it is
unnecessary in explaining the behaviour of Foucault's
pendulum. Not absolute space, but the actually existing masses of the
fixed stars of the whole cosmos guide the plane of Foucault's
pendulum.
Einstein (1916) argued that the general theory of relativity removes
from the special theory of relativity and Newton's theory an inherent
epistemological defect. The latter is brought to light by Mach's
paradox, namely, Einstein's example of two fluid bodies, \(A\) and
\(B\), which are in constant *relative* rotation about a
common axis. With regard to the extent to which each of the spheres
bulges at its equator, infinitely many different states are possible
although the relative rotation of the two bodies is the same in every
case. Einstein considered the case in which \(A\) is a sphere and
\(B\) is an oblate spheroid. The paradox consists in the fact that
there is no readily discernible reason that accounts for the fact
that one of the bodies bulges and the other does not. According to
Einstein, an epistemologically satisfactory solution to this paradox
must be based on 'an observable fact of experience'.
Einstein wanted to implement a Leibnizian-Machian *relational* conception of motion according to which all motion is to be
interpreted as the motion of some bodies in relation to other bodies.
Einstein wished to extend the body-relative concept of uniform
inertial motion to the concept of a body-relative accelerated motion.
#### 4.4.2 Weyl's Critique of Einstein's Machian Ideas
Weyl was very critical of Einstein's attempt to incorporate version
(2) of Mach's Principle into the theory of general relativity and
relativistic cosmology because he considered the Leibnizian-Machian
*relational* conception of motion--according to which all
motion is to be interpreted as the motion of some bodies in relation
to other bodies--to be an incoherent notion within the context
of the general theory of relativity.
In a paper entitled *Massenträgheit und
Kosmos. Ein Dialog* [*Inertial Mass and Cosmos. A
Dialogue*] Weyl (1924b) articulates his overall position on the
concept of motion and the role of Mach's Principle in general
relativity and
cosmology.[67]
Weyl defines Mach's Principle as follows:
**M (Mach's Principle):**
The inertia of a body is determined through the interaction of all the masses in the universe.
Weyl (1924b) then makes the observation that the *kinematic
principle of relative motion* is by itself without any content,
unless one also makes the additional physical causal assumption that
**C (Physical Causality):**
All events or processes are uniquely causally determined through matter, that is, through charge, mass and the state of motion of the elementary particles of
matter.[68]
The underlying motivation for assumption \(\mathbf{C}\) of physical
causality is essentially Mach's empiricist programme, namely,
Mach's insistence on the *primacy of observable facts of
experience*. Addressing Einstein's formulation of
Mach's paradox, Weyl (1924b) says: Only if we conjoin
the *kinematic principle of relative motion* with the physical
assumption \(\mathbf{C}\) does it appear groundless or impossible on
the basis of the kinematic principle that in the absence of any
external forces a stationary body of fluid has the form of a sphere
"at rest", while on the other hand it has the form of a
"rotating" flattened ellipsoid. Weyl rejects principle
\(\mathbf{C}\) of physical causality because he denies the feasibility
of \(\mathbf{M}\) (Mach's Principle), as defined above, on *a
priori*[69]
grounds. According to Weyl (1924b)
**A:**
The concept of relative motion of several isolated bodies with respect
to each other is as untenable according to the theory of general
relativity as is the concept of absolute motion of a single body.
Weyl notes that what we seem to observe as the rotation of the stars,
is in reality not the rotation of the stars themselves but the
rotation of the "star compass" [Sternenkompass] which
consists of light signals from the stars that meet our eyes at our
present location from a certain direction. It is crucial, Weyl
reminds us, to be cognisant of the existence of the metric field
between the stars and our eyes. This metric field determines the
propagation of light, and, like the electromagnetic field, it is
capable of change and variation. Weyl (1924b) says that "the
metric field is no less important for the direction in which I see
the star than is the location of the star itself." How is it
possible, Weyl asks, to compare within the context of the general
theory of relativity, the state of motions of two separate bodies? Of
course, Weyl notes, prior to the general theory of relativity, during
Mach's time, one could rely on a rigid frame of reference such as the
earth, and indefinitely extend such a frame throughout space. One
could then postulate the relative motion of the stars with respect to
this frame. However, in Einstein's hands the coordinate
system has lost its rigidity to such a degree, that it can always
"cling to the motion of all bodies simultaneously"; that
is, whatever the motions of the bodies are, there exists a coordinate
system such that all bodies are at rest with respect to that
coordinate system. Weyl then clarifies and illustrates the above with
the plasticine example, which Weyl (1949a, 105) elsewhere describes
as follows:
>
>
> Incidentally, without a world structure the concept of relative
> motion of several bodies has, as the postulate of general relativity
> shows, no more foundation than the concept of absolute motion of a
> single body. Let us imagine the four-dimensional world as a mass of
> plasticine traversed by individual fibers, the world lines of the
> material particles. Except for the condition that no two world lines
> intersect, their pattern may be arbitrarily given. The plasticine can
> then be continuously deformed so that not only one but all fibers
> become vertical straight lines. Thus no solution of the problem is
> possible as long as in adherence to the tendencies of Huyghens and
> Mach one disregards the structure of the world. But once the inertial
> structure of the world is accepted as the cause for the dynamical
> inequivalence of motions, we recognize clearly why the situation
> appeared so unsatisfactory. ... Hence the solution is attained
> as soon as we dare to *acknowledge the inertial structure as a
> real thing that not only exerts effects upon matter but in
> turn suffers such effects.*
>
>
>
![figure](fig8.png)
Figure 8:
Weyl's plasticine example
Applying these considerations to the fixed stars and assuming that it
is possible that the (conformal) metrical field which determines the
cones of light propagation (light cones) at each point of the
plasticine, is carried along by the continuous transformation of the
plasticine, then both the earth and the fixed stars will be at rest
with respect to the plasticine's coordinate system. Yet despite this
the "star compass" is rotating with respect to the earth,
exactly as we observe!
Employing the concept of the microsymmetry group (definition 4.1),
Coleman and Korte (1982) have analyzed Weyl's plasticine
example in the following way: Consider a space-time manifold equipped
only with a differentiable structure, the plasticine of Weyl's
example. Then our spacetime does not have an affine, conformal,
projective or metric structure defined on it. In such a world it is
possible to define curves and paths; however, there are no preferred
curves or paths. Since there is only the differentiable structure,
one may apply any diffeomorphism; that is, *all* diffeomorphisms preserve this structure; consequently, in the
absence of a post-differential-topological structure, the
microsymmetry group at any event \(p\) is an infinite-parameter
group isomorphic to the group of all invertible formal power series
in four variables. If there is no post-differentiable topological
geometric field in the neighbourhood of a point, then all of these
infinite parameters may be chosen freely within rather broad limits.
Clearly then, given an infinite number of parameters, one can, as
Weyl says, straighten out an arbitrary pattern of world lines
(fibers) in the neighbourhood of any event. Now suppose that there
exists a post-differentiable topological geometric field, namely, the
projective structure at any event of spacetime. Then the
microsymmetry group that preserves that structure is a 20-parameter
Lie group (see Coleman and Korte (1981)). Thus instead of an
infinity of degrees of freedom, only twenty degrees of freedom may be
used to actively deform the neighbouring region of spacetime. The
fact that only a finite number of parameters are available prevents an
arbitrary realignment of the worldlines of material bodies in the
neighbourhood of any given event.
Other post-differential topological geometrical field structures are
similarly restrictive. For example, the microsymmetry group of the
conformal structure, which determines the causal structure of
spacetime, permits 7 degrees of freedom (6 Lorentz transformations
and a dilatation), and permits four more degrees of freedom in second
order. Consequently, the existence of the conformal metrical field
which determines at each point the cones of light propagation would
prevent an arbitrary realignment of light-like fibers, that is, it
would be impossible to realign the earth and the fixed stars such
that both are at rest with respect to the coordinate system of the plasticine.
Weyl's plasticine example shows that the Leibnizian-Machian view of
relative motion, namely the view according to which all motion must
be defined as motion relative to bodies, is self-defeating in the
general theory of relativity. The fact that a stationary, homogeneous
elastic sphere will, when set in rotation, bulge at the equator and
flatten at the poles is, according to Weyl (1924b), to be accounted
for in the following way. The complete physical system consisting of
both the body and the local inertial-gravitational field is not the
same in the two situations. The cause of the effect is the state of
motion of the body *with respect to* the local
inertial-gravitational field, the guiding field, and is not, indeed
as Weyl's plasticine example shows, cannot be the state of motion of
the body relative to other bodies. To attribute the effect as
Einstein and Mach did to the rotation of the body with respect to the
other *bodies* in the universe is, according to Weyl, to
endorse a remnant of the unjustified monopoly of the older body
ontology, namely, the sovereign right of material bodies to play the
role of physically real and acceptable causal
agents.[70]
#### 4.4.3 Coordinate Transformation Laws of Acceleration
Weyl's view that there must be an *inertial structure field* on
spacetime, which governs material bodies in *free motion*,
follows from the mathematical nature of the
coordinate-transformation laws for acceleration. In a world equipped
with only a differential structure, it is possible to do calculus;
one can define curves and paths and differentiate, etc. However, as
was already pointed out, in such a world, the world of Weyl's
plasticine example, there would be no preferred curves or paths.
Consequently, *the motion of material bodies would not be
predictable*. However, experience overwhelmingly indicates that
the acceleration of a massive body cannot be freely chosen. In
particular, consider a simple type of particle, a monopole
(unstructured) particle. Experience overwhelmingly tells us that such
a particle is characterized by the fact that at any event on its
world line, its velocity at that event is sufficient to determine its
acceleration at that event. Predictability of motion, therefore,
entails that corresponding to every type of massive monopole, there
exists a geometric structure field, or what Weyl calls a
*Strukturfeld* that governs the motion of that type of
particle. The basic reason which explains this brute fact of
experience is a simple mathematical fact about how the acceleration
of bodies transforms under a coordinate transformation. Moreover,
this simple mathematical fact, involving no more than the basic
techniques of partial differentiation, holds in all relativistic,
non-relativistic, curved or flat, dynamic or non-dynamic spacetime
theories that are based on a local differential topological
structure, the minimal structure required for the possibility of
assigning arbitrary local coordinates on a differential manifold.
**Transformation law for acceleration:**
The transformation law for acceleration is linear, but is *not*
homogeneous in the acceleration variable.
As an example consider the transformation laws for the 4-velocity and
the 4-acceleration. Recall that a *curve* in the
four-dimensional spacetime manifold \(M\) is a map \(\gamma :
\mathbb{R} \rightarrow M\). For convenience we restrict our attention
to those curves which satisfy \(\gamma(0) = p\). If we set
\(\gamma^{i} = x^{i} \circ \gamma\), then the components of the
4-velocity and 4-acceleration at \(p \in M\) are respectively given
by
\[\tag{30}
\gamma^i\_1 =\_{def} \frac{d}{dt}\gamma^i(0),
\]
\[\tag{31}
\gamma^i\_2 =\_{def} \frac{d^2}{dt^2}\gamma^i(0).
\]
The transformation laws of the 4-velocity components
\(\gamma^{i}\_{1}\) and of the 4-acceleration components
\(\gamma^{i}\_{2}\) under a change of coordinate chart from
\((U,x)\_{p}\) to \((\overline{U}, \overline{x})\_p\), follow from their
pointwise definition. From
\[
\overline{\gamma}^i(t) = \overline{X}^i(\gamma^i(t)),
\]
where \(\overline{X}^{i} = \overline{x}^i \circ x^{- 1}\),
one obtains the transformation law for the 4-velocity and the
4-acceleration respectively:
\[\begin{align}
\tag{32}
\overline{\gamma}^i\_1 &= \overline{X}^i\_j \gamma^{\,j}\_1 \\
\tag{33}
\overline{\gamma}^i\_2 &= \overline{X}^i\_j \gamma^{\,j}\_2
+ \overline{X}^i\_{jk} \gamma^{\,j}\_1 \gamma^k\_1.
\end{align}\]
The \(\overline{X}^{i}\_{j}\) and \(\overline{X}^{i}\_{jk}\) denote the
first and second partial derivatives of \(\overline{X}^{i}(x^{i})\) at
\(x^{i} (p)\), namely,
\[
\frac{\partial \overline{x}^i}{\partial x^{j}}
\text{ and }
\frac{\partial^2 \overline{x}^i}{\partial x^{j} \partial x^{k}}.
\]
The expression \(\overline{X}^{i}\_{jk}\gamma^{\,j}\_{1}\gamma^{k}\_{1}\) in
equation (33) represents the inhomogeneous term of the transformation
of the 4-acceleration. The inhomogeneity of the transformation law
entails that a 4-acceleration that is zero with respect to one
coordinate system is not zero with respect to another coordinate
system. This means that there does not exist a unique standard of
zero 4-acceleration that is intrinsic to the differential topological
structure of spacetime. Moreover, even the difference of the
4-accelerations of two bodies at the same spacetime point has no
absolute meaning, unless their 4-velocities happen to be the same.
This shows that while the differential topological structure of
spacetime gives us sufficient structure to do calculus and to derive
the transformation laws for 4-velocities and 4-accelerations by way
of simple differentiation, it does not provide sufficient structure
with which to determine a standard of zero 4-acceleration. Therefore,
as Weyl repeatedly emphasized, no solution to the problem of motion
is possible, unless "we dare to *acknowledge the
inertial structure as a real thing that not only exerts
effects upon matter but in turn suffers such effects*". In
other words there must exist a structure in addition to the
differential topological structure in the form of a geometric
structure field, or in Weyl's words, *geometrisches
Strukturfeld*, which constitutes the inertial structure of
spacetime, and which provides the standard of zero 4-acceleration.
Since this field provides the standard of zero 4-acceleration we can
call it a geodesic 4-acceleration field, or simply, geodesic
acceleration field. A particle in *free motion* is one that is
exclusively governed by this geodesic acceleration field.
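The inhomogeneous transformation law (33) and the homogeneous law (35) can be checked symbolically. The following sketch (my own illustration; the curve and the coordinate change are arbitrary hypothetical choices, and two dimensions suffice to exhibit the inhomogeneous term) verifies both with SymPy:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')
coords = [x, y]

# A hypothetical curve gamma(t) in the chart x, and a nonlinear
# change of chart xbar^i = Xbar^i(x, y).
gamma = [sp.sin(t), t**2]
Xbar = [x + y**2, sp.exp(x) * y]
on_curve = dict(zip(coords, gamma))

# Velocity and acceleration components (eqs. 30, 31) in both charts.
g1 = [sp.diff(c, t) for c in gamma]
g2 = [sp.diff(c, t, 2) for c in gamma]
gbar = [f.subs(on_curve) for f in Xbar]
gbar2 = [sp.diff(c, t, 2) for c in gbar]

def rhs(i):
    """Right-hand side of (33): Xbar^i_j gamma_2^j
    + Xbar^i_jk gamma_1^j gamma_1^k, evaluated along the curve."""
    expr = sp.Integer(0)
    for j in range(2):
        expr += sp.diff(Xbar[i], coords[j]).subs(on_curve) * g2[j]
        for k in range(2):
            expr += (sp.diff(Xbar[i], coords[j], coords[k]).subs(on_curve)
                     * g1[j] * g1[k])
    return expr

# The transformation law (33) holds identically in t.
for i in range(2):
    assert sp.simplify(gbar2[i] - rhs(i)) == 0

# Eq. (35): if the geodesic field vanishes in the original chart (an
# adapted chart of a flat affine structure), then by (34) its barred
# components are exactly the inhomogeneous (Hessian) term; subtracting
# them leaves only the homogeneous Jacobian term.
for i in range(2):
    inhom = sum(sp.diff(Xbar[i], coords[j], coords[k]).subs(on_curve)
                * g1[j] * g1[k] for j in range(2) for k in range(2))
    hom = sum(sp.diff(Xbar[i], coords[j]).subs(on_curve) * g2[j]
              for j in range(2))
    assert sp.simplify(gbar2[i] - inhom - hom) == 0
```

Because the inhomogeneous term in (33) coincides with that in (34), the subtraction cancels it for *any* geodesic acceleration field, which is precisely why the field-body difference is tensorial.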
An acceleration field, geodesic or non-geodesic, can be constructed
in the following way. Since the terms that are independent of the
4-acceleration depend on both the spacetime location and on the
corresponding 4-velocity of the particle, it is necessary to specify
a *geometric field standard* for zero 4-acceleration that also
depends on those independent variables.
The transformation law for a 4-acceleration field can be obtained
from (33) by replacing
\(\overline{\gamma}^{i}\_{2}\) by
\(\overline{A}^{i}\_{2}(\overline{x}^{i},\overline{\gamma}^{i}\_{1})\)
and \(\gamma^{j}\_{2}\) by \(A^{j}\_{2}(x^{i}, \gamma^{i}\_{1})\) to yield
\[
\overline{A}^{i}\_{2}(\overline{x}^{i}, \overline{\gamma}^{i}\_{1}) =
\overline{X}^{i}\_{j} A^{j}\_{2}(x^{i},\gamma^{i}\_{1}) +
\overline{X}^{i}\_{jk}\gamma^{j}\_{1}\gamma^{k}\_{1}.
\]
The important special case for which the function \(A^{i}\_{2}(x^{i},
\gamma^{i}\_{1})\) is a geodesic 4-acceleration field
corresponds to the affine structure of spacetime. For this special
case the function \(A^{i}\_{2}(x^{i}, \gamma^{i}\_{1})\)
is denoted by \(\Gamma^{i}\_{2}(x^{i}, \gamma^{i}\_{1})\) and is given
by
\[
\Gamma^{i}\_{2}(x^{i},\gamma^{i}\_{1}) = -\Gamma^{i}\_{jk}(x^{i},\gamma^{i}\_{1})
\gamma^{j}\_{1}\gamma^{k}\_{1}.
\]
The familiar transformation law for the affine structure (geodesic
4-acceleration field) is then given by
\[\tag{34}
\overline{\Gamma}^{i}\_{2}(\overline{x}^{i},\overline{\gamma}^{i}\_{1})
= \overline{X}^{i}\_{j}\Gamma^{j}\_{2}(x^{i},\gamma^{i}\_{1})
+\overline{X}^{i}\_{jk}\gamma^{j}\_{1}\gamma^{k}\_{1}.
\]
Note that the inhomogeneous term
\(\overline{X}^{i}\_{jk}\gamma^{j}\_{1}\gamma^{k}\_{1}\) of the
geodesic 4-acceleration field is identical to the inhomogeneous term
of the transformation law (33) for the 4-acceleration of body motion.
The differences
\[\tag{35}
\overline{\gamma}^{i}\_{2} - \overline{\Gamma}^{i}\_{2}(\overline{x}^{i},
\overline{\gamma}^{i}\_{1}) = \overline{X}^{i}\_{j}(\gamma^{j}\_{2}
- \Gamma^{j}\_{2}(x^{i},\gamma^{i}\_{1}))
\]
then transform linearly and homogeneously; consequently, the
vanishing or non-vanishing of body accelerations relative to the
standard of zero acceleration provided by the geodesic 4-acceleration
field (the affine structure), is coordinate independent. That is, the
4-accelerations of bodies and the corresponding 4-forces, are
tensorial quantities in concordance with experience.
The above argument for the necessity of geometric fields also holds
for 3-velocity and 3-acceleration, denoted respectively by
\(\xi^{\alpha}\_{1}\) and \(\xi^{\alpha}\_{2}\). The transformation law
for the 3-acceleration is much more complicated than that of the
4-acceleration. However, analogous to the case of 4-acceleration, the
transformation law of 3-acceleration is linear and is inhomogeneous in
the 3-acceleration variable \(\xi^{\alpha}\_{2}\). Consequently, there
does not exist a unique standard of zero 3-acceleration that is
intrinsic to the differential topological structure of spacetime. The
standard of zero 3-acceleration must be provided by a geodesic
3-acceleration field or geodesic directing field, or what Weyl calls
the guiding field. The guiding field is also referred to as the
projective structure of spacetime and is denoted by
\(\Pi^{\alpha}\_{2}(x^{i}, \xi^{\alpha}\_{1})\). It is a function of
spacetime location and the 3-velocity, both variables of which are
independent of the 3-acceleration, as is required. Since the
transformation law of the projective structure
\(\Pi^{\alpha}\_{2}(x^{i}, \xi^{\alpha}\_{1})\) has the same
inhomogeneous form as the 3-acceleration \(\xi^{\alpha}\_{2}\), the
difference
\[\tag{36}
\xi^{\alpha}\_{2} - \Pi^{\alpha}\_{2}(x^{i}, \xi^{\alpha}\_{1})
\]
also transforms linearly and homogeneously.
The components \(\gamma^{i}\_{2}\) and \(\xi^{\alpha}\_{2}\) of the
4-acceleration and 3-acceleration can be thought of as the dynamic
descriptors of a material body. On the other hand, the components
\(\Gamma^{i}\_{2}(x^{i},\gamma^{i}\_{1})\) and
\(\Pi^{\alpha}\_{2}(x^{i}, \xi^{\alpha}\_{1})\) of the
geodesic acceleration field, and the geodesic directing field,
respectively, are *field* quantities. The differences
\[\tag{37}
\gamma^{i}\_{2} - \Gamma^{i}\_{2}(x^{i},\gamma^{i}\_{1})
\]
and
\[\tag{38}
\xi^{\alpha}\_{2} -\Pi^{\alpha}\_{2}(x^{i}, \xi^{\alpha}\_{1})
\]
denote the components of a coordinate independent *field-body*
*relation*.[71]
Weyl (1924b) remarks:
>
>
> We have known since Galileo and Newton, that the motion of a body
> involves an inherent struggle between inertia and force. According to
> the old view, the inertial tendency of persistence, the
> "guidance", which gives a body its natural inertial
> motion, is based on a formal geometric structure of the spacetime
> (uniform motion in a straight line) which resides once and for all in
> spacetime independently of any natural processes. This assumption
> Einstein rejects; because whatever exerts as powerful effects as
> inertia--for example, in opposition to the molecular forces of
> two colliding trains it rips apart their freight cars--must be
> something real which itself suffers effect from matter. Moreover,
> Einstein recognized that the guiding field's variability and
> dependence on matter is revealed in gravitational effects. Therefore,
> the dualism between guidance and force is maintained; but
>
>
>
> **(G)** *Guidance is a physical field, like the
> electromagnetic field, which stands in mutual interaction
> with matter. Gravitation belongs to the guiding field and not to
> force.* Only thus is it possible to explain the equivalence
> between inertial and gravitational mass.
>
>
>
To move from the old conception to the new conception (G) means,
according to Weyl (1924b)
>
> *to replace the geometric difference between uniform and
> accelerated motion with the dynamic difference between
> guidance and force.* Opponents of Einstein asked the question:
> Since the church tower receives a jolt in its motion relative to the
> train just as the train receives a jolt in its motion relative to the
> church tower, why does the train become a wreckage and not the church
> tower which it passes? Common sense would answer: because the train
> is ripped out of the pathway of the guiding field, but the church
> tower is not. ... As long as one ignores the guiding field one
> can neither speak of absolute nor of relative motion; only if one
> gives due consideration to the guiding field does the concept of
> motion acquire content. The theory of relativity, correctly
> understood, does not eliminate absolute motion in favour of relative
> motion, rather it eliminates the kinematic concept of motion and
> replaces it with a dynamic one. The worldview for which Galileo
> fought is not undermined by it [relativity]; to the contrary, it is
> more concretely interpreted.
>
>
>
#### 4.4.4 Weyl's Field-Body Relationist Ontology and Newton's Laws of Motion
It is now possible to provide a reformulation of Newton's laws of
motion which explicitly takes account of Weyl's
field-body-relationalist spacetime ontology, and his analysis of the
concept of motion. The law of inertia is an empirically verifiable
statement[72]
which says
>
> **The Law of Inertia:** There exists on spacetime a
> unique projective structure \(\Pi\_{2}\) or equivalently, a
> unique geodesic directing field \(\Pi\_{2}\).
>
>
>
*Free motion* is defined with reference to the projective
structure \(\Pi\_{2}\) as follows:
>
> **Definition of Free Motion:** A possible or actual
> material body is in a state of free motion during any part of its
> history just in case its motion is exclusively governed by the
> geodesic directing field (projective structure), that is, just in
> case the corresponding segment of its world path is a solution path
> of the differential equation determined by the unique projective
> structure of spacetime.
>
>
>
Newton's second law of motion may be reformulated as follows:
>
> **The Law of Motion:** With respect to any coordinate
> system, the world line path of a possible or actual material body
> satisfies an equation of the form
>
>
>
> \[
> m(\xi^{\alpha}\_{2} - \Pi^{\alpha}\_{2}(x^{i}, \xi^{\alpha}\_{1}))
> = F^{\alpha}(x^{i},\xi^{\alpha}\_{1}),
> \]
>
>
>
> where \(m\) is a scalar constant characteristic of the material
> body called its inertial mass, and
> \(F^{\alpha}(x^{i},\xi^{\alpha}\_{1})\) is the
> 3-force acting on the body.
>
>
>
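A minimal illustration of the Law of Motion (my own example, using the flat Euclidean plane rather than spacetime): in a Cartesian chart the geodesic directing field has vanishing components, so free motion is \(\xi^{\alpha}\_{2} = 0\); in a polar chart the same flat structure has the non-vanishing components \(\Pi^{r}\_{2} = r\dot{\theta}^{2}\) and \(\Pi^{\theta}\_{2} = -2\dot{r}\dot{\theta}/r\), and the same straight-line motion still satisfies the Law of Motion with \(F^{\alpha} = 0\):

```python
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d', real=True)

# Free motion in an adapted (Cartesian) chart: a straight line with
# arbitrary hypothetical parameters; here both xi_2 and Pi_2 vanish.
x = a + b*t
y = c + d*t

# The same motion in a polar chart, where the components of the flat
# geodesic directing field no longer vanish.
r = sp.sqrt(x**2 + y**2)
th = sp.atan2(y, x)
r1, th1 = sp.diff(r, t), sp.diff(th, t)

# Components of the flat directing field in polar coordinates.
Pi_r = r * th1**2
Pi_th = -2 * r1 * th1 / r

# Law of Motion with F = 0: the field-body differences (38) vanish,
# identically in t and in the parameters a, b, c, d.
diff_r = sp.simplify(sp.diff(r, t, 2) - Pi_r)
diff_th = sp.simplify(sp.diff(th, t, 2) - Pi_th)
assert diff_r == 0 and diff_th == 0
```

The Cartesian components of the directing field vanish only because that chart is adapted to the flat structure; the structure itself, as the polar components show, does not vanish.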
To emphasize, the Law of Inertia and the Law of Motion, as formulated
above, apply to *all*, relativistic or non-relativistic,
curved or flat, dynamic or non-dynamic, spacetime theories. The
reason for the general character of these laws consists in the fact
that they require for their formulation only the local differential
topological structure of spacetime, a structure which is common to
all spacetime theories. In addition, as was noted earlier in
§4.2, the affine and projective spacetime structures are
G-structures. Consequently, they may be flat or non-flat; *but
they can never vanish*. In theories prior to the advent of
general relativity, the affine and projective structures were flat.
It was common practice, however, to use coordinate systems that were
*adapted* to these flat G-structures. And since in such
adapted coordinate systems the components of the affine and
projective structures vanish, it was difficult to recognize and to
appreciate the existence of these structures, and their important role
in providing a coherent account of motion.
#### 4.4.5 Mie's Pure Field Theory, Weyl's 'Agens Theory' and Wormhole Theory of Matter
We saw that Weyl forcefully advocated a *field-body ontological
dualism*, according to which matter and the guiding field are
independent physical realities that causally interact with each
other: matter uniquely generates the various states of the guiding
field, and the guiding field in turn acts on matter.
Weyl did not always subscribe to this ontological dualist position.
For a short period, from 1918 to 1920, he advocated a *pure field
theory of matter*, developed in 1912 by Gustav Mie, in the context
of Einstein's special theory of relativity:
**Pure Field Theory of Matter:**
The physical field has an independent reality that is not reducible to
matter; rather, the physical field is constitutive of all matter in
the sense that the mass (quantity of matter) of a material particle,
such as an electron, consists of a large field energy that is
concentrated in a very small region of spacetime.
Mie's theory of matter is akin to the traditional *geometric
view* of matter: matter is *passive* and *pure*
*extension*. Weyl (1921b) remarks that he adopted the
standpoint of the classical pure field theory of matter in the first
three editions of Weyl (1923b) because of its beauty and unity, but
then gave it up. Weyl (1931a) points out in the Rouse Ball Lecture
that since the theory of general relativity geometrized a physical
entity, the gravitational field, it was natural to try to
*geometrize the whole of physics*. Prior to the advent of
quantum physics one was justified in regarding gravitation and
electromagnetism as the only basic entities of nature and to seek
their unification by geometrizing both. One could hope, following the
example of Gustav Mie, to construct elementary material particles as
*knots of energy* in the gravitational-electromagnetic field,
that is, tiny demarcated regions in which the field magnitudes attain
very high values.
Already in a letter to Felix
Klein,[73]
toward the end of 1920, Weyl indicated that he had finally freed
himself completely from Mie's theory of matter. It now appeared to
him that the classical field theory of matter is not the key to
reality. In the Rouse Ball Lecture Weyl adduces two reasons for this.
First, due to quantum mechanics, there are, in addition to
electromagnetic waves, *matter waves* (*Materiewellen*)
represented by Schrödinger's wave function \(\psi\). And Pauli and
Dirac recognized that \(\psi\) is not a scalar but a magnitude with
several components. Thus, from the point of view of the classical
field theory of matter not two but three entities would have to be
unified. Moreover, given the transformation properties of the wave
function, Weyl says it is certain that the magnitude \(\psi\) cannot be
reduced to gravitation or electromagnetism. Weyl saw clearly that
this *geometric view* of matter or physics--which to a
certain extent had also motivated his earlier construction and manner
of presentation of a *pure infinitesimal geometry*--was
untenable in light of the new developments in atomic physics.
The second reason, Weyl says, consists in the radical new
interpretation of the wave function, which replaces the concept of
intensity with that of probability. It is only through such a
statistical interpretation that the corpuscular and atomistic aspect
of nature is properly recognized. Instead of a *geometric* treatment of the classical field theory of matter, the new
quantum theory called for a *statistical* treatment of
matter.[74]
Already in 1920, Weyl (1920) addressed the relationship between
*causal* and *statistical* approaches to
physics.[75]
The theory of general relativity, as well as early developments in
atomic physics, clearly tell us, Weyl (1921b) suggests, that matter
uniquely determines the field, and that there exist deeper underlying
physical laws with which modern physics, such as quantum theory, is
concerned, which specify "how the field is affected by
matter". That is, experience tells us that matter plays the
role of a *causal agent* which uniquely determines the field,
and which therefore has an independent physical reality that cannot
be reduced to the field on which it acts. Weyl (1921b, 1924e) refers
to his theory of matter as the *Agenstheorie der Materie* (literally, *agent-theory of matter*):
**Matter-Field Dualism (Weyl's Agens Theory of Matter):**
Matter and field are independent physical realities that causally
interact with each other: matter uniquely generates the various states
of the field, and the field in turn acts on matter. To excite the field
is the essential primary function of matter. The field's function is
to respond to the action of matter and is thus secondary. The
secondary role of the field is to transmit effects (from body to body)
caused by matter, thereby in return affecting matter.
The view that *matter uniquely determines the field* was,
according to Weyl, a necessary postulate of an opposing ontological
standpoint, which essentially says that *matter is the only thing
which is genuinely real*. According to this ontological view, held to a certain degree
by the younger Einstein and others who advocated a form of Machian
empiricism, the field is relegated to play the role of a feeble
extensive medium which transmits effects from body to
body.[76]
According to this opposing ontological view, the field laws, that is, certain implicit differential
connections between the various possible states of the field, on the
basis of which the field alone is capable of transmitting effects
caused by matter, can essentially have no more significance for
reality than the laws of geometry could, according to earlier views.
But as we saw earlier, Weyl held that no satisfactory solution can be given to
the problem of motion as long as we adhere to the Einstein-Machian
empiricist position that relegates the field to the role of a feeble
extensive medium, and which does not acknowledge that the guiding
field is physically real. However, from the standpoint of Weyl's *agens theory of
matter*, a satisfactory answer to Mach's paradox can be given:
the reason why a stationary, homogeneous elastic sphere will bulge at the equator and flatten at the poles, when
set in rotation, is
due to the fact that the complete physical system consisting of both
the body *and* the guiding field, differs in the rotating case
from the stationary one. The local guiding field is the real cause of
the inertial forces.
Weyl lists two reasons in support for his agens theory of matter.
First, the agens theory of matter is the only theory which coheres
with the basic experiences of life and physics: matter generates the
field and all our actions ultimately involve matter. For example,
only through matter can we change the field. Secondly, in order to
understand the fact of the existence of charged material particles,
we have two possibilities: either we follow Mie and adopt a pure
field theory of matter, or we elevate the ontological status of
matter and regard it as a *real* singularity of the field and
not merely as a high concentration of field energy in a tiny region
of spacetime. Since Mie's approach is necessarily limited to the
framework of the theory of special relativity, and since there is no
room in the general theory of relativity for a generalization and
modification of the classical field laws, as envisaged by Mie in the
context of the special theory of relativity, Weyl adopted the second
possibility. He was motivated to do so by his recognition that the
field equation of an electron at rest contains a finite mass term
\(m\) that appears to have nothing to do with the energy of the
associated field. Weyl's subsequent analysis of mass in terms of
electromagnetic field energy provided a definition of mass and a
derivation of the basic equations of mechanics, and led Weyl to the
invention of the topological idea of *wormholes* in spacetime.
Weyl did not use the term 'wormholes'; it was John
Wheeler who later coined the term 'wormhole' in 1957.
Weyl spoke of *one-dimensional tubes* instead.
"Inside" these tubes no space exists, and their
boundaries are, analogous to infinite distance, inaccessible; they do
not belong to the field. In a chapter entitled "Hermann Weyl
and the Unity of Knowledge" Wheeler (1994) says,
>
>
> Another insight Weyl gave us on the nature of electricity is
> topological in character and dates from 1924. We still do not know
> how to assess it properly or how to fit it into the scheme of
> physics, although with each passing decade it receives more
> attention. The idea is simple. Wormholes thread through space as air
> channels through Swiss cheese. Electricity is not electricity.
> Electricity is electric lines of force trapped in the topology of
> space.
>
>
>
#### 4.4.6 Relativistic Cosmology and Weyl's Postulate
A year after Einstein (1916) had established the field equations of
his new general theory of relativity, Einstein (1917) applied his
theory for the first time to cosmology. In doing so, Einstein made
several assumptions:
**Cosmological Principle:**
Like Newton, Einstein
assumed that the universe is *homogeneous* and
*isotropic* in its distribution of matter.
**Static Universe:**
Einstein assumed, as did
Newton and most cosmologists at that time, that the universe is
static on the large scale.
**Mach's Principle:**
Einstein believed that the
metric field is completely determined through the masses of bodies.
The metric field is determined through the energy-momentum tensor of
the field equations.
The *cosmological principle* continues to play an important
role in cosmological modelling to this day. However, Einstein's
second assumption that the universe is static was in conflict with
his field equations, which permitted models of the universe that were
homogeneous and isotropic, but *not static*. In this regard,
Einstein's difficulties were essentially the same that Newton had
faced: A static Newtonian model involving an infinite container with
an infinite number of stars was unstable; that is, local regions
would collapse under gravity. Because Einstein was committed to
Mach's Principle, he faced a problem concerning the *boundary
conditions* for an *infinite* space containing a *finite* amount of
matter.[77]
Einstein recognized that it was impossible to choose boundary
conditions such that the ten potentials of the metric
\(g\_{ij}\) are completely determined by the
energy-momentum tensor \(T\_{ij}\), as required by
Mach's Principle. That is, the boundary conditions "flat at
infinity" entail a *global inertial frame* that
is tied to *empty flat space at infinity*, and hence is
unrelated to the mass-energy content of space, contrary to Mach's
Principle, according to which *only* mass-energy can influence
inertia.
Einstein thought that he could solve the difficulties of an unstable
non-static universe with boundary conditions at infinity that do not
satisfy Mach's Principle, by introducing the cosmological term
\(\Lambda\) into his field equations. He showed that for positive values
of the cosmological constant, his modified field equation admitted a
solution for a
static[78]
universe in which space is curved, unbounded and finite; that is,
space is the hypersurface of a *sphere in four dimensions*.
Einstein's spatially closed universe is often
referred to as Einstein's "cylinder" world: with two of
the spatial dimensions suppressed, the model universe can be pictured
as a cylinder where the radius \(A\) represents the space and the
axis the time coordinate.
![figure](fig9.png)
Figure 9:
Einstein Universe
According to Einstein's Machian convictions, since inertia is
determined only by matter, there can be no inertial structure or
field in the absence of matter. Consequently, it is impossible,
Einstein conjectured, to find a solution to the field
equations--that is, to determine the metric
\(g\_{ij}\)--if the energy-momentum tensor
\(T\_{ij}\) representing the mass-energy content of
the universe is zero. The non-existence of 'vacuum
solutions' for a static universe demonstrated, Einstein
thought, that Mach's Principle had been successfully incorporated
into his theory of general relativity. Einstein also believed that
his solution was unique because of the assumptions of isotropy and
homogeneity.[79]
However, Einstein was mistaken. In 1917, the Dutch astronomer Willem
de Sitter published another solution to Einstein's field equations
containing the cosmological constant. De Sitter's solution showed
that Einstein's solution is not a unique solution of his field
equations. In addition, since de Sitter's universe is empty it
provided a direct counter-example to Einstein's hope that Mach's
Principle had been successfully incorporated into his
theory.[80]
There are cosmologists who, like Einstein, are favourably disposed
towards some version of Mach's Principle, and who believe that the
local laws, which are satisfied by various physical fields, are
determined by the large scale structure of the universe. On the other
hand, there are those cosmologists who, like Weyl, take a
conservative approach; they take empirically confirmed *local* laws and investigate what these laws might imply about the
universe as a whole. Our understanding of the large scale structure
of the universe, Weyl emphasized, must be based on theories and
principles which are verified locally. Einstein's general theory is a
*local* field theory; like electromagnetism, it is a *close
action theory*.[81]
Weyl (1924b) says:
>
>
> It appears to me that one can grasp the concrete physical content of
> the theory of relativity without taking a position regarding the
> causal relationship between the masses of the universe and inertia.
>
>
>
And, referring to (G) (see citation at the end of §4.4.3),
which says that "*Guidance is a physical field, like
the electromagnetic field, which stands in mutual
interaction with matter. Gravitation belongs to the guiding field
and not to force*", Weyl (1924b) says:
>
>
> What I have so far presented and briefly formulated in the two
> sentences of G, that alone impacts on physics and underlies the
> actual individual investigations of problems of the theory of
> relativity. Mach's Principle, according to which the fixed stars
> intervene with mysterious power in earthly events, goes far beyond
> this [G] and is until now pure speculation; it merely has
> cosmological significance and does not become important for natural
> science until astronomical observations reach the totality of the
> cosmos [Weltganze], and not merely one island of stars
> [Sterneninsel]. We could leave the question unanswered if I did not
> have to admit that it is tempting to construct, on the basis of the
> theory of relativity, a picture of the totality of the cosmos.
>
>
>
Weyl's claim is that because general relativity is an inherently
*local* field theory, its validity and soundness is
essentially independent of global cosmological considerations.
However, if we wish to introduce such global considerations into our
local physics, then we can do so only on the basis of additional
assumptions, such as, for example, the Cosmological Principle,
already mentioned. In 1923 Weyl (1923b, §39) introduced another
cosmological assumption, namely, the so-called *Weyl
Postulate*. De Sitter's solution and the new astronomical
discoveries in the early 1920s, which suggested that the universe is
not static but expanding, led to a drastic change in thinking about
the nature of the universe and an increased scepticism towards
Einstein's model of a static universe. In 1923, Weyl (1923b,
§39) notes in the fifth edition of *Raum Zeit Materie*,
that despite its attractiveness, Einstein's cosmology suffers from
serious defects. Weyl begins by pointing out that spectroscopic
results indicate that the stars have an age. Weyl continued,
>
>
> all our experiences about the distribution of stars show that the
> present state of the starry sky has nothing to do with a
> "*statistical final state*." The small velocities
> of the stars are due to a common origin rather than some equilibrium;
> incidentally, it appears, based on observation, that the more distant
> configurations are from each other, the greater the velocities on
> average. Instead of uniform distribution of matter, astronomical
> facts lead rather to the view that individual clouds of stars glide
> by in vast empty space.
>
>
>
Weyl further points out that de Sitter showed that Einstein's
cosmological equations of gravity have "a very simple regular
solution" and that an empty spacetime, namely, "a
metrically homogeneous spacetime of non-vanishing curvature,"
is compatible with these equations after all. Weyl says that de
Sitter's solution, which on the whole is not static, forces us to
abandon our predilection for a static universe.
The Einstein and the de Sitter universe are both spacetimes with two
separate fringes, the infinitely remote past and the infinitely
remote future. Dropping two of its spatial dimensions we imagine
Einstein's universe as the surface of a straight cylinder of a
certain radius and de Sitter's universe as a one sheeted hyperboloid.
Both surfaces are surfaces of infinite extent in both directions.
Both the Einstein universe and the de Sitter universe spread from the
eternal past to the eternal future. However, unlike de Sitter's
universe, in Einstein's universe "the metrical relations are
such that the light cone issuing from a world point is folded back
upon itself an infinite number of times. An observer should therefore
see infinitely many images of a star, showing him the star in states
between which an eon has elapsed, the time needed by the light to
travel around the sphere of the world." Weyl (1930) says:
>
>
> ... I start from de Sitter's solution: the world, according to
> its metric constitution, has the character of a four-dimensional
> "sphere" (hyperboloid)
>
>
>
> \[\tag{39}
> x^{2}\_{1} + x^{2}\_{2} + x^{2}\_{3} + x^{2}\_{4} - x^{2}\_{5} = a^{2}
> \]
>
>
>
> in a five-dimensional quasi-euclidean space, with the line element
>
>
>
> \[\tag{40}
> ds^{2} = dx^{2}\_{1} + dx^{2}\_{2} + dx^{2}\_{3} + dx^{2}\_{4} - dx^{2}\_{5}.
> \]
>
>
>
> The sphere has the same degree of metric homogeneity as the world of
> the special theory of relativity, which can be conceived as a
> four-dimensional "plane" in the same space. The plane,
> however, has only one connected infinitely distant
> "seam," while it is the most prominent topological
> property of the sphere to be endowed with two--the infinitely
> distant past and the infinitely distant future. In this sense one may
> say that space is closed in de Sitter's solution. On the other hand,
> however, it is distinguished from the well-known Einstein solution,
> which is based on a homogeneous distribution of mass, by the fact
> that the null cone of future belonging to a world-point does not
> overlap with itself; in this causal sense, the de Sitter space is
> open.
>
>
>
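As a small numerical illustration (added here, not from Weyl), one can check equation (39) directly: any point of the form \((0, 0, 0, a\cosh t, a\sinh t)\) lies on the hyperboloid, since \(\cosh^{2}t - \sinh^{2}t = 1\).

```python
import math

# Hedged numerical check of de Sitter's hyperboloid (eq. 39):
# x1^2 + x2^2 + x3^2 + x4^2 - x5^2 = a^2 in a 5-dimensional
# quasi-Euclidean space.  The parametrization below is an assumption
# chosen purely for illustration.
def on_hyperboloid(x, a):
    """True if the 5-tuple x satisfies eq. (39) up to rounding."""
    x1, x2, x3, x4, x5 = x
    return math.isclose(x1*x1 + x2*x2 + x3*x3 + x4*x4 - x5*x5, a*a)

a, t = 1.0, 0.7
point = (0.0, 0.0, 0.0, a*math.cosh(t), a*math.sinh(t))  # cosh^2 - sinh^2 = 1
assert on_hyperboloid(point, a)
```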
On this hyperboloid, a single star (nebula or galaxy, in later
contexts) \(A\), also called "observer" by Weyl,
traces a geodesic world line, and from each point of the star's world
line a light cone opens into the future and fills a region \(D\),
which Weyl calls the *domain of influence of the star.* In de
Sitter's cosmology this domain of influence covers only half of the
hyperboloid and Weyl suggests that it is reasonable to assume that
this half of the hyperboloid corresponds to the real world.
![figure](fig10.png)
Figure 10:
De Sitter's hyperboloid with domain of influence \(D\)
covering half of the hyperboloid and world lines of stars.
There are innumerable stars or geodesics, according to Weyl, that
have the same domain of influence as the arbitrarily chosen star
\(A\); they form, he says, a *system that has been causally
interconnected since eternity.* Such a system of causally
interconnected stars Weyl describes as stars of a common origin that
lies in an infinitely remote past. The sheaf of world-lines of such a
system of stars converges, in the direction of the infinitely remote
past, on an infinitely small part of the total extent of the
hyperboloid, and diverges in the direction of the future on an ever
increasing extent of the hyperboloid. Weyl's choice of singling out a
particular sheaf of non-intersecting timelike geodesics as
constituting the cosmological substratum is the content of Weyl's
Postulate. Weyl (1923b, 295) says:
>
>
> The hypothesis is suggestive, that all the celestial bodies which we
> know belong to such a single system; this would explain the small
> velocities of the stars as a consequence of their common origin.
>
>
>
The transition from a static to a dynamic universe opens up the
possibility of a disorderly universe where galaxies could collide,
that is, their world lines might intersect. Roughly speaking, Weyl's
Postulate states that the actual universe is an orderly universe. It
says that the world lines of the galaxies form a 3-sheaf of
non-intersecting[82]
geodesics orthogonal to layers of spacelike hypersurfaces.
![figure](fig11.png)
Figure 11:
Weyl's Postulate
Since the relative velocities of matter are small in each collection
of galaxies extending over an astronomical neighbourhood, one can
approximate a "smeared-out" motion of the galaxies and
introduce a *substratum* or *fluid* which fills space
and in which the galaxies move like "fundamental
particles".[83]
Weyl's postulate says that observers associated with this smeared-out
motion constitute a privileged class of observers of the universe.
Since geodesics do not intersect, according to Weyl's Postulate,
there exists one and only one geodesic passing through each
spacetime point. Consequently, matter possesses a unique velocity at
any spacetime point. Therefore, the fluid may be regarded as a
*perfect fluid*; and this is the essential content of Weyl's
Postulate.
Since the geodesics of the galaxies are orthogonal to a layer of
spacelike hypersurfaces according to Weyl's Postulate, one can
introduce coordinates \((x^{0}, x^{1}, x^{2}, x^{3})\) such that the
spacelike hypersurfaces are given by \(x^{0} =\) constant, and the
spacelike coordinates \(x^{\alpha}\) \((\alpha = 1, 2, 3)\) are constant
along the geodesics of each galaxy. Therefore, the spacelike
coordinates \(x^{\alpha}\) are *co-moving* coordinates along
the geodesics of each galaxy. The
orthogonality condition permits a choice of the time coordinate
\(x^{0}\) such that the metric or line element has the
form
\[\begin{align}
ds^{2} &= (dx^{0})^{2} - g\_{\alpha \beta}dx^{\alpha}dx^{\beta} \\
\tag{41}
&= c^{2}dt^{2} - g\_{\alpha \beta}dx^{\alpha}dx^{\beta},
\end{align}\]
where \(ct = x^{0}\); \(x^{0}\) is called the
*cosmic time*, and \(t\) is the *proper time* of any
galaxy. The spacelike hypersurfaces are therefore the surfaces of
simultaneity with respect to the cosmic time \(x^{0}\).
The Cosmological Principle in turn tells us that these hypersurfaces
of simultaneity are homogeneous and isotropic.
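The role of co-moving coordinates in the line element (41) can be illustrated with a minimal sketch (assuming, purely for illustration, a flat spatial metric \(g\_{\alpha\beta} = \delta\_{\alpha\beta}\)): for a co-moving galaxy \(dx^{\alpha} = 0\), so \(ds = c\,dt\) and the galaxy's proper time coincides with the cosmic time.

```python
# Minimal sketch of the line element (41): ds^2 = c^2 dt^2 - g_ab dx^a dx^b.
# The flat spatial metric g_ab = delta_ab is an assumption for illustration.
c = 299792458.0  # speed of light, m/s

def ds2(dt, dx, g):
    """Squared interval for coordinate increments (dt, dx) under metric g."""
    spatial = sum(g[a][b] * dx[a] * dx[b] for a in range(3) for b in range(3))
    return (c * dt)**2 - spatial

g_flat = [[1.0 if a == b else 0.0 for b in range(3)] for a in range(3)]

# A co-moving galaxy has dx^a = 0, so ds = c dt: its proper time
# coincides with the cosmic time t, as the text states.
dt = 2.0
assert ds2(dt, [0.0, 0.0, 0.0], g_flat) == (c * dt)**2
```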
Robertson and Walker were subsequently able, independently of each
other, to give a precise mathematical derivation of the most general
metric by assuming Weyl's Postulate and the Cosmological Principle.
#### 4.4.7 Discovering Hubble's Law
Weyl's introduction of his Postulate made it possible for him to
provide the first satisfactory treatment of the cosmological
redshift. Consider a light source, say a star \(A\), which emits
monochromatic light that travels along null geodesics \(L, L',\ldots\)
to an observer \(O\). Let \(s\) be the proper time of the light
source, and let \(\sigma\) be the proper time of the observer
\(O\). Then to every point \(s\) on the world line of the light source
\(A\) there corresponds a point on the world line of the observer
\(O\), namely, \(\sigma = \sigma(s)\).
![figure](fig12.png)
Figure 12:
A body or star \(A\) emits monochromatic light which travels along
null geodesics \(L, L',\ldots\) to an observer \(O\).
Consequently, if one of the generators of the light cone issuing from
\(A\)'s world line at \(A\)'s proper time
\(s\_{0}\)--the null geodesic \(L\)--reaches
observer \(O\) at the observer's proper time
\(\sigma(s\_{0})\), then
\[\tag{42}
d\sigma = \left.\frac{d\sigma(s)}{ds}\right|\_{s\_0} ds.
\]
Therefore, the frequency \(\nu\_{A}\) of the light that
would be measured by some hypothetical observer on \(A\) is
related to the frequency \(\nu\_{O}\) measured on \(O\)
by
\[\tag{43}
\frac{\nu\_{O}}{\nu\_{A}} = \frac{d\sigma(s)}{ds}.
\]
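Relation (43) can be illustrated numerically (a toy sketch added here; the map \(\sigma(s)\) below is an arbitrary assumption, not Weyl's de Sitter computation): the frequency ratio is simply the derivative \(d\sigma/ds\), which can be estimated by finite differences.

```python
import math

# Hedged illustration of relation (43): each point s on the source's
# world line maps to a point sigma(s) on the observer's, and the
# frequency ratio nu_O/nu_A equals d(sigma)/ds.  The exponential map
# sigma(s) = exp(H*s)/H is an assumed toy example only.
def dsigma_ds(sigma, s0, h=1e-6):
    """Central-difference estimate of d(sigma)/ds at s0."""
    return (sigma(s0 + h) - sigma(s0 - h)) / (2 * h)

H = 0.5
sigma = lambda s: math.exp(H * s) / H

ratio = dsigma_ds(sigma, s0=1.0)
# The analytic derivative is exp(H*s); the numerical estimate agrees.
assert math.isclose(ratio, math.exp(H * 1.0), rel_tol=1e-6)
```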
According to Weyl (1923c) this relationship holds in arbitrary
spacetimes and for arbitrary motions of source and observer. Weyl
(1923b, Anhang III) then applied this relationship to de Sitter's
world and showed, to lowest order, that the redshift is *linear* in distance; that is, Weyl theoretically derived what was later
called *Hubble's redshift law*. Using Slipher's
redshift data Weyl estimated a *Hubble constant* six years
prior to Hubble. Weyl (1923b, Anhang III) remarks:
>
>
> It is noteworthy that neither the elementary nor Einstein's cosmology
> lead to such a redshift. Of course, one cannot claim today, that our
> explanation hits the right mark, especially since the views about the
> nature and distance of the spiral nebulae are still very much in need
> of further clarification.
>
>
>
In 1933 Weyl gave a lecture in Göttingen in which Weyl (1934b)
recalls:
>
>
> According to the Doppler effect the receding motion of the stars is
> revealed in a redshift of their spectral lines which is proportional
> to distance. In this form, where De Sitter's solution of the
> gravitational equation is augmented by an assumption concerning the
> undisturbed motion of the stars, I had predicted the redshift in the
> year 1923.
>
>
>
### 4.5 Quantum Mechanics and Quantum Field Theory
#### 4.5.1 Group Theory
During the period 1925-1926 Weyl published a sequence of
groundbreaking papers (Weyl (1925, 1926a,b,c)) in which he presented
a general theory of the representations and invariants of the
classical Lie groups. In these celebrated papers Weyl drew together
I. Schur's work on invariants and representations of the
\(n\)-dimensional rotation group, and E. Cartan's work on
semisimple Lie algebras. In doing so, Weyl utilized different fields
of mathematics such as tensor algebra, invariant theory, Riemann
surfaces and Hilbert's theory of integral equations. Weyl himself
considered these papers his greatest work in mathematics.
The central role that group theoretic techniques played in Weyl's
analysis of spacetime was one of several factors which led Weyl to
his general theory of the representations and invariants of the
classical Lie groups. It was in the context of Weyl's investigation
of the space-problem (see §4.2) that Weyl came to appreciate the
value of group theory for investigating the mathematical and
philosophical foundations of physical theories in general, and for
dealing with fundamental questions motivated by the general theory of
relativity, in particular.
A motivation of quite another sort, which led Weyl to his general
representation theory, was provided by Study when he attacked Weyl
specifically, as well as other unnamed individuals, by accusing them
"of having neglected a rich cultural domain (namely, the theory
of invariants), indeed of having completely ignored
it".[84]
Weyl (1924c) replied immediately providing a new foundation for the
theory of invariants of the special linear groups
\(SL(n, \mathbb{C})\) and its most important subgroups, the special
orthogonal group \(SO(n, \mathbb{C})\) and the special symplectic group
\(SSp(\frac{n}{2}, \mathbb{C})\) (for \(n\) even) based on algebraic
identities due to Capelli. In a footnote, Weyl (1924c) sarcastically
informed Study that "even if he [Weyl] had been as well versed
as Study in the theory of invariants, he would not have used the
symbolic method in his book
*Raum, Zeit, Materie* and even with the last breath of his
life would not have mentioned the algebraic completeness theorem for
invariant theory". Weyl's point was that in the context of his
book *Raum-Zeit-Materie*, the kernel-index method of tensor
analysis is more appropriate than the methods of the theory of
algebraic
invariants.[85]
While this account of events leading up to Weyl's groundbreaking
papers on group theory seems reasonable enough, Hawkins (2000) has
suggested a fuller account, which brings into focus Weyl's deep
philosophical interest in the mathematical foundations of the theory
of general relativity by drawing attention to Weyl (1924d) on tensor
symmetries, which, according to Hawkins, played an important role in
redirecting Weyl's research interests toward pure
mathematics.[86]
Weyl (1949b, 400) himself noted that his interest in the
philosophical foundations of the general theory of relativity
motivated his analysis of the representations and invariants of the
continuous groups: "I can say that the wish to understand what
really is the mathematical substance behind the formal apparatus of
relativity theory led me to the study of representations and
invariants of groups; and my experience in this regard is probably
not unique". Weyl's paper (Weyl (1924a)), and the first chapter
Weyl (1925) of his celebrated papers on representation theory, have
the same title: "The group theoretic foundation of the tensor
calculus". Hawkins (1998) says, Weyl
>
>
> had obtained through the theory of groups, and in particular through
> the theory of group representations--as augmented by his own
> contributions--what he felt was a proper mathematical
> understanding of tensors, tensor symmetries, and the reason they
> represent the source of all linear quantities that might arise in
> mathematics or physics. Once again, he had come to appreciate the
> importance of the theory of groups--and now especially the
> theory of group representation--for gaining insight into
> mathematical questions suggested by relativity theory. Unlike his
> work on the space problem ...Weyl now found himself drawing upon
> far more than the rudiments of group theory. ... And of course
> Cartan[87]
> had showed that the space problem could also be resolved with the aid
> of results about representations. In short, the representation theory
> of groups had proved itself to be a powerful tool for answering the
> sort of mathematical questions that grew out of Weyl's involvement
> with relativity theory.
>
>
>
Somewhat later, Weyl (1939) wrote a book, entitled *The Classical
Groups, Their Invariants and Representations*, in which
he returned to the theory of invariants and representations of the
semisimple Lie groups. In this work, he satisfied his ambition
"to derive the decisive results for the most important of these
groups by direct algebraic construction, in particular for the full
group of all non-singular linear transformations and for the
orthogonal group". He intentionally restricted the discussion
of the general theory and devoted most of the book to the derivation
of specific results for the general linear, the special linear, the
orthogonal and the symplectic groups.
#### 4.5.2 Weyl's philosophical critique of Cartan's approach to geometry
As far back as the 1920s, the great French mathematician and geometer
Élie Cartan had recognized that the notions
of *parallelism* and *affine connection* admit of an
important generalization in the sense that (1) the spaces for which
the notion of infinitesimal parallel transport is defined need not be
the tangent spaces that intrinsically arise from the differential
structure of a Riemannian manifold \(M\) at each of its points;
rather, the spaces are general spaces that are not intrinsically tied
to the differential manifold structure of \(M\), and (2) relevant
groups operate on these general spaces directly and not on the
manifold \(M\), and therefore groups play a dominant and independent
role.
Weyl (1938a) published a critical review of Cartan's (1937) book in
which Cartan further developed his notion of *moving frames*
("repères mobiles") and *generalized spaces* ("espaces généralisés").
However, Weyl (1988) expressed some of his reservations about Cartan's
approach as early as 1925; and four years later Weyl (1929e)
presented a more detailed critique.
Cartan's approach to differential geometry was a response to the fact
that Euclidean geometry had been generalized in two ways, resulting in
essentially two incompatible approaches to
geometry.[88]
The first generalization occurred with the discovery of non-Euclidean
geometries and with Klein's (1921) subsequent Erlanger program in
1872, which provided a coherent group theoretical framework for the
various non-Euclidean geometries. The second generalization of
Euclidean geometry occurred when Riemann (1854) discovered Riemannian
geometry.
The two generalizations of Euclidean geometry essentially constitute
incompatible approaches to applied geometry. In particular, while
Klein's Erlanger program provides an appropriate group theoretical
framework for Einstein's theory of special relativity, it is
Riemannian geometry, and not Klein's group theoretic approach, which
provides the appropriate underlying geometric framework for
Einstein's theory of general relativity. As Cartan observes:
>
>
> General relativity threw into physics and philosophy the antagonism
> that existed between the two principal directors of geometry, Riemann
> and Klein. The space-times of classical mechanics and of special
> relativity are of the type of Klein, those of general relativity are
> of the type of
> Riemann.[89]
>
>
>
>
Cartan eliminated the incompatibility between the two approaches by
synthesizing Riemannian geometry and Klein's Erlanger program through
a further generalization of both, resulting in what Cartan called,
*generalized spaces* (or *generalized geometries*).
In his Erlanger program, Klein provided a unified approach to the
various "global" geometries by showing that each of the
geometries is characterized by a particular group of transformations:
Euclidean geometry is characterized by the group of translations and
rotations in the plane; the geometry of the sphere
\(S^{2}\) is characterized by the orthogonal group
\(O(3)\); and the geometry of the hyperbolic plane is
characterized by the pseudo-orthogonal group \(O(1, 2)\). In
Klein's approach each geometry is a (connected) manifold endowed with
a group of automorphisms, that is, a Lie group \(G\) of
"motions" that acts *transitively* on the
manifold, such that two figures are regarded as congruent if and only
if there exists an element of the appropriate Lie group \(G\) that
transforms one of the figures into the other. A generalized geometry
in Klein's sense shifts the emphasis from the underlying manifold or
space to the group. Thus a Klein geometry (space) consists of (1) a
smooth manifold, (2) a Lie group \(G\) (the principal group of the
geometry), and (3) a transitive action of \(G\) on the manifold.
Besides being "global", a Klein geometry (space) is
completely homogeneous in the sense that its points cannot be
distinguished on the basis of geometric relations because the
transitive group action preserves such relations.
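In modern terms (a standard presentation rather than one stated explicitly in the text), a Klein geometry with a transitive action can be identified with a coset space of its principal group:

```latex
% Fixing a base point x_0 \in M with isotropy subgroup H, transitivity
% gives the identification of the Klein space with a coset space:
H = \{\, g \in G : g \cdot x_0 = x_0 \,\}, \qquad M \;\cong\; G/H .
% The examples of the text then read:
%   Euclidean plane:   E(2)/O(2)   (rigid motions modulo rotations)
%   sphere S^2:        O(3)/O(2)
%   hyperbolic plane:  O(1,2)/O(2)
```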
As Weyl (1949b) describes it, Klein's approach to the various
"global" geometries is well suited to Einstein's theory
of special relativity:
>
>
> According to Einstein's special relativity theory the
> four-dimensional world of the spacetime points is a Klein space
> characterized by a definite group \(\Gamma\); and that group is the
> ... group of Euclidean similarities--with one very
> important difference however. The orthogonal transformations, i.e.,
> the homogeneous linear transformations which leave
>
>
>
> \[
> x^{2}\_{1} + x^{2}\_{2} + x^{2}\_{3} + x^{2}\_{4}
> \]
>
>
>
> unchanged have to be replaced by the Lorentz transformations leaving
>
>
>
> \[
> x^{2}\_{1} + x^{2}\_{2} + x^{2}\_{3} - x^{2}\_{4}
> \]
>
>
>
> invariant.
>
>
>
However, with the advent of Einstein's general theory of relativity
the emphasis shifted from *global* homogeneous geometric
structures to *local* inhomogeneous structures. Whereas Klein
spaces are global and fully homogeneous, the Riemannian metric
structure underlying Einstein's general theory is local and
inhomogeneous. A generic Riemannian space admits no isometry other
than the identity.
Referring to Cartan (1923a), Weyl (1929e) says that Cartan's
generalization of Klein geometries consists in adapting Klein's
Erlanger program to infinitesimal geometry by applying Klein's
Erlanger program to the tangent plane rather than to the manifold
itself.[90]
>
>
> Cartan developed a general scheme of infinitesimal geometry in which
> Klein's notions were applied to the tangent plane and not to the
> \(n\)-dimensional manifold \(M\) itself.
>
>
>
![figure](fig13.png)
Figure 13:
Cartan's generalization
Figure 13 above, adapted from Sharpe (1997), may help in clarifying
the discussion. The generalization of Euclidean geometry to a
Riemannian space (the left vertical blue arrow) says:
1. A general Riemannian space approximates Euclidean space only
locally; that is, at each point \(p \in M\) there exists
a **tangent space** \(T(M\_{p})\) *that arises
intrinsically from the underlying differential structure of*
\(M\).
2. In addition, a Riemannian space is inhomogeneous through the
introduction of curvature.
Analogously, Cartan's generalization of a Klein space to a Cartan
space (the right vertical blue arrow) says:
1. Cartan's generalized space \(\Sigma(M)\) approximates a
Klein space only locally; that is, at each point \(p \in M\) there
exists a **"Tangent Space"**, that is, a
Klein space \(\Sigma(M\_{p})\). Note that a Klein space
\(\Sigma(M\_{p})\) is itself a generalized space (in the sense of
Cartan) with zero curvature; it possesses perfect homogeneity.
2. In addition, Cartan's generalized space \(\Sigma(M)\) is
inhomogeneous by the introduction of curvature.
![figure](fig14.png)
Figure 14:
Cartan's generalized space
Cartan's generalized space \(\Sigma(M)\) is the space of all
**"Tangent Spaces"** (i.e., all Klein spaces
\(\Sigma(M\_{p})\)) and contains a mixture of
homogeneous and inhomogeneous spaces (see figure 14).
Finally, Cartan's generalization of Riemannian space (lower
horizontal red arrow) (figure 13) turns on the recognition that the
**"Tangent Space"** in Cartan's sense is not
the same, or need not be the same, as the ordinary **tangent
space** that arises naturally from the underlying differential
structure of a Riemannian manifold. Cartan's
**"Tangent Space"**
\(\Sigma(M\_{p})\) at \(p \in M\) denotes what is known as
a *fiber* in modern fiber bundle language, where the manifold
\(M\) is called the *base space* of the fiber bundle.
In Weyl (1929e, 1988) and to a lesser extent in Weyl (1938a), Weyl
objected to Cartan's approach by noting that Cartan's
**"Tangent Space"**, namely the Klein space
\(\Sigma(M\_{p})\) associated with each point of the manifold \(M\),
does not arise intrinsically from the differential structure of the
manifold the way the ordinary tangent vector space does. Weyl
therefore noted that it is necessary to impose certain non-intrinsic
embedding conditions on \(\Sigma(M\_{p})\) that specify how the
**"Tangent Space"**
\(\Sigma(M\_{p})\) is associated with each point of the manifold
\(M\). Paraphrasing Weyl, the situation is as follows: We assume that
we can associate a copy \(\Sigma(M\_{p})\) of a given Klein space with
each point \(p\) of the manifold \(M\) and that the displacement of
the Klein space \(\Sigma(M\_{p})\) at \(p \in M\) to the Klein space
\(\Sigma(M\_{p'})\) associated with an infinitely nearby point
\(p'\in M\), constitutes an isomorphic representation of
\(\Sigma(M\_{p})\) on \(\Sigma(M\_{p'})\) by means of an infinitesimal
action of the group \(G\). In choosing an admissible frame of
reference \(f\) for each Klein space \(\Sigma(M\_{p})\), its points
are represented by
*normal* coordinates \(\xi\). Any two frames
\(f,f'\) are related by a group
element \(s \in G\), and a succession of transformations
\(f \rightarrow f'\) and \(f' \rightarrow f''\) by \(s \in G\) and
\(t \in G\) respectively, relates \(f\) and
\(f''\) by the group composition \(t \circ s \in G\).
Nothing so far has been said about how specifically the
**"Tangent Space"**
\(\Sigma(M\_{p})\) is connected to the manifold.
Since \(\Sigma(M\_{p})\) is supposed to be a
generalization of the ordinary tangent space which arises
intrinsically from the local differential structure of \(M\), Weyl
suggests that certain embedding conditions have to be imposed on the
*normal* coordinates \(\xi\) of the Klein space
\(\Sigma(M\_{p})\).
**Embedding Condition 1:**
We must first designate
a point as the center of \(\Sigma(M\_{p})\) and then require that this
center coincide with, or cover, the point \(p \in M\). This leads, Weyl says, to a
restriction in the choice of a *normal coordinate system*
\(\xi\) on \(\Sigma(M\_{p})\). And because \(G\) acts transitively, a
normal coordinate system \(\xi\) on \(\Sigma(M\_{p})\) can be chosen
such that the normal coordinates \(\xi\) vanish at the center, that
is, \(\xi^{1} = \xi^{2} = \cdots = 0\). The group \(G\) is therefore
restricted to the subgroup \(G\_{0}\) of all representations of \(G\)
which leave the center invariant.
**Embedding Condition 2:**
The notion of a tangent
plane also requires that there is a one-to-one linear mapping between
the line elements of \(\Sigma(M\_{p})\) starting
from 0, with the line elements of \(M\) starting from \(p\).
This means that the Klein space
\(\Sigma(M\_{p})\) has the same number of dimensions
as the manifold \(M\).
**Embedding Condition 3:**
The infinitesimal
displacement \(\Sigma(M\_{p}) \rightarrow \Sigma(M\_{p'})\) will carry an
infinitesimal vector at the center of
\(\Sigma(M\_{p})\), which is in one-to-one
correspondence with a vector at \(p \in M\), to the
center of \(\Sigma(M\_{p'})\).
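In modern notation, the three embedding conditions may be summarized as follows (a paraphrase; the symbol \(o\_{p}\) for the designated center is introduced here for illustration):

```latex
\begin{align*}
&\textbf{(1) Center: } \xi(o_p) = 0, \quad
  G_0 = \{\, s \in G : s(o_p) = o_p \,\};\\
&\textbf{(2) Linearization: } T_{o_p}\Sigma(M_p) \cong T_p M
  \ \text{(linear, one-to-one)},\ \text{so } \dim \Sigma(M_p) = \dim M;\\
&\textbf{(3) Transport: } \Sigma(M_p) \to \Sigma(M_{p'})
  \ \text{carries the center } o_p \text{ to the center } o_{p'}.
\end{align*}
```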
No further conditions need be imposed according to Weyl. If we
displace \(\Sigma(M\_{p})\) by successive steps around a curve
\(\gamma\) back to the point \(p \in M\) then the final position of
\(\Sigma(M\_{p})\) is obtained from its original position or orientation
by a certain automorphism \(\Sigma(M\_{p}) \rightarrow
\Sigma(M\_{p})\). This automorphism is Cartan's generalization of
Riemann's concept of curvature along the curve \(\gamma\) on
\(M\).
According to Weyl, the **"Tangent Space"**
\(\Sigma(M\_{p})\) is not uniquely determined by the differential
structure of \(M\). If \(G\) were the affine group, Weyl says, then
the conditions above would fully specify the *normal*
coordinate system \(\xi^{\alpha}\) on \(\Sigma(M\_{p})\) as a function
of the chosen local coordinates \(x^{i}\) on \(M\). Since this is not
the case if \(G\) is a more extensive group than the affine group,
Weyl concludes that the
**"Tangent Space"**
\(\Sigma(M\_{p})\) "is not as yet uniquely
determined by the nature of \(M\), and so long as this is not
accomplished we can not say that Cartan's theory deals only with the
manifold \(M\)." Weyl adds:
>
>
> Conversely, the tangent plane in \(p\) in the ordinary sense, that
> is, the linear manifold of line elements in \(p\), is a centered
> affine space; its group \(G\) is not a matter of convention. This
> has always appeared to me to be a deficiency of the theory ....
>
>
>
The reader may wish to consult Ryckman (2005, 171-173), who
argues "that a philosophical contention, indeed,
phenomenological one, underlies the stated mathematical reasons that
kept him [Weyl] for a number of years from concurring with Cartan's
"moving frame" approach to differential geometry".
In 1949 Weyl explicitly acknowledged and praised Cartan's approach.
In contrast to his earlier critical remarks, he now considered it a
virtue that the frame of reference in
\(\Sigma(M\_{p})\) is independent of the choice of
coordinates on \(M\). Weyl (1949b) says of the traditional
approach and Cartan's new approach to geometry:
>
>
> Hence we have here before us the natural general basis on which that
> notion rests. The infinitesimal trend in geometry initiated by Gauss'
> theory of curved surfaces now merges with that other line of thought
> that culminated in Klein's Erlanger program.
>
>
>
>
> It is not advisable to bind the frame of reference in \(\Sigma\_{p}\)
> to the coordinates \(x^{i}\) covering the neighborhood of \(p\) in
> \(M\). In this respect the old treatment of affinely connected
> manifolds is misleading. ... [I]n the modern development of
> infinitesimal geometry in the large, where it combines with topology
> and the associated Klein spaces appear under the name of fibres, it
> has been found best to keep the *reperes*, the
> frames of the fibre spaces, independent of the coordinates of the
> underlying manifold.
>
>
>
Moreover, in 1949, Weyl also emphasizes that it is necessary to
employ Cartan's method if one wishes to fit Dirac's theory of the
electron into general relativity. Weyl (1949b) says:
>
>
> When one tries to fit Dirac's theory of the electron into general
> relativity, it becomes imperative to adopt the Cartan method. For
> Dirac's four \(\psi\)-components are relative to a Cartesian (or rather
> a Lorentz) frame. One knows how they transform under transition from
> one Lorentz frame to another (spin representation of the Lorentz
> group); but this law of transformation is of such a nature that it
> cannot be extended to arbitrary linear transformations mediating
> between affine frames.
>
>
>
Weyl is here referring to his three important papers, which appeared
in 1929--the same year in which he had published his detailed
critique of Cartan's method--in which he investigates the
adaptation of Dirac's theory of the special relativistic electron to
the theory of general relativity, and where he develops the
*tetrad* or *Vierbein* formalism for the representation
of *local* two-component spinor structures on Lorentz
manifolds.
#### 4.5.3 Weyl's New Gauge Principle and Dirac's Special Relativistic Electron
Only a year after Pauli's review article in 1921, in which Pauli had
argued that Weyl's defence of his unified field theory deprives it of
its inherent convincing power from a physical point of view,
Schrödinger (1922) suggested the possibility that Weyl's 1918
gauge theory could suitably be employed in the quantum mechanical
description of the
electron.[91]
Similar proposals were subsequently made by Fock (1926) and London
(1927).
With the advent of the quantum theory of the electron around 1927/28
Weyl abandoned his gauge theory of 1918. He did so because in the new
quantum theory a different kind of gauge invariance associated with
Dirac's theory of the electron was discovered which, as had been
suggested by Fock (1926) and London (1927), more adequately accounted
for the conservation of electric
charge.[92]
Why did Weyl hold on to his gauge theory for almost a decade despite
a preponderance of compelling empirical arguments that were mounted
against it by Einstein, Pauli and
others?[93]
In one of Weyl's (1918/1998) last letters to Einstein concerning his
unified field theory, Weyl made it clear that *it was mathematics
and not physics* that was the driving force behind his unified
field
theory.[94]
>
>
> Incidentally, you must not believe that it was because of physics
> that I introduced the linear differential form \(d\varphi\) in addition
> to the quadratic form. I wanted rather to eliminate this
> "inconsistency" which always has been a bone of
> contention to
> me.[95]
> And then, to my surprise, I realized that it looked as if it might
> explain electricity. You clap your hands above your head and shout:
> But physics is not made this way!
>
>
>
As London (1927, 376-377) remarks, one must admire Weyl's
immense courage in developing his gauge invariant interpretation of
electromagnetism and holding on to it on the mere basis of purely
formal considerations. London observes that the principle of
equivalence of inertial and gravitational mass, which prompted
Einstein to provide a geometrical interpretation of gravity, was at
least a physical fact underlying gravitational theory. In contrast,
an analogous fact was not known in the theory of electricity;
consequently, it would seem that there was no compelling physical
reason to think that rigid rods and ideal clocks would be under the
universal influence of the electromagnetic field. To the contrary,
London says, experience strongly suggests that atomic clocks exhibit
sharp spectral lines that are unaffected by their history in the
presence of a magnetic field, contrary to Weyl's non-integrability
assumption. London concludes, that in the face of such elementary
empirical facts it must have been an unusually clear metaphysical
conviction which prevented Weyl from abandoning his idea that nature
ought to make use of the beautiful geometrical possibilities that a
pure infinitesimal geometry offers.
In 1955, shortly before his death, Weyl wrote an
addendum[96]
to his 1918 paper *Gravitation und
Elektrizität*, in which he looks back at
his early attempt to find a unified field theory and explains why he
reinterpreted his gauge theory of 1918, a decade later.
>
>
> This work stands at the beginning of attempts to construct a
> "unified field theory" which subsequently were continued
> by many, it seems to me, without decisive results. As is known, the
> problem relentlessly occupied Einstein in particular, until his end.
> ... The strongest argument for my theory appeared to be that
> gauge invariance corresponds to the principle of the conservation of
> electric charge just as coordinate invariance corresponds to the
> conservation theorem of energy-impulse. Later, quantum theory
> introduced the Schrödinger-Dirac potential \(\psi\) of the
> electron-positron field; the latter revealed an experimentally based
> principle of gauge invariance which guaranteed the conservation of
> charge and which connected the \(\psi\) with the electromagnetic
> potentials \(\varphi\_{i}\) in the same way that my
> speculative theory had connected the gravitational potentials
> \(g\_{ik}\) with \(\varphi\_{i}\), where,
> in addition, the \(\varphi\_{i}\) are measured in known
> atomic rather than unknown cosmological units. I have no doubts that
> the principle of gauge invariance finds its correct place here and
> not, as I believed in 1918, in the interaction of electromagnetism
> and gravity.
>
>
>
By the late 1920s Weyl's methodological approach to gauge theory
underwent an "empirical turn". In contrast to *a priori*
geometrical reasoning, which guided his early
unification attempts--Weyl calls it a "speculative
theory" in the above citation--by 1928/1929 Weyl
emphasized *experimentally-based principles* which underlie
gauge
invariance.[97]
In early 1928 P. A. M. Dirac provided the first physically compelling
theoretical account of the dynamics of an electron in the presence of
an electric field. The components \(\psi^{i} (x)\)
of Dirac's four-component wave function or *spinor field* in
Minkowski space, \(\psi(x) = (\psi^{1}(x), \psi^{2}(x), \psi^{3}(x),
\psi^{4}(x))\), are complex-valued functions that
satisfy Dirac's first-order partial differential equation and provide
probabilistic information about the electron's dynamical behaviour,
such as angular momentum and location. Prior to the appearance of
*spinor fields* \(\psi\) in Dirac's equation, it was generally
thought that scalars, vectors and tensors provided an adequate system
of mathematical objects that would allow one to provide a
mathematical description of reality independently of the choice of
coordinates or reference
frames.[98]
For example, spin zero particles (\(\pi\) mesons, \(\alpha\) particles)
could be described by means of scalars; spin 1 particles (deuterons)
by vectors, and spin 2 particles (hypothetical gravitons) by tensors.
However, the most frequently occurring particles in Nature are
electrons, protons, and neutrons. They are spin
\(\bfrac{1}{2}\) particles, called *fermions* that
are properly described by mathematical objects called
*spinors*, which are neither scalars, vectors, nor
tensors.[99]
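A minimal illustration, standard rather than drawn from Weyl's papers, of why spinors fall outside the scalar-vector-tensor hierarchy: under a rotation by angle \(\theta\) about the \(z\)-axis a two-component spinor transforms by

```latex
U(\theta) \;=\; e^{-i\theta\sigma_z/2}
         \;=\; \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix},
\qquad
U(2\pi) \;=\; -\mathbb{1} .
% A full turn multiplies a spinor by -1, whereas scalars, vectors and
% tensors all return to themselves; only a 4\pi rotation acts as the
% identity on spinors.
```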
Weyl referred to the \(\psi(x)\) in Dirac's equation as the
"Dirac quantity" and von Neumann called it the
"\(\psi(x)\)-vector". Both von Neumann and Weyl, and
others, immediately recognized that Dirac had introduced something
that was new in theoretical physics. Von Neumann (1928, 876) remarks:
>
>
> ... \(\psi\) does by no means have the relativistic transformation
> properties of a common four-vector. ... The case of a quantity
> with four components that is not a four-vector is a case which has
> never occurred in relativity theory; the Dirac \(\psi\)-vector is the
> first example of this type.
>
>
>
Weyl (1929c) notes that the spinor representation of the orthogonal
group \(O(1, 3)\) cannot be extended to a representation of the
general linear group \(GL(n)\), \(n = 4\), with the
consequence that it is necessary to employ the *Vierbein*,
*tetrad* or Lorentz-structure formulation of the theory of
general relativity in order to incorporate Dirac's spinor fields
\(\psi(x)\):
>
>
> The tensor calculus is not the proper mathematical instrument to use
> in translating the quantum-theoretic equations of the electron over
> into the *general theory of relativity.* Vectors and terms
> [tensors] are so constituted that the law which defines the
> transformation of their components from one Cartesian set of axes to
> another can be extended to the most general linear transformation, to
> an affine set of axes. That is not the case for quantity \(\psi\),
> however; this kind of quantity belongs to a representation of the
> rotation group which cannot be extended to the affine group.
> Consequently we cannot introduce components of \(\psi\) relative to an
> arbitrary coordinate system in general relativity as we can for the
> electromagnetic potential and field strengths. We must rather
> describe the metric at a point \(p\) by local Cartesian axes
> \(e(a)\) instead of by the \(g\_{pq}\).
> The wave field has definite components
> \(\psi^{+}\_{1}, \psi^{+}\_{2}, \psi^{-}\_{1}, \psi^{-}\_{2}\)
> relative to such axes, and we know how they transform on transition to
> any other Cartesian axes in \(p\).
>
>
>
Impressed by the initial success of Dirac's equation of the spinning
electron within the special relativistic context, Weyl adapted
Dirac's special relativistic theory of the electron to the general
theory of relativity in three groundbreaking papers (Weyl
(1929b,c,d)). A complete exposition of this formalism is presented in
(Weyl (1929b)). O'Raifeartaigh (1997) says of this paper:
>
>
> Although not fully appreciated at the time, Weyl's 1929 paper has
> turned out to be one of the seminal papers of the century, both from
> the philosophical and from the technical point of view.
>
>
>
In this groundbreaking paper, as well as in Weyl (1929c,d), Weyl
explicitly abandons his earlier attempt to unify electromagnetism
with the theory of general relativity. In his early attempt he
associated the electromagnetic vector potential
\(A\_{j}(x)\) with the additional connection
coefficients that arise when a conformal structure is reduced to a
Weyl structure (see §4.1). The important concept of gauge
invariance, however, is preserved in his 1929 paper. Rather than
associating gauge transformations with the scale or gauge of the
spacetime metric tensor, Weyl now associates gauge transformations
with the phase of the Dirac spinor field \(\psi\) that represents
matter. In the introduction of (Weyl (1929b)), which presents in
detail the new formalism, Weyl describes his reinterpretation of the
gauge principle as follows:
>
>
> The Dirac field-equations for \(\psi\) together with the Maxwell
> equations for the four potentials \(f\_{p}\) of the
> electromagnetic field have an invariance property which, from a
> formal point of view, is similar to the one that I called gauge
> invariance in my theory of gravitation and electromagnetism of 1918;
> the equations remain invariant when one makes the simultaneous
> replacements
>
>
>
> \[\begin{array}{ccc}
> \psi \text{ by } e^{i\lambda}\psi
> & \text{and} &
> f\_p \text{ by } f\_p - \dfrac{\partial\lambda}{\partial x^p},
> \end{array}\]
>
>
>
> where \(\lambda\) is understood to be an arbitrary function of
> position in the four-dimensional world. Here the factor
> \(\bfrac{e}{ch}\), where \(- e\) is the charge of the electron, \(c\)
> is the speed of light, and \(\bfrac{h}{2\pi}\) is the quantum of
> action, has been absorbed in \(f\_{p}\). The connection of this
> "gauge invariance" to the conservation of electric charge
> remains untouched. But an essential difference, which is significant
> for the correspondence to experience, is that the exponent of the
> factor multiplying \(\psi\) is not real but purely imaginary. \(\psi\)
> now assumes the role that \(ds\) played in Einstein's old
> theory. It seems to me that this new principle of gauge invariance,
> which follows not from speculation but from experiment, compellingly
> indicates that *the electromagnetic field is a necessary
> accompanying phenomenon, not of gravitation, but of the material wave
> field represented by* \(\psi\). Since gauge invariance includes an
> arbitrary function \(\lambda\) it has the character of
> "general" relativity and can naturally only be understood
> in that context.
>
>
>
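The invariance Weyl describes can be verified directly. In modern notation (with the factor \(e/ch\) absorbed into \(f\_{p}\), as in the quotation, and writing \(\partial\_{p}\) for \(\partial/\partial x^{p}\)), the coupled derivative \(D\_{p} = \partial\_{p} + if\_{p}\) transforms covariantly:

```latex
% Under  \psi \to e^{i\lambda}\psi  and  f_p \to f_p - \partial_p \lambda :
\bigl(\partial_p + i(f_p - \partial_p\lambda)\bigr)\,e^{i\lambda}\psi
  \;=\; e^{i\lambda}\bigl(\partial_p\psi + i(\partial_p\lambda)\psi
        + i f_p\psi - i(\partial_p\lambda)\psi\bigr)
  \;=\; e^{i\lambda}\,D_p\psi .
% Hence every equation built from \psi and D_p\psi is gauge invariant,
% and the shift of f_p leaves the field strengths
% \partial_p f_q - \partial_q f_p (the Maxwell field) unchanged.
```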
Weyl then introduces his two-component spinor theory in Minkowski
space. Since one of his aims is to adapt Dirac's theory to the
curved spacetime of general relativity, Weyl develops a theory
of *local* spinor structures for curved
spacetime.[100]
He achieves this by providing a systematic formulation of
*local* tetrads or Vierbeins (orthonormal basis
vectors). Orthonormal frames had already been introduced as early as
1900 by Levi-Civita and Ricci. Somewhat later, Cartan had shown the
usefulness of employing local orthonormal-basis vector fields, the
so-called "moving frames" in his investigation of
Riemannian geometry in the 1920s. In addition, Einstein (1928) had
used tetrads or Vierbeins in his attempt to unify gravitation and
electricity by resorting to *distant parallelism*
with *torsion*. In Einstein's theory, the effects of
gravity and electromagnetism are associated with a specialized torsion
of spacetime rather than with the curvature of spacetime. Since the
curvature vanishes everywhere, distant parallelism is a feature of
Einstein's theory. However, distant parallelism appeared to Weyl
to be quite unnatural from the viewpoint of Riemannian geometry. Weyl
expressed his criticism in all three papers (Weyl (1929b,c,d)) and he
contrasted the way in which Vierbeins are employed in his own work
with the way they were used by Einstein. In the introduction Weyl
(1929b) says:
>
>
> I prefer not to believe in distant parallelism for a number of
> reasons. First, my mathematical attitude resists accepting such an
> artificial geometry; it is difficult for me to understand the force
> that would keep the local tetrads at different points and in rotated
> positions in a rigid relationship. There are, I believe, two
> important physical reasons as well. In particular, by loosening the
> rigid relationship between the tetrads at different points, the gauge
> factor \(e^{i\lambda}\), which remains arbitrary
> with respect to the quantity \(\psi\), changes from a constant to an
> arbitrary function of spacetime location; that is, only through the
> loosening of the rigidity does the actual gauge invariance become
> understandable. Secondly, the possibility to rotate the tetrads at
> different points independently from each other, is as we shall see,
> equivalent to the symmetry of the energy-momentum tensor or with the
> validity of its conservation law.
>
>
>
Every tetrad uniquely determines the pseudo-Riemannian spacetime
metric \(g\_{ij}\). However, the converse does not hold since the
tetrad has 16 independent components whereas the spacetime metric,
\(g\_{ij} = g\_{ji}\), has only 10 independent components. The extra 6
degrees of freedom of the tetrads that are not determined by the
metric may be represented by the elements of a 6-parameter internal
Lorentz group. That is, the local tetrads are determined by the
spacetime metric up to local Lorentz transformations. The tetrad
formalism made it possible, therefore, for Weyl to derive, as a
special case of Noether's second
theorem[101],
the energy-momentum conservation laws for general coordinate
transformations and the internal Lorentz transformations of the
tetrads. Moreover, Weyl had always emphasized the strong analogy
between gravitation and electricity. The tetrad formalism and the
conservation laws both made explicit and supported this analogy.
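The counting in this paragraph can be checked numerically. The sketch below (illustrative only; the tetrad entries and variable names are my own) builds a tetrad \(e^{a}\_{i}\), forms \(g\_{ij} = \eta\_{ab} e^{a}\_{i} e^{b}\_{j}\), applies one of the six local Lorentz transformations to the tetrad, and confirms that the metric is unchanged while the tetrad is not:

```python
import math

# Minkowski metric diag(-1, 1, 1, 1)
ETA = [[(-1.0 if a == 0 else 1.0) if a == b else 0.0 for b in range(4)]
       for a in range(4)]

def matmul(A, B):
    """Plain 4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [[A[j][i] for j in range(4)] for i in range(4)]

def metric_from_tetrad(e):
    """g_ij = eta_ab e^a_i e^b_j, i.e. g = e^T . eta . e."""
    return matmul(transpose(e), matmul(ETA, e))

# An arbitrary invertible tetrad: 16 independent components.
tetrad = [[1.0, 0.2, 0.0, 0.0],
          [0.0, 1.1, 0.3, 0.0],
          [0.0, 0.0, 1.0, 0.1],
          [0.2, 0.0, 0.0, 0.9]]

# A Lorentz boost in the t-x plane: one of the 6 parameters
# (3 boosts + 3 rotations) left free by the metric.
phi = 0.7
boost = [[math.cosh(phi), math.sinh(phi), 0.0, 0.0],
         [math.sinh(phi), math.cosh(phi), 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]

rotated = matmul(boost, tetrad)  # e'^a_i = Lambda^a_b e^b_i

g_before = metric_from_tetrad(tetrad)
g_after = metric_from_tetrad(rotated)

# The tetrad components changed, but the metric they determine did not.
same_metric = all(abs(g_before[i][j] - g_after[i][j]) < 1e-12
                  for i in range(4) for j in range(4))
changed_tetrad = any(abs(tetrad[i][j] - rotated[i][j]) > 1e-9
                     for i in range(4) for j in range(4))
print(same_metric, changed_tetrad)
```

The same result holds for spatial rotations and for any composition of the six generators, which is exactly the freedom the internal Lorentz group parametrizes.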
Weyl introduced the final section of his seminal 1929 paper saying
"We now come to the critical part of the theory", and
presented a derivation of electromagnetism from the new gauge
principle. The initial step in Weyl's derivation exploits the
intrinsic gauge freedom of his two-component theory of spinors for
Minkowski space, namely
\[
\psi(x) \rightarrow e^{i\lambda}\psi(x),
\]
where the gauge factor is a constant. Since Weyl wished to adapt his
theory to the curved spacetime of general relativity, the above phase
transformation must be generalized to accommodate *local*
tetrads. That is, each spacetime point has its own tetrad and
therefore its own point-dependent gauge factor. The phase
transformation is thus given by
\[
\psi(x) \rightarrow e^{i\lambda(x)}\psi(x),
\]
where \(\lambda(x)\) is a function of spacetime position. Weyl says:
>
>
> We come now to the critical part of the theory. In my view the origin
> and the necessity for the electromagnetic field lie in the following
> justification. The components \(\psi\_{1},\psi\_{2}\)
> are, in fact, not uniquely determined by the tetrad but only to the
> extent that they can still be multiplied by an arbitrary
> "gauge-factor" \(e^{i\lambda}\) of
> absolute value 1. The transformation of the \(\psi\) induced by a
> rotation of the tetrad is determined only up to such a factor. In the
> special theory of relativity one must regard this gauge factor as a
> constant, since we have here only a single point-independent tetrad.
> This is different in the general theory of relativity. Every point
> has its own tetrad, and hence its own arbitrary gauge factor, because
> the gauge factor necessarily becomes an arbitrary function of
> position through the removal of the rigid connection between tetrads
> at different points.
>
>
>
Today, the concept of gauge invariance plays a central role in
theoretical physics. Not until 1954 did Yang and Mills (1954)
generalize Weyl's electromagnetic gauge concept to the case of the
non-Abelian group
\(O(3)\).[102]
Although Weyl's reinterpretation of gauge invariance had been
preceded by suggestions from London and Fock, it was Weyl, according
to O'Raifeartaigh and Straumann (2000),
>
>
> who emphasized the role of gauge invariance as a *symmetry
> principle* from which electromagnetism can be derived. It took
> several decades until the importance of this symmetry
> principle--in its generalized form to non-Abelian gauge groups
> developed by Yang, Mills, and others--also became fruitful for a
> description of the weak and strong interactions. The mathematics of
> the non-Abelian generalization of Weyl's 1929 paper would have been
> an easy task for a mathematician of his rank, but at the time there
> was no motivation for this from the physics side.
>
>
>
It is interesting in this context to consider the following remarks
by Yang. Referring to Einstein's objection to Weyl's 1918 gauge
theory, Yang (1986, 18) asked, "what has happened to Einstein's
original objection after quantum mechanics inserted an \(-i\) into
the scale factor and made it into a phase factor?" Yang
continues:
>
>
> Apparently no one had, after 1929, relooked at Einstein's objection
> until I did in 1983. The result is interesting and deserves perhaps
> to be a footnote in the history of science: Let us take Einstein's
> Gedankenexperiment .... When the two clocks come back, because
> of the insertion of the factor \(-i\), they would not have
> different scales but different phases. That would not influence their
> rates of time-keeping. Therefore, Einstein's original objection
> disappears. But you can ask a further question: Can one measure their
> phase difference? Well, to measure a phase difference one must do an
> interference experiment. Nobody knows how to do an interference
> experiment with big objects like clocks. However, one can do
> interference experiments with electrons. So let us change Einstein's
> Gedankenexperiment to one of bringing electrons back along two
> different paths and ask: Can one measure the phase difference? The
> answer is yes. That was in fact a most important development in 1959
> and 1960 when Aharonov and Bohm realized--completely
> independently of Weyl--that electromagnetism has some meaning
> which was not understood
> before.[103]
>
>
>
>
We end the discussion on Weyl's gauge theory by quoting the following
remarks by Dyson (1983).
>
>
> A more recent example of a great discovery in mathematical physics
> was the idea of a gauge field, invented by Hermann Weyl in 1918. This
> idea has taken only 50 years to find its place as one of the basic
> concepts of modern particle physics. Quantum chromodynamics, the most
> fashionable theory of the particle physicists in 1981, is
> conceptually little more than a synthesis of Lie's group-algebras
> with Weyl's gauge fields.
>
>
>
>
> The history of Weyl's discovery is quite unlike the history of Lie
> groups and Grassmann algebras. Weyl was neither obscure nor
> unrecognized, and he was working in 1918 in the most fashionable area
> of physics, the newborn theory of general relativity. He invented
> gauge fields as a solution of the fashionable problem of unifying
> gravitation with electromagnetism. For a few months gauge fields were
> at the height of fashion. Then it was discovered by Weyl and others
> that they did not do what was expected of them. Gauge fields were in
> fact no good for the purpose for which Weyl invented them. They
> quickly became unfashionable and were almost forgotten. But then,
> very gradually over the next fifty years, it became clear that gauge
> fields were important in a quite different context, in the theory of
> quantum electrodynamics and its extensions leading up to the recent
> development of quantum chromodynamics. The decisive step in the
> rehabilitation of gauge fields was taken by our Princeton colleague
> Frank Yang and his student Bob Mills in 1954, one year before Hermann
> Weyl's death [Yang and Mills, 1954]. There is no evidence that Weyl
> ever knew or cared what Yang and Mills had done with his brain-child.
>
>
>
>
>
> So the story of gauge fields is full of ironies. A fashionable idea,
> invented for a purpose which turns out to be ephemeral, survives a
> long period of obscurity and emerges finally as a corner-stone of
> physics.
>
>
>
#### 4.5.4 Weyl's two-component Neutrino theory
It is remarkable that Weyl's (1929b) two-component spinor formalism
led him to anticipate the existence of particles that violate
conservation of parity, that is, left-right symmetry. In 1929
left-right symmetry was taken for granted and considered a basic fact
of all the laws of Nature. Weyl formulated the four-component Dirac
spinor \(\psi\) in terms of a two-component left-handed Weyl spinor
\(\psi\_{L}\) and a two-component right-handed Weyl spinor
\(\psi\_{R}\):
\[\begin{align}
\psi &= (\psi^{1}, \psi^{2}, \psi^{3}, \psi^{4})^{T} \\
&= (\psi^{1}\_{L}, \psi^{2}\_{L}, \psi^{1}\_{R}, \psi^{2}\_{R})^{T} \\
&= (\psi\_L, \psi\_R)^T
\end{align}\]
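This splitting can be checked numerically. The following sketch (my own illustration, assuming numpy and the standard chiral representation of the Dirac matrices, in which \(\gamma\_{5}\) is diagonal) verifies that the projectors \((1 \mp \gamma\_{5})/2\) extract \(\psi\_{L}\) and \(\psi\_{R}\) from a four-component spinor:

```python
import numpy as np

# Chiral (Weyl) representation of the Dirac gamma matrices, built
# from the Pauli matrices.  These are standard textbook conventions,
# not Weyl's original notation.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    # Assemble a 4x4 matrix from four 2x2 blocks.
    return np.block([[a, b], [c, d]])

gamma = [block(Z, s0, s0, Z),    # gamma^0
         block(Z, s1, -s1, Z),   # gamma^1
         block(Z, s2, -s2, Z),   # gamma^2
         block(Z, s3, -s3, Z)]   # gamma^3

# gamma5 = i * g0 g1 g2 g3; diagonal in the chiral basis.
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# Projectors onto the left- and right-handed Weyl components.
PL = (np.eye(4) - gamma5) / 2
PR = (np.eye(4) + gamma5) / 2

psi = np.array([1, 2, 3, 4], dtype=complex)
psi_L, psi_R = PL @ psi, PR @ psi

# The projectors are complementary and idempotent, and they split the
# four-component Dirac spinor into its two Weyl halves.
assert np.allclose(PL + PR, np.eye(4))
assert np.allclose(PL @ PL, PL)
assert np.allclose(psi_L + psi_R, psi)
```

In this basis the upper two components of \(\psi\) are exactly \(\psi\_{L}\) and the lower two are \(\psi\_{R}\), matching the decomposition above.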
The four-component Dirac spinor, formulated in terms of the two Weyl
spinors
\[
\psi = \left[\matrix{\psi\_L \\ \psi\_R}\right]
\]
preserves parity; it applies to all massive spin \(\bfrac{1}{2}\)
particles (fermions) and all massive fermions are known to obey parity
conservation. However, a single Weyl spinor, either \(\psi\_{L}\) or
\(\psi\_{R}\), does not preserve parity. Weyl noted that instead of the
four-component Dirac spinor "two components suffice if the
requirement of left-right symmetry (parity) is dropped". A
little later he added, "the restriction 2 removes the
equivalence of left and right. It is only the fact that left-right
symmetry actually appears in Nature that forces us to introduce a
second pair of \(\psi\)-components". Weyl's two-spinor
version of the Dirac equation is a *coupled* system of
equations requiring both Weyl spinors \(\psi\_{L}\) and \(\psi\_{R}\) in
order to preserve parity. Weyl considered massless particles in his
two-spinor version of the Dirac equation. In this case, the equations
of the two-spinor version of Dirac's equation *decouple*,
yielding an equation for \(\psi\_{L}\) and for \(\psi\_{R}\). These
equations are independent of each other, and the equation for the
2-component left-handed Weyl spinor \(\psi\_{L}\) is
called *Weyl's equation*; it is applicable to the
massless particle called the
*neutrino*[104],
a spin \(\bfrac{1}{2}\) particle that was discovered in 1956. Yang
(1986, 12) remarks
>
>
> Now I come to another piece of work of Weyl's which dates back to
> 1929, and is called Weyl's two-component neutrino theory. He invented
> this theory in 1929 in one of his very important articles ... as
> a mathematical possibility satisfying most of the requirements of
> physics. But it was rejected by him and by subsequent physicists
> because it did not satisfy left-right symmetry. With the realisation
> that left-right symmetry was not exactly right in 1957 it became
> clear that this theory of Weyl's should immediately be re-looked at.
> So it was and later it was verified theoretically and experimentally
> that this theory gave, in fact, the correct description of the
> neutrino.
>
>
>
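The decoupling described above can be written compactly in modern two-component notation (a standard reconstruction with \(\sigma^{\mu} = (1, \vec{\sigma})\) and \(\bar{\sigma}^{\mu} = (1, -\vec{\sigma})\), not Weyl's own symbols). The two-spinor form of the Dirac equation is the coupled pair
\[\begin{align}
i\bar{\sigma}^{\mu}\partial\_{\mu}\psi\_{L} &= m\psi\_{R}, \\
i\sigma^{\mu}\partial\_{\mu}\psi\_{R} &= m\psi\_{L}.
\end{align}\]
For \(m = 0\) the pair decouples, and the left-handed half,
\[
i\bar{\sigma}^{\mu}\partial\_{\mu}\psi\_{L} = 0,
\]
is Weyl's equation; since no right-handed partner is required, left-right symmetry is not a symmetry of this equation.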
#### 4.5.5 The Theory of Groups and Quantum Mechanics
During 1924-26, while Weyl was intensely
occupied with the pure mathematics of Lie groups, the essentials of
the formal apparatus of the new revolutionary theory of quantum
mechanics had been completed by Heisenberg, Schrodinger and
others. As if to make up for lost time, Weyl immediately returned
from pure mathematics to theoretical physics, and applied his new
group theoretical results to quantum mechanics. As Yang (1986, 9, 10)
describes it,
>
>
> In the midst of Weyl's profound research on Lie groups there
> occurred a great revolution in physics, namely the development of
> quantum mechanics. We shall perhaps never know Weyl's initial
> reaction to this development, but he soon got into the act and studied
> the mathematical structure of the new mechanics. There resulted a
> paper of 1927 and later a book, this book together with Wigner's
> articles and *Gruppen Theorie und Ihre Anwendung auf die Quanten
> Mechanik der Atome* were instrumental in introducing group theory
> into the very language of quantum mechanics.
>
>
>
Mehra and Rechenberg (2000, 482) note in this context:
"Actually, we have mentioned in previous volumes Weyl's early
reactions to both matrix mechanics (in 1925) and wave mechanics (in
early 1926), and they were very enthusiastic. Therefore, we have to
assume quite firmly that it was only his deep involvement with the
last stages of his work on the theory of semisimple continuous groups
that prevented Weyl 'to get in the act'
immediately."
Weyl was particularly well positioned to handle some of the
mathematical and foundational problems of the new theory of quantum
mechanics. Almost every aspect of his mathematical expertise, in
particular, his recent work on group theory and his very early work
on the theory of singular differential-integral equations
(1908-1911), provided him with the precise tools for solving
many of the concrete problems posed by the new theory: the theory of
Hilbert space, singular differential equations, eigenfunction
expansions, the symmetric group, and unitary representations of Lie
groups.
Weyl's (1927) paper, referred to by Yang above, is entitled
*Quantenmechanik und Gruppentheorie* (*Quantum Mechanics
and Group Theory*). In it, Weyl provides an analysis of the
foundations of quantum mechanics and he emphasizes the fundamental
role Lie groups play in that
theory.[105]
Weyl begins the paper by raising two questions: (1) how do I arrive
at the self-adjoint operators, which represent a given quantity of a
physical system whose constitution is known, and (2), what is the
physical interpretation of these operators and which physical
consequences can be derived from them? Weyl suggests that while the
second question has been answered by von Neumann, the first question
has not yet received a satisfactory answer, and Weyl proposes to
provide one with the help of group theory.
In a way, Weyl's 1927 paper was programmatic in character; nearly all
the topics of that paper were taken up again a year later in his
famous book (Weyl (1928)) entitled *Gruppentheorie und
Quantenmechanik* (*The Theory of Groups and Quantum
Mechanics*). The book emerged from the lecture notes taken by a
student named F. Bohnenblust of Weyl's lectures given in Zurich
during the winter semester 1927-28. A revised edition of that
book appeared in 1931. In the preface to the first edition Weyl says:
>
>
> Another time I venture on stage with a book that belongs only half to
> my professional field of mathematics, the other half to physics. The
> external reason is not very different from that which led some time
> ago to the origin of the book *Raum Zeit Materie*. In the
> winter term 1927/28 Zurich was suddenly deprived of all
> theoretical physics by the simultaneous departures of Debye and
> Schrodinger. I tried to fill the gap by changing an already
> announced lecture course on group theory into one on group theory and
> quantum mechanics.
>
> ...
>
> Since I have for some years been deeply occupied with the theory of
> the representation of continuous groups, it appeared to me at this
> point to be a fitting and useful project, to provide an organically
> coherent account of the knowledge in this field won by mathematicians,
> on such a scale and in such a form, that is suitable for the
> requirements of quantum physics.
>
>
>
Weyl's book is one of the first textbooks on the new theory of
quantum mechanics. As Weyl indicates in the preface it was necessary
for him to include a short account of the foundation of quantum
theory in order to be able to show how the theory of groups finds its
application in that theory. If the book fulfils its purpose, Weyl
suggests, then the reader should be able to learn from it the
essentials of both the theory of groups and quantum theory. Weyl's
aim was to explain the mathematics to the physicists and the physics
to the mathematicians. However, as Yang (1986, 10) points out,
referring to Weyl's book:
>
>
> Weyl was a mathematician and a philosopher. He liked to deal with
> concepts and the connection between them. His book was very famous,
> and was recognized as profound. Almost every theoretical physicist
> born before 1935 has a copy of it on his bookshelves. But very few
> read it: Most are not accustomed to Weyl's concentration on the
> structural aspects of physics and feel uncomfortable with his
> emphasis on concepts. The book was just too abstract for most
> physicists.
>
>
>
Weyl's book (Weyl (1931b, 2 edn)) is remarkably complete for such an
early work and covers many topics. Chapters I and III are mainly
concerned with preliminary mathematical concepts. The first chapter
provides an account of the theory of finite dimensional Hilbert
spaces and the third chapter is an exposition of the unitary
representation theory of finite groups and compact Lie groups.
Chapter II is entitled *Quantum Theory*; it is the earliest
systematic and comprehensive account of the new quantum theory.
Chapter IV, entitled *Application of the Theory of Groups
to Quantum Mechanics*, is divided into four parts. In part A,
entitled *The Rotation Group*, Weyl provides a systematic
explanatory account of the theory of atomic spectra in terms of the
unitary representation theory of the rotation group, followed by a
discussion of the selection and intensity rules. Part B is entitled
*The Lorentz Group*. After discussing the spin of the electron
and its role in accounting for the anomalous Zeeman effect, Weyl
presents Dirac's theory of the relativistic quantum mechanics of the
electron and develops in detail the theory of an electron in a
spherically symmetric field, including an analysis of the fine
structure of the spectrum. In part C, entitled *The Permutation
Group*, Weyl applies the Pauli exclusion principle to explicate
the periodic table of the elements. Next, Weyl develops the second
quantization of the Maxwell and Dirac fields required for the
analysis of many-body relativistic systems. Weyl noted in the preface
to the second edition that his treatment is in accordance with the
recent work of Heisenberg and Pauli. It is now customary to include
such a topic under the heading of relativistic quantum field theory.
The final part of Chapter IV, part D, is entitled *Quantum*
*Kinematics*; it provides an exposition of part II of Weyl's
(1927) paper, mentioned earlier. Chapter V, entitled *The
Symmetric Permutation Group and the Algebra of Symmetric
Transformations*, is for the most part pure mathematics. It is
widely regarded as the most difficult part of Weyl's book.
Overall, Weyl's treatment is quite modern except for the confusion
regarding the positive electron (anti-electron) that at that time was
identified with the proton rather than with the positron, which was
discovered a few years later. Weyl was quite concerned about the
identification of the proton with the positive electron because his
analysis of the discrete symmetries \(\mathbf{C}, \mathbf{P}, \mathbf{T}\) and
\(\mathbf{CPT}\) led him to conclude that the mass of the positive electron
should equal the mass of the
electron.[106]
#### 4.5.6 Weyl's Early Discussion of the Discrete Symmetries \(\mathbf{C}, \mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\)
Weyl (1931b, 2 edn) analyzed Dirac's relativistic theory of the
electron (Dirac (1928a,b)). Although this theory correctly accounted
for the spin of the electron, there was however a problem because in
addition to the positive-energy levels, Dirac's theory predicted the
existence of an equal number of negative-energy levels. Dirac (1930)
reinterpreted the theory by assuming that all of the negative-energy
levels were normally occupied. The Pauli Exclusion Principle, which
asserts that it is impossible for two electrons to occupy the same
quantum state, would prevent an electron with positive energy from
falling into a negative-energy state. Dirac's theory also predicted
that one of the negative-energy electrons could be raised to a state
of positive energy, thereby creating a 'hole' or
unoccupied negative-energy state. Such a hole would behave like a
particle with a positive energy and a positive charge, that is, like
a positive electron.
Because the only fundamental particles that were known to exist at
that time were the electron and the proton, one was justifiably
reluctant to postulate the existence of new particles that had not
yet been observed experimentally; consequently, it was suggested that
the positive electron should be identified with the proton. However,
Weyl was quite concerned about the identification of the proton with
the anti-electron. In the preface to the second German edition of his
book *Gruppentheorie und Quantenmechanik*, Weyl (1928, 2 edn,
1931, VII) wrote
>
>
> The problem of the proton and the electron is discussed in connection
> with the symmetry properties of the quantum laws with respect to the
> interchange of right and left, past and future, and positive and
> negative electricity. At present no acceptable solution is in sight;
> I fear, that in the context of this problem, the clouds are rolling
> together to form a new, serious crisis in quantum physics.
>
>
>
Weyl had good reasons for his concern. He analyzed the invariance of
the Maxwell-Dirac equations under the discrete symmetries that
correspond to the transformations now called \(\mathbf{C}, \mathbf{P},
\mathbf{T}\) and \(\mathbf{CPT}\) both for the case of relativistic quantum
mechanics and for the case of relativistic quantum field theory, and
concluded in both cases that the mass of the anti-electron should be
the same as the mass of the electron. That the mass of the proton was
so different from the mass of the electron, therefore, appeared to
Weyl to constitute a new serious crisis in physics.
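Weyl's equal-mass conclusion can be sketched in modern notation (a standard textbook form of charge conjugation, not Weyl's own symbols). If \(\psi\) obeys the Dirac equation in an external electromagnetic field,
\[
\bigl(i\gamma^{\mu}(\partial\_{\mu} + ieA\_{\mu}) - m\bigr)\psi = 0,
\]
then the charge-conjugate spinor \(\psi\_{c} = C\bar{\psi}^{T}\), with \(C\) satisfying \(C(\gamma^{\mu})^{T}C^{-1} = -\gamma^{\mu}\), obeys
\[
\bigl(i\gamma^{\mu}(\partial\_{\mu} - ieA\_{\mu}) - m\bigr)\psi\_{c} = 0,
\]
the same equation with the sign of the charge reversed but the *same* mass parameter \(m\): the anti-particle cannot differ in mass from the particle.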
In a lecture presented at the Centenary for Hermann Weyl held at the
ETH in Zurich, Yang (1986, 10) says of the above quote from
Weyl's preface to the second edition of *Gruppentheorie und
Quantenmechanik*:
>
>
> This was a most remarkable passage in retrospect. The symmetry that
> he mentioned here, of physical laws with respect to the interchange
> of right and left, had been introduced by Weyl and Wigner
> independently into quantum physics. It was called parity
> conservation, denoted by the symbol \(P\). The symmetry between
> the past and future was something that was not well understood in
> 1930. It was understood later by Wigner, was called time reversal
> invariance, and was denoted by the symbol \(T\). The symmetry with
> respect to positive and negative electricity was later called charge
> conjugation invariance \(C\). It is a symmetry of physical laws
> when you change positive and negative signs of electricity. Nobody,
> to my knowledge, absolutely nobody in the year 1930, was in any way
> suspecting that these symmetries were related in any manner. I will
> come back to this matter later. What had prompted Weyl in 1930 to
> write the above passage is a great mystery to me.
>
>
>
It would seem that Yang's comment is misleading since it
suggests that Weyl did not have a good reason for his remark. In fact,
however, Weyl's statement was firmly based on a detailed
analysis of the discrete symmetries \(\mathbf{C}, \mathbf{P},
\mathbf{T}\) and \(\mathbf{CPT}\). Coleman and Korte (2001)
have shown in detail that Weyl's treatment of these symmetries
is the *same* as that used today except for the fact that the
symmetry \(\mathbf{T}\) is treated by Weyl as linear and unitary,
rather than as antilinear and antiunitary. Weyl had presented in 1931
a complete analysis, in the context of the quantized Maxwell-Dirac
field equations, of the discrete symmetries that are now called
\(\mathbf{C}, \mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\). His
transformations \(\mathbf{C}\) and \(\mathbf{P}\) are the same as
those used today. His transformations \(\mathbf{T}\) and
\(\mathbf{CPT}\) are also very close to those used today except that
Weyl's transformations were linear and unitary rather than
antilinear and antiunitary. Moreover, Weyl drew two very
important conclusions from his analysis of these discrete
symmetries. First, Weyl announced that the important question of the
arrow of time had been solved because the field equations were not
invariant under his time-reversal transformation
\(\mathbf{T}\). Second, Weyl pointed out that the invariance of the
field equations under his charge-conjugation transformation
\(\mathbf{C}\) implied that the mass of the
'anti-electron' is necessarily the same as that of the
electron; moreover, Weyl's result is the primary reason that
Dirac (1931, 61) abandoned the assignment of the proton to the role of
the anti-electron. Many years later Dirac (1977, 145) recalled:
>
>
> Well, what was I to do with these holes? The best I could think of
> was that maybe the mass was not the same as the mass of the electron.
> After all, my primitive theory did ignore the Coulomb forces between
> the electrons. I did not know how to bring those into the picture,
> and it could be that in some obscure way these Coulomb forces would
> give rise to a difference in the masses.
>
>
>
>
> Of course, it is very hard to understand how this difference could be
> so big. We wanted the mass of the proton to be nearly 2000 times the
> mass of the electron, an enormous difference, and it was very hard to
> understand how it could be connected with just a sort of perturbation
> effect coming from Coulomb forces between the electrons.
>
>
>
>
> However, I did not want to abandon my theory altogether, and so I put
> it forward as a theory of electrons and protons. Of course I was very
> soon attacked on this question of the holes having different masses
> from the original electrons. I think the most definite attack came
> from Weyl, who pointed out that mathematically the holes would have
> to have the same mass as the electrons, and that came to be the
> accepted view.
>
>
>
At another place Dirac (1971, 52-55) remarks:
>
>
> But still, I thought there might be something in the basic idea and
> so I published it as a theory of electrons and protons, and left it
> quite unexplained how the protons could have such a different mass
> from the electrons.
>
>
>
>
> This idea was seized upon by Herman [sic] Weyl. He said boldly that
> the holes had to have the same mass as the electrons. Now Weyl was a
> mathematician. He was not a physicist at all. He was just concerned
> with the mathematical consequences of an idea, working out what can
> be deduced from the various symmetries. And this mathematical
> approach led directly to the conclusion that the holes would have to
> have the same mass as the electrons. Weyl just published a blunt
> statement that the holes must have the same mass as the electrons and
> did not make any comments on the physical implications of this
> assertion. Perhaps he did not really care what the physical
> implications were. He was just concerned with achieving consistent
> mathematics.
>
>
>
Dirac's characterization of Weyl's unconcern for physics seems unfair
in light of Weyl's own statement in the preface of the second edition
of his book, cited earlier, where he expresses the fear "that
in the context of this problem, the clouds are rolling together to
form a new, serious crisis in quantum physics"; Weyl did care
about the physics.
Weyl's analysis did have a significant impact on the development of
the Maxwell-Dirac theory; however, as Coleman and Korte (2001)
have argued, Weyl's early analysis of the transformations \(\mathbf{C},
\mathbf{P}, \mathbf{T}\) and \(\mathbf{CPT}\) was, for the most part, lost to
subsequent researchers and had to be essentially re-invented.
However, it should be noted in this context that Schwinger (1988,
107-129) was greatly influenced by Weyl's book. Schwinger makes
particular reference to Weyl's work on the discrete symmetries and
says that this work "... was the starting point of my own
considerations concerning the connection between spin and statistics,
which culminated in what is now referred to as the TCP--or some
permutation thereof--theorem".
#### 4.5.7 Weyl's Philosophical Views about Quantum Mechanics
Weyl analyzed the foundations of both the general theory of
relativity and the theory of quantum mechanics. For both theories, he
provided a coherent exposition of the mathematical structure of the
theory, elegant characterizations of the entities and laws postulated
by the theory and a lucid account of how these postulates explain the
most significant, more directly observable, lower-level phenomena. In
both cases, he was also concerned with the constructive aspects of
the theory, that is, with the extent to which the higher-level
postulates of the theory are necessary.
There is no doubt that with regard to the general theory of
relativity, Weyl held strong philosophical views. Some of these views
are couched in a phenomenological language and reveal Husserl's
influence on Weyl. Ryckman's (2005) study *The Reign of
Relativity* provides an extensive account of Weyl's orientation to Husserl's
phenomenology. On the other hand, many of Weyl's philosophical views
are couched in an unequivocal empiricist-realist language. For
example, Weyl rejected Poincare's geometrical conventionalism
and forcefully argued that the spacetime metric field is physically
real, that it is a physically real structural field (Strukturfeld),
which is determined by the physically real causal (conformal)
structure and the physically real inertial (projective) structure or
guiding field (Fuhrungsfeld) of spacetime. He was not deterred
in putting forward such ontological claims about the metric structure
of spacetime despite the fact that a complete epistemologically
satisfactory solution to the measurement problem for the spacetime
metric field was not then available. In the same manner, Weyl
forcefully advanced a field-body-relationalist ontology of spacetime
structure. He argued that a Leibnizian or Einstein-Machian form of
relationalism that is based on a pure body ontology, is not tenable,
indeed is incoherent within the context of general relativity, and he
presented a *reductio* argument, the plasticine example, to
underscore the necessity of the existence of a physically real
guiding field in addition to the existence of bodies.
However, in contrast to Weyl's many philosophical views with regard
to spacetime theories, Weyl's philosophical positions regarding the
status of quantum mechanics, while not absent, are not as
transparent. There are passages, such as the following (Weyl, 1931b,
2 edn, 44), which argue for the reality of photons.
>
>
> The intensity of the monochromatic radiation that is used to generate
> the photoelectric effect has no influence on the speed with which the
> electrons are ejected from the metal but affects only the frequency
> of this process. Even with intensities so weak that on the classical
> theory hours would be required before the electromagnetic energy
> passing through a given atom would attain to an amount equal to that
> of a photon, the effect begins immediately, the points at which it
> occurs being distributed irregularly over the entire metal plate.
> This fact is a proof of the existence of light quanta that is no less
> meaningful than the flashes of light on the scintillation screen are
> for the corpuscular-discontinuous nature of \(\alpha\)-rays.
>
>
>
On the other hand, Weyl's (1931b) discussion of the problem of
'directional quantization' in the old quantum theory and
of the way that this problem is 'resolved' in the new
quantum theory appears to have a distinctly instrumentalist flavour.
In a number of places, he describes the essence of the dilemma posed
by quantum mechanics with a dispassionate precision. Consider, for
example, the following (Weyl, 1931b, 2 edn, 67):
>
> Natural science has a constructive character. The phenomena with which
> it deals are not independent manifestations or qualities which can be
> read off from nature, but can only be determined by means of an
> indirect method, through interaction with other bodies. Their implicit
> definition is bound up with definite natural laws which underlie the
> interactions. Consider, for example, the introduction of the Galilean
> concept of mass which essentially comes down to the following indirect
> definition: "Each body possesses a momentum, that is, a vector
> \(m\overline{v}\) which has the same direction as its velocity
> \(\overline{v}\)--the scalar factor \(m\) is called its mass. The
> law of momentum holds, according to which the sum of the momenta
> before a reaction between several bodies is the same as the sum of
> their momenta after the reaction." By applying this law to the
> observed collision phenomena, one obtains data for the determination
> of the relative masses. The scientific consensus was, however, that
> such *constructive phenomena can nevertheless be attributed to the
> things themselves* even if the manipulations, which alone can lead
> to their recognition, are not being carried out. *In Quantum Theory
> we encounter a fundamental limitation to this epistemological position
> of the constructive natural science.*
>
>
>
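The indirect definition Weyl quotes can be made concrete with a toy calculation (my own illustrative example of a one-dimensional collision; the numbers and function names are not Weyl's). Momentum conservation determines only the *ratio* of the masses:

```python
# Galilean mass as an indirect definition: momentum conservation,
# m1*v1 + m2*v2 = m1*v1p + m2*v2p, determines only the ratio m1/m2.

def mass_ratio(v1, v2, v1p, v2p):
    """m1/m2 from velocities before (v1, v2) and after (v1p, v2p),
    via m1*(v1p - v1) = -m2*(v2p - v2)."""
    return -(v2p - v2) / (v1p - v1)

# Elastic collision of bodies with m1 = 2, m2 = 1:
# v1 = 1, v2 = -1 gives v1p = -1/3, v2p = 5/3
# (standard elastic-collision formulas).
r = mass_ratio(1.0, -1.0, -1/3, 5/3)  # recovers m1/m2 = 2
```

Rescaling both masses by a common factor leaves every observable velocity unchanged, which is the sense in which mass is implicitly defined through the law governing the interaction rather than read off from nature.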
It is difficult for many people to accept quantum mechanics as an
ultimate theory without at the same time giving up some form of
realism and adopting something like an instrumentalist view of the
theory. It is clear that Weyl was fully aware of this state of
affairs, and yet in all of his published work, he refrained from
making any bold statements of his views on the fundamental questions
about quantum reality. He did not vigorously participate in the
debate between Einstein and Schrodinger and the Copenhagen
School nor did he offer decisive views concerning, for example, the
*Einstein, Podolsky, Rosen* thought experiment or
*Schrodinger's Cat*. Since Weyl held
strong philosophical views within the context of the general theory
of relativity, it is therefore only natural that one might have
expected him to take a stand with respect to Schrodinger's cat
and whether or not one should be fully satisfied with a theory
according to which the cat is neither alive nor dead but is in a
superposition of these two states.
The reason for Weyl's seeming reticence concerning the
ontological/epistemological questions about quantum reality was
already hinted at in note 5 of §2, where it was suggested that
Weyl was not especially bothered by the counterintuitive nature of
quantum mechanics because he held the view that "objective
reality cannot be grasped directly, but only through the use of
symbols". Although Weyl (1948, 1949a, 1953) did express his
philosophical views about quantum theory, he did so cautiously. Weyl
(1949a, 263) summarizes some of the features of quantum mechanics
that he considered of "paramount philosophical
significance": the measurement problem, the incompatibility of
quantum physics with classical logic, quantum causality, the
non-local nature of quantum mechanics, the *Leibniz-Pauli
Exclusion*
*Principle*[107],
and the irreducible probabilistic nature of quantum mechanics. At the
end of the summary Weyl remarks:
>
>
> It must be admitted that the meaning of quantum physics, in spite of
> all its achievements, is not yet clarified as thoroughly as, for
> instance, the ideas underlying relativity theory. The relation of
> reality and observation is the central problem. We seem to need a
> deeper epistemological analysis of what constitutes an experiment, a
> measurement, and what sort of language is used to communicate its
> result.
>
>
>
#### 4.5.8 Science as Symbolic Construction
According to Weyl (1948, 295), both the theory of general relativity
and quantum mechanics force upon us the realization that
"instead of a real spatio-temporal material being what remains
for us is only a construction in pure symbols". If it is
necessary, Weyl (1948, 302) says, that our scientific grasp of an
objective world must not depend on sense qualities, because of their
inherent subjective nature, then it is for the same reason necessary
to eliminate space and time. And Descartes gave us the means to do
this with his discovery of analytic geometry.
As Weyl (1953, 529) observes, when Newton explained the experienced
world through the movements of solid particles in space, he rejected
sense qualities for the construction of the objective world, but he
held on to, and used, an intuitively given objective space for the
construction of a real world that lies behind the appearances. It was
Leibniz who recognized the phenomenal character (Phänomenalität)
of space and time as consisting in the mere ordering of phenomena;
however, space and time themselves do not have an independent
reality.
It is the freely created pure numbers, that is, pure symbols,
according to Weyl, which serve as coordinates, and which provide the
material with which to symbolically construct the objective world. In
symbolically constructing the objective world we are forced to
replace space and time by a purely arithmetical construct. Instead
of spacetime points, \(n\)-tuples of pure numbers corresponding to
a given coordinate system are used. Weyl (1948, 303) says:
>
>
> ... the laws of physics are viewed as arithmetic laws between
> numerical values of variable magnitudes, in which spatial points and
> moments of time are represented through their numerical coordinates.
> Magnitudes such as the temperature of a body or the field strength of
> an electric field, which have at each spacetime point a definite
> value, appear as functions of four variables, the spacetime
> coordinates \(x, y, z, t\).
>
>
>
In systematic theorizing we construct a *formal scaffold* that
consists of mere symbols, according to Weyl (1948, 311), without
explaining initially what the symbols for mass, charge, field
strength, etc., mean; and only toward the end do we describe how the
symbolic structure connects directly with experience.
>
>
> It is certain, that on the symbolic side, not space and time but four
> independent variables \(x, y, z, t\) appear;
> one speaks of space, as one does of sounds and colours, only on the
> side of conscious experience. A monochromatic light signal ... has
> now become a mathematical formula in which a certain symbol \(F\),
> called electromagnetic field strength, is expressed as a pure
> arithmetically constructed function of four other symbols
> \(x, y, z, t\), called spacetime coordinates.
>
>
>
At another place Weyl (1949a, 113) says:
>
>
> Intuitive space and intuitive time are thus hardly the adequate
> medium in which physics is to construct the external world. No less
> than the sense qualities must the intuitions of space and time be
> relinquished as its building material; they must be replaced by a
> four-dimensional continuum in the abstract arithmetical sense.
>
>
>
Weyl's point is that while space and time exist within the realm of
conscious experience, or, according to Kant, as *a priori* forms underlying all of our conscious experiences, they are
unsuited as elements with which to construct the objective world and
must be replaced by means of a purely arithmetical symbolic
representation. All that we are left with, according to Weyl, is
symbolic construction. If this still needed any confirmation, Weyl
(1948, 313) says, it was provided by the theory of relativity and
quantum
theory.[108]
For ease of reference we repeat a citation of Weyl (1988, 4-5)
in §4.3.1:
>
>
> Coordinates are introduced on the Mf [manifold] in the most direct
> way through the mapping onto the number space, in such a way, that
> all coordinates, which arise through one-to-one continuous
> transformations, are equally possible. *With this the coordinate
> concept breaks loose from all special constructions to which
> it was bound earlier in geometry. In the language of relativity this
> means: The coordinates are not* measured, *their
> values are not read off from real measuring rods which react
> in a definite way to physical fields and the metrical structure,
> rather they are a priori placed in the world arbitrarily, in
> order to characterize those physical fields including the metric
> structure numerically.* The metric structure becomes through
> this, so to speak, freed from space; it becomes an existing field
> within the remaining structure-less space. Through this, space as
> form of appearance contrasts more clearly with its real content: The
> content is measured after the form is arbitrarily related to
> coordinates.[109]
>
>
>
>
The last two sentences in the above quote suggest that (a) Weyl
embraces something close to Kant's position, according to which space
and time are "*a priori* forms of appearances", or
that (b) Weyl adheres to a position called *spacetime
substantivalism*, according to which, in addition to body and
fields and their relations, there also exists a
'container', the spacetime manifold, and this manifold,
its points and the manifold differential-topological relations are
physically real. However, this interpretation would contradict Weyl's
basic thesis that in the symbolic construction of the objective world
we are left with nothing but symbolic arithmetic functional
relations. Weyl's phrases do not denote either a physically real
container or something like Kant's *a priori* form of
intuition. They merely denote a *conceptual* or *formal
scaffolding*, a *logical space*, as it were, whose points
are represented by purely *formal* coordinates (\(n\)-tuples
of pure numbers). It is such a formal space which is employed by the
theorist in the initial stages of constructing an objective world. To
emphasize, in modelling the objective world the theorist begins by
constructing a *formal scaffold* which consists of mere
symbols and formal coordinates, without explaining initially what the
symbols for mass, charge, field strength, etc., mean; only toward the
end does the theorist describe how the symbolic structure connects
directly with experience (Weyl 1948, 311).
The four-dimensional space-time continuum must be replaced by a
four-dimensional coordinate space \(\mathbb{R}^{4}\). However, the
sheer arbitrariness with which we assign coordinates does not affect
the objective relations and features of the world itself. To the
contrary, it is only *relative* to a symbolic construction or
modelling by means of an assignment of coordinates that the state of
the world, its relations and properties, can be *objectively*
determined by means of distinct, reproducible symbols. While our
immediate experiences are *subjective* and *absolute*,
our symbolic construction of the *objective* world is of
necessity *relative*. Weyl (1949a, 116) says:
>
>
> Whoever desires the absolute must take the subjectivity and
> egocentricity into the bargain; whoever feels drawn toward the
> objective faces the problem of relativity.
>
>
>
Weyl (1949a, 75) notes, "The objectification, by elimination of
the ego and its immediate life of intuition, does not fully succeed,
and the coordinate system remains as the necessary residue of the
ego-extinction." However, this residue of ego involvement is
subsequently rendered harmless through the principle of
*invariance*. The transition from one admissible coordinate
system to another can be mathematically described, and the natural
laws and *measurable quantities* must be
*invariant* under such transformations. This, Weyl (1948, 336)
says, constitutes the *general principle of
relativity*. Weyl (1949a, 104) says:
>
>
> ... Only such relations will have objective meaning as are
> independent of the mapping chosen and therefore remain invariant
> under deformations of the map. Such a relation is, for instance, the
> intersection of two world lines. If we wish to characterize a special
> mapping or a special class of mappings, we must do so in terms of the
> real physical events and of the structure revealed in them. That is
> the content of the *postulate of general relativity*.
> According to the *special theory of relativity*, it
> is possible in particular to construct a map of the world such that
> (1) the world line of each mass point which is subject to no external
> forces appears as a straight line, and (2) the light cone issuing
> from an arbitrary world point is represented by a circular cone with
> vertical axis and a vertex angle of 90°. In this
> theory the inertial and causal structure and hence also the metrical
> structure of the world have the character of rigidity, they are
> absolutely fixed once and for all. It is impossible objectively,
> without resorting to individual exhibition, to make a narrower
> selection from among the 'normal mappings' satisfying the
> above conditions (1) and (2).
>
>
>
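A standard special-relativistic illustration of the invariance requirement Weyl describes (a textbook example, not drawn from Weyl's own text) is the spacetime interval between two events, whose numerical value is the same in every inertial coordinate system:

```latex
% Illustrative only: the Minkowski interval between two events,
% computed in two inertial frames (x, y, z, t) and (x', y', z', t')
% related by a Lorentz transformation. Its frame-independence is a
% special case of the requirement that measurable quantities be
% invariant under transitions between admissible coordinate systems.
\[
  s^{2} \;=\; c^{2}\,\Delta t^{2} - \Delta x^{2} - \Delta y^{2} - \Delta z^{2}
        \;=\; c^{2}\,\Delta t'^{2} - \Delta x'^{2} - \Delta y'^{2} - \Delta z'^{2}
\]
```

Because the interval is built purely from coordinate differences, it exemplifies how an objective relation can be expressed in "distinct, reproducible symbols" without privileging any one mapping of the world onto number space.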
Weyl (1949a, 115) provides an illustration, which shows how a
measurement by observer \(B\) of the angular distance \(\delta\)
between two stars \(\Sigma\) and \(\Sigma^{\*}\) can be
constructed in the four-dimensional number space, and can be
expressed as an
*invariant*.[110]
![figure](fig15.png)
Figure 15:
Measurement of the angular distance \(\delta\) by an observer \(B\)
between two stars
In figure 15 the stars and observer are represented by their world
lines, and the past light cone \(K\) issuing from the observation
event \(O\) intersects the world lines of the stars \(\Sigma\) and
\(\Sigma^{\*}\) in \(E\) and
\(E^{\*}\) respectively. The light rays emitted at
\(E\) and \(E^{\*}\), which arrive at the
observation event \(O\), are null geodesics lying on the past
light cone and are respectively denoted by \(\Lambda\) and
\(\Lambda^{\*}\). This construction of the numerical
quantity of the angle \(\delta\) observed by \(B\) at \(O\), which
is describable in the form of purely arithmetical relations, is
invariant under arbitrary coordinate transformations and constitutes
an objective fact of the
world.[111]
On the other hand, the angle between the two stars determines the
*objectively* indescribable *subjective* experience of
the observer. Moreover, Weyl says, "there is no difference in
our experiences to which there does not correspond a difference in
the underlying objective situation." And that difference is
itself invariant under arbitrary coordinate transformations. In other
words, an observer's subjective experiences *supervene* on the
invariant relationships and structures of a symbolically constructed
objective world.
Perhaps no statement captures the contrast between the
objective-symbolic and the subjective-intuitive more vividly than
Weyl's famous statement:
>
>
> The objective world simply *is*, it does not *happen*.
> Only to the gaze of my consciousness, crawling upward along the life
> line of my body, does a section of this world come to life as a
> fleeting image in space which continuously changes in
> time.[112]
>
>
>
## 1. Biography
Whewell was born in 1794, the eldest child of a master-carpenter in
Lancaster. The headmaster of his local grammar school, a parish
priest, recognized Whewell's intellectual abilities and
persuaded his father to allow him to attend the Heversham Grammar
School in Westmorland, some twelve miles to the north, where he would
be able to qualify for a closed exhibition to Trinity College,
Cambridge. In the 19th century and earlier, these "closed
exhibitions" or scholarships were set aside for the children of
working class parents, to allow for some social mobility. Whewell
studied at Heversham Grammar for two years, and received private
coaching in mathematics. Although he did win the exhibition, it did not
provide full resources for a boy of his family's means to attend
Cambridge; so money had to be raised in a public subscription to
supplement the scholarship money.
He thus came up to Trinity in 1812 as a "sub-sizar"
(scholarship student). In 1814 he won the Chancellor's prize for
his epic poem "Boadicea," in this way following in the
footsteps of his mother, Elizabeth Whewell, who had published poems in
the local papers. Yet he did not neglect the mathematical side of his
training; in 1816 he proved his mathematical prowess by placing as
both second Wrangler and second Smith's Prize man. The following
year he won a college fellowship. He was elected to the Royal Society
in 1820, and ordained a priest (as required for Trinity Fellows) in
1825. He took up the Chair in Mineralogy in 1828, and resigned it in
1832. In 1838 Whewell became Professor of Moral Philosophy. Almost
immediately after his marriage to Cordelia Marshall on 12 October
1841, he was named Master of Trinity College upon the recommendation
of the Prime Minister Robert Peel. He was Vice-Chancellor of the
University in 1842 and again in 1855. In 1848 he played a large role
in establishing the Natural and Moral Sciences Triposes at the
University. His first wife died in 1855, and he later married Lady
Evalina Affleck, the widowed sister of his friend Robert Ellis. Lady
Affleck died in 1865. Whewell had no children. He died, after being
thrown from his horse, on 6 March 1866. (More details about
Whewell's life and times can be found in Snyder 2011.)
## 2. Philosophy of Science: Induction
According to Whewell, all knowledge has both an ideal, or subjective
dimension, as well as an objective dimension. He called this the
"fundamental antithesis" of knowledge. Whewell explained
that "in every act of knowledge ... there are two opposite
elements, which we may call Ideas and Perceptions" (1860a, 307).
He criticized Kant and the German Idealists for their exclusive focus
on the ideal or subjective element, and Locke and the
"Sensationalist School" for their exclusive focus on the
empirical, objective element. Like Francis Bacon, Whewell claimed to
be seeking a "middle way" between pure rationalism and
ultra-empiricism. Whewell believed that gaining knowledge requires
attention to both ideal and empirical elements, to ideas as well as
sensations. These ideas, which he called "Fundamental
Ideas," are "supplied by the mind itself"--they
are not (as Mill and Herschel protested) merely received from our
observations of the world. Whewell explained that the Fundamental
Ideas are "not a consequence of experience, but a result of the
particular constitution and activity of the mind, which is independent
of all experience in its origin, though constantly combined with
experience in its exercise" (1858a, I, 91). Consequently, the
mind is an active participant in our attempts to gain knowledge of the
world, not merely a passive recipient of sense data. Ideas such as
Space, Time, Cause, and Resemblance provide a structure or form for
the multitude of sensations we experience. The Ideas provide a
structure by expressing the general relations that exist between our
sensations (1847, I, 25). Thus, the Idea of Space allows us to
apprehend objects as having form, magnitude, and position. Whewell
held, then, that observation is "idea-laden;" all
observation, he noted, involves "unconscious inference"
using the Fundamental Ideas (see 1858a, I, 46). Each science has a
Particular Fundamental idea which is needed to organize the facts with
which that science is concerned; thus, Space is the Fundamental Idea
of geometry, Cause the Fundamental Idea of mechanics, and Substance
the Fundamental Idea of chemistry. Moreover, Whewell explained that
each Fundamental Idea has certain "conceptions" included
within it; these conceptions are "special modifications"
of the Idea applied to particular types of circumstances (1858b, 187).
For example, the conception of force is a modification of the Idea of
Cause, applied to the particular case of motion (see 1858a, I,
184-5 and 236).
Thus far, this discussion of the Fundamental Ideas may suggest that
they are similar to Kant's forms of intuition, and indeed there
are some similarities. Because of this, some commentators have argued
that Whewell's epistemology is a type of Kantianism (see, for
example, Butts 1973; Buchdahl 1991). However, this interpretation
ignores several crucial differences between the two views. Whewell did
not follow Kant in drawing a distinction between
"percepts," or forms of intuition, such as Space and Time,
and the categories, or forms of thought, in which Kant included the
concepts of Cause and Substance. Moreover, Whewell included as
Fundamental Ideas many ideas which function not as conditions of
experience but as conditions for having knowledge within their
respective sciences: although it is certainly possible to have
experience of the world without having a distinct idea of, say,
Chemical Affinity, we could not have any knowledge of certain chemical
processes without it. Unlike Kant, Whewell did not attempt to give an
exhaustive list of these Fundamental Ideas; indeed, he believed that
there are others which will emerge in the course of the development of
science. Moreover, and perhaps most importantly for his philosophy of
science, Whewell rejected Kant's claim that we can only have
knowledge of our "categorized experience." The Fundamental
Ideas, on Whewell's view, accurately represent objective
features of the world, independent of the processes of the mind, and
we can use these Ideas in order to have knowledge of these objective
features. Indeed, Whewell criticized Kant for viewing external reality
as a "dim and unknown region" (see 1860a, 312). Further,
Whewell's justification for the presence of these concepts in
our minds takes a very different form than Kant's transcendental
argument. For Kant, the categories are justified because they make
experience possible. For Whewell, though the categories *do*
make experience (of certain kinds) possible, the Ideas are justified
by their origin in the mind of a divine creator (see especially his
discussion of this in his 1860a). And finally, the type of necessity
which Whewell claimed is derived from the Ideas is very different from
Kant's notion of the synthetic *a priori*. (For a nuanced
view of the relation between the views of Kant and Whewell, see
Ducheyne 2011.) I return to these last two points in the section on
Necessary Truth below.
I turn now to a discussion of the theory of induction Whewell
developed with his antithetical epistemology. From his earliest
thoughts about scientific method, Whewell was interested in developing
an inductive theory. At their philosophical breakfasts at Cambridge,
Whewell, Babbage, Herschel and Jones discussed how science had
stagnated since the heady days of the Scientific Revolution in the
17th century. It was time for a new revolution, which they pledged to
bring about. The cornerstone of this new revolution was to be the
promotion of a Baconian-type of induction, and all four men began
their careers endorsing an inductive scientific method against the
deductive method then being advanced by political economist David
Ricardo, logician Richard Whately, and their followers (see Snyder
2011). (Although the four friends agreed about the importance of an
inductive scientific method, Herschel and Jones would later take issue
with some aspects of Whewell's version, notably his antithetical
epistemology.)
Whewell's first explicit, lengthy discussion of induction is
found in his *Philosophy of the Inductive Sciences, founded upon
their History*, which was originally published in 1840 (a second,
enlarged edition appeared in 1847, and the third edition appeared as
three separate works published between 1858 and 1860). He called his
induction "Discoverers' Induction" and explained
that it is used to discover both phenomenal and causal laws. Whewell
considered himself to be a follower of Bacon, and claimed to be
"renovating" Bacon's inductive method; thus one
volume of the third edition of the *Philosophy* is entitled
*Novum Organon Renovatum*. Whewell followed Bacon in rejecting
the standard, overly-narrow notion of induction that holds induction
to be merely simple enumeration of instances. Rather, Whewell
explained that, in induction, "there is a New Element added to
the combination [of instances] by the very act of thought by which
they were combined" (1847, II, 48). This "act of
thought" is a process Whewell called "colligation."
Colligation, according to Whewell, is the mental operation of bringing
together a number of empirical facts by "superinducing"
upon them a conception which unites the facts and renders them capable
of being expressed by a general law. The conception thus provides the
"true bond of Unity by which the phenomena are held
together" (1847, II, 46), by providing a property shared by the
known members of a class (in the case of causal laws, the colligating
property is that of sharing the same cause).
Thus the known points of the Martian orbit were colligated by Kepler
using the conception of an elliptical curve. Often new discoveries are
made, Whewell pointed out, not when new facts are discovered but when
the appropriate conception is applied to the facts. In the case of
Kepler's discovery, the observed points of the orbit were known
to Tycho Brahe, but only when Kepler applied the ellipse conception
was the true path of the orbit discovered. Kepler was the first one to
apply this conception to an orbital path in part because he had, in
his mind, a very clear notion of the conception of an ellipse. This is
important because the fundamental ideas and conceptions are provided
by our minds, but they cannot be used in their innate form. Whewell
explained that "the Ideas, the germs of them at least, were in
the human mind before [experience]; but by the progress of scientific
thought they are unfolded into clearness and distinctness"
(1860a, 373). Whewell referred to this "unfolding" of
ideas and conceptions as the "explication of conceptions."
Explication is a necessary precondition to discovery, and it consists
in a partly empirical, partly rational process. Scientists first try
to clarify and make explicit a conception in their minds, then attempt
to apply it to the facts they have precisely examined, to determine
whether the conception can colligate the facts into a law. If not, the
scientist uses this experience to attempt a further refinement of the
conception. Whewell claimed that a large part of the history of
science is the "history of scientific ideas," that is, the
history of their explication and subsequent use as colligating
concepts. Thus, in the case of Kepler's use of the ellipse
conception, Whewell noted that "to supply this conception,
required a special preparation, and a special activity in the mind of
the discoverer. ... To discover such a connection, the mind must
be conversant with certain relations of space, and with certain kinds
of figures" (1849, 28-9).
Once conceptions have been explicated, it is possible to choose the
appropriate conception with which to colligate phenomena. But how is
the appropriate conception chosen? According to Whewell, it is not a
matter of guesswork. Nor, importantly, is it merely a matter of
observation. Whewell explained that "there is a special process
in the mind, in addition to the mere observation of facts, which is
necessary" (1849, 40). This "special process in the
mind" is a process of inference. "We infer more than we
see," Whewell claimed (1858a, I, 46). Typically, finding the
appropriate conception with which to colligate a class of phenomena
requires a series of inferences, thus Whewell noted that
discoverers' induction is a process involving a "train of
researches" (1857/1873, I, 297). He allowed any type of
inference in the colligation, including enumerative, eliminative and
analogical. Thus Kepler in his *Astronomia Nova* (1609) can be
seen as using various forms of inference to reach the ellipse
conception (see Snyder 1997a). When Augustus De Morgan complained, in
his 1847 logic text, about certain writers using the term
"induction" as including "the use of the whole box
of [logical] tools," he was undoubtedly referring to his teacher
and friend Whewell (see Snyder 2008).
After the known members of a class are colligated with the use of a
conception, the second step of Whewell's discoverers'
induction occurs: namely, the generalization of the shared property
over the complete class, including its unknown members. Often, as
Whewell admitted, this is a trivially simple procedure. Once Kepler
supplied the conception of an ellipse to the observed members of the
class of Mars' positions, he generalized it to all members of
the class, including those which were unknown (unobserved), to reach
the conclusion that "all the points of Mars' orbit lie on
an ellipse with the sun at one focus." He then performed a
further generalization to reach his first law of planetary motion:
"the orbits of all the planets lie on ellipses with the sun at
one focus."
I've mentioned that Whewell thought of himself as renovating
Bacon's inductive philosophy. His inductivism does share
numerous features with Bacon's method of interpreting nature:
for instance the claims that induction must involve more than merely
simple enumeration of instances, that science must proceed by
successive steps of generalization, that inductive science can reach
unobservables (for Bacon, the "forms," for Whewell,
unobservable entities such as light waves or properties such as
elliptical orbits or gravitational forces). (For more on the relation
between Whewell and Bacon see Snyder 2006; McCaskey 2014.) Yet,
surprisingly, the received view of Whewell's methodology in the
20th century tended to describe Whewell as an anti-inductivist in the
Popperian mold (see, for example, Ruse 1975; Niiniluoto 1977; Laudan
1980; Butts 1987; Buchdahl 1991). That is, it was claimed that Whewell
endorsed a "conjectures and refutations" view of
scientific discovery. However, it is clear from the above discussion
that his view of discoverers' induction does not resemble the
view asserting that hypotheses can be and are typically arrived at by
mere guesswork. Indeed, Whewell explicitly rejected the
hypothetico-deductive claim that hypotheses discovered by non-rational
guesswork can be confirmed by consequentialist testing. For example,
in his review of his friend Herschel's *Preliminary Discourse
on the Study of Natural Philosophy*, Whewell argued, against
Herschel, that verification is not possible when a hypothesis has been
formed non-inductively (1831, 400-1). Nearly thirty years later,
in the last edition of the *Philosophy*, Whewell referred to
the belief that "the discovery of laws and causes of phenomena
is a loose hap-hazard sort of guessing," and claimed that this
type of view "appears to me to be a misapprehension of the whole
nature of science" (1860a, 274). In other mature works he noted
that discoveries are made "not by any capricious conjecture of
arbitrary selection" (1858a, I, 29) and explained that new
hypotheses are properly "collected from the facts" (1849,
17). In fact, Whewell was criticized by his contemporary David
Brewster for *not* agreeing that discoveries, including
Newton's discovery of the universal gravitation law, were
typically made by accident.
Why has Whewell been misinterpreted by so many modern commentators?
One reason has to do with the error of reading certain terms used by
Whewell in the 19th century as if they held the same meaning they have
in the 20th and 21st. Thus, since Whewell used the terms
"conjectures" and "guesses," some writers
believe that he shares Popper's methodology. Whewell made
mention, for instance, of the "happy guesses" made by
scientists (1858b, 64) and claimed that "advances in
knowledge" often follow "the previous exercise of some
boldness and license in guessing" (1847, II, 55). But Whewell
generally used these terms to connote a conclusion that is not (yet)
conclusively confirmed. The *Oxford English Dictionary* tells
us that prior to the 20th century the term "conjecture"
was used to connote not a hypothesis reached by non-rational means,
but rather one which is "unverified," or which is "a
conclusion as to what is likely or probable" (as opposed to the
results of demonstration). The term was used this way by Bacon,
Kepler, Newton, and Dugald Stewart, writers whose work was well-known
to Whewell. In other places where Whewell used the term
"conjecture" he suggests that what appears to be the
result of guesswork is actually what we might call an "educated
guess," i.e., a conclusion drawn by (weak) inference. Whewell
described Kepler's discovery, which seems so "capricious
and fanciful" as actually being "regulated" by his
"clear scientific ideas" (1857/1873, I, 291-2).
Finally, Whewell's use of the terminology of guessing sometimes
occurs in the context of a distinction he draws between the generation
of a number of possible conceptions, and the selection of one to
superinduce upon the facts. Before the appropriate conception is
found, the scientist must be able to call up in his mind a number of
possible ones (see 1858b, 79). Whewell noted that this calling up of
many possibilities "is, in some measure, a process of
conjecture." However, selecting the appropriate conception with
which to colligate the data is not conjectural (1858b, 78). Thus
Whewell claimed that the selection of the conception is often
"*preluded* by guesses" (1858b, xix); he does not,
that is, claim that the selection *consists* in guesswork. When
inference is not used to select the appropriate conception, the
resulting theory is not an "induction," but rather a
"hasty and imperfect hypothesis." He drew such a
distinction between Copernicus' heliocentric theory, which he
called an induction, and the heliocentric system proposed by
Aristarchus in the third century BCE, which he referred to as a hasty
and imperfect hypothesis (1857/1873, I, 258).
Thus Whewell's philosophy of science cannot be described as the
hypothetico-deductive view. It is an inductive method; yet it clearly
differs from the more narrow inductivism of Mill. Whewell's view
of induction has the advantage over Mill's of allowing the
inference to unobservable properties and entities, and for this reason
is a more accurate view of how science works. (For more detailed
arguments against reading Whewell as a hypothetico-deductivist, see
Snyder 2006; Snyder 2008; McCaskey 2014).
## 3. Philosophy of Science: Confirmation
On Whewell's view, once a theory is invented by
discoverers' induction, it must pass a variety of tests before
it can be considered confirmed as an empirical truth. These tests are
prediction, consilience, and coherence (see 1858b, 83-96). These
are characterized by Whewell as, first, that "our hypotheses
ought to *fortel* [sic] phenomena which have not yet been
observed" (1858b, 86); second, that they should "explain
and determine cases of a *kind different* from those which were
contemplated in the formation" of those hypotheses (1858b, 88);
and third that hypotheses must "become more coherent" over
time (1858b, 91).
I start by discussing the criterion of prediction. Hypotheses ought to
foretell phenomena, "at least all phenomena of the same
kind," Whewell explained, because "our assent to the
hypothesis implies that it is held to be true of all particular
instances. That these cases belong to past or to future times, that
they have or have not already occurred, makes no difference in the
applicability of the rule to them. Because the rule prevails, it
includes all cases" (1858b, 86). Whewell's point here is
simply that since our hypotheses are in universal form, a true
hypothesis will cover all particular instances of the rule, including
past, present, and future cases. But he also makes the stronger claim
that successful predictions of unknown facts provide greater
confirmatory value than explanations of already-known facts. Thus he
held the historical claim that "new evidence" is more
valuable than "old evidence." He believed that "to
predict unknown facts found afterwards to be true is ... a
confirmation of a theory which in impressiveness and value goes beyond
any explanation of known facts" (1857/1873, II, 557). Whewell
claimed that the agreement of the prediction with what occurs (i.e.,
the fact that the prediction turns out to be correct), is
"nothing strange, if the theory be true, but quite
unaccountable, if it be not" (1860a, 273-4). For example,
if Newtonian theory were not true, he argued, the fact that from the
theory we could correctly predict the existence, location and mass of
a new planet, Neptune (as did happen in 1846), would be bewildering,
and indeed miraculous.
An even more valuable confirmation criterion, according to Whewell, is
that of "consilience." Whewell explained that "the
evidence in favour of our induction is of a much higher and more
forcible character when it enables us to explain and determine [i.e.,
predict] cases of a *kind different* from those which were
contemplated in the formation of our hypothesis. The instances in
which this has occurred, indeed, impress us with a conviction that
the truth of our hypothesis is certain" (1858b, 87-8).
Whewell called this type of evidence a "jumping together"
or "consilience" of inductions. An induction, which
results from the colligation of one class of facts, is found also to
colligate successfully facts belonging to another class.
Whewell's notion of consilience is thus related to his view of
natural classes of objects or events.
To understand this confirmation criterion, it may be helpful to
schematize the "jumping together" that occurred in the
case of Newton's law of universal gravitation, Whewell's
exemplary case of consilience. On Whewell's view, Newton used
the form of inference Whewell characterized as
"discoverers' induction" in order to reach his
universal gravitation law, the inverse-square law of attraction. Part
of this process is portrayed in book III of the *Principia*,
where Newton listed a number of "propositions." These
propositions are empirical laws that are inferred from certain
"phenomena" (which are described in the preceding section
of book III). The first such proposition or law is that "the
forces by which the circumjovial planets are continually drawn off
from rectilinear motions, and retained in their proper orbits, tend to
Jupiter's centre; and are inversely as the squares of the
distances of the places of those planets from that centre." The
result of another, separate induction from the phenomena of
"planetary motion" is that "the forces by which the
primary planets are continually drawn off from rectilinear motions,
and retained in their proper orbits, tend to the sun; and are
inversely as the squares of the distances of the places of those
planets from the sun's centre." Newton saw that these
laws, as well as other results of a number of different inductions,
coincided in postulating the existence of an inverse-square attractive
force as the cause of various classes of phenomena. According to
Whewell, Newton saw that these inductions "leap to the same
point;" i.e., to the same law. Newton was then able to bring
together inductively (or "colligate") these laws, and
facts of other kinds of events (e.g., the class of events known as
"falling bodies"), into a new, more general law, namely
the universal gravitation law: "All bodies attract each other
with a force of gravity which is inverse as the squares of the
distances." By seeing that an inverse-square attractive force
provided a cause for different classes of events--for satellite
motion, planetary motion, and falling bodies--Newton was able to
perform a more general induction, to his universal law.
What Newton found was that these different kinds of
phenomena--including circumjovial orbits, planetary orbits, as
well as falling bodies--share an essential property, namely the
same cause. What Newton did, in effect, was to subsume these
individual "event kinds" into a more general natural kind
comprised of sub-kinds sharing a kind essence, namely being caused by
an inverse-square attractive force. Consilience of event kinds
therefore results in *causal unification*. More specifically,
it results in unification of natural kind categories based on a shared
cause. Phenomena that constitute different event kinds, such as
"planetary motion," "tidal activity," and
"falling bodies," were found by Newton to be members of a
unified, more general kind, "phenomena caused to occur by an
inverse-square attractive force of gravity" (or,
"gravitational phenomena"). In such cases, according to
Whewell, we learn that we have found a "vera causa," or a
"true cause," i.e., a cause that really exists in nature,
and whose effects are members of the same natural kind (see 1860a, p.
191). Moreover, by finding a cause shared by phenomena in different
sub-kinds, we are able to colligate all the facts about these kinds
into a more general causal law. Whewell claimed that "when the
theory, by the concurrences of two indications ... has included a
new range of phenomena, we have, in fact, a new induction of a more
general kind, to which the inductions formerly obtained are
subordinate, as particular cases to a general rule"
(1858b, 96). He noted that consilience is the means by which we effect
the successive generalization that constitutes the advancement of
science (1847, II, 74). (For early discussions of consilience see
Laudan 1971 and, especially, Hesse 1968 and Hesse 1971; for more on
consilience and its relation to realism and natural kinds, see Snyder
2005 and Snyder 2006. On consilience and classification, see Quinn,
2017. For more on Whewell and scientific kinds, see Cowles 2016.)
Whewell discussed a further, related test of a theory's truth:
namely, "coherence." In the case of true theories, Whewell
claimed, "the system becomes more coherent as it is further
extended. The elements which we require for explaining a new class of
facts are already contained in our system.... In false theories,
the contrary is the case" (1858b, 91). Coherence occurs when we
are able to extend our hypothesis to colligate a new class of
phenomena without ad hoc modification of the hypothesis. When Newton
extended his theory regarding an inverse-square attractive force,
which colligated facts of planetary motion and lunar motion, to the
class of "tidal activity," he did not need to add any new
suppositions to the theory in order to colligate correctly the facts
about particular tides. On the other hand, Whewell explained, when
phlogiston theory, which colligated facts about the class of phenomena
"chemical combination," was extended to colligate the
class of phenomena "weight of bodies," it was
*unable* to do so without an ad hoc and implausible
modification (namely, the assumption that phlogiston has
"negative weight") (see 1858b, 92-3). Thus coherence
can be seen as a type of consilience that happens over time; indeed,
Whewell remarked that these two criteria--consilience and
coherence--"are, in fact, hardly different" (1858b,
95).
## 4. Philosophy of Science: Necessary Truth
A particularly intriguing aspect of Whewell's philosophy of
science is his claim that empirical science can reach necessary
truths. Explaining this apparently contradictory claim was considered
by Whewell to be the "ultimate problem" of philosophy (see
the important paper by Morrison 1997). Whewell accounted for it by
reference to his antithetical epistemology. Necessary truths are
truths which can be known *a priori*; they can be known in this
way because they are necessary consequences of ideas which are *a
priori*. They are necessary consequences in the sense of being
analytic consequences. Whewell explicitly rejected Kant's claim
that necessary truths are synthetic. Using the example "7 + 8 =
15," Whewell claimed that "we refer to our conceptions of
seven, of eight, and of addition, and as soon as we possess the
conceptions distinctly, we see that the sum must be 15." That
is, merely by knowing the *meanings* of "seven,"
and "eight," and "addition," we see that it
follows necessarily that "7 + 8 = 15" (1848, 471).
Once the Ideas and conceptions are explicated, so that we understand
their meanings, the necessary truths which follow from them are seen
as being necessarily true. Thus, once the Idea of Space is explicated,
it is seen to be necessarily true that "two straight lines
cannot enclose a space." Whewell suggested that the first law of
motion is also a necessary truth, which was knowable *a priori*
once the Idea of Cause and the associated conception of force were
explicated. This is why empirical science is needed to see necessary
truths: because, as we saw above, empirical science is needed in order
to explicate the Ideas. Thus Whewell also claimed that, in the course
of science, truths which at first required experiment to be known are
seen to be capable of being known independently of experiment. That
is, once the relevant Idea is clarified, the necessary connection
between the Idea and an empirical truth becomes apparent. Whewell
explained that "though the discovery of the First Law of Motion
was made, historically speaking, by means of experiment, we have now
attained a point of view in which we see that it might have been
certainly known to be true independently of experience" (1847,
I, 221). Science, then, consists in the "idealization of
facts," the transferring of truths from the empirical to the
ideal side of the fundamental antithesis. He described this process as
the "progressive intuition of necessary truths."
Although they follow analytically from the meanings of ideas our minds
supply, necessary truths are nevertheless informative statements about
the physical world outside us; they have empirical content.
Whewell's justification for this claim is a theological one.
Whewell noted that God created the universe in accordance with certain
"Divine Ideas." That is, all objects and events in the
world were created by God to conform to certain of his ideas. For
example, God made the world such that it corresponds to the idea of
Cause partially expressed by the axiom "every event has a
cause." Hence in the universe every event conforms to this idea,
not only by having a cause but by being such that it could not occur
without a cause. On Whewell's view, we are able to have
knowledge of the world because the Fundamental Ideas which are used to
organize our sciences resemble the ideas used by God in his creation
of the physical world. The fact that this is so is no coincidence: God
has created our minds such that they contain these same ideas. That
is, God has given us our ideas (or, rather, the "germs" of
the ideas) so that "they can and must agree with the
world" (1860a, 359). God intends that we can have knowledge of
the physical world, and this is possible only through the use of ideas
which resemble those that were used in creating the world. Hence with
our ideas--once they are properly "unfolded" and
explicated--we can colligate correctly the facts of the world and
form true theories. And when these ideas are distinct, we can know
*a priori* the axioms which express their meaning.
An interesting consequence of this interpretation of Whewell's
view of necessity is that every law of nature is a necessary truth, in
virtue of following analytically from some idea used by God in
creating the world. Whewell drew no distinction between truths which
can be idealized and those which cannot; thus, potentially, any
empirical truth can be seen to be a necessary truth, once the ideas
and conceptions are explicated sufficiently. For example, Whewell
suggests that experiential truths such as "salt is
soluble" may be necessary truths, even if we do not recognize
this necessity (i.e., even if it is not yet knowable *a
priori*) (1860b, 483). Whewell's view thus destroys the line
traditionally drawn between laws of nature and the axiomatic
propositions of the pure sciences of mathematics; mathematical truth
is granted no special status.
In this way Whewell suggested a view of scientific understanding which
is, perhaps not surprisingly, grounded in his conception of natural
theology. Since our ideas are "shadows" of the Divine
Ideas, to see a law as a necessary consequence of our ideas is to see
it as a consequence of the Divine Ideas exemplified in the world.
Understanding involves seeing a law as being not an arbitrary
"accident on the cosmic scale," but as a necessary
consequence of the ideas God used in creating the universe. Hence the
more we idealize the facts, the more difficult it will be to deny
God's existence. We will come to see more and more truths as the
intelligible result of intentional design. This view is related to the
claim Whewell had earlier made in his Bridgewater Treatise (1833),
that the more we study the laws of nature the more convinced we will
be in the existence of a Divine Law-giver. (For more on
Whewell's notion of necessity, see Fisch 1985; Snyder 1994;
Morrison 1997; Snyder 2006; Ducheyne 2009.)
## 5. The Relation Between Scientific Practice, History of Science, and Philosophy of Science
An issue of interest to philosophers of science today is the relation
between knowledge of the actual practice and history of science and
developing a philosophy of science. Whewell is interesting to examine
in relation to this issue because he claimed to be inferring his
philosophy of science from his study of the history and practice of
science. His large-scale *History of the Inductive Sciences*
(first edition published 1837) was a survey of science from ancient to
modern times. He insisted upon completing this work before writing his
*Philosophy of the Inductive Sciences, founded upon their
history*. Moreover, Whewell sent proof-sheets of the
*History* to his many scientist-friends to ensure the accuracy
of his accounts. Besides knowing about the history of science, Whewell
had first-hand knowledge of scientific practice: he was actively
involved in science in several important ways. In 1825 he traveled to
Berlin and Vienna to study mineralogy and crystallography with Mohs
and other acknowledged masters of the field. He published numerous
papers in the field, as well as a monograph, and is still credited
with making important contributions to giving a mathematical
foundation to crystallography. He also made contributions to the
science of tidal research, pushing for a large-scale world-wide
project of tidal observations; he won a Royal Society gold medal for
this accomplishment. (For more on Whewell's contributions to
science, see Becher 1986; Ruse 1991; Ducheyne 2010a; Snyder 2011;
Honenberger 2018). Whewell acted as a terminological consultant for
Faraday and other scientists, who wrote to him asking for new words.
Whewell only provided terminology when he believed he was fully
knowledgeable about the science involved. In his section on the
"Language of Science" in the *Philosophy*, Whewell
makes this position clear (see 1858b, p. 293). Another interesting
aspect of his intercourse with scientists becomes clear in reading his
correspondence with them: namely, that Whewell constantly pushed
Faraday, Forbes, Lubbock and others to perform certain experiments,
make specific observations, and to try to connect their findings in
ways of interest to Whewell. In all these ways, Whewell indicated that
he had a deep understanding of the activity of science.
So how is this knowledge of science and its history important for his
work on the philosophy of science? Some commentators have claimed that
Whewell developed an *a priori* philosophy of science and then
shaped his *History* to conform to his own view (Stoll 1929;
Strong 1955). It is true that he started out, from his undergraduate
days, with the project of reforming the inductive philosophy of Bacon;
indeed this early inductivism led him to the view that learning about
scientific method must be inductive (i.e., that it requires the study
of the history of science). Yet it is clear that he believed his study
of the history of science and his own work in science were needed in
order to flesh out the details of his inductive position. Thus, as in
his epistemology, both *a priori* and empirical elements
combined in the development of his scientific methodology. Ultimately,
Whewell criticized Mill's view of induction developed in the
*System of Logic* not because Mill had not inferred it from a
study of the history of science, but rather on the grounds that Mill
had not been able to find a large number of appropriate examples
illustrating the use of his "Methods of Experimental
Inquiry." As Whewell noted, Bacon too had been unable to show
that his inductive method had been exemplified throughout the history
of science. Thus it appears that what was important to Whewell was not
whether a philosophy of science had been, in fact, inferred from a
study of the history of science, but rather, whether a philosophy of
science was *inferable from* it. That is, regardless of how a
philosopher came to invent her theory, she must be able to show it to
be exemplified in the actual scientific practice used throughout
history. Whewell believed that he was able to do this for his
discoverers' induction.
## 6. Moral Philosophy
Whewell's moral philosophy was criticized by Mill as being
"intuitionist" (see Mill 1852). Whewell's moral
philosophy *is* intuitionist in the sense of claiming that
humans possess a faculty ("conscience") which enables them
to discern directly what is morally right or wrong. Yet his view
differs from that of earlier philosophers such as Shaftesbury and
Hutcheson, who claimed that this faculty is akin to our sense organs
and thus spoke of conscience as a "moral sense."
Whewell's position is more similar to that of intuitionists such
as Cudworth and Clarke, who claimed that our moral faculty is reason.
Whewell maintained that there is no *separate* moral faculty,
but rather that conscience is just "reason exercised on moral
subjects." Because of this, Whewell referred to moral rules as
"principles of reason" and described the discovery of
these rules as an activity of reason (see 1864, 23-4). These
moral rules "are primary principles, and are established in our
minds simply by a contemplation of our moral nature and condition; or,
what expresses the same thing, by intuition" (1846, 11). Yet,
what he meant by "intuition" was not a non-rational mental
process, as Mill claimed. On Whewell's view, the contemplation
of the moral principles is conceived as a rational process. Whewell
noted that "Certain moral principles being, as we have said,
thus seen to be true by intuition, under due conditions of reflection
and thought, are unfolded into their application by further reflection
and thought"(1864, 12-13). Morality requires rules because
reason is our distinctive property, and "Reason directs us to
Rules" (1864, 45). Whewell's morality, then, does not have
one problem associated with the moral sense intuitionists. For the
moral sense intuitionist, the process of decision-making is
non-rational; just as we feel the rain on our skin by a non-rational
process, we just feel what the right action is. This is often
considered the major difficulty with the intuitionist view: if the
decision is merely a matter of intuition, it seems that there can be
no way to settle disputes over how we ought to act. However, Whewell
never suggested that decision-making in morality is a non-rational
process. On the contrary, he believed that reason leads to common
decisions about the right way to act (although our desires/affections
may get in the way): he explained "So far as men decide
conformably to Reason, they decide alike" (see 1864, 43). Thus
the decision on how we ought to act should be made by reason, and so
moral disputes can be settled rationally on Whewell's view.
Mill also criticized Whewell's claim that moral rules are
necessary truths which are self-evident. Mill took this to mean that
there can be no progress in morality--what is self-evident must
always remain so--and thus to the further conclusion that Whewell
believed the current rules of society to be necessary truths. Such a
view would tend to support the status quo, as Mill rightly complained.
(Thus he, rather unfairly, accused Whewell of justifying evil
practices such as cruelty to animals, forced marriages, and even
slavery.) But Mill was wrong to attribute such a view to Whewell.
Whewell did claim that moral rules are necessary truths, and invested
them with the epistemological status of self-evident
"axioms" (see 1864, 58). However, as noted above,
Whewell's view of necessary truth is a progressive one. This is
as much the case in morality as in science. The realm of morality,
like the realm of physical science, is structured by certain
Fundamental Ideas: in this case, Benevolence, Justice, Truth, Purity,
and Order (see 1852, xxiii). These moral ideas are conditions of our
moral experience; they enable us to perceive actions as being in
accordance with the demands of morality. Like the ideas of the
physical sciences, the ideas of morality must be explicated before the
moral rules can be derived from them (see 1860a, 388). There is a
progressive intuition of necessary truth in morality as well as in
science. Hence it does not follow that, because the moral truths are
axiomatic and self-evident, we currently know them (see 1846,
38-9). Indeed, Whewell claimed that "to test self-evidence
by the casual opinion of individual men, is a
self-contradiction" (1846, 35). Nevertheless, Whewell did
believe that we can look to the dictates of positive law of the most
"morally advanced" societies as a starting point in our
explication of the moral ideas. Although he surely had a biased view
of what societies were "morally advanced," he was not
therefore suggesting that the laws of those societies are the standard
of morality. Just as we examine the phenomena of the physical world in
order to explicate our scientific conceptions, we can examine the
facts of positive law and the history of moral philosophy in order to
explicate our moral conceptions. Only when these conceptions are
explicated can we see what axioms or necessary truths of morality
truly follow from them. Mill was therefore wrong to interpret
Whewell's moral philosophy as a justification of the status quo
or as constituting a "vicious circle." Rather,
Whewell's view shares some features of Rawls's later use
of the notion of "reflective equilibrium." (For more on
Whewell's moral philosophy, and his debate with Mill over
morality, see Snyder 2006, chapter four.)

## Benjamin Whichcote
The oldest member of the group, Benjamin Whichcote, is usually
considered to be the founding father of Cambridge Platonism. During
the Civil War period, Whichcote was appointed Provost of King's
College, Cambridge, and he served as Vice Chancellor of the University
in 1650. However, he was removed from his post at King's College
at the Restoration in 1660, and was obliged to seek employment
elsewhere, as a clergyman in London. The interruption to his academic
career may explain why he never published any philosophical treatises
as such. The main source for his philosophical views are his
posthumously-published sermons and aphorisms. Whichcote's
tolerant, optimistic and rational outlook set the intellectual tone
for Cambridge Platonism. Whichcote's philosophical views are
grounded in his repudiation of Calvinist theology. He held that God
being supremely perfect is necessarily good, wise and loving.
Whichcote regarded human nature as rational and perfectible, and he
believed that it is through reason as much as revelation that God
communicates with man. 'God is the most knowable of any thing in
the world' (Patrides, 1969, p.58). Without reason we would have
no means of demonstrating the existence of God, and no assurance that
revelation is from God. By reason Whichcote did not mean the
disputatious logic of the schools but discursive, demonstrative and
practical reason enlightened by contemplation of the divine. He held
that moral principles are immutable absolutes which exist
independently of human minds and institutions, and that virtuous
conduct is grounded in reason. Whichcote's *Aphorisms*
amount to a manual of practical ethics which amply illustrates his
conviction that the fruit of reason is not 'bare knowledge'
but action, or knowledge which 'doth go forth into act'.
It is through reason that we gain knowledge of the natural world, and
recognise natural phenomena as 'the EFFECTS OF GOD'.
Although Whichcote's published writings do not discuss natural
philosophy as such, his recognition of the demonstrative value of
natural philosophy for the argument from design anticipates the use of
natural philosophy in the apologetics of Cudworth and More.
## Culverwell, Smith and Sterry
Whichcote's optimism about human reason and his conviction that
philosophy properly belongs within the domain of religion, are shared
by the other Cambridge Platonists, all of whom affirmed the
compatibility of reason and faith. The fullest statement of this
position is Henry More's *The Apology of Henry More*
(1664) which sets out rules for the application of reason in religious
matters, stipulating the use of only those 'Philosophick
theorems' which are 'solid and rational in themselves, nor
really repugnant to the word of God'. Like Whichcote, Peter
Sterry, John Smith and Nathaniel Culverwell are known only through
posthumously published writings. The first published treatise by any
of the Cambridge Platonists was Nathaniel Culverwell's *An
Elegant and Learned Discourse of the Light of Nature* of 1652.
Like the other Cambridge Platonists Culverwell emphasises the freedom
of the will and proposes an innatist epistemology, according to which
the mind is furnished with 'clear and indelible
Principles' and reason an 'intellectual lamp' placed
in the soul by God to enable it to understand God's will
promulgated in the law of nature. These innate principles of the mind
also include moral principles. The soul is a divine spark, which
derives knowledge by inward contemplation, not outward observation.
Like the other Cambridge Platonists, Culverwell held that goodness is
intrinsic to all things, and that moral principles do not depend on
the will of God. He nevertheless underscored the importance of natural
law as the foundation of moral obligation.
John Smith taught mathematics at Queen's College until his
premature death in 1652. His posthumously published *Select
Discourses* (1659) discusses a number of metaphysical and
epistemological issues relating to religious belief--the
existence of God, immortality of the soul and the rationality of
religion. Smith outlines a hierarchy of four grades of cognitive
ascent: first through sense combined with reason, second through
reason in conjunction with innate notions, third through disembodied,
self-reflective reason, and finally through divine love.
Peter Sterry's only philosophical work, his posthumously-published
*A Discourse of the Freedom of the Will* (1675), is the most
visionary of all the writings of the Cambridge Platonists. Sterry was
deeply involved with events outside Cambridge as chaplain first to the
Parliamentary leader, Lord Brooke, and then to Oliver Cromwell. After
the death of Cromwell he retired to a Christian community in East
Sheen. In his *Discourse* Sterry argues that to act freely
consists in acting in accordance with one's nature, appropriately to
one's level of being, be that plant, animal or intellectual entity.
Human liberty is grounded in the divine essence and entails liberty of
the understanding and of the will.
## Henry More
A life-long fellow of Christ's College, Cambridge, Henry More
was the most prolific of the Cambridge Platonists. He was also the
most directly engaged in contemporary philosophical debate: not only
did he enter into correspondence with Descartes (between 1648 and
1649) but he also wrote against Hobbes, and was one of the earliest
English critics of Spinoza (whom he attacks in *Demonstrationum
duarum propositionum ... confutatio* and *Epistola altera*
both published in his *Opera omnia*, 1671). Although he
eventually became a critic of Cartesianism, he initially advocated the
teaching of Cartesianism in English Universities. More's
published writings included, besides philosophy, poetry, theology and
bible commentary. His main philosophical works are his *An Antidote
Against Atheism* (1653), his *Of the Immortality of the
Soul* (1659), *Enchiridion metaphysicum* (1671), and
*Enchiridion ethicum* (1667). Like the other Cambridge
Platonists, More used philosophy in defence of theism against the
claims of rational atheists. The most important statement of More's
theological position, his *An Explanation of the Grand Mystery of
Godliness*, appeared in 1660. In opposition to Calvinist
pessimistic voluntarism, this propounds a moral, rational
providentialism in which he vindicates the goodness and justice of God
by invoking the Origenist doctrine of the pre-existence of the soul.
It also makes the case for religious toleration.
In his philosophical writings, More elaborated a philosophy of spirit
which explained all the phenomena of mind and of the physical world as
the activity of spiritual substance controlling inert matter. More
conceived of both spirit and body as spatially extended, but defined
spiritual substance as the obverse of material extension: where body
is inert and solid, but divisible; spirit is active and penetrable,
but indivisible. It was in his correspondence with Descartes that he
first expounded his view that all substance, whether material or
immaterial, is extended. He went on to argue that space is infinite,
anticipating that other native of Grantham, Isaac Newton, and that God
who is an infinite spirit is an extended being (*res extensa*).
In *Enchiridion metaphysicum*, he argues that the properties of
space are analogous to the attributes of God (infinity, immateriality,
immobility etc.).
Within the category of spiritual substance More includes not just the
souls of living creatures and God himself but the main intermediate
causal agent of the cosmos, the Spirit of Nature (or 'Hylarchic
Principle'). Conceived as the interface between the divine and
the material, the Spirit of Nature is a 'Superintendant
Cause' which combines efficient and teleological causality to
ensure the smooth-running of the universe according to God's plan.
It can also be understood as encapsulating 'certain general
Modes and Lawes of Nature' (More, *A Collection*,
Preface, p. xvi) since it is the Spirit of Nature that is responsible
for uniting individual souls with bodies, and for ensuring the regular
operation of non-animate nature. More sought, by this hypothesis, to
account for phenomena that apparently defy the laws of mechanical
physics (for example the inter-vortical trajectory of comets, the
sympathetic vibration of strings and tidal motion). More underpinned
his soul-body dualism by his theory of 'vital congruity'
which explains soul-body interaction as a sympathetic attraction
between soul and body engineered by the operation of the Spirit of
Nature.
The most consistent theme of his philosophical writings is the argument
for demonstrating the existence and providential nature of God. The
foundation stone of More's philosophical and apologetic
enterprise is his philosophy of spirit, especially his arguments for
the existence of incorporeal causal agents, that is, souls or spirits.
Furthermore, More attempted to answer materialists like Thomas Hobbes
whom he regarded as an atheist. More's strategy was to show that
the same arguments that materialists use to demonstrate the existence
and properties of body also support the obverse, the existence of
incorporeal substances. In this way More sought to demonstrate that
the *idea* of incorporeal substance, or spirit, was as
intelligible as that of corporeal substance, i.e. body. Like Plato (in
*Laws* 10), More argues that the operations of nature
cannot be explained simply in terms of the chance collision of
material particles. Rather we must posit some other source of
activity, which More identifies as 'spirit'. It is a short
step, he argues, from grasping the concept of spirit, to accepting the
idea of an infinite spirit, namely God.
More underpins these *a priori* arguments for the existence of
spirit, with a wide range of *a posteriori* arguments, taken
from observed phenomena of nature to demonstrate the actions of
spirit. Through this excursus into observational method he accumulated
a wide variety of data ranging from experiments conducted by Robert
Boyle and members of the Royal Society, to supernatural effects
including cases of witchcraft and demons. He was censured by Boyle for
misappropriating his experiments to endorse his hypothesis of the
Spirit of Nature. Although his belief in evil spirits appears
inconsistent with his otherwise rational philosophy, it should be
remembered that belief in witchcraft was (a) not unusual in his time,
and (b) was entirely consistent with the theory of spirit according to
which to deny the existence of spirits, good or evil, leads logically
to the denial of the existence of God. As he put it, alluding to James
I's defence of episcopacy, "That saying is no less true in
Politicks 'No Bishop, no King,' than this in Metaphysicks,
'No Spirit, no God' " (More, 1662,
*Antidote*, p. 142). His most well-known fellow-believer was
Royal Society member, Joseph Glanvill (1636-1680), whose
*Sadducismus triumphatus*, More edited.
More also published a short treatise on ethics entitled
*Enchiridion Ethicum* (1667, translated as *An Account of
Virtue*), which was probably intended to be used as a
textbook. Indebted to Descartes' theory of the passions, this work
argues that knowledge of virtue is attainable by reason, and the
pursuit of virtue entails the control of the passions by the soul.
Motivation to good is supplied by rightly-directed emotion, while
virtue is achieved by the exercise of free will or *autoexousy*
(More uses the same term as Cudworth), that is the 'Power to act
or not act within ourselves'. Anticipating Shaftesbury's
concept of moral sense More posits a special faculty of the soul
combining reason and sensation which he calls the 'Boniform
Faculty'.
More used a number of different genres for conveying his philosophical
ideas to non-specialist readers. The most popular among these were his
*Philosophical Poems* (1647) and his *Divine Dialogues*
(1668). In *Conjectura cabbalistica* (1653), he presented core
themes of his philosophy in the form of an exposition of occulted
truths contained in the first book of Genesis. Subsequently he
undertook a detailed study of the Jewish Kabbalist texts translated
and published by Knorr von Rosenroth in *Kabbala denudata*
(1679). These studies were based on the belief, then current, that
kabbalistic writings contained, in symbolic form, original truths of
philosophy, as well as of religion. Kabbalism therefore exemplified
the compatibility of philosophy and faith. In addition to philosophy,
More published several studies of biblical prophecy (e.g.
*Apocalypsis apocalypseos,* 1680, *Paralipomena
prophetica*, 1685). In 1675, More prepared a Latin translation of
his works, *Opera omnia* which ensured his philosophy reached a
European audience as well as an English one.
## Ralph Cudworth
Like his friend Henry More, Ralph Cudworth spent his entire career as
a teacher at the University of Cambridge, ending up as Master of Clare
College. Cudworth published only one major work of philosophy in his
lifetime, *The True Intellectual System of the Universe*
(1678). Among the papers he left at his death were the treatises
published posthumously as *A Treatise Concerning Eternal and
Immutable Morality* (1731) and his *A Treatise of Freewill*
(1848). These papers also included two further manuscript treatises on
the topic of 'Liberty and Necessity', which have never
been printed.
Cudworth's *True Intellectual System* propounds an
anti-determinist system of philosophy grounded in his conception of
God as a fully perfect being, infinitely wise and good. The created
world reflects the perfection, wisdom and goodness of its creator. It
must, therefore, be orderly, intelligible, and organised for the best.
This anti-voluntarist understanding of God's attributes is also
the foundation of epistemology and ethics, since God's wisdom
and goodness are the guarantors of truth and of moral principles. By
contrast, a philosophy founded on a voluntaristic conception of the
deity would have no ground of certainty or of morality because it
would depend on the arbitrary will of God who could, by arbitrary
fiat, decree nonsense to be true and wrong to be right. It follows
that misconceptions of God's attributes, which emphasise his
power and will, result by definition in false philosophical systems
with sceptical and atheistic implications.
Much of *The True Intellectual System* amounts to an extended
*consensus gentium* argument for belief in God, demonstrable
from an analysis of ancient sources which showed that most ancient
philosophers were theists, and therefore that theism is compatible
with philosophy. Among the non-theists, Cudworth (1678, p. 165)
identifies four schools of atheistic philosophy, each of which is a
type of materialism: Hylopathian atheism or materialism,
'Atomical or Democritical' atheism, Cosmo-plastic atheism
(which makes the world-soul the highest numen), and Hylozoic atheism
(which attributes life to matter). Each of these ancient brands of
atheism has its latter-day manifestations in philosophers such as
Hobbes (an example of a Hylopathian atheist) and Spinoza (a latter-day
Hylozoist).
The philosophy constitutive of Cudworth's intellectual system
combines atomist natural philosophy with Platonic metaphysics.
Cudworth conceives this as having originated with Moses from whom it
was transmitted via Pythagoras to Greek and other philosophers,
including Descartes whom Cudworth regarded as a reviver of Mosaic
atomism. For Cudworth, as for Plato, soul is ontologically prior to
the physical world. Since motion, life, thought and action cannot be
explained in terms of material particles, haphazardly jolted together,
there must be some guiding originator, namely soul or spirit. In order
to account for movement, life and orderliness in the operations of
nature, Cudworth proposed his hypothesis of 'the Plastick Life
of Nature'. Similar in conception to More's Spirit of Nature,
Cudworth's Plastic Nature is a formative principle which acts as an
intermediary between the divine and the natural world, as the means
whereby God imprints His presence on his creation and makes His wisdom
and goodness manifest (and therefore intelligible) throughout created
nature.
The Platonist principle that mind precedes the world lies at the
foundation of Cudworth's epistemology which is discussed in
*A Treatise of Eternal and Immutable Morality*. This is the
most fully developed theory of knowledge by any of the Cambridge
Platonists, and the most extensive treatment of innatism by any
seventeenth-century philosopher. For Cudworth, as for Plato, ideas and
moral principles 'are eternal and self-subsistent things'.
The external world is, intrinsically, intelligible, since it bears the
imprint of its creator in the order and relationship of its component
parts, as archetype to ectype. Cognition depends on the same
principles, for just as the created world is a copy of the divine
archetype, so also human minds contain the imprint of Divine wisdom
and knowledge. Since the human mind mirrors the mind of God, it is
ready furnished with ideas and the ability to reason. Cognition
therefore entails recollection and the ideas of things with which the
mind thinks are therefore 'anticipations', for which
Cudworth adopts the Stoic term *prolepsis*. Cognition is not a
passive process, but involves the active participation of the mind.
Although innate knowledge is the only true knowledge, Cudworth does
not reject sense knowledge because sensory input is essential for
knowledge of the body and the external world. However, raw sense data
is not, by itself, knowledge since it requires mental processing in
order to become knowledge. As Cudworth puts it, we cannot understand
the book of nature unless we know how to read.
Cudworth's theory of the mind as active is matched by an
anti-determinist ethics of action, according to which the soul freely
directs itself towards the good. In *A Treatise* Cudworth
argues not only that ideas exist independently of human minds, but
also the principles of morality are eternal and immutable. In a
concerted attack on Hobbesian moral relativism, Cudworth argues that
the criteria of right and wrong, good and evil, justice and injustice
are not a matter of convention, but are founded in the goodness and
justice of God. Like Plato in the *Euthyphro*, Cudworth argues
that it is not God's will that determines goodness, but that God
wills things because they are good. The exercise of virtue is not,
however, a passive process, but requires the free exercise of the
individual will. Cudworth sets out his theory of free will in three
treatises on 'Liberty and Necessity', only one of which
has been published, and that posthumously: *A Treatise of
Freewill* (1848). According to Cudworth, the will is not a faculty
of the soul, distinct from reason, but a power of the soul which
combines the functions of both reason and will in order to direct the
soul towards the good. Cudworth's use of the terms
'hegemonikon' (taken from Stoicism) and
'autexousion' (taken from Plotinus) underlines the fact
that the exercise of will entails the power to act. It is internal
direction, not external compulsion that induces us to act either
morally or immorally. Without the freedom (and therefore power) to
act, there would be no moral responsibility. Moral conduct is active,
not passive. Virtuous action is therefore a matter of active internal
self-determination, rather than determination from without.
In *A Treatise of Freewill*, Cudworth elaborates his conception
of the *hegemonikon*, an integrative power of the soul, which
combines the higher intellectual functions of the soul, will and
reason with the lower, animal, appetites of the soul. Furthermore,
Cudworth conceives of the *hegemonikon* not simply as the soul
but the whole person, 'that which is properly we
ourselves' (Cudworth, 1996, p. 178). Cudworth's concept of
*hegemonikon* lays the basis for a concept of self-identity
founded in a subject that is at once thinking, autonomous and
end-directed. Cudworth did not (as far as is known) develop a
political philosophy. However, the political implications of his
ethical theory set him against Hobbes, but also, in many ways,
anticipate John Locke.
### Legacy
Among the immediate philosophical heirs of the Cambridge Platonists,
mention should be made of Henry More's pupil, Anne Conway
(1631-1679), one of the very few female philosophers of the
period. Her *Principles of the Most Ancient and Modern
Philosophy* (1692) includes a critique of More's dualistic
philosophy of spirit, proposing instead a metaphysical monism that
anticipates Leibniz. Another figure linked to More was John Norris
(1657-1712) who was to become the leading English exponent of
the philosophy of Malebranche. Whichcote's philosophical wisdom was
admired by Anthony Ashley Cooper, third Earl of Shaftesbury, who
published his *Select Sermons* in 1698. Shaftesbury's tutor,
John Locke was the intimate friend of Cudworth's philosophical
daughter, Damaris Masham.
The Cambridge Platonists have yet to receive full recognition as
philosophers. Evidence from publication and citation suggests that
their philosophical influence was more far-reaching than is normally
recognised in modern histories of philosophy. Culverwell's
*Discourse* was reprinted four times, including at Oxford. The
impact of Cudworth on Locke has yet to be fully investigated. Richard
Price, and Thomas Reid were both indebted to Cudworth. As the first
philosophers to write primarily and consistently in the English
language (preceded only by Sir Kenelm Digby), their impact is still
felt in English philosophical terminology, through their coinage of
such familiar terms as 'materialism',
'consciousness', and 'Cartesianism'.
The intellectual legacy of the Cambridge Platonists extends not just
to philosophical debate in seventeenth-century England and to Scottish
Enlightenment thought, but into the European Enlightenment and beyond.
Leibniz certainly read Cudworth and More, whose works were known
beyond the English-speaking world thanks to Latin translations:
More's *Opera omnia* appeared in 1675-9, and Cudworth's
entire printed works were translated into Latin by Johann Lorenz
Mosheim and published in Jena in 1733. Cudworth's theory of Plastic
Nature was taken up in vitalist debates in the French enlightenment.
Although their critique of Descartes, Hobbes, Spinoza has ensured that
the Cambridge Platonists are never completely ignored in philosophical
history, they deserve to be considered an important strand in English
seventeenth-century philosophy.
## 1. Life and Works
The son of an Anglican clergyman, Whitehead graduated from Cambridge
in 1884 and was elected a Fellow of Trinity College that same year.
His marriage to Evelyn Wade six years later was largely a happy one
and together they had a daughter (Jessie) and two sons (North and
Eric). After moving to London, Whitehead served as president of the
Aristotelian Society from 1922 to 1923. After moving to Harvard, he
was elected to the British Academy in 1931. His moves to both London
and Harvard were prompted in part by institutional regulations
requiring mandatory retirement, although his resignation from
Cambridge was also done partly in protest over how the University had
chosen to discipline Andrew Forsyth, a friend and colleague whose
affair with a married woman had become something of a local
scandal.
In addition to Russell, Whitehead influenced many other students who
became equally or more famous than their teacher, examiner or
supervisor himself. For example: mathematicians G. H. Hardy and J. E.
Littlewood; mathematical physicists Edmund Whittaker, Arthur
Eddington, and James Jeans; economist J. M. Keynes; and philosophers
Susanne Langer, Nelson Goodman, and Willard Van Orman Quine. Whitehead
did not, however, inspire any school of thought during his lifetime,
and most of his students distanced themselves from parts of his
teachings that they considered anachronistic. For example:
Whitehead's conviction that pure mathematics and applied
mathematics should not be separated, but cross-fertilize each other,
was not shared by Hardy, but seen as a remnant of the fading mixed
mathematics tradition; after the birth of the theories of relativity
and quantum physics, Whitehead's method of abstracting some of
the basic concepts of mathematical physics from common experiences
seemed antiquated compared to Eddington's method of world
building, which aimed at constructing an experiment-matching world
from mathematical building blocks; when, due to Whitehead's
judgment as one of the examiners, Keynes had to rewrite his fellowship
dissertation, Keynes raged against Whitehead, claiming that Whitehead
had not bothered to try to understand Keynes' novel approach to
probability; and Whitehead's main philosophical
doctrine--that the world is composed of deeply interdependent
processes and events, rather than mostly independent material things
or objects--turned out to be largely the opposite of
Russell's doctrine of logical atomism, and his metaphysics was
banished by the logical positivists from their dreamland of pure
scientific philosophy.
Despite the fact that he did not inspire any school of thought during
his life, his influence on contemporaries was significant. The
late Associate Justice of the United States Supreme Court, Felix
Frankfurter, wrote:
>
>
> From knowledge gained through the years of the personalities who in
> our day have affected American university life, I have for some time
> been convinced that no single figure has had such a pervasive
> influence as the late Professor Alfred North Whitehead. (*New York
> Times,* January 8, 1948)
>
>
>
Moreover, Whitehead's philosophical views posthumously inspired
the movement of process philosophy and theology, and today
Whitehead's ideas continue to be felt and are revalued in
varying degrees in all of the main areas in which he worked. One of
the most important factors in recent Whitehead scholarship is the
gradual publishing of volumes of *The Edinburgh Critical Edition of
the Complete Works of Alfred North Whitehead*. Two volumes of
*The Harvard Lectures of Alfred North Whitehead*, mainly
containing student notes of lectures and seminars given by
Whitehead at Harvard from September 1924 until May 1927,
have already been published by Edinburgh University Press in
2017 and 2021 respectively (see the Primary Literature section of the
Bibliography below), and four more volumes are on their
way. Their publication is a stimulus for Whitehead
researchers all over the world, as is clear from the 2020 book edited
by Brian Henning and Joseph Petek, *Whitehead at Harvard,
1924-1925*.
A short chronology of the major events in Whitehead's life is
below.
| Year | Event |
| --- | --- |
| 1861 | Born February 15 in Ramsgate, Isle of Thanet, Kent,
England. |
| 1880 | Enters Trinity College, Cambridge, with a scholarship in
mathematics. |
| 1884 | Elected to the Apostles, the elite discussion club to which
Tennyson had belonged in the 1820s; graduates with a B.A. in Mathematics; elected a
Fellow in Mathematics at Trinity. |
| 1890 | Meets Russell; marries Evelyn Wade. |
| 1903 | Elected a Fellow of the Royal Society as a result of his work on
universal algebra, symbolic logic, and the foundations of
mathematics. |
| 1910 | Resigns from Cambridge and moves to London. |
| 1911 | Appointed Lecturer at University College London. |
| 1912 | Elected President of both the South-Eastern Mathematical
Association and the London branch of the Mathematical Association for
the year 1913. |
| 1914 | Appointed Professor of Applied Mathematics at the Imperial
College of Science and Technology. |
| 1915 | Elected President of the Mathematical Association for the
two-year period 1915-1917. |
| 1921 | Meets Albert Einstein. |
| 1922 | Elected President of the Aristotelian Society for the one-year
period 1922-1923. |
| 1924 | Appointed Professor of Philosophy at Harvard University. |
| 1931 | Elected a Fellow of the British Academy. |
| 1937 | Retires from Harvard. |
| 1945 | Awarded Order of Merit. |
| 1947 | Dies December 30 in Cambridge, Massachusetts, USA. |
More detailed information about Whitehead's life can be found in
the comprehensive two-volume biography *A.N. Whitehead: The Man and
His Work* (1985, 1990) by Victor Lowe and J.B. Schneewind. Paul
Schilpp's *The Philosophy of Alfred North Whitehead*
(1941) also includes a short autobiographical essay, in addition to
providing a comprehensive critical overview of Whitehead's
thought and a detailed bibliography of his writings.
Other helpful introductions to Whitehead's work include Victor
Lowe's *Understanding Whitehead* (1962), Stephen
Franklin's *Speaking from the Depths* (1990), Thomas
Hosinski's *Stubborn Fact and Creative Advance* (1993),
Elizabeth Kraus' *The Metaphysics of Experience* (1998),
Robert Mesle's *Process-Relational Philosophy*
(2008), John Cobb's *Whitehead Word Book* (2015) and
Jay McDaniel's *What is Process Thought?* (2021). Recommendable
for the more advanced Whitehead student are Ivor Leclerc's
*Whitehead's Metaphysics* (1958), Wolfe Mays'
*The Philosophy of Whitehead* (1959), Donald Sherburne's
*A Whiteheadian Aesthetics* (1961), Charles Hartshorne's
*Whitehead's Philosophy* (1972), Lewis Ford's *The
Emergence of Whitehead's Metaphysics* (1984), George Lucas'
*The Rehabilitation of Whitehead* (1989), David Griffin's
*Whitehead's Radically Different Postmodern Philosophy*
(2007), Steven Shaviro's *Without Criteria*
(2009), Isabelle Stengers' *Thinking with Whitehead*
(2011), and Didier Debaise's *Speculative Empiricism: Revisiting
Whitehead* (2017a). For a chronology of Whitehead's major
publications, readers are encouraged to consult the Primary Literature
section of the Bibliography below.
## 2. Mathematics and Logic
Whitehead began his academic career at Trinity College, Cambridge
where, starting in 1884, he taught for a quarter of a century. In
1890, Russell arrived as a student and during the 1890s the two men
came into regular contact with one another. According to Russell,
>
>
> Whitehead was extraordinarily perfect as a teacher. He took a personal
> interest in those with whom he had to deal and knew both their strong
> and their weak points. He would elicit from a pupil the best of which
> a pupil was capable. He was never repressive, or sarcastic, or
> superior, or any of the things that inferior teachers like to be. I
> think that in all the abler young men with whom he came in contact he
> inspired, as he did in me, a very real and lasting affection. (1956:
> 104)
>
>
>
By the early 1900s, both Whitehead and Russell had completed books on
the foundations of mathematics. Whitehead's 1898 *A Treatise
on Universal Algebra* had resulted in his election to the Royal
Society. Russell's 1903 *The Principles of Mathematics*
had expanded on several themes initially developed by Whitehead.
Russell's book also represented a decisive break from the
neo-Kantian approach to mathematics Russell had developed six years
earlier in his *Essay on the Foundations of Geometry*. Since
the research for a proposed second volume of Russell's
*Principles* overlapped considerably with Whitehead's own
research for a planned second volume of his *Universal
Algebra*, the two men began collaboration on what eventually would
become *Principia Mathematica* (1910, 1912, 1913). According to
Whitehead, they initially expected the research to take about a year
to complete. In the end, they worked together on the project for a
decade.
According to Whitehead--inspired by Hermann
Grassmann--mathematics is the study of pattern:
>
>
> mathematics is concerned with the investigation of patterns of
> connectedness, in abstraction from the particular relata and the
> particular modes of connection. (1933 [1967: 153])
>
>
>
In his *Treatise on Universal Algebra*, Whitehead took a
generalized algebra--called 'universal
algebra'--to be the most appropriate tool for this study or
investigation, but after meeting Giuseppe Peano during the section
devoted to logic at the First International Congress of Philosophy in
1900, Whitehead and Russell became aware of the potential of symbolic
logic to become the most appropriate tool to rigorously study
mathematical patterns.
With the help of Whitehead, Russell extended Peano's symbolic
logic in order to be able to deal with all types of relations and,
consequently, with all the patterns of relatedness that mathematicians
study. In his *Principles of Mathematics*, Russell gave an
account of the resulting new symbolic logic of classes and
relations--called 'mathematical logic'--as well
as an outline of how to reconstruct all existing mathematics by means
of this logic. After that, instead of only being a driving force
behind the scenes, Whitehead became Russell's public co-author in
the actual and rigorous reconstruction of mathematics from logic.
Russell often presented this reconstruction--giving rise to the
publication of the three *Principia Mathematica*
volumes--as the reduction of mathematics to logic, both
*qua* definitions and *qua* proofs. And since the 1920s,
following Rudolf Carnap, Whitehead and Russell's project as well
as similar reduction-to-logic projects, including the earlier project
of Gottlob Frege, are classified under the heading of
'logicism'.
However, Sebastian Gandon has highlighted in his 2012 study
*Russell's Unknown Logicism* that Russell and
Whitehead's logicism project differed in at least one important
respect from Frege's logicism project. Frege adhered to a
radical universalism, and wanted the mathematical content to be
entirely determined from within the logical system. Russell and
Whitehead, however, took into account the consensus, or took a stance
in the ongoing discussions among mathematicians, with respect to the
constitutive features of the already existing,
'pre-logicized' branches of mathematics, and then
evaluated for each branch which of several possible types of relations
were best suited to logically reconstruct it, while safeguarding its
topic-specific features. Contrary to Frege, Whitehead and Russell
tempered their urge for universalism to take into account the
topic-specificity of the various mathematical branches, and as a
working mathematician, Whitehead was well positioned to compare the
pre-logicized mathematics with its reconstruction in the logical
system.
For Russell, the logicism project originated from the dream of a
rock-solid mathematics, no longer governed by Kantian intuition, but
by logical rigor. Hence, the discovery of a devastating
paradox--later called 'Russell's
paradox'--at the heart of mathematical logic was a serious
blow for Russell, and kicked off his search for a theory to prevent
paradox. He actually came up with several theories, but retained the
ramified theory of types in *Principia Mathematica*. Moreover,
the 'logicizing' of arithmetic required extra-logical
patchwork: the axioms of reducibility, infinity, and choice. None of
this patchwork could ultimately satisfy Russell. His original dream
evaporated and, looking back later in life, he wrote: "The
splendid certainty which I had always hoped to find in mathematics was
lost in a bewildering maze" (1959: 157).
Whitehead originally conceived of the logicism project as an
improvement upon his algebraic project. Indeed, Whitehead's
transition from the solitary *Universal Algebra* project to the
joint *Principia Mathematica* project was a transition from
universal algebra to mathematical logic as the most appropriate
symbolic language to embody mathematical patterns. It entailed a
generalization from the embodiment of absolutely abstract patterns by
means of algebraic forms of variables to their embodiment by means of
propositional functions of real variables. Hardy was quite right in
his review of the first volume of *Principia Mathematica* when
he wrote: "mathematics, one may say, is the science of
propositional functions" (quoted by Grattan-Guinness 1991:
173).
Whitehead saw mathematical logic as a tool to guide the
mathematician's essential activities of intuiting, articulating,
and applying patterns, and he did not aim at replacing mathematical
intuition (pattern recognition) with logical rigor. In the latter
respect, Whitehead, from the start, was more like Henri
Poincaré than Russell (cf. Desmet 2016a). Consequently, the
discovery of paradox at the heart of mathematical logic was less of a
blow to Whitehead than to Russell and, later in life, now and again,
Whitehead simply reversed the Russellian order of generality and
importance, writing that "symbolic logic" only represents
"a minute fragment" of the possibilities of "the
algebraic method" (1947 [1968: 129]).
For a more detailed account of the genesis of *Principia
Mathematica* and Whitehead's place in the philosophy of
mathematics, cf. Smith 1953, Code 1985, Grattan-Guinness 2000 and
2002, Irvine 2009, Bostock 2010, Desmet 2010, N. Griffin et al. 2011,
N. Griffin & Linsky 2013.
Following the completion of *Principia,* Whitehead and Russell
began to go their separate ways (cf. Ramsden Eames 1989, Desmet &
Weber 2010, Desmet & Rusu 2012). Perhaps inevitably,
Russell's anti-war activities and Whitehead's loss of his
youngest son during World War I led to something of a split between
the two men. Nevertheless, the two remained on relatively good terms
for the rest of their lives. To his credit, Russell comments in his
*Autobiography* that when it came to their political
differences, Whitehead
>
>
> was more tolerant than I was, and it was much more my fault than his
> that these differences caused a diminution in the closeness of our
> friendship. (1956: 100)
>
>
>
## 3. Physics
Even with the publication of its three volumes, *Principia
Mathematica* was incomplete. For example, the logical
reconstruction of the various branches of geometry still needed to be
completed and published. In fact, it was Whitehead's task to do
so by producing a fourth *Principia Mathematica* volume.
However, this volume never saw the light of day. What Whitehead did
publish were his repeated attempts to logically reconstruct the
geometry of space and time, hence extending the logicism project from
pure mathematics to applied mathematics or, put differently, from
mathematics to physics--an extension which Russell greeted with
enthusiasm and saw as an important step in the deployment of his new
philosophical method of logical analysis.
At first, Whitehead focused on the geometry of space.
When Whitehead and Russell logicized the concept of number, their
starting point was our intuition of equinumerous classes of
individuals--for example, our recognition that the class of
dwarfs in the fairy tale of Snow White (Doc, Grumpy, Happy, Sleepy,
Bashful, Sneezy, Dopey) and the class of days in a week (from Monday
to Sunday) have 'something' in common, namely, the
something we call 'seven.' Then they logically defined (i)
classes C and C' to be equinumerous when there is a one-to-one
relation that correlates each of the members of C with one member of
C', and (ii) the number of a class C as the class of all the
classes that are equinumerous with C.
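Stated compactly (a modern paraphrase, not Whitehead and Russell's own *Principia* symbolism, though the symbol Nc echoes their notation for cardinal number), the two definitions read:

```latex
% (i) Equinumerosity: C and C' are equinumerous iff some one-to-one
%     relation correlates each member of C with one member of C'
C \approx C' \;\iff\; \exists f \,\bigl(f : C \to C' \text{ is a bijection}\bigr)

% (ii) The number of a class C is the class of all classes
%      equinumerous with C
\mathrm{Nc}(C) \;=\; \{\, X \mid X \approx C \,\}

% Example: the class of the seven dwarfs and the class of the days of
% the week fall into the same equivalence class, the number seven:
\mathrm{Nc}(\text{dwarfs}) \;=\; \mathrm{Nc}(\text{days of the week}) \;=\; 7
```

This is the Frege-Russell definition of cardinal number: 'seven' is not something over and above the seven-membered classes, but the class of all of them.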
When Whitehead logicized the space of physics, his starting point was
our intuition of spatial volumes and of how one volume may contain (or
extend over) another, giving rise to the (mereo)logical relation of
containment (or extension) in the class of volumes, and to the concept
of converging series of volumes--think, for example, of a series
of Russian dolls, one contained in the other, but idealized to ever
smaller dolls. Whitehead made all this rigorous and then, crudely put,
defined the points from which to further construct the geometry of
space.
There is a striking resemblance between Whitehead's construction
of points and the construction of real numbers by Georg Cantor, who
had been one of Whitehead and Russell's main sources of
inspiration next to Peano. Indeed, Whitehead defined points as
equivalence classes of converging series of volumes, and Cantor
defined real numbers as equivalence classes of converging series of
rational numbers. Moreover, because Whitehead's basic
geometrical entities are not (as in Euclid) extensionless
points but volumes, Whitehead can be seen as one of the fathers of
point-free geometry; and because Whitehead's basic geometrical
relation is the mereological (or part-whole) relation of extension, he
can also be seen as one of the founders of mereology (and even, when
we take into account his later work on this topic in part IV of
*Process and Reality*, of mereotopology).
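The parallel between the two constructions can be made explicit in schematic form (a modern paraphrase; the square brackets denote an equivalence class, and the convergence conditions are stated only informally):

```latex
% Cantor: a real number as an equivalence class of convergent
% (Cauchy) sequences of rationals
r \;=\; \bigl[\,(q_1, q_2, q_3, \ldots)\,\bigr], \qquad q_i \in \mathbb{Q}

% Whitehead: a point as an equivalence class of converging series of
% volumes, each volume extending over (containing) the next
p \;=\; \bigl[\,(v_1 \supset v_2 \supset v_3 \supset \cdots)\,\bigr]

% Two series of volumes count as equivalent when every member of one
% eventually extends over some member of the other, and vice versa
```

In both cases an 'ideal' entity (a real number, a point) is defined entirely in terms of entities we can actually be given (rationals, volumes) plus a relation of convergence.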
"Last night", Whitehead wrote to Russell on 3 September
1911,
>
>
> the idea suddenly flashed on me that time could be treated in exactly
> the same way as I have now got space (which is a picture of beauty, by
> the bye). (Unpublished letter kept in The Bertrand Russell Archives at
> McMaster University)
>
>
>
Shortly after, Whitehead must have learned about Einstein's
Special Theory of Relativity (STR) because in a letter to Wildon Carr
on 10 July 1912, Russell suggested to the Honorary Secretary of the
Aristotelian Society that Whitehead possibly might deliver a paper on
the principle of relativity, and added: "I know he has been
going into the subject". Anyhow, in the early years of the
second decade of the twentieth century, Whitehead's interest
shifted from the logical reconstruction of the Euclidean space of
classical physics to the logical reconstruction of the Minkowskian
space-time of the STR.
A first step to go from space to space-time was the replacement of
(our intuition of) spatial volumes with (our intuition of)
spatio-temporal regions (or events) as the basis of the construction
(so that, for example, a point of space-time could be defined as an
equivalence class of converging spatio-temporal regions). However,
whereas Whitehead had constructed the Euclidean distance based on our
intuition of cases of spatial congruence (for example, of two parallel
straight line segments being equally long), he now struggled to
construct the Minkowskian metric in terms of a concept of
spatio-temporal congruence, based on a kind of merger of our intuition
of cases of spatial congruence and our intuition of cases of temporal
congruence (for example, of two candles taking equally long to burn
out).
So, as a second step, Whitehead introduced a second relation in the
class of spatio-temporal regions next to the relation of extension,
namely, the relation of cogredience, based on our intuition of rest or
motion. Whitehead's use of this relation gave rise to a constant
*k*, which allowed him to merge spatial and temporal
congruence, and which appeared in his formula for the metric of
space-time. When Whitehead equated *k* with
*c*² (the square of the speed of light), his metric
became equal to the Minkowskian metric.
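In modern notation (a gloss, not Whitehead's own symbolism), the line element his construction recovers can be sketched as follows:

```latex
% Whitehead's metric with the merging constant k, written in
% modern notation (an assumption about presentation, not his own):
%   ds^2 = k\,dt^2 - dx^2 - dy^2 - dz^2
% Setting k = c^2 yields the standard Minkowski line element:
\[
  ds^2 \;=\; c^2\,dt^2 \;-\; dx^2 \;-\; dy^2 \;-\; dz^2
\]
```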
Whitehead's most detailed account of this reconstruction of the
Minkowskian space-time of the STR was given in his 1919 book, *An
Enquiry concerning the Principles of Natural Knowledge*, but he
also offered a less technical account in his 1920 book, *The
Concept of Nature*.
Whitehead first learned about Einstein's General Theory of
Relativity (GTR) in 1916. He admired Einstein's new mathematical
theory of gravitation, but rejected Einstein's explanation of
gravitation for not being coherent with some of our basic intuitions.
Einstein explained the gravitational motion of a free mass-particle in
the neighborhood of a heavy mass as due to the curvature of space-time
caused by this mass. According to Whitehead, the theoretical concept
of a contingently curved space-time does not cohere with our
measurement practices, which are based on the essential uniformity of
the texture of our spatial and temporal intuition.
In general, Whitehead opposed the modern scientist's attitude of
dropping the requirement of coherence with our basic intuitions, and
he revolted against the ensuing bifurcation of nature into the world
of science and that of intuition. In particular, as Einstein's
critic, he set out to give an alternative rendering of the
GTR--an alternative that passed not only what Whitehead called
"the narrow gauge", which tests a theory's empirical
adequacy, but also what he called "the broad gauge", which
tests its coherence with our basic intuitions.
In 1920, first in a newspaper article (reprinted in *Essays in
Science and Philosophy*), and then in a lecture (published as
Chapter VIII of *Concept of Nature*), Whitehead made public an
outline of his alternative to Einstein's GTR. In 1921, Whitehead
had the opportunity to discuss matters with Einstein himself. And
finally, in 1922, Whitehead published a book with a more detailed
account of his alternative theory of gravitation (ATG)--*The
Principle of Relativity*.
According to Whitehead, the Maxwell-Lorentz theory of electrodynamics
(unlike Einstein's GTR) could be conceived as coherent with our
basic intuitions--even in its four-dimensional format, namely, by
elaborating Minkowski's electromagnetic worldview. Hence,
Whitehead developed his ATG in close analogy with the theory of
electrodynamics. He replaced Einstein's geometric explanation
with an electrodynamics-like explanation. Whitehead explained the
gravitational motion of a free mass-particle as due to a field action
determined by retarded wave-potentials propagating in a uniform
space-time from the source masses to the free mass-particle.
It is important to stress that Whitehead had no intention of improving
the predictive content of Einstein's GTR, only the explanatory
content. However, Whitehead's replacement of Einstein's
explanation with an alternative explanation entailed a replacement of
Einstein's formulae with alternative formulae; and these
different formulae implied different predictions. So it would be
incorrect to say that Whitehead's ATG is empirically equivalent
to Einstein's GTR. What can be claimed, however, is that for a
long time Whitehead's theory was experimentally
indistinguishable from Einstein's theory.
In fact, like Einstein's GTR, Whitehead's ATG leads to
Newton's theory of gravitation as a first approximation. Also
(as shown by Eddington in 1924 and J. L. Synge in 1952)
Einstein's and Whitehead's theories of gravitation lead to
an identical solution for the problem of determining the gravitational
field of a single, static, and spherically symmetric body--the
Schwarzschild solution. This implies, for example, that
Einstein's GTR and Whitehead's ATG lead to the exact same
predictions not only with respect to the precession of the perihelion
of Mercury and the bending of starlight in the gravitational field of
the sun (as already shown by Whitehead in 1922 and William Temple in
1924) but also with respect to the red-shift of the spectral lines of
light emitted by atoms in the gravitational field of the sun (contrary
to Whitehead's own conclusion in 1922, which was based on a
highly schematized and soon outdated model of the molecule). Moreover
(as shown by R. J. Russell and Christoph Wassermann in 1986 and
published in 2004) Einstein's and Whitehead's theories of
gravitation also lead to an identical solution for the problem of
determining the gravitational field of a single, rotating, and axially
symmetric body--the Kerr solution.
Einstein's and Whitehead's predictions become different,
however, when considering more than one body. Indeed, Einstein's
equation of gravitation is non-linear while Whitehead's is
linear; and this divergence *qua* mathematics implies a
divergence *qua* predictions in the case of two or more bodies.
For example (as shown by G. L. Clark in 1954) the two theories lead to
different predictions with respect to the motion of double stars. The
predictive divergence in the case of two bodies, however, is quite
small, and until recently experimental techniques were not
sufficiently refined to confirm either Einstein's predictions or
Whitehead's, for example, with respect to double stars. In 2008,
based on a precise timing of the pulsar B1913+16 in the Hulse-Taylor
binary system, Einstein's predictions with respect to the motion
of double stars were confirmed, and Whitehead's refuted (by Gary
Gibbons and Clifford Will). The important fact from the viewpoint of
the philosophy of science is not that, since the 1970s, physicists
have now and again claimed the experimental refutation of
Whitehead's ATG, but that for decades it was experimentally
indistinguishable from Einstein's GTR, hence refuting two modern
dogmas. First, that theory choice is solely based on empirical facts.
Clearly, next to facts, values--especially aesthetic
values--are at play as well. Second, that the history of science
is a succession of victories over the army of our misleading
intuitions, that each success of science must be interpreted as a defeat
of intuition, and that a truth cannot be scientific unless it hurts human
intuition. Surely, we can be scientific without taming the authority
of our intuition and without engaging in the disastrous race to
disenchant nature and humankind.
For a more detailed account of Whitehead's involvement with
Einstein's STR and GTR, cf. Palter 1960, Von Ranke 1997,
Herstein 2006 and Desmet 2011, 2016b, and 2016c.
In 1925, in one of his Lowell lectures (published as Chapter VIII of
*Science and the Modern World*), Whitehead made public a
popular outline of his alternative to Bohr's solar system view of
material atoms, but he never published a more detailed
and technical account of this alternative quantum theory (AQT).
Whitehead called his AQT 'the theory of primates.' In it,
primates are not apes, of course, but standing
(or stationary) electromagnetic waves (or vibrations).
These waves are different from the
traveling electromagnetic waves *outside* atoms. They
are not light waves, but standing waves of charge density
*inside* atoms, in fact, *constituting* atoms.
Whitehead was up to date with respect to the old quantum
mechanics of Planck, Einstein and Bohr. He was as familiar with
Jeans' *Report on Radiation and the Quantum-Theory*
(1914) as with Eddington's *Report on the Relativity Theory
of Gravitation* (1918). And prior to his departure to
Harvard, on 12 July 1924, Whitehead chaired a
symposium--"The Quantum Theory: How far does it modify the
mathematical, the physical, and the psychological concepts of
continuity?"--which was part of a joint session of the
Aristotelian Society and the Mind Society. Unfortunately, he did not
himself deliver a lecture at that symposium, and the most technical
account of his AQT can be found in the student notes of lectures
Whitehead gave at Harvard, especially in the fall semester of the
academic year 1924-1925. Whitehead's lectures, however, were not
clear-cut presentations, but instances of philosophy in the making.
Moreover, his philosophy students lacked the necessary knowledge
of mathematical physics to fully comprehend Whitehead when he
talked physics. Consequently, the student notes of Whitehead's Harvard
lectures dealing with quantum theory are of low quality. Ronny
Desmet, however, has made a first attempt to correct some of the
mistakes and to reconstruct Whitehead's AQT in his 2020 paper,
"Whitehead's Highly Speculative Lectures on Quantum Theory"
(Henning & Petek 2020: 154-181), and in his 2022 paper,
"Whitehead's 1925 View of Function and Time" (to be published).
In his AQT, Whitehead conceived of all electrons
and protons as compositions of standing electromagnetic
waves of charge density, which he called 'primates' or 'primordial
elements.' In *Science and the Modern World*, Whitehead wrote:
"We shall conceive each primordial element as a vibratory ebb and flow
of an underlying energy, or activity" (1925 [1967: 35]).
When a material atom emits energy, one of
its composing standing electromagnetic waves
is transformed into a traveling electromagnetic wave, that is,
into light. The energy emitted by an atom is the energy of the
emitted light, and the action of the emitted light is the product
of its energy and its period (which is the inverse of its frequency).
Experiment had revealed that the amounts of energy
and action of atomic light emission do not form
a continuous, but a discrete spectrum; they are quantized
physical variables. In fact, a smallest amount of action
exists, the quantum of action, given by Planck's
constant, and all measured amounts of action
of atomic light emission are integer multiples of
this quantum of action. Whitehead conceived of electrons and
protons as composed of standing electromagnetic waves of
which the action is also always a multiple of the
quantum of action. And as he conceived of the atomic energy
emission as a transformation of these standing into traveling
electromagnetic waves, it became clear to him that
this emission actually involves the decay of
a primate, and that the loss of the
primate's quantized energy and action explains why the
emitted light has similarly quantized energy and
action.
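The quantitative relations in this paragraph can be written compactly; this is a modern gloss on the standard old-quantum-theory formulas, not Whitehead's own notation:

```latex
% Action A of an emitted light wave: energy E times period T,
% where the period is the inverse of the frequency \nu:
\[
  A \;=\; E \cdot T \;=\; \frac{E}{\nu}
\]
% Quantization: every measured action of atomic light emission is an
% integer multiple of Planck's constant h (the quantum of action):
\[
  A \;=\; n h \quad (n = 1, 2, 3, \ldots)
  \qquad\Longrightarrow\qquad
  E \;=\; n h \nu
\]
```

For \(n = 1\) the last relation is Planck's familiar \(E = h\nu\).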
Unlike his ATG, Whitehead's AQT soon became obsolete.
Experiments revealed that there are more elementary
particles than electrons and protons. And, of course, the
new quantum mechanics (of Schrödinger, Heisenberg, Dirac, Pauli,
and many others) emerged. As Whitehead focused on philosophy
when at Harvard, his knowledge of physics began to be out of
date. His own theory, however, showed some similarities with the new
quantum mechanics. Whitehead's standing electromagnetic waves, in a
sense, foreshadowed Schrödinger's wave mechanics. And
whereas the discovery of the quantum of action was the
"physicist's nightmare" (Whitehead 2021: 93), Whitehead was happy with
the increasing importance of the notion of 'action' in
physics. Action is an activity through time, and hence
confirms Whitehead's idea that processes, and
not instantaneous configurations of inactive masses, are
fundamental in nature. Moreover, since there is a minimum quantum
of action, Whitehead interpreted the equation, action
equals the product of energy and period (which is a duration), by
saying: "concentration of time increases energy" (2021: 93). This
statement of Whitehead, in a sense, foreshadowed Heisenberg's
uncertainty principle with respect to energy and time, which was
formulated two years later. As his AQT thus foreshadowed the new
quantum mechanics in two respects, one wonders what he thought of this
new theory when it emerged. Charles Hartshorne wrote that
Whitehead definitely saw "Heisenberg's famous article of 1927 on the
Uncertainty Principle" (2010: 28). Hartshorne was sure about this,
because it was he himself who showed it to Whitehead. But
there is (as yet) no evidence that Whitehead seriously studied
this paper in particular, or the new quantum mechanics in general.
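Whitehead's remark that "concentration of time increases energy" can be unpacked from the same equation for action; the comparison with Heisenberg's later relation is a modern gloss, not something Whitehead himself wrote down:

```latex
% From A = E * T, holding the action fixed at the quantum A = h:
\[
  E \;=\; \frac{h}{T}
\]
% so a shorter period T ("concentration of time") means a larger
% energy E. Compare Heisenberg's energy-time uncertainty relation,
% formulated two years later (1927):
\[
  \Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
\]
```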
Whitehead said to his students: "A definite creative act is looked on
as handing over to physics the ultimate unit" (2021: 41), where the
ultimate unit is Planck's constant, or, in other words, the quantum of
action. Clearly, Whitehead thought of the quantum of action
as an abstract aspect of his ultimate metaphysical building
blocks, his elementary creative acts or 'actual
occasions,' and his AQT, like the Maxwell-Lorentz theory of
electrodynamics and Whitehead's ATG, was still at the back of
his mind when he developed his metaphysics at Harvard. This may
well be the reason why his metaphysics--his process philosophy--has
been conceived by many authors as an interesting framework to
ontologically interpret the mathematical formalism of quantum
mechanics. And even authors who are not directly inspired by
Whitehead, often come up with interpretations of quantum
mechanics that are strikingly Whiteheadian. A good and recent example
is Carlo Rovelli's relational interpretation. Rovelli writes:
>
>
> In the world described by quantum mechanics there is no reality except
> in the relations between physical systems. It isn't things that
> enter into relations but, rather, relations that ground the notion of
> "thing". The world of quantum mechanics is not a world of
> objects: it is a world of events. Things are built by the happenings
> of elementary events: as the philosopher Nelson Goodman wrote in the
> 1950s, in a beautiful phrase, "An object is a monotonous
> process." A stone is a vibration of quanta that maintains its
> structure for a while, just as a marine wave maintains its identity
> for a while before melting again into the sea. ... We, like waves
> and like all objects, are a flux of events; we are processes, for a
> brief time monotonous ... (2017: 115-116)
>
>
>
And Rovelli adds that in the speculative world of quantum gravity:
>
>
> There is no longer space which *contains* the world, and no
> longer time *during the course of which* events occur. There
> are elementary processes ... continuously interact[ing] with each
> other. Just as a calm and clear Alpine lake is made up of a rapid
> dance of a myriad of minuscule water molecules, the illusion of being
> surrounded by continuous space and time is the product of a
> long-sighted vision of a dense swarming of elementary processes.
> (2017: 158)
>
>
>
## 4. Philosophy of Science
Whitehead's reconstruction of the space-time of the STR and his
ATG make clear (i) that his main methodological requirement in the
philosophy of science is that physical theories should cohere with our
intuitions of the relatedness of nature (of the relations of
extension, congruence, cogredience, causality, etc.), and (ii) that
his paradigm of what a theory of physics should be like is the
Maxwell-Lorentz theory of electrodynamics. And indeed, in his
philosophy of science, Whitehead rejects David Hume's
"sensationalist empiricism" (1929c [1985: 57]) and Isaac
Newton's "scientific materialism" (1925 [1967:
17]). Instead Whitehead promotes (i) a radical empiricist methodology,
which relies on our perception, not only of sense data (colors,
sounds, smells, etc.) but also of a manifold of natural relations, and
(ii) an electrodynamics-like worldview, in which the fundamental
concepts are no longer simply located substances or bits of matter,
but internally related processes and events.
"Modern physical science", Whitehead wrote,
>
>
> is the issue of a coordinated effort, sustained for more than three
> centuries, to understand those activities of Nature by reason of which
> the transitions of sense-perception occur. (1934 [2011: 65])
>
>
>
But according to Whitehead, Hume's sensationalist empiricism has
undermined the idea that our perception can reveal those activities,
and Newton's scientific materialism has failed to render his
formulae of motion and gravitation intelligible.
Whitehead was dissatisfied with Hume's reduction of perception
to sense perception because, as Hume discovered, pure sense perception
reveals a succession of spatial patterns of impressions of color,
sound, smell, etc. (a procession of forms of sense data), but it does
not reveal any causal relatedness to interpret it (any form of process
to render it intelligible). In fact, all "relatedness of
nature", and not only its causal relatedness, was
"demolished by Hume's youthful skepticism" (1922
[2004: 13]) and conceived as the outcome of mere psychological
association. Whitehead wrote:
>
>
> Sense-perception, for all its practical importance, is very
> superficial in its disclosure of the nature of things. ... My
> quarrel with [Hume] concerns [his] exclusive stress upon
> sense-perception for the provision of data respecting Nature.
> Sense-perception does not provide the data in terms of which we
> interpret it. (1934 [2011: 21])
>
>
>
Whitehead was also dissatisfied with Newton's scientific
materialism,
>
>
> which presupposes the ultimate fact of an irreducible brute matter, or
> material, spread through space in a flux of configurations. In itself
> such a material is senseless, valueless, purposeless. It just does
> what it does do, following a fixed routine imposed by external
> relations which do not spring from the nature of its being.
> (1925 [1967: 17])
>
>
>
Whitehead rejected Newton's conception of nature as the
succession of instants of spatial distribution of bits of matter for
two reasons. First: the concept of a "durationless"
instant, "without reference to any other instant", renders
unintelligible the concepts of "velocity at an instant"
and "momentum at an instant" as well as the equations of
motion involving these concepts (1934 [2011: 47]). Second: the concept
of self-sufficient and isolated bits of matter, having "the
property of simple location in space and time" (1925 [1967:
49]), cannot "give the slightest warrant for the law of
gravitation" that Newton postulated (1934 [2011: 34]). Whitehead
wrote:
>
>
> Newton's methodology for physics was an overwhelming success.
> But the forces which he introduced left Nature still without meaning
> or value. In the essence of a material body--in its mass, motion,
> and shape--there was no reason for the law of gravitation. (1934
> [2011: 23])
>
>
>
> There is merely a formula for succession. But there is an absence of
> understandable causation for that formula for that succession. (1934
> [2011: 53-54])
>
>
>
"Combining Newton and Hume", Whitehead summarized,
>
>
> we obtain a barren concept, namely, a field of perception devoid of
> any data for its own interpretation, and a system of interpretation
> devoid of any reason for the concurrence of its factors. (1934 [2011:
> 25])
>
>
>
"Two conclusions", Whitehead wrote,
>
>
> are now abundantly clear. One is that sense-perception omits any
> discrimination of the fundamental activities within Nature. ...
> The second conclusion is the failure of science to endow its formulae
> for activity with any meaning. (1934 [2011: 65])
>
>
>
The views of Newton and Hume, Whitehead continued, are "gravely
defective. They are right as far as they go. But they omit ...
our intuitive modes of understanding" (1934 [2011: 26]).
In Whitehead's eyes, however, the development of Maxwell's
theory of electromagnetism constituted an antidote to Newton's
scientific materialism, for it led him to conceive the whole universe
as "a field of force--or, in other words, a field of
incessant activity" (1934 [2011: 27]). The theory of
electromagnetism served Whitehead to overcome Newton's
"fallacy of simple location" (1925 [1967: 49]), that
is, the conception of nature as a universe of self-sufficient isolated
bits of matter. Indeed, we cannot say of an electromagnetic event that
it is
>
>
> here in space, and here in time, or here in space-time, in a perfectly
> definite sense which does not require for its explanation any
> reference to other regions of space-time. (1925 [1967: 49])
>
>
>
The theory of electromagnetism "involves the entire abandonment
of the notion that simple location is the primary way in which things
are involved in space-time" because it reveals that, "in a
certain sense, everything is everywhere at all times"
(1925 [1967: 91]). "Long ago", Whitehead wrote,
Faraday already remarked "that in a sense an electric charge is
everywhere", and:
>
>
> the modification of the electromagnetic field at every point of space
> at each instant owing to the past history of each electron is another
> way of stating the same fact. (1920 [1986: 148])
>
>
>
The lesson that Whitehead learned from the theory of electromagnetism
is unambiguous:
>
>
> The fundamental concepts are activity and process. ... The notion
> of self-sufficient isolation is not exemplified in modern physics.
> There are no essentially self-contained activities within limited
> regions. ... Nature is a theatre for the interrelations of
> activities. All things change, the activities and their
> interrelations. ... In the place of the procession of [spatial]
> forms (of externally related bits of matter), modern physics has
> substituted the notion of the forms of process. It has thus swept away
> space and matter, and has substituted the study of the internal
> relations within a complex state of activity. (1934 [2011:
> 35-36])
>
>
>
But overcoming Newton was insufficient for Whitehead because Hume
"has even robbed us of reason for believing that the past gives
any ground for expectation of the future" (1934 [2011: 65]).
According to Whitehead,
>
>
> science conceived as resting on mere sense-perception, with no other
> sources of observation, is bankrupt, so far as concerns its claims to
> self-sufficiency. (1934 [2011: 66])
>
>
>
In fact, science conceived as restricting itself to the sensationalist
methodology can find neither efficient nor final causality. It
"can find no creativity in Nature; it finds mere rules of
succession" (1934 [2011: 66]). "The reason for this
blindness", according to Whitehead, "lies in the fact that
such science only deals with half of the evidence provided by human
experience" (1934 [2011: 66]).
Contrary to Hume, Whitehead held that it is untrue to state that our
perception, in which sense perception is only one factor, discloses no
causal relatedness. Inspired by the radical empiricism of William
James and Henri Bergson, Whitehead gave a new analysis of perception.
According to Whitehead, our perception is a symbolic interplay of two
pure modes of perception, pure sense perception (which Whitehead
ultimately called "perception in the mode of presentational
immediacy"), and a more basic perception of causal relatedness
(which he called "perception in the mode of causal
efficacy"). According to Whitehead, taking into account the
whole of our perception instead of only pure sense perception, that
is, all perceptual data instead of only Hume's sense data,
implies also taking into account the other half of the evidence,
namely, our intuitions of the relatedness of nature, of "the
togetherness of things". He added:
>
>
> the togetherness of things involves some doctrine of mutual immanence.
> In some sense or other ... each happening is a factor in the
> nature of every other happening. (1934 [2011: 87])
>
>
>
Hume demolished the relatedness of nature; Whitehead restored it,
founded the "doctrine of causation ... on the doctrine of
immanence", and wrote: "Each occasion presupposes the
antecedent world as active in its own nature. ... This is the
doctrine of causation" (1934 [2011: 88-89]).
Whitehead also noticed that, in a sense, physicists are even more
reductionist than Hume. In practice they rely on sense data, but in
theory they abstract from most of the data of our five senses (sight,
hearing, smell, taste, and touch) to focus on the colorless,
soundless, odorless, and tasteless mathematical aspects of nature.
Consequently, in a worldview inspired not by the actual practices of
physicists, but by their theoretical speculations,
nature--methodologically stripped of its 'tertiary'
qualities (esthetical, ethical, and religious values)--is further
reduced to the scientific world of 'primary' qualities
(mathematical quantities and interconnections such as the amplitude,
length, and frequency of mathematical waves), and this scientific
world is bifurcated from the world of 'secondary'
qualities (colors, sounds, smells, etc.). Moreover, the former world
is supposed, ultimately, to fully explain the latter world (so that,
for example, colors end up as being nothing more than electromagnetic
wave-frequencies).
Whitehead spoke of the "bifurcation of nature into two systems
of reality" (1920 [1986: 30]) to denote the
strategy--originating with Galileo, Descartes, Boyle and
Locke--of bifurcating nature into the essential reality of
primary qualities and the non-essential reality of "psychic
additions" or secondary qualities, ultimately to be explained
away in terms of primary qualities. Whitehead sided with Berkeley in
arguing that the primary/secondary distinction is not tenable (1920
[1986: 43-44]), that all qualities are "in the same boat,
to sink or swim together" (1920 [1986: 148]), and that, for
example,
>
>
> the red glow of the sunset should be as much part of nature as are the
> molecules and electric waves by which men of science would explain the
> phenomenon. (1920 [1986: 29])
>
>
>
Whitehead described the philosophical outcome of the bifurcation of
nature as follows:
>
>
> The primary qualities are the essential qualities of substances whose
> spatio-temporal relationships constitute nature. ... The
> occurrences of nature are in some way apprehended by minds ...
> But the mind in apprehending also experiences sensations which,
> properly speaking, are qualities of the mind alone. These sensations
> are projected by the mind so as to clothe appropriate bodies in
> external nature. Thus the bodies are perceived as with qualities which
> in reality do not belong to them, qualities which in fact are purely
> the offspring of the mind. Thus nature gets credit which should in
> truth be reserved for ourselves: the rose for its scent: the
> nightingale for his song: and the sun for his radiance. The poets are
> entirely mistaken. They should address their lyrics to themselves, and
> should turn them into odes of self-congratulation on the excellency of
> the human mind. Nature is a dull affair, soundless, scentless,
> colourless; merely the hurrying of material, endlessly, meaninglessly.
> (1925 [1967: 54])
>
>
>
"The enormous success of the scientific abstractions",
Whitehead wrote, "has foisted onto philosophy the task of
accepting them as the most concrete rendering of fact" and, he
added:
>
>
> Thereby, modern philosophy has been ruined. It has oscillated in a
> complex manner between three extremes. There are the dualists, who
> accept matter and mind as on an equal basis, and the two varieties of
> monists, those who put mind inside matter, and those who put matter
> inside mind. But this juggling with abstractions can never overcome
> the inherent confusion introduced by the ascription of misplaced
> concreteness to the scientific scheme. (1925 [1967: 55])
>
>
>
Whitehead's alternative is fighting "the Fallacy of
Misplaced Concreteness"--the "error of mistaking the
abstract for the concrete"--because "this fallacy is
the occasion of great confusion in philosophy" (1925 [1967:
51]). The fallacy of misplaced concreteness is committed each time
abstractions are taken as concrete facts, and "more concrete
facts" are expressed "under the guise of very abstract
logical constructions" (1925 [1967: 50-51]). This
fallacy lies at the root of the modern philosophical confusions of
scientific materialism and progressive bifurcation of nature. Indeed,
the notion of simple location in Newton's scientific materialism
is an instance of the fallacy of misplaced concreteness--it
mistakes the abstraction of in essence unrelated bits of matter as the
most concrete reality from which to explain the relatedness of nature.
And the bifurcating idea that secondary qualities should be explained
in terms of primary qualities is also an instance of this
fallacy--it mistakes the mathematical abstractions of physics as
the most concrete and so-called primary reality from which to explain
the so-called secondary reality of colors, sounds, etc.
In light of the rise of electrodynamics, relativity, and quantum
mechanics, Whitehead challenged scientific materialism and the
bifurcation of nature "as being entirely unsuited to the
scientific situation at which we have now arrived", and he
clearly outlined the mission of philosophy as he saw it:
>
>
> I hold that philosophy is the critic of abstractions. Its function is
> the double one, first of harmonising them by assigning to them their
> right relative status as abstractions, and secondly of completing them
> by direct comparison with more concrete intuitions of the universe,
> and thereby promoting the formation of more complete schemes of
> thought. It is in respect to this comparison that the testimony of
> great poets is of such importance. Their survival is evidence that
> they express deep intuitions of mankind penetrating into what is
> universal in concrete fact. Philosophy is not one among the sciences
> with its own little scheme of abstractions which it works away at
> perfecting and improving. It is the survey of the sciences, with the
> special object of their harmony, and of their completion. It brings to
> this task, not only the evidence of the separate sciences, but also
> its own appeal to concrete experience. (1925 [1967: 87])
>
>
>
For more details on Whitehead's philosophy of science, cf.
Hammerschmidt 1947, Lawrence 1956, Bright 1958, Palter 1960, Mays
1977, Fitzgerald 1979, Plamondon 1979, Eastman & Keeton (eds)
2004, Bostock 2010, Athern 2011, Deroo & Leclercq (eds) 2011,
Henning et al. (eds) 2013, Segall 2013, McHenry 2015, Desmet 2016d,
Eastman & Epperson & Griffin (eds) 2016, Eastman 2020,
Lestienne 2020.
## 5. Philosophy of Education
While in London, Whitehead became involved in many practical aspects
of tertiary education, serving as President of the Mathematical
Association, Dean of the Faculty of Science and Chairman of the
Academic Council of the Senate at the University of London, Chairman
of the Delegacy for Goldsmiths' College, and several other
administrative posts. Many of his essays about education date from
this time and appear in his book, *The Aims of Education and Other
Essays* (1929a).
At its core, Whitehead's philosophy of education emphasizes the
idea that a good life is most profitably thought of as an educated or
civilized life, two terms which Whitehead often uses interchangeably.
As we think, we live. Thus it is only as we improve our thoughts that
we improve our lives. The result, says Whitehead, is that "There
is only one subject matter for education, and that is Life in all its
manifestations" (1929a: 10). This view in turn has corollaries
for both the content of education and its method of delivery.
(a) With regard to delivery, Whitehead emphasizes the importance of
remembering that a "pupil's mind is a growing organism
... it is not a box to be ruthlessly packed with alien
ideas" (1929a: 47). Instead, it is the purpose of education to
stimulate and guide each student's self-development. It is not
the job of the educator simply to insert into his students'
minds little chunks of knowledge.
Whitehead conceives of the student's educational process of
self-development as an organic and cyclic process in which each cycle
consists of three stages: first the stage of romance, then the stage
of precision, and finally, the stage of generalization. The first
stage is all about "free exploration, initiated by
wonder", the second about the disciplined "acquirement of
technique and detailed knowledge", and the third about
"the free application of what has been learned" (Lowe
1990: 61). These stages, continually recurring in cycles, determine
what Whitehead calls "The Rhythm of Education" (cf. 1929a:
24-44). In the context of mathematics, Whitehead's three
stages can be conceived of as the stage of undisciplined intuition,
the stage of logical reasoning, and the stage of logically guided
intuition. By skipping stage one, and never arriving at stage three,
bad math teachers deny students the major motivation to love
mathematics: the joy of pattern recognition.
That education does not involve inserting into the student's
mind little chunks of knowledge is clear from the description of
culture that Whitehead offers as the opening of the first and title
essay of *The Aims of Education*:
>
>
> Culture is activity of thought, and receptiveness of beauty and humane
> feeling. Scraps of information have nothing to do with it. (1929a:
> 1)
>
>
>
On the contrary, Whitehead writes,
>
>
> we must beware of what I call 'inert ideas'--that is
> to say, ideas that are merely received into the mind without being
> utilized, or tested, or thrown into fresh combinations, (1929a:
> 1-2)
>
>
>
and he holds that "education is the acquisition of the art of
the [interconnection and] utilization of knowledge" (1929a: 6),
and that ideas remain disconnected and non-utilized unless they are
related
>
>
> to that stream, compounded of sense perceptions, feelings, hopes,
> desires, and of mental activities adjusting thought to thought, which
> forms our life. (1929a: 4)
>
>
>
This point--the point where Whitehead links the art of education
to the stream of experience that forms our life--is the meeting
point of Whitehead's philosophy of education with his philosophy
of experience, which is also called 'process
philosophy'.
According to Whitehead's process philosophy, the stream of
experience that forms our life consists of occasions of experience,
each of which is a synthesis of many feelings having objective content
(what is felt) and subjective form (how it is felt); also, the
synthesis of feelings is not primarily controlled by their objective
content, but by their subjective form. According to Whitehead's
philosophy of education, the attempt to educate a person by merely
focusing on objective content--on inert ideas, scraps of
information, bare knowledge--while disregarding the subjective
form or emotional pattern of that person's experience can never
be successful. The art of education has to take into account the
subjective receptiveness and appreciation of beauty and human
greatness, the subjective emotions of interest, joy and adventure, and
"the ultimate motive power" (1929a: 62), that is, the
sense of importance, values and possibilities (cf. 1929a:
45-65).
(b) With regard to content, Whitehead holds that any adequate
education must include a literary component, a scientific component,
and a technical component.
According to Whitehead:
>
>
> Any serious fundamental change in the intellectual outlook of human
> society must necessarily be followed by an educational revolution.
> (1929a: 116)
>
>
>
In particular, the scientific revolution and the fundamental changes
it entailed in the seventeenth and subsequent centuries have been
followed by an educational revolution that was still ongoing in the
twentieth century. In 1912, Whitehead wrote:
>
>
> We are, in fact, in the midst of an educational revolution caused by
> the dying away of the classical impulse which has dominated European
> thought since the time of the Renaissance. ... What I mean is the
> loss of that sustained reference to classical literature for the sake
> of finding in it the expression of our best thoughts on all subjects.
> ... There are three fundamental changes ... Science now
> enters into the very texture of our thoughts ... Again,
> mechanical inventions, which are the product of science, by altering
> the material possibilities of life, have transformed our industrial
> system, and thus have changed the very structure of Society. Finally,
> the idea of the World now means to us the whole round world of human
> affairs, from the revolutions of China to those of Peru. ... The
> total result of these changes is that the supreme merit of immediate
> relevance to the full compass of modern life has been lost to
> classical literature. (1947 [1968: 175-176])
>
>
>
Whitehead listed the scientific and industrial revolutions as well as
globalization as the major causes of the educational reforms of the
nineteenth and twentieth centuries. These fundamental changes indeed
implied new standards for what counts as genuine knowledge. However,
together with these new standards emerged a romantic anxiety--the
anxiety that the new standards of genuine knowledge, education, and
living might impoverish human experience and damage both individual
and social wellbeing. Hence arose the bifurcation of culture into the
culture of "natural scientists" and the culture of
"literary intellectuals" (cf. Snow 1959), and the many
associated debates in the context of various educational
reforms--for example, the 1880s debate in Victorian England, when
Whitehead was a Cambridge student, between T. H. Huxley, an outspoken
champion of science, defending the claims of modern scientific
education, and Matthew Arnold, a leading man of letters, defending the
claims of classical literary education.
As for Whitehead, in whom the scientific and the romantic spirit
merged, one cannot say that he sided with either Huxley or Arnold. He
took his distance from those who, motivated by the idea that the
sciences embody the ultimate modes of thought, sided with Huxley, but
also from those who, motivated by conservatism, that is, by an
anachronistic longing for a highly educated upper class and an elitist
horror of educational democratization, sided with Arnold (cf. 1947
[1968: 23-24]). Besides refusing to take a stance in the debate over
which mode of thought, the scientific or the literary, is ultimate,
and hence rejecting the antithesis between scientific and literary
education, Whitehead also rejected the antithesis between thought and
action (cf. 1947 [1968: 172]), and hence between a liberal education,
that is, one mainly intellectual and theoretical, and a technical
education, that is, one mainly manual and practical (cf. 1929a: 66-92).
In other words, according to Whitehead, we should identify three
cultures rather than two and, moreover, refrain from promoting any one
of the three at the expense of the other two. He writes:
>
>
> My point is, that no course of study can claim any position of ideal
> completeness. Nor are the omitted factors of subordinate importance.
> The insistence in the Platonic culture on disinterested intellectual
> appreciation is a psychological error. Action and our implication in
> the transition of events amid the inevitable bond of cause to effect
> are fundamental. An education which strives to divorce intellectual or
> aesthetic life from these fundamental facts carries with it the
> decadence of civilisation. (1929a: 73)
>
>
>
> Disinterested scientific curiosity is a passion for an ordered
> intellectual vision of the connection of events. But the ...
> intervention of action even in abstract science is often overlooked.
> No man of science wants merely to know. He acquires knowledge to
> appease his passion for discovery. He does not discover in order to
> know, he knows in order to discover. The pleasure which art and
> sciences can give to toil is the enjoyment which arises from
> successfully directed intention. (1929a: 74)
>
>
>
> The antithesis between a technical and a liberal education is
> fallacious. There can be no technical education which is not liberal,
> and no liberal education which is not technical: that is, no education
> which does not import both technique and intellectual vision. (1929a:
> 74)
>
>
>
> There are three main methods which are required in a national system
> of education, namely, the literary curriculum, the scientific
> curriculum, the technical curriculum. But each of these curricula
> should include the other two ... each of these sides ...
> should be illuminated by the others. (1929a: 75)
>
>
>
For more details and an extensive bibliography on Whitehead's
philosophy of education, cf. Part VI of Volume 1 of the *Handbook
of Whiteheadian Process Thought* (Weber & Desmond 2008:
185-214). For some recent Whiteheadian contributions to
*Educating for an Ecological Civilization*, see Ford &
Rowe (eds.) 2017.
## 6. Metaphysics
Facing mandatory retirement in London, and upon being offered an
appointment at Harvard, Whitehead moved to the United States in 1924.
Given his prior training in mathematics, it was sometimes joked that
the first philosophy lectures he ever attended were those he himself
delivered in his new role as Professor of Philosophy. As Russell
comments, "In England, Whitehead was regarded only as a
mathematician, and it was left to America to discover him as a
philosopher" (1956: 100).
A year after his arrival, he delivered Harvard's prestigious
Lowell Lectures. The lectures formed the basis for *Science and the
Modern World* (1925). The 1927/28 Gifford Lectures at the
University of Edinburgh followed shortly afterwards and resulted in
the publication of Whitehead's most comprehensive (but difficult
to penetrate) metaphysical work, *Process and Reality* (1929c).
And in the Preface of the third major work composing his mature
metaphysical system, *Adventures of Ideas* (1933), Whitehead
stated:
>
>
> The three books--*Science and The Modern World, Process and
> Reality, Adventures of Ideas*--are an endeavor to express a
> way of understanding the nature of things, and to point out how that
> way of understanding is illustrated by ... human experience. Each
> book can be read separately; but they supplement each other's
> omissions or compressions. (1933 [1967: vii])
>
>
>
Whitehead's philosophy of science "has nothing to do with
ethics or theology or the theory of aesthetics" (1922 [2004:
4]). Whitehead in his London writings was "excluding any
reference to moral or aesthetic values", even though he was
already aware that "the values of nature are perhaps the key to
the metaphysical synthesis of existence" (1920 [1986: 5]).
Whitehead's metaphysics, on the contrary, takes into account
not only science, but also art, morals, and religion. Whitehead in his
Harvard writings did not exclude anything, but aimed at a
"synoptic vision" (1929c [1985: 5]) to which values are
indeed the key.
In his earlier philosophy of science, Whitehead revolted against the
bifurcation of nature into the worlds of primary and secondary
qualities, and he promoted the harmonization of the abstractions of
mathematical physics with those of Hume's sensationalist
empiricism, as well as the inclusion of more concrete intuitions
offered by our perception--our intuitions of causality,
extension, cogredience, congruence, color, sound, smell, etc. Closely
linked to this completion of the scientific scheme of thought,
Whitehead developed a new scientific ontology and a new theory of
perception. His scientific ontology is one of internally related
events (instead of merely externally related bits of matter). His
theory of perception (cf. *Symbolism: its Meaning and Effect*)
holds that our perception is always perception in the mixed mode of
symbolic reference, which usually involves a symbolic reference of
what is given in the pure mode of presentational immediacy to what is
given in the pure mode of causal efficacy:
>
>
> symbolic reference, though in complex human experience it works both
> ways, is chiefly to be thought of as the elucidation of percepta in
> the mode of causal efficacy by ... percepta in the mode of
> presentational immediacy. (1929c [1985: 178])
>
>
>
According to Whitehead, the failure to lay due emphasis on the
perceptual mode of causal efficacy implies the danger of reducing the
scientific method to Hume's sensationalist empiricism, and
ultimately lies at the basis of the Humean failure to acknowledge the
relatedness of nature, especially the causal relatedness of nature.
Indeed, "the notion of causation arose because mankind lives
amid experiences in the mode of causal efficacy" (1929c [1985:
175]). According to Whitehead, "symbolic reference is the
interpretative element in human experience" (1929c [1985: 173]),
and "the failure to lay due emphasis on symbolic reference
... has reduced the notion of 'meaning' to a
mystery" (1929c [1985: 168]), and ultimately lies at the basis
of Newton's failure to give meaning to his formulae of motion
and gravitation.
In his later metaphysics, Whitehead revolted against the bifurcation
of the world into the objective world of facts (as studied by science,
even a completed science, and one not limited to physics, but
stretching from physics to biology to psychology) and the subjective
world of values (aesthetic, ethic, and religious), and he promoted the
harmonization of the abstractions of science with those of art,
morals, and religion, as well as the inclusion of more concrete
intuitions offered by our experience--stretching from our
mathematical and physical intuitions to our poetic and mystic
intuitions. Closely linked to this completion of the metaphysical
scheme of thought (cf. Part I of *Process and Reality*),
Whitehead refined his earlier ontology, and generalized his earlier
theory of perception into a theory of feelings. Whitehead's
ultimate ontology--the ontology of 'the philosophy of
organism' or 'process philosophy'--is one of
internally related organism-like elementary processes (called
'actual occasions' or 'actual entities') in
terms of which he could understand both lifeless nature and nature
alive, both matter and mind, both science and
religion--"Philosophy", Whitehead even writes,
"attains its chief importance by fusing the two, namely,
religion and science, into one rational scheme of thought"
(1929c [1985: 15]). His theory of feelings (cf. part III of
*Process and Reality*) claims that not only our perception, but
our experience in general is a stream of elementary processes of
concrescence (growing together) of many feelings into
one--"the many become one, and are increased with
one" (1929c [1985: 21])--and that the process of
concrescence is not primarily driven by the objective content of the
feelings involved (their factuality), but by their subjective form
(their valuation, cf. 1929c [1985: 240]).
Whitehead's ontology cannot be disjoined from his theory of
feelings. The actual occasions ontologically constituting our
experience *are* the elementary processes of concrescence of
feelings constituting the stream of our experience, and they throw
light on the *what* and the *how* of all actual
occasions, including those that constitute lifeless material things.
This amounts to the panexperientialist claim that the intrinsically
related elementary constituents of all things in the universe, from
stones to human beings, are experiential. Whitehead writes:
"each actual entity is a throb of experience" (1929c
[1985: 190]) and "apart from the experiences of subjects there
is nothing, nothing, nothing, bare nothingness" (1929c [1985:
167])--an outrageous claim according to some, even when it is
made clear that panexperientialism is not the same as panpsychism,
because "consciousness presupposes experience, and not
experience consciousness" (1929c [1985: 53]).
The relational event ontology that Whitehead developed in his London
period might serve to develop a relational interpretation of quantum
mechanics, such as Rovelli's (cf. supra) or one of the many
proposed by Whitehead scholars (cf. Stapp 1993 and 2007, Malin 2001,
Hattich 2004, Epperson 2004, Epperson & Zafiris 2013). But
then this ontology has to take into account the fact that quantum
mechanics suggests that reality is not only relational, but also
granular (the results of measuring its changes do not form continuous
spectra, but spectra of discrete quanta) and indeterminist (physicists
cannot predetermine the result of a measurement; they can only
calculate for each of the relevant discrete quanta, that is, for each
of the possible results of the measurement, the probability that it
becomes the actual result).
In Whitehead's London writings, there is no "atomic structure"
of events (1920 [1986: 59]). The events constituting the passage
of nature, and underlying the abstraction of a continuous
space-time, were continuous events. In his first academic year at
Harvard, however, Whitehead changed his mind. Henceforth he
conceived of the passage of nature as atomic, as constituted by atomic
events. In *The Emergence of Whitehead's Metaphysics:
1925-1929*, Lewis Ford had already given an account of
the change in Whitehead's thought from continuous to atomic
becoming in the spring of 1925 (1984: 51-65). In "From Physics to
Philosophy, and from Continuity to Atomicity" (Henning &
Petek 2020: 132-153), Ronny Desmet confirmed Ford's account
by means of a close reading of *The Harvard Lectures
of Alfred North Whitehead 1924-1925*. Moreover, Desmet made
clear that the main reason for Whitehead's change of mind was not
the rise of quantum mechanics, nor the fact that the atomicity of
becoming provided a solution to Zeno's paradox of becoming, nor
even that it was needed to include contingency and freedom in his
philosophy. The main reason was that this atomicity prevented his
philosophy from contradicting our immediate experience of the
irreversibility of time. But whatever the reason, the introduction of
the atomicity of becoming into Whitehead's philosophy implied the
granularity and indeterminism of reality that are needed, next to its
relationality, to develop a feasible interpretation of quantum
mechanics within the frame of this philosophy.
In Whitehead's *Process and Reality*, "the
mysterious quanta of energy have made their appearance" (1929c
[1985: 78]), "the ultimate metaphysical truth is atomism"
(1929c [1985: 35]), and events are seen as networks (or
'societies') of elementary and atomic events, called
'actual occasions' or 'actual entities.'
Whitehead writes:
>
>
> I shall use the term 'event' in the more general sense of
> a nexus of actual occasions ... An actual occasion is the
> limiting type of an event with only one member. (1929c [1985: 73])
>
>
>
Each actual occasion determines a quantum of
extension--"the atomized quantum of extension correlative
to the actual entity" (1929c [1985: 73])--and it is by
means of the relation of extensive connection in the class of the
regions constituted by these quanta that Whitehead attempted to
improve upon his earlier construction of space-time (cf. Part IV of
*Process and Reality*).
The atomicity of events in quantum mechanics dovetails with the
atomicity of the stream of experience as conceived by William James,
hence reinforcing Whitehead's claim that each actual entity is
an elementary process of experience. Whitehead writes:
>
>
> The authority of William James can be quoted in support of this
> conclusion. He writes: "Either your experience is of no content,
> of no change, or it is of a perceptible amount of content or change.
> Your acquaintance with reality grows literally by buds or drops of
> perception. Intellectually and on reflection you can divide these into
> components, but as immediately given, they come totally or not at
> all". (1929c [1985: 68])
>
>
>
Whitehead's conclusion reads: "actual entities are drops
of experience, complex and interdependent" (1929c [1985: 18]),
and he expresses that reality grows by drops, which together form the
extensive continuum, by writing: "extensiveness becomes, but
'becoming' is not itself extensive", and "there is a
becoming of continuity, but no continuity of becoming" (1929c
[1985: 35]).
In Whitehead's London writings, he aims at logically
reconstructing Einstein's STR and GTR, which are both
deterministic theories of physics, and his notion of causality (that
each occasion presupposes the antecedent world as active in its own
nature) does not seem to leave much room for any creative
self-determination. In his Harvard writings, however, Whitehead
considers deterministic interaction as an abstract limit in
*some* circumstances of the creative interaction that governs
the becoming of actual entities in *all* circumstances, and he
makes clear that his notion of causality includes both determination
by the antecedent world (efficient causation of past actual occasions)
and self-determination (final causation by the actual occasion in the
process of becoming). Whitehead writes:
>
>
> An actual entity is at once the product of the efficient past, and is
> also, in Spinoza's phrase, *causa sui*. Every philosophy
> recognizes, in some form or other, this factor of self-causation.
> (1929c [1985: 150])
>
>
>
Again: "Self-realization is the ultimate fact of facts. An
actuality is self-realizing, and whatever is self-realizing is an
actuality" (1929a: 222).
Introducing indeterminism also means introducing potentiality next to
actuality, and indeed, Whitehead introduces pure potentials, also
called 'eternal objects,' next to actual occasions:
>
>
> The eternal objects are the pure potentials of the universe, and the
> actual entities differ from each other in their realization of
> potentials. (1929c [1985: 149])
>
>
>
Eternal objects can qualify (characterize) the objective content and
the subjective form of the feelings that constitute actual entities.
Eternal objects of the *objective* species are pure
mathematical patterns: "Eternal objects of the objective species
are the mathematical Platonic forms" (1929c [1985: 291]). An
eternal object of the objective species can only qualify the objective
content of a feeling, and "never be an element in the
definiteness of a subjective form" (idem). Eternal objects of
the *subjective* species, on the other hand, include sense data
and values.
>
>
> A member of the subjective species is, in its primary character, an
> element in the definiteness of the subjective form of a feeling. It is
> a determinate way in which a feeling can feel. (idem)
>
>
>
But it can also become an eternal object contributing to the
definiteness of the objective content of a feeling, for example, when
a smelly feeling gives rise to a feeling of that smell, or when an
emotionally red feeling is felt by another feeling, and red, an
element of the subjective form of the first feeling, becomes an
element of the objective content of the second feeling.
Whitehead's concept of self-determination cannot be disjoined
from his idea that each actual entity is an elementary process of
experience, and hence, according to Whitehead, it is relevant both at
the lower level of indeterminist physical interactions and at the
higher level of free human interactions. Indeed, each actual entity is
a concrescence of feelings of the antecedent world, which do not only
have objective content, but also subjective form, and as this
concrescence is not only determined by the objective content (by
*what* is felt), but also by the subjective form (by
*how* it is felt), it is not only determined by the antecedent
world that is felt, but also by *how* it is felt. In other
words, each actual entity has to take into account its past, but that
past only conditions and does not completely determine *how*
the actual entity will take it into account, and "*how*
an actual entity *becomes* constitutes *what* that
actual entity *is*" (1929c [1985: 23]).
How does this relate to eternal objects? *How* an actual entity
takes into account its antecedent world involves "the
realization of eternal objects [or pure potentials] in the
constitution of the actual entity in question" (1929c [1985:
149]), and this is partly decided by the actual entity itself. In
fact, "*actuality* is the decision amid
*potentiality*" (1929c [1985: 43]). Another way of
stating the same is that "the subjective form ... has the
character of a valuation" and
>
>
> according as the valuation is a 'valuation up' or 'a
> valuation down,' the importance of the eternal object [or pure
> potentials] is enhanced, or attenuated. (1929c [1985:
> 240-241])
>
>
>
According to Whitehead, self-determination gives rise to the
probabilistic laws of science as well as human freedom. We cannot
decide what the causes are of our present moment of experience,
but--to a certain extent--we can decide how we take them
into account. In other words, we cannot change what happens to us, but
we can choose how we take it. Because our inner life is constituted
not only by what we feel, but also by how we feel what we feel, not
only by objective content, but also by subjective form, Whitehead
argues that outer compulsion and efficient causation do not have the
last word in our becoming; inner self-determination and final
causation do.
Whitehead completes his metaphysics by introducing God (cf. Part V of
*Process and Reality*) as one of the elements to further
understand self-determination (and that it does not result in chaos or
mere repetition, but promotes order and novelty) and final causation
(and that it ultimately aims at "intensity of feeling"
(1929c [1985: 27]) or "depth of satisfaction" (1929c
[1985: 105])). According to Whitehead: "God is the organ of
novelty" and order (1929c [1985: 67]);
>
>
> Apart from the intervention of God, there could be nothing new in the
> world, and no order in the world. The course of creation would be a
> dead level of ineffectiveness, with all balance and intensity
> progressively excluded by the cross currents of incompatibility;
> (1929c [1985: 247])
>
>
>
and "God's purpose in the creative advance is the
evocation of intensities" (1929c [1985: 105]). Actually, this
last quote from *Process and Reality* is the equivalent of an
earlier quote from *Religion in the Making*--"The
purpose of God is the attainment of value in the world" (1926
[1996: 100])--and a later quote from *Adventures of
Ideas*--"The teleology of the Universe is directed to
the production of Beauty" (1933 [1967: 265]). Each actual
occasion does not only feel its antecedent world (its past), but God
as well, and it is the feeling of God which constitutes the initial
aim for the actual occasion's becoming--"His
[God's] tenderness is directed towards each actual occasion, as
it arises" (1929c [1985: 105]). Again, however, the actual
occasion is "finally responsible for the decision by which any
lure for feeling is admitted to efficiency" (1929c [1985: 88]),
even if that lure is divine. In other words, each actual occasion is
"conditioned, though not determined, by an initial subjective
aim supplied by the ground of all order and originality" (1929c
[1985: 108]).
For more details on Whitehead's metaphysics, cf. the books
listed in section 1 as well as Emmet 1932, Johnson 1952, Eisendrath
1971, Lango 1972, Connelly 1981, Ross 1983, Ford 1984, Nobo 1986,
McHenry 1992, Jones 1998, Basile 2009, Nordsieck 2015,
Debaise 2017b, Dombrowski 2017, Stengers 2020, and Raud 2021.
## 7. Religion
Because Whitehead's process philosophy gave rise to the movement of
process theology, many philosophers assume that his take on religion
was entirely positive. This commonplace is wrong. Whitehead wrote:
>
>
> Religion is by no means necessarily good. It may be very evil.
> (1926 [1996: 17])
>
>
>
> In considering religion, we should not be obsessed by the idea of its
> necessary goodness. This is a dangerous delusion. (1926 [1996:
> 18])
>
>
>
> Indeed history, down to the present day, is a melancholy record of the
> horrors which can attend religion: human sacrifice, and in particular,
> the slaughter of children, cannibalism, sensual orgies, abject
> superstition, hatred as between races, the maintenance of degrading
> customs, hysteria, bigotry, can all be laid at its charge. Religion is
> the last refuge of human savagery. The uncritical association of
> religion with goodness is directly negatived by plain facts.
> (1926 [1996: 37])
>
>
>
This being said, Whitehead didn't hold that religion is
*merely* negative. To him, religion can be "positive or
negative, good or bad" (1926 [1996: 17]). So after
highlighting that the necessary goodness of religion is a dangerous
delusion in *Religion in the Making*, Whitehead abruptly adds:
"The point to notice is its transcendent importance"
(1926 [1996: 18]). In *Science and the Modern World*,
Whitehead expresses this transcendent importance of religion as
follows:
>
>
> Religion is the vision of something which stands beyond, behind, and
> within, the passing flux of immediate things; something which is real,
> and yet waiting to be realized; something which is a remote
> possibility, and yet the greatest of present facts; something that
> gives meaning to all that passes, and yet eludes all apprehension;
> something whose possession is the final good, and yet is beyond all
> reach; something which is the ultimate ideal, and the hopeless quest.
> (1925 [1967: 191-192])
>
>
>
And after pointing out that religion is the last refuge of human
savagery in *Religion in the Making*, Whitehead abruptly adds:
"Religion can be, and has been, the main instrument for
progress" (1926 [1996: 37-38]). In *Science and
the Modern World* this message reads:
>
>
> Religion has emerged into human experience mixed with the crudest
> fantasies of barbaric imagination. Gradually, slowly, steadily the
> vision recurs in history under nobler form and with clearer
> expression. It is the one element in human experience which
> persistently shows an upward trend. It fades and then recurs. But when
> it renews its force, it recurs with an added richness and purity of
> content. The fact of the religious vision, and its history of
> persistent expansion, is our one ground for optimism.
> (1925 [1967: 192])
>
>
>
With respect to the relationship between science and religion,
Whitehead's view clearly differs from Stephen Jay Gould's
view that religion and science do not overlap. Gould wrote:
>
>
> The lack of conflict between science and religion arises from a lack
> of overlap between their respective domains of professional
> expertise--science in the empirical constitution of the universe,
> and religion in the search for proper ethical values and the spiritual
> meaning of our lives. (1997)
>
>
>
Whitehead, on the contrary, wrote: "You cannot shelter theology
from science, or science from theology" (1926 [1996: 79]).
And: "The *conflict* between science and religion is what
naturally occurs in our minds when we think of this subject"
(1925 [1967: 181]).
However, Whitehead did not agree with those who hold that the ideal
solution of the science-religion conflict is the complete annihilation
of religion. Whitehead, on the contrary, held that we should aim at
the integration of science and religion, and turn the impoverishing
opposition between the two into an enriching contrast. According to
Whitehead, both religion and science are important, and he wrote:
>
>
> When we consider what religion is for mankind, and what science is, it
> is no exaggeration to say that the future course of history depends
> upon the decision of this generation as to the relation between them.
> (1925 [1967: 181])
>
>
>
Whitehead never sided with those who, in the name of science, oppose
religion with a misplaced and dehumanizing rhetoric of disenchantment,
nor with those who, in the name of religion, oppose science with a
misplaced and dehumanizing exaltation of existent religious dogmas,
codes of behavior, institutions, rituals, etc. As Whitehead wrote:
"There is the hysteria of depreciation, and there is the
opposite hysteria which dehumanizes in order to exalt" (1927
[1985: 91]). Whitehead, on the contrary, urged both scientific and
religious leaders to observe "the utmost toleration of variety
of opinion" (1925 [1967: 187]) as well as the following
advice:
>
>
> Every age produces people with clear logical intellects, and with the
> most praiseworthy grip of the importance of some sphere of human
> experience, who have elaborated, or inherited, a scheme of thought
> which exactly fits those experiences which claim their interest. Such
> people are apt resolutely to ignore, or to explain away, all evidence
> which confuses their scheme with contradictory instances. What they
> cannot fit in is for them nonsense. An unflinching determination to
> take the whole evidence into account is the only method of
> preservation against the fluctuating extremes of fashionable opinion.
> This advice seems so easy, and is in fact so difficult to follow
> (1925 [1967: 187]).
>
>
>
Whitehead's advice to take the whole evidence into account
implies taking into account the inner life of religion, not only
its external life:
>
>
> Life is an internal fact for its own sake, before it is an external
> fact relating itself to others. The conduct of external life is
> conditioned by environment, but it receives its final quality, on
> which its worth depends, from the internal life which is the
> self-realization of existence. Religion is the art and the theory of
> the internal life of man, so far as it depends on the man himself and
> on what is permanent in the nature of things.
>
>
>
> This doctrine is the direct negation of the theory that religion is
> primarily a social fact. Social facts are of great importance to
> religion, because there is no such thing as absolutely independent
> existence. You cannot abstract society from man; most psychology is
> herd-psychology. But all collective emotions leave untouched the awful
> ultimate fact, which is the human being, consciously alone with
> itself, for its own sake.
>
>
>
> Religion is what the individual does with his own solitariness.
> (1926 [1996: 15-16])
>
>
>
Whitehead's advice also implies the challenge of continually
reshaping the outer life of religion in accord with scientific
developments, while remaining faithful to its inner life. When taking
science into account, religion runs the risk of collapsing. Indeed,
while reshaping its outer life, religion can only avoid implosion by
remaining faithful to its inner life. "Religions commit
suicide", according to Whitehead, when they do not find
"their inspirations ... in the primary expressions of the
intuitions of the finest types of religious lives"
(1926 [1996: 144]). And he writes:
>
>
> Religion, therefore, while in the framing of dogmas it must admit
> modifications from the complete circle of our knowledge, still brings
> its own contribution of immediate experience. (1926 [1996:
> 79-80])
>
>
>
On the other hand, when religion shelters itself from the complete
circle of knowledge, it also faces "decay" and, Whitehead
adds, "the Church will perish unless it opens its window"
(1926 [1996: 146]). So there really is no alternative. But that
does not render the task at hand any easier.
Whitehead lists two necessary, but not sufficient, requirements for
religious leaders to reshape, again and again, the outer expressions
of their inner experiences: First, they should stop exaggerating the
importance of the outer life of religion. Whitehead writes:
>
>
> Collective enthusiasms, revivals, institutions, churches, rituals,
> bibles, codes of behavior, are the trappings of religion, its passing
> forms. They may be useful, or harmful; they may be authoritatively
> ordained, or merely temporary expedients. But the end of religion is
> beyond all this. (1926 [1996: 17])
>
>
>
Secondly, they should learn from scientists how to deal with continual
revision. Whitehead writes:
>
>
> When Darwin or Einstein proclaim theories which modify our ideas, it
> is a triumph for science. We do not go about saying that there is
> another defeat for science, because its old ideas have been abandoned.
> We know that another step of scientific insight has been gained.
>
>
>
> Religion will not regain its old power until it can face change in the
> same spirit as does science. Its principles may be eternal, but the
> expression of those principles requires continual development. This
> evolution of religion is in the main a disengagement of its own proper
> ideas in terms of the imaginative picture of the world entertained in
> previous ages. Such a release from the bonds of imperfect science is
> all to the good. (1925 [1967: 188-189])
>
>
>
In this respect, Whitehead offers the following example:
>
>
> The clash between religion and science, which has relegated the earth
> to the position of a second-rate planet attached to a second-rate sun,
> has been greatly to the benefit of the spirituality of religion by
> dispersing [a number of] medieval fancies. (1925 [1967: 190])
>
>
>
On the other hand, Whitehead is well aware that religion more often
fails than succeeds in this respect, and he writes, for example, that
both
>
>
> Christianity and Buddhism ... have suffered from the rise of
> ... science, because neither of them had ... the requisite
> flexibility of adaptation. (1926 [1996: 146])
>
>
>
If the condition of mutual tolerance is satisfied, then, according to
Whitehead: "A clash of doctrines is not a disaster--it is
an opportunity" (1925 [1967: 186]). In other words, if this
condition is satisfied, then the clash between religion and science is
an opportunity on the path toward their integration or, as Whitehead
puts it:
>
>
> The clash is a sign that there are wider truths and finer perspectives
> within which a reconciliation of a deeper religion and a more subtle
> science will be found. (1925 [1967: 185])
>
>
>
According to Whitehead, the task of philosophy is "to absorb
into one system all sources of experience" (1926 [1996:
149]), including the intuitions at the basis of both science and
religion, and in *Religion in the Making*, he expresses the
basic religious intuition as follows:
>
>
> There is a quality of life which lies always beyond the mere fact of
> life; and when we include the quality in the fact, there is still
> omitted the quality of the quality. It is not true that the finer
> quality is the direct associate of obvious happiness or obvious
> pleasure. Religion is the direct apprehension that, beyond such
> happiness and such pleasure remains the function of what is actual and
> passing, that it contributes its quality as an immortal fact to the
> order which informs the world. (1926 [1996: 80])
>
>
>
The first aspect of this dual intuition that "our existence is
more than a succession of bare facts" (idem) is that the quality
or value of each of the successive occasions of life derives from a
finer quality or value, which lies beyond the mere facts of life, and
even beyond obvious happiness and pleasure, namely, the finer quality
or value of which life is informed by God. The second aspect is that
each of the successive occasions of life contributes its quality or
value as an immortal fact to God.
In *Process and Reality*, Whitehead absorbed this dual
religious intuition in terms of the bipolar--primordial and
consequent--nature of God.
God viewed as primordial does not determine the becoming of each
actual occasion, but conditions it (cf. supra--the initial
subjective aim). He does not force, but tenderly persuades each actual
occasion to actualize--from "the absolute wealth of
> potentiality" (1929c [1985: 343])--value-potentials relevant
for that particular becoming. "God", according to
Whitehead, "is the poet of the world, with tender patience
leading it by his vision of truth, beauty, and goodness" (1929c
[1985: 346]).
"The ultimate evil in the temporal world", Whitehead
writes,
>
>
> lies in the fact that the past fades, that time is a "perpetual
> perishing." ... In the temporal world, it is the empirical
> fact that process entails loss. (1929c [1985: 340])
>
>
>
In other words, from a merely factual point of view, "human life
is a flash of occasional enjoyments lighting up a mass of pain and
misery, a bagatelle of transient experience" (1925 [1967:
192]). According to Whitehead, however, this is not the whole story.
On 8 April 1928, while preparing the Gifford Lectures that became
*Process and Reality*, Whitehead wrote to Rosalind Greene:
>
>
> I am working at my Giffords. The problem of problems which bothers me,
> is the real transitoriness of things--and yet!!--I am
> equally convinced that the great side of things is weaving something
> ageless and immortal: something in which personalities retain the
> wonder of their radiance--and the fluff sinks into utter
> triviality. But I cannot express it at all--no system of words
> seems up to the job. (Unpublished letter archived by the Whitehead
> Research Project)
>
>
>
Whitehead's attempt to express it in *Process and
Reality* reads:
>
>
> There is another side to the nature of God which cannot be omitted.
> ... God, as well as being primordial, is also consequent ...
> God is dipolar. (1929c [1985: 345])
>
>
>
> The consequent nature of God is his judgment on the world. He saves
> the world as it passes into the immediacy of his own life. It is the
> judgment of a tenderness which loses nothing that can be saved. (1929c
> [1985: 346])
>
>
>
> The consequent nature of God is the fluent world become
> 'everlasting' ... in God. (1929c [1985: 347])
>
>
>
Whitehead's dual description of God as *tender* persuader
and *tender* savior reveals his affinity with "the
Galilean origin of Christianity" (1929c [1985: 343]). Indeed,
his
>
>
> theistic philosophy ... does not emphasize the ruling Caesar, or
> the ruthless moralist, or the unmoved mover. It dwells upon the tender
> elements in the world, which slowly and in quietness operate by love.
> (idem)
>
>
>
One of the major reasons why Whitehead's process philosophy is
popular among theologians, and gave rise to process theology, is the
fact that it helps to overcome the doctrine of an omnipotent God
creating everything out of nothing. This *creatio ex nihilo*
doctrine implies God's responsibility for everything that is
evil, and also that God is the only ultimate reality. In other words,
it prevents the reconciliation of divine love and human suffering as
well as the reconciliation of the various religious traditions, for
example, theistic Christianity and nontheistic Buddhism. In yet other
words, the *creatio ex nihilo* doctrine is a stumbling block
for theologians involved in theodicy or interreligious dialogue.
Contrary to it, Whitehead's process philosophy holds that there
are three ultimate (but inseparable) aspects of total reality: God
(the divine actual entity), the world (the universe of all finite
actual occasions), and the creativity (the twofold power to exert
efficient and final causation) that God and all finite actual
occasions embody. The distinction between God and creativity (that God
is not the only instance of creativity) implies that there is no God
with the power completely to determine the becoming of all actual
occasions in the world--they are instances of creativity too. In
this sense, God is not omnipotent, but can be conceived as "the
fellow-sufferer who understands" (1929c [1985: 351]). Moreover,
the Whiteheadian doctrine of three ultimates--the one supreme
being or God, the many finite beings or the cosmos, and being itself
or creativity--also implies a religious pluralism that holds that
the different kinds of religious experience are (not experiences of
the same ultimate reality, but) diverse modes of experiencing diverse
ultimate aspects of the totality of reality. For example:
>
>
> One of these [three ultimates], corresponding with what Whitehead
> calls "creativity", has been called
> "Emptiness" ("*Sunyata*") or
> "Dharmakaya" by Buddhists, "Nirguna Brahman"
> by Advaita Vedantists, "the Godhead" by Meister Eckhart,
> and "Being Itself" by Heidegger and Tillich (among
> others). It is the *formless* ultimate reality. The other
> ultimate, corresponding with what Whitehead calls "God",
> is not Being Itself but the *Supreme* Being. It is in-formed
> and the source of forms (such as truth, beauty, and justice). It has
> been called "Amida Buddha", "Sambhogakaya",
> "Saguna Brahman", "Ishvara",
> "Yaweh", "Christ", and "Allah".
> (D. Griffin 2005: 47)
>
>
>
> [Some] forms of Taoism and many primal religions, including Native
> American religions [...] regard the cosmos as sacred. By
> recognizing the cosmos as a third ultimate, we are able to see that
> these cosmic religions are also oriented toward something truly
> ultimate in the nature of things. (D. Griffin 2005: 49)
>
>
>
The religious pluralism implication of Whitehead's doctrine of
three ultimates has been drawn most clearly by John Cobb. In
"John Cobb's Whiteheadian Complementary Pluralism",
David Griffin writes:
>
>
> Cobb's view that the totality of reality contains three
> ultimates, along with the recognition that a particular tradition
> could concentrate on one, two, or even all three of them, gives us a
> basis for understanding a wide variety of religious experiences as
> genuine responses to something that is really there to be experienced.
> "When we understand global religious experience and thought in
> this way", Cobb emphasizes, "it is easier to view the
> contributions of diverse traditions as complementary". (D.
> Griffin 2005: 51)
>
>
>
## 8. Whitehead's Influence
Whitehead's key philosophical concept--the internal
relatedness of occasions of experience--distanced him from the
idols of logical positivism. Indeed, his reliance on our intuition of
the extensive relatedness of events (and hence, of the space-time
metric) was at variance with both Poincaré's
conventionalism and Einstein's interpretation of relativity: his
reliance on our intuition of the causal relatedness of events, and of
both the efficient and the final aspects of causation, was an insult
to the anti-metaphysical dogmas of Hume and Russell; his method of
causal explanation was also an antipode of Ernst Mach's method
of economic description; his philosophical affinity with James and
Bergson as well as his endeavor to harmonize science and religion made
him liable to the Russellian charge of anti-intellectualism; and his
genuine modesty and aversion to public controversy made him invisible
at the philosophical firmament dominated by the brilliance of Ludwig
Wittgenstein.
At first--because of Whitehead's *Principia
Mathematica* collaboration with Russell as well as his application
of mathematical logic to abstract the basic concepts of
physics--logical positivists and analytic philosophers admired
Whitehead. But when Whitehead published *Science and the Modern
World*, the difference between Whitehead's thought and
theirs became obvious, and they grew progressively more dissatisfied
over the direction in which Whitehead was moving. Susan Stebbing of
the Cambridge school of analysis is only one of many examples that
could be evoked here (cf. Chapman 2013: 43-49), and in order to
find a more positive reception of Whitehead's philosophical
work, one has to turn to opponents of analytic philosophy such as
Robin George Collingwood (for example, Collingwood 1945). The
differences with logical positivism and analytic philosophy, however,
should not lead philosophers to neglect the affinities of
Whitehead's thought with these philosophical currents (cf.
Shields 2003, Desmet & Weber 2010, Desmet & Rusu 2012, Riffert
2012).
Despite signs of interest in Whitehead by a number of famous
philosophers--for example, Hannah Arendt, Maurice Merleau-Ponty
and Gilles Deleuze--it is fair to say that Whitehead's
process philosophy would most likely have entered oblivion if the
Chicago Divinity School and the Claremont School of Theology had not
shown a major interest in it. In other words, not philosophers but
theologians saved Whitehead's process philosophy from oblivion.
For example, Charles Hartshorne, who taught at the University of
Chicago from 1928 to 1955, where he was a dominant intellectual force
in the Divinity School, has been instrumental in highlighting the
importance of Whitehead's process philosophy, which dovetailed
with his own, largely independently developed thought. Hartshorne
wrote:
>
>
> The century which produced some terrible things produced a scientist
> scarcely second in genius and character to any that ever lived,
> Einstein, and a philosopher who, I incline to say, is similarly second
> to none, unless it be Plato. To make no use of genius of this order is
> hardly wise; for it is indeed a rarity. A mathematician sensitive to
> so many of the values in our culture, so imaginative and inventive in
> his thinking, so eager to learn from the great minds of the past and
> the present, so free from any narrow partisanship, religious or
> irreligious, is one person in hundreds of millions. He can be
> mistaken, but even his mistakes may be more instructive than most
> other writers' truth. (2010: 30)
>
>
>
After mentioning a number of other theologians next to Hartshorne as
part of "the first wave of ... impressive
Whitehead-inspired scholars", Michel Weber--in his
Introduction to the two-volume *Handbook of Whiteheadian Process
Thought*--writes:
>
>
> In the sixties emerged John B. Cobb, Jr. and Shubert M. Ogden.
> Cobb's *Christian Natural Theology* remains a landmark in
> the field. The journal *Process Studies* was created in 1971 by
> Cobb and Lewis S. Ford; the *Center for Process Studies* was
> established in 1973 by Cobb and David Ray Griffin in Claremont. The
> result of these developments was that Whiteheadian process scholarship
> has acquired, and kept, a fair visibility ... (Weber &
> Desmond 2008: 25)
>
>
>
Indeed, inspired mainly by Cobb and Griffin, many other centers,
societies, associations, projects and conferences of Whiteheadian
process scholarship have seen the light of day all over the world. An
important society is the *European Society for Process
Thought* with its annual summer schools and its book series,
*European Studies in Process Thought* (nine titles).
Another active society is the *Deutsche Whitehead
Gesellschaft*. Thirty-six of today's most active
process centers, however, are located in the
People's Republic of China. They were founded between 2002 and
2018 by *The Institute for Postmodern Development of
China*, which also founded the *Cobb
Eco-Academy* in 2018, in Zhejiang. The *Beijing
Whitehead International Kindergarten* is just
one example of Chinese involvement with Whitehead.
In Belgium, Michel Weber created the *Whitehead Psychology
Nexus* and the *Chromatiques whiteheadiennes* scholarly
societies in 2001, and he has been the driving force behind several
book series, including the series published by Weber's own
publishing company, Les éditions Chromatika (forty-four titles at
present), and the *Process Thought Series* published by Ontos
Verlag / De Gruyter (twenty-seven titles). The already
mentioned *Handbook of Whiteheadian
Process Thought* is part of the latter book series. In
it, 101 internationally renowned Whitehead scholars give an
impressive overview of the 2008 status of their research findings in
an enormous variety of domains (cf. Armour 2010). Missing in the
*Handbook*, however, are most Whitehead scholars reading
Whitehead through Deleuzian glasses--especially Isabelle
Stengers, whose 2011 book, *Thinking with Whitehead*, cannot be
ignored. *The Lure of Whitehead*, edited by Nicholas Gaskill
and A. J. Nocek in 2014, largely remedies that shortcoming.
Important for Whitehead scholarship, is the fact that the
*Handbook* is in the process of being transformed into an
online Whitehead encyclopedia, which will contain additional entries,
and of which the entries will be kept up to date. This is being
done by the *Whitehead Research Project*, that is, by the
people responsible for editing and publishing the book
series, *Contemporary Whitehead Studies* (fifteen
titles), and *The Edinburgh Critical Edition of the
Complete Works of Alfred North Whitehead*, of which two volumes of
*The Harvard Lectures of Alfred North Whitehead* have already
seen the light of day. The *Whitehead Research Project* website
also refers to a *Whitehead Reading Group*, contains
a *Research Blog*, and provides access to the *Whitehead
Research Library*, which holds a wealth of
mostly unpublished materials, such as letters from and to
Whitehead (cf. Whitehead Research Project
[OIR]).
The oldest Whiteheadian book series is the *SUNY Series
in Constructive Postmodern Thought* (thirty-two titles), and
the two most recent series are published by the Process Century
Press (cf. Process Century Press
[OIR]).
The most recent title in its *Theological Exploration
Series* (nine titles) is *James & Whitehead on
Life after Death* by David Griffin (2022), and in the Series
Preface of its successful *Toward Ecological Civilization
Series* (twenty-two titles), John Cobb writes:
>
>
> We live in the ending of an age. But the ending of the modern period
> differs from the ending of previous periods, such as the classical or
> the medieval. The amazing achievements of modernity make it possible,
> even likely, that its end will also be the end of civilization, of
> many species, or even of the human species. At the same time, we are
> living in an age of new beginnings that give promise of an ecological
> civilization. Its emergence is marked by a growing sense of urgency
> and deepening awareness that the changes must go to the roots of what
> has led to the current threat of catastrophe.
>
>
>
> In June 2015, the 10th Whitehead International Conference was held in
> Claremont, CA. Called "Seizing an Alternative: Toward an
> Ecological Civilization", it claimed an organic, relational,
> integrated, nondual, and processive conceptuality is needed, and that
> Alfred North Whitehead provides this in a remarkably comprehensive and
> rigorous way. We proposed that he could be "the philosopher of
> ecological civilization". With the help of those who have come
> to an ecological vision in other ways, the conference explored this
> Whiteheadian alternative, showing how it can provide the shared vision
> so urgently needed.
>
>
>
Cobb refers to the tenth *International Whitehead
Conference (IWC).* The *IWC*s are sponsored by
the *International Process Network*. The *IWC* has
been held at locations around the globe since 1981--the 12th one
was in 2019, in Brasília, Brazil, and the 13th one,
"Whitehead and the History of Philosophy," will be in July
2023, in Munich, Germany. The *IWC* is an
important venture in global Whiteheadian thought, as key Whiteheadian
scholars from a variety of disciplines and countries come together for
the continued pursuit of critically engaging a process worldview.
Finally, in 2019, the *Cobb Institute--A Community for
Process & Practice* was formed, which has, among other
interesting initiatives, weekly *John Cobb &
Friends* online gatherings to hear presentations from
prominent scholars across the world. Recently, for
example, Jeremy Lent gave a presentation on his award
winning bestseller, *The Web of Meaning--Integrating science and
traditional wisdom to find our place in the universe* (2021). The
*Cobb & Friends Gatherings YouTube Channel* contains a
sample of recordings of these presentations, including the Lent
presentation.
## 1. Life
As with so many important medieval figures, we know little about
William's early years. According to one manuscript source, he
was born in Aurillac, in the province of Auvergne in south-central
France. His birth date is unknown, but since he was a professor of
theology at the University of Paris by 1225, a position rarely
attained before the age of 35, he is not likely to have been born
later than 1190, and scholars have placed his probable birth date some
time between 1180 and 1190. He may have come from a poor background,
as the Dominican, Stephen of Bourbon (died c. 1261), tells a story of
William begging as a young child (Valois 1880, 4). He was a canon of
Notre Dame and a master of theology by 1223, and is mentioned in bulls
of Pope Honorius III in 1224 and 1225.
The story of his elevation to the episcopacy in 1228 paints a picture
of a man of great determination and self-confidence. The previous year
the Bishop of Paris, Bartholomeus, had died, and the canons of the
chapter of Notre Dame met to select his successor. The initial
selection of a cantor named Nicholas did not secure unanimous
agreement and was contested, in particular, by William. Nicholas
excused himself, and the canons went on to choose the Dean of the
Cathedral. William again contested the election and went to Rome to
appeal to the Pope to vacate it. He made a favorable impression, for
the Pope, impressed by his "eminent knowledge and spotless
virtue," as he put it (Valois 1880, 11), both ordained him
priest and made him Bishop of Paris, a position he retained until his
death in 1249.
It was not long before the Pope would have regrets. In February 1229 a
number of students were killed by the forces of the queen-regent,
Blanche of Castile, when they intervened in a drunken student riot
during Carnival. Outraged, the masters and students appealed to
William for redress of their rights, but William failed to take
action. The students and masters went on strike, dispersing from Paris
and appealing to Pope Gregory IX. It was apparently during this period
that William gave the Dominicans their first chair in theology at the
university. The Pope, who was to remark that he regretted
"having made this man," rebuked William, appointed a
commission to settle the dispute, and ordered William to reinstate the
striking masters. Nevertheless, William went on to receive important
missions from the Pope in subsequent years, acting, for example, as a
papal representative in peace negotiations between France and England
in 1231.
In 1239 William was closely involved in the condemnation of the
Talmud. The Pope had asked him for his response to a list of heresies
in the Talmud proposed by a converted Jew, Nicholas. William's
response led to a papal bull ordering the confiscation of sacred books
from synagogues in 1240, and to their burning in 1242 (for details,
see Valois 1880, Smith 1992 and de Mayo 2007).
William died at the end of March (the exact date is uncertain) in 1249,
and was buried in the Abbey of St. Victor.
## 2. Works
William's most important philosophical writings form part of a
vast seven-part work he calls the *Magisterium divinale et
sapientiale*, a title Teske translates as *Teaching on God in
the Mode of Wisdom*. The *Magisterium* is generally taken
to consist of the following seven works in the order given:
* *De Trinitate, sive de primo principio* (*On the
Trinity, or the First Principle*)
* *De universo* (*On the Universe*)
* *De anima* (*On the Soul*)
* *Cur Deus homo* (*Why God Became Man*)
* *De fide et legibus* (*On Faith and Laws*)
* *De sacramentis* (*On the Sacraments*)
* *De virtutibus et moribus* (*On Virtues and
Morals*).
William did not compose the parts of the *Magisterium* in their
rational order, and appears to have developed his conception of its
structure as he wrote its various parts. He viewed it as having two
principal parts. The first part is comprised of *De Trinitate*,
*De universo*, and *De anima*. In this, the most
philosophical part of the *Magisterium*, William proceeds by
"the paths of proofs" or philosophizing rather than by an
appeal to the authority of revelation. He intends to combat a range of
philosophical errors incompatible, as he sees it, with the Christian
faith. In *On the Trinity* he develops his metaphysics,
attacking, among others, errors regarding creation, divine freedom and
the eternity of the world, and follows it with a philosophical
treatment of the Trinity. William's vast work, *On the
Universe*, is divided into two principal parts. The first part
(Ia, i.e. *prima pars*) is concerned with the corporeal
universe and is in turn divided into three parts, the first (Ia-Iae,
i.e. *prima pars primae partis*) arguing for a single first
principle and for the oneness of the universe, the second (IIa-Iae)
taking up the question of the beginning of the universe and its future
state, and the third (IIIa-Iae) treating the governance of the
universe through God's providence. The second principal part
(IIa) is concerned with the spiritual universe, and is divided into
three parts concerning, respectively, the intelligences or spiritual
beings posited by Aristotle and his followers (Ia-IIae), the good and
holy angels (IIa-IIae), and the bad and wicked angels, that is, demons
(IIIa-IIae). *On the Soul* is concerned with the human soul. It
is divided into seven parts concerning, respectively, its existence,
its essence, its composition, the number of souls in a human being,
how the human soul comes into being, the state of the soul in the
body, and the soul's relation to God, with a focus on the human
intellect.
The works comprising the second part of the *Magisterium* are
more expressly theological in nature and appeal to the authority of
revelation. Nevertheless, William continually recurs to his
philosophical doctrines in all his works, and even his most
theological writings contain material of considerable philosophical
interest. In particular, his treatises *On Virtues and Morals*
and *On Faith and Laws* are of great importance for his moral
philosophy.
Scholars agree that *On the Trinity* is the earliest work in
the *Magisterium* and that it was probably written in the early
1220s. The other works can be partially dated by references to
contemporaneous events, though the sheer size of many individual works
suggests they were probably written over a period of years. The whole
*Magisterium* itself was probably written during a period
extending from the early 1220s to around 1240, with *On the
Universe* written in the 1230s and *On the Soul* completed
by around 1240.
Besides the works contained in the *Magisterium*, other works
of philosophical importance are *De bono et malo I* (*On
Good and Evil*), where William develops a theory of value; *De
immortalitate animae* (*On* *the Immortality of the
Soul*), a companion to the treatment of immortality in *On the
Soul*; and *De gratia et libero arbitrio* (*On Grace and
Free Choice*), a treatment of the Pelagian heresy.
In addition, William wrote some biblical commentaries, a number of
other major theological works, and a large number of sermons (for
details, see Ottman 2005). The more than 550 sermons that can be
attributed to William have recently been critically edited by F.
Morenzoni; they comprise four volumes in the *Corpus
Christianorum* series (*Opera homiletica*, CCCM
230-230C).
With the exception of the sermons, most of William's works have
not been critically edited; several are extant only in manuscript
form. The standard edition, which includes all the works that comprise
the *Magisterium*, is the 1674 Orleans-Paris *Opera
omnia* (OO). (The sermons contained in this edition are by William
Perrauld, not William of Auvergne.) As might be expected, the texts in
this old edition are often in need of correction; Teske has
conjectured many corrections in his translations.
## 3. The Character of William's Philosophical Works
The reader of William's philosophical works is struck by their
difference in character from the mainstream of philosophical and
theological writing of the early thirteenth century. Whereas
William's contemporaries tend to compose works as a series of
linked questions, each treated according to the question method (a
paradigm of which is Aquinas's later *Summa theologiae*),
William instead follows the practice of Avicenna and Avicebron and
composes treatises much more akin to a modern book. Indeed, William
even models his writing style to some degree on the Latin translations
of these authors. These features, together with William's
frequent use of analogies, metaphors and examples drawn from everyday
life, his long-winded, rambling prose and frequent digressions, his
harsh assessments of opponents as imbeciles or morons, and his urging
that proponents of dangerous doctrines be wiped out by fire and sword,
make for an inimitable and immediately recognizable style.
One of the costs of this style, however, is that at times it can be
hard to tell precisely what William thinks. Too often he lets
metaphors or analogies bear the argumentative burden in lieu of more
careful analysis. And while William's inventive mind could
generate arguments for a position with ease, their quality is uneven:
often he proposes problematic arguments without appearing to recognize
the difficulties they contain, as may be apparent below to the
discerning reader, while at other times he shows an acute awareness of
the fundamental problems an issue raises.
## 4. Sources
William was one of the first thinkers in the Latin West to begin to
engage seriously with Aristotle's writings on metaphysics and
natural philosophy and with the thought of Islamic and Jewish
thinkers, especially Avicenna (Ibn Sina, 980-1037) and
Avicebron (Solomon Ibn Gabirol, 1021/2-1057/8) (see e.g.
Bertolacci 2012, Caster 1996c, de Vaux 1934 and Teske 1999). The works
of these thinkers, which had been coming into circulation in Latin
translations from around the middle of the twelfth century, presented
both philosophical sophistication and theological danger. Indeed, in
1210, public and private lecturing on Aristotle's works on
natural philosophy and commentaries on them was forbidden at Paris,
and in 1215 the papal legate, Cardinal Robert of Courcon,
forbade "masters in arts from 'reading,' i.e.,
lecturing, on Aristotle's books on natural philosophy along with
the *Metaphysics* and *Summae* of the same (probably
certain works of Avicenna and perhaps of Alfarabi)" (Wippel
2003, 66). This prohibition remained in effect until it unofficially
lapsed around 1231. Nevertheless, it must be emphasized that personal
study of these texts had not been forbidden, and William clearly
worked hard to master them. He came to be keenly aware of the
incompatibility of much of the teaching of these works with Christian
doctrine, yet at the same time he found them to be a source of
philosophical inspiration, and his thought brims with ideas drawn from
Avicenna and Avicebron. His attitude to these thinkers is expressed
well in a passage in *On the Soul*, where, regarding Aristotle,
he writes that:
>
> though on many points one must contradict Aristotle, as is really
> right and proper--and this holds for all the statements by which
> he contradicts the truth--he should be accepted in all those
> statements in which he is found to have held the right view. (OO II
> suppl., 82a; Teske 2000, 89)
>
Of course, William was also strongly influenced by Christian thinkers.
In particular, his writings are permeated by the thought of Saint
Augustine, whom William refers to as "one of the more noble of
Christian thinkers," if perhaps in a less obvious way than by
the thought of Avicenna. Indeed, as the list in Valois 1880,
198-206, attests, William employs a remarkable range of sources
and must be reckoned one of the most well-read thinkers of his
day.
The focus of the following outline is William's views on
metaphysics and the soul, but even within these limits the vast range
of his thought has required that many issues in these areas not be
treated. In a future supplement, William's views on value theory
and morality will be addressed. Teske's English translations are
quoted whenever possible; otherwise, translations are my own (i.e. N.
Lewis's).
## 5. Metaphysics
William is the first thinker in the Latin West to develop a systematic
metaphysics (as ontology) based on the concepts of being
(*esse*) and essence (*essentia*). Influenced by
Boethius and Avicenna, he develops this metaphysics in his early work,
*On the Trinity*, as a preliminary to his account of the
Trinity, and he returns to it in other works, especially in *On the
Universe*. The central theme of his metaphysics is that all
existing things, besides God, are composites of essence, on the one
hand, and their being or existence, on the other, which is said to be
acquired or partaken from God, the source of all being. God, in
contrast, involves no composition of essence and being, but is his
being.
Inspired by Boethius's distinction in *De hebdomadibus*
between what is good by participation and what is good by substance,
William works these ideas out in terms of a distinction between a
being by participation (*ens participatione*) and a being by
substance or essence (*ens substantia / ens per essentiam*). By
the essence of a thing William means what is signified by its
definition or species name. Under the influence of Avicenna, he often
refers to essence as a thing's quiddity (*quidditas*,
literally, "whatness"), which he identifies with
Boethius's notion of *quod est* or "that which
is." Analogously, he identifies a thing's being
(*esse*) or existence with the notion of *quo est* or
"by which it is." In the case of a being by participation,
its essence and features incidental to the essence are distinct from
its being; William describes its being as clothed by them or as what
would be left if they were stripped away. Such a being (*ens*)
is said to participate in or acquire its being (*esse*). At
times William, following Avicenna, speaks of its being as accidental
to it, by which he just emphasizes that its being is not part of the
essence.
A being (*ens*) by substance, in contrast, does not participate
in or acquire being but is its very being: "there is also the
being whose essence is for it being (*esse*) and whose essence
we predicate when we say, 'It is,' so that it itself and
its being (*esse*) ... are one thing in every way"
(Switalski 1976, 17; Teske and Wade 1989, 65).
In the second chapter of *On the Trinity*, William also notes
that the term "being" (*esse*) may be used to mean
a thing's essence (similar to Avicenna's *esse
proprium*), but this sense of "being" plays a minor
role in his thought.
William argues that there must be a being by essence; otherwise, no
being could be intelligible. Thus, if there were just beings by
participation, there would have to be either a circle of beings each
of which participates in the being of what is prior to it in the
circle, with the absurd consequence that a thing would ultimately
participate in its own being; or else there would have to be an
infinite series of beings, each member of which participates in the
being of what is prior in the series. But then, for a member of the
series--say, *A*--to be, would be for it to have or
participate in the being of *B*; and for *B* to be, in
turn, would be for it to have or participate in the being of
*C*, and so on. But if this were so, the being of no member of
the series could be intelligible, since any attempt to spell it out
would result in an account of the form: *A*'s having
*B*'s having *C*'s having *ad
infinitum*. Since being is intelligible, William concludes that
there is a being by essence.
Besides the distinctions between being by participation and being by
essence, William mentions sixteen other distinctions, including the
distinctions between being of need (*esse indigentiae*) and
being of sufficiency (*esse sufficientiae*), possible being
(*esse possibile*) and being necessary through itself (*esse
necesse per se ipsum*), false and true being, and flowing and
lasting being. William takes these distinctions to be coextensive with
the distinction between being by participation and being by essence.
Using a form of argument drawn from Avicenna, he argues that given the
first member of each distinction, we can prove the existence of the
second (see *On the Trinity*, ch. 6).
The distinction between possible being (*possibile esse*) and
necessary being through itself (*necesse esse per se ipsum*),
which William draws from Avicenna, plays a prominent role in
William's thought. William's notion of a possible is not
that of something that exists in some possible world, but is rather
the notion of something whose essence neither requires nor rules out
its having being. William treats such possibles as in some sense prior
to being. In the case of those that have being and thus actually
exist, he holds that the possible "and its being
(*esse*), which does not belong to it essentially, are really
two. The one comes to the other and does not fall within its
definition or quiddity. Being (*ens*) [i.e., a being] in this
way is, therefore, composite and also resolvable into its possibility,
or quiddity, and its being (*esse*)" (Switalski 1976, 44;
Teske and Wade 1989, 87). Some commentators see in such remarks the
idea that a real or mind-independent distinction obtains between being
and essence in possible beings.
In contrast, the being and essence of a being necessary through itself
are not two; such a being does not participate in being but is its
being and cannot but exist.
William argues there can be only one being by substance or being
necessary through itself. Speaking from a metaphysical point of view,
he follows Avicenna and calls it the First (*primum*). Speaking
as a Christian, he identifies it with God, the creator, as he thinks
is indicated by Biblical references to God as "he who is":
"being," he says, is a proper name of God (see e.g. *On
the Trinity*, ch. 4).
The uniqueness of a being by substance is, William argues, a
consequence of its absolute simplicity, since nothing could
differentiate two or more absolutely simple beings. The absolute
simplicity of a being by substance, in turn, is a consequence of the
fact that it cannot be caused (not, at least, by external causation;
William argues that the being by substance is the Christian Trinity of
persons and he admits a kind of internal causation in the Trinity).
For everything that is caused is other than its being and thus is a
being by participation, whereas a being by essence or substance is its
being. But a composite or non-simple being must have an external
cause, and must therefore be a being by participation. Therefore, the
being by substance, that is to say, God, cannot be composite, but must
be absolutely simple.
God's absolute simplicity also entails that he is neither a
universal (i.e., neither a genus nor species), nor an individual
falling under such. For, William argues, genera and species are
themselves composite, and the individuals that fall under them have a
quiddity or definition and therefore compositeness. Nevertheless,
William thinks God is an individual, since he holds that all that
exists is either a universal or an individual (OO I, 855ab).
William appeals to God's power to show God has will and
knowledge. He argues that God must be omnipotent: he cannot be forced
or impeded and has a "two-way" power (*potentia*),
identified by William with what Aristotle calls a rational power. This
is a power that "extends to both opposites, namely, to make and
not to make" (Switalski 1976, 57; Teske and Wade 1989, 99), in
contrast with something, such as fire, which when "it encounters
what can be heated is not able to heat or not to heat ... but it
necessarily has only the power to heat" (Switalski 1976, 54;
Teske and Wade 1989, 97). However, because a two-way power is poised
between acting and not-acting, it must be inclined to acting by
something, and William thinks this can only be through a choice.
Therefore, God must have will and choice; and since "it is
impossible that there be will and choice where there is no
knowledge" (Switalski 1976, 59; Teske and Wade 1989, 101), God
must also have knowledge. William goes on to argue that God must
indeed be omniscient, since "his wisdom is absolute and free,
not bound in any way to things, nor dependent upon them"
(Switalski 1976, 61; Teske and Wade 1989, 102).
William takes his claim that God has a two-way power to imply that God
could have known or willed otherwise than he did. But, as William
realizes, this raises the difficulty that if God had known or willed
otherwise, it would seem that he would have been different, and this
seems to be impossible given his absolute simplicity. In response,
William frequently emphasizes that by the very same act of knowing or
will by which God in fact knows or wills one thing, he could have
known or willed something else (OO I, 780a; Teske 2007, 105). Thus,
when we say "God knows or wills X," we must distinguish
the act of knowing or willing, which is identical with God and utterly
the same in every possible situation, from the object X to which it is
related.
## 6. God's Relation to Creatures
### 6.1 God as the Source of Being
According to William, everything participates in or acquires its being
from God. But just what he means by this is less than clear. At times,
as in the following passage from *On the Universe*, he seems to
suggest that God is literally present in created things as their
being:
>
> the creator is next to and most present to each of his creatures; in
> fact, he is most interior to each of them. And this can be seen by you
> through the subtraction or stripping away of all the accidental and
> substantial conditions and forms. For when you subtract from each of
> the creatures all these, last of all there is found being or entity,
> and on this account its giver. (OO I, 625b; Teske 1998a, 100)
>
Yet William realizes that such statements may give an incorrect
impression of his thought. Thus, at one point in *On the
Universe* he writes that "every substance includes, *if
it is proper to say so* (*si dici fas est*), the
creator's essence within itself" (OO I, 920b), and he goes
on to note that it is difficult to state clearly the sense in which
the creator is in creatures. In other passages he emphasizes
God's transcendence of creatures: "the first being
(*esse*) is for all things the being (*esse*) by which
they are ... It is one essence, pure, solitary, separate from and
unmixed with all things." Often he appeals to an analogy with
light: the first being fills "all things like light cast over
the universe. By this filling or outpouring it makes all things
reflect it. This is their being (*esse*), namely to reflect
it" (Switalski 1976, 45; Teske and Wade 1989, 89). William
perhaps intends in such remarks to exploit the medieval view that a
point of light (*lux*) and the light emitted from that point
(*lumen*), though distinct, nonetheless are also in some deep
sense the same.
### 6.2 Against the Manichees
Regarding the procession of creatures from God, William is concerned
to attack a range of errors. In *On the Universe* Ia-Iae he
attacks the error of the Manichees. Though he refers to Mani, the
third-century originator of the Manichee sect, he seems chiefly
concerned with the Cathars of his day. The Manichees deny God is the
ultimate source of all things and instead posit two first principles,
one good and one evil. Only in this way, they think, can we explain
the presence of both good and evil in the world. William thinks a
fundamental motivation for this view is the principle that "from
one of two contraries the other cannot come of itself" (OO I,
602a; Teske 1998a, 54), and hence that evil cannot come from good but
must stem from its own first principle. He replies that although one
contrary cannot of itself (*per se*) be from the other, it is
perfectly possible for it to be from the other incidentally (*per
accidens*), and he gives everyday examples to illustrate. Thus,
"drunkenness comes from wine. In that case it is evident that an
evil comes from a good, unless one would say that wine is not
good" (OO I, 602a; Teske 1998a, 54). Likewise, there is a sense
in which evil is ultimately from God, though incidentally and not as
something God aims at or intends as such. The evil of sin, for
example, is a consequence of God's creating creatures with free
will and permitting their misuse of it.
William offers a host of arguments against the possibility of two
first principles. On metaphysical grounds, for example, he objects
that a first principle must be a being necessary through itself, and
that he has shown that a plurality of such beings is impossible. And
he gives arguments to the effect that the notion of a creative first
evil principle is incoherent. Such a being, for example, could not do
good for anything, and hence could not have created anything, for in
doing so it would have done good for what it created (see also
Bernstein 2005 and Teske 1993a).
### 6.3 Causation, and Creation by Intermediaries
William claims "the creator alone properly and truly is worthy
of the title 'cause,' but other things are merely
messengers and bearers of the last things received as if sent from the
creator" (OO I, 622a; Teske 1998a, 92). We speak of creatures as
causes, but William says this is "*ad sensum*"
(Switalski 1976, 79; Teske and Wade 1989, 117), that is, in respect of
how things appear in sense experience. Both medieval and contemporary
commentators have on the basis of such remarks attributed to William
the denial that creatures are genuine causes (see Reilly 1953).
Certainly he denies that any creature can exercise fully independent
causal agency, a view he thinks "the philosophers" have
adopted. But this claim is compatible with admission of genuine
secondary causes that are reliant for their exercise of causation on
the simultaneous activity of prior causes, and ultimately on that of
the first cause, God. Is William simply making this point but using
the term "cause" in a more restricted manner than his
contemporaries? A negative answer is suggested by William's
rationale for his usage. He thinks of causation as the giving of
being, and he thinks that anything that receives being, and therefore
all creatures, cannot itself give being but can merely transmit it.
Moreover, his metaphors of creatures as riverbeds or windows through
which the divine causal influence flows strongly suggest that he
really does wish to hold that creatures themselves do not really bring
anything about but are simply the conduits through which divine
causality flows (see Miller 1998 and 2002).
Whether in fact William admits genuine causal agency among creatures,
he clearly thinks that God alone can create from nothing. Thus, he
rejects Avicenna's so-called "emanationist" doctrine
of creation, according to which, as he puts it, God creates from
nothing only one thing, the first intelligence (or spiritual being),
which in turn creates the next intelligence, and so on until the tenth
or agent intelligence is reached, the creator of souls and the
sensible world. William thinks one ground of this doctrine is
Avicenna's principle that "from what is one in so far as
it is one there cannot in any way come anything but what is one"
(*ex uno, secundum quod unum, non potest esse ullo modorum nisi
unum*) (OO I, 618b; Teske 1998a, 82), and hence from God, who is
absolutely one, can at most come one thing. In fact, William agrees
with this principle and applies it in the context of Trinitarian
theology, but he thinks Avicenna has misapplied it by using it to
explain creation. Instead, William follows the Jewish thinker
Avicebron (whom he thought was a Christian) and holds that creatures
come from God not insofar as he is one, but "through his will
and insofar as he wills, just as a potter does not shape clay vessels
through his oneness, but through his will" (OO I, 624a; Teske
1998a, 96).
### 6.4 Against the Necessity of Creation
William thinks the will is necessarily free; thus the fact that God
creates through his will means that creation is an exercise of
God's free will. This point serves to undermine another aspect
of Avicenna's doctrine of the procession of creatures from God.
For Avicenna posits not only a series of creators, but also holds that
this procession is utterly necessary: the actual world must exist and
could not have been other than it is. William thinks this conclusion
stems from a failure to grasp God's freedom in creation.
Avicenna and others imposed "not only necessity, but natural
servitude upon the creator, supposing that he operates in the manner
of nature" (OO I, 614b; Teske 1998a, 72). Thus, they think the
universe issued from God as brightness issues from the sun or heat
from fire, and that God no more had it in his power to do otherwise
than do the sun or heat. Against this William argues that it is a
consequence of God's free creation that he had it in his power
to create otherwise than he did.
### 6.5 Against the Eternity of the World
One of the most contentious doctrines presented by Greek and Islamic
philosophy to thinkers in the early thirteenth century is that of the
eternity of the world, the view that the world had no beginning but
existed over an infinite past. This view seems to contradict the
opening words of the Bible: "In the beginning God created heaven
and earth." In response, a number of thinkers in the early
thirteenth century, including the Franciscan Alexander of Hales,
argued that Aristotle, at least, had been misunderstood and had never
proposed the eternity of the world. But William, like his Oxford
contemporary, Robert Grosseteste, would have none of this:
"Whatever may be said and whoever may try to excuse Aristotle,
it was undoubtedly his opinion that the world is eternal and that it
did not begin to be, and he held the same view concerning motion.
Avicenna held this after him" (OO I, 690b; Teske 1998a,
117).
The importance of this issue to William is indicated by the space he
devotes to it. His treatment of the question of the eternity
(*aeternitas*) of the world in *On the Universe* IIa-Iae
is one of the longest discussions of the issue in the middle ages, and
the first substantial treatment to be made in the light of Greek and
Islamic thought.
William thinks, in fact, that to call the world eternal in the
aforementioned sense is not to use the term "eternal" in
its strictest sense. Accordingly, he starts by explaining different
senses of the term in the first extended treatment of the distinction
between eternity and time in the middle ages. He then presents and
refutes arguments that the world has no beginning, and provides his
own positive arguments that the world must have a temporal beginning,
a first instant of its existence.
William's chief aim in his treatment of time and eternity is to
emphasize the fundamental differences between eternity and time. He is
especially concerned to attack a conception of eternity as existence
at every time, a conception he attributes to Aristotle. In his view,
eternity in a strict sense is proper to the creator alone and is his
very being, though "the name 'eternity' says more
than his being, namely, the privations of beginning and ending, as
well as of flux and change, and this both in act and in potency"
(OO I, 685b-686a; Teske 1998a, 109). The fundamental difference
between time and eternity is that "as it is proper and essential
to time to flow or cease to be, so it is proper and essential to
eternity to remain and stand still" (OO I, 683b; Teske 1998a,
102). William, like other medieval thinkers, tends to describe
eternity in terms of what it is not. It has nothing that flows and
hence no before or after or parts. Its being is whole at once, not
because it all exists at one time, but because it all exists with
nothing temporally before or after it. William speaks of duration in
eternity, but he holds that the term has a different (unfortunately
unexplained) sense in this context than when applied to time.
William had read Aristotle's *Physics* and agrees with
Aristotle that time and motion are coextensive (OO I, 700a). Yet he
does not propose Aristotle's definition of time as the number of
motion in respect of before and after. Rather, in his account of the
essential nature of time he describes time simply as being that flows
and does not last, "that is, it has nothing of itself that lasts
in act or potency" (OO I, 683a; Teske 1998a, 102). Echoing
Aristotle and Augustine, he holds that time has the weakest being, and
that "of all things that are said to be in any way, time is the
most remote from eternity, and this is because it only touches
eternity by the very least of itself, the now itself or a point of
time" (OO I, 748a).
William introduces his account of the eternity of the world in *On
the Universe* by reference to Aristotle, but he devotes the bulk
of his discussion to Avicenna's arguments for this doctrine. In
parallel material in *On the Trinity* he claims that at the
root of Avicenna's view lies the principle that "if the
one essence is now as it was before when nothing came forth from it,
something will not now come forth from it" (Switalski 1976, 69;
Teske and Wade 1989, 108). That is, nothing new is caused to exist
unless there is a change in the cause. Thus God, a changeless being,
cannot start to create the world after not creating it; and therefore
he must have created it from eternity and without a beginning.
Therefore, the world must itself lack a beginning.
William replies that Avicenna's principle leads to an infinite
regress; for if something's operating after not operating
requires a change in it, this change will itself require a new cause,
and that new cause will itself require a new cause, and so on *ad
infinitum*. William instead holds that it is possible that God,
without changing in any way, start to operate after not operating.
What William means by this is that the effect of God's eternal
will is a world with a temporal beginning. In general, he holds that
verbs such as "create" express nothing in the creator
himself but rather something in things. We might take "God begins
to create the world" to have the import of: "God by an
eternal willing creates the world, and the world has a
beginning."
William emphasizes that God does not will that the world exist without
qualification, but rather that it exist with a given temporal
structure, namely, a beginning (Switalski 1976, 67-68; Teske and
Wade 1989, 107). This might suggest that he thinks God might in fact
have created a world without a beginning, but chose not to do so, as
Aquinas would hold. But in fact, William argues that this option was
not open to God: a world without a beginning is simply not a creatable
item. William attempts to establish this point by a host of arguments,
many of which are found in his writings for the first time in the
Latin West and repeated by later thinkers such as Bonaventure and
Henry of Ghent. On the whole, these arguments either allege that the
notion of a created thing itself entails that what is created has a
temporal beginning or point to alleged paradoxes stemming from the
supposition of an infinite past.
For example, William argues (OO I, 696b; Teske 1998a, 133) that
because the world in itself is merely a possible, in itself it is in
non-being and non-being is therefore natural to it. Hence its
non-being, being natural to it, is prior to its being, and therefore
in being created it must receive being after non-being, and thus have
a beginning.
Among arguments involving infinity, William argues that if the whole
of past time is infinite, then the infinite must have been traversed to
arrive at the present; but since it is impossible to traverse the
infinite (OO I, 697b; Teske 1998a, 136), the past must be finite. This
line of argument is frequently found in later thinkers.
William also presents arguments that the view that a continuum, such
as time, is infinite results in paradoxes (OO I, 698a-700b).
These arguments, which William develops at some length, constitute an
important early medieval engagement with puzzles relating to
infinity.
### 6.6 Providence
Part IIIa-Iae of *On the Universe* is a lengthy treatise on
God's providence (*providentia*). William believes the
denial of God's providence, and in particular the denial that
the good will be rewarded and the evil punished, is an error "so
harmful and so pernicious for human beings that it eliminates from
human beings by their roots all concern for moral goodness, all the
honor of the virtues, and all hope of future happiness" (OO I,
776a; Teske 2007, 92).
According to William, God's providence (*providentia*) or
providing for the universe differs from foreknowledge
(*praescientia*) in that, unlike foreknowledge, it embraces
only goods and not also evils (see OO I, 754b; Teske 2007, 30).
William is especially concerned to argue that God's providence
is not just general in nature, but extends to all individual
creatures. God's care for the universe and its constituents is a
matter of God's knowing and paying attention to all things. From
God's point of view, nothing happens by chance. William
introduces final causality into his explanation by describing the
universe as a teleological or goal-directed order, in which each thing
has a function or purpose established by God. God's care is his
concern that each thing fulfills its appointed function. Thus, God is
not necessarily concerned with what we might think is best for
individual things: flies get eaten by spiders, but this in fact is an
end for which they were created. And ultimately animals were created
for the sake of humans, as William points out (for details on animals
and providence, see Wei 2020, 49-74). In the case of humans,
however, who are the apex of creation, their end is to experience
happiness in union with God, and God's providential care of
human beings is aimed at this.
But if this is so, why do the evil prosper and the good suffer? What
are we to make of natural disasters, of pain, suffering, death, and
the like? What, in short, are we to make of evil in the world?
Although William does not take up the logical compatibility of evil
with an all-good, knowing and powerful god, he does attempt to explain
at length how a range of particular kinds of evil are compatible with
God's providence and care for human beings. His explanation, in
general, is not novel and has roots in Augustine. He distinguishes the
evils perpetrated by free agents and for which they are therefore to
blame from other forms of evil, and argues that these other evils are
in fact not really evil but good. Pain, for example, provides many
benefits, among which is that it mortifies and extinguishes "the
lusts for pleasures and carnal and worldly desires" (OO I, 764a;
Teske 2007, 58). In many cases, what we call evil is in fact
God's just punishment, which William implies always has
beneficial effects. As for the blameworthy evils done by free agents,
William thinks that God permits these, but that it is incorrect to say
they come from God or are intended by him. They are rather the
consequence of rational creatures' misuse of the free will with
which they were created, and the moral balance will be set right in
the future life, if not this one.
If the evils of this life appear to conflict with God's
providence, God's providence and foreknowledge themselves seem
to entail that everything occurs of necessity and hence that there is
no free will. This concern leads William to consider a number of
doctrines that aim to show that all events happen of necessity. In
addition to foreknowledge or providence, he considers the views that
all events are the necessary causal outcomes of the motion and
configuration of the heavenly bodies, or that all events result from
the *heimarmene*, i.e. the "interconnected and
incessantly running series of causes and the necessary dependence of
one upon another" (OO I, 785a; Teske 2007, 120); or finally,
that all things happen through fate (*fatum sive fatatio*).
William rejects all these grounds for the necessity of events (esp. on
fate, see Sannino 2016).
In an interesting discussion of foreknowledge in part IIIa-Iae,
chapter 15, he considers the argument that since it is necessary that
whatever God foresees will come about will come about, because
otherwise God could be deceived or mistaken, then since God foresees
each thing that will come about, it is necessary that it will come
about. William replies that the proposition "It is necessary
that whatever God foresees (*providit*) will come about will
come about" has two interpretations, "because the
necessity can refer to individuals or to the whole" (OO I, 778b;
Teske 2007, 99). The argument for the necessity of all events
equivocates between these two readings and is therefore invalid.
William notes that the distinction he makes is close to the
distinction between a divided and composite sense, but he is critical
of those who instead distinguish an ambiguity between a *de re*
(about the thing) and *de dicto* (about what is said)
reading.
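William's diagnosis amounts to a scope distinction familiar from modern modal logic. A rendering in contemporary notation (the symbolism is mine, not William's) makes the equivocation visible:

```latex
% Composite (de dicto) reading: necessity governs the whole conditional.
% This is the true premise William grants.
\Box\,\forall p\,\bigl(\text{God foresees that } p \;\rightarrow\; p\bigr)

% Divided (de re) reading: necessity attaches to each individual event.
% This is what the argument for universal necessity requires,
% but it does not follow from the composite reading.
\forall p\,\bigl(\text{God foresees that } p \;\rightarrow\; \Box\,p\bigr)
```

The fatalist argument trades on the first, harmless reading while drawing a conclusion that needs the second; inferring the divided reading from the composite one is the invalidity William identifies.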
## 7. The Soul
William's treatise *On the Soul* is one of the most
substantial Latin works on the topic from before the middle of the
thirteenth century. Although William begins this treatise with
Aristotle's definition of the soul as the perfection of an
organic body potentially having life, his conception of the soul is
heavily influenced by Avicenna and decidedly Platonic in character. He
interprets Aristotle's definition to mean that the body is an
instrument of the soul (as he thinks the term "organic"
indicates) and that "a body potentially having life" must
refer to a corpse, since a body prior to death actually has life. And
although he calls a soul, with Aristotle, a form, unlike Aristotle he
holds that a soul is in fact a substance, something individual,
singular and a "this something" (*hoc aliquid*),
not at all dependent on the body for its existence.
William adopts the standard view that souls serve to vivify or give
life to bodies, and thus he posits not just souls of human beings
(i.e., rational souls), but also those of plants and animals
(vegetative and sensitive souls respectively), as they all have living
bodies. But in the case of the human soul, William also identifies the
soul with the human being, the referent of the pronoun
"I."
### 7.1 The Existence and Substantiality of Souls
Among William's many arguments for the existence of souls is the
argument that bodies are instruments of souls and the power to carry
out operations through an instrument cannot belong to the instrument
itself. Thus, there must be something other than the bodies of living
things that operates by means of bodies, and this is their soul. In
the case of the human or rational soul, he argues that its existence
follows in particular from the fact that understanding and knowledge
are found in a human being but not in the whole body or any part of
it. Since these are operations of a living substance, there must be a
living incorporeal substance in which they are, and this is what
people mean when they speak of their soul.
These arguments serve not only to show the existence of souls, but
also that they are substances. In the case of the human soul, William
argues, the operations of understanding and knowing must be treated as
proper operations of the soul, and this requires that the soul itself
be a substance. To the objection that the soul itself does not
literally understand or know but is instead that thanks to which a
human being can understand and know, William will argue that the soul
and human being, which he equates with the referent of the pronoun
"I," are one and the same thing. More generally, the
substantiality of souls--and not just of human souls--is
indicated by the fact that bodies are their instruments.
### 7.2 The Incorporeality of Souls
William argues at length that a soul cannot be a body, a conclusion he
applies not just to human or rational souls, but also to those of
plants and animals. In the case of human souls, he uses, among others,
Avicenna's "flying-man" argument to establish this
point, arguing that a human being flying in the air who lacks and
never had the use of his senses would know that he exists but deny
that he has a body. Therefore "it is necessary that he have
being that does not belong to the body and, for this reason, it is
necessary that the soul not be a body" (OO II suppl., 83a; Teske
2000, 91). (For details, see Hasse 2000.)
More generally, William argues that no body could perform the
soul's function of giving life to the body. For example, if a
body were to give life, it would itself have to be a living body, and
thus it would itself have to have a soul that rendered it alive. This
soul, in turn, would have to be a living body and thus have its own
soul, and so on *ad infinitum*. Thus, the doctrine that souls
are bodies leads to an infinite regress.
### 7.3 The Human Soul and the Human Being
Perhaps the most striking aspect of William's teaching about the
soul is his identification of the human soul with the human being and
his denial that the human body constitutes in whole or part the human
being. Instead, William describes the body variously as a house, the
soul its inhabitant; as a prison, the soul its captive; as a cloak,
the soul its wearer, and so on. Nevertheless, the definition of a
human being--he perhaps has in mind the Platonic conception of
the human being as "a soul using a body"--involves a
reference to the body, and it is this, William thinks, that has led
people to believe that the body is a part of a human being. But this
is an error akin to thinking that because a horseman is defined by
reference to a horse, the horse must therefore be a part of the
horseman. William is aware, of course, that at times we do seem to
ascribe to ourselves, and thus to the human being, the acts of the
body or things that happen to the body, but he holds that these are
non-literal modes of speaking, akin to a rich man's complaint
that he has been struck by hail, when in fact it was his vineyard that
was struck.
William, as usual, offers numerous arguments for the identification of
the soul and human being. He argues that the body cannot be a part of
a human being, since the soul and body no more produce a true unity
than, for example, a carpenter and his ax, and a human being is a true
unity. And he notes that since the soul is subject to reward and
punishment in the afterlife for actions we correctly attribute to the
human being or ourselves, it must be identified with the human being.
He also points to cases where our use of names or demonstratives
referring to human beings must be understood as referring to their
souls, and concludes that this shows that human beings are their
souls.
### 7.4 The Simplicity of the Soul
According to William, souls are among the simplest of created
substances. Nevertheless, like all created substances, they must
involve some kind of composition. This composition cannot be a bodily
composition, of course, since they are not bodies and thus cannot be
divided into bodily parts. According to Avicebron, as well as some of
William's contemporaries, souls are composite in the sense that
they are composites of form and matter, though not a matter that
involves physical dimensions, but rather what some called
"spiritual matter." This view is part of the doctrine now
termed "universal hylomorphism," according to which every
created substance is a composite of form and matter (see Weisheipl
1979). William is one of the first to reject this doctrine; he is
later followed by Aquinas. In support of his rejection, in *On the
Universe* he makes one of his rare references to Averroes, citing
with approval his claim that prime matter is a potentiality only of
perceptible substances and hence is not found in spiritual substances.
He also argues that there is no need to posit matter in angels or
souls in order to explain their receptivity in cognition, as some had
thought necessary (OO I, 851b-852b). Rather, souls and other spiritual
substances are "pure immaterial forms" without matter.
William also denies a real plurality of powers in the soul. He holds
instead that each power of the soul is identical with the soul and not
a part of it. When we speak of a power of the soul, we are really
speaking of the soul considered as the source or cause of a certain
kind of operation. Thus, the power to understand characteristic of a
human or rational soul is the soul considered as a source or cause of
the operation of understanding.
What sort of composition, then, is to be found in souls and other
spiritual beings? William holds that every being other than God
>
> is in a certain sense composed of that which is (*quod est*)
> and of that by which it is (*quo est*) or its being or entity
> ... since being or entity accrues and comes to each thing apart
> from its completed substance and account. The exception is the first
> principle, to which alone [being] is essential and one with it in the
> ultimate degree of unity. (OO I, 852a)
>
Here William uses Boethius's language of *quod est* and
*quo est* to make the point that in souls, as in all created
substances, there is, to use Avicennian terms, a composition of being
and essence, a doctrine to be developed by Aquinas.
### 7.5 Denial of a Plurality of Souls in the Human Body
Besides rejecting universal hylomorphism, William also rejects the
doctrine of a plurality of souls in a human being. This doctrine,
which was popular among Franciscan thinkers, holds that the body of a
human being is informed not just by a single soul, the rational soul,
but also by a vegetative and a sensitive soul. This multiplicity of
souls, it is held, serves to explain how human beings have the vital
operations characteristic of plants and animals in addition to
peculiarly human ones, and also explains the development of the
fetus's vital functions. Against this doctrine, William claims
to have shown earlier in *On the Soul* that not only the
operation of understanding, but also the sensitive operations of
seeing and hearing "are essential and proper to the soul
itself" (OO II suppl., 108b; Teske 2000, 164) and "hence,
it cannot be denied except through insanity that one and the same soul
carries out such operations." Rather than posit a multiplicity
of souls, William holds that higher souls incorporate the kinds of
operations attributed to lower ones. The rational soul is infused into
the body directly by God when the fetus reaches a suitable stage of
organization, and the prior animal or sensitive soul in the fetus
ceases to be. In this case too a view along similar lines was to be
developed by Aquinas.
Scholars have noted, however, that while William rejects a plurality
of souls in the human body, he does subscribe to a version of the
doctrine of a plurality of substantial forms in the sense that he
treats the human body as itself a substance independent of its
association with the soul, and hence having its own corporeal form,
while also being informed by the human or rational soul, which renders
it a living thing. (For details see Bazan 1969.)
### 7.6 The Immortality of the Soul
According to William, the immortality of the human soul would be
evident if not for the fact that the soul has, as it were, been put to
sleep by the corruption of the body stemming from the punishment for
sin. In our current state, however, William thinks the human
soul's immortality can be shown by means of philosophical
arguments, to which he devotes considerable attention, both in *On
the Soul* and in a shorter work, *On the Immortality of the
Soul*.
He believes the error that the soul is naturally mortal
"destroys the foundation of morality and of all religion"
(Bulow 1897, 1; Teske 1991, 23), since those who believe in the
mortality of the soul will have no motive for acting morally and
honoring God. Thus, he notes that if the soul is not immortal, the
honor of God is pointless in this life, since in the present life it
"involves much torment and affliction for the soul"
(Bulow 1897, 3; Teske 1991, 25) and receives no reward.
From a metaphysical point of view, William argues that the soul does
not depend on the body for its being, and therefore the destruction of
the body does not entail the non-existence of the soul. The rational
soul's independence from the body is due to the fact it is a
self-subsistent substance whose proper operations do not involve or
require a human body.
Since William takes the souls of plants and animals to be incorporeal
substances, it might be thought that he would treat them as immortal
too. But William holds that these souls cease to exist upon the death
of the plant or animal. This is because all their proper operations,
unlike those of the rational soul, depend on the body, and thus there
would be no point for their continued existence after the destruction
of the body.
### 7.7 The Powers Characteristic of the Human Soul
Characteristic of human souls, or human beings, are the intellective
and motive powers, that is to say, the intellect and will. William
thinks that of these, the will is by far the more noble power, and he
is accordingly puzzled by the fact that neither Aristotle nor Avicenna
have much to say about it. These authors do, however, discuss the
intellective power at great length, and William accordingly devotes
considerable attention to combating errors he sees in their teachings
on the intellect.
#### 7.7.1 The Motive Power: the Will
William develops his account of the will using the metaphor of a king
in his kingdom. The will "holds the position in the whole human
being and in the human soul of an emperor and king," while
"the intellective or rational power holds the place and function
of counselor in the kingdom of the human soul" (OO II suppl.
95a; Teske 2000, 126, 129). William treats acts of
will--expressed in the indicative mood by "I will"
(*volo*) or "I refuse" (*nolo*)--as a
kind of command that cannot be disobeyed, as distinguished from mere
likings expressed in the form "I would like"
(*vellem*) or "I would rather not"
(*nollem*). He holds that as a king must understand in order to
make use of his counselors' advice, so the will must not simply
be an appetitive power but must also have its own capacity to
apprehend. Likewise, the intellect must have its own desires. But he
notes in *On Virtues*, that like a king the will need not heed
the advice of its counselor; it may instead "give itself over to
the advice of its slaves, that is, the inner powers and the
senses"; in so doing it "gives itself over into becoming a
slave of its slaves and is like a king ... who follows the will
and advice of senseless children" (OO I, 122a).
William frequently emphasizes the difference between voluntary action
and the actions of brute animals. Animals operate in a
"servile" manner, in the sense that they respond to their
passions without having control over these responses. A human being,
in contrast, has power over the suggestion of its desires; its will is
"most free and is in every way in its own power and
dominion" (OO II suppl., 94a; Teske 2000, 123). This is why
human beings, unlike animals, can be imputed with blame or merit for
their deeds.
According to William, the freedom of the will consists in the fact
that the will "can neither be forced in any way to its proper
and first operation, which is to will or refuse, nor prevented from
the same; and I mean that it is not possible for someone to will
something and entirely refuse that he will it or to refuse to will
something, entirely willing that he will it" (OO I, 957aA). In
other words, William defines force and prevention of the will in terms
of higher-order volitions and refusals: for the will to be forced is
for it to refuse to will *X* and nonetheless be made to will
*X*, and for it to be prevented is for it to will to will
*X* and be made to refuse *X*. In a number of works,
William argues that neither force nor prevention of the will is
possible: if someone refuses to will *X*, this must be because
he takes *X* to be bad and thus he must refuse *X*
rather than will it; and if someone wills to will *X*, this
must be because he takes *X* to be good, and thus he must will
*X* rather than refuse it.
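William's definitions, and his argument that they cannot be satisfied, can be sketched compactly (the notation is mine, not William's): write $W(\cdot)$ for "wills" and $N(\cdot)$ for "refuses".

```latex
% Force and prevention defined via higher-order volitions:
\text{forced to will } X \;\equiv\; N(W(X)) \wedge W(X)
\qquad
\text{prevented from willing } X \;\equiv\; W(W(X)) \wedge N(X)

% William's impossibility argument: the higher-order attitude
% already fixes the first-order one, so each conjunction fails.
N(W(X)) \Rightarrow X \text{ taken as bad} \Rightarrow N(X), \text{ contradicting } W(X)
\qquad
W(W(X)) \Rightarrow X \text{ taken as good} \Rightarrow W(X), \text{ contradicting } N(X)
```

On this reconstruction, each definiens conjoins a higher-order attitude with an incompatible first-order one, which is why William concludes that neither force nor prevention of the will is possible.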
#### 7.7.2 The Intellective Power
William's discussions of the intellective power are driven by a
concern to attack errors he sees in his contemporaries and Greek and
Islamic thinkers. While his reasons for rejecting these errors are
clear enough, if at times based on a confused understanding of his
sources, his own views on the nature of cognition are less well
developed. Yet it is clear that, with the exception of our knowledge
of first principles and the concepts they involve and certain cases of
supernaturally revealed knowledge, William wishes to present a
naturalistic account of cognition in this life in which God plays no
special role.
In both *On the Soul* and *On the Universe* William
attacks at length theories of cognition that posit an agent intellect
(*intellectus agens*) or agent intelligence (*intelligentia
agens*). He incorrectly ascribes the latter theory to Aristotle,
although in fact it is Avicenna's teaching; he ascribes the
former theory to certain unnamed contemporaries. These theories view
cognition as involving the active impression of intelligible signs or
forms--what we might call concepts--on a receptive or
passive recipient. The agent intellect or agent intelligence is taken
to perform this function of impressing intelligible forms on our
material intellect, which is so-called because like matter it is
receptive of forms, albeit intelligible forms. The chief difference
between the agent intellect and agent intelligence, as far as the
theory of cognition is concerned, is that the former is treated as a
part of the human soul, while the latter is taken to be a spiritual
substance apart from the human soul, identified with Avicenna's
tenth intelligence. William notes how the agent intellect or agent
intelligence is viewed by analogy with light or the sun; as light or
the sun through its light serves to render actual merely potentially
existing colors, so the agent intellect or intelligence serves to
render actual intelligible forms existing potentially in the material
intellect.
William as usual offers a large number of objections to these
theories. He attacks as inappropriate the analogies with light or the
sun, and argues that the theory of the agent and material intellects
is incompatible with the simplicity of the soul, as it posits these
intellects as two distinct parts of the soul, one active and one
receptive. In the case of both the agent intellect and the agent
intelligence, he argues that their proponents will be forced to hold
the absurd result that human beings know everything that is naturally
knowable, whereas in fact we must study and learn and observe in order
to acquire much of our knowledge.
As for his own views on the intellective power, William holds that,
like all the powers of the soul, the intellective power in the present
life of misery has been corrupted. Therefore an account of it must
distinguish its operation in its pure, uncorrupted state from its
operation in the present life. In its pure state it is capable of
knowing, without reliance on the senses, everything naturally
knowable, even sensible things, and thus presumably also is capable of
possessing the concepts in terms of which such knowledge is
formulated. In its present state, however, most of its knowledge and
repertoire of concepts stems in some manner from sense experience.
The knowledge of which the intellective power is naturally capable
extends not just to universals, but also to singulars. For the
intellect is naturally directed at the true as the will is at the
good, and has as its end knowledge of the first truth, the creator,
who is singular. William also notes our knowledge of ourselves and the
importance of singular knowledge for our dealings with the world.
Unfortunately, he provides no theory of the nature of singular
cognition, and this issue must be left on the agenda until taken up by
thinkers such as Scotus and Ockham in the late thirteenth and early
fourteenth centuries.
William proposes a representationalist theory of cognition. He holds
that it is not things or states of affairs themselves that are in the
intellective power but rather signs or intelligible forms that serve
to represent them. He takes these signs to be mental habits, that is
to say, not bare potentialities but potentialities most ready to be
actualized. Propositions are those complexes formed from signs through
which states of affairs are represented to the mind. Although William
speaks of these signs as likenesses, he is well aware of the
difficulty in thinking of concepts as such, and he concludes that in
fact there need not be a likeness between a sign and that of which it
is a sign (OO II suppl. 214b; Teske 2000, 454).
A key question William confronts is how we acquire concepts and
knowledge in the present life. In *On the Soul* he draws an
important distinction. In the case of the first rules of truth and
morality, he speaks of God as a mirror or book of all truths in which
human beings naturally and without an intermediary, and thus without
reliance on sense experience, read these rules. Likewise, God
impresses or inscribes on our intellective power the intelligible
signs in terms of which these first rules or principles are
formulated. Thus, William posits at least some innate concepts; he
notes, for example, in *On Virtues and Morals* that the
concept of the true (*veri*) is innate (OO I, 124a). God also
reveals to some prophets in this life "hidden objects of
knowledge to which the created intellect cannot attain except by the
gift and grace of divine revelation" (OO II suppl., 211b; Teske
2000, 445).
Does William therefore hold, as some have held (Gilson 1926), that at
least in the case of these rules God in some special manner
illuminates the human intellect? This might be suggested by his
references to the intellective power as spiritual vision and his
frequent use of the language of vision and illumination in his
accounts of cognition. And yet, according to William, the first rules
of truth and morality are "known through themselves" or
self-evident. They are "lights in themselves ... visible
through themselves without the help of something else" (OO II
suppl., 210b; Teske 2000, 443), and they serve to illuminate to the
intellect the conclusions drawn from them. God's role, it would
seem, is to impress upon or supply to our intellect, in some way,
these principles, but William gives no indication that he takes God to
shed a special light on these principles in order that they may be
known: once possessed they are, as it were, self-illuminating.
The first rules of truth and morality and the concepts in terms of
which they are formulated form only a small subset, however, of our
cognitive repertoire. The remainder of our concepts and knowledge,
leaving to one side the special divine revelation made to prophets,
stems in some way from our dealings with the sensible world. But the
problem William faces is to explain how this is so. The problem, he
says, is that "sensation ... brings to the intellect
sensible substances and intellectual ones united to bodies [i.e.,
souls]. But it does not imprint [*pingit*] upon it their
intelligible forms, because it does not receive such forms of
them" (OO II suppl., 213a; Teske 2000, 449).
William's response to this problem is to hold that the senses
have the role simply of stimulating the intellective power in such a
way that it "forms on its own and in itself intelligible forms
for itself" (OO I, 914b), a doctrine he draws from Augustine.
According to this account, the intellect is fundamentally active in
the acquisition of intelligible forms. William speaks of the intellect
as able, under the prompting of the senses, to consider the substances
that underlie sensible accidents, and speaks of the intellect as
"abstracting" from sense experience in the sense that it
forms concepts of so-called "vague individuals" by
stripping away perceptible features that serve to distinguish one
individual from another. In these cases, William says, "the
intellect is occasionally inscribed by these forms that are more
separate and more appropriate to its nature" (OO II suppl.,
213b; Teske 2000, 450).
A third cognitive process William mentions, conjunction or connection,
is concerned not with the acquisition of intelligible forms, but
rather with how knowledge of one thing brings with it knowledge of
another. In particular, William argues that knowledge of a cause
brings with it knowledge of the effect.
William is particularly concerned, however, that his account of the
intellect's generation of intelligible forms will require a
division of the intellect into active and passive parts--the very
distinction between an agent and material or receptive intellect he
has devoted so much energy to attacking. The need for this division
of the intellect into two parts arises because it is impossible
for the same thing to be both active and passive in the same respect,
and yet in treating the intellect as indivisible it looks as though
William is in fact committed to its being both active and passive in
the same respect in the formation and reception of intelligible
forms.
William's attempts to resolve this problem are repeated in a
number of works and couched in highly metaphorical language that poses
severe problems of interpretation. The intellective power is both
"a riverbed and fountain of the scientific and sapiential waters
... Hence, in one respect it overflows and shines forth on
itself, and in another respect it receives such outpouring or
radiance" (OO II, suppl. 216b; Teske 2000, 457). William holds
that because on this account the intellective power is not active and
passive *in the same respect*, he is not forced to divide the
intellect into two distinct parts, one active and one passive.
## 8. William's Influence
Scholars have devoted little attention to William's influence on
later writers. Nevertheless, it is clear that he gave rise to no
school of thought, and subsequent thinkers seem to have picked and
chosen from his works the parts they would accept and the parts they
would reject. Thus, William's arguments that the world must have
a beginning probably influenced thinkers such as Bonaventure and Henry
of Ghent, while his rejection of universal hylomorphism and his
metaphysics of being and essence may have influenced Aquinas, who
presents similar views. There is also little doubt that some of
William's views were the subject of criticism, as for example
his apparent denial of genuine secondary causation (see Reilly 1953).
The large body of manuscripts in which William's works survive
suggests he was being read throughout the middle ages; Ottman 2005,
for example, lists 44 manuscripts known to contain *On the
Universe*. William's works were also published in a number
of printed editions in the 16th and 17th
centuries.
## 1. Life and Works
History is written by the winners. Most of what we know of William of
Champeaux's life and work has been refracted down to us through the
prism of a man who hated him. Peter Abelard lost almost every battle
with William, his teacher and political enemy, yet he tells us that
William was a discredited, defeated, jealous, and resentful man.
Abelard claims to have humiliated William in debate, driving him from
the Paris schools. He alleges that, in defeat, William cast himself in
the role of monastic reformer only to advance his political career by
an unearned reputation for piety. Still, even Abelard recognized that
William was no fraud, calling him "first in reputation and in fact,"
and relocating to study under his tutelage (HC; trans. Radice
1974).
On the scholarly front, Abelard presents only half the story. He brags
of forcing William to abandon a firmly-held realist theory of
universals, but, rather than come over to Abelard's vocalist or
nominalist cause, William developed a second, more sophisticated,
realist view. So what might have appeared as an expression of
intellectual honesty and academic rigor on William's part, Abelard
presents as somehow shameful. What Abelard leaves out is how much he
learned from William. Now that more of William's work has become
available, it is clear that William prodded Abelard to give up
naive vocalism and develop the complex semantic theory we now
associate with Abelard's nominalism.
The extent of William's and Abelard's involvement on opposite sides of
the major political struggles of the day is also beginning to come to
light. William was a man of considerable influence in the monastic
reform movement. Abelard was a man of considerably less influence
allied with the opposite faction. It is probably true that Abelard
attracted the better students and so precipitated William's move from
Paris to St. Victor. But far from retiring in shame, William became a
statesman, ambassador, and confidant to the Pope, all the while
retaining enough political clout to prevent Abelard from either
holding his former chair as canon of Notre Dame or establishing a
school in Paris. One recent biographer of Abelard points out that in
1119, while Abelard was recovering from his castration and was soon to
be facing charges of heresy, William was a bishop acting as papal
negotiator to the court of Emperor Henry V. Unfortunately for
William's reputation, it was Abelard whose writings captured the
imagination of readers in later centuries (Clanchy 1997: 296ff).
Very little is known about William's early life and education. The
date usually assigned for his birth is 1070, but this may be off by as
much as a decade. In 1094 leaders of the monastic reform movement
appointed William as Master of Notre Dame. If this date is correct, he
would have been only 24 at the time, and this was a crucial moment for
the reformers. In 1094 King Philip I had been excommunicated for his
"illegal marriage," and any person appointed to be Master at Notre
Dame would have to have been an established scholar with considerable
political influence, something not usually found in
24-year-olds. Thus, a date of 1060 or earlier may not be unlikely for
William's birth (Iwakuma forthcoming c).
Little is known of William's education before he took this important
post. He is known to have studied with Anselm of Laon and a certain
Manegold, but it is unclear whether this was the theologian, Manegold
of Lautenbach, or the grammarian, Manegold of Chartres. It is even
suspected that William studied with the vocalist Roscelin of
Compiegne.
Early in the first decade of the twelfth century, probably in 1104,
William became archdeacon of Paris. In 1108 the reformist party
received a setback when King Louis VI, King Philip's son by his
"illegal marriage" ascended the throne. William resigned as archdeacon
to move to the abbey of St. Victor and the Paris suburbs, where he
continued to teach and remained an influential reformer. Both Hugh of
St. Victor and Bernard of Clairvaux were among William's
proteges from this period at St. Victor.
In 1113, he was appointed bishop of Chalons-sur-Marne. He
continued to act as papal legate and negotiator and remained
influential in the reform movement, particularly through his patronage
of Bernard of Clairvaux. His continued scholarly reputation is
demonstrated in the somewhat ludicrous tale of Rupert of Deutz, who,
in 1117, set himself on a quest to challenge William of Champeaux and
Anselm of Laon to intellectual combat (Clanchy 1997: 143). On January
18, 1122, William took the habit of a Cistercian and died at Clairvaux
eight days later.
William's corpus of writings is not widely available and a great deal
more textual work needs to be done before any assessment can be made
about his broader philosophical significance. Nevertheless, his basic
philosophical commitments and some of his characteristic views are
known. The texts we do have indicate that he started his career as a
realist in matters of logic and metaphysics and that his commitment to
realism grew stronger with the appearance of the vocalists and early
nominalists. Thanks to Abelard's criticisms, William is best known for
his realist theories of universals. In response to the vocalist
challenge, he seems to have held something like the *Cratylus*
theory of language: the nature of words is intimately tied to the
nature of the things they name. In logic, some vocalists claimed that
the force of inference arises from words or ideas, but William argued
that this is to be found instead in some thing or relation between
things, which he identified as the *locus* or *medium* of
argument. He interpreted Aristotle's *Categories* as ten
general things, not ten general words as the vocalists
maintained. These and other claims suggest a broadly realist
philosophical outlook. It is possible that fragments and excerpts of
William's work passed down to us by his near contemporaries are those
in which he was debating with the vocalists, and so we do not know
whether they represent William's deepest commitments or only brief
forays into an important debate.
William's views are known mostly through references in contemporary
works, especially those of Abelard. Though several of William's works
have been identified, very few have been edited, and none has been
translated into English. The *Introductions* --
*Introductiones dialecticae secundum Wilgelmum*
(ISW) and *Introductiones dialecticae secundum magistrum G.
Paganellum* (IGP) -- are very early works, possibly dating
from William's arrival in Paris in 1094. Edited with these two
*Introductions* are two brief discussions of *media* or
*loci*, and the beginning of an early commentary on Porphyry,
all likely by William (Iwakuma 1993; ISW is also in de Rijk 1967).
These texts defend some of William's characteristic doctrines, but in
1094 the vocalist movement had not yet begun to exert any real
influence in Paris, and so it is likely that he had not yet been
forced to articulate his views in response to serious criticism.
Some of William's later works have also been identified. Although no
editions have been published, several articles contain extensive
excerpts. William is known to have written commentaries on Porphyry's
*Isagoge*, Aristotle's *Categories* and *De*
*Interpretatione*, Boethius' *De* *Differentiis*
*Topicis*, and Cicero's *De* *Inventione* and
*Rhetorica ad Herennium*. Some of his theological writings,
along with those of other leading French theologians, were compiled in
the later twelfth century under the title *Liber*
*Pancrisis.* These have been edited as the
*'Sententiae' of William of Champeaux* (Lottin
1956). The *Liber* *Pancrisis* is thought to be a
compilation of the best Parisian theology of the period, and so it is
likely to represent William's most advanced views.
## 2. Universals
The history of philosophy remembers William for his two theories of
universals, **material essence realism** and **indifference
realism**. These theories emerge from the writings of Peter
Abelard, where they are paired with Abelard's decisive arguments
against them. Indeed, Abelard's critique was so powerful that when
John of Salisbury wrote his famous catalogue of twelfth-century
theories of universals, William is not even mentioned
(*Metalogicon* II 17-20; Abelard LI Por.: 10-16, trans. Spade
1994: 29-37; see also the entries on John of Salisbury and Peter
Abelard, and King 2004).
Material essence realism proposes that there are ten most general
things or essences: one most general thing corresponding to each of
Aristotle's ten categories. These essences are universal things:
> It should be seen that there are ten common things which are
> the foundations of all other things and are called the most general
> things--as for example this common thing, substance, which is
> dispersed through all substances, and this thing, quantity, which is
> dispersed in all quantities and so on. And just as there are ten common
> things which are the foundations of all other things, so also there are
> ten words which, thanks to the things they signify, are said to be the
> foundations of all other words (C8; Marenbon 1997: 38).
These ten genera exist and are to some degree unformed. They are
formed into subalternate genera and species by the addition of
differentia. Species are formed into individuals by the addition of
accidental forms: "a species is nothing other than a formed genus, an
individual nothing other than a formed species" (P3; Marenbon 2004:
33; Iwakuma 2004: 309; Fredborg 1977: 35). This is how the view gets
its name. The genus exists as the matter. The difference forms the
genus "into a sub-altern genus which in turn becomes the matter for an
inferior sub-altern genus or species. The addition of accidental forms
divides the species into discrete individuals" [*et fiant res
discretae in actu rerum*] (C8; Iwakuma 1999: 103). The
individuals in a species or genus thus share a single material
essence.
Everything in the created world is an accidentally differentiated
individual, but this is an incidental feature of the created world,
and by no means a commitment to concretism. Universal essences exist;
they are simply never found except as accidentally differentiated, qua
individuals.
> In actuality genera and species have their being in
> individual things. I can, however, consider by reason the same thing
> which is individuated with its accidents removed from its make-up, and
> consider the pure simple thing, and the thing considered in this way is
> the same as that which is in the individual. And so I understand it as
> a universal. For it does not go against nature for it to be a
> pure thing if it were to happen that all its accidents were removed.
> But because it will never happen in actuality that any thing exists
> without accidents, so neither in actuality will that pure universal
> thing be found. (P3; Marenbon 2004: 33)
This mental exercise of stripping away forms does not merely produce a
universal concept. It reveals the underlying metaphysical
reality. There is a real universal thing corresponding to our
universal concepts. It is this principle of a single universal
substance individuated by accidents that Abelard reduces to
absurdity.
When faced with Abelard's arguments against material essence realism,
William gave up his belief in universal essences but refused to accept
that universals are simply words or concepts. Indifference realism
rejects the core principle of material essence realism by rejecting
the notion that there are shared essences and holding that individuals
are completely discrete from one another.
The words 'one' and 'same' are ambiguous,
William says: "when I say Plato and Socrates are the same I might
attribute identity of wholly the same essence or I might simply mean
that they do not differ in some relevant respect." William's newfound
ambiguity is the seed of his second theory of universals. The stronger
sense of 'one' and 'same' applies to
Peter/Simon and Saul/Paul (we would say Cicero/Tully), who "are one
and the same according to identity" (Sen 236.123). As for Plato and
Socrates:
> We call them the same in that they are men [*in hoc quod
> sunt homines*], ['same'] pertaining with regard to
> humanity. Just as one is rational, so is the other; just as one is
> mortal, so is the other. But if we wanted to make a true confession, it
> is not the same humanity in each one, but a similar [humanity],
> since they are two men. (Sen 236.115-120)
So, although Plato and Socrates have no common material--matter,
form, or universal essence--they are still said to be the same
because they do not differ. This leads to the claim that Abelard finds
so disturbing. Each individual is both universal **and**
particular:
>
> Those things which *per se* are considered many and wholly
> diverse in essence are one considered in general or specific nature.
> That is, they do not differ in being man (*esse*
> *hominem*).
>
>
>
>
> One Man is many men, taken particularly. Those which are one
> considered in a special nature are many considered particularly. That
> is to say, without accidents they are considered one *per*
> indifference, with accidents many.
>
>
>
>
> It should never be said that many men make one Man. Rather it should
> be said that many men agree in being, in what it is to be a man
> (*in esse hoc quod est esse homo*). Nevertheless, they are
> wholly diverse in essence (P14; Iwakuma 1999: 119).
>
>
>
Indifference realism is not a complete departure from material
essence realism because William still accepts accidental
individuation. When the accidents are stripped away, Plato and
Socrates are still one and the same although in the weaker sense of
'one' and 'same'. They do not share a material
essence, though they each have the same state or *status* of
being a man. William's indifference realism holds that when
individuating accidents are stripped away from two individuals, what
you are left with may be numerically distinct but it is not
individually discernible (you can't tell which one is Socrates). What
is left then are pure things--there are no individuating
characteristics. In this sense, each thing is itself a universal.
## 3. Logic and Philosophy of Language
With so few of William's works identified and even fewer properly
edited and published, any serious discussion of his views on logic is
years away. What follows is a brief presentation of William's thoughts
on various issues. In many cases, these are gleaned from references in
other texts, mostly by Abelard. These references indicate that
William was a wide-ranging and serious thinker, but much as with the
Pre-Socratics, we have seen just the tip of the iceberg as far as
his actual writings are concerned.
### 3.1 Signification
Logic is the art of discerning truth from falsehood and of making and
judging arguments (ISW I 1.1; IGP I 1.1). The study of logic must
therefore begin with the study of words. The *Introductions*,
ISW and IGP, first define sounds, significant sounds, words, phrases
(*orationes*), and sentences, then proceed to detailed
discussion of complex hypothetical and categorical sentences and
syllogisms. William's approach here became a model for twelfth-century
logic textbooks.
William accepts the standard medieval definition of signification: a
sound is significant if it generates an understanding in the mind of a
hearer. Judging by William's discussion, there was some debate at the
end of the eleventh century as to whether signification required that
a spoken word actually, as opposed to merely potentially, generate an
understanding in the mind of a hearer. Actual signification is too
strong a criterion, but every sound potentially generates an
understanding, even if only in the mind of the person uttering
it. William argues that once a word is imposed--and a convention
established--the word is significant because it is apt to signify
whenever it is uttered (Iwakuma 1999: 109; forthcoming b). William's
views on the conditions for imposing words are not yet fully
known. The vocalists seem to have held that absent a linguistic
convention, there is no connection between any sound and what it is
imposed to signify, a view shared by Abelard (see Guilfoy 2002). On
the other hand, William held that "there is such an affinity between
words and things that words draw their properties from things and so
the nature of words is shown more clearly through the nature of
things" (P14; Marenbon 1996: 6).
William clearly invites controversy by claiming that the only
significant sounds, that is, the only words, are those that are
imposed to name presently existing things:
>
>
> That word is significant which is imposed on an existing thing, like
> 'Man'; that word is not significant which is imposed on a
> non-existing thing, like 'chimaera', 'blictrix', and
> 'hircocervus' (IGP I 2.2).
>
>
>
> A significant word is one whose *significatum* is found among
> existing things (ISW I 1.4).
>
>
>
To claim that 'chimaera' is equivalent to
'blictrix' implies that 'chimaera' is equally
meaningless. The view that significant words must name existing things
presents obvious and difficult cases because we can utter true or
false sentences about things that never existed or that no longer
exist. In such cases, William claims that words have a figurative
signification. 'Chimaeras are imaginary' figuratively
expresses the sentence 'Some mind has the imagination of a
chimaera' (Abelard D 136.32; Kneale and Kneale 1962:
207). 'Homer is a poet' is properly understood
figuratively as 'Homer's work, which he wrote in his role as
poet, exists' (H9; Iwakuma 1999: 113; Abelard D 136.14ff,
168.11ff).
Abelard attributes to William the view that words signify all things
that they were imposed to name (Abelard D 112.24). Other texts
indicate that William might have believed this. An anonymous text
attributes to William the view that 'all' signifies all
things simultaneously (H13; Iwakuma 1999: 111). However, it is also
possible that Abelard misunderstood or misrepresented William (Iwakuma
1999: 107).
It is reported that William held a similar view of infinite terms.
William is said to have argued that an infinite term, e.g.,
'non-man', signifies all those things that are not men
(H13; Iwakuma 1999: 109). However, William's own discussion of
infinite terms is available. In the *Introductions* William
argues that the signification of infinite terms can be taken
affirmatively, negatively, or correctly (ISW I 4.2). Taken
affirmatively 'non-man' positively signifies every thing
that is not a man: each rock, flower, and squirrel. The affirmative
account allows one to substitute 'stone' for
'non-man' in syllogisms with absurd results. Taking
infinite terms negatively solves this problem by claiming that
infinite terms do not signify any existing thing, but William cannot
accept this because it would mean that infinite terms are not
significant. Rather, he argues that the imposition of infinite terms
is related to the imposition of their correlative terms.
> 'Animal' and 'man' signify the
> very same thing, animal and man, by imposition [*ponendo*],
> 'non-animal' and 'non-man' by remotion
> [*removendo*]. But because infinite terms signify by remotion,
> there is nothing which can be concluded by imposition. So it is not
> possible to conclude 'stone' from 'non-animal'
> (ISW I 4.2).
Because the remotive imposition of the infinite term
'non-animal' is related to the positive imposition of the
term 'animal', the infinite term does not signify stone in the
way the word 'stone' would. So although William holds that
the same thing is signified, the mode of signification would preclude
substitution of positive and infinite terms.
Abelard criticizes William for "doing such abuse to language" that he
allowed 'rational' and 'rational animal' to
signify the same thing (Abelard D 541.24). Again, another anonymous
source confirms that William thought that a definition and the term
defined, that is 'rational mortal animal' and
'man', signify the same thing (Green-Pedersen 1974,
frag. 6). Both are consistent with William's theories of universals
and may reflect William's commitment to words signifying presently
existing things. William may well have thought that
'rational' and 'rational animal' have the same
signification because each thing signified by 'rational'
is signified by 'rational animal', and conversely. Such a view
would involve a fairly simplistic theory of meaning, but this alone
does not mean it was not William's view.
Some aspects of William's theory of signification are considerably
more complex. He developed a theory of two modes of signification of
quality terms:
>
> One mode by imposition another by representation. 'White'
> and 'Black' signify the denoted substance by imposition,
> because they are imposed on this substance. They signify the
> qualities whiteness and blackness by representation. The substance
> they signify according to imposition is signified secondarily. They
> principally designate those qualities which they signify by
> representation (C8; Iwakuma 1999: 107).
Abelard confirms that William made this sort of distinction. As
Abelard describes it, William held that 'white' signifies
the white individual and also whiteness. 'White' signifies
the subject (*fundamentum*) by denoting (*nominando*)
it. It signifies whiteness by determining it in the subject
(*determinando circa fundamentum*) (Abelard LI Top:
272.14). But at the present time, not enough material is available to
begin to flesh out William's account of signification by denotation
and signification by determination.
### 3.2 Multiple Senses of Sentences
William had several different views on the interpretation of different
kinds of sentences. That he held the views is not much in dispute;
they are attributed to him by several contemporaries and some are
found in his own works. But the possible philosophical relation
between them is a matter of speculation.
William held that sentences have two senses, a grammatical sense
and a dialectical sense. Taken in the grammatical sense, the verb in
sentences such as 'Socrates is white' marks an
intransitive copulation of essence: the same thing is denoted by the
subject and predicate. In the dialectical sense, the verb in
'Socrates is white' marks a predication of inherence: what
is denoted by the predicate inheres in what is denoted by the
subject. To the grammarian, 'Socrates is white' and
'Socrates is whiteness' have different senses and
different truth conditions. 'Socrates is white' says that
Socrates is a subject of whiteness. 'Socrates is
whiteness' says that Socrates is essentially whiteness. To the
dialectician, however, the two sentences have the same sense, since
they both say that whiteness inheres in Socrates. The dialectical
sense is more general (*generalior*, *largior*,
*superior*) because it does not distinguish between essential
and accidental inherence. But the grammatical sense is more precise
(*determinatior*). Taken in the dialectical sense,
'Socrates is white' and 'Socrates is
whiteness' would both be true. So, while the dialectical sense
has some use for dialecticians, sentences are true and false in the
grammatical sense (Abelard LI Top: 271-273; Green-Pedersen 1974: Frag 9; de Rijk
1967 II.I: 183-85).
According to William, maximal propositions contain or signify all the
propositions under them. Thus, the maxim, 'Of whatever the
species is predicated the genus is also predicated' signifies or
contains the sentences 'If it is a man it is an animal',
'If it is a rock it is a substance', etc. Only the most general
form is truly a maximal proposition, but it is the more precise
(*certior*) sentences contained under the maxim that provide
the inferential force for particular arguments. William may have
developed this view in response to the vocalists. Abelard explains
that multiple senses are needed given William's view that things, not
words, warrant inference; only the more precise formulations can
signify the things themselves (Abelard LI Top: 231.26-238.34). An
anonymous author confirms Abelard's understanding of William's
view. The argument, 'Socrates is a man therefore Socrates is an
animal' is warranted by the topical maxim noted above. It is not
the maxim itself but the more precise version contained in it,
'If it is a man it is an animal', that provides the force for
this particular inference. According to this text, William argued that
it is not the words (*voces*) or the sense
(*intellectus*) of the more precise sentence that provides the
inferential force, but the fact that the more precise sentence
signifies relations (*habitudines*) between things
(Green-Pedersen 1974: frags 8, 10, 12).
William divides the standard A, E, I, and O sentences according to
what he calls the matter of the sentence (*materia propositionis*).
The matter of the sentence is determined by the
relationship between what is signified as the subject and the
predicate. Each standard-form sentence is found in one of three
matters: natural, contingent, or remotive. In a sentence of natural
matter the predicate inheres universally in the subject: 'All
men are animals'. In a sentence of contingent matter the
predicate inheres in the subject, but not universally: 'All men
are white'. In sentences of remotive matter the predicate in no way
inheres in the subject: 'All men are stones'. The validity
of direct inferences between contraries, subalterns, etc. depends on
the matter of the sentences involved. In all matter, contraries cannot
both be true. So regardless of the matter, if an A sentence is true
the corresponding E sentence is false. In natural and remotive matter,
if either the A or E sentence is false the other must be true: if
'No men are animals' is false then 'All men are
animals' must be true. However, in contingent matter both
sentences may be false. It is possible for both 'All men are
white' and 'No men are white' to be false. William
gives a similar treatment for contradictories, subcontraries, and
subalterns (ISW I 3-3.3; II 1.1-1.3; IGP I 5.4-5.6).
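William's claims about the three matters can be checked in a toy extensional model. This is a modern, anachronistic sketch, not anything in William: the helper functions and the sample extensions below are invented for illustration.

```python
# Evaluate the four standard sentence forms (A, E, I, O) over explicit
# set extensions; the domains below are illustrative examples only.

def A(S, P): return S <= P        # All S are P
def E(S, P): return not (S & P)   # No S is P
def I(S, P): return bool(S & P)   # Some S is P
def O(S, P): return not (S <= P)  # Some S is not P

men     = {"socrates", "plato"}
animals = {"socrates", "plato", "fido"}
white   = {"socrates"}            # some, but not all, men are white
stones  = {"rock"}

# Natural matter: the predicate inheres universally ('All men are animals');
# the A sentence is true, so its contrary E is false.
assert A(men, animals) and not E(men, animals)

# Remotive matter: the predicate in no way inheres ('All men are stones');
# here E is true and A false.
assert E(men, stones) and not A(men, stones)

# Contingent matter: as William says, the contraries A and E can BOTH be
# false ('All men are white' and 'No men are white').
assert not A(men, white) and not E(men, white)
```

The last pair of assertions is exactly William's point: direct inference from the falsity of an A sentence to the truth of the corresponding E sentence works only in natural and remotive matter.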
### 3.3 Argument, Conditionals, and Loci
Contemporary logicians recognize a difference between arguments and
their corresponding conditionals. But they accept that an
argument:
>
> (1) Premise 1; Premise 2; therefore, Conclusion
>
and its corresponding conditional:
>
>
> (2) If (Premise 1 and Premise 2) then Conclusion
are equivalent by the deduction theorem. (2) in turn is interderivable
with:
>
> (3) If Premise 1 then (if Premise 2 then Conclusion)
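The interderivability of (2) and (3) is the familiar currying (exportation) equivalence of modern propositional logic. A brute-force truth-table check, offered here as a modern verification rather than anything William or his contemporaries would have accepted, confirms it:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# (2) If (Premise 1 and Premise 2) then Conclusion
# (3) If Premise 1 then (if Premise 2 then Conclusion)
# The two agree on every assignment of truth values.
for p1, p2, c in product([False, True], repeat=3):
    assert implies(p1 and p2, c) == implies(p1, implies(p2, c))
```

Twelfth-century logicians, as noted below, did not treat implication materially, which is precisely why these relationships were disputed rather than taken as trivial.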
These logical relationships were all in dispute during the twelfth
century. (Iwakuma 2004b presents these distinctions and discusses
William's role in the debate.) Logicians at that time attributed much
more significance to the distinction between arguments and
conditionals, and most did not accept what we would now recognize as
the formal rules of material implication. Along these lines, several
views were attributed to William as characteristic of his logical
program, but his arguments in the available texts do not reflect what
others say about him. So, once again, the extent of William's
influence is unclear.
William accepted that (1) and (2) are equivalent, but not for any
metalogical reason. He thought that syllogisms were phrases
(*orationes*) that contain the senses of several other
sentences. The view attributed to William is that syllogisms and
arguments are, as he puts it "subcontinuatives"
[*subcontinuativa*] containing other sentences. 'Socrates
is a man; therefore, Socrates is an animal' contains the sense
of the sentences 'Socrates is a man', 'Socrates is
an animal' and, 'If Socrates is a man, then Socrates is an
animal' (B9 2.1 ed. Iwakuma 2004b: 102; this is the same text as
Green-Pedersen 1974, frag 2). William calls the contained sentences
'continuatives' [*continuativa*]. He also treats
all syllogisms as conditionals of type (2) rather than type (3). There
is no argument for his preference, at least not in the *Introductions*,
and none of the available sources give his reasons. At present all
that can be said is that William seems to have had a hand in the
origins of a debate that raged during the twelfth century and was not
revived until the twentieth.
William introduces several novelties in his discussion of loci. Most
significantly, he limits his discussion of the traditional Boethian
topics to the loci from the whole, part, opposites, equals, and
immediates. He also introduces loci from the subject and from the
predicate into his discussion of categorical arguments. Both moves
were controversial.
William sometimes refers to the locus as 'the medium', by which
he does not necessarily mean the middle term but rather the thing that
acts as a link between the extremes of an argument. He also called the
locus 'the argument', by which he means that the locus provides
the argumentative or inferential force. But whichever name he chooses,
William's realist commitment is evident.
> The same thing is the argument and the locus according to
> Master W., but the locus is the force of the argument [*sedes*
> *argumenti*], and so this thing is the force of the
> argument. (Green-Pedersen 1974: frag 1)
In the *Introductions*, William calls loci "words" [*voces*], but
they are words denoting the things themselves or their relations
(*habitudines*) (IGP I 7). But William is not clear about the
importance of this distinction between things and relations. Fragments
of his later work report that he was quite careful to distinguish his
view from any brand of vocalism. He distinguished the word
(*vox*), the understanding or meaning of the word
(*intellectus*), and the thing itself denoted by the word, and
held that the thing itself is the locus (Green-Pedersen 1974: frags
3, 4).
In categorical syllogisms, William introduces the locus from subject,
predicate, both subject and predicate, and from the whole (ISW II 2.2;
IGP I 8ff; see Stump 1989: 117ff). Thus, the argument
>
> All men are animals
>
> No animal is a stone
>
> Therefore, no man is a stone
>
is warranted by the locus from the predicate. The predicate of the
first sentence is the link or medium between the extreme terms in the
conclusion. In this case, *animal* (presumably the universal
thing itself) is the locus or seat of the argument. The rule is, if
something is predicated of some subject universally, then whatever is
removed universally from the predicate is removed universally from the
subject (William provides a lengthy set of rules describing the
logical relations that hold between subjects and predicates). As
described above, William holds that such maximal propositions contain
the sense of those propositions that fall under them. The rule
contains sentences signifying the relations between man and animal and
between animal and stone (IGP I 8.4). The relations of *animal*
to the extremes warrant the inference.
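Read extensionally, the rule William states for the locus from the predicate is just the mood Celarent, and it can be pictured with sets. A minimal sketch, assuming modern set-theoretic semantics; the extensions are invented for illustration:

```python
# William's rule, read extensionally: if P is predicated universally of S
# (S is a subset of P), then whatever is removed universally from P
# (P and R are disjoint) is removed universally from S (S and R are disjoint).

men     = {"socrates", "plato"}
animals = {"socrates", "plato", "fido"}
stones  = {"rock"}

assert men <= animals            # All men are animals
assert not (animals & stones)    # No animal is a stone
assert not (men & stones)        # Therefore, no man is a stone
```

For William, of course, what warrants the inference is not this set-theoretic fact but the thing *animal* itself, the medium linking the extreme terms.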
William provides a similar account of hypothetical syllogisms. He
introduces several loci and rules, but these are not formal
rules. What ultimately warrants any hypothetical syllogism is the same
sort of derivative signification of an extra-mental thing or
relation. Provided Socrates is risible, 'Socrates is a
man' is true. The following hypothetical syllogism is warranted
by the locus from the consequent:
>
> (Socrates is a man → Socrates is an animal) → (Socrates is
> risible → Socrates is an animal)
The rule is: whatever follows from the consequent follows from the
antecedent. In this case, the consequent is 'Socrates is a
man', and the antecedent is 'Socrates is
risible'. This looks like a formal rule:
>
> (A→B)→((B→C)→(A→C))
>
but William does not present it as such. 'Socrates is risible
→ Socrates is a man' is warranted by the locus from equals,
and 'Socrates is a man → Socrates is an animal' is
warranted by the locus from the part (IGP I 9.2). The conclusion,
'Socrates is risible → Socrates is an animal', is
arrived at via topical reasoning, not by formal rules of
inference. This is the basis for much of the later criticism of his
work. Abelard in particular derives contradictions from his views by
treating William's loci as formal rules of inference (see Martin
2004).
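Treated as Abelard would treat it, as a formal rule of material implication, the schema above is in fact a tautology. An exhaustive truth-table check, again a modern verification rather than anything in William's texts:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# (A → B) → ((B → C) → (A → C)): "whatever follows from the consequent
# follows from the antecedent", read formally. True on every assignment.
for a, b, c in product([False, True], repeat=3):
    assert implies(implies(a, b),
                   implies(implies(b, c), implies(a, c)))
```

The formal validity of the schema is not what was at issue between Abelard and William; the dispute was over whether topical relations between things, rather than formal rules, supply the inferential force.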
### 3.4 Modality
Abelard reports that William thought that all modality was *de
dicto* (or, in Abelard's terminology, *de sensu*)
>
>
> Our teacher taught that modal sentences descend from simple sentences
> because modal sentences are about the sense of simple sentences. So
> that when we say 'It is possible (or necessary) that Socrates
> runs', we are saying that what the sentence 'Socrates
> runs' says is possible or necessary. (Abelard D 195.21; see also
> Mews 2005: 49; Knuuttila 1993: 87; Kneale and Kneale 1962:
> 212).
The extent to which this attribution is accurate is unclear. The view
is at least superficially inconsistent with William's realism and
anti-vocalism in all other areas. Additionally, Abelard was among the
first to recognize the importance of the *de dicto* / *de
re* distinction; the issue may be fairly confused in William's
work, as it was in many others'. On the other hand, several anonymous
texts attribute to William the claim that 'possibly' and
other modal terms modify the signification of other words in a
sentence. Thus, 'bishop' signifies actual and existing
bishops, but 'possible bishop' has a figurative sense
signifying all those things whose nature is not repugnant with being a
bishop (Iwakuma 1999: 112). This is not necessarily a rejection of
*de re* modality, but again, more of William's work needs to be
studied.
## 4. Ethics
William's ethics starts with two generally accepted early medieval
claims: (1) evil is not any positively existing thing but only the
privation of good (Sen 277.1); (2) every act or event that occurs is
performed or condoned by God and therefore good (Sen 277.11-15). These
lead William, among others, to situate moral good and evil in the
human mind.
To this end, William names the elements of a fairly complex moral
psychology: vice, desire/lust, pleasure, will, intention, and consent.
Any or all of these elements might play a role in sinful behavior.
Vice itself is not necessarily bad, since the vices are not acquired
habits but inborn dispositions. Vices incline us to bad behavior but
do not necessarily compel the will: there is no sin unless we consent
to the behavior the vice inclines us toward (Sen 278.10). Carnal
desire, lust, and pleasure, on the other hand, are always morally
bad. William writes extensively about sexual lust and pleasure,
especially as they are involved in the transmission of original
sin. Before the Fall, sex was no more pleasurable than "putting your
finger in your mouth" (Sen. 254). In our fallen condition, however,
lust and pleasure can never be removed from the act. Because our
"ability" to experience this pleasure is the result of original sin,
the pleasure itself is always culpable. Consent to a sexual act under
the proper circumstances--e.g., between lawful spouses for the
purpose of procreation--only lessens the gravity of the sin (Sen
255; 246). In William's moral psychology, will, intention, and
consent are the undifferentiated elements of voluntary action. These
play the most significant role, but sin is not exclusively in the
will, consent, or intention.
Our best hope to avoid sin is to will what is right for the right
reasons. However, when it comes to discovering what is right, and
therefore what ought to be willed, we start out at a disadvantage. The
human mind naturally has the power to discern good and evil (Sen
253.10; IGP II 4), but one of the effects of the Fall is that our
rational capacities have been clouded and diminished. Before the Fall
the senses were subject to reason; after the Fall this is reversed
(Sen 253.12-15), for since then the mind has become more closely
attached and even enslaved to the body. This is the *fomes* of
sin, a carnal weakness that clouds our reason and makes us more
inclined to sin (Sen 246.39). Fear, not reason, is the key for
William, since "fear is the beginning of all wisdom"
(Sen. 276.1). William describes three kinds of fear (Sen
276.17-34). First is the natural fear of danger and pain; even Jesus
was subject to this fear. Second is fear of losing material things or
the fear of punishment in hell. This fear shows that one values
comfort and material things more than goodness and justice. People
motivated by this second fear will act rightly out of fear of losing
material goods or fear of the torments of hell, but not out of good or
right intention: "he does not merit grace who serves not his love of
justice but his love of things or his fear of punishment" (Sen
261.23-25). Third is the fear that arises from respect for God's
justice and power. This respectful fear of God, with knowledge of our
own weakness and fallibility, "is better called the love of God" (Sen
276.39-43). This humble fear of God should be our guide to what is
right.
William's influence on Abelard here is obvious. Primarily, William and
perhaps others from the school of Anselm of Laon were responsible for
delineating the complex moral psychology that is at the core of
Abelard's ethics. The major differences are with regard to reason and
responsibility. Because of our fallen state, William has little regard
for the ability of human reason to correctly discern what is
good. William is also content to prove that evil is no thing by
situating its source somehow in the human mind. He is not interested
in showing that we are responsible, only that we, and not God, are
accountable. Our will, like our mind and body, is defective and we
suffer the consequences of its failures. It is Abelard who takes this
moral psychology and develops from it a theory of moral
responsibility.
## 5. Philosophical Theology
William is committed to the belief that the mysteries of faith are
beyond the scope of human reason, but this does not prevent him from
discussing the issues philosophically and using his skill as a
logician to prove his point.
The problem of free will and divine foreknowledge is familiar: if
God's foreknowledge is infallible, then future events, including the
actions of human beings, all happen of necessity; on the other hand,
if future events, including the actions of human beings, could occur
otherwise, then God's foreknowledge would be fallible. William denies
both propositions. With regard to the first he offers two
arguments. He claims that God foresees the whole range of possible
choices and infallibly foreknows which option free creatures will
choose (Sen 237; 238). The event itself is not necessary; in fact, God
infallibly foresees that the event is not necessary. He then adds an
interesting claim about future contingent propositions: they are
determinately true or false, but it is the infallibility of divine
foreknowledge that makes them so. The future event itself which the
proposition is about does not yet exist and is indeterminate
(*eventus rerum de quibus agitur indeterminate*) (Sen
238.36). William seems to hold both that the future event itself is
indeterminate and that propositions about it are determinately true or
false.
The second proposition, 'if future events, including the actions
of human beings, could occur otherwise, then God's foreknowledge is
fallible', was interpreted in several ways by William's
contemporaries. Some held that if the event could occur otherwise God
would be fallible. Others held that because the event could occur
otherwise God actually is fallible (see Mews 2005: 135). The former
would reject the claim that the event could occur otherwise, but the
latter would simply reject divine infallibility, arguing rather that
God is just very lucky epistemically. Both views are theologically
suspect, of course, and William argued that both inferences are
invalid. But he does so by introducing an unexplained modal intuition
that in context looks question-begging: the inference from 'the
event could occur otherwise' to 'God is or could be
deceived' is not necessary. It is possible that the former be
true and the latter false because "it is never necessary that the
impossible follow from the possible, and it is impossible for God to
be, or to have been, deceived" (Sen 237.68).
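William's intuition can be glossed, anachronistically, in modern modal notation (the gloss is this entry's, not William's own formalism): from \(\Diamond p\) ('the event could occur otherwise') and \(\neg\Diamond q\) ('it is impossible for God to be deceived') it follows that \(\neg\Box(p \rightarrow q)\), since any normal modal logic validates \(\Box(p \rightarrow q) \wedge \Diamond p \rightarrow \Diamond q\). A possible proposition, in other words, never strictly implies an impossible one, which is just William's claim that the contested inference is not necessary.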
William deftly avoids the overreaching that got philosophers like
Roscelin and Abelard into such trouble. He outlines a theory of
metaphorical language for describing God, and argues that the
fundamental relations of the Trinity are beyond the powers of the
human mind to comprehend.
Predicates such as 'good', 'just', etc., when applied to God have a
metaphorical rather than their usual literal meaning. As described
above, William argued that all significant words are imposed to name
existing things in the world and these words generate sound
understandings of those things they were imposed to signify. When we
impose the word 'just' or 'good' to name a
just or good person, we do so because justice or goodness is in some
way a quality of that person or her actions. God, on the other hand,
is goodness itself and justice itself. The dignity and power of God
cannot be grasped by human reason and so we cannot even conceive of
God sufficiently to accurately impose words that name divine
attributes: "words imposed for human use are used metaphorically for
speaking about God" [*ad loquendum de Deo transferuntur*] (Sen
236.46).
Whenever he discusses the Trinity, William associates the Father with
power, the Son with wisdom and the Holy Spirit with love. But this
metaphor is not intended to help us toward rational understanding of
the Trinity, or to provide a clear delineation of properties of the
three persons in any way that would allow an explanation of their
relations and differences. William argues that there is nothing like
the Trinity in the created world and so there is nothing the human
mind is able to comprehend that could be used as an analogue for
explaining the fundamental relation of the Trinity, three persons
irreducibly present in one substance: "I do not see by what quality I
would be able to explain this, since nothing similar can be found in
the nature of any thing" (Sen 236.86). William rejects Augustine's
metaphors of the sun and sunshine and the mind and reason. Sunshine is
an accident of the air and is no part of the substance of the
sun. Likewise, reason is a power of the mind and not its substance
(Sen 236.101). William also explains that neither of his theories of
universals can be applied to explain the sameness and difference of
the divine persons. Material essence realism would imply that the
persons in the Godhead are accidentally individuated, and this is
unacceptable. The indifference theory would require the non-identity
of three separate substances, which is also contrary to the teaching
of the faith. What is William's ultimate conclusion? "Because no
likeness <to any created thing> could be described, the Trinity
must be defended by faith alone" (Sen 236.91).
## 1. Life
Ockham led an unusually eventful life for a philosopher. As with so
many medieval figures who were not prominent when they were born, we
know next to nothing about the circumstances of Ockham's birth
and early years, and have to estimate dates by extrapolating from
known dates of events later in his
life.[1]
Ockham's life may be divided into three main periods.
### 1.1 England (c. 1287-1324)
Ockham was born, probably in late 1287 or early 1288, in the village
of Ockham (= Oak Hamlet) in Surrey, a little to the southwest of
London.[2]
He probably learned basic Latin at a village school in Ockham or
nearby, but this is not
certain.[3]
At an early age, somewhere between seven and thirteen, Ockham was
"given" to the Franciscan order (the so called
"Greyfriars").[4]
There was no Franciscan house (called a "convent") in the
tiny village of Ockham itself; the nearest one was in London, a
day's ride to the northeast. It was there that Ockham was
sent.
As an educational institution, even for higher education, London
Greyfriars was a distinguished place; at the time, it was second only
to the full-fledged Universities of Paris and Oxford. At Greyfriars,
Ockham probably got most of his "grade school" education,
and then went on to what we might think of as "high
school" education in basic logic and "science"
(natural philosophy), beginning around the age of fourteen.
Around 1310, when he was about 23, Ockham began his theological
training. It is not certain where this training occurred. It could
well have been at the London Convent, or it could have been at Oxford,
where there was another Franciscan convent associated with the
university. In any event, Ockham was at Oxford studying theology by at
least the year 1318-19, and probably the previous year as well,
when (in 1317) he began a required two-year cycle of lectures
commenting on Peter Lombard's *Sentences,* the standard
theological textbook of the day. Then, probably in 1321, Ockham
returned to London Greyfriars, where he remained. Although he had
taken the initial steps in the theology program at Oxford (hence his
occasional nickname, the *Venerabilis Inceptor*,
"Venerable Beginner"), Ockham did not complete the program
there, and never became a fully qualified "master" of
theology at Oxford. Nevertheless, London Greyfriars was an
intellectually lively place, and Ockham was by no means isolated from
the heat of academic controversy. Among his "housemates"
were two other important Franciscan thinkers of the day, Walter
Chatton and Adam Wodeham, both sharp critics of Ockham's views.
It was in this context that Ockham wrote many of his most important
philosophical and theological works.
In 1323 Ockham was called before the Franciscan province's
chapter meeting, held that year in Bristol, to defend his views, which
were regarded with suspicion by some of his confreres. About the same
time, someone--it is not clear who--went from England to the
Papal court at Avignon and charged Ockham with teaching
heresy.[5]
As a result, a commission of theologians was set up to study the
case. Ockham was called to Avignon in May, 1324, to answer the
charges. He never went back to England.
### 1.2 Avignon (1324-28)
While in Avignon, Ockham stayed at the Franciscan convent there. It
has sometimes been suggested that he was effectively under
"house arrest," but this seems an exaggeration. On the
contrary, he appears to have been free to do more or less as he
pleased, although of course he did have to be "on hand" in
case the investigating commission wanted to question him about his
writings. The investigation must not have demanded much of
Ockham's own time, since he was able to work on a number of
other projects while he was in Avignon, including finishing his last
major theological work, the *Quodlibets*. It should be pointed
out that, although there were some stern pronouncements that came out
of the investigation of Ockham, his views were never officially
condemned as heretical.
In 1327, Michael of Cesena, the Franciscan "Minister
General" (the chief administrative officer of the order)
likewise came to Avignon, in his case because of an emerging
controversy between the Franciscans and the current Pope, John XXII,
over the idea of "Apostolic poverty," the view that Jesus
and the Apostles owned no property at all of their own but, like the
mendicant Franciscans, went around begging and living off the
generosity of others. The Franciscans held this view, and maintained
that their own practices were a special form of "imitation of
Christ." Pope John XXII rejected the doctrine, which is why
Michael of Cesena was in Avignon.
Things came to a real crisis in 1328, when Michael and the Pope had a
serious confrontation over the matter. As a result, Michael asked
Ockham to study the question from the point of view of previous papal
statements and John's own previous writings on the subject. When
he did so, Ockham came to the conclusion, apparently somewhat to his
own surprise, that John's view was not only wrong but outright
heretical. Furthermore, the heresy was not just an honest mistake; it
was *stubbornly* heretical, a view John maintained *even
after he had been shown it was wrong*. As a result, Ockham argued,
Pope John was not just teaching heresy, but was a heretic himself in
the strongest possible sense, and had therefore effectively abdicated
his papacy. In short, Pope John XXII was no pope at all!
Clearly, things had become intolerable for Ockham in Avignon.
### 1.3 Munich (1328/29-47)
Under cover of darkness the night of May 26, 1328, Michael of Cesena,
Ockham, and a few other sympathetic Franciscans fled Avignon and went
into exile. They initially went to Italy, where Louis (Ludwig) of
Bavaria, the Holy Roman Emperor, was in Pisa at the time, along with
his court and retinue. The Holy Roman Emperor was engaged in a
political dispute with the Papacy, and Ockham's group found
refuge under his protection. On June 6, 1328, Ockham was officially
excommunicated for leaving Avignon without
permission.[6]
Around 1329, Louis returned to Munich, together with Michael, Ockham
and the rest of their fugitive band. Ockham stayed there, or at any
rate in areas under Imperial control, until his death. During this
time, Ockham wrote exclusively on political
matters.[7]
He died on the night of April 9/10, 1347, at roughly the age of
sixty.[8]
## 2. Writings
Ockham's writings are conventionally divided into two groups:
the so called "academic" writings and the
"political" ones. By and large, the former were written or
at least begun while Ockham was still in England, while the latter
were written toward the end of Ockham's Avignon period and
later, in
exile.[9]
With the exception of his *Dialogue,* a huge political work,
all are now available in modern critical editions, and many are now
translated into English, in whole or in
part.[10]
The academic writings are in turn divided into two groups: the
"theological" works and the "philosophical"
ones, although both groups are essential for any study of
Ockham's philosophy.
Among Ockham's most important writings are:
* Academic Writings
+ Theological Works
- *Commentary on the Sentences of Peter Lombard*
(1317-18). Book I survives in an *ordinatio* or
*scriptum*--a revised and corrected version, approved by
the author himself for distribution. Books II-IV survive only as a
*reportatio*--a transcript of the actually delivered
lectures, taken down by a "reporter," without benefit of
later revisions or corrections by the author.
- *Seven Quodlibets* (based on London disputations held in
1322-24, but revised and edited in Avignon 1324-25).
+ Philosophical Works
- Logical Writings
* *Expositions* of Porphyry's *Isagoge* and of
Aristotle's *Categories, On Interpretation,* and
*Sophistic Refutations* (1321-24).
* *Summa of Logic* (c. 1323-25). A large, independent
and systematic treatment of logic and semantics.
* *Treatise on Predestination and God's Foreknowledge with
Respect to Future Contingents* (1321-24).
- Writings on Natural Philosophy
* *Exposition of Aristotle's Physics* (1322-24).
A detailed, close commentary. Incomplete.
* *Questions on Aristotle's Books of the Physics*
(before 1324). Not strictly a commentary, this work nevertheless
discusses a long series of questions arising out of Aristotle's
*Physics.*
* Political Writings
+ *Eight Questions on the Power of the Pope*
(1340-41).
+ *The Work of Ninety Days* (1332-34).
+ *Letter to the Friars Minor* (1334).
+ *Short Discourse* (1341-42).
+ *Dialogue* (c. 1334-46).
Several lesser items are omitted from the above list.
## 3. Logic and Semantics
Ockham is rightly regarded as one of the most significant logicians of
the Middle Ages. Nevertheless, his originality and influence should
not be exaggerated. For all his deserved reputation, his logical views
are sometimes
derivative[11]
and occasionally very
idiosyncratic.[12]
Logic, for Ockham, is crucial to the advancement of knowledge. In the
"Prefatory Letter" to his *Summa of Logic,* for
example, he praises it in striking language:
>
> For logic is the most useful tool of all the arts. Without it no
> science can be fully known. It is not worn out by repeated use, after
> the manner of material tools, but rather admits of continual growth
> through the diligent exercise of any other science. For just as a
> mechanic who lacks a complete knowledge of his tool gains a fuller
> [knowledge] by using it, so one who is educated in the firm principles
> of logic, while he painstakingly devotes his labor to the other
> sciences, acquires at the same time a greater skill at this art.
>
Ockham's main logical writings consist of a series of
commentaries (or "expositions") on Aristotle's and
Porphyry's logical works, plus his own *Summa of Logic,*
his major work in the field. His *Treatise on Predestination*
contains an influential theory on the logic of future contingent
propositions, and other works as well include occasional discussions
of logical topics, notably his *Quodlibets*.
### 3.1 The *Summa of Logic*
Ockham's *Summa of Logic* is divided into three parts,
with the third part subdivided into four subparts. Part I divides
language, in accordance with Aristotle's *On
Interpretation* (1, 16a3-8, as influenced by
Boethius's interpretation), into written, spoken and mental
language, with the written kind dependent on the spoken, and the
spoken on mental language. Mental language, the language of thought,
is thus the most primitive and basic level of language. Part I goes on
to lay out a fairly detailed theory of terms, including the
distinctions between (a) categorematic and syncategorematic terms, (b)
abstract and concrete terms, and (c) absolute and connotative terms.
Part I then concludes with a discussion of the five
"predicables" from Porphyry's *Isagoge* and
of each of Aristotle's categories.
While Part I is about terms, Part II is about
"propositions," which are made up of terms. Part II gives
a systematic and nuanced theory of truth conditions for the four
traditional kinds of assertoric categorical propositions on the
"Square of Opposition," and then goes on to tensed, modal
and more complicated categorical propositions, as well as a variety of
"hypothetical"
(molecular[13])
propositions. The vehicle for this account of truth conditions is the
semantic theory of "supposition," which will be treated
below.
If Part I is about terms and Part II about propositions made up of
terms, Part III is about arguments, which are in turn made up of
propositions made up of terms. It is divided into four subparts. Part
III.1 treats syllogisms, and includes a comprehensive theory of modal
syllogistic.[14]
Part III.2 concerns demonstrative syllogisms in particular. Part
III.3 is in effect Ockham's theory of consequence, although it
also includes discussions of semantic paradoxes like the Liar (the so
called *insolubilia*) and of the still little-understood
disputation form known as "obligation." Part III.4 is a
discussion of fallacies.
Thus, while the *Summa of Logic* is not in any sense a
"commentary" on Aristotle's logical writings, it
nevertheless covers all the traditional ground in the traditional
order: Porphyry's *Isagoge* and Aristotle's
*Categories* in Part I, *On Interpretation* in Part II,
*Prior Analytics* in Part III.1, *Posterior Analytics*
in Part III.2, *Topics* (and much else) in Part III.3, and
finally *Sophistic Refutations* in Part III.4.
### 3.2 Signification, Connotation, Supposition
Part I of the *Summa of Logic* also introduces a number of
semantic notions that play an important role throughout much of
Ockham's philosophy. None of these notions is original with
Ockham, although he develops them with great sophistication and
employs them with skill.
The most basic such notion is "signification." For the
Middle Ages, a term "signifies" *what it makes us think
of*. This notion of signification was unanimously accepted;
although there was great dispute over *what* terms signified,
there was agreement over the
criterion.[15]
Ockham, unlike many (but by no means all) other medieval logicians, held
that terms do not in general signify thought, but can signify anything
at all (including things not presently existing). The function of
language, therefore, is not so much to communicate thoughts from one
mind to another, but to convey information about the
world.[16]
In *Summa of Logic* I.33, Ockham acknowledges four different
kinds of signification. In his first sense, a term signifies whatever
things it is truly predicable of by means of a present-tensed,
assertoric copula. That is, a term *t* signifies a thing
*x* if and only if 'This is a *t*' is true,
pointing to *x*. In the second sense, *t* signifies
*x* if and only if 'This is (or was, or will be, or can
be) a *t*' is true, pointing to
*x.*[17]
These first two senses of signification are together called
"primary" signification.
In the third sense, terms can also be said to signify certain things
they are *not* truly predicable of, no matter the tense or
modality of the copula. For instance, the word 'brave' not
only makes us think of brave people (whether presently existing or
not); it also makes us think of the *bravery* in virtue of
which we call them "brave." Thus, 'brave'
signifies and is truly predicable of brave people, but also signifies
bravery, even though it is not truly predicable of bravery. (Bravery
is not brave.) This kind of signification is called
"secondary" signification.
To
a first approximation, we can say that a "connotative"
term is just a term that has a secondary signification, and that such
a connotative term "connotes" exactly what it secondarily
signifies; in short, connotation is just secondary
signification.[18]
The fourth sense, finally, is the broadest one: according to it any
linguistic unit, including a whole sentence, can be said to signify
whatever things it makes us think of in some way or other. A sentence
signifies in this sense whatever it is that its terms primarily or
secondarily signify.
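The first two senses and secondary signification can be sketched as a toy model. The domain of individuals, the tense table, and the pairing of 'brave' with singular braveries are illustrative assumptions for this sketch, not Ockham's own inventory:

```python
# Toy model of Ockham's senses of signification (Summa of Logic I.33).
# Things a term is truly predicable of, keyed by the "tense" of the copula
# (the individuals listed are illustrative assumptions).
predicable = {
    "brave": {"is": {"Socrates"}, "was": {"Achilles"}, "can_be": {"Phaedo"}},
}

# Secondary significates: things the term makes us think of but is not
# truly predicable of (e.g. 'brave' connotes singular braveries).
secondary = {"brave": {"Socrates' bravery", "Achilles' bravery"}}

def signifies_sense1(term, x):
    # Sense 1: 'This is a t' is true, pointing to x (present-tensed copula).
    return x in predicable.get(term, {}).get("is", set())

def signifies_sense2(term, x):
    # Sense 2: 'This is (or was, or will be, or can be) a t' is true of x.
    return any(x in xs for xs in predicable.get(term, {}).values())

def connotes(term, x):
    # Secondary signification = connotation, to a first approximation.
    return x in secondary.get(term, set())

assert signifies_sense1("brave", "Socrates")
assert not signifies_sense1("brave", "Achilles")   # past brave only
assert signifies_sense2("brave", "Achilles")
assert connotes("brave", "Achilles' bravery")
```

On this model, senses 1 and 2 together give "primary" signification, and sense 3 appears as the separate `connotes` relation, matching the division drawn above.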
The theory of supposition was the centerpiece of late medieval
semantic theory. Supposition is not the same as signification. First
of all, terms signify wherever we encounter them, whereas they have
supposition only in the context of a proposition. But the differences
go beyond that. Whereas signification is a psychological, cognitive
relation, the theory of supposition is, at least in part, a theory of
reference. For Ockham, there are three main kinds of
supposition[19]:
* Personal supposition, in which a term supposits for (refers to)
what it signifies (in either of the first two senses of signification
described above). For example, in 'Every dog is a mammal',
both 'dog' and 'mammal' have personal
supposition.
* Simple supposition, in which a term supposits for a concept it
does not signify. Thus, in 'Dog is a species' or
'Dog is a universal', the subject 'dog' has
simple supposition. For Ockham the nominalist, the only real
universals are universal concepts in the mind and, derivatively,
universal spoken or written terms expressing those concepts.
* Material supposition, in which a term supposits for a spoken or
written expression it does not signify. Thus, in 'Dog has three
letters', the subject 'dog' has material
supposition.[20]
Personal supposition, which was the main focus, was divided into
various subkinds, distinguished in terms of a theory of "descent
to singulars" and "ascent from singulars." A quick
example will give the flavor: In 'Every dog is a mammal',
'dog' is said to have "confused and
distributive" personal supposition insofar as
* It is possible to "descend to singulars" as follows:
"Every dog is a mammal; therefore, Fido is a mammal, and Rover
is a mammal, and Bowser is a mammal ...," and so on for all
dogs.
* It is *not* possible to "ascend from any one
singular" as follows: "Fido is a mammal; therefore, every
dog is a mammal."
Although the mechanics of this part of supposition theory are well
understood, in Ockham and in other authors, its exact purpose remains
an open question. Although at first the theory looks like an account
of truth conditions for quantified propositions, it will not work for
that purpose. And although the theory was sometimes used as an aid to
spotting and analyzing fallacies, this was never done systematically
and the theory is in any event ill suited for that
purpose.[21]
### 3.3 Mental Language, Connotation and Definitions
Ockham was the first philosopher to develop in some detail the notion
of "mental language" and to put it to work for him.
Aristotle, Boethius and several others had mentioned it before, but
Ockham's innovation was to systematically transpose to the
fine-grained analysis of human thought both the grammatical categories
of his time, such as those of noun, verb, adverb, singular, plural and
so on, and -- even more importantly -- the central
semantical ideas of signification, connotation and supposition
introduced in the previous
section.[22]
Written words for him are "subordinated" to spoken words,
and spoken words in turn are "subordinated" to mental
units called "concepts", which can be combined into
syntactically structured mental propositions, just as spoken and
written words can be combined into audible or visible sentences.
Whereas the signification of terms in spoken and written language is
purely conventional and can be changed by mutual agreement (hence
English speakers say 'dog' whereas in French it is
*chien*), the signification of mental terms is established by
nature, according to Ockham, and cannot be changed at will. Concepts,
in other words, are *natural signs*: my concept of dog
naturally signifies dogs. How this "natural signification"
is to be accounted for in the final analysis for Ockham is not
entirely clear, but it seems to be based both on the fact that simple
concepts are normally caused within the mind by their objects (my
simple concept of dog originated in me as an effect of my perceptual
encounter with dogs), and on the fact that concepts are in some way
"naturally similar" to their
objects.[23]
This arrangement provides an account of synonymy and equivocation in
spoken and written language. Two simple terms (whether from the same
or different spoken or written languages) are *synonymous* if
they are ultimately subordinated to the same concept; a single given
term of spoken or written language is *equivocal* if it is
ultimately subordinated to more than one concept.
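This account of synonymy and equivocation via subordination can be sketched with a simple table. The term-to-concept mapping below (including the equivocal 'bank') is an illustrative assumption, not Ockham's example:

```python
# Toy model of subordination: spoken/written terms map to the concept(s)
# they are ultimately subordinated to (the table is an illustrative assumption).
subordination = {
    "dog":   {"CONCEPT_DOG"},
    "chien": {"CONCEPT_DOG"},   # French term, same concept as 'dog'
    "bank":  {"CONCEPT_RIVERBANK", "CONCEPT_MONEYBANK"},  # equivocal
}

def synonymous(t1, t2):
    # Two terms are synonymous if subordinated to the same single concept.
    return subordination[t1] == subordination[t2] and len(subordination[t1]) == 1

def equivocal(t):
    # A term is equivocal if subordinated to more than one concept.
    return len(subordination[t]) > 1

assert synonymous("dog", "chien")
assert equivocal("bank")
assert not equivocal("dog")
```

Note that the model locates synonymy and equivocation entirely at the spoken/written level, which is why the question whether mental language *itself* exhibits them, discussed next, requires a separate account.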
This raises an obvious question: Is there synonymy or equivocation in
mental language itself? (If there is, it will obviously have to be
accounted for in some other way than for spoken/written language.) A
great deal of modern secondary literature has been devoted to this
question. Trentman [1970] was the first to argue that no, there is no
synonymy or equivocation in mental language. On the contrary, mental
language for Ockham is a kind of lean, stripped down,
"canonical" language with no frills or inessentials, a
little like the "ideal languages" postulated by logical
atomists in the first part of the twentieth century. Spade [1980]
likewise argued in greater detail, on both theoretical and textual
grounds, that there is no synonymy or equivocation in mental language.
More recently, Panaccio [1990, 2004], Tweedale [1992] (both on largely
textual grounds), and Chalmers [1999] (on mainly theoretical grounds)
have argued for a different interpretation, which now tends to be more
widely accepted. What comes out at this point is that Ockham's
mental language is *not* to be seen as a logically ideal
language and that it does incorporate both some redundancies and some
ambiguities.
The question is complicated, but it goes to the heart of much of what
Ockham is up to. In order to see why, let us return briefly to the
theory of
connotation.[24]
Connotation was described
above
in terms of primary and secondary signification. But in *Summa of
Logic* I.10, Ockham himself draws the distinction between absolute
and connotative terms by means of the theory of definition.
For Ockham, there are two kinds of definitions: *real*
definitions and *nominal* definitions. A real definition is
somehow supposed to reveal the essential metaphysical structure of
what it defines; nominal definitions do not do that. As Ockham sets it
up, all *connotative* terms have nominal definitions, never
real definitions, and *absolute* terms (although not all of
them) have real definitions, never nominal definitions. (Some absolute
terms have no definitions at
all.[25])
As an example of a real definition, consider: 'Man is a rational
animal' or 'Man is a substance composed of a body and an
intellective soul'. Each of these traditional definitions is
correct, and each in its own way expresses the essential metaphysical
structure of a human being. But notice: the two definitions do not
*signify* (make us think of) exactly the same things. The first
one makes us think of all rational things (in virtue of the first word
of the definiens) plus all animals (whether rational or not, in virtue
of the second word of the definiens). The second definition makes us
think of, among other things, all substances (in virtue of the word
'substance' in the definiens), whereas the first one does
not. It follows therefore that an absolute term can have several
distinct real definitions that don't always signify exactly the
same things. They will *primarily* signify--be truly
predicable of--exactly the same things, since they will primarily
signify just what the term they define primarily signifies. But they
can also (secondarily) signify other things as
well.[26]
Nominal definitions, Ockham says, are different: There is one and only
one nominal definition for any given connotative
term.[27]
While a real definition is expected to provide a structural
description of certain things (which can be done in various ways, as
we just saw), a nominal definition, by contrast, is supposed to unfold
in a precise way the signification of the connotative term it serves
to define, and this can only be done, Ockham thinks, by explicitly
mentioning, in the right order and with the right connections, which
kind of things are primarily signified by this term and which are
secondarily signified. The nominal definition of the connotative term
"brave", to take a simple example, is "a living
being endowed with bravery"; this reveals that
"brave" primarily signifies certain living beings
(referred to by the first part of the definition) and that it
secondarily signifies -- or connotes -- singular qualities
of bravery (referred to by the last part of the
definition).[28]
Any non-equivalent nominal definition is bound to indicate a
different signification and would, consequently, be unsuitable if the
original one was correct.
Now, several commentators, following Trentman and Spade, concluded on
this basis that there are no *simple* connotative terms in
Ockham's mental language. They reasoned as follows: a
connotative term is synonymous with its nominal definition, but there
is no synonymy in mental language according to Ockham; mental
language, therefore, cannot contain both a simple connotative term and
its complex nominal definition; since it must certainly have the
resources for formulating adequate definitions, what must be dispensed
with is the defined simple term; and since *all* connotative
terms are supposed to have a nominal definition, it follows that
mental language contains only absolute terms (along with
syncategorematic ones, of course). It even came to be supposed in this
line of interpretation, that the very central point of Ockham's
nominalist program was to show that if anything can be truly said
about the world, it can be said using only absolute and
syncategorematic terms, and that this is precisely what happens in
mental language.
The consequences were far-reaching. Not only did this interpretation
claim to provide an overall understanding of what Ockham was up to,
but it also inevitably led to the conclusion that his whole nominalist
program was bound to failure. All relational terms, indeed, are taken
to be connotative terms in Ockham's semantics. The program,
consequently, was thought to require the semantical reduction of all
relational terms to combinations of non-relational ones, which seems
hardly possible. Thus, the question whether there are simple
connotative terms or not in Ockham's mental language is crucial
to our understanding of the success of his overall ontological
project. Since spoken and written languages are semantically
derivative on mental language, it is vital that we get the semantics
of mental language to work out right for Ockham, or else the
systematic coherence of much of what he has to say will be in
jeopardy.
In view of recent scholarship, though, it appears highly doubtful that
Ockham's purpose really was to use nominal definitions to
eliminate all simple connotative terms from mental language. For one
thing, as Spade had remarked himself, Ockham never systematically
engages in explicit attempts at such semantical reductions, which
would be quite odd if this was the central component of his
nominalism. Furthermore, it has been shown that Ockham did in fact hold that
there *are* simple connotative terms in mental language. He
says it explicitly and repeatedly, and in a variety of texts from his
earlier to his later philosophical and theological
writings.[29]
The secondary literature, consequently, has now gradually converged
on the view that, for Ockham, there is no synonymy among
*simple* terms in mental language, but that there can be some
redundancy between simple terms and complex expressions, or between
various complex expressions. If so, nothing prevents a simple
connotative concept from coexisting in mental language with its nominal
definition.
Ockham indeed explicitly *denies* that a complex definition is
in general wholly synonymous with the corresponding defined
term.[30]
His point, presumably, is that the definition usually signifies
*more* things than the defined term. Take "brave"
again. Its definition, remember, is "a living being endowed with
bravery". Now, the first part of this complex expression makes
us think of *all* living beings, whereas the simple term
"brave" has only the brave ones as its primary
significates and does not signify in any way the non-brave living
beings. This shows in effect that simple connotative terms are not
-- at least not always -- shorthand abbreviations for their
nominal definitions in Ockham's view. And it must be conjectured
that *some* simple connotative concepts can be directly
acquired on the basis of perceptual experiences, just as absolute ones
are supposed to be (think of a relational concept like "taller
than" or a qualitative one like "white").
Ockham's nominal definitions, then, should not be seen as
reductionist devices for eliminating certain terms, but as a
privileged means for making conspicuous what the (primary and
secondary) significates of the defined terms are. The main point here
is that such definitions, when correctly formulated, explicitly reveal
the ontological commitments associated with the normal use of the
defined terms. The definition of "brave" as "a
living being endowed with bravery", for example, shows that the
correct use of the term "brave" commits us only to the
existence of singular living beings and singular braveries.
Ockham's nominalism does not require the elimination of simple
connotative concepts after all; its main relevant thesis, on the
contrary, is that their use is ontologically harmless since they do
not signify (either primarily or secondarily) anything but individual
things, as their nominal definitions are supposed to make
clear.
## 4. Metaphysics
Ockham was a nominalist; indeed, he is the person whose name is perhaps
most famously associated with nominalism. But nominalism means many
different things:
* A denial of metaphysical universals. Ockham was emphatically a
nominalist in this sense.
* An emphasis on reducing one's ontology to a bare minimum, on
paring down the supply of fundamental ontological categories. Ockham
was likewise a nominalist in this sense.
* A denial of "abstract" entities. Depending on what one
means, Ockham was or was not a nominalist in this sense. He believed
in "abstractions" such as *whiteness* and
*humanity,* for instance, although he did not believe they were
universals. (On the contrary, there are at least as many distinct
whitenesses as there are white things.) He certainly believed in
immaterial entities such as God and angels. He did not believe in
mathematical ("quantitative") entities of any kind.
The first two kinds of nominalism listed above are independent of one
another. Historically, there have been philosophers who denied
metaphysical universals, but allowed (individual) entities in more
ontological categories than Ockham does. Conversely, one might reduce
the number of ontological categories, and yet hold that universal
entities are needed in the categories that remain.
### 4.1 Ockham's Razor
Still, Ockham's "nominalism," in both the first and
the second of the above senses, is often viewed as derived from a
common source: an underlying concern for ontological parsimony. This
is summed up in the famous slogan known as "Ockham's
Razor," often expressed as "Don't multiply entities
beyond
necessity."[31]
Although the sentiment is certainly Ockham's, that particular
formulation is nowhere to be found in his texts. Moreover, as usually
stated, it is a sentiment that virtually *all* philosophers,
medieval or otherwise, would accept; no one wants a needlessly bloated
ontology. The question, of course, is which entities are needed and
which are not.
Ockham's Razor, in the senses in which it can be found in Ockham
himself, never allows us to *deny* putative entities; at best
it allows us to refrain from positing them in the absence of known
compelling reasons for doing so. In part, this is because human beings
can never be sure they know what is and what is not "beyond
necessity"; the necessities are not always clear to us. But even
if we did know them, Ockham would still not allow that his Razor
allows us to *deny* entities that are unnecessary. For Ockham,
the only truly necessary entity is God; everything else, the whole of
creation, is radically contingent through and through. In short,
Ockham does not accept the Principle of Sufficient Reason.
Nevertheless, we do sometimes have sufficient methodological grounds
for positively affirming the existence of certain things. Ockham
acknowledges three sources for such grounds (three sources of positive
knowledge). As he says in *Sent.* I, dist. 30, q. 1: "For
nothing ought to be posited without a reason given, unless it is
self-evident (*literally,* known through itself) or known by
experience or proved by the authority of Sacred Scripture."
### 4.2 The Rejection of Universals
In the case of universal entities, Ockham's nominalism is
*not* based on his Razor, his principle of parsimony. That is,
Ockham does not hold merely that there is no good reason for affirming
universals, so that we should refrain from doing so in the absence of
further evidence. No, he holds that theories of universals, or at
least the theories he considers, are outright incoherent; they either
are self-contradictory or at least violate certain other things we
know are true in virtue of the three sources just cited. For Ockham,
the only universal entities it makes sense to talk about are universal
concepts, and derivative on them, universal terms in spoken and
written language. Metaphysically, these "universal"
concepts are singular entities like all others; they are
"universal" only in the sense of being "predicable
of many."
With respect to the exact ontological status of such conceptual
entities, however, Ockham changed his view over the course of his
career. To begin with, he adopted what is known as the
*fictum*-theory, a theory according to which universals have no
"real" existence at all in the Aristotelian categories,
but instead are purely "intentional objects" with a
special mode of existence; they have only a kind of
"thought"-reality. Eventually, however, Ockham came to
think this intentional realm of "fictive" entities was not
needed, and by the time of his *Summa of Logic* and the
*Quodlibets* adopts instead a so called
*intellectio*-theory, according to which a universal concept is
just the act of thinking about several objects at once; metaphysically
such an "act" is a singular quality of an individual mind,
and is "universal" only in the sense of being a mental
sign of several things at once and being predicable of them in mental
propositions.[32]
### 4.3 Exposition or Parsing Away Entities
Thus, Ockham is quite certain there are no metaphysically universal
entities. But when it comes to paring down the number of basic
ontological categories, he is more cautious, and it is there that he
uses his Razor ruthlessly--always to suspend judgment, never to
deny.
The main vehicle for this "ontological reduction" is the
theory of connotation, coupled with the related theory of
"exposition." The theory of exposition, which is not fully
developed in Ockham, would become increasingly prominent in authors
immediately after him. In effect, the theory of connotation is related
to the theory of exposition as explicit definition is related to
contextual definition. The notion of the "square" of a
number can be explicitly defined, for example, as the result of
multiplying that number by itself. Contextual definition operates not
at the level of terms, but at the level of propositions. Thus,
Bertrand Russell famously treated 'The present king of France is
bald' as amounting to 'There is an *x* such that
*x* is a present king of France and *x* is bald, and for
all *y* if *y* is a present king of France then
*y* = *x*'. We are never given any outright
definition of the term 'present king of France', but
instead are given a technique of paraphrasing away seemingly
referential occurrences of that term in such a way that we are not
committed to any actually existing present kings of France. So too,
Ockham tries to provide us, at the propositional level, with
paraphrases of propositions that seem at first to refer to entities he
sees no reason to believe
in.[33]
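Russell's contextual definition just described can be written out as a single quantified formula. The predicate letters here (\(K\) for 'is a present king of France', \(B\) for 'is bald') are shorthand introduced only for illustration:

```latex
% Russell's analysis of 'The present king of France is bald':
% K(x) abbreviates 'x is a present king of France',
% B(x) abbreviates 'x is bald'.
\exists x \,\bigl( K(x) \land B(x) \land \forall y \,( K(y) \rightarrow y = x ) \bigr)
```

Note that the term 'present king of France' nowhere appears as a referring expression in the paraphrase; only the quantified variables do, which is exactly the sense in which the apparent reference has been "parsed away".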
For example, in *Summa of Logic,* II.11, among other places,
Ockham argues that we can account for the truth of 'Socrates is
similar to Plato' without having to appeal to a relational
entity called "similarity":
>
> For example, for the truth of 'Socrates is similar to
> Plato', it is required that Socrates have some quality and that
> Plato have a quality of the same species. Thus, from the very fact
> that Socrates is white and Plato is white, Socrates is similar to
> Plato and conversely. Likewise, if both are black, or hot, [then] they
> are similar *without anything else added.* (Emphasis added.)
>
In
this way, Ockham removes all need for entities in seven of the
traditional Aristotelian ten categories; all that remain are entities
in the categories of substance and quality, and a few entities in the
category of relation, which Ockham thinks are required for theological
reasons pertaining to the Trinity, the Incarnation and the Eucharist,
even though our natural cognitive powers would see no reason for them
at
all.[34]
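Ockham's exposition of the similarity claim can be sketched in the same quantificational style. The formula below is a modern reconstruction, not Ockham's own notation; the predicates \(\mathit{Has}\) and \(\mathit{SameSpecies}\) are illustrative labels for 'has the quality' and 'are qualities of the same species':

```latex
% A sketch of the expounded truth conditions for
% 'Socrates is similar to Plato':
\exists q \,\exists q' \,\bigl( \mathit{Has}(\text{Socrates}, q)
  \land \mathit{Has}(\text{Plato}, q')
  \land \mathit{SameSpecies}(q, q') \bigr)
```

The paraphrase quantifies only over individual substances and individual qualities; no third entity called "similarity" appears anywhere in it, which is the point of the reduction.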
As is to be expected, the ultimate success of Ockham's program
is a matter of considerable
dispute.[35]
It should be stressed again, however, that this program in no way
requires that it should be possible to dispense altogether with terms
from any of the ten Aristotelian categories (relational and
quantitative terms in particular). Ockham's claim is simply that
all our basic scientific terms, whether absolute or connotative,
signify nothing but singular substances or qualities (plus some
singular relations in certain exceptional theological cases).
## 5. Natural Philosophy
Ockham's "physics" or natural philosophy is of a
broadly Aristotelian sort, although he interprets Aristotle in his own
fashion. Ockham wrote a great deal in this area; indeed his
*Exposition of Aristotle's Physics* is his longest work
except for his *Commentary on the
Sentences*.[36]
As a nominalist about universals, Ockham had to deal with the
Aristotelian claim in the *Posterior Analytics* that science
pertains to certain propositions about what is universal and
necessary. He discusses this issue in the Prologue to his
*Exposition of the
Physics*,[37]
and there agrees with Aristotle. But he interprets Aristotle's
dictum as saying that knowledge bears upon certain propositions with
general (universal) terms in them; it is only in that sense that
science deals with the universal. This of course does not mean that
for Ockham our scientific knowledge can never get beyond the level of
language to actual things. He distinguishes various senses of
'to know' (*scire*, from which we get
*scientia* or "science"):
* In one sense, to "know" is to know a proposition, or a
term in that proposition. It is in this sense that the object of a
science is universal, and this is what Aristotle had in mind.
* In another sense, we can be said to "know" what the
proposition is about, what its terms have supposition for. What we
"know" in that sense is always metaphysically individual,
since for Ockham there isn't anything else. This is not the
sense in which Aristotle was speaking.
As described
earlier,
Ockham holds that we do not need to allow special entities in all ten
of Aristotle's categories. In particular, we do not need them in
the category of quantity. For Ockham, there is no need for real
"mathematical" entities such as numbers, points, lines,
and surfaces as distinct from individual substances and qualities.
Apparent talk about such things can invariably be parsed away, via the
theory of connotation or exposition, in favor of talk about substances
and qualities (and, in certain theological contexts, a few relations).
This Ockhamist move is illustrative of and influential on an important
development in late medieval physics: the application of mathematics
to non-mathematical things, culminating in Galileo's famous
statement that the "book of nature" is written in the
"language of mathematics."
Such an application of mathematics violates a traditional Aristotelian
prohibition against *metabasis eis allo genos,* grounded on
quite reasonable considerations. The basic idea is that things cannot
be legitimately compared in any respect in which they differ in
species. Thus it makes little sense to ask whether the soprano's
high C is higher or lower than Mount Everest--much less to ask
(quantitatively) *how much* higher or lower it is. But for
Aristotle, straight lines and curved lines belong to different species
of lines. Hence they cannot be meaningfully compared or measured
against one another. The same holds for rectilinear motion and
circular motion.
Although the basic idea is reasonable enough, Ockham recognized that
there are problems. The length of a coiled rope, for example, can
straightforwardly be compared to the length of an uncoiled rope, and
the one can meaningfully be said to be longer or shorter than, or
equal in length to, the other. For that matter, a *single* rope
surely stays the same length, whether it is coiled or extended
full-length. Ockham's solution to these problems is to note
that, on his ontology, straight lines and curved lines are not
*really* different species of lines--because lines are not
extra things in the first place. Talk about lines is simply a
"manner of speaking" about substances and qualities.
Thus, to compare a "curved" (coiled) rope with a
"straight" (uncoiled) one is not really to talk about the
lengths of lines in two different species; it is to talk about two
*ropes*. To describe the one as curved (coiled) and the other
as straight (uncoiled) is not to appeal to specifically different
kinds of entities--curvature and straightness--but merely to
describe the ropes in ways that can be expounded according to two
different patterns. Since such talk does not have ontological
implications that require specifically different kinds of entities,
the Aristotelian prohibition of *metabasis* does not apply.
Once one realizes that we can appeal to connotation theory, and more
generally the theory of exposition, without invoking new entities, the
door is opened to applying mathematical analyses (all of which are
exponible, for Ockham) to all kinds of things, and in particular to
physical nature.
Ockham's contributions were by no means the only factor in the
increasing mathematization of science in the fourteenth century. But
they were important
ones.[38]
## 6. Theory of Knowledge
Like most medieval accounts of knowledge, Ockham's is not much
concerned with answering skeptical doubts. He takes it for granted
that humans not only can but frequently do know things, and focuses
his attention instead on the "mechanisms" by which this
knowledge comes about.
### 6.1 The Rejection of Species
Ockham's theory of knowledge, like his natural philosophy, is
broadly Aristotelian in form, although--again, like his natural
philosophy--it is "Aristotelian" in its own way. For
most Aristotelians of the day, knowledge involved the transmission of
a
"*species*"[39]
between the object and the mind. At the sensory level, this species
may be compared to the more recent notion of a sense
"impression." More generally, we can think of it as the
structure or configuration of the object, a structure or configuration
that can be "encoded" in different ways and found
isomorphically in a variety of contexts. One recent author, describing
the theory as it occurs in Aquinas, puts it like
this:[40]
>
> Consider, for example, blueprints. In a blueprint of a library, the
> configuration of the library itself, that is, the very configuration
> that will be in the finished library, is captured on paper but in such
> a way that it does not make the paper itself into a library. Rather,
> the configuration is imposed on the paper in a different sort of way
> from the way it is imposed on the materials of the library. What
> Aquinas thinks of as transferring and preserving a configuration we
> tend to consider as a way of encoding information.
>
The configuration of features found in the external object is also
found in "encoded" form as a species in the organ that
senses the object. (Depending on the sense modality, it may also be
found in an intervening medium. For example, with vision and hearing,
the species is transmitted through the air to the sense organ.) At the
intellectual level, the so called "agent intellect" goes
to work on this species and somehow produces the universal concept
that is the raw material of intellectual
cognition.[41]
Ockham rejected this entire theory of species. For him, species are
unnecessary to a successful theory of cognition, and he dispenses with
them.[42]
Moreover, he argues, the species theory is not supported by
experience; introspection reveals no such species in our cognitive
processes.[43]
This rejection of the species theory of cognition, which had been
foreshadowed by several previous authors (such as Henry of Ghent in
the thirteenth century), was an important development in late medieval
epistemology.[44]
### 6.2 Intuitive and Abstractive Cognition
One of the more intriguing features of late medieval epistemology in
general, and of Ockham's view in particular, is the development
of a theory known as "intuitive and abstractive
cognition." The theory is found in authors as diverse as Duns
Scotus, Peter Auriol, Walter Chatton, and Ockham. But their theories
of intuitive and abstractive cognition are so different that it is
hard to see any one thing they are all supposed to be theories of.
Nevertheless, to a first approximation, intuitive cognition can be
thought of as perception, whereas abstractive cognition is closer to
imagination or remembering. The fit is not exact, however, since
authors who had a theory of intuitive and abstractive cognition
usually also allowed the distinction at the *intellectual*
level as well.
It is important to note that abstractive cognition, in the sense of
this theory, has nothing necessarily to do with
"abstraction" in the sense of producing universal concepts
from cognitive encounters with individuals. Instead, what abstractive
cognition "abstracts" from is the question of the
*existence or non-existence* of the object. By contrast,
intuitive cognition is very much tied up with the existence or
non-existence of the object. Here is how Ockham distinguishes
them:[45]
>
> For intuitive cognition of a thing is a cognition such that by virtue
> of it it can be known whether the thing exists or not, in such a way
> that if the thing does exist, the intellect at once judges it to exist
> and evidently knows it to exist ... Likewise, intuitive cognition
> is such that when some things are known, one of which inheres in the
> other or the one is distant in place from the other or is related in
> another way to the other, it is at once known by virtue of the
> incomplex cognitions of those things whether the thing inheres or does
> not inhere, whether it is distant or not distant, and so on for other
> contingent truths ...
>
>
> Abstractive cognition, however, is that by virtue of which it cannot
> be evidently known of the thing whether it exists or does not exist.
> And in this way abstractive cognition, as opposed to intuitive
> cognition, "abstracts" from existence and non-existence,
> because by it neither can it be evidently known of an existing thing
> that it exists, nor of a non-existent one that it does not exist.
>
>
>
Ockham's main point here is that an intuitive cognition
*naturally* causes in the mind a number of true
*contingent* judgements about the external thing(s) that caused
this intuitive cognition; for example, that this thing exists, or that
it is white, and so on. This does not prevent God from deceiving any
particular creature if He wants to, even when an intuitive cognition
is present, but in such a case, God would have to neutralize the
natural causal effect of this intuitive cognition (this is something
He can always do, according to Ockham) and directly cause instead a
false judgement. Intuitive cognitions, on the other hand, can
sometimes induce false beliefs, too, if the circumstances are abnormal
(in cases of perceptual illusions in particular), but even then, they
would still cause some true contingent judgements. The latter at any
rate is their distinctive feature. Abstractive cognitions, by
contrast, are not such as to naturally cause true judgements about
contingent
matters.[46]
## 7. Ethics
Ockham's ethics combines a number of themes. For one, it is a
*will*-based ethics in which intentions count for everything
and external behavior or actions count for nothing. In themselves, all
actions are morally neutral.
Again, there is a strong dose of divine command theory in
Ockham's ethics. Certain things (i.e., in light of the previous
point, certain *intentions*) become morally obligatory,
permitted or forbidden simply because God decrees so. Thus, in Exodus,
the Israelites' "spoiling the Egyptians" (or rather
their *intention* to do so, which they carried out) was not a
matter of theft or plunder, but was morally permissible and indeed
obligatory--because God had commanded it.
Nevertheless, despite the divine command themes in Ockham's
ethics, it is also clear that he wanted morality to be to some extent
a matter of reason. There is even a sense in which one can find a kind
of natural law theory in Ockham's ethics; one way in which God
conveys his divine commands to us is by giving us the natures we
have.[47]
Unlike Augustine, Ockham accepted the possibility of the
"virtuous pagan"; moral virtue for Ockham does not depend
on having access to revelation.
### 7.1 The Virtues
But while moral virtue is possible even for the pagan, moral virtue is
not by itself enough for salvation. Salvation requires not just virtue
(the opposite of which is moral vice) but merit (the opposite of which
is sin), and merit requires grace, a free gift from God. In short,
there is no necessary connection between virtue--moral
goodness--and salvation. Ockham repeatedly emphasizes that
"God is a debtor to no one"; he does not *owe* us
anything, no matter what we do.
For Ockham, acts of will are morally virtuous either extrinsically,
i.e. derivatively, through their conformity to some more fundamental
act of will, or intrinsically. On pain of infinite regress, therefore,
extrinsically virtuous acts of will must ultimately lead back to an
intrinsically virtuous act of will. That intrinsically virtuous act of
will, for Ockham, is an act of "loving God above all else and
for his own sake."
In his early work, *On the Connection of the Virtues*, Ockham
distinguishes five grades or stages of moral virtue, which have been
the topic of considerable speculation in the secondary
literature:[48]
1. The first and lowest stage is found when someone wills to act in
accordance with "right reason"--i.e., because it is
"the right thing to do."
2. The second stage adds moral "seriousness" to the
picture. The agent is willing to act in accordance with right reason
even in the face of contrary considerations, even--if
necessary--at the cost of death.
3. The third stage adds a certain exclusivity to the motivation; one
wills to act in this way *only* because right reason requires
it. It is not enough to will to act in accordance with right reason,
even heroically, if one does so on the basis of extraneous, non-moral
motives.
4. At the fourth stage of moral virtue, one wills to act in this way
"precisely for the love of God." This stage "alone
is the true and perfect moral virtue of which the Saints
speak."
5. The fifth and final stage can be built immediately on either the
third or the fourth stage; thus one can have the fifth without the
fourth stage. The fifth stage adds an element of extraordinary moral
heroism that goes beyond even the "seriousness" of stage
two.
The difficulty in understanding this hierarchy comes at the fourth
stage, where it is not clear exactly what *moral* factor is
added to the preceding three
stages.[49]
### 7.2 Moral Psychology
At the beginning of his *Nicomachean Ethics*, Aristotle
remarked that "the good is that at which all things aim."
Each thing, therefore, aims at the good, according to the demands of
its nature. In the Middle Ages, "Aristotelians" like
Thomas Aquinas held that the good for human beings in particular is
"happiness," the enjoyment of the direct vision of God in
the next life. And, whether they realize it or not, that is what all
human beings are ultimately aiming at in their actions. For someone
like Aquinas, therefore, the human will is "free" only in
a certain restricted sense. We are *not* free to choose for or
against our final end; that is built into us by nature. But we are
free to choose various *means* to that end. All our choices,
therefore, are made under the aspect of leading to that final goal. To
be sure, sometimes we make the wrong choices, but when that occurs it
is because of ignorance, distraction, self-deception, etc. In an
important sense, then, someone like Aquinas accepts a version of the
so called Socratic Paradox: No one knowingly and deliberately does
evil.[50]
Ockham's view is quite different. Although he is very suspicious
of the notion of final causality (teleology) in general, he thinks it
is quite appropriate for intelligent, voluntary agents such as human
beings. Thus the frequent charge that Ockham severs ethics from
metaphysics by denying teleology seems
wrong.[51]
Nevertheless, while Ockham grants that human beings have a natural
orientation, a tendency toward their own ultimate good, he does not
think this restricts their choices.
For Ockham, as for Aristotle and Aquinas, I can choose the means to
achieve my ultimate good. But in addition, for Ockham unlike Aristotle
and Aquinas, I can choose whether to *will* that ultimate good.
The natural orientation and tendency toward that good is built in; I
cannot do anything about that. But I *can* choose whether or
not to *act* to achieve that good. I might choose, for
example, to do nothing at all, and I might choose this knowing full
well what I am doing. But more: I can choose to act knowingly directly
*against* my ultimate good, to *thwart*
it.[52]
I can choose evil *as evil*.
For Ockham, this is required if I am going to be morally responsible
for my actions. If I could not help but will to act to achieve my
ultimate good, then it would not be morally praiseworthy of me to do
so; moral "sins of omission" would be impossible (although
of course I could be mistaken in the means I adopt). By the same
token, moral "sins of commission" would be impossible if I
could not knowingly *act against* my ultimate good. But for
Ockham these conclusions are not just required by theory; they are
confirmed by experience.
## 8. Political Philosophy
The divine command themes so prominent in Ockham's ethics are
much more muted in his political theory, which on the contrary tends
to be far more "natural" and
"secular."[53]
As sketched
above,
Ockham's political writings began at Avignon with a discussion
of the issue of poverty. But later on the issues were generalized to
include church/state relations more broadly. He was one of the first
medieval authors to advocate a form of church/state separation, and
was important for the early development of the notion of property
rights.
The Franciscan Order at this time was divided into two parties, which
came to be known as the "Conventuals" and the
"Spirituals" (or "zealots"). The Spirituals,
among whom were Ockham, Michael of Cesena, and the other exiles who
joined them in fleeing Avignon, tried to preserve the original ideal
of austere poverty practiced and advocated by St. Francis himself (c.
1181-1226). The Conventuals, on the other hand, while
recognizing this ideal, were prepared to compromise in order to
accommodate the practical needs of a large, organized religious order;
they were by far the majority of the order. The issue between the two
parties was never one of doctrine; neither side accused the other of
heresy. Rather, the question was one of how to shape and run the
order--in particular, whether the Franciscans should (or even
could) renounce all property rights.
### 8.1 The Ideal of Poverty
The ideal of poverty had been (and still is) a common one in religious
communities. Typically, the idea is that the individual member of the
order owns no property at all. If a member buys a car, for instance,
it is not strictly his car, even though he may have exclusive use of
it, and it was not bought with his money; he doesn't have any
money of his own. Rather it belongs to the order.
The original Franciscan ideal went further. Not only did the
individual friar have no property of his own, *neither did the
order*. The Franciscans, therefore, were really supposed to be
"mendicants," to live by begging. Anything donated to the
order, such as a house or a piece of land, strictly speaking remained
the property of the original owner (who merely granted the
*use* of it to the Franciscans). (Or, if that would not
work--as, for example, in the case of a bequest in a will, after
the original owner had died--the ownership would go to the
Papacy.)
Both the Spirituals and the Conventuals thought this ideal of
uncompromising poverty was exhibited by the life of Jesus and the
Apostles, who--they said--had given up all property, both
individually *and collectively*. St. Francis regarded this as
the clear implication of several Scriptural passages: e.g., Matt.
6:24-34, 8:20, 19:21. In short, the Apostolic (and Franciscan)
ideal was, "Live without a safety net."
Of course, if everyone lived according to this ideal, so that no one
owned any property either individually or collectively, then there
would be no property at all. The Franciscan ideal, then, shared by
Conventuals and Spirituals alike, entailed the total abolition of all
property rights.
Not everyone shared this view. Outside the Franciscan order, most
theoreticians agreed that Jesus and the Apostles lived without
individual property, but thought they did share property collectively.
Nevertheless, Pope Nicholas III, in 1279, had officially approved the
Franciscan view, not just as a view about how to organize the
Franciscan order, but about the interpretation of the Scriptural
passages concerning Jesus and the Apostles. His approval did not mean
he was endorsing the Franciscan reading as the correct interpretation
of Scripture, but only that it was a permissible one, that there was
nothing doctrinally suspect about
it.[54]
Even so, this interpretation was a clear reproach to the Papacy,
which at Avignon was wallowing in wealth to a degree it had never seen
before. The clear implication of the Franciscan view, therefore, was
that the Avignon Popes were conspicuously *not* living their
lives as an "imitation of Christ." Whether for this reason
or another, the Avignon Pope John XXII decided to reopen discussion of
the question of Apostolic poverty and to come to some resolution of
the matter. But, as Mollat [1963] puts it (perhaps not without some
taking of
sides):[55]
>
> When discussions began at Avignon, conflicting opinions were freely
> put forward. Meanwhile, Michael of Cesena, acting with insolent
> audacity, did not await the Holy See's decision: on 30 May 1322
> the chapter-general [of the Franciscan order] at Perugia declared
> itself convinced of the absolute poverty of Christ and the Apostles.
>
It was this act that provoked John XXII to issue his first
contribution to the dispute, his bull *Ad conditorem* in 1322.
There he put the whole matter in a legal framework.
### 8.2 The Legal Issues
According to Roman law, as formulated in the Code of Justinian,
"ownership" and "legitimate use" cannot be
permanently separated. For example, it is one thing for me to own a
book but to let you use it for a while. Ownership in that case means
that I can recall the book, and even if I do not do so, you should
return it to me when you are done with it. But it is quite another
matter for me to own the book but to grant you *permanent* use
of it, to agree not to recall it as long as you want to keep it, and
to agree that you have no obligation to give it back *ever*.
John XXII points out that, from the point of view of Roman law, the
latter case makes no sense. There is no practical difference in that
case between your having the use of the book and your owning it; for
all intents and purposes, it is yours.
Notice the criticism here. It is a legal argument against the claim
that the Papacy as an institution can own something and yet the
Franciscans as an order, collectively, have a permanent right to use
it. The complaint is *not* against the notion that an
individual friar might have a right to use something until he dies, at
which time use reverts to the order (or as the Franciscans would have
it, to the Papacy). This would still allow some distinction between
ownership and mere use. Rather the complaint is against the notion
that the order would not own anything outright, but would nevertheless
have permanent use of it that goes beyond the life or death of any
individual friar, so that the ownership somehow remained permanently
with the Papacy, even though the Pope could not reclaim it, use it, or
do anything at all with it. John XXII argues that this simply
abolishes the distinction between use and ownership.
### 8.3 Property Rights
Special problems arise if the property involved is such that the use
of it involves consuming it--e.g., food. In that case, it appears
that there is no real difference between ownership and even temporary
use. For things like food, using them amounts for practical purposes
to owning them; they cannot be recalled after they are used. In short,
for John XXII, it follows that it is impossible fully to live the life
of absolute poverty, even for the individual person (much less for a
permanent institution like the Franciscan order). The institution of
property, and property "rights," therefore began in the
Garden of Eden, the first time Adam or Eve ate something. These
property rights are not "natural" rights; on the contrary,
they are established by a kind of positive law by God, who gave
everything in the Garden to Adam and Eve.
Ockham disagreed. For him, there was no "property" in the
Garden of Eden. Instead, Adam and Eve there had a natural right to use
anything at hand. This natural right did not amount to a
*property* right, however, since it could not have been used as
the basis of any kind of legal claim. Both John XXII and Ockham seem
to agree in requiring that "property" (ownership) be a
matter of positive law, not simply of natural law. But John says there
was such property in the Garden of Eden, whereas Ockham claims there
was not; there was only a *natural* right, so that Adam and
Eve's use of the goods there was legitimate. For Ockham,
"property" first emerged only after the Fall when, by a
kind of divine permission, people began to set up special positive
legal arrangements assigning the legal right to use certain things to
certain people (the owners), to the exclusion of anyone else's
having a *legal* right to them. The owners can then give
permission to others to use what the owners own, but that permission
does not amount to giving them a legal right they could appeal to in a
court of law; it can be revoked at any time. For Ockham, this is the
way the Franciscans operate. Their benefactors and donors do not give
them any legal rights to use the things donated to them--i.e., no
right they could appeal to in a court of law. Rather the donation
amounts only to a kind of permission that restores the original
natural (not legal) right of use in the Garden of
Eden.[56]
## 1. Life and works
William of Sherwood was born between 1200 and 1205, probably in
Nottinghamshire, and died between 1266 and 1272. His school career is
uncertain, but given references to his works and his influence on
people such as Peter of Spain, Lambert of Auxerre, Albert the Great,
and Thomas Aquinas, it is likely that he was teaching logic at the
University of Paris between 1235 and 1250. After that period, he seems
to have left logic. He was a master at Oxford in 1252 and treasurer of
the cathedral of Lincoln about two years later; still later, he was
rector at Aylesbury, in Buckinghamshire, and Attleborough, in Norfolk.
He was still alive in 1266 but dead by 1272 (Kretzmann 1968, p.
3).
Part of the difficulty with establishing his biography is that he has
previously been conflated with various other 13th-century Englishmen
named William, such as William of Leicester (d. 1213) and William of
Durham (d. 1249) (cf. Grabmann 1937, pp. 11-13; Kretzmann 1966, p. 3;
Mullinger 1873, p. 177). He has also sometimes been identified with
others bearing the same surname, such as the author of a commentary, now
lost, on the *Sentences* of Peter Lombard with the title
*Shirovodus super sententias*, a work that was known to John
Leland in the 16th century (Kretzmann 1966, p. 12), or Ricardus de
Schirewode, who wrote a treatise on insolubles surviving in Cambridge,
St John's MS 100. This confusion, along with the usual problems
presented by anonymous material, makes it difficult to identify
William's works with certainty.
Two works can be confidently ascribed to Sherwood, an *Introduction
to Logic* and a treatise on syncategorematic terms, the
*Syncategoremata*. The *Introduction* survives in complete
form in only one manuscript, Bibliotheque Nationale MS. Lat.
16,617, from the late 13th or 14th century, which begins with the
heading *Introductiones Magistri Guilli. de Shyreswode in
logicam*, "Introduction to Logic of Master William of
Sherwood". The *Syncategoremata* immediately follows, and
is likewise explicitly attributed to Sherwood. The
*Syncategoremata* also survives in a second manuscript, Bodleian
MS Digby 55, where it is again explicitly labeled, *Sincategoreumata
Magistri Willielmi de Sirewode* "[Treatise on]
Syncategorematics of Master William of Sherwood", and Chapter
Five of the *Introduction* survives in a second manuscript, MS
Worcester Cath. Q. 13, written around 1293/94, and edited by Pinborg
& Ebbesen (1984).
The *Introduction* was almost certainly written while William was
lecturing on logic at the University of Paris. It was first edited by
Grabmann (1937) and translated into English by Kretzmann (1966). A new
critical edition was produced by Brands and Kann (1995), accompanied
by a translation into German. It has been argued (Mullinger 1873, p.
177) that this book was heavily influenced by a *Synopsis*
treatise written by the 11th-century Byzantine logician Michael
Constantine Psellus. The *Introduction* contains the first
appearance of the syllogistic mnemonic verse 'Barbara,
Celarent...', which Augustus de Morgan described as "magic
words ... more full of meaning than any that ever were made", and
Sherwood may even have been the inventor of the verse, though
references to individual names appear earlier (Kretzmann 1966, pp.
66-67).
Sherwood's treatise on syncategorematic terms has long been held to be,
if not the earliest such treatise, then the earliest still accessible
(Boehner 1952, p. 7). The text was first edited by O'Donnell (1941),
and this edition along with the Paris MS was the basis of Kretzmann's
1968 translation into English. O'Donnell's edition has since been
superseded by the critical edition of Kann and Kirchhoff (2012), which
also includes a translation into German. Kirchhoff (2008) is an extensive discussion and commentary on the *Syncategoremata*.
Three other logical treatises follow the *Introductiones* and the
*Syncategoremata* in the Paris manuscript, which have as a result
been tentatively attributed to Sherwood. The first is a treatise on
insolubles, ff. 46-54, edited by Roure (1970), and the second a
treatise on *obligationes* found in Paris, Bibl. Nat. 16617. This
text was edited by Green (1963), who tentatively attributed it to
Sherwood. The third is the text *Petitiones Contrariorum*, edited
by de Rijk (1976), covering puzzles arising from hidden contradictions
in sets of premises. Sherwood may also be the author of the
*Fallaciae magistri Willelmi* "Fallacies, of Master
William" preserved in British Museum, King's Library MS 9 E XII,
as well as the treatise ascribed to Richard of Sherwood noted
above.
Three further treatises have previously also been attributed to
Sherwood, but the evidence for their attribution is less clear: the
commentary on the *Sentences* of Peter Lombard noted above, a
*Distinctiones theologicae*, and a *Conciones*, or
collection of sermons (Kretzmann 1968, p. 4).
## 2. Important doctrines
Along with Peter of Spain, Lambert of Auxerre, and Roger Bacon,
William of Sherwood was one of the first terminist logicians, working
in a context where the newly discovered and translated Aristotelian
material of the 12th century had become fully integrated into the
logical and philosophical curricula through the works of people such
as John le Page (Uckelman & Lagerlund, 2016). The terminist period
of logic was marked by the development of logical genres and
techniques that go beyond Aristotle and mere commentaries on his
works. The two primary branches of terminist logic are the study of
the properties of terms and the study of the meaning and function of
syncategorematic words. Sherwood's *Introduction* picks up the
first of these two branches in its fifth chapter, and his
*Syncategoremata* deals with the second. The remainder of the
*Introductiones* is devoted to the standard Aristotelian
material: Chapter One on statements corresponds to *On
Interpretation*; Chapter Two on predicables to the
*Categories*; Chapter Three on syllogisms to the *Prior
Analytics*; Chapter Four on dialectical reasoning to the
*Topics*; and Chapter Six on sophistical reasoning to the
*Sophistical Refutations*. We will focus our attention in this
article on the unusual or distinctive aspects of his logical and
semantic theory, as found in Chapter Five of the *Introduction* and in the *Syncategoremata*.
### 2.1. Modality
In his *Introduction*, Sherwood gives two definitions for
'mode', depending on whether the term is being used
broadly or strictly: "Broadly speaking, a mode is the
determination of an act, and in this respect it goes together with
every adverb", while strictly speaking we count not all adverbs
as modes but only those which determine the inherence of the predicate
in the subject (Kretzmann 1966, p. 40). Thus, on the broad conception
of 'mode', both "Socrates is necessarily
running" and "Socrates is swiftly running" count as
modal statements, but on the narrow conception, only the former
does. The six modes are 'true', 'false',
'possible', 'contingent',
'impossible', and 'necessary', though
"the first two do not distinguish a modal proposition from an
assertoric statement" (Kretzmann 1966, pp. 40-41). The
other four modes give rise to properly modal propositions, and in
statements expressing these propositions, the associated modal adverbs
are 'possibly', 'contingently',
'impossibly', and 'necessarily'. Each of
these modal adverbs is a syncategorematic term; and in
the *Syncategoremata*, Sherwood devotes an entire chapter
to *necessario* ('necessarily')
and *contingenter* ('contingently').
Here it must be noted that Sherwood's views of modality differ
significantly from modern views. Modern modal logic interprets modes
in a nominal way, e.g., 'necessary:',
'impossible:', etc. But Sherwood is reluctant to admit a
statement such as "That Socrates is running is necessary"
as genuinely modal (Kretzmann 1966, p. 43), instead viewing this as an
assertoric categorical statement predicating 'necessary'
of "that Socrates is running". Similarly, "Socrates
is running necessarily" does not count as a modal statement,
because here 'necessarily' modifies 'running',
not "the inherence of the predicate in the subject". The
only genuinely modal construction is that exhibited by "Socrates
is necessarily running" (Kretzmann 1966, p. 42). The importance
of the adverbial construction, where the modal adverb modifies the
inherence of the predicate with the subject, is emphasized in the
discussions of modal adverbs in *Syncategoremata*, which include
not only the chapter mentioned above but also analyses of how modal
adverbs interact with other syncategorematic terms such as
"only", "unless", and "if"
(Uckelman 2020, SS5).
Since Aristotle, it had been standard practice to note that
'possible' can be used in two ways. In the first
'possible' is compatible with 'necessary', but
in the second, that which is necessary is not (merely) possible. This
second way is usually called 'contingent'. Sherwood
mentions this distinction, but goes on to say that he will be using it
in a broader sense, in which possibility is compatible with necessity
(Kretzmann 1966, p. 41). However, he makes a further distinction,
which he does maintain, between the two ways that
'impossible' and 'necessary' can be used.
These different ways are expressed in temporal notions:
>
>
> [impossible] is used in one way of whatever cannot be true now or in
> the future or in the past; and this is 'impossible *per
> se*' ... It is used in the other way of whatever cannot be
> true now or in the future although it could have been true in the
> past ... and this is 'impossible *per accidens*'.
> Similarly, in case something cannot be false now or in the future or
> in the past it is said to be 'necessary *per se*' ... But it
> is 'necessary *per accidens*' in case something cannot be
> false now or in the future although it could have been [false] in the
> past (Kretzmann 1966, p. 41).
>
>
>
Sherwood says that the reason it is important to separate modal
propositions from assertoric ones is that
>
>
> [s]ince our treatment is oriented toward syllogism, we have to
> consider them under those differences that make a difference in
> syllogism. These are such differences as ... modal, assertoric; and
> others of that sort. For one syllogism differs from another as a
> result of those differences (Kretzmann 1966, p. 39).
>
>
>
Despite this, there is "no treatment of modal syllogisms in any
of the works that have been ascribed to Sherwood" (Kretzmann
1966, p. 39, fn. 58). For more on Sherwood's account of modality, see
(Uckelman 2008).
### 2.2. Properties of terms
The theory of the "properties of terms" forms the basis of
medieval semantic theory and provides an account of how terms function
within the broader context of sentences, as well as how these
different functions can account for the different rules governing
inference. Sherwood identifies the four main properties of terms as
(1) signification; (2) supposition; (3) copulation; and (4)
appellation (Kretzmann 1966, p. 105). In this classification he
differs from other accounts which mention (1) signification; (2)
supposition and copulation; (3) ampliation and restriction; (4)
appellation; and (5) relation.
Sherwood defines 'signification' as "a presentation
of the form of something to the understanding", a definition
harking back to 12th-century semantic theory (Kretzmann 1966, p. 105,
fn. 2). Supposition and copulation are symmetrically related to each
other; 'supposition' is "an ordering of the
understanding of something under something else" while
'copulation' is "an ordering of the understanding of
something over something else" (Kretzmann 1966, p. 105). Only
substantive nouns, pronouns, and definite descriptions can have
supposition, while only adjectives, participles, and verbs can have
copulation (Kretzmann 1966, p. 106).
Finally, 'appellation' is "the present correct
application of a term -- i.e., the property with respect to which what
the term signifies can be said of something through the use of the
verb 'is'" (Kretzmann 1966, p. 106). That is, the
*appellata* of a term are the presently existing things of which
the term can currently be truly predicated. Appellation is between
supposition and copulation: Substantive nouns, adjectives, and
participles can have appellation, but pronouns and verbs cannot
(Kretzmann 1966, p. 106).
The central notion of these three is supposition, and the majority of
the chapter is devoted to the division of supposition into its various
kinds:
![Tree diagram: At the top is Supposition with 2 branches: Material and Formal; Formal has 2 branches: Simple and Personal; Simple has 3 leaves: Manerial, Reduplicative, and Unfixed; Personal has 2 branches: Determinate and Confused; Confused has 2 branches: Merely confused and Distributive confused; Distributive confused has 2 leaves: Mobile and Immobile.](sherwood-supposition.jpg)
For a discussion of most of the notions in this division as well as a
comparison of Sherwood's views to those of his 13th-century
contemporaries and his 14th-century posterity, see the article
Medieval Theories: Properties of Terms.
We briefly comment on 'manerial' supposition, a type
which does not appear outside of Sherwood. Some have considered
*manerialis* an error for *materialis*, and considered
Sherwood to have been talking here about material supposition
(Grabmann 1937). However, the manuscript shows *n* rather than
*t*, and the type of supposition is not the same as the material
supposition, which is distinguished from formal supposition, but is
itself a type of formal supposition. Sherwood says that manerial
supposition is a type of simple supposition in which "a word can
be posited for its *significatum* without any connection with
things", and he gives as an example "Man is a
species". The explanation is that this is manerial supposition
because 'man' "supposits for the specific character
in itself" (Kretzmann 1966, p. 111). The notion of
*maneries* meaning 'character, way, mode, manner' can
be found in 12th-century discussions of universals; for example, John
of Salisbury notes that Joscelin, Bishop of Soissons, says that
sometimes the words *genus* and *species* "are to be
understood as things of a universal kind" and sometimes he
"interprets them as modes (*maneries*) of things"
(Hall, Book II, ch. 17, p. 84). John says he does not know where
Joscelin got such a distinction, but it occurs in Abelard and in the
12th-century *Fallacie Parvipontane* (Hall, Book II, ch. 17, p.
84, fnn. a, b; de Rijk 1962, pp. 139, 562).
### 2.3. Semantics of universal affirmatives
Statements are composed of two types of parts: principal and
secondary. The principal parts of a statement are the nouns and verbs,
that is, the subjects and predicates whose properties are the focus of
the first branch of terminist logic. The secondary parts are those
that remain: "the adjectival name, the adverb, and conjunctions
and prepositions, for they are not necessary for the statement's
being" (Kretzmann 1968, p. 13). These are words which do not
have signification in themselves, but only in conjunction with other
significative words, namely the subject and predicate (Kretzmann 1966,
p. 24). Because these words do not have signification in themselves,
they do not have any of the usual properties of terms -- supposition,
copulation, and ampliation. Nevertheless, they still have a
significative function. Explaining this function is the focus of
Sherwood's treatise on syncategorematic terms.
It is in this treatise that we can see evidence for what some have
said is a distinctively English approach to syncategorematic terms
(Uckelman & Lagerlund 2016). Sherwood and his English contemporary
Bacon both consider distributive terms such as 'every' to
be syncategorematic, while in continental works such as those by John
le Page, Peter of Spain, and Nicholas of Paris, distributive terms are
usually considered in summary treatises in the chapter on distribution
(a chapter found in no English summary) (Braakhuis 1981, pp. 138-139).
In his consideration of 'every', Sherwood argues that
propositions involving 'all' require that there be at
least three things for which the predicate term truly stands:
>
>
> Rule [II] The sign 'every' or 'all' requires
> that there be at least three appellata (Kretzmann 1968, p. 23).
>
>
>
(Remember that an *appellatum* is something which both exists and
which the subject stands for or refers to.) The justification for this
is taken from Aristotle, who Sherwood quotes as saying "Of two
men we say that they are two, or both, and not that they are
all" (Kretzmann 1968, p. 23). That is, if there were fewer than
three, it would be more appropriate to say 'Both' or
'One' rather than 'All'.
Sherwood is also noteworthy for being the first logician we know of to
treat 'is' as a syncategorematic term (Kretzmann 1968, p.
90). He argues that 'is' is equivocal, in that it can
indicate either (1) actual being (*esse actuale*) or (2) habitual
being (*esse habituale*) (Kretzmann 1968, p. 92, Kretzmann 1966,
p. 125; Kretzmann translates *habituale* as
'relational' in Kretzmann 1966 and as
'conditional' in Kretzmann 1968). As a result, the
sentence "Every man is an animal" is ambiguous. When
'is' is taken to indicate actual being, "Every man
is an animal" "is false when no man exists"
(Kretzmann 1968, p. 93). When 'is' is taken to indicate
habitual being, then "insofar as ["Every man is an animal"] is
necessary it has the force of this conditional: 'if it is man it
is an animal'" (Kretzmann 1966, p. 125). In analyzing
universal affirmative categorical sentences in this way, Sherwood can
be seen as an 'early adopter' of our modern truth
conditions for these sentences. A consequence of this view is a
commitment to the existence of *possibilia*, things that could
exist but which do not; such *possibilia* have a diminished sort
of being (*esse diminutum*) (Kretzmann 1968, p. 93). This was an
unusual position in the thirteenth century (Knuuttila 1993), and it
was later denounced by William of Ockham (Freddoso & Schuurman
1980, p. 99).
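The divergence between Sherwood's two readings of "Every man is an animal" can be illustrated with a small sketch. The function names, the toy domain, and the predicates below are ours, purely for illustration; they are not Sherwood's formalism, only a modern rendering of the two truth conditions the text describes (the actual-being reading carrying existential import, the habitual-being reading behaving like a conditional):

```python
# Hypothetical illustration of the two readings of "Every A is B".
# "Actual being": requires that some A exists, and every A is B
#   (so the sentence "is false when no man exists").
# "Habitual being": the conditional "if it is A, it is B" for everything
#   in the domain (vacuously true when no A exists).

def every_a_is_b_actual(domain, is_a, is_b):
    """Actual-being reading: existential import, then universal claim."""
    a_things = [x for x in domain if is_a(x)]
    return bool(a_things) and all(is_b(x) for x in a_things)

def every_a_is_b_habitual(domain, is_a, is_b):
    """Habitual-being reading: 'if A then B' for each thing in the domain."""
    return all((not is_a(x)) or is_b(x) for x in domain)

# Toy predicates (illustrative only).
is_man = lambda x: x in ("socrates", "plato")
is_animal = lambda x: x in ("socrates", "plato", "fido")

# A world with no men: the two readings come apart.
print(every_a_is_b_actual(["stone"], is_man, is_animal))    # False
print(every_a_is_b_habitual(["stone"], is_man, is_animal))  # True

# A world containing a man: the readings agree.
print(every_a_is_b_actual(["socrates", "fido"], is_man, is_animal))   # True
print(every_a_is_b_habitual(["socrates", "fido"], is_man, is_animal)) # True
```

On this rendering, Sherwood's habitual-being reading anticipates the modern (vacuously true) truth conditions for universal affirmatives mentioned above, while the actual-being reading preserves the traditional existential import.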
## 3. Influence
Bacon's approbation of Sherwood is not the only explicit reference we
have to him by his contemporaries. The copy of Chapter Five of the
*Introduction* found in MS Worcester Cath. Q. 13 mentioned
earlier is followed by a collection of anonymous notes (also edited in
Pinborg & Ebbesen 1984 and there called the *Dubitationes*).
Pinborg and Ebbesen argue that these lecture notes, composed either
for a teacher's or a student's use, are unlikely to have been written
in response to lectures given in the 1290s, i.e., when the manuscript
was copied, but rather date from the 1260s or the 1270s, favoring a
date around 1270 (Pinborg and Ebbesen 1984, p. 104). This text thus
provides evidence that Sherwood's works continued to be taught and
circulated.
Beyond these explicit mentions, it is clear that Sherwood's works were
widely influential among logicians at the University of Paris in the
second half of the 13th century. The most direct line of influence
runs from William of Sherwood to his much better-known colleague Peter
of Spain. It was for many years believed that Peter was the first
person to translate Psellus's text (Mullinger 1873, p. 176), prior to
the rediscovery of Sherwood's *Introduction*, which pre-dates
Peter by two decades and which is closer to Psellus's work. Peter's
*Summary of Logic* is clearly indebted to Sherwood. Kretzmann
notes that "many of the features of the *Summulae* that
made it far and away the most popular logic textbook in the Middle
Ages seem quite plainly to have developed as modifications of more
difficult discussions and less felicitous mnemonic devices in the
*Introductiones*" (Kretzmann 1968, p. 6). On the other
hand, Sherwood appears to have been influenced by Peter's treatise on
syncategorematic terms in writing his own; Kretzmann provides a
summary of passages in Sherwood's treatise which "can be read as
allusions to positions taken by Peter" (1968, p. 7). Other
influences can be found in Albert the Great, who used Sherwood's
notion of appellation rather than Peter of Spain's (Kretzmann 1966, p.
5), and Thomas Aquinas, whose analysis of modal propositions follows
Sherwood (Kretzmann 1966, p. 5; Uckelman 2008).
Though Sherwood's indirect influence on the development of logic and semantics, via his influence on Peter of Spain, should not be underestimated, his direct influence beyond the 13th century was more limited, in part because the influence of Peter of Spain was so great. In the 14th century, any influence Sherwood's approach to the properties of terms might have still had was almost entirely eclipsed by figures such as William of Ockham, John Buridan, and Albert of Saxony (Bos 2011, p. 773).
## 1. Biography
Bernard Williams was born in Essex in 1929, and educated at Chigwell
School and Balliol College, Oxford, where he read Greats, the uniquely
Oxonian degree that begins with Homer and Vergil and concludes with
Thucydides, Tacitus, and (surprisingly perhaps) the latest in
contemporary philosophy. Both Williams' subject of study and his
tutors, especially Richard Hare, remained as influences throughout his
life: the Greeks' sort of approach to philosophy never ceased to
attract him, Hare's sort of approach never ceased to have the
opposite effect. (Williams' contemporaries at Balliol, John
Lucas for example, still report their mischievous use of
"combined tactics" in philosophy tutorials with Hare; or
perhaps the relevant preposition is "against".)
Early in his career, Williams sat on a number of British government
committees and commissions, most famously chairing the Committee on
Obscenity and Censorship of 1979, which applied Mill's
"harm principle" to the topic, concluding that
restrictions were out of place where no harm could reasonably be
thought to be done, and that by and large society has other problems
which are more worth worrying about. At this time, he also began to
publish books. His first book, *Morality: an introduction to
ethics* (1972), already announced many of the themes that were to
be central to his work. Already evident, in particular, were his
questioning attitude to the whole enterprise of moral theory, his
caution about the notion of absolute truth in ethics, and his
hostility to utilitarianism and other moral theories that seek to
systematise moral life and experience on the basis of such an
absolute; as he later put it, "There cannot be any very
interesting, tidy or self-contained theory of what morality is...
nor... can there be an ethical theory, in the sense of a
philosophical structure which, together with some degree of empirical
fact, will yield a decision procedure for moral reasoning"
(1981: ix-x). His second book, *Problems of the Self* (= PS;
1973), was a collection of his philosophical papers from 1956 to 1972;
his further collections of essays (*Moral Luck*, 1981, and
*Making Sense of Humanity*, 1998) were as much landmarks in the
literature as this first collection. (Posthumously three further
collections appeared: *In the Beginning was the Deed* (ed.
Geoffrey Hawthorn), 2005, *A Sense of the Past*, 2005, and
*Philosophy as a Humanistic Discipline* (2006); at least the
second and third of these three collections are already having a
considerable impact on philosophy, partly because they include essays
that were already well-known and widely discussed in their original
places of appearance.) In 1973 Williams also brought out a co-authored
volume, *Utilitarianism: For and Against*, with J. J. C. Smart (=
UFA); his contribution to this (the *Against* bit) being, in
the present writer's view, a *tour de force* of
philosophical demolition. Then in 1978 Williams produced
*Descartes: The Project of Pure Enquiry*. This study could be
described as his most substantial work outside ethics, but for the
fact that the key theme of the book is the impossibility of
Descartes' ambition to give a foundation, in the first-personal
perspective, to the "absolute conception" of the world, a
representation of the world "as it is anyway" that
includes, explains, and rationally interrelates all other possible
representations of the world (Williams 1978: 65)--a theme that is
in an important sense not outside ethics at all.
Williams worked in Britain until 1987, when he left
for Berkeley in protest at the impact of the Thatcher
government's policies on British universities. In 1985, he
published the book that offers the most unified and sustained
presentation of what Williams had to say about ethics and human life:
*Ethics and the Limits of Philosophy*. On his return to Britain
in 1990 (incidentally the year of Mrs. Thatcher's resignation)
he succeeded his old tutor Richard Hare as White's Professor of
Moral Philosophy at Oxford. While in the Oxford chair he produced
*Shame and Necessity* (1993), a major study of Greek ethics
which aims to distinguish what we think about ethics "from what
we think that we think" (1993: 91): Williams' thesis is
that our deepest convictions are often more like classical Greek
ethical thought, and less like the post-Enlightenment "morality
system", as Williams came to call it, than most of us have yet
realised. (More about the morality system in sections 2 and 3.)
In 1999 he published an introductory book on *Plato*
(Routledge). After 1999--when he was knighted--he began to
be affected by the cancer which eventually killed him, but was still
able to bring out *Truth and Truthfulness* in 2002. In this
Williams argues, against such deniers of the possibility or importance
of objective truth as the pragmatist Richard Rorty and the
deconstructionist Jacques Derrida, that it is indispensable to any
human society to accept both truth and truthfulness as values, and
sincerity and accuracy as corresponding virtues. Nor need such beliefs
imply anything disreputably "metaphysical", in the
Nietzschean sense that they lead us into a covert worship of what
Williams takes to be the will-o'-the-wisps of theism or
Platonism. On the contrary, Williams argues, Nietzsche is on his side,
not the deniers', because Nietzsche himself believes that, while
a vindicatory history of the notions of truth and truthfulness
certainly has to be a naturalistic one, that is not to say that such a
history is impossible. We can write this history if we can supply a
"potential explanation", to use Robert Nozick's term
(Nozick 1974: 7-9), of how these notions could have arisen.
Williams himself attempts to provide such a potential explanation,
which if plausible will--given the impossibility of recovering
the actual history--provide us with as much insight as we can
reasonably hope for into how the notions of truth and truthfulness did
in fact arise. Such an understanding of truth and truthfulness,
Williams concludes, cannot lead us back into the pre-modern
philosophical Edens where truth and truthfulness are taken to have
their origin in something entirely transcendent, such as Plato's
Forms, or God, or the cognitive powers of the Kantian subject; but it
can lead us to the less elevated and more realistic hope that truth,
as a human institution, will continue to sustain the virtues of truth
"in something like the more courageous, intransigent, and
socially effective forms that they have acquired over their
history... and that the ways in which future people will come to
make sense of things will enable them to see the truth and not be
broken by it" (2002: 269). However, it should be noted that
Williams, perhaps confusingly, claims to have vindicated the idea that
truthfulness is an *intrinsic* value,
while at the same time admitting that his genealogical explanation for
the emergence of truthfulness only makes reference to what
truthfulness effects or accomplishes. Difficult questions remain about
his use of the Platonic category of 'intrinsic value' here
(see Rorty 2002, Queloz 2018).
Some of Williams' critics have complained that his work is
largely "destructive" or "negative". Part of
Williams' reply is that his nuanced and particularistic approach
to ethics--via the *detail* of ethical questions--is
negative only from the point of view of those operating under a
completely undefended assumption. This is the assumption that, if
there is to be serious ethical thought, then it must inevitably take
the form of moral theory. The impression that any other approach could
not be more than "negative" is itself part of the mindset
that he is attacking.
Williams often also meets the charge of negativity with a
counter-offensive, which can be summarised as the retort that
there's plenty to be negative about (1995: 217). "Often,
some theory has been under criticism, and the more particular material
[e.g. Williams' famous examples (UFA: 93-100) of George
and Jim: see section 4 below] has come in to remind one of the
unreality and, worse, distorting quality of the theory. The
material... is itself extremely schematic, but... it at
least brings out the basic point that... the theory is frivolous,
in not allowing for anyone's experience, including the
author's own. Alternatively, the theory does represent
experience, but an impoverished experience, which it holds up as the
rational norm--that is to say, the theory is stupid."
But Williams did publish positive and constructive philosophy, most
notably in *Truth and Truthfulness* itself. Moreover, this
conception of Williams as a negative thinker is contestable. Miranda
Fricker argues that Williams' work represents an affirmation of what
she calls "ethical freedom", which follows from the
recognition that rationality itself significantly underdetermines how
we should live (Fricker 2020). If the Platonic dream is that objective
reason can always give us sufficient practical guidance, then
Williams' negative position is that this ambitious form of rationalism must
fail. The correlative positive idea is that each agent has a certain
kind of subjective freedom to shape their own conceptions of
successful action and of the good life, conceptions which are not
subject to censure or approval by some kind of universal rationality.
It is important to notice that this only appears to be a negative
thesis because we are assuming that a positive contribution to moral
philosophy consists in establishing universal limits or boundaries on
justified action. But from the perspective of the deciding agent,
Williams' thesis is anything but negative; rather, it is
positively liberating, freeing agents to shape their own projects in
accordance with their own character.
Since one of Williams' main objectives is to demonstrate the
frivolity and/or stupidity of too much contemporary moral theory, it
is natural to structure our more detailed examination of his
contributions to philosophy by beginning with its critical side. The
first two of the three themes from Williams that we pick for closer
attention are both campaigns of argument *against* positions:
respectively, against the "morality system" (sections 2
and 3), and against utilitarianism (section 4). The aptness of this
arrangement comes out in the fact that, as we shall see, most of the
constructive positions that Williams adopts can be seen as the
"morals" of these essentially destructive stories. Even
what we take to be Williams' single most important positive
thesis, a view about the nature of motivation and reasons for action,
emerges from his critique of other people's views about reasons
for action; more about that, his famous "internal reasons"
argument, in section 5.
## 2. Williams and Moral Philosophy
In the Preface to Williams' first book he notes the charge
against contemporary moral philosophy "that it is peculiarly
empty and boring".[1]
Moral philosophy, he claims, has found an original way of being
boring: and this is "by not discussing moral issues at
all."
Certainly, this charge is no longer as fair now as it was in 1972.
Today there is an entire discipline called "applied" or
"practical" ethics, not to mention sub-disciplines called
environmental, business, sport, media, healthcare, and medical ethics,
to the extent that hardly any moral issues are *not* discussed
by philosophers nowadays. However, while some or even many
philosophers today do applied ethics by applying some general,
abstract theory, a problem with many of them, as Williams pointed out
in an interview in 1983, is that those who proceed in this way often
seem to lose any real interest in the perspectives of the human beings
who must actually live with a moral problem:
>
> I do think it is perfectly proper for some philosophers all of the
> time and for other philosophers some of the time to be engaged in
> technical issues, without having to worry all the time whether their
> work is going to revolutionise our view of the employment situation,
> or something of that kind. Indeed, without criticising any particular
> thinkers or publicists, a problem with "applied ethics" is
> that some people have a bit of ready-made philosophical theory, and
> they whiz in, a bit like hospital auxiliary personnel who aren't
> actually doctors. That kind of applied philosophy isn't even
> half-interesting...[2]
>
He continues:
>
> ...the temptation is to find a way to apply philosophy to
> immediate and practical problems and to do so by arguing about those
> problems in a legalistic way. You are tempted to make your moral
> philosophy course into a quasi-legal course... All the
> philosophical journals are full of issues about women's rights,
> abortion, social justice, and so on. But an awful lot of it consists
> of what can be called in the purely technical sense a kind of
> casuistry, an application of certain moral systems or principles or
> theories to discussing what we should think about abortion.
>
We are now able to see how Williams conceived of the relation between
philosophy and lived ethical experience. He firmly believed that
philosophy *should* speak to that experience, on pain of being
"empty and boring". But he did not think that we should
follow the moral philosophers of his day in preferring the schematic
over the detailed, or the general over the particular. Here, Williams
joins a long critical tradition that stretches at least back to G.W.F.
Hegel, whose own claim that Kant's moral theory was
"empty" is importantly related to Williams' own
charge against moral theorizing. Moreover, as a general criticism of
moral philosophy, this point arguably remains quite correct even
today.[3]
Bearing this general orientation in mind, we now turn to a discussion
of Williams' more determinate charges against various types of
moral theory.
## 3. Against the "peculiar institution"
The unwillingness to be drawn into discussing particular ethical
issues that Williams complains of was a reflection of earlier
developments. In particular, it was a reflection of the logical
positivists' disdain for "moralising", a disdain
which arose naturally from the emotivist conviction of philosophers
such as A.J. Ayer that to utter one's first-order moral beliefs
was to say nothing capable of truth or falsehood, but simply to
express one's attitudes, and hence not a properly
*philosophical* activity at all. More properly philosophical,
on emotivist and similar views, was a research-programme that became
absolutely dominant during the 1950s and 1960s in Anglophone
philosophy, including moral philosophy. This was linguistic analysis
in the post-Wittgensteinian style of J.L. Austin, who hoped, starting
from an examination of the way we talk (whoever "we" may
be: more on that in a minute), to reveal the deep structure of a wide
variety of philosophically interesting phenomena: among the most
successful applications of Austin's method were his studies of
intention, other minds, and responsibility.
When Ayer's dislike of preaching and Austin's method of
linguistic analysis were combined in moral philosophy, one notable
result[4]
was Richard Hare's "universal prescriptivism", a
moral system which claimed to derive the form of all first-order moral
utterances simply from linguistic analysis of the two little words
"ought" and "good". Hare argued that it
followed from the logic of these terms, when used in their full or
specially moral sense, that moral utterances were (1) distinct from
other utterances in being, not assertions about how the world is, but
prescriptions about how we think it ought to be; and (2) distinct from
other prescriptions in being *universalisable*, by which Hare
meant that anyone who was willing to make such a prescription about
any agent, e.g. himself, should be equally willing to make it about
any other similarly-placed agent. In this way Hare's theory
preserved the important emotivist thesis that a person's moral
commitments are not rationally challengeable for their content, but
only for their coherence with that person's other moral
commitments--and thus tended to keep philosophical attention away
from questions about the content of such
commitments.[5]
At the same time, his system was also able to accommodate a central
part of the Kantian outlook, because it gave a
rationale[6]
for the twin views that moral commitments are overriding as
motivations (so that they *will* motivate if present), and that
they are overriding as rational justifications (so that they
*rationally must* motivate if they are present). Hence cases
like *akrasia*, where a moral commitment appears to be present
in an agent but gets overridden by something else along the way to
action, must on Hare's view be cases where something has gone
wrong: either the agent is irrational, or else she has not really
uttered a full-blown moral *ought*, a properly moral
commitment, either because (1) the prescription that she claims to
accept is not really one that she accepts at all, or (2) because
although she does sincerely accept this prescription, she is not
prepared to give it a fully universalised form, and hence does not
accept it as a distinctively *moral* prescription.
In assessing a position like Hare's, Williams and other critics
often begin with the formidable difficulties involved in the project
of deducing anything much about the structure of morality from the
logic of moral language: see e.g., Geach, "Good and Evil",
*Analysis* 1956, and Williams 1972: 52-61. These
difficulties are especially acute when the moral language we consider
is basically just the words "ought" and "good"
and their opposites. "If there is to be attention to language,
then there should be attention to more of it" (Williams 1985:
127); the closest Williams comes to inheriting the ambitions of
linguistic analysis is his defence of the notion of morally
"thick concepts" (1985:
140-143)[7].
These--Williams gives *coward, lie, brutality* and
*gratitude* as examples--are concepts that sustain an
ethical load of a culturally-conditioned form, and hence succeed both
in being action-guiding (for members of that culture), and in making
available (to members of that culture) something that can reasonably
be described as ethical *knowledge*. Given that my society has
arrived at the concept of brutality, that is to say has got clear, at
least implicitly, about the circumstances under which it is or is not
applicable, there can be *facts* about brutality (hence,
ethical facts) and also justified true
beliefs[8]
about brutality (hence, ethical knowledge). Moreover, this knowledge
can be lost, and will be lost, if the concept and its social context
is lost. (For a strikingly similar philosophical project to that
suggested by this talk of thick concepts, cp. Anscombe 1958a, and
Philippa Foot's papers "Moral Beliefs" and
"Moral Arguments", both in her *Virtues and
Vices.*)
Before we even get to the problem how the structure of morality is
supposed to follow from moral language, there is the prior question
"*Whose* moral language?"; and this is a deeper
question. We do not suppose that all moral language (not even--to
gesture towards an obviously enormous difficulty--all moral
language *in* *English*) has always and everywhere had
exactly the same presuppositions, social context, or cultural
significance. So why should we suppose that moral language has always
and everywhere had exactly the same meaning, and has always been
equally amenable to the analysis of its logical structure offered by
Hare? (Or by anyone else: it can hardly be insignificant that when
G.E. Moore (*Principia Ethica*, sections 17, 89) anticipated Hare by
offering a linguistic analysis of "good", his analysis of
this term was on the face of it quite different from Hare's,
despite Moore's extreme historical and cultural proximity to
Hare.) Basing moral objectivism on the foundations of a linguistic
approach leaves it more vulnerable to relativistic worries than other
foundations do. For on the linguistic approach, we also face a
question of authority, the question why, even if something like the
offered analysis of our moral language were correct, that should
license us to think that the moral language of *our* society
has any kind of universal jurisdiction over *any*
society's. In its turn, this question is very apt to breed the
further question how, if our moral language lacks this universal
jurisdiction over other societies, it can make good its claim to
jurisdiction even in our society.
These latter points about authority are central to Williams'
critique of contemporary moral philosophy. Like Anscombe before him,
Williams argues that the analysts' tight focus on such words as
"ought", "right", and "good" has
come, in moral theory, to give those words (when used in their alleged
"special moral sense") an air of authority which they
could only earn against a moral and religious backdrop--roughly,
the Christian world-view--that is nowadays largely missing. What
Williams takes to be the correct verdict on modern moral theory is
therefore rather like Nietzsche's on George
Eliot:[9]
the idea that morality can and will go on just as before in the
absence of religious belief is simply an illusion that reflects a lack
of "historical sense". As
Anscombe[10]
puts it (1958: 30), "it is not possible to have a [coherent law
conception of ethics] unless you believe in God as a law-giver...
It is as if the notion 'criminal' were to remain when
criminal law and criminal courts had been abolished and
forgotten." And as Williams puts it (1985: 38), the
"various features of the moral judgement system support each
other, and collectively they are modelled on the prerogatives of a
Pelagian God."
What then are these features? That is a big question, because Williams
spent pretty well his whole career describing and criticising them.
But he gives his most straightforward, and perhaps the definitive,
summary of what the "morality system" comes to in the last
chapter of *Ethics and the Limits of Philosophy*. (The
chapter's title provocatively describes morality as "the
peculiar institution", this phrase being the American
Confederacy's standard euphemism for
slavery.[11])
Following this account, we may venture to summarise the
"morality system" in nine leading
theses.[12]
**First**, the morality system is essentially practical:
my moral obligations are always things that I can do, so that
"if my deliberation issues in something that I cannot do, then I
must deliberate again" (1985: 175). This implies,
**second**, that moral obligations cannot (really)
conflict (1985: 176). **Third**, the system includes a
pressure towards generalisation which Williams calls "the
*obligation out-obligation in* principle": this is the
view that every particular moral obligation needs the logical backing
of a general moral obligation, of which it is to be explained as an
instance. **Fourth**, "moral obligation is
inescapable" (1985: 177): "the fact that a given agent
would prefer not to be in [the morality] system will not excuse
him", because moral considerations are, in some sense like the
senses sharpened up by Kant and by Hare, *overriding*
considerations. In any deliberative contest between a moral obligation
and some other consideration, the moral obligation will always win
out, according to the morality system. The only thing that
*can* trump an obligation is another obligation (1985: 180);
this is a **fifth** thesis of the morality system, and it
creates pressure towards a **sixth**, that as many as
possible of the considerations that we find practically important
should be represented as moral obligations, and that considerations
that cannot take the form of obligations cannot really be important
after all (1985: 179). **Seventh**, there is a view about
the impossibility of "moral luck" that we might call, as
Williams calls it, the "purity of morality" (1985:
195-6): "morality makes people think that, without its
very special obligation, there is only inclination; without its utter
voluntariness, there is only force; without its ultimately pure
justice, there is no justice"; whereas "in truth",
Williams insists, "almost all worthwhile human life lies between
the extremes that morality puts before us" (1985: 194).
**Eighth**, "blame is the characteristic reaction
of the morality system" to a failure to meet one of its
obligations (1985: 177); and "blame of anyone is directed to the
voluntary" (1985: 178). **Ninth**, and finally, the
morality system is impersonal. We shall set this last feature of the
system aside until section 4, and focus, for now, on the other
eight.
For each of the theses, Williams has something (at least one thing) of
deep interest to say about why we should reject it. The
**first** and **second**--about the
practicality of morality and the impossibility of real
conflict--are his target in his well-known early paper
"Ethical Consistency" (PS: 166-186). In real life,
Williams argues, there surely are cases where we find ourselves under
ethical demands which conflict. These conflicts are not always
eliminable in the way that the morality system requires them always to
be--by arguments leading to the conclusion that one of the
*ought*s was only *prima facie* (in Ross's
terminology: see Williams 1985: 176-177), or *pro tanto*
(in a more recent terminology: see Kagan 1989), or in some other way
eliminable from our moral accounting. But, Williams argues, "it
is surely falsifying of moral
thought[13]
to represent its logic as demanding that in a conflict... one of
the conflicting *ought*s must be totally rejected [on the
grounds that] it did not actually apply" (PS:
183-4).[14]
For the fact that it did actually apply is registered by all sorts of
facts in our moral experience, including the very important phenomenon
of ineliminable agent-regret, regret not just that something happened,
but that it was me who made it happen (1981: 27-30).
Suppose for
example[15]
that I, an officer of a wrecked ship, take the hard decision to
actively prevent further castaways from climbing onto my already
dangerously overcrowded lifeboat. Afterwards, I am tormented when I
remember how I smashed the spare oar repeatedly over the heads and
hands of desperate, drowning people. Yet what I did certainly brought
it about that as many people as possible were saved from the
shipwreck, so that a utilitarian would say that I brought about the
best consequences, and anyone might agree that I found the only
practicable way of avoiding a dramatically worse outcome. Moreover, as
a Kantian might point out, there was nothing *unfair* or
*malicious* about what I did in using the minimum force
necessary to repel further boarders: my aim, since I could not save
every life, was to save those who by no choice of mine just happened
to be in the lifeboat already; this was an aim that I properly had,
given my role as a ship's officer; and it was absolutely not my
intention to kill or (perhaps) even to injure anyone.
So what will typical advocates of the morality system have to say to
me afterwards about my dreadful sense of
regret?[16]
If they are--as perhaps they had better not be--totally
consistent and totally honest with me, what they will have to say is
simply "Don't give it a second thought; you did what
morality required, so your deep anguish about it is irrational."
And that, surely, cannot be the right thing for anyone to say. My
anguish is not irrational but entirely justified. Moreover, it is
justified *simply as an ex post facto response to what I did*:
it does not for instance depend for its propriety upon the
suggestion--a characteristic one, for many modern moral
theorists--that there is prospective value for the future in my
being the kind of person who will have such reactions.
The **third** thesis Williams mentions as a part of the
morality system is the *obligation out-obligation in*
principle, the view that every particular moral obligation needs the
backing of a general moral obligation, of which it is to be explained
as an instance. Williams argues that this thesis will typically engage
the deliberating agent in commitments that he should not have. For one
thing, the principle commits the agent to an implausibly demanding
view of morality (1985: 181-182):
>
> The immediate claim on me, "In this emergency, I am under an
> obligation to help", is thought to come from, "One is
> under this general obligation: to help in an emergency"...
> But once the journey into more general obligations has started, we may
> begin to get into trouble--not just philosophical trouble, but
> conscience trouble--with finding room for morally indifferent
> actions... if we have accepted general and indeterminate
> obligations to further various moral objectives... they will be
> waiting to provide work for idle hands, and the thought can gain a
> footing that... I am under an obligation not to waste time in
> doing things that I am under no obligation to do. At this stage,
> certainly, only an obligation can beat an obligation [cp. the
> **fourth** thesis], and in order to do what I wanted to
> do, I shall need one of those fraudulent items, a duty to myself.
>
It is only the pressure to systematise that leads us to infer that, if
it is *X*'s particular obligation in *S* to φ,
then this must be because there is a general obligation, on any
*X*-like agent, to φ in any *S*-like
situation.[17]
Unless some systematic account of morality is true--as Williams
of course denies--there is no obvious reason why this inference
must hold in any more than trivial sense. But even if it does hold, it
is not clear how the general duty *explains* the particular
one; why are general obligations any more explanatory than particular
ones? Certainly anyone who is puzzled as to why there is *this*
particular obligation, say to rescue one's wife, is unlikely to
find it very illuminating to be pointed towards the general obligation
of which it is meant to be an instance. (Williams' closeness to
certain particularist strategies should be obvious here: cp. Dancy
2004, and Chappell 2005.)
Another inappropriate commitment arising from the *obligation
out-obligation in* principle, famously spelled out at 1981: 18, is
the agent's commitment to a "thought too many". If
an agent is in a situation where he has to choose which of two people
to rescue from some catastrophe, and chooses the one of the two people
who is his wife, then "it might have been hoped by some people
(for instance, by his wife) that his motivating thought, fully spelled
out, would be the thought that it was his wife, not that it was his
wife and that in situations of this kind it is permissible to save
one's wife." The morality system, Williams is suggesting,
makes nonsense of the agent's action in rescuing his wife: its
insistence on generality obscures the particular way in which this
action is really justified for the agent. Its real justification has
nothing to do with the impersonal and impartial standards of morality,
and everything to do with the place in the agent's life of the
person he chooses to rescue. For Williams, the standard of "what
makes life meaningful" is always deeper and more genuinely
explanatory than the canon of moral obligation; the point is central,
and we shall come back to it below in sections 3 and 4.
Williams' opposition to the **fourth** thesis,
about the inescapability of morality, rests on the closely-related
contrast he draws between moral considerations, and considerations
about "importance": "ethical life is important, but
it can see that things other than itself are important" (1985:
184). This notion of importance is grounded, ultimately, in the fact
"that each person has a life to lead" (1985: 186). What is
important, in this sense, is whatever humans need to make it possible
to lead what can reasonably be recognised as meaningful lives; the
notion of importance is of ethical use because, and insofar as, it
reflects the facts about "what we can understand men as needing
and wanting" (1972: 95). The notion that moral obligation is
inescapable is undermined by careful attention to this concept of
importance, simply because reflection shows that the notion of moral
obligation will have to be grounded in the notion of importance if it
is to be grounded in anything that is not simply illusory. But if it
is grounded in that, then it cannot itself be the only thing that
matters. Hence moral obligation cannot be inescapable, which refutes
the **fourth** thesis of the morality system; other
considerations can sometimes override or trump an obligation without
themselves being obligations, which refutes the
**fifth**; and there can be no point in trying to
represent every practically important consideration as a moral
obligation, so that it is for instance a distortion for Ross (*The
Right and The Good*, 21 ff.) to talk of "*duties* of
gratitude" (1985: 181); which refutes the
**sixth**.
It is worth noting that the **fourth** thesis, that
morality is inescapable for all agents in all
situations, has implications beyond the realm of personal ethics.
Williams' most enduring contribution to political philosophy is
his denial of political moralism, the view that *politics*
celebrated political realist position, which denies that legitimate
politics can just consist in the systematic application of some moral
theory or principle. Rather, for Williams, the basic political
question is: can the state secure the bare conditions of "order,
protection, safety, trust, and the conditions of cooperation"?
If so, it has met the *Basic Legitimation Demand*, which is not
a moral demand but rather a kind of precondition for the existence of
politics at all. For Williams, all of this means that political
normativity stands outside of the morality system (IBD, ch.1). More
concretely, this means that certain kinds of considerations
distinctive to politics must be allowed to retain self-standing
practical significance. We might illustrate this thought by noting
that in the realm of ordinary interpersonal ethics, it seems perfectly
reasonable to try to reduce or minimize coercive relations, whereas in
politics this demand is nonsensical, since politics begins with the
question of how a governing body's coercive power ought to be
deployed.
Political philosophy aside, Williams also denies that personal
decision-making must always and everywhere be regulated by moral
normativity. Another vivid instance of the escapability of moral
obligations is Williams' own example of "Gauguin", a
(fictionalised) artist who deliberately rejects a whole host of moral
obligations (to his family, for instance) because he finds it more
"important", in this sense, to be a painter. As Williams
comments (1981: 23), "While we are sometimes guided by the
notion that it would be the best of worlds in which morality were
universally respected and all men were of a disposition to affirm it,
we have, in fact, deep and persistent reasons to be grateful that that
is not the world we have"; in other words, moral obligation is
escapable because it is not in the deepest human interest that it
should be inescapable. ("Because": the fact that this sort
of inference is possible in ethics is itself a revealing fact about
the nature of ethics.)
Williams' Gauguin example, we have suggested, has force against
the thesis that morality is inescapable. It also has force against the
**seventh** thesis of the morality system, its insistence
on "purity" and its denial of what Williams calls
"moral luck". To understand this notion, begin with the
familiar legal facts that attempted murder is a different and less
grave offence than murder, and that dangerous driving typically does
not attract the same legal penalty if no one is actually hurt.
Inhabitants of the morality system will characteristically be puzzled
by this distinction. How can it be right to assign different levels of
blame, and different punishments, to two agents whose *mens rea*
was exactly the same--it was just that one would-be
murderer dropped the knife and the other didn't--or to two
equally reckless motorists--one of whom just happened to miss the
pedestrians while the other just happened to hit them?
One traditional answer--much favoured by the
utilitarians--is that these sorts of thoughts only go to show
that the point of blame and punishment is prospective
(deterrence-based), not retrospective (desert-based). There are
reasons for thinking that blame and punishment cannot be made sense of
in this instrumental fashion (cp. UFA: 124, 1985: 178). "From
the inside", both notions seem essentially retrospective, so
that if a correct understanding of them said that they were really
fictions serving a prospective social function, no one who knew that
could continue to use these notions "from the inside":
that is, the notions would have proved unstable under reflection for
this person, who would thereby have lost some ethical knowledge. If
this gambit fails, another answer--favoured by Kantians, but
available to utilitarians too--is that the law would need to
engage in an impossible degree of mind-reading to pick up all and only
those cases of *mens rea* that deserve punishment irrespective
of the outcomes. Even if this is the right thing to say about the law,
the answer cannot be transposed to the case of morality: morality
contrasts with the law precisely because it is supposed to apply even
to the inner workings of the mind. Thus, morality presumably ought to
be just as severe on the attempted murderer and the reckless but lucky
motorist as it is on their less fortunate doubles.
Williams has a different answer to the puzzle why we blame people more
when they are successful murderers, or not only reckless but lethal
motorists, despite the fact that they have no voluntary control over
their success as murderers or their lethality as motorists. His answer
is that--despite what the morality system tells us--our
practice of blame is not in fact tied exclusively to voluntary
control. We blame people not only for what they have
*voluntarily* done, but also for what they have done *as a
matter of luck*: we might also say, of their *moral* luck.
The way we ordinarily think about these matters often fails to distinguish
these two elements of control and luck at all clearly--as is also
witnessed by the important possibility of blaming people for what they
*are*. These phenomena, Williams argues, help to reveal the
basic unclarity of our notion of the voluntary; they also help to show
how "what we think" about blame is not always the same as
"what we think we think".
Parallel points apply with praise. Someone like the Gauguin of
Williams' story can be seen as making a choice of the demands of
art over the obligations of family life, a choice which will be praiseworthy or
blameworthy *depending on how it turns out* ("The only
thing that will justify his choice will be success itself",
1981: 23). Here success or failure is quite beyond Gauguin's
voluntary control, and thus, if the morality system were right, would
have to be beyond the scope of praise and blame as well. A fault-line
in our notions of praise and blame is revealed by the fact that,
intuitively, it is not: the case where Gauguin tries and fails to be
an artist is one where we condemn him "for making such a mess of
his and others' lives", the case where he tries and
succeeds is, very likely, one where we say, a little grudgingly
perhaps, "Well, all right then -- well done." We have
the morality system's narrow or "pure" versions of
these notions, in which they apply only to (a narrow or
"pure" version of) the voluntary; but we also have a wider
version of the notions of praise and blame, in which they also apply
to many things that are not voluntary on any account of the voluntary.
Williams' thesis about moral luck is that the wider notions are
more useful, and truer to experience. (For a sustained defense of
Williams on these basic points, see Joseph Raz, "Agency and
Luck".)
Nor is it only praise and blame that are in this way less tightly
connected to conditions about voluntariness than the morality system
makes them seem. Beyond the notion of blame lie other, equally
ethically important, notions such as regret or even anguish at
one's actions; and these notions need not show any tight
connection with voluntariness either. As we saw in the shipwreck
example above, the mere fact that it was unreasonable to expect the
ship's officer to do much better than he did in his desperate
circumstances does not make it reasonable to fob off his anguish with
"Don't give it a second thought". Likewise, to use
an example of Williams' own (1981: 28), if you were talking to a
driver who through no fault of his own had run over a child, there
would be something remarkably obtuse--something irrelevant and
superficial, even if correct--about telling him that he
shouldn't feel bad about it *provided* it wasn't
his fault. As the Greeks knew, such terrible happenings will leave
their mark, their *miasma*, on the agent. "The whole of
the *Oedipus Tyrannus*, that dreadful machine, moves towards
the discovery of just one thing, that *he did it.* Do we
understand the terror of that discovery only because we residually
share magical beliefs in blood-guilt, or archaic notions of
responsibility? Certainly not: we understand it because we know that
in the story of one's life there is an authority exercised by
what one has done, and not merely by what one has intentionally
done" (1993: 69).
This sums up Williams' case for thinking that the wider notion
of praise and blame is tenable in a way that the narrower notion is
not because of its dependence on a questionably "pure"
account of the voluntary (1985: 194; cp. MSH Essays 1-3). In
this way, he controverts the **eighth** thesis of the
morality system, its insistence on the centrality of blame; which was
the last thesis that we listed apart from impersonality, the
discussion of which we have postponed till the next section.
So much on Williams' critique of the "morality
system". How far our discussion has delivered on its promise to
show how Williams' positive views emerge from his negative
programmes of argument, we leave, for now, to the reader's
judgement: we shall say something more to bring the threads together
in section 5. Before that, we turn to Williams' critique of
utilitarianism, the view that actions, rules, dispositions, motives,
social structures, (...etc.: different versions of utilitarianism
feature, or stress, some or all of these things) are to be chosen if
and only if they maximally promote utility or well-being.
## 4. "The day cannot be too far off...": Williams against utilitarianism
>
> [T]he important issues that utilitarianism raises should be discussed
> in contexts more rewarding than that of utilitarianism itself...
> the day cannot be too far off in which we hear no more of it (UFA:
> 150).[18]
>
Williams opposes utilitarianism partly for the straightforward reason
that it is an
"ism",[19]
a systematisation--often a deliberately brisk or indeed
"simple-minded" one (UFA: 149)--of our ethical
thinking. As we have already seen, he believes that ethical thinking
cannot be systematised without intolerable distortions and losses,
because to systematise is, inevitably, to streamline our ethical
thinking in a reductionist style: "Theory typically uses the
assumption that we probably have too many ethical ideas, some of which
may well turn out to be mere prejudices. Our major problem now is
actually that we have not too many but too few, and we need to cherish
as many as we can" (1985: 117). Again, as a normative system,
utilitarianism is inevitably a systematisation of our responses, a way
of telling us how we *should* feel or react. As such it faces
the same basic and (for Williams) unanswerable question as any other
such systematisation, "*by what right* does it legislate
to the moral sentiments?" (1981: x).
Of course, Williams also opposes utilitarianism because of the
particular kind of systematisation that it is--namely, a
manifestation of the morality system. Pretty well everything said in
sections 2 and 3 against morality in general can be more tightly
focused to yield an objection to utilitarianism in particular, and
sometimes this is all we will need to bear in mind to understand some
specific objection to utilitarianism that Williams offers. Thus, for
instance, utilitarianism in its classic form is bound to face the
objections that face any moral system that ultimately is committed to
denying the possibility of real moral conflict or dilemma, and the
rationality of agent-regret. Given its insistence on generality, it
faces the "one thought too many" objection as well, at
least in any version that keeps the criterion of rightness and the decision
procedure in communication with each other.
Above all, utilitarianism is in trouble, according to Williams,
because of the central theoretical place that it gives to the ninth
thesis of the morality system--the thesis that we put on one side
earlier, about impersonality. Other forms of the morality system are
impersonal too, of course, notably Kantianism: "if Kantianism
abstracts in moral thought from the identity of
persons,[20]
utilitarianism strikingly abstracts from their separateness"
(1981: 3). Like Kantianism, but on a different theoretical basis,
utilitarianism abstracts from the question of *who* acts well,
which for utilitarianism means "who produces good
consequences?". It is concerned only that good consequences be
produced, but it does not offer a tightly-defined account of what it
is for anything to be a consequence. Or rather it does offer an
account, but on this account the notion of a consequence is so loosely
defined as to be all-inclusive (1971: 93-94):
>
> Consequentialism is basically indifferent to whether a state of
> affairs consists in what I do, or is produced by what I do, where that
> notion is itself wide... All that consequentialism is interested
> in is the idea of these doings being *consequences* of what I
> do, and that is an idea broad enough to include [many sorts of]
> relations.
>
This explains why consequentialism has the strong doctrine of negative
responsibility that leads it to what Williams regards as its
fundamental absurdity. Because, for the utilitarian, it can't
matter in itself whether (say) a given death is a result of what I do
in that I pull the trigger, or a result of what I do in that I refuse
to lie to the gunman who is looking for the person who dies, doing and
allowing must be morally on a par for the utilitarian, as also must
intending and foreseeing. Williams himself is not particularly
impressed by those venerable
distinctions;[21]
but he does think that there is a real and crucial distinction that
is closely related to them, and that it is a central objection to
utilitarianism that it ignores this distinction. The distinction in
question, which utilitarianism ignores by being impersonal, is the
distinction between my agency and other people's. It is this
distinction, and its fundamental moral importance, that lies at the
heart of Williams' famous (but often misunderstood)
"integrity objection".
In a slogan, the integrity objection is this: agency is always
*some particular person's* agency; or to put it another
way, there is no such thing as impartial agency, in the sense of
impartiality that utilitarianism requires. The objection is that
utilitarianism neglects the fact that "practical deliberation
[unlike epistemic deliberation] is in every case first-personal, and
the first person is not derivative or naturally replaced by [the
impersonal] *anyone*" (1985: 68). Hence we are not
"agents of the universal satisfaction system", nor indeed
primarily "janitors of any system of values, even our own"
(UFA: 118). No agent can be expected to be what a utilitarian agent
has to be--someone whose decisions "are a function of all
the satisfactions which he can affect from where he is" (UFA:
115); no agent can be required, as all are required by utilitarianism,
to abandon his own particular life and projects for the
"impartial point of view" or "the point of view of
morality", and do all his decision-making, including (if it
proves appropriate) a decision to give a lot of weight to his own life
and projects, exclusively from there. As Williams famously puts it
(UFA: 116-117):
>
> The point is that [the agent] is identified with his actions as
> flowing from projects or attitudes which... he takes seriously at
> the deepest level, as what his life is about... It is absurd to
> demand of such a man, when the sums come in from the utility network
> which the projects of others have in part determined, that he should
> just step aside from his own project and decision and acknowledge the
> decision which utilitarian calculation requires. It is to alienate him
> in a real sense from his actions and the source of his action in his
> own convictions. It is to make him into a channel between the input of
> everyone's projects, including his own, and an output of
> optimific decision; but this is to neglect the extent to which
> *his* projects and *his* decisions have to be seen as
> the actions and decisions which flow from the projects and attitudes
> with which he is most closely identified. It is thus, in the most
> literal sense, an attack on his integrity.
>
Here, Williams' commitment to the importance of subjective
authenticity is on full display. "The most literal sense"
of "integrity" is, according to Chambers' Dictionary
(1977 edition), "entireness, wholeness: the unimpaired state of
anything"; then "uprightness, honesty, purity". For
our purposes the latter three senses in this dictionary entry should
be ignored. It is the first three that are relevant to Williams'
argument; the word's historical origin in the Latin
*in-teger*, meaning what is not touched, taken away from, or
interfered with, is also revealing.
An agent's integrity, in Williams' sense, is his ability
to originate actions, to further his own initiatives, purposes or
concerns, and thus to be something more than a conduit for the
furtherance of others' initiatives, purposes or
concerns--including, for example and in particular, those which
go with the impartial view. Moreover, integrity is an essential
component of character, since, for Williams, an agent's
character is identical to their set of deep projects and commitments.
Williams' point, then, is that unless particular agents are
allowed to initiate actions and to have "ground projects",
either the agents under this prohibition will be subject to
manipulation by other agents who *are* allowed to have ground
projects--the situation of ideological oppression. Or else, if
every agent lies under this prohibition and all agents are made to
align themselves only with the ground projects of "the impartial
point of view", there will not *be* any agents. To put it
another way, all will be ideologically oppressed, but by the ideology
itself rather than by another agent or group of agents who impose this
ideology. For all agents will then have lost their integrity, in the
sense that no single agent will be an unimpaired and individual whole
with projects of his own that he might identify himself with; all
agents will have to abandon all "ground projects" except
the single project that utilitarianism gives them, that of maximising
utility by whatever means looks most efficient, and to order all their
doings around no other initiatives except those that flow from this
single project. What we previously thought of as individual agents
will be subsumed as parts of a single super-agent--the
utilitarian collective, if you like--which will pursue the ends
of impartial morality without any special regard for the persons who
compose it, and which is better understood as a single super-agent
than as a group of separate agents who cooperate; rather like a swarm
of bees or a nest of ants.
It is important not to misunderstand this argument. One important
misunderstanding can arise fairly naturally from Williams' two
famous examples (UFA: 97-99) of "Jim", who is told
by utilitarianism to murder one Amazon villager to prevent twenty
being murdered, and "George", who is told by
utilitarianism to take a job making weapons of mass destruction, since
the balance-sheet of utilities shows that if George refuses, George
and his family will suffer poverty and someone else--who will do
more harm than George--will take the job anyway. It is easy to
think that these stories are simply another round in the familiar game
of rebutting utilitarianism by counter-examples, and hence that
Williams' integrity objection boils down to the straightforward
inference (1) utilitarianism tells Jim to do X and George to do Y, (2)
but X and Y are wrong (perhaps because they violate integrity?), so
(3) utilitarianism is false. But this cannot be Williams'
argument, because in fact Williams denies (2). Not only does he not
claim that utilitarianism tells both Jim and George to do the wrong
things; he even suggests, albeit rather grudgingly, that
utilitarianism tells Jim (at least) to do the right thing. (UFA: 117:
"...if (as I suppose) the utilitarian is right in this
case...") Counter-examples, then, are not the point:
"If the stories of George and Jim have a resonance, it is not
the sound of a principle being dented by an intuition" (WME
211). The real point, he tells us, is not "just a question of
the rightness or obviousness of these answers"; "It is
also a question of what sort of considerations come into finding the
answer" (UFA: 99). "Over all this, or round it, and
certainly at the end of it, there should have been heard 'what
do you think?', 'does it seem like that to you?',
'what if anything do you want to do with the notion of
integrity?'" (WME 211).
Again, despite Williams' interest in the moral category of
"the unthinkable" (UFA: 92-93; cp. MSH Essay 4), it
is not Williams' claim that either Jim or George, if they are
(in the familiar phrase) "men of integrity", are bound to
find it literally unthinkable to work on WMD or to shoot a villager,
or will regard these actions as the sort of things that come under the
ban of some absolute prohibition that holds (in Anscombe's
famous phrase) *whatever the consequences*: "this is a
much stronger position than any involved, as I have defined the
issues, in the denial of consequentialism... It is perfectly
consistent, and it might be thought a mark of sense, to believe, while
not being a consequentialist, that there was no type of action which
satisfied [the conditions for counting as morally prohibited no matter
what]" (UFA:
90).[22]
Nor therefore, to pick up a third misunderstanding of the integrity
objection, is Williams offering an argument in praise of "the
moral virtue of integrity", where "integrity"
is--in jejune forms of this misreading--the virtue of doing
the right thing, not the wrong thing, or--in more sophisticated
forms--a kind of honesty about what one's values really are
and a firm refusal to compromise those values by hypocrisy or
cowardice (usually, with the implication that one has hold of the
right values). An agent can be told by utilitarianism to do something
terrible in order to avoid something even worse, as Jim and George
are. Williams is *not* opposing this sort of utilitarian
conclusion by arguing that the value of "integrity" in the
sense of the word that he anyway does not have in mind--the
personal quality--is something else that has to be put into the
utilitarian balance-sheet, and that when you put it in, the
utilitarian verdict comes out differently. Nor is Williams saying,
even, that the value of integrity in the sense of the word that he
*does* have in mind--roughly, allowing agents to be
agents--is something else that has to be put into the utilitarian
balance-sheet, as it is characteristically put in by indirect
utilitarians such as Peter Railton and Amartya Sen: "The point
here is not, as utilitarians may hasten to say, that if the project or
attitude is that central to his life, then to abandon it will be very
disagreeable to him and great loss of utility will be involved. I have
already argued in section 4 that it is not like that; on the contrary,
once he is prepared to look at it like that, the argument in any
serious case is over anyway" (UFA: 116). Williams' point
is rather that the whole business of compiling balance-sheets of the
utilitarian sort is incompatible with the phenomenon of agency as we
know it: "the reason why utilitarianism cannot understand
integrity is that it cannot coherently describe the relations between
a man's projects and his actions" (UFA: 100). As soon as
we take up the viewpoint which aims at nothing but the overall
maximisation of utility, and which sees agents as no more than nodes
in the causal network that is to be manipulated to produce this
consequence, we have lost sight of the very idea of agency.
And why should it matter if we lose sight of that? To say it again,
the point of the integrity objection is not that the world will be a
better place if we don't lose sight of the very idea of agency
(though Williams thinks this as
well[23]).
The point is rather that a world-view that has lost sight of the real
nature of agency, as the utilitarian world-view has, *simply does
not make sense*: as Williams puts it in the quotation above, it is
"absurd".
Why is it absurd? Because the view involves deserting one's
position in the universe for "what Sidgwick, in a memorably
absurd phrase, called 'the point of view of the
universe'" (1981:
xi).[24]
That this is what utilitarianism's impartial view ultimately
requires is argued by Williams in his discussion of Sidgwick at MSH
169-170:
>
> The model is that I, as theorist, can occupy, if only temporarily and
> imperfectly, the point of view of the universe, and see everything
> from the outside, including myself and whatever moral or other
> dispositions, affections or projects, I may have; and from that
> outside view, I can assign to them a value. The difficulty is...
> that the moral dispositions... cannot simply be regarded, least
> of all by their possessor, just as devices for generating actions or
> states of affairs. Such dispositions and commitments will
> characteristically be what gives one's life some meaning, and
> gives one some reason for living it... there is simply no
> conceivable exercise that consists in stepping completely outside
> myself and from that point of view evaluating *in toto* the
> dispositions, projects, and affections that constitute the substance
> of my own life... It cannot be a reasonable aim that I or any
> other particular person should take as the ideal view of the
> world... a view from no point of view at all.
>
As Williams also put it, "Philosophers... repeatedly urge
one to view the world *sub specie aeternitatis*; but for most
human purposes"--science is the biggest exception, in
Williams' view--"that is not a very good
*species* to view it under" (UFA: 118). The utilitarian
injunction to see things from the impartial standpoint is, if it means
anything, an injunction to adopt the "absolute conception"
of the world (1978: 65-67). But even if such a conception were
available--and Williams argues repeatedly that it is not
available for ethics, even if it is for science (1985
Ch.8)--there is no reason to think that the absolute conception
could provide me with the best of all possible viewpoints for ethical
thinking. There isn't even reason to think that it can provide
me with a better viewpoint than the viewpoint of my own life. That
latter viewpoint does after all have the pre-eminent advantage of
being mine, and the one that I already occupy anyway (indeed cannot
but occupy). "My life, my action, is quite irreducibly mine, and
to require that it is at best a *derivative* conclusion that it
should be lived from the perspective that happens to be mine is an
extraordinary misunderstanding" (MSH 170).
(Notice that Williams is also making the point here that there is no
sense in the indirect-utilitarian supposition that my living my life
from my own perspective is something that can be given a philosophical
vindication from the impartial perspective, and can then reasonably be
regarded (by me or anyone else) as justified. Williams sees an
incoherence at the very heart of the project of indirect
utilitarianism, because he does not believe that the ambition to
justify one's life "from the outside" in the
utilitarian fashion can be coherently combined with the ambition to
live that life "from the
inside".[25]
The kind of factors that make a life make sense are so different from
the kind of factors that utilitarianism is structurally obliged to
prize that we have every reason to hope that people will not think in
the utilitarian way. In other words, it will be best even from the
utilitarian point of view if no one is actually a utilitarian; which
means that, at best, "utilitarianism's fate is to usher
itself from the scene" (UFA: 134).) While some utilitarians have
claimed to be unfazed by this result--it does not, after all, imply the
falsity of utilitarianism *qua* theory of right
action--the fact that they continue to publish books and articles
defending utilitarianism suggests that they are not really content for the
theory to play no direct role in our moral deliberations.
On the issue of impartiality, it will no doubt be objected that
Williams overstates his case. It seems possible to engage in the kind
of impartial thinking that is needed, not just by utilitarianism, but
by any plausible morality, without going all the way to
Sidgwick's very peculiar notion of "the point of view of
the universe". When ordinary people ask, as they always have
asked, the question "How would *you* like it?", or
when Robert Burns utters his famous optative "O wad some
pow'r the giftie gie us/ To see oorselves as ithers see
us",[26]
it does not (to put it mildly) make best sense of what they are
saying to attribute to them a faithful commitment to the theoretical
extravagances of a high-minded Victorian moralist. Can't
morality find a commonsense notion of impartiality that
*doesn't* involve the point of view of the universe?
Indeed, if Williams' own views about impartiality are plausible,
mustn't he himself use some such notion?
To this Williams will reply, we think, that a commonsense notion of
impartiality is indeed available--to us, though not to moral
theory. The place of commonsense impartiality in our ordinary ethical
thought is utterly different from the theoretical role of
utilitarianism's notion of impartiality. The commonsense notion
of impartiality is not, unlike the utilitarian notion, a lowest common
theoretical denominator for notions of rightness, by reference to
which all other notions of rightness are to be understood. Rather,
commonsense impartiality is *one ethical resource among
others.* (Cp. the quotation above from 1985: 117 about avoiding
sparseness and reduction in our ethical thinking, and
"cherishing as many ethical ideas as we can".) Moreover,
and crucially, Williams' acceptance of "methodological
intuitionism" (see MSH essay 15) commits him to saying that the
relation of the commonsense notion of impartiality to other ethical
resources or considerations is essentially indeterminate: "It
may be obvious that in general one sort of consideration is more
important than another... but it is a matter of judgement whether
in a particular case that priority is preserved: other factors alter
the balance, or it may be a very weak example of the kind of
consideration that generally wins... there is no reason to
believe that there is one currency in terms of which all relations of
comparative importance can be represented" (MSH 190). The
indeterminacy of the relations between commonsense impartiality and
other ethical considerations means that commonsense impartiality
resists the kind of systematisation that moral theory demands. Hence,
there is indeed a notion of impartiality that makes sense, and there
is indeed a notion of impartiality that is available to a moral theory
such as utilitarianism; but the impartiality that is available to
utilitarianism does not make sense, and the impartiality that makes
sense is not available to utilitarianism.
Williams argues, then, that the utilitarian world-view is absurd
because it requires agents to be impartial, not merely in the weak and
everyday sense that they take impartiality to be one ethical
consideration among an unsystematic collection of other considerations
that they (rightly) recognise, but in the much stronger, reductive and
systematising, sense that they adopt the absolute impartiality of
Sidgwick's "point of view of the universe".
We can also say something that sounds quite different, but which in
the end is at least a closely related point. We can say that Williams
takes the utilitarian world-view to be absurd, because it requires
agents to act on external reasons. We turn to that way of putting the
point in section 5.
## 5. Internal and external reasons
In his famous paper "Internal and external reasons" (1981:
101-113) Williams presents what we'll call "the
internal reasons thesis": the claim that all reasons are
internal, and that there are no external reasons.
The internal reasons thesis is a view about how to read sentences of
the form "A has reason to φ". We can read such
sentences as implying that "A has some motive which will be
served or furthered by his φ-ing" (1981: 101), so that, if
there is no such motive, it will not be true that "A has reason
to φ". This is the *internal* interpretation of such
sentences. We can also read sentences of the form "A has reason
to φ" as not implying this, but as saying that A has reason
to φ even if none of his motives will be served or furthered by
his φ-ing. This is the *external* interpretation of such
sentences, on which, according to Williams, all such sentences are
false.
Since he is widely misinterpreted on this point, it is important to
see that Williams is only offering a *necessary* condition for
the truth of sentences of the form "A has reason to
φ". He is not (officially) committed to the stronger claim,
that the presence of some motive which is served or furthered by
φ-ing is *sufficient* for the possession of a reason to
φ. Officially, then, Williams is not defining or fully analyzing
the concept of a reason; rather, the necessary condition itself
represents a threat. (We say "officially" because there
are indications that Williams unofficially held a stronger view; see
the final paragraphs in this piece for elaboration.)
Very roughly, then, the basic idea of Williams' internal reasons
thesis is that we cannot have genuine reasons to act that have no
connection whatever with anything that we care about. His positive
defense of the thesis can be roughly stated as follows: since it must
be possible for an agent to act *for* a reason, reasons must be
capable of *explaining* actions. This argument, it should be
noted, brings together a normative and a descriptive concept in a
robustly naturalistic manner. The notion of a reason, he argues, is
inextricably bound up with the notion of explanation. Absent a motive
which can be furthered by some action, it seems impossible for an
agent to actually perform the action except under conditions of false
information. If an external reason is one that is supposed to obtain
in the absence of the relevant motive even under conditions of full
information, then external reasons can never explain actions, and
hence cannot be reasons at all (1981: 107).
This thesis presents a challenge to certain natural and traditional
ways of thinking about ethics. When we tell someone that he should not
rob bank-vaults or murder bank-clerks, we usually understand ourselves
to be telling him that he has *reason* not to rob bank-vaults
or murder bank-clerks. If the internal reasons thesis is true, then
the bank-robber can prove that he has no such reason simply by showing
that he doesn't care about anything that is achieved by
abstaining from bank-robbing. So we seem to reach the disturbing
conclusion that morality's rules are like the rules of some
sport or parlour-game--they apply only to those who choose to
join in by obeying them.
One easy way out of this is to distinguish between moral
*demands* and moral *reasons.* If all reasons to act are
internal reasons, then it certainly seems that the bank-robber has no
*reason* not to rob banks. It doesn't follow that the
bank-robber is not subject to a moral *demand* not to rob
banks. If (as we naturally assume) there is no opting out of obeying
the rules of morality, then everyone will be subject to that moral
demand, including the bank-robber. In that case, however, this moral
demand will not be grounded on a reason that applies
universally--to everyone, and hence even to the bank-robber. At
most it will be grounded in the reasons that *some* of us have,
to want there to be no bank-robbing, and in the thought that it would
be nice if people like the bank-robber were to give more general
recognition to the presence of that sort of reason in
others--were, indeed, to add it to their own repertoire of
reasons.
If we take this way out, then the moral demand not to rob banks will
turn out to be grounded not on universally-applicable moral reasons,
but on something more like Humean empathy. Williams himself thinks
that this is, in general, a much better way to ground moral demands
than the appeal to reasons ("Having sympathetic concern for
others is a necessary condition of being in the world of
morality", 1972: 26; cp. 1981: 122, 1985 Ch.2). In this he
stands outside the venerable tradition of rationalism in ethics, which
insists that if moral demands *cannot* be founded on moral
reasons, then there is something fundamentally suspect about morality
itself. It is this tradition that is threatened by the internal
reasons thesis.
Of course, we might wonder how significant the threat really is. As we
paraphrased it, the internal reasons thesis says that "we cannot
have genuine reasons to act that have no connection whatever with
anything that we care about". Let us take up this notion of
"connections". As Williams stresses, the internal reasons
thesis is not the view that, unless I *actually* have a given
motive *M*, I cannot have an internal reason corresponding to
*M*.[27]
The view is rather that I will have no internal reason unless either
(a) I actually have a given motivation *M* in my
"subjective motivational set" ("my S": 1981:
102), or (b) I could come to have *M* by following "a
sound deliberative route" (MSH 35) from the beliefs and
motivations that I do actually have--that is, a way of reasoning
that builds conservatively on what I already believe and care about.
So, to cite Williams' own example (1981: 102), the internal
reasons thesis is not falsified by the case of someone who is
motivated to drink gin and believes that this is gin, hence is
motivated to drink this--where "this" is in fact
petrol. We are not obliged to say, absurdly, that this person has a
genuine internal reason to drink petrol, nor to say, in contradiction
of the internal reasons thesis, that this person has a genuine
external reason not to drink what is in front of him. Rather we should
note the fact that, even though he is not actually motivated not to
drink the petrol, he *would* be motivated not to drink it
*if he realised that it was petrol*. He can get to the
motivation not to drink it by a sound deliberative route from where he
already is; hence, by (b), he has an internal reason not to drink the
petrol.
It is this notion of "sound deliberative routes" that
prompts the question, how big a threat the internal reasons thesis
really is to ethical rationalism. Going back to the bank-robber, we
might point out how very unlikely it is to be true that he
doesn't care about *anything* that is achieved by not
robbing banks, or lost by robbing them. Doesn't the bank-robber
want, like anyone else, to be part of society? Doesn't he want,
like anyone else, the love and admiration of others? If he has either
of these motivations, or any of a galaxy of other similar ones, then
there will very probably be a sound deliberative route from the
motivations that the bank-robber actually has, to the conclusion that
even he should be motivated not to rob banks; hence, that even he has
internal reason not to rob banks. But then, of course, it seems likely
that we can extend and generalise this pattern of argument, and
thereby show that just about anyone has the reasons that (a sensible)
morality says they have. For just about anyone will have internal
reason to do all the things that morality says they should do,
provided only that they have any of the kind of social and extroverted
motivations that we located in the bank-robber, and used to ground his
internal reason not to rob banks. Hence, we might conclude, the
internal reasons thesis is a threat neither to traditional ethical
rationalism nor indeed to traditional morality--not at least
once this is shorn by critical reflection of various excrescences that
really are unreasonable.
This line of thought does echo a pattern of argument that is found in
many ethicists, from Plato's *Republic* to Philippa
Foot's "Moral Beliefs". However, it does not ward
off the threat to ethical rationalism. The threat still lurks in the
"if". We have suggested that the bank-robber will have
internal reason not to rob banks, *if* he shares in certain
normal human social motivations. But what if he *doesn't*
share in these? The problem is not merely that, if he doesn't,
then we won't know what to say to him. The problem is that the
applicability of moral reasons is still conditional on people's
actual motivations, and local to those people who have the right
motivations. But it seems to be a central thought about moral reasons,
as they have traditionally been understood, that they should be
*unconditionally* and *universally* overriding: that it
should not be possible even in principle for any rational agent to
stand outside their reach, or to elude them simply by saying
"Sorry, but I just don't *care* about
morality". On the present line of thought, this possibility
remains open; and so the internal reasons thesis remains a threat to
ethical rationalism.
One way of responding to this continuing threat is to find an argument
for saying that every agent has, at least fundamentally, the same
motivations: hence moral reasons, being built upon these motivations,
are indeed unconditionally and universally overriding, as the ethical
rationalist hoped to show. One way of doing this is the
Thomist-Aristotelian way, which grounds the universality of our
motivations in our shared nature as human beings, and in certain
claims which are taken to be essentially true about humans just as
such.[28]
Another is the Kantian way, which grounds the universality of our
motivations in our shared nature as agents, and in certain claims
which are taken to be essentially true about agents just as such.
It is interesting to note that this sort of ethical-rationalist
response to the internal reasons thesis can seem to undercut
Williams' distinction between external and internal reasons. For
the
Thomist/neo-Aristotelian[29]
or the Kantian, the point is not that we can truly say, with the
external reasons theorist, that an agent has some reasons that bear no
relation at all to the motivations in his present *S*
(subjective motivational set), or even to those motivations he might
come, by some sound deliberative route, to derive from his present
*S*. The point is rather that there are some motivations which
are derivable from *any S whatever*.[30]
Williams himself recognises this point in the case of Kant (WME 220,
note 3): "Kant thought that a person would recognise the demands
of morality if he or she deliberated correctly from his or her
existing *S*, whatever that *S* might be, but he thought
this because he took those demands to be implicit in a conception of
practical reason which he could show to apply to *any rational
deliberator as such.* I think that it best preserves the point of
the internalism/externalism distinction to see this as a limiting
case of
internalism."[31]
So for the Kantian and the neo-Aristotelian or Thomist, there are
motivations which appear to ground internal reasons only, since the
reasons that they ground are always genuinely related to whatever the
agent actually cares about. On the other hand, these motivations also
appear to ground reasons which have exactly the key features that the
ethical rationalist wanted to find in external reasons. Two in
particular: first, these reasons are *unconditional*, because
they depend on features of the human being (Aquinas) or the agent
(Kant) which are *essential* features--it is a necessary
truth that these features are present; and second, these reasons are
*universal*, because they depend on *ubiquitous*
features--features which are present in *every* human or
agent. So Williams' response to the neo-Aristotelian or the
Kantian view of practical reason had better not be (and indeed is not)
simply to invoke his internal reasons thesis. As he realises, he also
needs to argue that there can't be reasons of the kinds that the
neo-Aristotelian and the Kantian posit: reasons which are genuinely
unconditional, but also genuinely related to each and every
agent's actual motivations. Whatever else may be wrong with the
neo-Aristotelian and Kantian theories of practical reason, it
won't be simply that they invoke external reasons; for it is
fairly clear that they *don't* (the contemporary Kantian
philosopher who has most effectively pushed this point is Christine
Korsgaard, see her *Sources of Normativity*, 1996).
If not even Kant counts as an external reasons theorist, who does?
That is a natural question at this point, since it is probably Kant
who is usually taken to be the main target of Williams' argument
against external reasons. This assumption is perhaps based on the
evidence of 1981: 106, where (despite the points we have already noted
about Kant's theory which Williams recognised at least by 1995)
Williams certainly attributes to Kant the view that there can be
"an 'ought' which applies to an agent independently
of what the agent happens to want". Even here, however, Williams
is actually rather cagey about saying that Kant is an external reasons
theorist: he tells us that the question 'What is the status of
external reasons claims?' is "not the same question as
that of the status of a supposed categorical imperative";
"or rather, it is not undoubtedly the same question",
since the relation between oughts and reasons is a difficult issue,
and anyway there are certainly external reasons claims which are not
moral claims at all, such as Williams' own example of Owen
Wingrave's family's pressure on him to follow his father
and grandfather into the army (1981: 106).
In any case, it is important to see that there do not have to be
*any* examples of philosophers who clear-headedly and
definitely espouse an external reasons theory. The point is rather
that no one could be a clear-headed and definite external reasons
theorist if Williams is right, because, in that case, the notion of
external reasons is basically unintelligible (MSH 39:
"mysterious", "quite obscure").
Williams' internal reasons thesis is that it is unintelligible
to suppose that something could genuinely be a reason for me to act
which yet had no relation either to anything I care about or to
anything that I might, without brainwashing or other violence to my
deliberative capacities, come to care
about.[32]
If this thesis is true, then perhaps we should not expect to find any
definite examples of clear-headed external reasons theorists. It will
be no surprise if someone who tries to develop a clear-headed external
reasons theory turns out not to be *definitely* an external
reasons theorist: thus for example John McDowell's theory in WME
Essay 5, even though it is explicitly presented as an example of
external reasons theory, is probably not best understood that way.
(Very quickly, this is because McDowell wants to develop an external
reasons theory as a view about moral perception, "the
acquisition of a way of seeing things" (WME 73). But
*literal* perception does not commit us to external reasons.
When I literally "just see" something, my visual
perception--even my well-habituated and skilful
perception--adds something to my stock of internal, not external,
reasons. If we take the perceptual analogy seriously in ethics, it is
hard to see why we can't say the same about moral perceptions.)
Nor, conversely, will it be surprising if someone who tries to develop
what is definitely an external reasons theory turns out not to be, so
far forth, very clear-headed. Thus Peter Singer's exhortations
to us to take up the moral point of view (see e.g. *Practical
Ethics*
10-11[33])
give us perhaps the most definite example available of an external
reasons theory in contemporary moral philosophy--but are also one
of the least clearly-explained or justified parts of Singer's
position. The notion of an external reason is, basically, a confused
notion, and Williams' fundamental aim is to expose the
confusion.[34]
The fact that there can be no clear and intelligible account of
external reasons has important consequences, consequences which go to
the heart of the morality system discussed in sections 2 and 3, and
which also relate back to the critique of utilitarianism that we saw
Williams develop in section 4. If there can be no external reasons,
then there is no possibility of saying that the same set of moral
reasons is equally applicable to all agents. (Not at least unless some
universalising system like Kantianism or neo-Aristotelianism can be
vindicated without recourse to external reasons; Williams, as
we've seen, rejects these systems on other grounds.) Deprived of
this possibility, we are thrown immediately into a
*historicised* way of doing ethics--a project with roots
in both the Hegelian and Nietzschean traditions. No absolute
conception of ethics will be available to us; hence, neither will the
kind of impartiality that utilitarianism depends upon. Agents'
reasons, and what agents' reasons can become, will always be
relativised to their particular contexts and their particular lives;
and that fact too will be another manifestation of "moral
luck".
Furthermore--a consequence that Williams particularly
emphasises--without external reasons, or alternatively something
like Kantianism or neo-Aristotelianism, there will be no possibility
of deploying the notion of *blame* in the way that the morality
system wants to deploy it. "Blame involves treating the person
who is blamed like someone who had a reason to do the right thing but
did not do it" (MSH 42). But in cases where someone had no
*internal* reason to do (what we take to be) the right thing
that they did not do, it was not in fact true that they had
*any* reason to do that thing; for internal reasons are the
only reasons. Typical cases of blaming people will, then, often have
an unsettling feature closely related to one that we noted at the
beginning of this section. They will rest on the fiction that the
people blamed had really signed up for the standards whereby they are
blamed. And so, once again, there will seem to be something optional
about adherence to the standards of morality: morality will seem to be
escapable in just the sense that the morality system denies.
Williams' denial of the possibility of external
reasons--understood in light of his naturalism and his
anti-systematic outlook--thus underwrites and supports his views
on a whole range of other matters. And though the internal reasons
thesis too is, in an important way, a negative thesis, it is arguably
the cornerstone of a more robust, positive conception of our practical
lives. While he only defended the negative condition in print, there
is evidence that Williams actually believed that the right kind of
strong desire is a sufficient condition for the possession of a
practical reason. In *Ethics and the Limits of Philosophy*, he
briefly opined that "desiring to do something is of course a
reason for doing it" (1985: 19), and he developed a theory of
*practical necessity* according to which our deep commitments
("ground projects") necessarily constrain and direct our
practical rationality (MSH 17).
Seen in this light, the internal reasons thesis is the seed out of
which most of Williams' ethical ideas grow. At the outset of his
writing career, he took for his own "a phrase of D.H.
Lawrence's in his splendid commentary on the complacent moral
utterances of Benjamin Franklin: 'Find your deepest impulse, and
follow that'" (1972: 93). Thirty years later he added,
when looking back over his career, "If there's one theme
in all my work it's about authenticity and
self-expression... It's the idea that some things are in
some real sense really you, or express what you and others
aren't.... The whole thing has been about spelling out the
notion of inner
necessity."[35]

## 1. Life
Donald Williams was born on 28 May 1899 in Crows Landing, California,
at that time a strongly rural district, and died in Fallbrook, also in
his beloved California, and also at that time far from cities, on 16
January 1983. His father was Joseph Cary Williams, who seems to have
been a jack of all countrymen's trades; his mother Lula Crow, a
local farmer's daughter. Donald was the first in his family to
pursue an academic education. After studies in English Literature at
Occidental College (BA 1923), he went to Harvard for his Masters, this
time in Philosophy (AM 1925). He then undertook further graduate study
in philosophy, first at the University of California at Berkeley
(1925-27), then at Harvard, where he took his PhD in 1928.
Also in 1928 he married Katherine Pressly Adams, from Lamar, Colorado,
whom he had met at Berkeley, where she was something of a
pioneer--a woman graduate student in psychology. In time, there
were two sons to the marriage. The couple spent a year in Europe in
1928-29 ("immersing himself in Husserl's
Phenomenology to the point of immunization", Firth, Nozick and
Quine 1983, p 246). Then Donald began his life's work as a
Professor of Philosophy, first spending ten years at the University of
California, Los Angeles, and then from 1939 until his retirement in
1967, at Harvard.
Throughout his long and distinguished career in philosophy he retained
a down-to-earth realism and naturalism in metaphysics, and a
conservative outlook on moral and political issues, characteristic of
his origins. A stocky, genial, and cheerful man, he found neither the
content nor the validation of ethics to be problematic. Although
originally a student of literature, whose first publication was a book
of poetry, he had not the slightest tincture of literary or academic
bohemianism. He was among the very least alienated academics of his
generation; which is not surprising, as his career was indeed one
version of The American Dream.
## 2. The Nature of Philosophy
The traditional ambition of philosophy, in epistemology and
metaphysics, is to provide a systematic account of the extent and
reliability of our knowledge, and on that basis, to provide a synoptic
and well-based account of the main features of Reality. When Williams
was in his prime, this ambition was largely repudiated as
inappropriate or unattainable, and a much more modest role for
philosophy was proposed. In setting forth his own position, in the
preface to the collection of his selected essays (1966, p.viii), he
lists some of these fashionable philosophies from the mid-twentieth
century:
>
>
> ...logical positivism, logical behaviorism, operationalism,
> instrumentalism, the miscellaneous American linguisticisms, the
> English varieties of Wittgensteinism, the Existentialisms, and Zen
> Buddhism...
>
>
>
Each of these is, in its own way, a gospel of relaxation. They all
propose that, in place of the struggle to uncover how things are,
careful descriptions of how things appear will suffice. None is
ambitious enough to set about constructing a positive and systematic
epistemology and metaphysics.
Undeterred by this spirit of the age, Williams continued to insist
that philosophical issues, including those of traditional metaphysics,
are real questions with genuine answers (Fisher 2017). Conceptual
analysis, concentration on phenomenological description, or
exploration of the vagaries of language, may have their (subordinate)
place, but to elevate them to a central position is an evasion of
philosophy's main task.
Still worse was the suggestion that philosophical questions are mere
surface expressions of a philosopher's underlying
psycho-pathology. The claim of Morris Lazerowitz to that effect,
suggesting that Bradley's Absolute Idealism was no more than an
intellectual's poorly expressed death wish, or that
McTaggart's argument against the reality of time was a panic fear of
change, he met with a stern rebuke (1959, pp 133-56).
He set forth a Realist philosophy on traditional empiricist
principles: "He thought that practically everything was right
out there where it belonged" (Firth, Nozick and Quine 1983, p
246). He maintained that while all knowledge of fact rests on
perceptual experience, it is not limited to the perceptually given,
but can be extended beyond that by legitimate inference (1934a). In
this way his Realism can develop the breadth and depth required to do
justice to all the scientific techniques which so far surpass mere
perception.
Williams's empiricism extended to philosophy itself. He
challenged the prevailing orthodoxy that philosophy is a purely *a
priori* discipline. He emphasized the provisional character of
much philosophizing, and the striking absence of knock-down arguments
in philosophic controversy ("Having Ideas In The Head"
1966, 189-211).
## 3. Metaphysics--Cosmology
Following his own prescription for an affirmative and constructive
philosophy, Williams worked steadily towards the development of his
own distinctive position in metaphysics. He introduced the useful
division of the subject into Speculative Cosmology, which deals with
the basic elements making up the world we live in, such as matter,
mind, and force, together with the relations between them, and
Analytic Ontology, which explores the fundamental categories, such as
Substance and Property, and how they relate to one another.
Speculative Cosmology, in particular, needs to be open to developments
in the fundamental sciences, and so needs to be seen as always
provisional and *a posteriori.* Analytic Ontology, the
exploration of the categories of being, is a more purely reflective
and *a priori* discipline aiming to elucidate the range of different
elements in any universe.
To begin with Speculative Cosmology, Williams's position has
three main features.
It is Naturalistic. The natural world of space, time, and matter, with
all its constituents, is a Reality in its own right. Contrary to all
Idealist metaphysics, this world, except for the finite minds that are
to be found within it, is independent of any knowing mind. Moreover, it
is the only world. There are no divinities or supernatural powers
beyond the realm of Nature ("Naturalism and the Nature of
Things" 1966, pp 212-238). Even mathematical realities
belong in the natural world: numbers--at least natural
ones--as abstractions from clusters, and geometrical objects as
abstractions from space. The philosophy of mathematics was an aspect
of his position that was never fully worked through.
Second, his position is not only Naturalist but also Materialist. This
Materialism is not of the rather crude kind that supposes that every
reality is composed of a solid, crunchy substance, the stuff that
makes up the atoms of Greek speculation. Any spatio-temporal reality,
whether an 'insubstantial' property, such as a color, or
something as abstract as a relation, such as farther-away-than, or
faster-than, so long as it takes its place as a spatio-temporal
element at home in the world of physics, is accepted as part of this
one great spatio-temporal world. Williams's Materialism is thus
one which can accept whatever physical theory posits as the most
plausible foundation for the natural sciences, provided that it
specifies a world which develops according to natural law, without any
teleological (final) causes.
The Mental is accommodated in the same way. Although Mind is
unquestionably real, mental facts are as spatio-temporally located as
any others ("The Existence of Consciousness", "Mind
as a Matter of Fact" 1966, pp 23-40, 239-261). The
Mental is not an independent realm parallel to and equal to the
Material, but rather a tiny, rather insignificant fragment of Being,
dependent upon, even if not reducible to, the physical or biological
nature of living beings. Williams has a capacious conception of the
Material--his position could perhaps be better described as
Spatio-Temporal Naturalism.
Thirdly, Williams's metaphysics is 4-dimensional. Or, more
precisely, given the development of multi-dimensional string theories
since his time, it takes Time to be a dimension in the same way that
Space has dimensions. The first step is to insist, against Aristotle
and his followers, that statements about the future, no less than
those concerning the present and the past, are timelessly true or
false. They need not await the event they refer to, in order to gain a
truth value ("The Sea Fight Tomorrow" 1966, pp
262-288). This encourages the further view that the facts that
underpin truths about the future are (timelessly) Real. All points in
time are (timelessly) Real, as are all points on any dimension of our
familiar Space. Whatever account is to be given of Change, it does not
consist in items gaining or losing Reality. This stance receives
powerful support from physical theory. Williams embraced and argued
for the conception of Time as a fourth dimension introduced by
Minkowski's 'Block Universe' interpretation of
Einstein's Theory of Special Relativity. A consequence of this
is that the experience of the flow or passage of Time must be some
sort of illusion. Williams embraced that consequence, and argued for
it in a celebrated paper ('The Myth of Passage',
1951).
## 4. Metaphysics--Ontology
Apart from his Materialistic or Spatio-Temporal Naturalism,
Williams's major contributions to metaphysics lie in the realm
of ontology, and concern the fundamental constituents of Being. His
key proposal is that properties are indeed real--in fact Reality
consists in nothing but properties--but that these properties are
not Universals, as commonly supposed, but particulars with unique
spatio-temporal locations. The structure of Reality comprises a single
fundamental category, Abstract Particulars, or "tropes".
Tropes are particular cases of general characteristics. A general
characteristic, or Universal, such as redness or roundness, can occur
in any one of indefinitely many instances. Williams' focus was
on the particular case of red which occurs as the color, for example,
of a particular rose at a specific location in space and time, or the
particular case of circularity presented by some particular coin in my
hand on a single, particular occasion. These tropes are as particular,
and as grounded in place and time, as the more familiar objects, the
rose and the coin, to which they belong.
These tropes are the building blocks of the world. In his analogy,
they provide 'the Alphabet of Being' from which the
entities belonging to more complex categories--objects,
properties, relations, events--can be constructed, just as words
and sentences can be built using the letters of the alphabet. Familiar
objects such as shoes and ships and lumps of sealing wax, and their
parts as revealed by empirical scientific investigation, such as
crystals, molecules and atoms, are concrete particulars or things. In
Williams's scheme, each of these consists in a
*compresent* *cluster* of tropes--the particular
thing's particular shape, size, temperature, and consistency,
its translucency, or acidity, or positive charge, and so on. All the
multitude of different tropes that comprise some single complex
particular do so by virtue of their sharing one and the same place, or
sequence of places, in Space-Time. That is what
'compresent' means. There is no inner substratum or
individuator to hold all the tropes together. The tropes are
individuals in their own right, and do not inhere in any thing-like
particular. So Williams's view is a No-Substance theory, or,
otherwise described, a theory in which each trope is itself a simple
Humean substance, capable of independent existence.
Universal properties and quantities such as acidity and velocity,
which are common to many objects, are not beings in their own right,
but *resemblance classes* of individual tropes. If two objects
match in color, both being red, for example, the tropes of color
belonging to each are separate tropes, both being members of the class
of similar color tropes which constitutes Redness.
Relations are treated along the same lines. If London is Larger-than
Edinburgh, and Dublin Larger-than Belfast, we have two instances of
the Larger-than relation, two relational tropes. And they, along with
countless other cases, all belong to the resemblance class whose
members are all and only the cases of Larger-than. This account denies
that, literally speaking, there is any single entity which is
simultaneously fully present in two different cases of the same color,
or temperature, or whatever. So it is a No-Universals view, and often
described as a version of Nominalism. This is understandable, but it
is better to confine the term 'Nominalism' to the denial
of the reality of properties at all. So far from denying properties,
on Williams's theory the entire world consists in nothing but
tropes, which are properties construed as particulars. So his position
is better described not as Nominalist but as Particularist.
Tropes provide an elegant and economical base for an ontology. Unlike
almost all competing ontologies, that of Williams rests on just one
basic category, which can be used in the construction not just of
things and their properties and relations, but of further categories,
such as events and processes. Events are replacements of the tropes
that are to be found in a given location. Processes are sequences of
such changes. Trope theory is well placed to furnish an attractive
analysis of causality, as involving power tropes that govern and drive
the transformations to be found in events and processes.
It can also be of use in other areas of philosophy, for example in
valuation theory, where the existence of many tropes, rather than one
single unitary reality, can explain our sometimes divided attitudes
toward what, on a Substance ontology, we would regard as the same
thing. Something can be good in some respects (tropes), but not in
others. To view the manifest world as comprising, for the most part,
clusters of compresent tropes makes explicit the complexity of the
realities with which we are ordinarily in contact.
## 5. Objections to Williams's Trope Ontology
Many philosophers have admitted tropes into their scheme of things:
Aristotle, Locke, Spinoza and Leibniz, for example. What is
distinctive in Williams is not that tropes are admitted as a category,
but as the *only* fundamental category, a trope-based form of
what Schaffer calls "property primitivism" (Schaffer
2003, 125). All else is constructed out of tropes, including concrete
particulars and general properties, whereas tropes themselves are not
constructed out of anything else, for example, out of a substance, a
universal and the relation of exemplification. Not surprisingly,
various aspects of Williams' trope primitivism have been
subjected to serious philosophical challenges either directly or
indirectly.
Some philosophers reject all forms of property primitivism, including
that of Williams, on the grounds that properties cannot serve as the
only independent elements of being. Properties must be had by objects
and a property cannot be instantiated in isolation from wholly
distinct properties so they lack the requisite independence, the
capacity of existing in any combination with "wholly distinct
existences." (Some philosophers go so far as to claim that if a
property *P* is a trope, then *P* is dependent on the
specific thing that has it and so could not have existed without that
thing existing; see Mulligan, Simons, & Smith 1984, Heil 2003.)
There are two ways for this objection to go. (1) Armstrong takes the
ostensive fact that properties must be had by objects to establish the
dependence of properties on non-property particulars, substrata, and,
thus, the falsity of property primitivism (Armstrong 1989, 115). (2)
Alternately, one can take the apparent fact that there can be no
properties that are not clustered with other properties to show that
properties cannot play this role.
In response to (1), one can challenge the assumption that if
properties must be had by objects, then they are not capable of
independent existence. Ross Cameron, for example, suggests that
properties might both be capable of independent
existence--including existing without substrata--and not be
capable of existing without being the property of something (2006,
104). The dependence of properties on objects is compatible with
property primitivism if one adopts a bundle theory of concrete
particulars. In that case, properties are always had by objects since
they are always found in bundles, even if only a bundle of one, but
exist without substrata.
As for (2), some philosophers reject the requirement that if
properties are the only ultimate constituents of reality, then they
must be capable of existing in isolation from all other wholly
distinct properties (Simons 1994; Denkel 1997). Properties can be both
the only ultimate constituents of reality and inter-dependent. This
response requires the rejection of the Humean principle that the basic
independent units of being can exist in any combination, including
unaccompanied (Schaffer 2003, 126). A very different response to (2)
rejects the necessity of trope clustering altogether (Williams 1966,
97; Campbell 1981, 479; Schaffer 2003).
>
>
> Plausible though it be, however, that a color or a shape cannot exist
> by itself, I think we have to reject the notion of a standard of
> concreteness. ... (Williams 1966, 97)
>
>
>
At best, it is a contingent matter that there are no tropes that are
not compresent with any other tropes--for example, a mass trope
on its own. (Campbell goes so far as to suggest that there is reason
to think that there are actual cases of free-floating tropes (1981,
479)).
Even if the trope primitivist can get around these quite general
objections, there remains the most serious objection to
Williams' brand of trope primitivism, an objection that is
specific to Williams' conception of tropes, depending on more
than the assumption that tropes are properties. The charge is that a
Williamsonian trope is not genuinely simple, but complex, embracing at
least an element that furnishes the nature or content of the trope,
and an element providing its particularity (Hochberg 1988; Armstrong
2005; Moreland 1985; Ehring 2011). In short, Williamsonian tropes are
constructed out of something else, making them incompatible with trope
primitivism.
This objection is based on two assumptions. First, under
Williams' conception, the nature of a trope is a non-reducible,
intrinsic matter that is not determined by relations to anything else,
including resemblance relations to other tropes or memberships in
various natural classes of tropes. And, second, anything that stands
in more than one arbitrarily different relation, each of which is
grounded intrinsically in that entity, must be complex since that
entity will have intrinsic "aspects" that are not
identical to each other. The two relevant relations are numerical
difference from other tropes and resemblance to other tropes, each of
which is grounded intrinsically in the trope relata under the Williams
conception. Hence, there are "intrinsic aspects" of each
trope that are not identical to each other, a particularity-generating
component and a nature-generating component.
In response to this "complexity" objection, Campbell
claims that the distinction between a trope's nature and its
particularity is merely a "formal" distinction, a product
of different levels of abstraction, and not a real distinction between
different components of a trope. One should no more distinguish a
particularity-component from a nature-component of a trope than
distinguish components of warmth and orangeness in an orange
trope:
>
>
> To recognize the case of orange as warm is not to find a new feature
> in it, but to treat it more abstractly, less specifically, than in
> recognising it as a case of orange. (Campbell 1990, 56-7;
> further in Maurin 2005; Fisher 2020)
>
>
>
Alternately, Ehring suggests that we grant this objection, but
preserve trope simplicity by switching to a non-Williamsonian
conception of tropes, according to which a trope's nature is
determined by its memberships in various natural classes of tropes
rather than intrinsically, thereby sidestepping one of the assumptions
operative in the objection (2011; discussion in Hakkarainen and
Keinanen 2017).
Coming under criticism as well is Williams' resemblance-based
account of general characteristics and property agreement. According
to Williams, a fully determinate general color
characteristic--say, the shade of red that characterizes this
shirt and this chair--is just the set of all tropes that exactly
resemble the red trope of this shirt. Different objects of that
"same" shade of red each possess a different trope from
this set of exactly similar red tropes. But this analysis seems to
generate an infinite regress. If trope *t*1 is
related to trope *t*2 by resemblance trope
*r*1, *t*2 is related to trope
*t*3 by resemblance trope *r*2,
and *t*3 is related to *t*1 by
resemblance trope *r*3, then these resemblance
tropes will also resemble each other, giving rise to further
resemblance tropes, and so on. To stop this vicious regress, the
objection continues, resemblance must be taken to be a universal (Daly
1997, 150).
One response to this objection tries to stop the regress before it
starts by denying that there are any resemblance trope-relations
holding between tropes. In particular, the trope theorist might follow
Oliver's advice
>
>
> to avoid saying that when two tropes are exactly similar ...,
> there exists a relation-trope of exact similarity ... holding
> between the two tropes. (1996, 37)
>
>
>
There are no resemblance-tropes corresponding to these resemblance
predicates. Another response denies that resemblance relations mark an
addition to our ontology and, hence, there can be no regress of
resemblance relations. Campbell, for example, argues that the
successive resemblance relations in the regress are nothing over and
above the non-relational tropes that ultimately ground these
relations. Since resemblance is an internal relation, it supervenes on
these ground-level non-resemblance tropes, but supervenient
"additions" are not real additions to one's ontology
(Campbell 1990, 37).
There is also a whole host of objections in the literature to
Williams' trope bundle theory. According to Williams, concrete
particulars are not substrata instantiating various properties. They
are wholly constructed out of tropes, forming bundles of tropes, the
trope constituents of which are pairwise tied together by a
compresence relation. Compresence, in turn, is collocation for
Williams, "the unique congress in the same volume." (In
order to allow for non-spatial objects, Williams grants the
possibility of "locations" in systems analogous to space
(1966, 79).) One immediate worry, raised by Campbell, is the
possibility that there may be cases of overlapping objects
demonstrating that collocation is not sufficient for compresence
(1990, footnote 5, 175). In response, the trope bundle theorist can
opt for the view that compresence is non-reducible. However, even with
this revision there remain significant objections concerning the
possibility of accidental properties, the possibility of change, and
an apparent vicious regress of compresence relations.
Bundle theory has been charged with ruling out the possibility of
accidental properties in concrete particulars. Bundles of properties
have all of their constituent properties essentially. Objects
generally do not. This chair could have been blue instead of red, but
the bundle of properties that characterize the chair could not have
failed to include that red property. One way around this objection is
proposed by O'Leary-Hawthorne and Cover: combine bundle theory
(although for them properties are universals) with a specific account
of modality, a counterpart semantics for statements about ordinary
particulars (1998). What makes it true that a particular object
*o* could have had different properties is that a non-identical
counterpart to *o*, *n*, in another possible world has
different properties than does *o*. As long as there is a
possible world in which there is an object-bundle, *n*, that is a
non-identical counterpart to *o* and differs from *o*
with respect to its properties, then *o* could have differed in
just that way.
Simons suggests a very different approach. He proposes to replace
unstructured bundles with "nucleus theory." An object
consists of an inner core of essential tropes and an outer band of
accidental tropes, but no non-property substratum (1994).
In like manner, bundle theory has been charged with ruling out the
possibility of change in objects. The same bundle complex cannot be
composed of one set of properties at one time, but a different set at a
different time. Concrete objects, on the other hand, can and do
change. In response, it has been suggested that this objection loses
its force if bundle theory is combined with a four-dimensionalist
account of object persistence (Casullo 1988; see also Ehring 2011).
For example, if ordinary objects are spacetime worms made up of
appropriately related instantaneous temporal parts that are themselves
complete bundles of compresent tropes, then change can be read as a
matter of having different temporal parts that differ in their
constitutive properties. Another response to the change-is-impossible
objection is to adopt Simons' "nucleus theory" in
place of traditional bundle theory. Nucleus theory seems to allow for
change in the outer band of accidental properties (Simons 1994).
Trope bundle theory has also been accused of giving rise to a
Bradley-style vicious regress. According to trope bundle theory, for
an object *o* to exist, its tropes must be mutually compresent. However,
it would seem, for trope *t*1 to be compresent with
trope *t*2, they must be linked by a compresence
trope, say, *c*1. But, the existence of
*t*1, *t*2, and
*c*1 is insufficient to make it the case that
*t*1 and *t*2 are compresent since
these tropes could each be parts of different, non-overlapping
bundles. So for *t*1 and *t*2 to
be linked by compresence trope *c*1,
*c*1 must be compresent with *t*1
by way of a further compresence relation, say *c*2
(and with *t*2 by, say, *c*3) and
so on, giving rise to a vicious or, at least, uneconomical
regress (Maurin 2010, 315). In response, one can try to break the link
between the relevant predicates and tropes. Oliver suggests that the
trope theorist should reject the assumption that there are any
relation-tropes corresponding to the predicate "... is
compresent with ..." even though that predicate has some
true applications (1996, 37). This response might be indirectly
supported by reference to Lewis's claim that it is an impossible
task to give an analysis of all predications since any analysis will
bring into play a new predication, itself requiring analysis (Lewis
1983, 353).
A second, quite different response is modeled on Armstrong's
view that "instantiation" is a "tie," not a
relation (since "the thisness and nature are incapable of
existing apart"), and, hence, it is not subject to a relation
regress (1978, 109). The idea is that the union of compresent tropes
is too intimate to speak of a relation between them since the tropes
of the same object could not have existed apart from each other. This
response, however, requires more than generic dependencies--for
example, that this specific mass trope requires the existence of some
solidity trope or other--since generic dependencies would not
guarantee that the specific mass and solidity tropes, say, in this
particular bundle could not have existed without being compresent.
Maurin provides an alternative response that grants the existence of
compresence relation-tropes, but denies the regress on the grounds
that relation-tropes, including compresence relations, necessitate the
existence of that which they relate. There is no need for further
compresence relations holding between a compresence-relation and its
terms (2002, 164). A fourth response, from Ehring, suggests that this
regress can be stopped once it is recognized that compresence is a
self-relating relation, a relation that can take itself as a relatum.
The supposed infinite regress for the bundle theorist involves an
unending series of compresence tropes, *c*1,
*c*2, ..., and *c**n*,
but the series, *c*1, *c*2,
..., and *c**n* is taken to be infinite
because it is assumed that each "additional" compresence
trope is *not* identical to the immediate preceding compresence
trope in the series. However, if compresence is a
"self-relating" relation, this assumption may be false
(2011).
Finally, there is an objection to the very notion of a trope and,
hence, to the foundations of trope primitivism (and, perhaps, to any
ontology that includes tropes). The idea is that if properties are
tropes, then exactly similar properties can be swapped across objects,
but there is no such possibility.
>
>
> If the redness of this rose is exactly similar to but numerically
> distinct from the redness of that rose, then the redness of this rose
> could have been the redness of that rose and vice versa. But this is
> not really a possibility and, thus, properties are not tropes.
> (Armstrong 1989, 131-132)
>
>
>
A similar argument is based on the possibility of tropes swapping
positions in space (Campbell 1990, 71). "Property
swapping," it is claimed, is an unreal possibility since
property swapping would make no difference to the world. (Note that
the cross-object version of this objection cannot get off the ground
if tropes are not transferable between objects, a view that is found
in (Martin 1980), although Martin rejects trope primitivism since he
posits substrata in addition to tropes).
One response to the no-swapping objection, given by Campbell and
Labossiere, rejects the assumption that trope swaps would make no
difference to the world. For example, although the effects of
"swapped" situations would be exactly similar in nature,
those effects would differ in their causes (Campbell 1990, 72;
Labossiere 1993, 262). Schaffer, on the other hand, denies that the
trope theory is automatically committed to the possibility of trope
swapping (2001). If tropes are individuated by times/locations and a
counterpart theory of modality is right, then trope swapping is ruled
out:
>
>
> The redness which would be here has exactly the same inter- and
> intraworld resemblance relations as the redness which actually is
> here, and the same distance relations, and hence it is a better
> counterpart than the redness which would be *there*. (Schaffer
> 2001, 253).
>
>
>
What is clear is that Williams' trope ontology remains at the
center of a vibrant and ongoing debate, counting as a serious option
among a small field of contenders. His brand of trope primitivism is
certainly of more than merely historical interest.
Williams has also left a wider, if less plainly manifest, legacy as a
metaphysician. For a discussion of his influence on David Lewis, and
through him on later thinkers, see Fisher 2015.
## 6. The Ground of Induction
The Problem of Induction is the problem of vindicating as rational our
unavoidable need to generalize beyond our current evidence to
comparable cases that we have not yet observed, or that never will be
observed. Without inductive inferences of this kind, not only all
science but all meaningful conduct of everyday life is paralyzed.
Indeed, Williams prefaced his philosophical treatment with a
declaration of the evils of inductive skepticism in eroding rational
standards in general, even in politics:
>
>
> In the political sphere, the haphazard echoes of inductive skepticism
> which reach the liberal's ear deprive him of any rational right
> to champion liberalism, and account already as much as anything for
> the flabbiness of liberal resistance to dogmatic encroachments from
> the left or the right. (Williams 1947, pp 15-20)
>
>
>
David Hume, in the eighteenth century, had shown that all such
inferences must involve risk: no matter how certain our premises, no
inductively reached conclusion can have the same degree of certainty.
Hume went further, and held that inductive reasoning provides no
rational support whatever for its conclusions. This is Hume's
famous inductive scepticism.
Williams was almost alone in his time in holding not only that the
problem does admit of a solution, but in presenting a novel solution
of his own. To do this, he needed to argue against Hume, and
Hume's twentieth century successors Bertrand Russell and Karl
Popper, who had declared the problem insoluble, and also against
contemporaries such as P. F. Strawson and Paul Edwards, who had
claimed that there was no real problem at all (Russell 1912, Chapter
6; Popper 1959; Edwards 1949, pp 141-163; Strawson 1952, pp
248-263). Williams tackled the problem head-on. In *The
Ground of Induction* (1947) he makes original use of results
already established in probability theory, whose significance for the
problem of induction he was the first to appreciate. He treats
inductive inference as a special case of the problem of validating
sampling techniques. Among any population, that is, any class of
similar items, there will be a definite proportion having any possible
characteristic. For example, among the population of penguins, 100%
will be birds, about 50% will be female, some 10%, perhaps, will be
Emperor penguins, some 35%, perhaps, will be more than seven years
old, and so on. This is the *complexion* of the population,
with regard to femaleness, or whatever.
Now in the pure mathematics of the relations between samples and
populations, Jacob Bernoulli had in the 18th century
established the remarkable fact that, for populations of any large
size whatever (say above 2500), the vast majority of samples of 2500
or more closely match in complexion the population from which they are
drawn. In the case of our penguins, for example, if the population
contains 35% aged 7 years or more, well over nine tenths of samples of
2500 penguins will contain between 33% and 37% aged 7 years or more.
This is a necessary, purely mathematical, fact. In the language of
statistics, the vast majority of reasonably sized samples are
*representative* of their population, that is, closely resemble
it in complexion.
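Bernoulli's result is a purely mathematical fact, so it can be checked numerically. The following sketch uses the article's illustrative penguin figures (a population proportion of 35% and the 33%-37% band); the trial count is my own choice for the simulation:

```python
import random

random.seed(0)

POP_PROPORTION = 0.35   # share of penguins aged 7 years or more
SAMPLE_SIZE = 2500
TOLERANCE = 0.02        # "between 33% and 37%"
TRIALS = 1000

def sample_proportion(p, n):
    """Draw a sample of n individuals and return the observed proportion."""
    return sum(random.random() < p for _ in range(n)) / n

# Count how many samples have a complexion within the band.
matches = sum(
    abs(sample_proportion(POP_PROPORTION, SAMPLE_SIZE) - POP_PROPORTION) <= TOLERANCE
    for _ in range(TRIALS)
)
print(f"{matches / TRIALS:.1%} of samples fall within 33%-37%")
```

With these numbers the match rate comes out comfortably above nine tenths, as Williams' argument requires.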
Bernoulli's result enables us to infer from the complexion of a
population to the complexion of most reasonably-sized samples taken
from it, since the complexion of most samples closely resembles the
complexion of the population. Williams's originality was this:
he noticed that resemblance is symmetrical. If we can prove, as
Bernoulli did, that most samples resemble the population from which
they are drawn, then conversely the population's complexion
resembles the complexion of most of the samples.
This brings us to the problem of induction. What our observations of
the natural world provide us with can be regarded as samples from
larger populations. For example the penguins we have observed up to
this point provide us with a sample of the wider population of all
penguins, at all times, whether observed or not. What can we infer
about this wider population from the sample we have? That, in all
probability, the population's complexion is close to that of the
sample. We *may*, of course, have an atypical sample before us.
But with samples of more than 2500, well over 90% represent the
population fairly closely, so the odds are against it.
Thus Williams assimilates the problem of induction to an application
of the statistical syllogism (also called direct inference or the
proportional syllogism). A standard syllogism concerns complexions of
100%, and has a determinate conclusion: if *all S are P*, and
this present item is an *S*, then it must be *P*. A
statistical syllogism deals with complexions of less than 100%, and
its conclusion is not definite but only probable: for example, if
*95% of S are P,* this present *S* is *probably
P*. Some logicians claim that the probability in question is
exactly 0.95, but Williams does not need to rely on that additional
claim. It is enough that the probability be high. Applying the
statistical syllogism to Williams' reversal of the Bernoulli
result, we have: 95% of reasonably-sized samples are closely
representative of their population, so the sample we have, provided
there are no grounds to think otherwise, is probably one of them. On
that basis, we are rationally entitled to infer that the population
probably closely matches the sample, whose complexion is known to us.
The inference is only probable. Induction cannot deliver certainty. In
any given case, it is abstractly possible that our sample may be a
misleading, unrepresentative one. But to expect an inference from the
observed to the unobserved to yield certainty is to expect the
impossible.
## 7. Objections to Williams' proposed solution to the problem of induction
Williams' treatment of induction created quite a stir when it
appeared, but attracted criticisms of varying power from commentators
wedded to more defeatist attitudes, and was eclipsed by the Popperian
strategy of replacing an epistemology of confirmation with one that
focused on refutation. It thus exercised less influence than it
deserves. It is a closely reasoned, deductively argued defense of the
rationality of inductive inference, well meriting continued
attention.
Its reliance on *a priori* reasoning (as opposed to any
contingent principles such as the "uniformity of nature"
or the action of laws of nature) means that it should hold in all
possible worlds. Williams' argument would thus be easily
defeated by the exhibition of a possible world in which inductive
reasoning did not work (in the sense of mostly yielding false
conclusions). However, critics of Williams have not offered such a
possible non-inductive world. Chaotic worlds are not non-inductive (as
the induction from chaos to more chaos is correct in them), while a
world with an anti-inductive demon, who falsifies, say, most of the
inductive inferences I make, is not clearly non-inductive either
(since although most of *my* inductions have false conclusions,
it does not follow that most inductions in general have false
conclusions).
Marc Lange (2011) does however propose a counterexample, arising from
the "purely formal" nature of Williams' argument.
Should it not apply equally to "grue" as to
"green"? Objects are grue if they are green up to some
future point in time, and blue thereafter. The problem is to show that
our sample, to date, of green things is not a sample of things
actually grue. (Williams's own views on the grue problem are
given in his "How Reality is Reasonable" in (2018)).
Stove (1986, 131-144) argued that his more specific version of
Williams's argument (see below) was not subject to the
objection, and that the failure of induction in the case of
'grue' showed that inductive logic was not purely
formal--but then neither was deductive logic.
Other criticisms arise from the suggestion that the proportional
syllogism in general is not a justified form of inference without some
assumption of randomness. Any proportional syllogism (with exact
numbers) is of the form
* The proportion of *F*s that are *G* is
*r*.
* *a* is *F*.
* So, the probability that *a* is *G* is *r*.
(Or, if we let *B* be the proposition that the proportion of
*F*s that are *G* is *r*, and we let
*p*(*h* | *e*) be the conditional
probability of hypothesis *h* on evidence *e*, then we
can express the above proportional syllogism in the language of
probability as follows: *p*(*Ga* | *Fa*
& *B*) = *r*.)
Do we not need to assume that *a* is chosen
"randomly", in the sense that all *F*s have an
equal chance of being chosen? Otherwise, how do we know that
*a* is not chosen with some bias, which would make its
probability of being *G* different from *r*?
Defenders of the proportional syllogism (McGrew 2003; Campbell and
Franklin 2004) argue that no assumption of randomness is needed. Any
information about bias would indeed change the probability, but that
is a trivial fact about any argument. An argument infers from given
premises to a given conclusion; a different argument, with a different
force, moves from some other (additional) premises to that conclusion.
Given just that the vast majority of airline flights land safely, I
can have rational confidence that my flight will land safely, even
though there are any number of other possible premises (such as that I
have just seen the wheels fall off) that would change the probability
if I added them to the argument. The fact that the probability of the
conclusion on *other* evidence would be different is no reason
to change the probability assessment on the given evidence.
Similar reasoning applies to any other property that *a* may
have (or, in the case of the Williams argument, that the sample may
have), such as having been observed or being in the past. If there is
some positive reason to think that property relevant to the conclusion
(that *a* is *G*), that reason needs to be explained; if
not, there is no reason to believe it affects the argument and the
original probability given by the proportional syllogism stands.
Serious criticisms specific to Williams's argument have been
based on claims that the proportional syllogism, though correct in
general, has been misapplied by Williams. Any proportional
syllogism,
* The proportion of *F*s that are *G* is
*r*.
* *a* is *F*.
* So, the probability that *a* is *G* is
*r*,
is subject to the objection that, in the case at hand, there is
actually further information about the *F*s that is relevant to
the conclusion *Ga*. For example, in
* The proportion of candidates who will be appointed to the board of
Albert Smith Corp is 10%.
* Albert Smith Jr is a candidate.
* So, the probability that Albert Smith Jr will be appointed is
10%,
it is arguable that information is hidden in the proper names that is
favorably relevant to the younger Smith's chance of success. The
question then is whether the same could happen with the proportional
syllogism in Williams' argument for induction. Maher (1996)
argues that there is such a problem. In this version of
Williams' argument:
* The proportion of large samples whose complexion approximately
matches the population is over 95%.
* *S* is a large sample.
* So, the probability that the complexion of *S* matches the
population is over 95%,
is there, typically, any further information about the sample
*S* that is relevant to whether it matches the population? The
potentially relevant information one has is the proportion in
*S* of the attribute to be predicted (blue, or whatever it
might be). Can that be relevant to matching?
It certainly can be relevant. For example, if the proportion of blue
items in the sample is 100%, it suggests that the population
proportion is close to 100%; that is positively relevant to matching
since samples of near-homogeneous populations are more likely to
match. (For example, if the population proportion is 100%,
*all* samples match the population.) Conversely, fewer samples
match the population when the population proportion is near one
half.
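Maher's point here is quantitative, and can be illustrated by comparing the match rate for a homogeneous population with that for the hardest case, a population proportion of one half. This is only a sketch; the sample size is the article's 2500, while the tolerance and trial count are illustrative assumptions of mine:

```python
import random

random.seed(1)

SAMPLE_SIZE = 2500
TOLERANCE = 0.02
TRIALS = 1000

def match_rate(pop_proportion):
    """Estimate the fraction of samples within TOLERANCE of the population proportion."""
    hits = 0
    for _ in range(TRIALS):
        observed = sum(random.random() < pop_proportion
                       for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
        if abs(observed - pop_proportion) <= TOLERANCE:
            hits += 1
    return hits / TRIALS

print("p = 1.00:", match_rate(1.00))   # every sample matches exactly
print("p = 0.50:", match_rate(0.50))   # matching is hardest near one half
```

The homogeneous population yields a match rate of exactly 1, while the 50/50 population yields the lowest rate, which is just the relevance Maher alleges.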
David Stove (1986), in defending Williams' argument, proposed to
avoid this problem by taking a more particular case of the argument
which would not be subject to the objection. Stove proposed:
>
>
> If *F* is the class of ravens, *G* the class of black
> things, *S* a sample of 3020 ravens, *r* = 0.95, and
> 'match' means having proportion within 3% of the
> population proportion, then it is evident that 'The proportion
> of *F*s that are *G*s is *x*' is not
> substantially unfavorable to matching (for all *x*).
>
>
>
For even in the worst case, when the proportion of black things is one
half, it is still true that the vast majority of samples match the
population (Stove 1986, 66-75).
Stove's reply emphasizes how strong the mathematical truth about
matching of samples is: it is not merely that, for any population size
and proportion, one can find a sample size and degree of match such
that samples of that size mostly match; it is the much stronger result
that one can fix the sample size and degree of match beforehand,
without needing knowledge of the population size and proportion. For
example, the vast majority of 3020-size samples match to within
3%--irrespective of the population size (provided of course that
it is larger than 3020) and the proportion of objects with the
characteristic under investigation.
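Stove's specific figures can be spot-checked by simulating the worst case he identifies, a population proportion of one half, with his sample size of 3020 and 3% standard of match (the trial count below is my own illustrative choice):

```python
import random

random.seed(2)

SAMPLE_SIZE = 3020     # Stove's sample size
TOLERANCE = 0.03       # 'match' = within 3 percentage points
TRIALS = 1000
WORST_CASE = 0.5       # matching is hardest when the proportion is one half

matches = 0
for _ in range(TRIALS):
    observed = sum(random.random() < WORST_CASE
                   for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
    if abs(observed - WORST_CASE) <= TOLERANCE:
        matches += 1

print(f"worst-case match rate: {matches / TRIALS:.3f}")
```

Even in this least favorable case the match rate comes out well above 0.95, bearing out Stove's claim that the sample size and degree of match can be fixed in advance, whatever the population proportion turns out to be.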
Maher also objects that the attribute itself, such as
"blue", might be *a priori* relevant to its
proportion in the sample and hence to matching. For example, if
"blue" is one of a large range of possible colors, then
*a priori* it is unlikely that an individual is blue and
unlikely that a sample will have many blue items. As with any Bayesian
reasoning, a prior probability close to zero (or one) requires a lot
of evidence to overcome; in this case, we would conclude that a sample
with a high proportion of blue items was most likely a coincidence,
and the posterior probability of the sample matching the population
would still not be high.
Scott Campbell (2001), in reply to Maher, argues that priors do not
dominate observations in the way Maher suggests. By analogy, suppose
that, while blindfolded, I throw a dart at a dartboard. I am told that
99 of the 100 spots on the board are the same color, and that there
are 145 choices of color for the spots. After throwing, I observe just
the spot I have hit and find it is blue. Then (in the absence of
further information), the chance is very high that almost all the
other spots are blue. The prior improbability of blue does not prevent
that. In the same way, the fact that the vast majority of samples
match the population gives good reason to suppose that the observed
sample does too, irrespective of any prior information of the kind
Maher advances.
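Campbell's dartboard case admits an exact Bayesian computation. The model assumptions here are mine: the majority colour is uniform over the 145 choices, the single odd spot takes a colour uniform over the remaining 144, and the dart lands on a uniformly random spot.

```python
from fractions import Fraction

N_SPOTS = 100
N_COLOURS = 145

# Prior: each colour is equally likely to be the majority colour.
prior_majority_blue = Fraction(1, N_COLOURS)

# Likelihood of observing a blue spot:
#  - if blue is the majority colour, 99 of the 100 spots are blue;
#  - otherwise blue can only be the single odd spot, whose colour is
#    uniform over the remaining 144 colours.
p_blue_given_majority = Fraction(N_SPOTS - 1, N_SPOTS)
p_blue_given_minority = Fraction(1, N_SPOTS) * Fraction(1, N_COLOURS - 1)

evidence = (prior_majority_blue * p_blue_given_majority
            + (1 - prior_majority_blue) * p_blue_given_minority)

posterior = prior_majority_blue * p_blue_given_majority / evidence
print(posterior)  # 99/100
```

On these assumptions a single blue observation lifts the probability that almost all the spots are blue from 1/145 to 0.99: the observation swamps the low prior, which is precisely Campbell's point against Maher.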
Williams' defense of induction thus has resources to supply
answers to the criticisms that have been made of it. It remains the
most objectivist and ambitious justification of induction.
## 1. Life and Work
Details about the life of John Cook Wilson, or 'Cook
Wilson' as he is commonly called, may be found in a memoir
published in 1926 by his pupil A. S. L. Farquharson, in his edition of
Cook Wilson's posthumous writings, *Statement and Inference
with other Philosophical Papers* (SI, xii-lxiv). He was born
in Nottingham on June 6, 1849, the son of a Methodist minister.
Educated at Derby Grammar School, he went up to Balliol College in
1868, elected on an exhibition set up by Benjamin Jowett for students
from less privileged schools. At Balliol, Cook Wilson read classics
with Henry Chandler, from whom he certainly got his penchant for
minutiae, and mathematics with Henry J. S. Smith. He also studied
philosophy with Jowett and T. H. Green, who steered him towards Kant
and idealism (SI, 880), along with other students from his generation,
such as Bernard Bosanquet, F. H. Bradley, Richard Lewis Nettleship and
William Wallace. He even went to Göttingen in 1873-74 to
attend lectures by Hermann Lotze (SI, xxvii), whose portrait he kept
in his study. Cook Wilson wrote late in his life that "from the
first I would not commit myself even to the most attractive form of
idealism, tho' greatly attracted by it" (SI, 815).
Farquharson also mentions Friedrich Ueberweg's *System of
Logic and History of Logical Doctrines* (Ueberweg 1871) as an
early 'realist' influence (SI, 880). If we are to follow
Prichard, however, Cook Wilson abandoned idealism "with extreme
hesitation" (Prichard 1919, 309) and "it was only towards
the close of his life that he really seemed to find himself"
(Prichard 1919, 318). (See also on this point Farquharson's
reminiscence in (SI, xix).)
Cook Wilson was elected fellow of Oriel College in 1873 and he
succeeded Thomas Fowler as Wykeham Professor of Logic, New College in
1889. Bernard Bosanquet, Thomas Case, and John Venn had been among his
rivals. He finally moved to New College in 1901, where he remained
until his death from pernicious anemia on 11 August 1915. His wife
Charlotte, whom he had met in Germany, predeceased him; they had a son
(Joseph 1916b, 557). Cook Wilson lived the uneventful life of an
Oxford don. Among his awards, he became Fellow of the British Academy
in 1907. A Liberal in his convictions (SI, xxix), he did not get
involved in politics, nor did he take a prominent part in the affairs
of his university. (He is, for example, seldom mentioned in (Engel
1983).) His most cherished extra-curricular activity appears to have
been the development of tactics for military bicycle units.
Cook Wilson published little during his lifetime. Setting aside
publications on military cycling and other incidental writings, the
bulk of his publications were in his chosen fields of study, classics
and mathematics. His work in Ancient philosophy, which forms the
larger part of this output, is discussed in the next section.
In mathematics, he published a strange treatise that arose out of his
failed attempt at proving the four-colour theorem, *On the
Traversing of Geometrical Figures* (TGF). It had virtually no
echo. Farquharson quoted the mathematician E. W. Hobson explaining
that Cook Wilson "hardly gave sufficient time and thought to the
subject to make himself really conversant with the modern aspects of
the underlying problems" (SI, xxxviii). Cook Wilson also
published two short papers on probability (IP, PBT) in which he gave
new proofs of the discrete Bayes' formula and of Jacob
Bernoulli's theorem (the latter being known today as the weak
law of large numbers). Edgeworth called the former an "elegant
proof" (Edgeworth 1911, 378, n.10), but Cook Wilson was
seemingly unaware of a better proof of the latter, already available,
via the Bienaymé-Chebyshev inequality (Seneta 2012, 448 & 2013,
1104). His mathematical endeavours were otherwise largely wasted on
trying to prove the inconsistency of non-Euclidean geometries. Cook
Wilson claimed that he had 'apprehended' and thus
'knew' the truth of Euclid's axiom of the
parallels--he held it to be "absolutely self-evident"
(SI, 561)--calling the idea of a non-Euclidean space a
"chimera" (SI, 456) and non-Euclidean geometries
"the mere illusion of specialists" (SI, xxxix). He thus
tried in vain to find a contradiction to "convince the rank and
file of mathematicians" so that "they would at least not
suppose the philosophic criticism, by which I intended anyhow to
attack, somehow wrong" (SI, xcvi). (See section 5 for further
discussion of the point about knowledge.)
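For context, the route via the Bienaymé-Chebyshev inequality mentioned above is the now-standard textbook derivation of Bernoulli's theorem, not Cook Wilson's own proof; a sketch for the Bernoulli case (assuming the amsmath package) runs as follows:

```latex
% Bernoulli's theorem (weak law of large numbers) via Chebyshev.
% Let X_1, ..., X_n be independent Bernoulli trials with success
% probability p, and let \bar{X}_n = (X_1 + \dots + X_n)/n.
% Then Var(\bar{X}_n) = p(1-p)/n, and Chebyshev's inequality gives,
% for any fixed \varepsilon > 0:
\Pr\bigl(\lvert \bar{X}_n - p \rvert \ge \varepsilon\bigr)
  \;\le\; \frac{\operatorname{Var}(\bar{X}_n)}{\varepsilon^{2}}
  \;=\; \frac{p(1-p)}{n\,\varepsilon^{2}}
  \;\longrightarrow\; 0 \quad (n \to \infty).
```

The bound makes the convergence quantitative, which is what Cook Wilson's more laborious combinatorial proof lacked.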
Cook Wilson published very little in philosophy: his inaugural
lecture, *On an Evolutionist Theory of the Axioms* (ETA, see
also SI, 616-634), a critique of Spencer opening with a very
short encomium to Green and Lotze (ETA, 3-4), and a short piece
in *Mind* (CLP) on the 'Barber Shop Paradox'. In
the former, he argued that Spencer's claim, that the criterion of
the truth of *p* is the impossibility of thinking its contradictory
and that this criterion is the product of evolution, rests on circular
reasoning: to prove that the criterion results from evolution, one
must apply it. The 'Barber Shop Paradox'--not to be
confused with the 'Barber's Paradox'--is
attributed to Lewis Carroll (Carroll 1894), but it originated in a
private debate about 'hypotheticals' with Cook Wilson, who
attempted his own solution in CLP (Moktefi 2007a, chap. v, sect. 2),
(Moktefi 2007b), (Moktefi 2008, sect. 5.2), while Russell had already
satisfactorily resolved it in a footnote to *The Principles of
Mathematics* (Russell 1903, 18). Cook Wilson was also involved on
the same occasion in the genesis of Carroll's better-known
'paradox of inference', in 'What the Tortoise Said
to Achilles' (Carroll 1895), which will be discussed in section
8.
Cook Wilson's reluctance to publish was partly caused by the
fact that he constantly kept revising his views. As mentioned, he
apparently reached a more or less stable viewpoint only late in his
life. One of his better-known sayings is that
>
>
> ... the (printed) letter killeth, and it is extraordinary how it
> will prevent the acutest from exercising their wonted clearness of
> vision. (SI, 872) [See also Collingwood (2013, 19-20).]
>
>
>
His argument was that authors who have committed their views on a
given issue to print would, should those views prove erroneous, more
often than not feel obliged to defend them and to engage in pointless
rhetorical exchanges instead of immediately seeing the validity of
arguments against them.
As a result, Cook Wilson resorted throughout his career to the
printing for private circulation of pamphlets, known as
*Dictata*, which he began revising for publication shortly
before his death. Only 11 years later did the two volumes of
*Statement and Inference* appear, in 1926, put together by
Farquharson from his lecture notes and *Dictata*, along with
some letters. These volumes are subdivided into five parts and 582
sections. Their arrangement, which is not Cook Wilson's, betrays
their origin in his lectures in logic and in the theory of knowledge;
also interspersed are texts that originate from his study of and
lectures on Plato and Aristotle. As the chronological table of the
various sections shows (SI, 888-9), the texts thus assembled
were written at different dates and, in light of Cook Wilson's
frequent change of mind (including his move away from idealism), they
express views that are at times almost contradictory. This makes any
study of his philosophy particularly difficult and, more often than
not, accounts of his views are influenced by those, equally important,
of his pupil H. A. Prichard.
## 2. Ancient Philosophy
Cook Wilson published over 50 papers in Ancient philosophy, in
scholarly journals such as *Classical Review*, *Classical
Quarterly*, *Transactions of the Oxford Philological
Society* and *Philologische Rundschau*, and a few
book-length studies on Aristotle's *Nicomachean Ethics*
(AS) and on Plato's *Timaeus.* His main philological
claim concerning the structure of the seventh book of *Nicomachean
Ethics* was that it contained traces of three versions probably
written by some Peripatetic later than Eudemus. To this he added a
year later a discussion (ASV) of interpolations in
*Categories*, *Posterior Analytics* and *Eudemian
Ethics*. In a postscript to the revised version of AS (1912), he
claimed, however, that the variants were probably different drafts
written by Aristotle himself. His pamphlet on the *Timaeus*
(IPT) was mainly polemical, painstakingly detailing R. D.
Archer-Hind's 'obligations' in his 1888 edition of
the *Timaeus* to earlier authors, J. G. Stallbaum more than any
other.
Cook Wilson is mainly remembered today for two contributions to Plato
studies, foremost for his paper 'On the Platonist Doctrine of
the asumbletoi
arithmoi' (OPD), which bears
on the debate on 'intermediates' in Plato. This issue
originates in Aristotle's claim in *Met*. A 6
987b14-17 that Plato believed that the objects of mathematics
occupy an "intermediate position" between sensible things
and Ideas/Forms. Since there is no explicit commitment to this claim
in Plato, scholars either rejected Aristotle's testimony or
looked for passages where Plato might be said to have implicitly
endorsed it, such as the Line at the end of Book VI of the
*Republic*. Cook Wilson argued against reading this passage as
implicitly endorsing 'intermediates', claiming that
objects of thought (dianoia) are
Ideas, because Plato stated in *Rep.* 511d2-3 that they
are "intelligible given a principle"
(kaitoi
noeton onton
met' arches), and, as
Cook Wilson saw the matter: "nothing but an
idea can be an object of
nous" (OPD, 259).
He also argued that universals being 'one' in contrast
with the 'many' to which they correspond, there could be
only one 'Circularity', one 'number Two', etc.
This means, for example, that since 'the number Two' or the
universal 'twoness' is one, it cannot be a plurality
composed of units, and thus that 'two and two make four' is
not an addition of universals such as 'twoness and twoness make
fourness' or 'the number Two added to the number Two makes
the number Four'. These expressions have no sense according to
Cook Wilson, for whom 'two and two make four' merely means
that 'any two things added to any other two things make four
things' (OPD, 249). This is why numbers *qua* universals
are 'unaddible' or 'uncombinable'
(asumbletoi) as
Aristotle put it in, e.g., *Met*. M 8 1083a34, a view that Cook
Wilson takes to be "exactly the Platonic doctrine" (OPD,
250).
The need to posit 'intermediates' can be seen as arising
from the fact that one cannot perform arithmetical operations on these
'Idea numbers', as Cook Wilson called them, so that one
would need entities that are 'in between' (ta
metaxu) sensible things and Idea numbers
to account for arithmetical truths as elementary as 'two and two
make four'. Cook Wilson nevertheless argued against commitment
to 'intermediates' (OPD, §4), his view being that
arithmetical operations are always on particulars and if the
'monadic numbers' of arithmetic are pluralities of units,
Idea numbers are properties of these numbers. That Plato had reached
this view of Idea numbers as
asumbletoi
arithmoi at the time he wrote the
*Republic* or later on would be yet another issue.
Cook Wilson also argued (OPD, §5) that Idea numbers form a series
ordered by a relation of 'before and after'
(proteron kai
usteron) and, following
Aristotle's testimony in *Eth. Nic.* I 6,
1096a17-19 and *Met.* B 3, 999a6-14, that there
could be no genus of the species forming such ordered series (for a
critique of this reading of Aristotle, see Lloyd (1962)). In other
words, there can be no Form or universal of the Idea numbers. He would
thus claim that in the proposition 'The number Two is a
universal', the expression 'a universal' cannot
denote in turn a particularization of 'universalness',
because the latter is not a true universal as it lacks an
"intrinsic character" (SI, 342 n.1 & 351), and this he
took to be the point of Plato's doctrine of "unaddible
numbers" (SI, 352).
Cook Wilson not only believed that these views are the true
interpretation of Plato, but also that they are true
*simpliciter* and he criticized Dedekind's definition of
continuity in *Stetigkeit und irrationale Zahlen* (Dedekind
1872) for "not realising the truth attained so long ago in Greek
philosophy that [numbers] are Universals" and not magnitudes
(OPD, 250, n.1 & SI, ciii) (Joseph 1948, 59-60). He never
explained in any detail how this critique is supposed to work against
Dedekind, but provided instead lengthy and unconvincing arguments for
also rejecting Russell's logicist definition of natural numbers
on similar grounds (see section 8).
Despite the fact that he drew such preposterous consequences from it,
Cook Wilson's reading of Plato and Aristotle in OPD remained
influential, albeit controversial, throughout the last century,
through Ross' edition of Aristotle's *Metaphysics*
(1924, liii-lvii, *ad* B 3, 999a6-10 & M 6,
1080a15-b4) and especially through its advocacy by Harold
Cherniss in *Aristotle's Criticism of Plato and the
Academy* (Cherniss 1944, App. vi) and *The Riddle of the Early
Academy* (Cherniss 1945, 34-37 & 76). Cherniss'
student Reginald Allen still claimed in the 1980s that Cook
Wilson's is the "true view of Plato's
arithmetic" (Allen 1983, 231-233). (For further
endorsements of Cook Wilson's reading see (Joseph 1948, 33 &
chap. v), (Klein 1968, 62) or (Taran 1981, 13-18) and for
criticisms see (Hardie 1936, chap. vi), (Austin 1979, 302) or
(Burnyeat 2012, 166-167).)
Cook Wilson's other notable contribution is the provision, in
'On the Geometrical Problem in Plato's *Meno*, 86e
sqq.' (GPP), of a key addition to S. H. Butcher's
clarification (1888) of the notoriously obscure geometrical
illustration of *Meno* 86e-87b, showing that Plato did
not intend to offer an actual solution to the problem of the
inscription of an area as a triangle within a circle, but simply to
determine, while alluding to the method of analysis, the possibility
of its solution. An analogous explanation of that passage was provided
later on by T. L. Heath (1921, 298-303) and A. S. L. Farquharson
(1923), while Knorr (1986, 71-74) and Menn (2002, 209-214)
defended this interpretation without mention of Cook Wilson. (See also
the critical discussion of the 'Cook Wilson/Heath/Knorr
interpretation' in (Lloyd 1992), as well as (Scott 2006,
133-137) and (Wolfsdorf 2008, 164-169).)
Apart from these, Cook Wilson's impact seems to have been
limited to minute points of philology, such as the references to his
critical comments on Apelt's edition of *De Melisso Xenophane
Gorgia* (APT) in (Kerferd 1955) or to his pamphlet on the
*Timaeus* in A. E. Taylor's and F. M. Cornford's
own commentaries of that dialogue, and the critical discussion of his
views in Ross' edition of *Aristotle's Prior and
Posterior Analytics* (Ross 1949, 496-497).
Given that Cook Wilson's views on knowledge and belief (see
section 5) are linked to Plato's own distinction between
episteme and
doxa, it is worth mentioning that they also had an
impact on the study of Plato's dialogues. For example, although
A. D. Woozley held a different view of knowledge as "not
something generically different from belief, but as the limited case
of belief" (Woozley 1949, 193), Cook Wilson's distinction
between knowledge and belief is introduced as an interpretative tool
in chapter 8 of R. C. Cross and A. D. Woozley, *Plato's
Republic. A Philosophical Commentary* (Cross & Woozley 1964),
while the idea that knowledge involves 'reflection' (the
'accretion' described in section 5) and the concept of
'being under the impression that' are involved in
Ryle's discussion of knowledge in Plato's
*Theaetetus* (Ryle 1990, 23 & 27-28). (For a similar
claim about *Theaetetus*, see Prichard (1950, 88), and for Ryle
on Cook Wilson on *Parmenides*, see end of section 7.) The
impact of Cook Wilson's views was at any rate not limited to
Plato studies: they form, for example, the basis for H. A.
Prichard's critique of Kant in his *Kant's Theory of
Knowledge* (Prichard 1909) or for his lectures on
Descartes, Locke, Berkeley and Hume on knowledge (Prichard 1950, chap.
5). (See in particular Prichard (1909, 245), quoted below or (1950,
86, 88 & 96) for statements of the distinction between knowledge
and belief.)
## 3. On Method: Ordinary Language
If Cook Wilson's reverence towards ordinary language derived
from his training as a philologist, it was not limited to occasional
references to instances of usage to buttress his arguments. He
believed that in philosophy one must above all "uncompromisingly
[...] try to find out what a given activity of thought
presupposes as implicit or explicit in our consciousness", i.e.,
to "try to get at the facts of consciousness and not let them be
overlaid as is so commonly done with preconceived theories" (SI,
328). He also spoke of the latter as originating in 'reflective
thought' and argued it has two major defects. First, it is based
on principles that, for all we know, might be false and,
concomitantly, it is too abstract, because it is not based on the
consideration of particular concrete examples. Indeed, in a passage
where he criticized Bradley's regress arguments against the
reality of relations (Bradley 1897, chap. III), Cook Wilson begins by
pointing out that "throughout this chapter there is not a single
illustration, though it is of the last importance that there should
be" (SI, 692). (On this critique of Bradley, see also Joseph
(1916a, 37).) This is, however, slightly misleading, given that
Bradley opens his discussion in his previous chapter, where a first
regress is deployed, with the case of a lump of sugar (Bradley 1897,
16). As H. H. Price put it later, for Cook Wilson and his epigones
"to philosophize without instances would be merely a waste of
time" (Price 1947, 336).
Secondly, Cook Wilson thought that philosophers are most likely to
introduce distinctions of their own that do not correspond to the
'facts of consciousness' and thus distort our
understanding of them. He therefore strove to uncover these
'facts of consciousness' through an analysis of concrete
examples which would be free of philosophical jargon. This is strongly
reminiscent of the 'descriptive psychology' of the
Brentano School. As a matter of fact, Gilbert Ryle, who described
himself as a "fidgetty Cook Wilsonian" in his youth (Ryle
1993, 106) and who was also probably the only Oxonian who knew
something about phenomenology in the 1920s, believed Cook
Wilson's descriptive analyses to be as good as any from Husserl
(Ryle 1971, vol. I, 176 & 203n.).
In what may be deemed a variant of the 'linguistic turn',
Cook Wilson believed that an examination of the "*verbal*
form of statement" was needed in order "to see what light
the form of expression might throw upon problems about the mental
state" (SI, 90), thus that ordinary language would be the guide
to the 'facts of consciousness', because it embodies
philosophically relevant distinctions:
>
> It is not fair to condemn the ordinary view wholly, nor is it safe:
> for, if we do, we may lose sight of something important behind it.
> Distinctions in current language can never safely be neglected. (SI,
> 46)
>
>
> The authority of language is too often forgotten in philosophy, with
> serious results. Distinctions made or applied in ordinary language are
> more likely to be right than wrong. Developed, as they have been, in
> what may be called the natural course of thinking, under the influence
> of experience and in the apprehension of particular truths, whether of
> everyday life or of science, they are not due to any preconceived
> theory. In this way the grammatical forms themselves have arisen; they
> are not the issue of any system, they are not invented by any one.
> They have been developed unconsciously in accordance with distinctions
> which we come to apprehend in our experience. (SI, 874)
>
>
> Reflective thought tends to be too abstract, while experience which
> has developed the popular distinctions recorded in language is always
> in contact with the particular facts. (SI, 875)
>
For this reason, Cook Wilson considered it "repugnant to create
a technical term out of all relation to ordinary language" (SI,
713) and SI is replete with appeals to ordinary language. For example,
he argued in support of his views on universals that "ordinary
language reflects faithfully a true metaphysics of universals"
(SI, 208; see section 7). But such appeals were not just meant to
undermine 'preconceived theories', they were also
constructive, e.g., when he distinguished between the activity of
thinking and 'what we think', i.e., between
'act' and 'content' (SI, 63-64), arguing
that this is "likely to be right" because "it is the
natural and universal mode of expression in ordinary untechnical
language, ancient and modern" (SI, 67) and "it comes from
the very way of speaking which is natural and habitual with those who
do not believe in any form of idealism" (SI, 64). As it turns
out, Cook Wilson believed that the 'content', i.e.,
'what we think' is not *about* the thing we think
about, but the thing itself (so knowledge contains its object, see
section 6).
These views were to prove particularly influential in the case of J.
L. Austin, who began his studies at Oxford four years after the
publication of SI. It is a common mistake to think of Wittgenstein as
having had some formative influence on Austin, as he was arguably the
least influenced by Wittgenstein of the Oxford philosophers (Hacker
1996, 172). (On this point see also Marion (2011), for the contrary
claim, Harris & Unnsteinsson (2018).) At any rate, the evidence
adduced here and elsewhere since Marion (2000) ought not to be ignored
and, the philosophy of G. E. Moore notwithstanding, it is rather Cook
Wilson and epigones such as Prichard that are the source of the
peculiar brand of 'analytical philosophy' that was to take
root in Oxford in the 1930s, known as 'Oxford philosophy'
or 'ordinary language philosophy'. One merely needs here
to recall the following well-known passage from Austin's
'A Plea for Excuses', which is almost a paraphrase of Cook
Wilson:
>
> Our common stock of words embodies all the distinctions men have found
> worth drawing, and the connections they have found worth marking, in
> the lifetimes of many generations: these surely are likely to be more
> numerous, more sound, since they have stood up to the long test of the
> survival of the fittest, and more subtle, at least in all ordinary and
> reasonably practical matters, than any that you or I are likely to
> think up in our arm-chairs of an afternoon--the most favoured
> alternative method. (Austin 1979, 182)
>
Neither Cook Wilson nor Austin believed, however, that ordinary
language was not open to improvements. Cook Wilson was explicit about
this when detailing his procedure:
>
>
> Obviously we must start from the facts of the use of a name, and shall
> be guided at first certainly by the name: and so far we may appear to
> be examining the meaning of a name. Next we have to think about the
> individual instances, to see what they have in common, what it is that
> has actuated us. [...] At this stage we must take first what
> seems to us common in certain cases before us: next test what we have
> got by considering other instances of *our own* application of
> the name, other instances in which it has been working in us. Now when
> thus thinking of other instances, we may see that they do not come
> under the formula that we have generalized. [...] There is a
> further stage when we have, or think we have, discovered the nature of
> the principle which has really actuated us. We may now correct some of
> our applications of the name because we see that some instances do not
> really possess the quality which corresponds to what we now understand
> the principle to be. This explains how it should be possible to
> criticize the facts out of which we have been drawing our data. (SI,
> 44-45)
>
>
>
Austin's 'linguistic phenomenology' (Austin 1979,
182) was devised along similar lines (on this point, see Longworth
2018a), and he also thought that ordinary language could be improved
upon:
>
>
> Certainly ordinary language has no claim to be the last word, if there
> is such a thing. [...] ordinary language is *not* the last
> word: in principle it can everywhere be supplemented and improved upon
> and superseded. Only remember, it is the *first* word. (Austin
> 1979, 185)
>
>
>
It is worth noting that Cook Wilson uses in the above passage the
example of Socrates' search for definitions as an illustration
of his procedure. This strongly suggests that he derived it from
consideration of the method of induction
(epagoge) in Ancient philosophy:
collect first a number of applications of a given term and, focussing
on salient features of these cases, formulate as an hypothesis a
general claim covering them, then test it for counterexamples against
novel applications. (Longworth (2018a) has also drawn some interesting
parallels with 'experimental mathematics' (see Baker
(2008) and the entry in this Encyclopedia).)
## 4. Apprehension & Judgement
Since Cook Wilson's philosophy was largely defined in opposition
to British Idealism, it is worth beginning with some points of
explicit disagreement. Roughly put, on the idealist view knowledge
is constituted by a coherent set of mutually supporting beliefs, none
of which is basic while the others are derivative. Surprisingly,
when H. H. Joachim published *The Nature of Truth* (1906),
perhaps the best statement of the coherence theory of truth usually
attributed to the British Idealists, Cook Wilson criticized him for
relying on a discredited 'correspondence' theory (SI,
809-810). Cook Wilson did not argue directly against the
coherence theory, as Russell did (Russell 1910, 131-146), but
simply took the opposite foundationalist stance. He reasoned that the
chain of justification ought to come to an end and that this end point
is some non-derivative knowledge, which he called
'apprehension' (SI, 816). As he put it: "it becomes
evident that there must be apprehensions not got by inference or
reasoning" (UL, SS 18).
As Farquharson noted (SI, 78 n.), Cook Wilson did not define his key
notion of 'apprehension'. (This is related with Cook
Wilson's claim discussed in section 5 that knowledge is
undefinable.) The notion appears to be at the same time close to
Aristotle's 'noesis' and to Russell's
'acquaintance'. Cook Wilson obviously took his lead from a
tradition beginning with *Posterior Analytics* B 19 and his
comments are reminiscent of Thomas Reid, who argues in his *Essays
on the Intellectual Powers of Man* (Bk. II, chap. v) that
perception involves some conception of the object and the conviction
of its existence, this conviction being immediate, non-inferential and
not open to doubt. Cook Wilson was not exactly faithful to Aristotle
and Reid, however, since he argued that apprehensions can be both
perceptual and non-perceptual (SI, 79), and that some are obtained by
inference while some are not, the latter being the material of
inference (SI, 84-85). Furthermore, he argued that perceptual
apprehensions should not be confused with sensations, as the mere
having of a sensation is not yet to know what the sensation is, an
idea that has echoes in Austin (Austin 1979, 91-94 & 122
n.2). For this, one needs an "originative act of
consciousness" that goes beyond mere passivity and compares the
sensation in order to apprehend its definite character. As Cook Wilson
put it: "we are really comparing but we do not recognize that we
are" (SI, 46).
Cook Wilson thought it misleading to base logic on
'judgement' instead of 'proposition' or
'statement' (SI, 94) and he questioned the traditional
analysis of proposition under the form '*S* is
*P*', which he saw as having various meanings (Joseph
(1916a, 6), see also section 9 for a related point). In his polemics
against idealism, Cook Wilson's main target was the traditional
theory of judgement that one finds, e.g., in Bradley's
*Principles of Logic* (Bradley 1928), where the topic is simply
divided into 'judgement' and 'inference'.
There would thus be a common form of thinking called the
'judgement' that '*S* is *P*',
which would include non-inferred knowledge, opinion, and belief, but
would exclude inferred knowledge. One would be misled, he argued, by
the common verbal form '*S* is *P*' into thinking that
knowledge, belief, and opinion are species of the same genus called
'judgement' (SI, 86-7). He claimed instead to follow
'ordinary usage' in adopting a 'judicial'
account of 'judgement':
>
> A judgement is a decision. To judge is to decide. It implies previous
> indecision; a previous thinking process, in which we were doubting.
> Those verbal statements, therefore, which result from a state of mind
> not preceded by such doubt, statements which are not decisions, are
> not judgements, though they may have the same verbal form as
> judgements. (SI, 92-3)
>
He argued, first, that inferring is thus one of the forms of
judgement: "if we take judging in its most natural sense, that
is as decision on evidence after deliberation, then inferring is just
one of those form of apprehending to which the words judging and
judgement most properly apply" (SI, 86). Some inferences are,
however, immediately apprehended, e.g., when one recognizes that it
follows from 'if *p*, then *q*' and
'*p*', that '*q*'. Furthermore,
the presence of a prior indecision or doubt, as opposed to confidence,
is deemed an essential ingredient of judgement. Judging does not, however,
fully put an end to doubt: as a judge may well be mistaken, our
ordinary judgements "form fallible opinions only" (Joseph
1916a, 160).
Now, if indecision and doubt are involved prior to judgement,
apprehension or knowledge (perceptual or not) could not be judgement,
because, by definition, there is no room for doubt in these cases.
When one is of the opinion that *p*, one has found the evidence
to be in favour of *p* without its being conclusive. But Cook
Wilson regarded statements of opinion as not involving the expression
of a decision, so they are not judgements either:
>
>
> It is a peculiar thing--the result of estimate--and we call
> it by a peculiar name, opinion. For it, taken in its strict and proper
> sense, we can use no term that belongs to knowing. For the opinion
> that A is B is founded on evidence we know to be insufficient, whereas
> it is of the very nature of knowledge not to make its statements at
> all on grounds recognized to be insufficient, nor to come to any
> decision except that the grounds are insufficient; for it is here that
> in the knowing activity we stop. (SI, 99-100)
>
>
>
Moreover, there is no 'common mental attitude' involved in
'knowing' and 'opining':
>
>
> One need hardly add that there is no verbal form corresponding to any
> such fiction as a mental activity manifested in a common mental
> attitude to the object about which we know or about which we have an
> opinion. Moreover it is vain to seek such a common quality in belief,
> on the ground that the man who knows that *A* is *B* and the
> man who has that opinion both believe
> that *A* *is* *B*. (SI, 100)
>
>
>
It is an important characteristic of 'believing', setting
it apart from other 'activities of consciousness', that it
is accompanied by a feeling of confidence, greater than in
opining:
>
> To a high degree of such confidence, where it naturally exists, is
> attached the word belief, and language here, as not infrequently, is
> true to distinctions which have value in our consciousness. It is not
> opinion, it is not knowledge, it is not properly even judgement. (SI,
> 102)
>
The upshot of these remarks is that 'knowledge',
'belief', and 'opinion' are not, as idealists
would have it, species of the same genus, 'judgement' or
'thinking': these are all distinct and *sui
generis*. This leads to the all-important distinction between
'knowledge' and 'belief', discussed in the
next section: they do not merely differ in kind, they are not even two
species of the same genus (Prichard 1950, 87). But Cook Wilson also
held the view that knowing is more foundational, so to speak, as it is
presupposed by other 'activities of thinking' such as
judging and opining. For example, opinion involves knowledge, but goes
beyond it:
>
>
> There will be something else besides judgement to be recognized in the
> formation of opinion, that is to say knowledge, as manifested in such
> activities as occur in ordinary perception; activities, in other
> words, which are not properly speaking *decisions*. (SI,
> 96)
>
>
>
## 5. Knowing & Believing
Given that "our experience of knowing [is] the presupposition of
any inquiry we can undertake", Cook Wilson reasoned that
"we cannot make knowing itself a subject of inquiry in the sense
of asking what knowing is" (SI, 39). It follows immediately from
this impossibility of inquiring about the nature of knowledge that a
'theory of knowledge' is itself impossible, a consequence
he first expressed in a letter to Prichard in 1904:
>
> We cannot *construct* knowing--the act of
> apprehending--out of any elements. I remember quite early in my
> philosophic reflection having an instinctive aversion to the very
> expression '*theory* of knowledge'. I felt the
> words themselves suggested a fallacy. (SI, 803)
>
>
> Knowledge is *sui generis* and therefore a 'theory'
> of it is impossible. Knowledge is simply knowledge, and an attempt to
> state it in terms of something else must end in describing something
> which is not knowledge. (Prichard 1909, 245)
>
Thus, knowledge, as obtained in 'apprehension', could not
be defined in terms of belief augmented by some other property or
properties, as in the definition of knowledge as 'justified true
belief'. Cook Wilson is thus to be counted among early
20th-century opponents of this definition (Dutant (2015), Le
Morvan (2017) & Antognazza (2020)). The opposing position has been
known, since Timothy Williamson's *Knowledge and its Limits*, as
'knowledge first' (see Williamson (2000, v) and Adam
Carter, Gordon & Jarvis (2017) for recent developments of this
view).
At the turn of the last century, it was also held by the neo-Kantian
philosopher Leonard Nelson (Nelson 1908 & 1949), to whom Cook
Wilson alludes in SI (872); it was held as well, unbeknownst to him, by
members of the Brentano school, such as Adolf Reinach, Max Scheler,
and Edmund Husserl. (See Mulligan (2014) for a survey.) It was to
become the central plank of 'Oxford Realism'. For the
Oxonians and the Brentanians one knows that *p* only if one
'apprehends' that *p*. As Kevin Mulligan put it:
"one knows that *p* in the strict sense only if one has
perceived that *p* and such perceiving is not itself any sort
of belief or judging" (Mulligan 2014, 382). A version of this
view is defended today by Timothy Williamson, according to whom
knowledge is a mental state "being in which is necessary *and
sufficient* for knowing *p*" (Williamson 2000, 21),
and which "cannot be analysed into more basic concepts"
(Williamson 2000, 33). The claim is, however, about 'knowledge
that *p*' and not anymore about 'apprehending that
*p*'.
If knowledge is indeed distinct from belief, then the difference
cannot be one of degree in the feeling of confidence or in the amount
of evidential support:
>
> In knowing, we can have nothing to do with the so-called
> 'greater strength' of the evidence on which the opinion is
> grounded; simply because we know that this 'greater
> strength' of evidence of A's being B is compatible with
> A's not being B after all. (SI, 100)
>
>
> To know is not to have a belief of a special kind, differing from
> beliefs of other kinds; and no improvement in a belief and no increase
> in the feeling of conviction which it implies will convert it into
> knowledge. (Prichard 1950, 87)
>
Austin has a nice supporting example in *Sense and Sensibilia*,
using the fact that 'seeing that' is factive: if no pig is
in sight, I might accumulate evidence that one lives here: buckets of
pig food, pig-like marks on the ground, the smell, etc. But if the pig
suddenly appears:
>
>
> ... there is no longer any question of collecting evidence; its
> coming into view doesn't provide me with more *evidence*
> that it's a pig, I can now just *see* that it is, the
> question is settled. (Austin 1962, 115)
>
>
>
There is a parallel move by John McDowell in 'Criteria,
Defeasibility and Knowledge' against the notion of
'criteria', deployed by some commentators of Wittgenstein
as a sort of 'highest common factor' between evidence and
proof (McDowell 1998, 369-394). Here, a 'highest common
factor' would be a state of mind that would count as knowing,
depending on one or more added factors, but would count in their
absence as something else such as believing (McDowell 1998, 386).
McDowell is now seen as having thus put forward a form of
'disjunctivism' about knowledge and belief (more on
disjunctivism in section 6). Although McDowell does not mention these
authors, Travis (2005) has shown that this move has roots in both Cook
Wilson and Austin. The fact that there is no highest common factor to
knowledge and belief entails for Cook Wilson a rejection of
'hybrid' and 'externalist' accounts of
knowledge. Any 'hybrid' account would factor knowledge
into an internal part, possibly a copy of the object known, and a
relation of that copy to the object itself (see immediately below and
section 6). Since there is no such highest common factor, this view,
integral to 'externalist' accounts of knowledge, had to be
rejected (Travis 2005, 287).
But Cook Wilson was led here to a further thesis. If one is prepared
to say 'I believe *p*', when one is not sure that
evidence already known is sufficient to claim that 'I know that
*p*', then it looks as if one should always be in a
position to know if one knows or if one merely believes. He argued,
however, that 'knowing that one knows' should not mean
that, once a particular piece of knowledge has been obtained, one
should then decide if it counts as knowledge or not, because this
decision would count again as a piece of knowledge and "we should
get into an unending series of knowings" (SI, 107). This is why
he insisted that knowing that one knows "must be contained
within the knowing process itself":
>
>
> Belief is not knowledge and the man who knows does not believe at all
> when he knows: he knows it. (SI, 101)
>
>
>
> The consciousness that the knowing process is a knowing process must
> be contained within the knowing process itself. (SI, 107)
>
>
>
This claim was given further emphasis by Prichard, but his own
formulation differs in an important respect, since he introduces the
idea of a "reflection" in virtue of which when one knows
that *p*, one is able to know that one knows that *p*
and, when one believes that *p*, one is also able to know that
one believes that *p*, so that it would be impossible to
mistake knowledge for belief and vice-versa:
>
>
> We must recognize that whenever we know something we either do, or at
> least can, by reflecting, directly know that we are knowing it, and
> that whenever we believe something, we similarly either do or can
> directly know that we are believing it and not knowing it. (Prichard
> 1950, 86 & 88)
>
>
>
One can see that this is not quite Cook Wilson's position, given
his regress argument. Prichard assumes that, whenever one does not
know, one knows that one does not know:
>
>
> When knowing, for example, that the noise we are hearing is loud, we
> do or can know that we are knowing this and so cannot be mistaken, and
> when believing that the noise is due to a car we know or can know that
> we are believing and not knowing this. (Prichard 1950, 89)
>
>
>
Charles Travis called the claim that in knowing that *p* one
can always distinguish one's condition from all states in which
not-*p* the 'accretion' (Travis (2005, 290) &
Kalderon & Travis (2013, 501)) and he argued that it damages the
core of Cook Wilson's and Prichard's positions (Travis
2005, 289-294). To use the above example of the pig suddenly
coming into view, the claim is that, although one can know by
reflection that one knows that one is seeing a pig, one cannot by mere
reflection exclude the possibility that one is in fact seeing some
cleverly engineered 'ringer'. The upshot of this argument
is not immediately clear: if one knows *p* only if *p*,
and one knows that one knows that *p*, how could one be unable
to exclude the case that not-*p*, and hence that one does not
know *p*? At all events, Travis sees this as
reinstalling the argument from illusion (Travis 2005, 291), that had
already been subjected to numerous critiques from an early paper by
Prichard to Austin's *Sense and Sensibilia*:
>
>
> That statements about appearances imply that we at least know enough
> of reality to say that real things have certain *possible*
> predicates, e.g., bent or convergent. To deny this is to be wholly
> unable to state how things look. [...] It is only because we know
> that our distance from an object affects its apparent size that we can
> draw a distinction between the size it looks and the size it is. If we
> forget this we can draw no distinction at all. (Prichard 1906,
> 225-226)
>
>
>
> ... it is important to remember that all talk of deception only
> *makes sense* against a background of general non-deception.
> (You can't fool all the people all of the time.) It must be
> possible to *recognize* a case of deception by checking the odd
> case against more normal ones. (Austin 1962, 11)
>
>
>
Guy Longworth (2019) has detailed how much Austin owed to Cook Wilson
on knowledge (see Urmson (1988) for a discussion in relation to
Prichard): Austin held the 'knowledge first' view and,
concomitantly, rejected the possibility of a theory of knowledge
(Austin 1962, 124); he viewed knowledge as akin to proof, thus as
being different in kind from belief based on an accumulation of evidence (see
Austin (1962, 115) & (1979, 99)). But Austin dropped the
'accretion' in 'Other Minds' as he shifted the
analysis of the claim that 'If I know, I can't be
wrong' (SI (69) & Prichard (1950, 88)) to that of 'I
know that *p*' as providing a form of warrant,
one's authority for saying that *p* (Austin 1979,
99-100). Krista Lawlor (2013) recently suggested that Austin
introduced here the speech act of 'assurance'. (Although
it has been claimed that the idea of 'performatives'
originates in an exchange of letters on promises with Prichard
(Warnock 1963, 347), that dimension remained unexplored in Oxford
Realism before Austin, who was consciously moving away here from
strict focus on 'statements'.)
The 'accretion' indeed raises an issue concerning
'other minds', given that it is one's reflective
view that, supposedly, authoritatively determines that one knows:
could there be other ways to determine whether someone else knows?
(See Longworth (2019).) This is how Austin, who also viewed knowledge
as a state of mind, was arguably led to explore the ramifications of the
challenge 'How do you know?' to the person claiming
'I know'. Although he warned against the
'descriptive fallacy' (Austin 1979, 103), Austin's
claim appears to be, rather, that 'I know' has functions
"over and above describing the subjective mental state of
knowing" (Longworth (2019, 195), see Austin (1979,
78-79)).
To come back to Cook Wilson, he grappled here with related problems.
It is implied by the above that knowledge requires a sort of warrant
very much akin to a proof and, consequently, "we are forced to
allow that we are certain of very much less than we should have said
otherwise" (Prichard 1950, 97). Mathematical knowledge appears
paradigmatic. As Prichard put it: "In mathematics we have,
without real possibility of question, an instance of knowledge; we are
certain, we *know*" (Prichard 1919, 302). (Joseph used
this view to argue against Mill's empiricist account of
mathematics, using 'intuition' where Cook Wilson would use
'apprehension' (Joseph 1916a, 543-553).) Alas, Cook
Wilson put forth the axiom of parallels in Euclidean geometry as an
example of knowledge, and dismissed non-Euclidean geometries as
inconsistent (see section 1). If someone fails to
'apprehend' some *p*, such as the axiom of parallels,
then all Cook Wilson could do is, somewhat lamely, ask them to try and "remove
[...] whatever confusions or prejudices [...] prevent them
from apprehending the truth of the disputed proposition"
(Furlong 1941, 128).
This reply raises a *prima facie* problem for Cook Wilson,
because he could not have known, in the sense of 'knowing
*p* only if *p*', that 'non-Euclidean
geometries are inconsistent', since it has been proved that they
are not. He thus unwittingly provided an illustration of the need to
account, from his own internalist standpoint, for the sort of error
(or 'false judgement') committed when one claims to know
that *p*, while it is the case that not-*p*. For someone
to know or to be in error while thinking that one knows would be
two indistinguishable states, since which of the two happens to be the
case would depend on some external factors:
>
> ... the two states of mind in which the man conducts his
> arguments, the correct and the erroneous one, are quite
> undistinguishable to the man himself. But if this is so, as the man
> does not know in the erroneous state of mind, neither can he know in
> the other state. (SI, 107)
>
Cook Wilson saw this as a threat to the very possibility of demonstrative
knowledge, since one would never be sure that any demonstration is
true (SI, 107-108). To answer the threat, he thus needed to make
room for errors--when one thinks that one knows but
one does not--without thereby excluding the
possibility that knowing entails being in a position to know
that one knows.
Cook Wilson was thus led to distinguish, in some of his most
intriguing descriptive analyses, a further 'form of
consciousness', different from both knowledge and belief (or
opinion), which he called 'being under the impression
that' (SI, 109-113). A typical example is when one sees
the back of Smith on the street and, 'being under the impression
that' it is a friend, say Jones, one slaps him on the back, only
to realize one's mistake when he turns his head. The
"essential feature" of this state of mind is, according to
Cook Wilson, "the absence of any sense of uncertainty or doubt,
the action being one which would not be done if we felt the slightest
uncertainty" (SI, 110). Thus, one did not falsely judge that the
man on the street was Jones, because there was no judgement at all:
one was merely 'under the impression that' it was him.
Maybe one had some evidence that it was Jones, but the point is that
one acted on it without questioning the evidence: it was not used as
evidence; there was no assessment out of which one may be said to
prefer this possibility over another, because no other
possibility was entertained. In this state of mind, the possibility of
error is somehow excluded, given that it does not occur to one that
'This man is Jones' might be false. It is thus not the
case that one thought that one knew but really did not, because it is
not true that to begin with one thought one knew, since one had not
reflected on one's evidence. The absence of doubt or uncertainty
opens the door to the possibility of being mistaken while taking
oneself as being certain.
The notion of 'being under the impression that' played a
significant role in the writings of the Oxford Realists, not just
those of Prichard, who also spoke of 'an unquestioning frame
of mind', 'thinking without question' or
'taking for granted' (Prichard 1950, 79 &
96-98), but also those of William Kneale, H. H. Price and J. L.
Austin--see, e.g., Kneale (1949, 5 & 18), Price (1935), and
Austin (1962, 122). Perhaps most strikingly, Prichard was to drop the
'accretion' and argue in a late essay,
'Perception', that perception is not a kind of knowing: we
merely see colour extensions, which we systematically mistake for
objects or 'take for granted' to be objects (Prichard
1950, 52-68). Still, the notion of 'being under the
impression that' had its critics at Oxford, such as H. P. Grice,
who was aware of the difficulties raised by the
'accretion':
>
>
> This difficulty led Cook Wilson and his followers to the admission of
> a state of "taking for granted", which supposedly is
> subjectively indistinguishable from knowledge but unlike knowledge
> carried no guarantee of truth. But the modification amounts to
> surrender; for what enables us to deny that all of our so-called
> knowledge is really only "taking for granted"? (Grice
> 1989, 383-384)
>
>
>
One may also justifiably feel that Cook Wilson has not fully explained
away cases of error such as his own in being certain that
'non-Euclidean geometries are inconsistent', because his
conviction was not the result of merely 'being under the
impression that'.
Nevertheless, the influence of Cook Wilson's conceptions was
felt in a variety of ways in the second half of the last century. H.
H. Price offered in *Belief* an important commentary on Cook
Wilson's notion of 'being under the impression that'
(Price (1969, 204-220), reprising some of the content of Price
(1935)). He deemed it an important addition to the traditional
'occurrence' analysis of belief, as opposed to the
'dispositional' analysis. Price usefully contrasted
'being under the impression that' with
'assent' (Price 1969, 211-212), which is usually
said to involve preference and confidence: in preferring *p* one
would decide in favour of *p* having alternatives *q*
and *r* in mind, but this is precisely not the case when
'being under the impression that', since in this state one
does not entertain alternatives. As Cook Wilson points out,
"there is a certain passivity and helplessness" involved
(SI, 113). There is no confidence either, since in this state of mind
doubt is not an option, hence no degree of certainty is involved.
Price also explored (1969, 212-216) connections between this
unquestioning state of mind and the notion of 'primitive
credulity' (Bain 1888, 511), i.e., the idea harking back to
Spinoza (*Ethics* IIp49s) that one naturally believes in the
reality of anything that is presented to one's mind, unless some
contradicting evidence is also occurring.
In contrast to Price, Jonathan Cohen argued that beliefs are
dispositions (Cohen (1989, 368) & (1992, 5)). He also rejected
Cook Wilson's claim that belief involves confidence (see section
4), but, without crediting him (except privately), Cohen made use of
'being under the impression that' to define belief as a
disposition to 'normally to feel that *p*' or
'to feel it true that *p*' (Cohen (1989, 368) &
(1992, 7)). So defined, belief would thus differ from
'acceptance', which results from a conscious and voluntary
choice and involves, like Price's 'assent',
preference and confidence. Cohen further added to these differences
that acceptance is also subjectively closed under deducibility, while
this is not the case with belief:
>
>
> ... you may well feel it true that *p* and that
> if *p* then *q*, without feeling it true
> that *q*. You will just be failing to put two and two together,
> as it were. And detective-story writers, for example, show us how
> often and easily we can fail to do this with our beliefs. (Cohen 1992,
> 31-32)
>
>
>
So acceptance of *p* involves 'premissing', i.e.,
the decision to use *p* as a premise or rule of inference in
further reasonings, and Cohen thought that he was carving nature at
its joints here: "Belief is a disposition to feel, acceptance a
policy for reasoning" (Cohen 1992, 5). This is in many ways not
faithful to Cook Wilson's original ideas, but one can sense
their presence in the idea that being unreflectively disposed to
feel that *p* is conceptually distinct from accepting
*p*.
On another note, John McDowell described knowledge in 'Knowledge
and the Internal' as a "standing in the space of
reasons" and argued against an "interiorization of the
space of reasons" that would occur if one were to think of
knowledge as achieving flawless standings in the space of reason,
"without needing the world to do us any favours" (McDowell
1998, 395-396). If appearances were to be misleading, it would be
argued that this would not be the result of faulty moves within the space of
reasons but simply an "unkindness of the world". This
conception, McDowell sees as opening the door to a
'hybrid' account of knowledge, with flawless standings in
the space of reasons as an internal part, which would provide
necessary conditions for knowledge, and favours from the
world--when things are as they appear to be--as an extra
condition (McDowell 1998, 400). But, McDowell concludes, "the
very idea of reason as having a sphere of operation within which it is
capable of ensuring, without being beholden to the world that
one's postures are all right [...] has the look of a
fantasy" (McDowell 1998, 405). This is so precisely because the
resources from the space of reasons could not provide factivity on
their own, so knowledge could *not* be completely constituted
by standings within it. To rid oneself of the fantasy, one needs
simply to recognize that on occasions when the world is what it
appears to be, this favour is "not extra to the person's
standing in the space of reasons" (McDowell 1998, 405) and that
"we are vulnerable to the world's playing us false; and
when the world does not play us false we are indebted to it"
(McDowell 1998, 407). Here too, although the terminology is taken from
Wilfrid Sellars, there is a recognizable source in Cook Wilson.
Finally, another significant development related to the
'accretion' is its connection with the 'knowing that
one knows' principle in epistemic logic, first introduced in
Jaakko Hintikka's ground-breaking *Knowledge and Belief*
(Hintikka 1962). Hintikka argued for the equivalence between 'I
know that *p*' and 'I know that I know that
*p*' or, more generally, that '*i* knows
that *p*' (\(K\_i p\)) implies
'*i* knows that *i* knows that *p*'
(\(K\_i K\_i p\)) (Hintikka 1962, chap. 5). In
connection with this, he noticed that Prichard's introduction of
the idea of "reflection" (see Prichard (1950, 88) quoted
above) turns the argument into an argument from introspection that
does not sustain the more general claim that '*i*
believes that *p*' (\(B\_i p\)) implies
'*i* knows that *i* believes that
*p*' (\(K\_i B\_i p\)) (Hintikka
1962, 109-110). Still, Hintikka thought that Cook Wilson and
Prichard would be right, if their remarks were to be understood as
restricted to the case where *i* is the first-person pronoun
'I' (Hintikka 1962, 110).
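In the standard notation of epistemic logic, the two implications at issue can be displayed schematically as follows (a rendering for the reader's convenience, not Hintikka's own typography):

```latex
% The 'KK' principle Hintikka defends: knowing implies knowing that one knows
\[
K_i p \rightarrow K_i K_i p
\]
% The doxastic analogue Hintikka rejects: believing would imply
% knowing that one believes (and is not knowing)
\[
B_i p \rightarrow K_i B_i p
\]
```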
Although Williamson has picked up the Oxonian banner of
'knowledge first', this is one point where he did not
follow Cook Wilson and Prichard, since he argued against the
'knowing that one knows' principle with the help of the notion
of 'luminosity'. A condition C is said to be
'luminous' if and only if for every case *a*, if C
obtains in *a* then in *a* one is in a position to know
that C obtains (Williamson 2000, 95). Williamson (2000, chap. 4)
provided arguments against 'luminosity', and thus held that one can
know without being in a position to know that one knows.
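Williamson's definition of luminosity can be put quasi-formally (a paraphrase with the quantification over cases made explicit, not Williamson's own symbolism): a condition \(C\) is luminous if and only if

```latex
\[
\forall \alpha \,\bigl(\, C \text{ obtains in case } \alpha
  \;\rightarrow\;
  \text{in } \alpha \text{ one is in a position to know that } C \text{ obtains} \,\bigr)
\]
```

His anti-luminosity argument then proceeds through a sorites-like series of gradually changing cases in which \(C\) eventually fails to obtain, arguing that the conditional cannot hold at every point in the series.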
## 6. Perception
Cook Wilson also argued against idealism that in apprehension it is
neither the case that the object exists only within the apprehending
consciousness, nor that it is constituted by it: the object is
independent of the apprehending consciousness. As he wrote: "the
apprehension of an object is only possible through a being of the
object other than its being apprehended, and it is this being, no part
itself of the apprehending thought, which is what is
apprehended" (SI, 74). This independence, he considered to be
presupposed by the very idea of knowledge expounded above. But he
further rejected the distinction between act, content and object of
perception, first by negating the act-object distinction:
>
>
> In our ordinary experience and in the sciences, the thinker or
> observer loses himself in a manner in the particular object he is
> perceiving or the truth he is proving. That is what he is thinking
> about, and not about himself; and, though knowledge and perception
> imply both the distinction of the thinker from the object and the
> active working of that distinction, we must not confuse this with the
> statement that the thinking subject, in actualizing this distinction,
> thinks explicitly about himself, and his own activity, as distinct
> from the object. (SI, 79)
>
>
>
And, secondly, by rejecting the notion of 'content':
"For the only thing that can be found as 'content'
of the apprehending thought is the nature of the object
apprehended" (SI, 75). Austin echoed this last point saying that
"our senses are dumb" (Austin 1962, 11). (There is an
extensive literature on this claim; see, e.g., Travis (2004) or
Massin (2011).) The point is not that perception does not aim at an
object, but merely to deny that it does so through a
'content' acting as intermediary:
>
> what I think of the red object is its own redness, not some mental
> copy of redness in my mind. I regard it as having real redness and not
> as having my copy of redness. [...] If we ask in any instance
> what it is we think of a given object of knowledge, we find it always
> conceived as the nature or part of the nature of the thing known. (SI,
> 64)
>
What one apprehends must be the real object itself, not "some
mental copy" of it, so Cook Wilson is claiming here that, as a
state of mind, knowledge contains its object: "what we apprehend
[...] is included in the apprehension as a part of the activity
or reality of apprehending" (SI, 70). A long letter to G. F.
Stout (SI, 764-800) is of particular importance here, where Cook
Wilson criticized Stout on 'Primary and Secondary
Qualities' (Stout 1904), with this key diagnosis, of the
'objectification of the appearing as appearance':
>
>
> This is sometimes spoken from the side of the object as the
> *appearance* of the object to us. This 'appearance'
> then gets distinguished from the object [...] But next the
> *appearance*, though properly the appear*ing* of the
> object, gets itself to be looked on as itself an object and the
> immediate object of our consciousness, and being already, as we have
> seen, distinguished from the object and related to our subjectivity,
> becomes, so to say, a merely subjective
> 'object'--'appearance' in that sense. and
> so, as *appearance* of the object, it has now to be represented
> not as the object but as some phenomenon caused in our consciousness
> by the object. Thus for the true appearance (= appearing) to us of the
> *object* is substituted, through the
> 'objectification' of the appearing as appearance, the
> appearing to us of an *appearance*, the appearing of a
> phenomenon caused in us by the object. (SI, 796)
>
>
>
Cook Wilson's rejection of 'hybrid' accounts of
knowledge (see the previous section) is linked to his rejection of
epistemological 'intermediaries', so that knowledge could
not be of some such 'objectified' appearance. He
considered all such intermediaries ('images',
'copies', 'representative', *tertium
quid*, etc.) as "not only useless in philosophy but
misleading as tending to obscure the solution of a difficult
problem" (SI, 772). In this he stood in the tradition of Thomas
Reid; his arguments were, as a matter of fact, first developed against
the empiricism of Locke, Berkeley and Hume (for example in UL (§
10)). In his letter to Stout, Cook Wilson put it thus:
>
>
> You begin an important section of your argument by assuming the idea
> of sensations being *representative.*
>
>
>
> They {represent--express--stand for} something other than
> themselves.
>
>
>
> Now, I venture to think that the idea of such *representation*
> in philosophy, or psychology rather, is very loose and treacherous
> and, if used at all, should be preceded by a 'critique' of
> such *representative* character, and an explanation of the
> exact sense in which the word representative is used. (SI, 769)
>
>
>
Against views of this kind, Cook Wilson developed three arguments in
his letter. First, he pointed out that it is impossible to know
anything about the relation between the representative and the object,
since one can never truly compare the former to the latter. Secondly,
he claimed that representationalist theories are always in danger of
leading towards idealism, since one must then somehow
'prove' the existence of the object which is, so to speak,
'behind' its representatives--there might be none.
Thirdly, he claimed that all such theories beg the question, since the
representative has to be apprehended in turn by the mind; not only
does this further 'apprehension' remain unexplained, it would
require that the mind be equipped with the very apparatus that the
representationalist theories were, to begin with, devised to
explain:
>
> We want to explain knowing an object and we explain it solely in terms
> of the object known, and that by giving the mind not the object but
> some idea of it which is said to be like it--an image (however
> the fact may be disguised). The chief fallacy of this is not so much
> the impossibility of knowing such image is like the object, or that
> there is any object at all, but that it assumes the very thing it is
> intended to explain. The image itself has to be apprehended and the
> difficulty is only repeated. (SI, 803)
>
Cook Wilson also inveighed against Stout's notion of
'sensible extension', pointing out *inter alia* that
it makes no sense to claim that these are extended without being in
space (SI, 783) and he tried to explain how a given object may appear
to have different shapes from different perspectives, without making
an appeal to any representative (SI, 790f.).
Stout answered these criticisms in print (for a discussion see Nasim
2008, 30-40 & 94-98). He argued that he had not been
holding a view akin to Locke's representationalism, claiming
that the 'representative function' of his
'presentations' is of a different nature, more like a
memory-image would represent what is remembered (Stout 1911, 14f.),
but it is at first blush unclear what he meant by this. Against Cook
Wilson's first argument, he claimed that in his conception
presentations and presented objects form an "inseparable
unity" (Stout 1911, 22), this being, once more, unclear. At all
events, both Stout and Russell, in his theory of
'sense-data' as 'objects of perception'
(Russell 1912), insisted that the physical object and the
representative are 'real'. But one might say that this
'objectification of the appearing as appearance' does not
annul Cook Wilson's diagnosis of the difficulties inherent in
that position. At least Russell was clearer about its implications,
requiring a 'logical construction' of physical objects as
functions of 'sense-data'. One way to counter Cook Wilson
on the absurdity of the locution 'sensible extensions' is
to distinguish between 'private' and 'public'
space, as Russell was to do in 'The Relation of Sense-Data to
Physics' (Russell 1917, 139-171); as is well known,
this postulation generates its own set of difficulties, e.g., the
claim that space must have 6 dimensions (Russell 1917, 154). Russell
on 'sense-data' was to become a favourite target for
Prichard's acerbic wit (Prichard 1915 & 1928).
Cook Wilson also wove his joint critique of the 'objectification
of the appearing as appearance' and of representationalism into
a broad historical narrative according to which "empiricism ends
in the Subjective Idealism it was intended to avoid" (UL, §
10; see also SI, 60-63); he spoke of an "insidious and
scarcely 'conscious' dialectic" that "has done
much mischief in modern metaphysics and theories of perception"
(SI, 797). Wilfrid Sellars, who attended Prichard's classes as a
Rhodes Scholar at Oxford in the mid-1930s, found the idea of such a
'dialectic' appealing:
>
> I soon came under the influence of H. A. Prichard and, through him, of
> Cook Wilson. I found here, or at least seemed to find, a clearly
> articulated approach to philosophical issues which undercut the
> dialectic, rooted in Descartes, which led to both Hume and 19th
> Century Idealism. At the same time, I discovered Thomas Reid and found
> him appealing for much the same reasons. (Sellars 1975, 284)
>
Although Cook Wilson's philosophy may be construed as a
continuation of sorts of the Scottish School of Hutcheson and Reid
(see also section 10), it is striking that there are hardly any
references to these authors in his writings, even more so given that
they contain critiques of Locke, Berkeley and Hume from a similar
standpoint. (See Alsaleh (2003), focussing on attacks on Hume and
their impact on Price (1932) and Austin (1962), and Marion (2009) for
an overview of the 19th-century stages of this
'dialectic').
There was, however, no doctrinal unity on perception among Cook
Wilson's epigones. Prichard was probably the first to put Cook
Wilson's views into print in 'Appearances and
Reality' (Prichard 1906) but, as already pointed out, he ended
up arguing that in perception we systematically mistake colour
expanses for objects. H. H. Price, who was at first close to Cook
Wilson (Price 1924), incorporated a sense-data theory while rejecting
phenomenalism in *Perception* (Price 1932). He became for that
reason one of Austin's targets in *Sense and Sensibilia*
(Austin 1962), which remains, for all its novelty, faithful to Cook
Wilson's orthodoxy on knowledge. At all events, a form of direct
realism in the theory of perception is one of the characteristic
features of Oxford Realism. It is an ancestor to contemporary variants
such as the position argued for by John McDowell in *Mind and
World* (McDowell 1994) and, after a long eclipse, Cook
Wilson's views are once more playing a role in current debates
about perception. (See for example references to them in Kalderon
(2011, 241; 2018, xv, 49, 88 & 184), Siegel (2018, 2), Stoneham
(2008, 319-320).)
One topic of particular interest in this respect is
'disjunctivism' (see Soteriou (2016) for an overview).
This is the view that in perception one is faced *either* with
cases of genuine perception *or* with cases of illusion or
hallucination, as in 'I see a flash of light *or* I am
having the perfect illusion of doing so' (Hinton 1973, 39). Such
disjunctions were first analysed in detail by Michael Hinton (see
Hinton (1973) and Snowdon (2008)), but there have been suggestions
that disjunctivism harks back to Austin or even Cook Wilson and
Prichard (Kalderon & Travis (2013, 498-499); for
Austin's case, see also Longworth (2019)). Adjudicating such
claims depends on one's understanding of disjunctivism itself,
but a few points can be adduced. Disjunctivism should not be confused
with any 'naive' or 'direct realism', but
one may appeal to it to defend such views; one should therefore expect
Cook Wilson and the Oxford Realists to have taken some steps towards it.
The sharp distinction between knowledge and belief (section 5), with
the rejection of any 'highest common factor' between the
two, should count as a first step. (The above-mentioned critiques of
the argument from illusion reinforce this point.) A further step
towards a form of disjunctivism is also taken when, having
distinguished knowledge from belief, Cook Wilson also claimed that
belief presupposes knowledge: it is only when assessing what I may
know about something that I might realize that I am not in a position
to claim that 'I know that *p*', so that I remain
circumspect and merely claim that 'I believe that
*p*' (see sections 4 & 5).
## 7. Universals
In metaphysics too, Cook Wilson had first to cope with idealism, thus
with the looming threat of Bradley's regress about relations
(Bradley 1897, chaps. II-III). Given a relation *R* between
*a* and *b*, a regress arises when one asks what relates
the relation to its relata, e.g., what relates *a* to *R* (and *R* to *b*), and one assumes as an answer that
a further relation, say, *R\**, is needed to tie *a* and
*R*, a bit like a string that ties two objects together that
needs a further string to both of its ends to tie it to the objects.
Since a need to explain the tie between *a* and *R\**
arises by the same token, one needs to introduce yet another relation
*R\*\**, and so forth. In a brief chapter (SI, 692-695),
Cook Wilson argued that the first move here--supposing that
*R\** is needed to tie *a* to *R*--is simply
'unreal', because there could not be a *new*
relation that relates a relation to one of its terms. Thus, the
regress would not be generated. One problem with Cook Wilson's
objections is that they admittedly aim at the regress concerning
'external relations' (SI, 692), but they are not fully
developed against 'internal relations' that Bradley also
entertains (Bradley 1897, 17-18). Cook Wilson may not have a
full and appropriate answer to Bradley, but he thought he was thus
free to entertain the relation of a subject to its attributes.
Although this is rarely noted, Cook Wilson is one of the forerunners
of what we now call 'trope theories'. What Edmund Husserl
called 'moments' and later on G. F. Stout called
'characters' and D. C. Williams 'tropes', Cook
Wilson called instead 'particularization of the universal'
(SI, 336) or 'particularized qualities' (SI (713),
Mulligan et al. (1984, 293 n.13)). Among his contemporaries, Cook
Wilson's ideas indeed stand closest to those of Stout and
Husserl. He seems not to have known about the latter, but one
noticeable common feature concerns 'dependence' (Mulligan
et al. 1984, 294). Cook Wilson suggested towards the end of his life
that 'things' taken in themselves should be called
'existences' (SI, 713) and, keeping close to the language
of Aristotle (*tode ti*, 'a this'), he defined an
'existence' as 'a this
such and such' (SI, 713). Starting with the subject-attribute
distinction, in the case of an existence where the subject is said to
be a 'substance', he argued that the "ordinary
conception" of its 'attribute-element' is that it is
"always a dependent reality" (SI, 157), with these
"dependent existences" having in turn further existences
dependent on them (SI, 153).
Under his conception 'existences' are, one might say,
'bundles' of particularized qualities. But he did not
consider 'substance' as a sort of substrate in which
universals are particularized, as many defenders of tropes do: the
'thing' is according to him a mere 'unity' of
elements, "not something over and above them, which has them,
but their unified existence" (SI, 155). He also expressed this
by saying that the universal "covers the whole nature of the
substance" (SI, 349), as "the particular does not have the
universal *in* it and something *also* besides the
universal to make it particular" (SI, 336, see also Joseph
(1916a, 23)). It contains nothing besides the particularization of the
universal: "the particular is not something that has the
quality, it is the particularized quality. This animal is
particularized animality" (SI, 713). Likewise, "the
differentia cannot be separated from the genus as something added on
to it" (SI, 359), the species are just the forms that the
universal, as genus, takes (SI, 335). Stout, citing Cook Wilson with
approval, put it thus: "square shape is not squareness plus
shape; squareness itself is a special way of being a shape"
(Stout 1930, 398). (The view harks back to Aristotle, *Met.* I
8 1058a21-26, see also on this point Joseph (1916a,
85-86), Wisdom (1934, 29), Prior (1949, 5-6) and Butchvarov
(1966, 147-153)).
Trope theories usually explain the fact that two things
'share', say, a particular shade of yellow with the help of a
resemblance class. Here, Cook Wilson parts company, claiming that the
universal is "something identical in the particulars, which
identity cannot be done away by substituting the term *similar*
for *same*" (SI, 344 & 347). Although the universal
is, according to Cook Wilson, nothing outside its particularizations,
it is claimed to be a "unity and identity in particulars"
or a "real unity in objects" (SI, 344). It is meant to
possess an 'intrinsic character' (SI, 342n. & 351),
for which Cook Wilson reluctantly introduced a technical term:
"the characteristic being of the universal" (SI, 342). But
he introduced no equivalent to Stout's *fundamentum
relationis* or 'distributive unity of a class' (Stout
1930, 388), because he did not define the universal in terms of
membership of a class. Although he recognized that universals have an
extension, i.e., the "total being of the universal"
composed of the "whole of the particulars as the
particularization of this unity", he did not identify the
universal with it (SI, 338).
For this reason and in virtue of the realist epistemology which he
tied to this conception (immediately below), Cook Wilson is often read
as having held a form of 'immanent realism', as opposed to
the moderate forms of nominalism more typical of trope theories
(Armstrong (1978, 85) or Moreland (2001, 165 n. 16)). One objection
against this stance is that there is not much point having both
universals and tropes. As David Armstrong put it, one of the two is
bound to be redundant: "Either get rid of universals, embrace a
trope version of Resemblance Nominalism or else cut out the middlemen,
namely, the tropes" (Armstrong 1989, 17 & 132). This reading
cannot be the last word, though, since Cook Wilson also appears only
to have admitted one sort of entity, the tropes or
'particularizations of the universal'. Thus, there seems
to be a tension, at best unresolved, in Cook Wilson's stance,
which could explain why it has found few supporters, one exception
being J. R. Jones (1949, 1950 & 1951).
Cook Wilson's 'particularizations of the universal'
are thus "strictly objective" and "not a mere
thought of ours" (SI, 335-336); they cannot, therefore, be
phenomenal entities. His thinking here is of a piece with his
anti-idealist views on 'apprehension'. He had argued,
against T. H. Green's neo-Kantian stance, that apprehension has
no 'synthetic' character: any synthesis apprehended is
attributed to the object and not the result of an activity of the
'apprehending mind'. As he put it, "in the judgement
of knowledge and the act of knowledge in general we do not combine our
apprehensions, but apprehend a combination" (SI, 279), and it is
"the nature of the elements themselves" which
"determines which unity they have or can have"; the
'apprehending mind' has "no power whatever to
*make* a complex idea out of simple ones" (SI, 524). This
view implies that universals and connections between them, as
particularized, are *in rebus* and to be apprehended as such
(Price 1947, 336). The view is, therefore, that there is no possible
apprehension of the universal except as particularized:
>
>
> Just as the universal cannot be, except as particularized, so we
> cannot apprehend it except in the apprehension of a particular. (SI,
> 336)
>
>
>
Cook Wilson further reasoned that, when one states that
'*a* is a triangle', one is predicating of
*a* the universal 'triangularity' and that,
analogously, in stating that 'triangularity is a
universal' one would then put the universal
'triangularity' in the subject position--the
'nominative case to the verb' as he quaintly puts it (SI,
349)--and treat it as if it were a particular, while putting
'universal' in the predicate position. But to talk about
the universal in this way would require *per impossibile* that
one apprehends the universal in abstraction from any of its
particularizations. Cook Wilson held his conception as being, if not
popular among philosophers, at least in accordance with ordinary
language and common sense (SI, 344-345).
It is thus the particularization of a given 'characteristic
being' which we are said to apprehend, but "neither as
universal nor as particularized" (SI, 343). There is no
suggestion in Cook Wilson's writings of something akin to
Husserl's act of 'categorial intuition' or
'ideational abstraction', in virtue of which the universal
would be "brought to consciousness and [...] actual
givenness" (Husserl 2001, 292). He believed instead that the
'intrinsic character' of any universal is inexplicable,
because the relation between particular and universal, although
fundamental and thus presupposed by any explanation (SI, 335 &
345), is *sui generis*, therefore not explainable in terms of
something else:
>
>
> I seem to have discovered that the true source of our metaphysical
> difficulties lies in the attempt, a mistaken attempt too frequent in
> philosophy, to explain the nature of the universal in terms of
> something other than itself. In fact the relation of the universal to
> the particular is something *sui generis*, presupposed in any
> explanation of anything. The nature of the universal therefore
> necessarily and perpetually eludes any attempt to explain itself. The
> recognition of this enable one to elucidate the whole puzzle of the
> *Parmenides* of Plato. (SI, 348, see also SI, 361)
>
>
>
Cook Wilson thus believed himself to have found an answer to the notorious set
of regress arguments known as the 'third man' in
*Parm*. 132a-143e. (Ryle argued later on that Cook
Wilson's answer would not do, developing his own regress
argument against the notion of 'being an instance of',
which he read Cook Wilson as presupposing while merely claiming that
it is *sui generis* (Ryle 1971, vol. I, 9-10).)
## 8. Philosophy of Logic
Cook Wilson did not contribute to logic. His inclination was at any
rate conservative. For example, there was for him no room even to try
and make a case for an alternative logic, since he held the
Aristotelian principles (the principle of syllogism, the law of
excluded middle and the principle of contradiction) to be "those
simple laws or forms of thoughts to which thought must conform to be
thought at all. Thought therefore cannot throw any doubt on them
without committing suicide" (ETA, 17 & SI, 626). Cook Wilson
also rejected new developments in symbolic logic from Boole to
Russell. He was not as such averse to the "symbolization of
forms of statements": when criticizing Boole's he alluded
to "an improved calculus of [his] own" (SI, 638), which he
called "fractional method" (SI, 662). But he did not
publish it and all we have is the beginnings of an outline (SI,
192-210). His main objection to Boole--a common one at the
time--was to the algebraist's use of equations, perceived
as an intrusion of mathematics in logic (SI, 635-636). (See the
whole chapter (SI, 635-662).) And the little he knew of
mathematical logic, Cook Wilson ferociously opposed. He did think of
syllogistic as "a science in the same sense as pure
mathematics" (SI, 437), but he was opposed to the very idea of
logical foundations for mathematics because he believed that logical
inferences are exhausted by syllogistic and, "mathematical
inference *as such* is not syllogistic" (SI, xcvi). This
is a misunderstanding, common at the time, of the expressive power of
quantification theory as developed among others by Gottlob Frege and
C. S. Peirce, of which he was clearly insufficiently cognizant, if not
plainly ignorant.
Cook Wilson used his interpretation of Plato's doctrine of Idea
numbers as *asumbletoi arithmoi* as a basis for an attack
on Russell and the logicist definition of numbers, Plato's
doctrine being understood (see section 2) as entailing that there
cannot be a universal of the members of the ordered series of
universals: 1-ness, 2-ness, 3-ness, etc., because they do not share an
'intrinsic character'. Confusing this series with that of
the natural numbers, he concluded that the logicist definition in
terms of classes of classes, e.g., of the number 5 as the class of all
classes equinumerous with a given quintuplet, is "a mere
fantastic chimera" (SI, 352). In line with the argument about
putting the universal 'triangularity' in the subject
position (previous section), Cook Wilson reasoned for the case of
natural numbers that there would be an alleged 'universal of
numberness' and that this would lead straight to a
contradiction: since all particulars of a universal are said to
possess its quality, a group of 5 as a particular of
'5-ness' would thus possess the 'universal of
numberness', thus contradicting his claim that a particular
cannot be a universal (SI, 353).
By the same token, Cook Wilson thought that this line of reasoning
shows that Russell's paradox of the class of all classes that do
not contain themselves (Russell 1903, chap. X) is a "mere
fallacy of language" (SI, cx). He thus argued at length (SI,
§§ 422-32), including in his correspondence with
Bosanquet (SI, §§ 477-518), that there can no more be a
'class of classes' than 'universalness' could
be a 'universal of universals' and that a class can no
more be a member of itself than 'universalness' could be a
particular of itself: the implied 'universal of
universals' or 'universalness', of which universals
would be the particulars, would be a particular of itself, which is,
Cook Wilson claims, "obviously absurd" (SI, 350). For bad
reasons such as these, Cook Wilson was contemptuous of what he called
"the puerilities of certain paradoxical authors" (SI,
348). He even wrote to Bosanquet:
>
> I am afraid I am obliged to think that a man is conceited as well as
> silly to think such puerilities are worthy to be put in print: and
> it's simply exasperating to think that he finds a publisher
> (where was the publisher's reader?), and that in this way such
> contemptible stuff can even find its way into examinations. (SI, 739)
>
The problem with Cook Wilson's arguments is that they are based
on an elementary confusion between membership of a class and inclusion
of classes (see for example SI, cx & 733-734). Peter Geach
called Cook Wilson "an execrably bad logician" (Geach
1978, 123) for committing blunders such as this. (Cook Wilson's
claim in a letter to Lewis Carroll that it is not possible to know
that 'Some *S* is *P*' without knowing which
*S* it is which is *P* is another such elementary
blunder (Carroll 1977, 376).)
Fortunately, Cook Wilson made a more interesting contribution to
philosophy of logic in his discussion of Lewis Carroll's paradox
of inference (Carroll 1895), of which he gave the following
formulation:
>
>
> ... let the argument be A1 = B1, B1 = C1, therefore A1 = C1. The
> rule which has to be put as the major premiss is, things being equal
> to the same thing are equal to one another. Under this we subsume
> A1 and C1 are things equal to the same thing,
> and so draw the conclusion that they are equal to one another. This is
> syllogism I. Now syllogism I, which is of the form MP, SM, SP, in turn
> exemplifies another rule of inference which is the so-called
> *dictum de omni et nullo*. This must now appear as a major
> premiss. The resulting syllogism may be put variously; the following
> short form will serve. Every inference which obey the dictum is
> correct; the inference of syllogism I obeys the dictum; therefore it
> is correct. This is a new syllogism (II) which again has for rule of
> inference the same dictum; hence a new syllogism (III) and so on
> *in saecula saeculorum*. (SI, 444)
>
>
>
Leaving aside the incorrect identification of the inference rule as
the *dictum de omni,* this is recognizably the infinite regress
in Carroll's paradox and, while Carroll did not provide one,
Cook Wilson offered the following diagnosis:
>
>
> ... it is clearly a fallacy to represent the rule according to
> which the inference is to be drawn from premisses as one of the
> premisses themselves. We should anticipate that this must somehow
> produce an infinite regress. (SI, 443)
>
>
>
These passages cannot be precisely dated and his correspondence with
Carroll for the relevant period is lost, so one cannot tell who framed
the paradox first (Moktefi 2007, chap. V, sect. 3.1). They were both
anticipated, however, by Bernard Bolzano, who already stated the
paradox in his *Wissenschaftslehre* (1837) and provided a
similar diagnosis (Bolzano 1972, § 199). (For a discussion, see
Marion (2016).) Interestingly, Cook Wilson's diagnosis is linked
to his own views on the apprehension of universals *via* their
particularizations (see the previous section):
>
>
> A direct refutation may, however, be given as follows. In the above
> procedure the rule of inference is made a premiss and a particular
> inference is represented as deduced from it. But, as we have seen,
> that this is an inversion of the true order of thought. The validity of
> the general rule of inference can only be apprehended in a particular
> inference. If we could not see the truth directly in the particular
> inference, we should never get the general rule at all. Thus it is
> impossible to deduce the particular inference from the general rule.
> (SI, 445)
>
>
>
In Lewis Carroll's presentation of the paradox, the Tortoise
refuses to infer the conclusion when faced with an instance of the
rule of *Modus Ponens* and Achilles suggests that one should
then add the rule as a further premise, but the Tortoise still refuses
to infer, so that Achilles then suggests adding the whole formula
resulting from adding the rule as a premise as yet a further premise,
to no avail, and so forth. It is often claimed that the
Tortoise's repeated refusals indicate that rules of inference
are in themselves normatively inert, so that a further ingredient is
needed for one to infer, e.g., a "rational
insight" (BonJour 1998, 106-107). Cook
Wilson's claim that one can only 'apprehend' the
validity of the rule in a particular inference, thus 'seeing the
truth directly' or possessing a 'direct intuition'
(SI, 441), is an analogous move.
The idea that a rule of inference cannot be introduced as a premise in
an inference in accordance with it, on pain of an infinite regress,
was also reprised by Ryle (Ryle 1971, vol. II, 216 & 238). But he
used it to argue for his celebrated distinction between 'knowing
how' and 'knowing that': "Knowing a
rule of inference is not possessing a bit of extra information.
Knowing a rule is knowing how. It is realised in performances which
conform to the rule, not in theoretical citations of it."
(Ryle 1971, vol. II, 217). However, if 'knowing how' is no
longer a state of mind (even dispositionally), then the view is no
longer Cook Wilson's.
Cook Wilson believed that all statements are
'categorical', arguing away 'hypothetical
judgements' with the claim that "in the hypothetical
attitude", we apprehend "a relation between two
problems" (SI, 542-543 & Joseph (1916a, 185)). In
other words, conditionals do not express judgements but connections
between questions. This view was elaborated by Ryle into his
controversial stance on indicative conditionals as
'inference-tickets' in "'If',
'So', and 'Because'" (Ryle 1971, vol.
II, 234-249). Ryle compared conditionals of the form 'If
*p*, then *q*', to "bills for statements
that statements could fill" (Ryle 1971, vol. II, 240) and he
rejected the form 'If *p* then *q*, but
*p*, therefore *q*', claiming that in some way the
*p* in the major premise cannot for that reason be the same as
*p* asserted by itself. This, of course, runs afoul of
Geach's 'Frege Point' (Geach 1972).
## 9. Philosophy of Language
Cook Wilson also questioned the superficial uniformity of the form
'*S* is *P*', calling the subject of
attributes 'metaphysical' (SI, 158) in order to
distinguish it from the subject of predication. He thus distinguished
the *ontological* distinction between substance and attribute
from the *logical* subject-predicate distinction. Using the
traditional definition of the subject as 'what supports the
predicate' and the predicate as 'what is said concerning
the subject' (quoting Boethius, SI, 114-115), he noted
that a 'statement' such as 'That building is the
Bodleian' has different analyses, depending on the occasion in
which it is used. If in answer to 'What building is
that?', with the 'stress accent' on 'the
Bodleian' as in 'That building is *the
Bodleian*', the subject is 'that building' and
the predicate as 'what is said concerning the subject' is
'that that building is the Bodleian' (SI, 117 & 158).
But, if in answer to the question 'Which building is the
Bodleian?' with the 'stress accent' now on
'that' as in '*That* building is the
Bodleian', then the Bodleian is the subject and the predicate is
'that building pointed out was it' (SI, 119). The same
goes for 'glass is *elastic*', where elasticity is
predicated of glass and '*glass* is elastic', when
it is stated in answer to someone looking for substances that are
elastic. This shows that the relation of subject to predicate is
somehow symmetric; this is not the case, however, with the
subject/attribute distinction, because "The subject cannot
be an attribute of one of its own attributes" (SI, 158).
Cook Wilson also noted that "the stress accent is upon
the part of the sentence which conveys the new
information" (SI, 118), and he would thus say that the
subject-predicate relation depends on the *subjective* order in
which we apprehend them (SI, 139), while the relation between
'subject' and 'attribute' is *objective* in the
sense that it holds between a particular thing and a
'particularized quality': it is a "relation
between realities without reference to our apprehension of
them" (Robinson 1931, 103).
In 'How to Talk' (Austin 1979, 134-153), Austin
further developed distinctions akin to Cook Wilson's
differentiation between the logical subject/predicate and metaphysical
subject/attribute distinctions, and this differentiation was shared by
P. F. Strawson (Strawson 1959, 144), who also believed in
'particularized qualities' (Strawson (1959, 168) &
(1974, 131)). In a bid to avoid Bradley's regress (Strawson
1959, 167), he introduced the idea of 'non-relational
ties' between subject and attribute, leaving
'relations' for the link between logical subject and
predicate. Some non-relational ties are thus said to hold between
particulars and particulars: to the relation between Socrates and the
universal 'dying' corresponds an 'attributive
tie' between the particulars that are both Socrates *and*
the event of his death. That such 'ties' are less obscure
than 'relations' and that this maneuver actually succeeds
in stopping the regress are further issues, but it is interesting to
note that Strawson chose the name 'attributive tie' in
honour of Cook Wilson (Strawson (1959, 168), see SI, 193).
Strawson also noted that Cook Wilson's argument for
differentiating between the logical subject/predicate and metaphysical
subject/attribute distinctions involves an appeal to
"pragmatic considerations" (Strawson 1957,
476). Cook Wilson's claim that a sentence such as 'glass
is elastic' may state something different depending on the
occasion in which it is used also had an important continuation in J.
L. Austin's more general point that, although the meaning of
words plays a role in determining truth-conditions, it is not an
exhaustive one: it "does not fix for *them* a
truth-condition" because that depends on how truth is to be
decided on the occasion of their use (see Travis (1996, 451) and
Kalderon & Travis (2013, 492 & 496)):
>
>
> ... the question of truth and falsehood does not turn only on
> what a sentence *is*, nor yet on what it *means*, but
> on, speaking very broadly, the circumstances in which it is uttered.
> (Austin 1962, 111)
>
>
>
This line of thought has been pursued further by Charles Travis under
the name of 'occasion-sensitivity', i.e.,
"the fact that the same state of the world may require
different answers on different occasions to the question of whether
what was said in a given statement counts as true"
(Travis (1981, 147); see also (1989, 255)). This is a recurring
theme; see the papers collected in Part 1 of Travis (2008). Thus,
Cook Wilson's above examples, 'That building is the
Bodleian' or 'glass is elastic', are genealogically
related to what are commonly known as 'Travis cases',
i.e., sentences used to make a true statement about an item in one
occasion and a false one about the same item in another. (For examples
of these, see Travis (1989, 18-19), (1997, 89), (2008, 26 &
111-112).)
R. G. Collingwood also took Cook Wilson as putting forth a slightly
different thesis, namely that the meaning of a statement is determined
by the question to which it is an answer (Collingwood 1938, 265 n.).
He used this idea as the basis for his 'logic of questions and
answers' (Collingwood 2013, chap. 5) and for his theory of
presuppositions (Collingwood 1998, chaps. 3-4), this last being
further developed by the French linguist Oswald Ducrot (1980, 42f.).
Collingwood was reluctant, however, to recognize Cook Wilson as an
inspiration, since he thought ill of the idea of
'apprehensions' as a non-derivative basis for knowledge
and he believed instead that knowledge comes from asking questions
first (Collingwood 2013, 25), and thus that knowledge depends on a
'complex of questions and answers' (Collingwood 2013,
37).
## 10. Moral Philosophy
Cook Wilson hardly wrote on topics outside the theory of knowledge and
logic, but two remarks ought to be made concerning moral philosophy.
First, the last piece included in *Statement and Inference* is
composed of notes for an address to a discussion society in 1897, that
was announced as 'The Ontological Proof for God's
Existence'. In this text, which opens with a discussion of
Hutcheson and Butler, Cook Wilson argued that in the case of
"emotions as are proper to the moral consciousness", such
as the feeling of gratitude:
>
> We cannot separate the judgement from the act as something in itself
> speculative and in itself without the emotion. We cannot judge here
> except emotionally. This is true also of all moral and aesthetic
> judgements. Reason in them can only manifest itself emotionally. (SI,
> 860)
>
He argued further, in what amounts to a form of moral realism, that
there must be a real experience, i.e., in the case of gratitude,
"Goodwill of a person, then, must here be a real
experience" (SI, 861), and that the feeling of "reverence
with its solemnity and awe" is in itself "not fear, love,
admiration, respect, but something quite *sui generis*"
(SI, 861). It is a feeling that, Cook Wilson argued, "seems
directed to one spirit and one alone, and only possible for spirit
conceived as God" (SI, 864). In other words, the existence of
the feeling of reverence presupposes that God exists. Cook Wilson thus
sketched within the span of a few pages a theory of emotions, which is
echoed today in the moral realism that has been developed, possibly
without knowledge of it, in the wake of David Wiggins'
'Truth, Invention and the Meaning of Life' (Wiggins 1976)
and a series of influential papers by John McDowell--now
collected in McDowell (2001).
Cook Wilson's ideas had a limited impact on Oxford theology.
Acknowledging his debt, the theologian and philosopher C. C. J. Webb
described religious experience as one that "cannot be adequately
accounted for except as apprehension of a real object" (Webb
1945, 38), but he nevertheless chose to describe his standpoint as a
form of 'Platonic idealism' (Webb 1945, 35). Cook
Wilson's realism also formed part of the philosophical
background to C. S. Lewis' "new look" in the 1920s,
*via* E. F. Carritt's teaching. In *Surprised by Joy*, Lewis also described awe as "a commerce with something
which [...] proclaims itself sheerly objective" (Lewis
1955, 221), but he quickly moved away from this position (see Lewis
(1955, chaps. XIII-XIV) and McGrath (2014, chap. 2)).
Secondly, Prichard is also responsible for an extension of Cook
Wilson's conception of knowledge to moral philosophy, with his
paper 'Does Moral Philosophy Rest on a Mistake?' (Prichard
1912), whose main argument is analogous to Cook Wilson's
argument for the impossibility of defining knowledge. In a nutshell,
duty is *sui generis* and not definable in terms of anything
else. The parallel is explicit in Prichard (1912, 21 &
35-36). That we ought to do certain things, we are told, arises
"in our unreflective consciousness, being an activity of moral
thinking occasioned by the various situations in which we find
ourselves", and the demand that it is proved that we ought to do
these things is "illegitimate" (Prichard 1912, 36). In
order to find out our duty, "the only remedy lies in actually
getting into a situation which occasions the obligation" and
"then letting our moral capacities of thinking do their
work" (Prichard 1912, 37). This paper became so influential that
Prichard was elected in 1928 to the White's Chair of Moral
Philosophy at Corpus Christi, although his primary domain of
competence had been the theory of knowledge. His papers in moral
philosophy were edited after his death as Prichard (1949, now 2002).
Prichard stands at the origin of the school of 'moral' or
'Oxford intuitionism', of which another pupil of Cook
Wilson, the Aristotle scholar W. D. Ross (Ross 1930, 1939) remains the
foremost representative, along with H. W. B. Joseph, E. F. Carritt,
and J. Laird. Some of the views they expressed have recently gained
new currency within 'moral particularism', e.g., in the
writings of Jonathan Dancy (Dancy 1993, 2004).
## 11. Legacy
The historical importance of Cook Wilson's influence ought not
to be underestimated. In his obituary, H. W. B. Joseph described him
as being "by far the most influential philosophical teacher in
Oxford", adding that no one had held a place so important since
T. H. Green (Joseph 1916b, 555). This should be compared with the
claim that in the 1950s Wittgenstein was "the most powerful and
pervasive influence" (Warnock 1958, 62). The
'realist' reaction against British Idealism at the turn of
the 20th century was at any rate not confined to the
well-known rebellion of G. E. Moore and Bertrand Russell at Cambridge.
There were also 'realisms' sprouting in Manchester (with
Robert Adamson and Samuel Alexander), and in Oxford too, where Thomas
Case had already argued for realism in *Physical Realism* (Case
1888) (Marion 2002b), although it is clearly Cook Wilson's
influence that swayed Oxford away from idealism. Since he published so
little, it was mainly through teaching and personal contact
that he made a significant impact on Oxford philosophy, not only
through the peculiar tutorial style to which generations of
'Greats' students were subjected--as described in
Walsh (2000) or Ackrill (1997, 2-5)--but also through
meetings that he initiated, which were to become the
'Philosophers' Teas' under Prichard's
tutelage, the 'Wee Teas' under Ryle's and
'Saturday Mornings' under Austin's.
His legacy can thus be plotted through successive generations of
Oxford philosophers. E. F. Carritt, R. G. Collingwood (who reverted to
a form of idealism later on), G. Dawes Hicks, H. W. B. Joseph, H. A.
Prichard, W. D. Ross and C. C. J. Webb are among his better-known
pupils at the turn of the century. After his death, his influence
extended through the teaching of Carritt, Joseph and Prichard, and the
posthumous volumes of *Statement and Inference* to the
post-World War I generation of the 1920s, including Frank Hardie, W.
C. Kneale, J. D. Mabbott, H. H. Price, R. Robinson and G. Ryle, and
the early analytic philosophers of the 1930s, J. L. Austin, I. Berlin,
J. O. Urmson, and H. L. A. Hart, in particular. For example, Isaiah
Berlin described Hart as "an excellent solid Cook
Wilsonian" in a letter to Price (Berlin 2004, 509), and admitted
that he himself had at first been an Oxford Realist (Jahanbegloo 1992,
153). (See Marion (2000, 490-508) for further details.) Thus,
Oxford Realism first dislodged British Idealism from its position of
prominence at Oxford and then transformed itself into ordinary
language philosophy and, as pointed out in the previous section, moral
intuitionism. In the post-World War II years, Cook Wilson's name
gradually faded away, however, while 'ordinary language
philosophy', which owed a lot to his constant reliance on
ordinary language against philosophical jargon, blossomed. It became
one of the strands that go under the name of 'analytic
philosophy', so Cook Wilson should perhaps be seen as one of its
many ancestors.
The only Oxford philosopher of note who opposed the
'realists' before World War II was R. G. Collingwood, who
died prematurely in 1943. He felt increasingly alienated and ended up
reduced to invective, describing their theory of knowledge as
"based upon the grandest foundation a philosophy can have,
namely human stupidity" (Collingwood 1998, 34) and their
attitude towards moral philosophy as a "mental kind of
decaudation" (Collingwood 2013, 50). Against Cook Wilson's
anti-idealist claim that 'knowing makes no difference to the object
known', Collingwood objected that, in order to vindicate it, one
would need to compare the object as it is being known with the object
independently of its being known, which amounts to knowing something
unknown, a contradiction (Collingwood 2013, 44). But
Collingwood's argument did not rule out the possibility of
coming to know an object, while knowing that it was not altered in the
process. (For critical appraisals, see Donagan (1985, 285-289),
Jacquette (2006), and Beaney (2013).) In another telling complaint, he
criticized the Oxford Realists for being interested in assessing the
truth or falsity of specific philosophical theses without paying
attention to the fact that the meaning of the concepts involved may
have evolved through history, and so there is simply no 'eternal
problem' (Collingwood 2013, chap. 7). This points to a lack of
historical sensitivity, which is indeed another feature of analytic
philosophy that arguably originates in Cook Wilson.
There was another deleterious side to Cook Wilson's influence in
Oxford: his contempt for mathematical logic. It explains why one had
to wait until the appointment of Hao Wang in the 1950s for modern
formal logic first to be taught at Oxford. In the 1930s, H. H. Price
was still teaching deductive logic from H. W. B. Joseph's *An
Introduction to Logic* (Joseph 1916a) and inductive logic from J.
S. Mill's *System of Logic*. This reactionary attitude
towards modern logic and later objections to 'ordinary language
philosophy' go a long way to explain why Cook Wilson's
reputation dropped significantly in the second half of last century.
In the 1950s, Wilfrid Sellars was virtually alone in his praise:
> I can say in all seriousness that twenty years ago I regarded
> Wilson's *Statement and Inference* as the philosophical
> book of the century, and Prichard's lectures on perception and
> on moral philosophy, which I attended with excitement, as veritable
> models of exposition and analysis. I may add that while my
> philosophical ideas have undergone considerable changes since 1935, I
> still think that some of the best philosophical thinking of the past
> hundred years was done by these two men. (Sellars 1957, 458).
As the tide of 'ordinary language philosophy' ebbed, Cook
Wilson's views on knowledge showed more resilience. In the
1960s, Phillips Griffiths' anthology on *Knowledge and
Belief* included excerpts from Cook Wilson (Phillips Griffiths
1967, 16-27) and John Passmore was able to write that
"Cook Wilson's logic may have had few imitators; but his
soul goes marching on in Oxford theories of knowledge" (Passmore
1968, 257).
As shown in sections 5 and 6, Cook Wilson's views on knowledge
and perception are now once more involved in contemporary debates.
They had remained influential all along, although his name was often
not mentioned. His peculiar combination of the claims that knowledge
is a factive state of mind and that it is undefinable, argued for anew
by J. L. Austin, has been taken up and further developed by John
McDowell (McDowell 1994, 1998), Charles Travis (Travis 1989, 2008),
and Timothy Williamson (Williamson 2000, 2007), who is currently
Wykeham Professor of Logic, New College. One has, therefore, what
Charles Travis once described as "an Oxford tradition despite
itself" (Travis 1989, xii; on this last point, see also
Williamson 2007, 269-270n).
During the twentieth century, secondary literature on Cook
Wilson's philosophy was not considerable, with a few papers of
unequal value by Foster (1931), Furlong (1941), Lloyd Beck (1931) and
Robinson (1928a, 1928b), along with a few studies on universals in the
post-war years (see section 7), and only one valuable commentary,
Richard Robinson's *The Province of Logic* (Robinson
1931). Interest in the study of his philosophy was only revived at the
beginning of this century, with Marion (2000) giving a first overview
of Oxford Realism. In a short book, Kohne (2010) charts the views of
Cook Wilson, Prichard and Austin on knowledge as a mental state. An
important contribution, Kalderon & Travis (2013) secured Oxford
Realism's place in the history of analytic philosophy, comparing
it with other forms of realism in Frege, Russell and Moore, while
drawing links with later developments in the writings of J. L. Austin,
J. M. Hinton and John McDowell. As a result of this revival of
interest, the philosophies of J. L. Austin (Longworth 2018a, 2018b
& 2019, and the entry in this Encyclopedia) and of Wilfrid Sellars
(Brandhoff 2020) are now being re-interpreted in light of Cook
Wilson's legacy.

# Wilhelm Windelband

## 1. Biographical Sketch
Wilhelm Windelband was born in 1848 in Potsdam, Germany. His father,
Johann Friedrich Windelband, was a state secretary for the Province of
Brandenburg. Windelband studied in Jena, Berlin, and Göttingen,
attending lectures by Kuno Fischer (1824-1907) in Jena and
studying with Hermann Lotze (1817-1881) in Göttingen.
Fischer and Lotze would deeply influence Windelband's
philosophical thinking, as well as his work as a historian of
philosophy.
In 1870, Windelband completed his dissertation on *Die Lehren vom
Zufall* [*Doctrines of chance*] under Lotze's
supervision. The following year he served as a soldier in the
German-French war. After his military service, he completed his
habilitation in Leipzig and took up a position as
"Privatdozent" there. His habilitation was published in
1873 under the title *Ueber die Gewissheit der Erkenntniss: eine
psychologisch-erkenntnisstheoretische Studie* [*On the
certainty of knowledge: a psychological-epistemological study*].
In 1874, Windelband married Martha Wichgraf with whom he would have
four children.
Two years later, Windelband became professor (ordinarius) of
"inductive philosophy" in Zürich. He lectured on
psychology before taking up a position as professor of philosophy in
Freiburg im Breisgau in 1877. In 1882 he accepted an offer from the
University of Strasbourg. While his inaugural lectures in Zürich
and Freiburg had centered on the relation between psychology and
philosophy, his works from the Strasbourg period develop his core
themes in the philosophy of values and the philosophy of history.
Windelband served as "Rektor" of the University of
Strasbourg in 1894/95 and 1897/98. He remained in Strasbourg until
1903, when he accepted a call from the University of Heidelberg.
Between 1905 and 1908 he served as representative of the University of
Heidelberg in the Baden "Landtag". He was a member of the
Berlin Academy of Sciences, and of the Academies of Sciences of
Göttingen, Bayern, and Heidelberg. He remained in Heidelberg and
taught there until his death in 1915.
## 2. From Kant to the Philosophy of Values
Windelband's views on normativity are strongly influenced by his
teacher Lotze. In his *Logic* (1874), Lotze distinguishes
between psychological laws which determine how thinking proceeds as a
matter of fact, and logical laws, which are normative laws and
prescribe how thinking ought to proceed (Lotze 1874: §x;
§332, §337). This distinction also corresponds to a
distinction between act and content. Lotze observes that
"ideas" do occur in us as acts or events of the mind. But
their content does not consist in such acts, is not reducible to
mental activity, and does not exist in the way empirical processes and
entities may be said to exist. It is not real, but "valid"
(Lotze 1874: §§314-318).
The distinction between the factual and the normative would become the
cornerstone of Windelband's "philosophy of values".
In his early writings, however, Windelband does not yet embrace this
distinction. In his habilitation thesis *Über die
Gewissheit der Erkenntnis* (1873), he argues that logic is a
normative discipline, but that it needs to be put on a psychological
basis. This is because the justification of our knowledge claims is
always dependent on and relative to specific epistemic purposes which,
in turn, are given psychologically. Although he criticizes the
identification of the conditions of knowledge with psychophysical
processes, he does think of psychology and logic or epistemology as
continuous with one another. His 1875 "Die Erkenntnislehre unter
dem völkerpsychologischen Gesichtspunkte" ["The
theory of knowledge from the perspective of folk-psychology"],
which was published in Moritz Lazarus' and Heymann
Steinthal's *Zeitschrift für Völkerpsychologie und
Sprachwissenschaft* [*Journal of folk psychology and
linguistics*], is more radical. It denies that logical norms are
independent of the conscious mind and claims that the origins of
logical norms are to be found in the social history of humankind: the
principle of contradiction emerges in situations of social conflict
together with the distinction between true and false beliefs; and the
law of sufficient reason comes into being when conflicts between rival
views are no longer settled by brute force. On this picture, logical
principles exist only when humans cognize them. There is no boundary
between psychological acts and objective logical laws, or between
actual historical acceptance and normative validity.
This is precisely the type of thinking that Windelband will later
reject as "psychologistic", "historicist", and
"relativist" (1883: 116-117, 132). It is not clear
what led Windelband to this change of view, but a deeper engagement
with the *Critique of Pure Reason* and with the Kant
scholarship of his time, in particular with works by Kuno Fischer
(1860), Hermann Cohen (1871), and Friedrich Paulsen (1875), seems to
have factored in. The clear contours of Windelband's
anti-psychologistic interpretation of Kant emerge for the first time
in his 1877 paper "Über die verschiedenen Phasen der
Kantischen Lehre vom Ding-an-sich" ["On the different
stages of the Kantian doctrine of the thing-in-itself"]. In this
essay, Windelband argues against the idea, held by prominent figures
of the "back-to-Kant" movement like Friedrich Albert Lange
(1828-1875) and Hermann von Helmholtz (1821-1894), that
knowledge emerges from an interaction between subject and object.
According to their view, the object affects the subject's mind,
while the subject provides the *a priori* cognitive structures
that organize representations. These *a priori* structures are
innate as they consist in the psychophysical constitution of the human
sensory apparatus and mind. Cohen's critique of this
(mis)interpretation focuses on the concept of the *a priori*
and the question of objectivity (Cohen 1871). Windelband, in contrast,
takes the problem of the thing-in-itself and the concept of truth as
his starting points. His critique of the subject-object-interaction
model leads him to an immanent conception of truth, according to which
truth consists in the normative rules according to which our judgments
*ought* to be formed. The immanent concept of truth thus shifts
the focus of philosophical analysis to the universal and necessary
"rules" that ground our judgments.
The 1877 essay gives a genetic account of the development of
Kant's views about the thing-in-itself from the *Inaugural
Dissertation* (1770) to the second edition of the *Critique of
Pure Reason* (1787). This genetic reconstruction is supposed to
reveal the underlying philosophical problems and motivations that
drove Kant's thinking, as well as the inner tensions that,
according to Windelband, permeate critical philosophy. In particular,
Windelband identifies a "gulf" (1877: 225) between
Kant's critique of knowledge and the metaphysics of morals, as
well as a tension between the psychological account of the faculties,
and Kant's mature anti-psychologism. He distinguishes between
four phases in Kant's thinking about the thing-in-itself, and
argues that residues of the latter three stages are present in both
editions of the *Critique* (1781; 1787) and in the
*Prolegomena* (1783).
The first phase, Windelband argues, is that of the *Inaugural
Dissertation* (1770). Kant adopts Leibniz's distinction
between noumena and phenomena while formulating a genuinely novel
thought: he introduces the psychological distinction between receptive
sensibility and the spontaneous intellect. This distinction allows him
to maintain the claim that, unlike intuitions, concepts relate to
things-in-themselves (1877: 240-241).
In the second phase, Kant concerns himself more deeply with the
question how the concepts of the understanding can relate to objects.
He is driven to the insight that
> [w]e can only have *a priori* knowledge of that which we
> produce by the lawlike forms of our rational activities
> [*Vernunfthandlungen*]. (1877: 246)
Here, Windelband follows Fischer's claim that the categories of
the understanding are *a priori* valid for experience because
they produce or "make" it (Fischer 1860). Because the
understanding cannot "make" the thing-in-itself, the
thing-in-itself cannot be known.
According to Windelband, Kant then proceeds to ask why we assume the
existence of things-in-themselves at all if we cannot know them.
Here he enters his third, most radical phase. Kant now thinks of the
thing-in-itself as a fiction, an illegitimate hypostasis in which
> the universal form of the synthetic act of the understanding is seen
> as something that exists independently of experience. (1877: 254)
Windelband argues that it is only by dismissing the existence of the
thing-in-itself and by jettisoning the phenomena-noumena distinction
that Kant can undertake his anti-psychologistic turn. With the
rejection of the thing-in-itself, the characterization of sensibility
as "receptive" also needs to be abandoned. And that makes
a psychological construal of the faculties nonsensical. Kant is then
also able to abandon the concept of truth as correspondence between
representations and objects in favor of a strictly immanent
conception. The immanent conception defines truth in terms of the
universal and necessary rules that the relations between our
representations need to accord with (1877: 259-260).
And yet, Kant could not rest with this radical view given his
commitments in moral philosophy. Practical reason, and in particular
the idea that the moral law does not depend on the qualities of humans
as sensuous beings, but only on reason, demands a return to the
assumption that the thing-in-itself exists (1877: 262). In the fourth
phase, Kant thus reintroduces the thing-in-itself, while maintaining
his anti-psychologism.
This genetic reconstruction allows Windelband to think of the conflict
between the different views of the thing-in-itself that had emerged
within the neo-Kantian tradition as reflective of inherent tensions in
the *Critique*. He argues that Kant was especially unclear with
respect to psychologism: the distinction between judgment (as a
cognitive process) and justification (as logical and normative) is
inadequately articulated in the first *Critique*. Hence Kant is
at least partly responsible for the fact that in the first wave of
neo-Kantianism represented by Lange and Helmholtz "the new
concept of aprioricity was soon dragged down to the old idea of
psychological priority" (1883: 101).
But Windelband's approach to Kant is not merely historical. His
sympathies clearly lie with the Kant of the third phase. For
Windelband, the insight that natural psychological processes are
"utterly irrelevant" for the truth value of our
representations (1882b: 24), and the idea that the ultimate problem of
philosophy is that of normativity and justification--not a
*quaestio facti*, but a *quaestio juris* (1882b:
26)--need to be defended from Kant's own unclarities on
these matters. Windelband's "philosophy of values"
can thus be understood as an attempt to purify, explicate and develop
the radical insights of the anti-psychologistic move in Kant's
"third phase".
The cornerstone of this endeavor is the immanent conception of truth.
Windelband repeatedly returns to discussing the correspondence theory
of knowledge with its metaphor of a "mirror relation"
between mind and object. He criticizes the misconception that our
sensual perceptions are the things-in-themselves and that these things
could be compared with our representations. Any comparison must occur
between representations, since things and representations are
"incommensurable" (1881: 130). According to Windelband,
Kant's central innovation consists in the insight that the truth
of our judgments, and the relation of our representations to an object
are not to be found in correspondence at all. Rather, it consists in
the "rules" for combining representations (1881: 134).
This leads Windelband to define truth as the "normality of
thinking" (1881: 138)--with "normality" meaning
that thinking proceeds in accordance with rules or norms. Windelband
also conceptualizes the object of knowledge in terms of the rules of
judgment. The object of knowledge is nothing other than
> a rule according to which representational elements ought to organize
> themselves, in order for them to be recognized as universally valid in
> this organization. (1881: 135).
Windelband uses the terms "axioms" and
"values" interchangeably to refer to the most fundamental
rules, and he uses the term "norms" often, but not fully
consistently, to refer to values or axioms as they relate to
psychological experience and the cultural-historical world. Focusing
his interpretation on "values" and "norms",
Windelband captures the structural similarity of Kant's three
*Critiques* in terms of immanent truth: if truth is nothing
other than accordance with a rule, then there is moral and aesthetic
truth in just the same way as there is epistemic truth (1881: 140). In
a later text, Windelband describes the unified project of the three
*Critiques* in terms of the necessary and universal relation
between thought and object. He writes that the postulates of practical
reason relate to intelligible objects (ideas) just "as
necessarily" as the intuitions and categories relate to the
object of experience, and that teleological judgment constructs the
purposive whole of nature just "as universally" as the
principles of pure understanding apply the categories to experience
(1904a: 151). While this formulation glosses over some of the nuances
of the distinction between constitutive and regulative uses of reason,
Windelband's main intention is not that of giving a detailed and
fully accurate reconstruction of Kant's philosophy. In line with
his famous dictum that "understanding Kant means to go beyond
him" (1915: iv), he instead seeks to revive the critical project
in a manner that allows it to answer the needs of his own time. And
Windelband thinks that the critical philosophy required at the time of
his writing is a "philosophy of values" which reveals the
most fundamental values in epistemology, ethics, and aesthetics.
## 3. The Factual, the Normative, and the Method of Philosophy
As indicated above, Windelband bases his "philosophy of
values" on Lotze's distinction between the factual and the
normative. Throughout his career, he seeks to explicate and clarify
this distinction and to illuminate its consequences for philosophical
method.
In "Was ist Philosophie?" ["What is
philosophy?"] (1882), Windelband approaches the factual-normative
distinction by identifying two basic and irreducible types of
cognitive operations: judgments and evaluations. While judgments
relate representations in a synthesis, and thus expand our knowledge
about an object, evaluations presuppose an object as given. They do
not expand our knowledge. Rather, they express a relation between the
"evaluating consciousness" and the represented object in a
"feeling" of approval or disapproval (1882b:
29-30).
Despite characterizing evaluations in terms of the feelings and
subjective attitudes of the evaluating consciousness, Windelband
argues that some evaluations are "absolutely valid". Even
if they are not embraced by everyone as a matter of fact, they entail a
normative demand: they *ought* to be accepted universally
according to an absolute value (1882b: 37). The basic idea seems to be
that the normative force of any particular evaluation that is carried
out by an empirical consciousness is derived from its relation to a
non-empirical, absolute value. Windelband argues that even if there is
disagreement about which evaluations ought to be embraced as universal
and necessary, the demand for absolute validity itself can be
recognized by everyone: we all believe in the distinction between that
which is absolutely valid and that which is not (1882b: 43).
Accordingly, there must be a system of absolute values from which the
validity of judgments in epistemology, aesthetics, and ethics derives.
Critical philosophy, then, is nothing other than the "science of
necessary and universal values" (1882b: 26) that explicates this
system and thus reveals the grounds of normative appraisal and valid
judgment. Note that for Windelband, normativity and validity are
closely linked, if not identical. Absolute values endow our judgments
with a normative demand. Our judgments are valid if and only if they
raise a normative demand, that is if they *ought* to be
accepted universally and necessarily.
Having distinguished the factual and the normative by reference to
different cognitive operations, Windelband also seeks to identify the
points of contact between the two realms. Ultimately, he wants to
explain how empirical beings can recognize absolute values. His idea
that there is a system of absolute values is thus accompanied by the
conception of a "normal consciousness". In normal
consciousness the absolute system of values is, at least partially,
represented in the form of norms that are known by "empirical
consciousness" and that have an effect on it.
Starting from the immanent conception of truth as accordance with a
rule, Windelband singles out, from among the infinite possible
combinations of representations that empirical consciousness might or
might not form, a particular subset: the combinations that accord
with universal and necessary rules, and which hence ought to be
formed (1881: 135-139; 1882a: 72-73). This subset is
what he calls "normal consciousness" (1881: 139). Thought
is related to and valid for an object if of the infinite possible
combinations of our representations, our thinking forms exactly those
judgments that "ought to be thought" (1881: 135). One
might say that empirical consciousness contains (parts of) the system
of absolute values as its "normal consciousness". To the
extent that philosophy seeks not only to reveal the absolute system of
values, but also to inquire into how they can be norms for
empirical--embodied, psychological and historically
situated--human beings, critical philosophy is not only a
"science of values", but also a "science of normal
consciousness" (1882b: 46).
Although Windelband refers to the "philosophy of values"
as a science, he emphasizes that philosophy does not rely on the
methods of the empirical sciences. Philosophy is a second-order
science that reflects on the methods and results of the various
empirical disciplines in order to reveal the values "by virtue
of which we can evaluate the form and extent of their validity"
(1907: 9). Crucially, this reflective endeavor cannot be carried out
by means of empirical investigation. Here, the distinction between the
factual and the normative assumes a methodological dimension.
In "Kritische oder genetische Methode?" ["Critical or
genetic method?"] (1883) Windelband lays out in great detail the
differences between the "explanatory" and
"genetic" method of the empirical sciences, on the one
hand, and the "teleological" or "critical"
method of philosophy, on the other. And he warns of the devastating
consequences that result if the two are conflated. He singles out two
disciplines that might be thought to be relevant to philosophical
questions about values: individual psychology and cultural history. He
does not call into question the legitimacy of these disciplines or of
the "genetic method" in general. He even thinks that the
genetic method can be applied to values. That is, individual
psychology and cultural history can yield valid genetic theories that
explain the actual acceptance and development of values in an
individual's mental life and in cultural history. But actual
acceptance is not the same as normative validity, and validity proper
cannot be found by generalizing from the empirical. A firm boundary
separates the genetic method and its approach toward actually accepted
values from the critical method of philosophy which concerns values as
normative.
Windelband's argument against the application of the genetic
method to philosophical questions has three components. First, he
argues that the genetic method cannot solve philosophical questions
about normativity and validity, because there is too much variety
regarding the values that have been and are actually accepted. The
empirical method will not uncover values that are universally embraced
by all cultures (1883: 114-115).
Second, the genetic method can show and explain why some values have
been accepted by this or that individual, or in this or that culture.
But insofar as it is an empirical method, it cannot establish that the
values in question are universal and necessary.
> The universally valid can be found neither by inductive comparison of
> all individuals and peoples nor by deductive inference from ...
> the 'essence' of man. (1883: 115)
Windelband points to the absurdity of trying to justify by empirical
means that which is the presupposition of any empirical theory: the
axioms upon which the validity of any theory is based (1883: 113).
Third, Windelband argues that the genetic method leads to relativism.
The argument rests on the two claims just outlined, and can be
reconstructed as follows. There is variation between individuals and
cultures regarding which beliefs are actually prevalent, and the
naturally necessary [*naturnotwendig*] laws of psychology lead
to the formation of both true and false beliefs. As an empirical
method with no access to the universal and necessary, the genetic
method of cultural history and individual psychology has no criterion
for distinguishing between valid and invalid beliefs. This means that
it has to treat all beliefs as "equally justified [*alle
gleich berechtigt*]" (1882b: 36).
> For [the genetic explanation], there is thus no absolute measure; it
> must treat all beliefs as equally justified because they are all
> equally necessary by nature... [R]elativism is the necessary
> consequence of the purely empiricist treatment of philosophy's
> cardinal question. (1883: 115-116)
Note that Windelband does not differentiate between the idea that all
beliefs have only relative validity and the claim that they are all
equally justified or equally valid. In his view, the genetic method
does not merely render belief relative to individuals and cultures; it
also forces us to conclude that all beliefs are equally valid. An
empirical psychology or cultural history that oversteps its boundaries
and tries to address philosophical questions about normativity and
validity leads to "historicism",
"psychologism", and "relativism" and destroys
the basis of normative appraisal altogether.
Having rejected the genetic method in philosophy, Windelband explains
that philosophical method is purely "formal". The axioms,
or values, on which the validity of our judgments is based, cannot
be proven. But it can be shown that the purposes of recognizing truth,
goodness, and beauty can only be achieved if absolute values are
presupposed.
Hence the critical method of philosophy has a teleological
structure:
> [F]or the critical method these axioms, regardless of the extent to
> which they are actually accepted, are norms which ought to be valid if
> thinking wants to fulfil the purpose of being true, volition the
> purpose of being good, and feeling the purpose of capturing beauty, in
> a manner that warrants universal validation. (1883: 109)
And yet, while Windelband insists on the distinction between the
empirical-genetic method of science and the critical-teleological
method of philosophy, he takes his theory of "normal
consciousness" to imply that empirical facts do play a role in
philosophy. In particular, he wants to maintain that empirical facts
about individual psychology and culture can provide the starting
points for philosophical reflection. He therefore describes the
philosophical method as a method of reflection
[*Selbstbesinnung*], in which the empirical mind becomes aware
of its own "normal consciousness". Philosophy examines
existing claims to validity in light of teleological considerations,
and in this way reveals the "processual forms of psychic life
that are necessary conditions for the realization of universal
appraisal" (1883: 125). Put differently, teleological
considerations allow the empirical consciousness to distinguish within
itself between empirical and contingent contents, on the one hand, and
the "contents and forms" that "have the value of
normal consciousness", on the other (1882b: 45-46; 1881:
139).
## 4. The Problem of Freedom
The distinction between the normative and the factual, and the question of
how the two realms are related, also structure Windelband's
reflections on human freedom. Throughout his intellectual career,
Windelband returns to this problem, presenting different strategies
for reconciling causal determinism and human freedom. His
dissertation, completed under Lotze in 1870, deals with the concepts
of chance, causal necessity, and freedom. At that time, Windelband
still embraces the Kantian concept of transcendental freedom,
according to which the noumenal self is the uncaused cause of all
intentional action (1870: 16-19). But after the 1877 essay,
which had uncovered the anti-metaphysical rejection of the
thing-in-itself in Kant's "third phase" as the
radical starting point for the "philosophy of values",
Kant's metaphysical solution to the problem of freedom ceased to
convince him.
Windelband's most fully developed effort to arrive at an
alternative solution to the problem of freedom can be found in his
1882 essay "Normen und Naturgesetze" ["Norms and
natural laws"] and in his 1904 lectures *Über
Willensfreiheit* [*On freedom of the will*]. In these texts,
Windelband articulates the following core claims. First, the Kantian
dualism between phenomena and noumena needs to be overcome, and we
need to think of moral responsibility in a way that does not
presuppose a noumenal realm. Second, causal explanation and normative
evaluation are two irreducible, but ultimately compatible, ways of
viewing, or constructing, the world of appearances. Third, the object
of moral evaluation is neither a particular moral action, nor a
transcendentally free will, but a
"personality"--understood as a set of relatively
stable motivations and psychological dispositions--that is the
natural cause of our actions. Although articulating these same core
thoughts, the two texts differ in how they motivate these ideas, and
in the consequences that they draw from them.
"Normen und Naturgesetze" begins by postulating a conflict
between natural law and moral law: if the moral law demands an action
that would also result from natural causes alone, it is superfluous.
But if it demands an action that does not accord with natural causes,
it is useless, because natural necessity cannot be violated (1882a:
59).
Windelband holds that causal determinism extends to mental life and
that for this reason the conception of freedom as a fundamental
capacity that violates "the naturally necessary functions of
psychic life" (1882a: 60) is implausible from the outset.
However, he grants that there are two different and irreducible ways
of viewing the same objects: on the one hand, there is psychological
science which explains what the facts of mental life are. On the other
hand, there are ideal norms, which do not explain what the facts are,
but express how they ought to be (1882a: 66-67). The solution to
the antagonism between natural law and moral law is to be found in the
relation between these two points of view.
Here, Windelband introduces the claim that although ideal norms differ
from causal laws, they are not incommensurable with them and, in fact,
act on us causally. His argument builds on his conception of
"normal consciousness" as the representation of the system
of absolute values within empirical consciousness. Windelband argues
that a mind that becomes aware of its own "normal
consciousness" is capable of acting on the basis of and in
agreement with the norms that it has discovered within itself:
>
>
> [E]ach norm carries with it a sense that the real process of thinking
> or willing ought to form itself in accordance with it. With immediate
> evidence a form of psychological coercion attaches itself to the
> awareness of the norm. (1882a: 85)
>
>
>
The norm becomes a determining factor in and for empirical
consciousness. It acts as "part of the causal law" and
determines psychological life with natural necessity (1882a: 87; see
also 1883: 122).
The result is what Windelband calls a "deterministic concept of
freedom" (1882a: 88), according to which freedom consists in
nothing other than the becoming-aware of the norms that command how we
ought to act: our becoming-aware of them determines our actions with
natural necessity. Freedom is the "determination of empirical
consciousness by normal consciousness" (1882a: 88).
But Windelband still needs to explain how it is possible that we can
be aware of a moral norm and not act on it. Long passages in the 1904
lectures are devoted to developing the thought that what determines
our moral decisions and actions, and hence whether and when we
act in accordance with the moral law, is our "personality".
Windelband approaches this as a theoretical, not as a normative,
question and concludes that we may well call those decisions and
actions "free" that are predominantly determined by our
constant personality, as opposed to being determined by external
circumstances or contingent affects. Freedom is "the unhindered
causality of a pre-existing willing" (1904b: 106).
However, Windelband concedes that this analysis does not exhaust our
concept of freedom since there is not merely a theoretical, but also a
normative use of the concept. In this context, Windelband acknowledges
the attraction of the Kantian argument that moral responsibility is
possible only if we have transcendental free will and with it the
capacity for genuine alternatives: we could act differently given the
same circumstances. But Windelband rejects the project of grounding
human freedom in a noumenal world. He thinks that Kant's
distinction between an intelligible noumenal self that is the uncaused
cause of our actions, and a deterministic empirical world as
constructed by our understanding reproduces the same problems that
earlier metaphysical accounts of freedom had encountered. He discusses
two problems in particular.
First, on the one hand, the personality that a particular individual
has developed is part of the empirical world and therefore, on the
metaphysical picture, does not figure in the free decisions of the
individual. But, on the other hand, the noumenal self is empty. It is
an abstract, general self, uniform in all of us; and thus it cannot
account for the differences in the moral life of individuals (1904b:
161-163).
Second, transcendental freedom is fundamentally incompatible with the
"all-encompassing reality and causality of the deity"
(1904b: 187). God is the ultimate uncaused cause, and the only way to
understand this thought is by assuming a "timeless
causality" between God and the intelligible characters (noumenal
selves)--a view which ends up undermining the freedom of the
latter (1904b: 186-189).
Having concluded that Kantian dualism fails to avoid the pitfalls of
earlier metaphysical approaches to freedom, Windelband abandons the
concept of transcendental freedom altogether. But while in 1882 his
alternative had been the "deterministic concept of
freedom", in 1904 Windelband proceeds to articulate the view
that the concept of free will is a mere placeholder in our normative
discourse. We are not free in the metaphysical sense, but we are
perfectly entitled to pass moral judgment. And we use the language of
freedom to express the fact that when passing moral judgment we
disregard questions of causal determination.
To spell out this idea, Windelband takes up the 1882 distinction
between the "points of view" of explanatory science and
ideal norms. He now argues that there are two ways of constructing the
world of appearances: we construct the world of appearances according
to causal laws, and we construct it according to our normative
evaluations. Evaluation
>
>
> reflects within the manifoldness of the given on those moments only
> which can be put in relation to the norm... [O]ne could call the
> manner in which the objects of experience, the given manifoldness of
> the factual, appear uniformly in light of such an evaluation another
> form of "appearance".... (1904b: 195-6)
>
>
>
Freedom, then, does not mean that the will is an uncaused cause. When
speaking of freedom, we appeal to the uncaused merely in the sense
that we evaluate matters independently of causal deterministic
processes (1904b: 197-198).
Windelband believes that his view preserves moral responsibility. As
described above, his theoretical investigation had yielded the result
that that which determines the extent to which we act in agreement
with a moral norm is our personality or character. Personality is the
constant cause of voluntary action; it determines our actions
necessarily according to general psychological laws. From a practical
standpoint, personality then is also the ultimate object of moral
appraisal. We hold personality responsible, and we are justified in
doing so, even if the formation of personality is itself a causal
process over which the individual has no control. Ultimately, the
upshot of Windelband's discussion is that moral responsibility
does not presuppose a noumenal world and transcendental freedom,
because it does not presuppose that we could act otherwise. It merely
presupposes that another person in the same circumstances could act
otherwise. The idea that one could have acted differently refers
>
>
> not to the concrete human being in these concrete circumstances, but
> to the generic concept of the human being. (1904b: 212)
>
>
>
## 5. The Natural and the Historical Sciences
In Windelband's view, a reconsideration of the Kantian project
is not only necessary because of its inherent tensions; broader
developments in nineteenth-century culture and science also
necessitate an adaptation of the critical method to changed historical
circumstances. One important factor is the professionalization of the
historical disciplines that had been underway since the early
nineteenth century (1907: 12). One of Windelband's central and
best-known philosophical contributions concerns the question of what
distinguishes the "historical sciences", that is, those
disciplines that study the human-historical and cultural world, from
the natural sciences. Windelband's answer to this question is in
line with his formal-teleological conception of philosophy: by
explicating the autonomous presuppositions of historical method,
critical philosophy safeguards the historical sciences against
methodological monism, namely the view that there is only one
scientific method, and that this is the method of physics and natural
science.
Windelband shares the goal of securing the autonomy of the historical
disciplines with his contemporary Wilhelm Dilthey. In his 1883
*Einleitung in die Geisteswissenschaften* [*Introduction to
the human sciences*] Dilthey had founded the distinction between
the natural sciences and the human sciences or "sciences of
spirit" on a distinction between outer and inner experience.
While outer experience forms the basis of hypothetical knowledge in
natural science, inner experience discloses "from within"
how the individual is an intersection of social and cultural relations
(Dilthey 1883: 30-32, 60-61, 88-89). Inner
experience is at the same time psychological and socio-historical, and
a descriptive psychology capable of grasping the integrated nexus of
inner experience can provide the "sciences of spirit" with
a solid foundation (Dilthey 1894).
Windelband profoundly disagrees with Dilthey's strategy for
demarcating the historical disciplines. In his Strasbourg
rector's address "Geschichte und Naturwissenschaft"
["History and natural science"] from 1894, he takes issue
with the suggestion that the facts of the "sciences of
spirit" derive from a particular type of experience. He takes
Dilthey to endorse an introspective view of psychological method. To
this he objects that the facts of the historical disciplines do not
derive from inner experience alone, and that inner perception is a
dubious method in the first place. He also classifies psychology with
the natural sciences, rejecting Dilthey's idea that the
human-historical disciplines could be founded on a non-explanatory and
non-hypothetical descriptive psychology. Perhaps most fundamentally,
Windelband rejects the term "sciences of spirit" on the
ground that it suggests that the distinction between different
sciences rests on a material distinction between different objects:
spirit and nature (Windelband 1894: 141-143).
Windelband seeks an alternative method for science-classification that
is purely formal. His reflections take scientific justification in its
most abstract form as their starting point: justification in science
is either inductive or deductive, Windelband argues, and the basic
relation on which all knowledge is based is between the general and
the particular (1883: 102-103).
The distinction between different empirical sciences is then also to
be sought at this level. In particular, Windelband argues that science
might pursue one of two different "knowledge goals" (1894:
143): it "either seeks the general in the form of natural law or
the particular in the historically determined form" (1894: 145).
The former approach is that of the "nomothetic sciences"
which seek to arrive at universal apodictic judgments, treating the
particular and unique as a mere exemplar or special case of the type
or of the generic concept. The "idiographic sciences", in
contrast, aim to arrive at singular assertoric judgments that
represent a unique object in its individual formation (1894: 150).
Windelband emphasizes that the distinction between nomothetic and
idiographic sciences is a purely formal and teleological one. One and
the same object can be approached from both points of view, and which
method is appropriate depends entirely on the goal or purpose of the
investigation. Moreover, most sciences will involve both general and
particular knowledge. The idiographic sciences in particular depend on
general and causal knowledge which has been derived from the
nomothetic sciences (1894: 156-157).
Note, however, that Windelband does not consistently restrict his
analysis to the formal level. He tends to use the terms
"nomothetic" and "natural" sciences, and the
terms "idiographic" and "historical" sciences
interchangeably, which at least suggests a correspondence between
scientific goals, methods, and objects.
Although Windelband does not spell this out in great detail, he also
suggests that values are of integral importance to the idiographic
method. First, he argues that the selection of relevant historical
facts depends on an assessment of what is valuable to us (1894:
153-154). Second, he suggests that the integration of particular
facts into larger wholes is only possible if meaningful, value-laden
relations can be established such that "the particular feature
is a meaningful part of a living intuition of the whole
[*Gesamtanschauung*]" (1894: 154). And third, he claims
that we value the particular, unique, and individual in a way in which
we do not value the general and recurrent, and that the experience of
the individual is indeed at the root of our "feelings of
value" (1894: 155-156).
It is Windelband's student Heinrich Rickert who takes up these
suggestions and develops them into a systematic account of the
"individuating" and "value-relating"
"concept-formation" of the "historical sciences of
culture" (Rickert 1902). Rickert argues that "scientific
concept formation", by which science overcomes the
"extensive and intensive manifold" of reality, depends on
a principle of selection. The principle of selection at work in the
natural sciences is that of "generalization". For this
reason, the natural sciences cannot account for the unique and
unrepeatable character of reality. The historical sciences, in
contrast, form their concepts in a manner that allows them to capture
individual realities (Rickert 1902: 225, 236, 250-251). They do
so on the basis of values: values guide the selection of which
particular historical facts belong to and can be integrated into a
specific historical "individuality" (examples being
"the Renaissance", "the Reformation" or
"the German nation state"). According to Rickert, one can
clearly distinguish between the theoretical value-relation that is the
basis of historical science and practical evaluation. The historian relies
on values but does not evaluate his material (Rickert 1902:
364-365). Building on Windelband's core ideas about
historical method, Rickert arrives at a more refined and systematized
account of how historical science forms its concepts, one that ultimately
leads to an account of what culture as an object of scientific study
amounts to.
In some of his later writings, Windelband will pick up the more
developed thoughts of his student Rickert. For example, he claims that
each science creates its objects according to the manner in which it
forms its concepts (1907: 18), and that the historical sciences rely
on a system of universal values when making selections about what
enters into their concepts (1907: 20).
## 6. The History of Philosophical Problems
A large part of Windelband's *oeuvre* consists of
writings in the history of philosophy. Windelband primarily covers
modern philosophy from the Renaissance to his own time, but he also
published on ancient philosophy. As a historian, Windelband is heavily
indebted to Fischer. And yet, he goes significantly beyond his
teacher, developing a new method and mode of historical presentation.
Windelband conceives of the history of philosophy as a "history
of problems". Rather than presenting a chronological sequence of
great minds, he organizes the presentation of philosophical ideas
according to the fundamental problems around which the philosophical
debates and arguments of an age were structured. Windelband also takes
a reflective attitude towards his own historiographical practice and
seeks to clarify the goals and systematic relevance of the history of
philosophy.
In these reflections, Windelband seeks to integrate two main thoughts:
First, the idea that philosophy is a reflection of its time and age
and, second, the conviction that the history of philosophy has
systematic significance. Windelband finds both thoughts in
Hegel's approach to the history of philosophy. Mirroring
Hegel's famous dictum that "philosophy is its own time
apprehended in thought", he speaks of philosophy as the
"self-consciousness of developing cultural life" (1909:
4). He also applauds Hegel's "deep insight" that the
history of philosophy realizes reason and thus has intrinsic relevance
for systematic philosophy (1883: 133). Windelband thinks of history as
the "organon of philosophy" (1905: 184), as a guide to the
absolute values that the critical method seeks to reveal.
But despite the Hegelian gloss of these two claims, Windelband is
critical of Hegelianism. This is primarily because he has a different
understanding of the idea that philosophy is "its own time
apprehended in thought". For Windelband, this means that
philosophical thinking is shaped by historically contingent
factors.
Accordingly, Windelband formulates two criticisms of the Hegelian
conception of the history of philosophy. First, he finds fault with
the idea that the order in which successive philosophical ideas emerge
is necessary and that--in virtue of being necessary--it has
systematic significance:
>
>
> [I]n its essentially empirical determination which is accidental with
> respect to the "idea" the historical process of
> development cannot have this systematic significance. (1883: 133, see
> also 1905: 176-177)
>
>
>
The history of philosophy is not only shaped by the necessary
self-expression of reason, but also by the causal necessity of
cultural history. The cultural determinants of philosophical thinking
lead to "problem-convolutions" (1891: 11), in which
various, conceptually unrelated philosophical questions merge with one
another. Windelband also emphasizes that the "individual
factor" of the philosopher's character and personality is
relevant for how philosophical problems and concepts are articulated
in a given historical moment (1891: 12).
Second, although Windelband agrees that the historical development of
philosophical ideas is partly driven by critique and self-improvement,
he does not think of this process in terms of progress. Windelband
probes the often unacknowledged presuppositions of our talk about
progress. In "Pessimismus und Wissenschaft" (1876)
Windelband argues that science cannot decide between historical
optimism and historical pessimism (1876: 243). In "Kritische und
genetische Methode" (1883) he gives a more detailed analysis of
why this is the case. Historical change itself is not progress, he
observes. In order to determine whether a given historical development
is progressive, we need to be in possession of "a standard, the
idea of a purpose, which determines the value of the change"
(1883: 119). Windelband is wary of triumphalist narratives that
identify the historical development of present-day values with
progress. History is determined by contingent cultural factors and
could have produced "delusions and follies ... which we
only take to be truths now because we are inescapably trapped in
them" (1883: 121). We can speak legitimately of progress only
if we are in possession of an absolute value that allows us to assess
the historical development. At minimum, this means that the appeal to
a progressive history of philosophy is of no help when it comes to the
systematic task of uncovering the system of absolute values: progress
cannot aid in revealing absolute values as it presupposes them.
But while the fact that philosophy is at least partly determined by
contingent cultural factors undercuts necessity and progress, it also
opens the door for a reconceptualization of the "essence"
of philosophy. In his *Einleitung in die Philosophie*
[*Introduction to philosophy*] (1914) Windelband reflects on
the fact that everyday life, culture, and science do already contain
general concepts of the world. These concepts form the initial content
of philosophical reflection (1914: 6). But the business of philosophy
only takes off when these initial concepts, and the assumptions that
are baked into them, become unstable and collapse. An experience of
shock and unsettling prompts philosophy to question, rethink, and
critically assess the concepts and ideas of everyday life and science.
In this process of critical assessment, philosophy strives to purify
these concepts and ideas, and to connect them into a coherent, unified
system. According to Windelband, in this process of conceptual
reorganization, a rational necessity exerts itself. The
"vigorous and uncompromising rethinking of the preconditions of
our spiritual life" creates certain philosophical problems
"with objective necessity" (1914: 8).
Windelband provides neither a very detailed account of what
"philosophical problems" exactly are, nor of how precisely
they spring from the unsettling of everyday concepts. But he makes
three claims that, taken together, establish that philosophy, even
when reflecting its particular age, is not solely determined by
contingent cultural factors. First, philosophical puzzles stem from
the "inadequacy and contradictory imbalance" of the
contents that philosophy receives from life and science (1891: 10).
That is, true philosophical problems emerge whenever the systematizing
drive of philosophy is confronted with the deep incoherence of life.
Second, philosophical problems are necessary because the conceptual
"material" found in life already contains "the
objective presuppositions and logical coercions for all rational
deliberation about it" (1891: 10). The necessity of
philosophical problems is logical necessity. Third, despite historical
change
>
>
> [c]ertain differences of world- and life-attitudes reoccur over and
> over again, combat each other and destroy each other in mutual
> dialectics. (1914: 10)
>
>
>
Because philosophical problems emerge "necessarily" and
"objectively", they also reoccur throughout history.
Philosophical problems are thus timeless and eternal (1891:
9-10; 1914: 11).
Windelband also believes that the history of philosophy, pursued
empirically and scientifically, can disentangle from one another the
"temporal causes and timeless reasons" (1905: 189) that
together give rise to the emergence of philosophical problems.
>
>
> Only through knowledge of the empirical trajectory that is free of
> constructions can come to light ... what is the share of
> ... the needs of the age on the one hand ... but on the other
> hand that of the objective necessities of conceptual progress. (1905:
> 189)
>
>
>
At this point, Windelband reaches a clear verdict on the goal and
systematic significance of the history of philosophy: it lies in
disentangling that which is contingently actually accepted from that
which is "valid in itself" (1905: 199). History is the
"organon of critical philosophy" precisely to the extent
that it allows us to distinguish the actually accepted norms of
cultural life from that which is absolutely valid (1883: 132).
## 7. Critical Philosophy and World-Views
Windelband's "philosophy of values" is also an
intervention into debates that had preoccupied the neo-Kantian
movement and German academic philosophy more broadly since the
materialism controversy [*Materialismusstreit*]: what the
relation between science and world-views is, and whether philosophy is
capable of providing a "world-view".
Given that he takes the core of the critical project to consist in a
*quaestio juris*--a concern with normativity--it
seems surprising that throughout the 1880s Windelband insists that
critical philosophy does not provide a "world-view": the
project of revealing the highest values of human life has nothing to
do with "world-views", he argues, because it does not
provide a metaphysical account of the world and our place in it (1881:
140-141, 145). Questions about optimism or pessimism cannot be
answered by means of "scientific" philosophy, since they
depend on the idea that the world as a whole has a purpose. And claims
about the ultimate purpose of the world arise only from an
unscientific, subjective, and arbitrary projection of particular
purposes onto the universe as such (1876: 231). Philosophy is a
science, albeit a second-order formal science, and is thus barred from
formulating a world-view that would be metaphysical in character.
After 1900, however, Windelband's position on the question of
world-views changes. He now claims that it is the aim of philosophy to
provide a "world-view" with scientific justification
(1907: 1). Given that Windelband holds that philosophy in general and
the Kantian project in particular need to be adapted to the developing
circumstances of time and culture, this change of position is not
inconsequential: the cultural landscape of the early 1900s demands a
revision of the neo-Kantian project that highlights not only its
"negative" and critical aspects, but also develops its
positive implications for cultural life (1907: 8). The shift to a more
positive attitude towards philosophy as a world-view corresponds to a
re-evaluation of Kant's oeuvre, in which Windelband at least
partly suspends the anti-metaphysical rigor of the "third
phase" of the genesis of the *Critique of Pure Reason*,
and attributes more relevance to the *Critique of the Power of
Judgment*. This view is taken up by Rickert, who later argues in
more detail that the third *Critique* forms
the core of critical philosophy, and that Kant's metaphysical
project can provide the basis for an encompassing theory of
world-views (Rickert 1924: 153-168).
In his 1904 "Nach hundert Jahren" ["After one
hundred years"] Windelband provides yet another take on the
relation between the natural and the normative. Declaring the question
of how the realm of natural laws is related to the realm of values to be
the "highest" philosophical endeavor (1904a: 162), he now
sees in the *Critique of the Power of Judgment* the possibility
of solving this problem: here one finds the idea that the purposive
system of nature gives rise to the value-determined process of human
history. The central concept that allows for connecting nature and
history in this way is that of "realization" (1904a:
162-3).
The turn to the concept of "realization" also leads to a
shift in how Windelband approaches the problem of science
classification. In 1894, Windelband had thought of the nomothetic and
idiographic sciences as two fundamental yet disjointed and even
incommensurable approaches to seeking knowledge of the world:
"Law and event persist next to each other as the last,
incommensurable factors of our representation of the world"
(1894: 160). But in 1904, he claims that the concept of realization
provides a common, unified basis for the natural and the historical
sciences (1904a: 163).
Windelband's language assumes a Hegelian tone. Critical
philosophy equipped with the concept of realization is able to grasp
and express the "spiritual value content" of reality
(1904a: 165). History is capable of revealing the universal value
content in the contingent maze of human interests and desires, and in
so doing captures "the progressive realization of rational
values" (1907: 20-21). The human as a rational being is
not determined psychologically or naturally, but humanity is a
historical task: "only as a historical being, as the developing
species, do we have a share in world-reason" (1910a: 283; see
also 1905: 185). Windelband even suggests that history itself has a
goal, namely the realization of a common humanity (1908: 257).
These remarks remain cursory however, and there is room for debate
over how much of Hegel's ontology of history Windelband takes on
board, as well as whether his talk about the progressive realization
of reason in history is incompatible with the formal-teleological
method that he had endorsed in the 1880s and 1890s. His
"Kulturphilosophie und transzendentaler Idealismus"
["Philosophy of culture and transcendental idealism"] of
1910 presents us with a puzzling fusion of Kantian and Hegelian
language. The goal of critical philosophy is that of revealing the
unity of culture, Windelband now claims, and this unity can only be
found
>
>
> by grasping the essence of the function ... which is common for
> all the particular ... cultural activities: and this can be
> nothing else than the self-consciousness of reason which produces its
> objects and in them the realm of validity itself. (1910b: 293).
>
>
>
## 1. Wisdom as Epistemic Humility
Socrates' view of wisdom, as expressed by Plato in *The
Apology* (20e-23c), is sometimes interpreted as an example of a
humility theory of wisdom (see, for example, Ryan 1996 and Whitcomb
2010). In Plato's
*Apology*, Socrates and his friend Chaerephon visit the oracle
at Delphi. As the story goes, Chaerephon asks the oracle whether
anyone is wiser than Socrates. The oracle's answer is that Socrates is
the wisest person. Socrates reports that he is puzzled by this answer
since so many other people in the community are well known for their
extensive knowledge and wisdom, and yet Socrates claims that he lacks
knowledge and wisdom. Socrates does an investigation to get to the
bottom of this puzzle. He interrogates a series of politicians, poets,
and craftsmen. As one would expect, Socrates' investigation reveals
that those who claim to have knowledge either do not really know any
of the things they claim to know, or else know far less than they
proclaim to know. The most knowledgeable of the bunch, the craftsmen,
know about their craft, but they claim to know things far beyond the
scope of their expertise. Socrates, so we are told, neither suffers
the vice of claiming to know things he does not know, nor the vice of
claiming to have wisdom when he does not have wisdom. In this
revelation, we have a potential resolution to the wisdom puzzle
in *The Apology*.
Although the story may initially appear to deliver a clear theory of
wisdom, it is actually quite difficult to capture a textually accurate
and plausible theory here. One interpretation is that Socrates is
wise because he, unlike the others, believes he is not wise, whereas
the poets, politicians, and craftsmen arrogantly and falsely believe
they are wise. This theory, which will be labeled Humility Theory 1
(H1), is simply (see, for example, Lehrer & Smith 1996, 3):
>
> **Humility Theory 1 (H1)**:
>
> *S* is wise iff *S* believes s/he is not
> wise.
This is a tempting and popular interpretation because Socrates
certainly thinks he has shown that the epistemically arrogant poets,
politicians, and craftsmen lack wisdom. Moreover, Socrates claims that
he is not wise, and yet, if we trust the oracle, Socrates is actually
wise.
Upon careful inspection, (H1) is not a reasonable interpretation of
Socrates' view. Although Socrates does not
*boast* of his own wisdom, he does believe the oracle. If he
was convinced that he was not wise, he would have rejected the oracle
and gone about his business because he
would not find any puzzle to unravel. Clearly, he believes, on
some level, that he is wise. The mystery is: what *is* wisdom if he has
it and the others lack it? Socrates nowhere suggests that
he has become unwise after believing the oracle. Thus, (H1) is not an
acceptable interpretation of Socrates' view.
Moreover, (H1) is false. Many people are clear counterexamples to
(H1). Many people who believe they are not wise are correct in their
self-assessment. Thus, the belief that one is not wise is not a
sufficient condition for wisdom. Furthermore, it seems that the belief
that one is not wise is not necessary for wisdom. It seems plausible
to think that a wise person could be wise enough to realize that she
is wise. Too much modesty might get in the way of making good
decisions and sharing what one knows. If one thinks Socrates was a
wise person, and if one accepts that Socrates did, in fact, accept
that he was wise, then Socrates himself is a counterexample to
(H1). The belief that one is wise could be a perfectly well justified
belief for a wise person. Having the belief that one is wise does not,
in itself, eliminate the possibility that the person is wise. Nor does
it guarantee the vice of arrogance. We should hope that a wise person
would have a healthy dose of epistemic self-confidence, appreciate
that she is wise, and share her understanding of reality with the rest
of us who could benefit from her wisdom. Thus, the belief that
one is not wise is not required for wisdom.
(H1) focused on believing one is not wise. Another version of the
humility theory is worth considering. When Socrates demonstrates that
a person is not wise, he does so by showing that the person lacks some
knowledge that he or she claims to possess. Thus, one might think
that Socrates' view could be better captured by focusing on the idea
that wise people believe they lack knowledge (rather than lacking
wisdom). That is, one might consider the following view:
>
> **Humility Theory 2 (H2):**
>
> *S* is wise iff *S* believes *S* does not know
> anything.
Unfortunately, this interpretation is not any better than (H1). It
falls prey to problems similar to those that refuted (H1) both as an
interpretation of Socrates, and as an acceptable account of
wisdom. Moreover, remember that Socrates admits that the craftsmen do
have some knowledge. Socrates might have considered them to be wise if
they had restricted their confidence and claims to knowledge to what
they actually did know about their craft. Their problem was that they
professed to have knowledge beyond their area of expertise. The
problem was not that they claimed to have knowledge.
Before turning to alternative approaches to wisdom, it is worth
mentioning another interpretation of Socrates that fits with the
general spirit of epistemic humility. One might think that what
Socrates is establishing is that his wisdom is found in his
realization that human wisdom is not a particularly valuable kind of
wisdom. Only the gods possess the kind of wisdom that is truly
valuable. This is clearly one of Socrates' insights, but it does not
provide us with an understanding of the nature of wisdom. It tells us
only of its comparative value. Merely understanding this evaluative
insight would not, for reasons similar to those discussed with (H1)
and (H2), make one wise.
Humility theories of wisdom are not promising, but they do, perhaps,
provide us with some important character traits associated with wise
people. Wise people, one might argue, possess epistemic
self-confidence, yet lack epistemic arrogance. Wise people tend to
acknowledge their fallibility, and wise people are reflective,
introspective, and tolerant of uncertainty. Any acceptable theory of
wisdom ought to be compatible with such traits. However, those traits
are not, in and of themselves, definitive of wisdom.
## 2. Wisdom as Epistemic Accuracy
Socrates can be interpreted as providing an epistemic accuracy, rather
than an epistemic humility, theory of wisdom. The poets, politicians,
and craftsmen all believe they have knowledge about topics on which
they are considerably ignorant. Socrates, one might argue, believes he
has knowledge when, and only when, he really does have knowledge.
Perhaps wise people restrict their confidence to propositions for
which they have knowledge or, at least, to propositions for which they
have excellent justification. Perhaps Socrates is better interpreted
as having held an Epistemic Accuracy Theory such as:
>
> **Epistemic Accuracy Theory 1 (EA1)**:
>
> *S* is wise iff for all *p*, (*S* believes
> *S* knows
> *p* iff *S* knows *p*.)
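The quantifier in (EA1) takes wide scope over the biconditional. Using \(B_s\) for "*S* believes that" and \(K_s\) for "*S* knows that" (epistemic-logic shorthand introduced here purely for illustration), the condition can be written:

```latex
\forall p \,\bigl( B_s K_s\, p \leftrightarrow K_s\, p \bigr)
```

Read left to right, the biconditional rules out overclaiming (believing one knows what one does not); read right to left, it rules out failing to recognize what one in fact knows.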
According to (EA1), a wise person is accurate about what she knows and
what she does not know. If she really knows *p*, she believes she knows
*p*. And, if she believes she knows *p*, then she really does know
*p*. (EA1) is consistent with the idea that Socrates accepts that he is
wise and with the idea that Socrates does have some knowledge. (EA1)
is a plausible interpretation of the view Socrates endorses, but it is
not a plausible answer in the search for an understanding of
wisdom. Wise people can make mistakes about what they know. Socrates,
Maimonides, King Solomon, Einstein, Goethe, Gandhi, and every other
candidate for the honor of wisdom have held false beliefs about what
they did and did not know. It is easy to imagine a wise person being
justified in believing she possesses knowledge about some claim, and
also easy to imagine that she could be shown to be mistaken, perhaps
long after her death. If (EA1) is true, then anyone who believes she
has knowledge when she does not thereby fails to be wise. That seems
wrong. It is hard to imagine that anyone at all is, or ever has been,
wise if (EA1) is correct.
We could revise the Epistemic Accuracy Theory to get around this
problem. We might only require that a wise person's belief is
*highly justified* when she believes she has knowledge.
That excuses people with bad epistemic luck.
>
> **Epistemic Accuracy Theory 2 (EA2)**:
>
> *S* is wise iff for all *p*, (*S* believes
> *S* knows
> *p* iff *S*'s belief in *p* is highly
> justified.)
(EA2) gets around the problem with (EA1). The Socratic Method
challenges one to produce reasons for one's view. When
Socrates' interlocutor is left dumbfounded, or reduced to
absurdity, Socrates rests his case. One might argue that through
his questioning, Socrates reveals not that his opponents
lack knowledge because their beliefs are false, but rather that
they are not justified in holding the
views they profess to know. Since the craftsmen, poets, and
politicians questioned by Socrates all fail his interrogation,
they were shown, one might argue, to have claimed to have knowledge
when their beliefs were not even justified.
Many philosophers would hesitate to endorse this interpretation of
what is going on in *The Apology*. They would argue that a
failure to defend one's beliefs from Socrates' relentless questioning
does not show that a person is not justified in believing a
proposition. Many philosophers would argue that having very good
evidence, or forming a belief via a reliable process, would be
sufficient for justification.
Proving, or demonstrating to an interrogator, that one is justified is
another matter, and not necessary for simply being
justified. Socrates, some might argue, shows only that the craftsmen,
poets, and politicians cannot defend themselves from his questions. He
does not show, one might argue, that the poets, politicians, and
craftsmen have unjustified beliefs. Since we gain very little insight
into the details of the conversation in this dialogue, it would be
unfair to dismiss this interpretation on these grounds. Perhaps
Socrates did show, through his intense questioning, that the
craftsmen, poets, and politicians formed and held their beliefs
without adequate evidence or formed and held them through unreliable
belief forming processes. Socrates only reports that they did not
know all that they professed to know. Since we do not get to witness
the actual questioning as we do in Plato's other dialogues, we should
not reject (EA2) as an interpretation of Socrates' view of wisdom
in *The Apology*.
Regardless of whether (EA2) is Socrates' view, there are problems for
(EA2) as an account of what it means to be wise. Even if (EA2) is
exactly what Socrates meant, some philosophers would argue that one
could be justified in believing a proposition, but not realize that
she is justified. If that is a possible situation for a wise person to
be in, then she might be justified, but fail to believe she has
knowledge. Could a wise person be in such a situation, or is it
necessary that a wise person would always recognize the epistemic
value of what he or she
believes?[1]
If this situation is impossible,
then this criticism could be avoided. There is no need to resolve this
issue here because (EA1) and (EA2) fall prey to another, much less
philosophically thorny and controversial problem.
(EA1) and (EA2) suffer from a similar, and very serious,
problem. Imagine a person who has very little knowledge. Suppose
further, that the few things she does know are of little or no importance. She
could be the sort of person that nobody would ever go to for
information or advice. Such a person could be very cautious and believe that
she knows only what she actually knows. Although she would have
accurate beliefs about what she does and does not know, she would not
be wise. This shows that (EA1) is flawed. As for (EA2), imagine that
she believes she knows only what she is actually justified in
believing. She is still not wise. It should be noted, however, that
although accuracy theories do not provide an adequate account of
wisdom, they reveal an important insight. Perhaps a necessary
condition for being wise is that wise people think they have knowledge
only when their beliefs are highly justified. Or, even more simply,
perhaps wise people have epistemically justified, or rational, beliefs.
## 3. Wisdom as Knowledge
An alternative approach to wisdom focuses on the more positive idea
that wise people are very knowledgeable people. There are many views
in the historical and contemporary philosophical literature on wisdom
that have knowledge, as opposed to humility or accuracy, as at least a
necessary condition of wisdom. Aristotle (*Nicomachean
Ethics* VI, ch. 7), Descartes (*Principles of Philosophy*),
Richard Garrett (1996), John Kekes (1983), Keith Lehrer & Nicholas
Smith (1996), Robert Nozick (1989), Plato (*The Republic*),
Sharon Ryan (1996, 1999), Valerie Tiberius (2008), Dennis Whitcomb
(2010) and Linda Zagzebski (1996) for example, have all defended
theories of wisdom that require a wise person to have knowledge of
some sort. All of these views very clearly distinguish knowledge from
expertise on a particular subject. Moreover, all of these views
maintain that wise people know "what is important." The
views differ, for the most part, over what it is important for a wise
person to know, and on whether there is any behavior, action, or way
of living, that is required for wisdom.
Aristotle distinguished between two different kinds of wisdom,
theoretical wisdom and practical wisdom. Theoretical wisdom is,
according to Aristotle, "scientific knowledge, combined with
intuitive reason, of the things that are highest by nature"
(*Nicomachean Ethics*, VI, 1141b). For Aristotle, theoretical
wisdom involves knowledge of necessary, scientific, first principles
and propositions that can be logically deduced from them. Aristotle's
idea that scientific knowledge is knowledge of necessary truths and
their logical consequences is no longer a widely accepted view. Thus,
for the purposes of this discussion, I will consider a theory that
reflects the spirit of Aristotle's view on theoretical wisdom, but
without the controversy about the necessary or contingent nature of
scientific knowledge. Moreover, it will combine scientific knowledge
with other kinds of factual knowledge, including knowledge about
history, philosophy, music, literature, mathematics, etc. Consider the
following, knowledge based, theory of wisdom:
>
> **Wisdom as Extensive Factual Knowledge (WFK)**:
>
> *S* is wise iff *S* has extensive factual knowledge about
> science, history, philosophy, literature, music, etc.
According to (WFK), a wise person is a person who knows a lot about
the universe and our place in it. She would have extensive knowledge
about the standard academic subjects. There are many positive things
to say about (WFK). (WFK) nicely distinguishes between narrow
expertise and knowledge of the mundane, from the important, broad, and
general kind of knowledge possessed by wise people. As Aristotle puts
it, "...we think that some people are wise in general, not
in some particular field or in any other limited
respect..." (*Nicomachean Ethics*, Book 6,
1141a).
The main problem for (WFK) is that some of the most knowledgeable
people are not wise. Although they have an abundance of very important
factual knowledge, they lack the kind of practical know-how that is a
mark of a wise person. Wise people know how to get on in the world in
all kinds of situations and with all kinds of people. Extensive
factual knowledge is not enough to give us what a wise person
knows. As Robert Nozick points out, "Wisdom is not just knowing
fundamental truths, if these are unconnected with the guidance of life
or with a perspective on its meaning" (1989, 269). There is more to
wisdom than intelligence and knowledge of science and philosophy or
any other subject matter. Aristotle is well aware of the limitations
of what he calls theoretical wisdom. However, rather
than making improvements to something like (WFK), Aristotle
distinguishes it as one kind of wisdom. Other philosophers would
instead abandon (WFK), conceding that it provides insufficient
conditions for wisdom, and add what is missing.
Aristotle has a concept of practical wisdom that makes up for what is
missing in theoretical wisdom. In Book VI of the *Nicomachean
Ethics*, he claims, "This is why we say Anaxagoras, Thales,
and men like them have philosophic but not practical wisdom, when we
see them ignorant of what is to their own advantage, and why we say
that they know things that are remarkable, admirable, difficult, and
divine, but useless; viz. because it is not human goods they
seek" (1141a). Knowledge of contingent facts that are useful to
living well is required in Aristotle's practical wisdom. According to
Aristotle, "Now it is thought to be the mark of a man of
practical wisdom to be able to deliberate well about what is good and
expedient for himself, not in some particular respect, e.g. about what
sorts of thing conduce to health or to strength, but about what sorts
of thing conduce to the good life in general" (*Nicomachean
Ethics*, VI, 1140a-1140b). Thus, for Aristotle, practical
wisdom requires knowing, in general, how to live well. Many
philosophers agree with Aristotle on this point. However, many would
not be satisfied with the conclusion that theoretical wisdom is one
kind of wisdom and practical wisdom another. Other philosophers,
including Linda Zagzebski (1996), agree that there are these two types
of wisdom that ought to be distinguished.
Let's proceed, without argument, on the assumption that it is possible
to have a theory of one, general, kind of wisdom. Wisdom, in general,
many philosophers would argue, requires practical knowledge about
living. What Aristotle calls theoretical wisdom, many would contend,
is not wisdom at all. Aristotle's theoretical wisdom is merely
extensive knowledge or deep understanding. Nicholas Maxwell (1984),
in his argument to revolutionize education, argues that we should be
teaching for wisdom, which he sharply distinguishes from standard
academic knowledge. Similar points are raised by Robert Sternberg
(2001) and Andrew Norman (1996). Robert Nozick holds a view very
similar to Aristotle's theory of practical wisdom, but Nozick is
trying to capture the essence of wisdom, period. He is not trying to
define one, alternative, kind of wisdom. Nozick claims, "Wisdom
is what you need to understand in order to live well and cope with the
central problems and avoid the dangers in the predicaments human
beings find themselves in" (1989, 267). And, John Kekes
maintains that, "What a wise man knows, therefore, is how to
construct a pattern that, given the human situation, is likely to lead
to a good life" (1983, 280). More recently, Valerie Tiberius (2008)
has developed a practical view that connects wisdom with well being,
requiring, among other things, that a wise person live the sort of
life that he or she could sincerely endorse upon reflection. Such
practical views of wisdom could be expressed, generally, as
follows.
>
> **Wisdom as Knowing How To Live Well (KLW)**:
>
> *S* is wise iff *S* knows how to live well.
This view captures Aristotle's basic idea of practical wisdom. It also
captures an important aspect of views defended by Nozick, Plato,
Garrett, Kekes, Maxwell, Ryan, and Tiberius. Although giving an
account of what it means to know how to live well may prove as
difficult a topic as providing an account of wisdom, Nozick provides a
very illuminating start.
>
>
> Wisdom is not just one type of knowledge, but diverse.
> What a wise person needs to know and understand constitutes a varied
> list: the most important goals and values of life - the ultimate
> goal, if there is one; what means will reach these goals without too
> great a cost; what kinds of dangers threaten the achieving of these
> goals; how to recognize and avoid or minimize these dangers; what
> different types of human beings are like in their actions and motives
> (as this presents dangers or opportunities); what is not possible or
> feasible to achieve (or avoid); how to tell what is appropriate when;
> knowing when certain goals are sufficiently achieved; what limitations
> are unavoidable and how to accept them; how to improve oneself and
> one's relationships with others or society; knowing what the true and
> unapparent value of various things is; when to take a long-term
> view; knowing the variety and obduracy of facts, institutions, and
> human nature; understanding what one's real motives are; how to cope
> and deal with the major tragedies and dilemmas of life, and with the
> major good things
> too. (1989, 269)
With Nozick's explanation of what one must know in order to live well,
we have an interesting and quite attractive, albeit somewhat rough,
theory of wisdom. As noted above, many philosophers, including
Aristotle and Zagzebski would, however, reject (KLW) as the full story
on wisdom. Aristotle and Zagzebski would obviously reject (KLW) as
the full story because they believe theoretical wisdom is another kind
of wisdom, and are unwilling to accept that there is a conception of
one, general, kind of wisdom. Kekes claims, "The possession of
wisdom shows itself in reliable, sound, reasonable, in a word, good
judgment. In good judgment, a person brings his knowledge to bear on
his actions. To understand wisdom, we have to understand its
connection with knowledge, action, and judgment" (1983, 277). Kekes
adds, "Wisdom ought also to show in the man who has it"
(1983, 281). Many philosophers, therefore, think that wisdom is not
restricted even to knowledge about how to live well; being wise also
includes action. Tiberius, for example, thinks the wise person's
actions reflect his or her basic values. A person
could satisfy the conditions of any of the principles we have
considered thus far and nevertheless behave in a wildly reckless
manner. Wildly reckless people are, even if very knowledgeable about
life, not wise.
Philosophers who are attracted to the idea that knowing how to live
well is a necessary condition for wisdom might want to simply tack on
a success condition to (KLW) to get around cases in which a person
knows all about living well, yet fails to put this knowledge into
practice. Something along the lines of the following theory would
capture this idea.
>
> **Wisdom as Knowing How To, and Succeeding at, Living
> Well (KLS)**:
>
> *S* is wise iff (i) *S* knows how to live well, and
> (ii) *S* is successful at living well.
>
The idea of the success condition is that one puts one's knowledge
into practice. Or, rather than using the terminology of success, one
might require that a wise person's beliefs and values cohere with
one's actions (Tiberius, 2008). The main idea is that one's actions
are reflective of one's understanding of what it means to live well.
A view along the lines of (KLS) would be embraced by Aristotle and
Zagzebski (for practical wisdom), and by Kekes, Nozick, and
Tiberius. (KLS) would not be universally embraced, however (see Ryan
1999, for further criticisms). One criticism of (KLS) is that one
might think that all the factual knowledge required by (WFK) is
missing from this theory. One might argue that (WFK), the view that a
wise person has extensive factual knowledge, was rejected only because
it did not provide sufficient conditions for wisdom. Many philosophers
would claim that (WFK) does provide a necessary condition for
wisdom. A wise person, such a critic would argue, needs to know how to
live well (as described by Nozick), but she also needs to have some
deep and far-reaching theoretical, or factual, knowledge that may have
very little impact on her daily life, practical decisions, or
well being. In the preface of his *Principles of Philosophy*,
Descartes insisted upon factual knowledge as an important component of
wisdom. Descartes wrote, "It is really only God alone who has
Perfect Wisdom, that is to say, who has a complete knowledge of the
truth of all things; but it may be said that men have more wisdom or
less according as they have more or less knowledge of the most
important truths" (*Principles*, 204). Of course, among
those important truths, one might claim, are truths about living well,
as well as knowledge in the basic academic subject areas.
Moreover, one might complain that the insight left standing from
Epistemic Accuracy theories is also missing from (KLS). One might
think that a wise person not only knows a lot, and succeeds at living
well, she also confines her claims to knowledge (or belief that she
has knowledge) to those propositions that she is justified in
believing.
## 4. Hybrid Theory
One way to try to accommodate the various insights from the theories
considered thus far is in the form of a hybrid theory. One such idea
is:
>
> **Hybrid Theory**:
>
> *S* is wise iff
>
> 1. *S* has extensive factual and theoretical knowledge.
> 2. *S* knows how to live well.
> 3. *S* is successful at living well.
> 4. *S* has very few unjustified beliefs.
>
>
>
Although this Hybrid Theory has a lot going for it, there are a number
of important criticisms to consider. Dennis Whitcomb (2010) objects
to all theories of wisdom that include a living well condition, or an
appreciation of living well condition. He gives several interesting
objections against such views. Whitcomb thinks that a person who is
deeply depressed and totally devoid of any ambition for living well
could nevertheless be wise. As long as such a person is deeply
knowledgeable about academic subjects and knows how to live well, that
person would have all they need for wisdom. With respect to a very
knowledgeable and deeply depressed person with no ambition but to stay
in his room, he claims, "If I ran across such a person, I would
take his advice to heart, wish him a return to health, and leave the
continuing search for sages to his less grateful advisees. And I
would think he was wise despite his depression-induced failure to
value or desire the good life. So I think that wisdom does not
require valuing or desiring the good life."
In response to Whitcomb's penetrating criticism, one could argue that
a deeply depressed person who is wise, would still live as well as she
can, and would still value living well, even if she falls far short of
perfection. Such a person would attempt to get help to deal with her
depression. If she really does not care at all, she may be very
knowledgeable, but she is not wise. There is something irrational
about knowing how to live well and refusing to try to do so. Such
irrationality is not compatible with wisdom. A person with this
internal conflict may be extremely clever and shrewd, one to listen to
on many issues, one to trust on many issues, and may even win a Nobel
Prize for her intellectual greatness, but she is not admirable enough,
and rationally consistent enough, to be wise. Wisdom is a virtue and
a way of living, and it requires more than smart ideas and
knowledge.
Aristotle held that "it is evident that it is impossible to be
practically wise without being good" (*Nicomachean
Ethics*, 1144a, 36-37). Most of the philosophers mentioned
thus far would include moral virtue in their understanding of what it
means to live well. However, Whitcomb challenges any theory of wisdom
that requires moral virtue. Whitcomb contends that a deeply evil
person could nevertheless be wise.
Again, it is important to contrast being wise from being clever and
intelligent. If we think of wisdom as the highest, or among the
highest, of human virtues, then it seems incompatible with a deeply
evil personality.
There is, however, a very serious problem with the Hybrid Theory.
Since so much of what was long ago considered knowledge has been
abandoned, or has evolved, a theory that requires truth (through a
knowledge condition) would exclude almost all people who are now long
dead, including Hypatia, Socrates, Confucius, Aristotle, Homer, Lao
Tzu, etc. from the list of the wise. Bad epistemic luck, and having
lived in the past, should not count against being wise. But, since
truth is a necessary condition for knowledge, bad epistemic luck is
sufficient to undermine a claim to knowledge. What matters, as far as
being wise goes, is not that a wise person has knowledge, but that she
has highly justified and rational beliefs about a wide variety of
subjects, including how to live well, science, philosophy,
mathematics, history, geography, art, literature, psychology, and so
on. And the wider the variety of interesting topics, the better.
Another way of developing this same point is to imagine a person with
highly justified beliefs about a wide variety of subjects, but who is
unaware that she is trapped in the Matrix, or some other skeptical
scenario. Such a person could be wise even if she is sorely lacking
knowledge. A theory of wisdom that focuses on having rational or
epistemically justified beliefs, rather than the higher standard of
actually having knowledge, would be more promising. Moreover, such a
theory would incorporate much of what is attractive about epistemic
humility, and epistemic accuracy, theories.
## 5. Wisdom as Rationality
The final theory to be considered here is an attempt to capture all
that is good, while avoiding all the serious problems of the other
theories discussed thus far. Perhaps wisdom is a deep and
comprehensive kind of rationality (Ryan, 2012).
>
> **Deep Rationality Theory** (DRT):
>
> *S* is wise iff
>
> 1. *S* has a wide variety of epistemically justified beliefs
> on a wide variety of valuable academic subjects.
> 2. *S* has a wide variety of justified beliefs on how to live
> rationally (epistemically, morally, and practically).
> 3. *S* is committed to living rationally.
> 4. *S* has very few unjustified beliefs and is sensitive to
> her limitations.
>
>
>
In condition (1), DRT takes account of what is attractive about
some knowledge theories by requiring epistemically justified beliefs
about a wide variety of standard academic subjects. Condition (2)
takes account of what is attractive about theories that require
knowledge about how to live well. For example, having justified
beliefs about how to live in a practically rational way would include
having a well-reasoned strategy for dealing with the practical aspects
of life. Having a rational plan does not require perfect success. It
requires having good reasons behind one's actions, responding
appropriately to, and learning from, one's mistakes, and having a
rational plan for all sorts of situations and problems. Having
justified beliefs about how to live in a morally rational way would
not involve being a moral saint, but would require that one has good
reasons supporting her beliefs about what is morally right and
wrong, and about what one morally ought and ought not do in a wide
variety of circumstances. Having justified beliefs about living in an
emotionally rational way would involve, not dispassion, but having
justified beliefs about what is, and what is not, an emotionally
rational response to a situation. For example, it is appropriate to
feel deeply sad when dealing with the loss of a loved one. But,
ordinarily, feeling deeply sad or extremely angry is not an
appropriate response to spilled milk. A wise person would have
rational beliefs about the emotional needs and behaviors of other
people.
Condition (3) ensures that the wise person live a life that
reflects what she or he is justified in believing is a rational way to
live. In condition (4), DRT respects epistemic humility. Condition
(4) requires that a wise person not believe things without epistemic
justification. The Deep Rationality Theory rules out all of the
unwise poets, politicians, and craftsmen that were ruled out by
Socrates. Wise people do not think they know when they lack
sufficient evidence. Moreover, wise people are not epistemically
arrogant.
The Deep Rationality Theory does not require knowledge or perfection.
But it does require rationality, and it accommodates degrees of
wisdom. It is a promising theory of wisdom.
## 1. Biographical Sketch
Wittgenstein was born on April 26, 1889 in Vienna, Austria, to a
wealthy industrial family, well-situated in intellectual and cultural
Viennese circles. In 1908 he began his studies in aeronautical
engineering at Manchester University where his interest in the
philosophy of pure mathematics led him to Frege. Upon Frege's
advice, in 1911 he went to Cambridge to study with Bertrand Russell.
Russell wrote, upon meeting Wittgenstein: "An unknown German
appeared ... obstinate and perverse, but I think not
stupid" (quoted by Monk 1990: 38f). Within one year, Russell was
committed: "I shall certainly encourage him. Perhaps he will do
great things ... I love him and feel he will solve the problems I
am too old to solve" (quoted by Monk 1990: 41). Russell's
insight was accurate. Wittgenstein was idiosyncratic in his habits and
way of life, yet profoundly acute in his philosophical
sensitivity.
During his years in Cambridge, from 1911 to 1913, Wittgenstein
conducted several conversations on philosophy and the foundations of
logic with Russell, with whom he had an emotional and intense
relationship, as well as with Moore and Keynes. He retreated to
isolation in Norway, for months at a time, in order to ponder these
philosophical problems and to work out their solutions. In 1913 he
returned to Austria and in 1914, at the start of World War I
(1914-1918), joined the Austrian army. He was taken captive in
1918 and spent the remaining months of the war at a prison camp. It
was during the war that he wrote the notes and drafts of his first
important work, *Tractatus Logico-Philosophicus*. After the war
the book was published in German and translated into English.
In the 1920s Wittgenstein, now divorced from philosophy (having, to
his mind, solved all philosophical problems in the
*Tractatus*), gave away his part of his family's fortune
and pursued several 'professions' (gardener, teacher,
architect, etc.) in and around Vienna. It was only in 1929 that he
returned to Cambridge to resume his philosophical vocation, after
having been exposed to discussions on the philosophy of mathematics
and science with members of the Vienna Circle, whose conception of
logical empiricism was indebted to his *Tractatus* account of
logic as tautologous, and his philosophy as concerned with logical
syntax. During these first years in Cambridge his conception of
philosophy and its problems underwent dramatic changes that are
recorded in several volumes of conversations, lecture notes, and
letters (e.g., *Ludwig Wittgenstein and the Vienna Circle*,
*The Blue and Brown Books*, *Philosophical Grammar,
Philosophical Remarks*). Sometimes termed the 'middle
Wittgenstein,' this period heralds a rejection of dogmatic
philosophy, including both traditional works and the
*Tractatus* itself.
In the 1930s and 1940s Wittgenstein conducted seminars at Cambridge,
developing most of the ideas that he intended to publish in his second
book, *Philosophical Investigations*. These included the turn
from formal logic to ordinary language, novel reflections on
psychology and mathematics, and a general skepticism concerning
philosophy's pretensions. In 1945 he prepared the final
manuscript of the *Philosophical Investigations*, but, at the
last minute, withdrew it from publication (and only authorized its
posthumous publication). For a few more years he continued his
philosophical work, but this is marked by a rich development of,
rather than a turn away from, his second phase. He traveled during
this period to the United States and Ireland, and returned to
Cambridge, where he was diagnosed with cancer. Legend has it that, at
his death in 1951, his last words were "Tell them I've had
a wonderful life" (Monk: 579).
## 2. The Early Wittgenstein
### 2.1 *Tractatus Logico-Philosophicus*
*Tractatus Logico-Philosophicus* was first published in German
in 1921 and then translated--by C.K. Ogden (and F. P.
Ramsey)--and published in English in 1922. It was later
re-translated by D. F. Pears and B. F. McGuinness. Coming out of
Wittgenstein's *Notes on Logic* (1913), "Notes
Dictated to G. E. Moore" (1914), his *Notebooks*, written
in 1914-16, and further correspondence with Russell, Moore, and
Keynes, and showing Schopenhauerian and other cultural influences, it
evolved as a continuation of and reaction to Russell and Frege's
conceptions of logic and language. Russell supplied an introduction to
the book claiming that it "certainly deserves ... to be
considered an important event in the philosophical world." It is
fascinating to note that Wittgenstein thought little of
Russell's introduction, claiming that it was riddled with
misunderstandings. Later interpretations have attempted to unearth the
surprising tensions between the introduction and the rest of the book
(or between Russell's reading of Wittgenstein and
Wittgenstein's own self-assessment)--usually harping on
Russell's appropriation of Wittgenstein for his own agenda.
The *Tractatus*'s structure purports to be representative
of its internal essence. It is constructed around seven basic
propositions, numbered by the natural numbers 1-7, with all
other paragraphs in the text numbered by decimal expansions so that,
e.g., paragraph 1.1 is (supposed to be) a further elaboration on
proposition 1, 1.22 is an elaboration of 1.2, and so on.
The seven basic propositions are:
| | **Ogden translation** | **Pears/McGuinness translation** |
| --- | --- | --- |
| 1. | The world is everything that is the case. | The world is all that is the case. |
| 2. | What is the case, the fact, is the existence of atomic facts. | What is the case--a fact--is the existence of states of affairs. |
| 3. | The logical picture of the facts is the thought. | A logical picture of facts is a thought. |
| 4. | The thought is the significant proposition. | A thought is a proposition with sense. |
| 5. | Propositions are truth-functions of elementary propositions. | A proposition is a truth-function of elementary propositions. |
| | (An elementary proposition is a truth function of itself.) | (An elementary proposition is a truth function of itself.) |
| 6. | The general form of truth-function is \([\bar{p}, \bar{\xi}, N(\bar{\xi})]\). | The general form of a truth-function is \([\bar{p}, \bar{\xi}, N(\bar{\xi})]\). |
| | This is the general form of proposition. | This is the general form of a proposition. |
| 7. | Whereof one cannot speak, thereof one must be silent. | What we cannot speak about we must pass over in silence. |
Clearly, the book addresses the central problems of philosophy which
deal with the world, thought and language, and presents a
'solution' (as Wittgenstein terms it) of these problems
that is grounded in logic and in the nature of representation. The
world is represented by thought, which is a proposition with sense,
since they all--world, thought, and proposition--share the
same logical form. Hence, the thought and the proposition can be
pictures of the facts.
Starting with a seeming metaphysics, Wittgenstein sees the world as
consisting of facts (1), rather than the traditional, atomistic
conception of a world made up of objects. Facts are existent states of
affairs (2) and states of affairs, in turn, are combinations of
objects. "Objects are simple" (*TLP* 2.02) but
objects can fit together in various determinate ways. They may have
various properties and may hold diverse relations to one another.
Objects combine with one another according to their logical, internal
properties. That is to say, an object's internal properties
determine the possibilities of its combination with other objects;
this is its logical form. Thus, states of affairs, being comprised of
objects in combination, are inherently complex. The states of affairs
which do exist could have been otherwise. This means that states of
affairs are either actual (existent) or possible. It is the totality
of states of affairs--actual and possible--that makes up the
whole of reality. The world is precisely those states of affairs which
do exist.
The move to thought, and thereafter to language, is carried out with
the use of Wittgenstein's famous idea that thoughts, and
propositions, are pictures--"the picture is a model of
reality" (*TLP* 2.12). Pictures are made up of elements
that together constitute the picture. Each element represents an
object, and the combination of elements in the picture represents the
combination of objects in a state of affairs. The logical structure of
the picture, whether in thought or in language, is isomorphic with the
logical structure of the state of affairs which it pictures. More
subtle is Wittgenstein's insight that the possibility of this
structure being shared by the picture (the thought, the proposition)
and the state of affairs is the pictorial form. "*That*
is how a picture is attached to reality; it reaches right out to
it" (*TLP* 2.1511). This leads to an understanding of
what the picture can picture; but also what it cannot--its own
pictorial form.
While "the logical picture of the facts is the thought"
(3), in the move to language Wittgenstein continues to investigate the
possibilities of significance for propositions (4). Logical analysis,
in the spirit of Frege and Russell, guides the work, with Wittgenstein
using logical calculus to carry out the construction of his system.
Explaining that "Only the proposition has sense; only in the
context of a proposition has a name meaning" (*TLP* 3.3),
he provides the reader with the two conditions for sensical language.
First, the structure of the proposition must conform to the
constraints of logical form, and second, the elements of the
proposition must have reference (*Bedeutung*). These conditions
have far-reaching implications. The analysis must culminate with a
name being a primitive symbol for a (simple) object. Moreover, logic
itself gives us the structure and limits of what can be said at
all.
"The general form of a proposition is: This is how things
stand" (*TLP* 4.5) and every proposition is either true
or false. This bi-polarity of propositions enables the composition of
more complex propositions from atomic ones by using truth-functional
operators (5). Wittgenstein supplies, in the *Tractatus*, a
vivid presentation of Frege's logic in the form of what has
become known as 'truth-tables.' This provides the means to
go back and analyze all propositions into their atomic parts, since
"every statement about complexes can be analyzed into a
statement about their constituent parts, and into those propositions
which completely describe the complexes" (*TLP* 2.0201).
He delves even deeper by then providing the general form of a
truth-function (6). This form, \([\bar{p}, \bar{\xi}, N(\bar{\xi})]\),
makes use of one formal operation \(N(\bar{\xi})\) and one
propositional variable \(\bar{p}\) to represent Wittgenstein's
claim that any proposition "is the result of successive
applications" of logical operations to elementary
propositions.
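As an illustration (standard in expositions of the *Tractatus*, though the derivations themselves are not Wittgenstein's own formulations), the familiar truth-functional connectives can all be recovered by successive applications of the joint-negation operation \(N\), which negates every proposition in the set it is applied to:

```latex
\[
\begin{aligned}
N(p) &= \neg p\\
N(p, q) &= \neg p \wedge \neg q\\
N(N(p, q)) &= \neg(\neg p \wedge \neg q) = p \vee q\\
N(N(p), N(q)) &= \neg\neg p \wedge \neg\neg q = p \wedge q
\end{aligned}
\]
```

Since negation, disjunction, and conjunction suffice to express any truth-function, this indicates how the single operation \(N\), iterated over elementary propositions, yields the general form of the proposition.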
Having developed this analysis of world-thought-language, and relying
on the one general form of the proposition, Wittgenstein can now
assert that all meaningful propositions are of equal value.
Subsequently, he ends the journey with the admonition concerning what
can (or cannot) and what should (or should not) be said (7), leaving
outside the realm of the sayable propositions of ethics, aesthetics,
and metaphysics.
### 2.2 Sense and Nonsense
In the *Tractatus* Wittgenstein's logical construction of
a philosophical system has a purpose--to find the limits of
world, thought, and language; in other words, to distinguish between
sense and nonsense. "The book will ... draw a limit to
thinking, or rather--not to thinking, but to the expression of
thoughts .... The limit can ... only be drawn in language
and what lies on the other side of the limit will be simply
nonsense" (*TLP* Preface). The conditions for a
proposition's having sense have been explored and seen to rest
on the possibility of representation or picturing. Names must have a
*Bedeutung* (reference/meaning), but they can only do so in the
context of a proposition which is held together by logical form. It
follows that only factual states of affairs which can be pictured can
be represented by meaningful propositions. This means that what can be
said are only propositions of natural science and leaves out of the
realm of sense a daunting number of statements which are made and used
in language.
There are, first, the propositions of logic itself. These do not
represent states of affairs, and the logical constants do not stand
for objects. "My fundamental thought is that the logical
constants do not represent. That the *logic* of the facts
cannot be represented" (*TLP* 4.0312). This is not a
happenstance thought; it is fundamental precisely because the limits
of sense rest on logic. Tautologies and contradictions, the
propositions of logic, are the limits of language and thought, and
thereby the limits of the world. Obviously, then, they do not picture
anything and do not, therefore, have sense. They are, in
Wittgenstein's terms, senseless (*sinnlos*). Propositions
which do have sense are bipolar; they range within the
truth-conditions drawn by the truth-tables. But the propositions of
logic themselves are "not pictures of the reality ... for
the one allows *every* possible state of affairs, the other
*none*" (*TLP* 4.462). Indeed, tautologies (and
contradictions), being senseless, are recognized as true (or false)
"in the symbol alone ... and this fact contains in itself
the whole philosophy of logic" (*TLP* 6.113).
The characteristic of being senseless applies not only to the
propositions of logic but also to mathematics or the pictorial form
itself of the pictures that do represent. These are, like tautologies
and contradictions, literally sense-less, they have no sense.
Beyond, or aside from, senseless propositions Wittgenstein identifies
another group of statements which cannot carry sense: the nonsensical
(*unsinnig*) propositions. Nonsense, as opposed to
senselessness, is encountered when a proposition is even more
radically devoid of meaning, when it transcends the bounds of sense.
Under the label of *unsinnig* can be found various
propositions: "Socrates is identical," but also "1
is a number" and "there are objects." While some
nonsensical propositions are blatantly so, others seem to be
meaningful--and only analysis carried out in accordance with the
picture theory can expose their nonsensicality. Since only what is
"in" the world can be described, anything that is
"outside" is denied meaningfulness, including the notion
of limit and the limit points themselves. Traditional metaphysics, and
the propositions of ethics and aesthetics, which try to capture the
world as a whole, are also excluded, as is the truth in solipsism, the
very notion of a subject, for it is also not "in" the
world but at its limit.
Wittgenstein does not, however, relegate all that is not inside the
bounds of sense to oblivion. He makes a distinction between
*saying* and *showing* which is made to do additional
crucial work. "What can be shown cannot be said," that is,
what cannot be formulated in sayable (sensical) propositions can only
be shown. This applies, for example, to the logical form of the world,
the pictorial form, etc., which show themselves in the form of
(contingent) propositions, in the symbolism, and in logical
propositions. Even the unsayable (metaphysical, ethical, aesthetic)
propositions of philosophy belong in this group--which
Wittgenstein finally describes as "things that cannot be put
into words. They make themselves manifest. They are what is
mystical" (*TLP* 6.522).
### 2.3 The Nature of Philosophy
Accordingly, "the word 'philosophy' must mean
something which stands above or below, but not beside the natural
sciences" (*TLP* 4.111). Not surprisingly, then,
"most of the propositions and questions to be found in
philosophical works are not false but nonsensical" (*TLP*
4.003). Is, then, philosophy doomed to be nonsense
(*unsinnig*), or, at best, senseless (*sinnlos*) when it
does logic, but, in any case, meaningless? What is left for the
philosopher to do, if traditional, or even revolutionary, propositions
of metaphysics, epistemology, aesthetics, and ethics cannot be
formulated in a sensical manner? The reply to these two questions is
found in Wittgenstein's characterization of philosophy:
philosophy is not a theory, or a doctrine, but rather an activity. It
is an activity of clarification (of thoughts), and more so, of
critique (of language). Described by Wittgenstein, it should be the
philosopher's routine activity: to react or respond to the
traditional philosophers' musings by showing them where they go
wrong, using the tools provided by logical analysis. In other words,
by showing them that (some of) their propositions are nonsense.
"All propositions are of equal value" (*TLP*
6.4)--that could also be the fundamental thought of the book. For
it employs a measure of the value of propositions that is done by
logic and the notion of limits. It is here, however, with the
constraints on the value of propositions, that the tension in the
*Tractatus* is most strongly felt. It becomes clear that the
notions used by the *Tractatus*--the logical-philosophical
notions--do not belong to the world and hence cannot be used to
express anything meaningful. Since language, thought, and the world,
are all isomorphic, any attempt to say in logic (i.e., in language)
"this and this there is in the world, that there is not"
is doomed to be a failure, since it would mean that logic has got
outside the limits of the world, i.e. of itself. That is to say, the
*Tractatus* has gone over its own limits, and stands in danger
of being nonsensical.
The "solution" to this tension is found in
Wittgenstein's final remarks, where he uses the metaphor of the
ladder to express the function of the *Tractatus*. It is to be
used in order to climb on it, in order to "see the world
rightly"; but thereafter it must be recognized as nonsense and
be thrown away. Hence: "whereof one cannot speak, thereof one
must be silent" (7).
### 2.4 Interpretative Problems
The *Tractatus* is notorious for its interpretative
difficulties. In the decades that have passed since its publication it
has gone through several waves of general interpretations. Beyond
exegetical and hermeneutical issues that revolve around particular
sections (such as the world/reality distinction, the difference
between representing and presenting, the Frege/Russell connection to
Wittgenstein, or the influence on Wittgenstein by existentialist
philosophy) there are a few fundamental, not unrelated, disagreements
that inform the map of interpretation. These revolve around the
realism of the *Tractatus*, the notion of nonsense and its role
in reading the *Tractatus* itself, and the reading of the
*Tractatus* as an ethical tract.
There are interpretations that see the *Tractatus* as espousing
realism, i.e., as positing the independent existence of objects,
states of affairs, and facts. That this realism is achieved via a
linguistic turn is recognized by all (or most) interpreters, but this
linguistic perspective does no damage to the basic realism that is
seen to start off the *Tractatus* ("The world is all that
is the case") and to run throughout the text ("Objects
form the substance of the world" (*TLP* 2.021)). Such
realism is also taken to be manifested in the essential bi-polarity of
propositions; likewise, a straightforward reading of the picturing
relation posits objects there to be represented by signs. As against
these readings, more linguistically oriented interpretations give
conceptual priority to the symbolism. When "reality is compared
with propositions" (*TLP* 4.05), it is the form of
propositions which determines the shape of reality (and not the other
way round). In any case, the issue of realism (vs. anti-realism) in
the *Tractatus* must address the question of the limits of
language and the more particular question of what there is (or is not)
beyond language. Subsequently, interpreters of the *Tractatus*
have moved on to questioning the very presence of metaphysics within
the book and the status of the propositions of the book
themselves.
'Nonsense' became the hinge of Wittgensteinian
interpretative discussion during the last decade of the 20th century.
Beyond the bounds of language lies nonsense--propositions which
cannot picture *anything*--and Wittgenstein bans
traditional metaphysics to that area. The quandary arises concerning
the question of what it is that inhabits that realm of nonsense, since
Wittgenstein does seem to be saying that there is something there to
be shown (rather than said) and does, indeed, characterize it as the
'mystical.' The traditional readings of the
*Tractatus* accepted, with varying degrees of discomfort, the
existence of that which is unsayable, that which cannot be put into
words, the nonsensical. More recent readings tend to take nonsense
more seriously as exactly that--nonsense. This also entails
taking seriously Wittgenstein's words in 6.54--his famous
ladder metaphor--and throwing out the *Tractatus* itself,
including the distinction between what can be said and what can only
be shown. The *Tractatus*, on this stance, does not point at
ineffable truths (of, e.g., metaphysics, ethics, aesthetics, etc.),
but should lead us away from such temptations. An accompanying
discussion must then also deal with how this can be recognized, what
this can possibly mean, and how it should be used, if at all.
This discussion is closely related to what has come to be called the
ethical reading of the *Tractatus*. Such a reading is based,
first, on the supposed discrepancy between Wittgenstein's
construction of a world-language system, which takes up the bulk of
the *Tractatus*, and several comments that are made about this
construction in the Preface to the book, in its closing remarks, and
in a letter he sent to his publisher, Ludwig von Ficker, before
publication. In these places, all of which can be viewed as external
to the content of the *Tractatus*, Wittgenstein preaches
silence as regards anything that is of importance, including the
'internal' parts of the book which contain, in his own
words, "the final solution of the problems [of
philosophy]." It is the importance given to the ineffable that
can be viewed as an ethical position. "My work consists of two
parts, the one presented here plus all that I have *not*
written. And it is precisely this second part that is the important
point. For the ethical gets its limit drawn from the inside, as it
were, by my book; ... I've managed in my book to put
everything firmly into place by being silent about it .... For
now I would recommend you to read the *preface* and the
*conclusion*, because they contain the most direct expression
of the point" (*ProtoTractatus*, p.16). Obviously, such
seemingly contradictory tensions within and about a text--written
by its author--give rise to interpretative conundrums.
There is another issue often debated by interpreters of Wittgenstein,
which arises out of the questions above. This has to do with the
continuity between the thought of the early and later Wittgenstein.
Again, the 'standard' interpretations were originally
united in perceiving a clear break between the two distinct stages of
Wittgenstein's thought, even when ascertaining some
developmental continuity between them. And again, the more recent
interpretations challenge this standard, emphasizing that the
fundamental therapeutic motivation clearly found in the later
Wittgenstein should also be attributed to the early.
## 3. The Later Wittgenstein
### 3.1 Transition and Critique of *Tractatus*
The idea that philosophy is not a doctrine, and hence should not be
approached dogmatically, is one of the most important insights of the
*Tractatus*. Yet, as early as 1931, Wittgenstein referred to
his own early work as 'dogmatic' ("On
Dogmatism" in *VC*, p. 182). Wittgenstein used this term
to designate any conception which allows for a gap between question
and answer, such that the answer to the question could be found at a
later date. The complex edifice of the *Tractatus* is built on
the assumption that the task of logical analysis was to discover the
elementary propositions, whose form was not yet known. What marks the
transition from early to later Wittgenstein can be summed up as the
*total* rejection of dogmatism, i.e., as the working out of
*all* the consequences of this rejection. The move from the
realm of logic to that of the grammar of ordinary language as the
center of the philosopher's attention; from an emphasis on
definition and analysis to 'family resemblance' and
'language-games'; and from systematic philosophical
writing to an aphoristic style--all have to do with this
transition towards anti-dogmatism in its extreme. It is in the
*Philosophical Investigations* that the working out of the
transitions comes to culmination. Other writings of the same period,
though, manifest the same anti-dogmatic stance, as it is applied,
e.g., to the philosophy of mathematics or to philosophical
psychology.
### 3.2 *Philosophical Investigations*
*Philosophical Investigations* was published posthumously in
1953. It was edited by G. E. M. Anscombe and Rush Rhees and translated
by Anscombe. It comprised two parts. Part I, consisting of 693
numbered paragraphs, was ready for printing in 1946, but withdrawn
from the publisher by Wittgenstein. Part II was added on by the
editors, trustees of his *Nachlass*. In 2009 a new edited
translation, by P. M. S. Hacker and Joachim Schulte, was published;
Part II of the earlier translation, now recognized as an essentially
separate entity, was here labeled "Philosophy of Psychology
- A Fragment" (*PPF*).
In the Preface to *PI*, Wittgenstein states that his new
thoughts would be better understood by contrast with and against the
background of his old thoughts, those in the *Tractatus*; and
indeed, most of Part I of *PI* is essentially questioning. Its
new insights can be understood as primarily exposing fallacies in the
traditional way of thinking about language, truth, thought,
intentionality, and, perhaps mainly, philosophy. In this sense, it is
conceived of as a *therapeutic* work, viewing philosophy itself
as *therapy*. (Part II (*PPF*), focusing on
philosophical psychology, perception etc., was different, pointing to
new perspectives (which, undoubtedly, are not disconnected from the
earlier critique) in addressing specific philosophical issues. It is,
therefore, more easily read alongside Wittgenstein's other
writings of the later period.)
*PI* begins with a quote from Augustine's
*Confessions* which "give us a particular picture of the
essence of human language," based on the idea that "the
words in language name objects," and that "sentences are
combinations of such names" (*PI* 1). This picture of
language cannot be relied on as a basis for metaphysical, epistemic or
linguistic speculation. Despite its plausibility, this reduction of
language to representation cannot do justice to the whole of human
language; and even if it is to be considered a picture of only the
representative function of human language, it is, as such, a poor
picture. Furthermore, this picture of language is at the base of the
whole of traditional philosophy, but, for Wittgenstein, it is to be
shunned in favor of a new way of looking at both language and
philosophy. The *Philosophical Investigations* proceeds to
offer the new way of looking at language, which will yield the view of
philosophy as therapy.
### 3.3 Meaning as Use
"For a *large* class of cases of the employment of the
word 'meaning'--though not for all--this word
can be explained in this way: the meaning of a word is its use in the
language" (*PI* 43). This basic statement is what
underlies the change of perspective most typical of the later phase of
Wittgenstein's thought: a change from a conception of meaning as
representation to a view which looks to use as the crux of the
investigation. Traditional theories of meaning in the history of
philosophy were intent on pointing to something exterior to the
proposition which endows it with sense. This 'something'
could generally be located either in an objective space, or inside the
mind as mental representation. As early as 1933 (*The Blue
Book*) Wittgenstein took pains to challenge these conceptions,
arriving at the insight that "if we had to name anything which
is the life of the sign, we should have to say that it was its
*use*" (*BB* 4). Ascertainment of the use (of a
word, of a proposition), however, is not given to any sort of
constructive theory building, as in the *Tractatus*. Rather,
when investigating meaning, the philosopher must "look and
see" the variety of uses to which the word is put. An analogy
with tools sheds light on the nature of words. When we think of tools
in a toolbox, we do not fail to see their variety; but the
"functions of words are as diverse as the functions of these
objects" (*PI* 11). We are misled by the uniform
appearance of our words into theorizing upon meaning:
"Especially when we are doing philosophy!" (*PI*
12)
So different is this new perspective that Wittgenstein repeats:
"Don't think, but look!" (*PI* 66); and such
looking is done *vis-à-vis* particular cases, not
generalizations. In giving the meaning of a word, any explanatory
generalization should be replaced by a description of use. The
traditional idea that a proposition houses a content and has a
restricted number of Fregean forces (such as assertion, question, and
command), gives way to an emphasis on the diversity of uses. In order
to address the countless multiplicity of uses, their un-fixedness, and
their being part of an activity, Wittgenstein introduces the key
concept of 'language-game.' He never explicitly defines it
since, as opposed to the earlier 'picture,' for instance,
this new concept is made to do work for a more fluid, more
diversified, and more activity-oriented perspective on language.
Hence, indeed, the requirement to define harkens back to an old dogma,
which misses the playful and active character of language.
### 3.4 Language-games and Family Resemblance
Throughout the *Philosophical Investigations*, Wittgenstein
returns, again and again, to the concept of language-games to make
clear his lines of thought concerning language. Primitive
language-games are scrutinized for the insights they afford on this or
that characteristic of language. Thus, the builders'
language-game (*PI* 2), in which a builder and his assistant
use exactly four terms (block, pillar, slab, beam), is utilized to
illustrate that part of the Augustinian picture of language which
might be correct but which is, nevertheless, strictly limited because
it ignores the essential role of action in establishing meaning.
'Regular' language-games, such as the astonishing list
provided in *PI* 23 (which includes, e.g., reporting an event,
speculating about an event, forming and testing a hypothesis, making
up a story, reading it, play-acting, singing catches, guessing
riddles, making a joke, translating, asking, thanking, and so on),
bring out the openness of our possibilities in using language and in
describing it.
Language-games are, first, a part of a broader context termed by
Wittgenstein a form of life (see below). Secondly, the concept of
language-games points at the rule-governed character of language. This
does not entail strict and definite systems of rules for each and
every language-game, but points to the conventional nature of this
sort of human activity. Still, just as we cannot give a final,
essential definition of 'game,' so we cannot find
"what is common to all these activities and what makes them into
language or parts of language" (*PI* 65).
It is here that Wittgenstein's rejection of general
explanations, and definitions based on sufficient and necessary
conditions, is best pronounced. Instead of these symptoms of the
philosopher's "craving for generality," he points to
'family resemblance' as the more suitable analogy for the
means of connecting particular uses of the same word. There is no
reason to look, as we have done traditionally--and
dogmatically--for one, essential core in which the meaning of a
word is located and which is, therefore, common to all uses of that
word. We should, instead, travel with the word's uses through
"a complicated network of similarities overlapping and
criss-crossing" (*PI* 66). Family resemblance also serves
to exhibit the lack of boundaries and the distance from exactness that
characterize different uses of the same concept. Such boundaries and
exactness are the definitive traits of form--be it Platonic form,
Aristotelian form, or the general form of a proposition adumbrated in
the *Tractatus*. It is from such forms that applications of
concepts can be deduced, but this is precisely what Wittgenstein now
eschews in favor of appeal to similarity of a kind with family
resemblance.
### 3.5 Rule-following and Private Language
One of the issues most associated with the later Wittgenstein is that
of rule-following. Rising out of the considerations above, it becomes
another central point of discussion in the question of what it is that
can apply to all the uses of a word. The same dogmatic stance as
before has it that a rule is an abstract entity--transcending all
of its particular applications; knowing the rule involves grasping
that abstract entity and thereby knowing how to use it.
Wittgenstein begins his exposition by introducing an example:
"... we get [a] pupil to continue a series (say '+
2') beyond 1000--and he writes 1000, 1004, 1008,
1012" (*PI* 185). What do we do, and what does it mean,
when the student, upon being corrected, answers "But I did go on
in the same way"? Wittgenstein proceeds (mainly in *PI*
185-243, but also elsewhere) to dismantle the cluster of
attendant questions: How do we learn rules? How do we follow them?
Wherefrom the standards which decide if a rule is followed correctly?
Are they in the mind, along with a mental representation of the rule?
Do we appeal to intuition in their application? Are they socially and
publicly taught and enforced? In typical Wittgensteinian fashion, the
answers are not pursued positively; rather, the very formulation of
the questions as legitimate questions with coherent content is put to
the test. For indeed, it is both the Platonistic and mentalistic
pictures which underlie asking questions of this type, and
Wittgenstein is intent on freeing us from these assumptions. Such
liberation involves elimination of the need to posit any sort of
external or internal authority beyond the actual applications of the
rule.
These considerations lead to *PI* 201, often considered the
climax of the issue: "This was our paradox: no course of action
could be determined by a rule, because every course of action can be
made out to accord with the rule. The answer was: if everything can be
made out to accord with the rule, then it can also be made out to
conflict with it. And so there would be neither accord nor conflict
here." Wittgenstein's formulation of the problem, now at
the point of being a "paradox," has given rise to a wealth
of interpretation and debate since it is clear to all that this is the
crux of the general issue of meaning, and of understanding and using a
language. One of the influential readings of the problem of following
a rule (introduced by Fogelin 1976 and Kripke 1982) has been the
interpretation, according to which Wittgenstein is here voicing a
skeptical paradox and offering a skeptical solution. That is to say,
there are no facts that determine what counts as following a rule, no
real grounds for saying that someone is indeed following a rule, and
Wittgenstein accepts this skeptical challenge (by suggesting other
conditions that might warrant our asserting that someone is following
a rule). This reading has been challenged, in turn, by several
interpretations (such as Baker and Hacker 1984, McGinn 1984, and Cavell
1990), while others have provided additional, fresh perspectives
(e.g., Diamond, "Rules: Looking in the Right Place" in
Phillips and Winch 1989, and several in Miller and Wright 2002).
Directly following the rule-following sections in *PI*, and
therefore easily thought to be the upshot of the discussion, are those
sections called by interpreters "the private-language
argument." Whether it be a veritable argument or not (and
Wittgenstein never labeled it as such), these sections point out that
for an utterance to be meaningful it must be possible in principle to
subject it to public standards and criteria of correctness. For this
reason, a private language, in which "words ... are to
refer to what only the speaker can know--to his immediate private
sensations ..." (*PI* 243), is not a genuine,
meaningful, rule-governed language. The signs in language can only
function when there is a possibility of judging the correctness of
their use, "so the use of [a] word stands in need of a
justification which everybody understands" (*PI*
261).
### 3.6 Grammar and Form of Life
Grammar, usually taken to consist of the rules of correct syntactic
and semantic usage, becomes, in Wittgenstein's hands, the
wider--and more elusive--notion which captures the essence
of language as a special rule-governed activity. This notion replaces
the stricter and purer logic, which played such an essential role in
the *Tractatus* in providing a scaffolding for language and the
world. Indeed, "*Essence* is expressed in grammar
... Grammar tells what kind of object anything is. (Theology as
grammar)" (*PI* 371, 373). As opposed to grammar-book
rules, the "rules" of grammar are not technical
instructions from on-high for correct usage and are not idealized as
an external system to be conformed to, independently of context.
Therefore, they are not appealed to explicitly in any formulation, but
are only used in cases of philosophical perplexity to clarify where
language misleads us into false illusions. Thus, for example, "I
can know what someone else is thinking, not what I am thinking. It is
correct to say 'I know what you are thinking,' and wrong
to say 'I know what I am thinking.' (A whole cloud of
philosophy condensed into a drop of grammar.)"
(*Philosophical Investigations* 1953, p.222). In this example,
being sensitive to the grammatical uniqueness of first-person avowals
saves us from the blunders of foundational epistemology.
Grammar is hence situated within the regular activity with which
language-games are interwoven: "... the word
'language-*game*' is used here to emphasize the
fact that the *speaking* of language is part of an activity, or
of a form of life" (*PI* 23). What enables language to
function and therefore must be accepted as "given" are
precisely forms of life. In Wittgenstein's terms, "It is
not only agreement in definitions but also (odd as it may sound) in
judgments that is required" (*PI* 242), and this is
"agreement not in opinions, but rather in form of life"
(*PI* 241). Used by Wittgenstein sparingly--five times in
the *Investigations*--this concept has given rise to
interpretative quandaries and subsequent contradictory readings. Forms
of life can be understood as constantly changing and contingent,
dependent on culture, context, history, etc.; or as a background
common to humankind, "shared human behavior" which is
"the system of reference by means of which we interpret an
unknown language" (*PI* 206); or as a notion which can be
read differently in different cases--sometimes as relativistic, in
other cases as expressing a more universalistic approach.
### 3.7 The Nature of Philosophy
In his later writings Wittgenstein holds, as he did in the
*Tractatus*, that philosophers do not--or should
not--supply a theory, nor do they provide explanations.
"Philosophy just puts everything before us, and neither explains
nor deduces anything.--Since everything lies open to view there
is nothing to explain" (*PI* 126). The anti-theoretical
stance is reminiscent of the early Wittgenstein, but there are
manifest differences. Although the *Tractatus* precludes
philosophical theories, it does construct a systematic edifice which
results in the general form of the proposition, all the while relying
on strict formal logic; the *Investigations* points out the
therapeutic non-dogmatic nature of philosophy, verily instructing
philosophers in the ways of therapy. "The work of the
philosopher consists in marshalling reminders for a particular
purpose" (*PI* 127). By working with reminders and series of
examples, the philosopher solves different problems. Unlike the
*Tractatus*, which advanced one philosophical method, in the
*Investigations* "there is not a single philosophical
method, though there are indeed methods, different therapies, as it
were" (*PI* 133d). This is directly related to
Wittgenstein's eschewal of *the* logical form or of any
a-priori generalization that can be discovered or made in philosophy.
Trying to advance such general theses is a temptation which lures
philosophers; but the real task of philosophy is both to make us aware
of the temptation and to show us how to overcome it. Consequently
"a philosophical problem has the form: 'I don't know
my way about.'" (*PI* 123), and hence the aim of
philosophy is "to show the fly the way out of the
fly-bottle" (*PI* 309).
The refusal of theory goes hand in hand with Wittgenstein's
objection to having "anything hypothetical in our [philosophical]
considerations" (*PI* 109): "All
*explanation* must disappear, and description alone must take
its place." In *The Blue Book*, Wittgenstein explained
the difficulty encountered by philosophers to work on a minor scale
and to take seriously "the particular case" as originating
from their "craving for generality." This craving is
strongly influenced by philosophers' "preoccupation with
the method of science," a reductive and unifying tendency which
is "the real source of metaphysics" (*BB* 18).
Wittgenstein's determined anti-scientism should not be read as
an opposition to science in itself; it is the insistence that
philosophy and science are to be kept apart--that (contrary to
the prevalent attitude of modern civilization) the scientific
framework is not appropriate everywhere, and certainly not for
philosophical investigations.
The style of the *Investigations* is strikingly different from
that of the *Tractatus*. Instead of strictly numbered sections
which are organized hierarchically in programmatic order, the
*Investigations* fragmentarily voices aphorisms about
language-games, family resemblance, forms of life, "sometimes
jumping, in a sudden change, from one area to another"
(*PI* Preface). This variation in style is of course essential
and is "connected with the very nature of the
investigation" (*PI* Preface). As a matter of fact,
Wittgenstein was acutely aware of the contrast between the two stages
of his thought, suggesting publication of both texts together in order
to make the contrast obvious and clear.
Still, it is precisely via the subject of the nature of philosophy
that the fundamental continuity between these two stages, rather than
the discrepancy between them, is to be found. In both cases philosophy
serves, first, as critique of language. It is through analyzing
language's illusive power that the philosopher can expose the
traps of meaningless philosophical formulations. This means that what
was formerly thought of as a philosophical problem may now dissolve
"and this simply means that the philosophical problems should
*completely* disappear" (*PI* 133). Two
implications of this diagnosis, easily traced back in the
*Tractatus*, are to be recognized. One is the inherent
dialogical character of philosophy, which is a responsive activity:
difficulties and torments are encountered which are then to be
dissipated by philosophical therapy. In the *Tractatus*, this
took the shape of advice: "The correct method in philosophy
would really be the following: to say nothing except what can be said,
i.e. propositions of natural science ... and then whenever
someone else wanted to say something metaphysical, to demonstrate to
him that he had failed to give a meaning to certain signs in his
propositions" (*TLP* 6.53). The second, more
far-reaching, "discovery" in the *Investigations*
"is the one that enables me to break off philosophizing when I
want to" (*PI* 133). This has been taken to revert back
to the ladder metaphor and the injunction to silence in the
*Tractatus*.
### 3.8 Interpretative Problems
Whereas the *Tractatus* has been interpreted in different ways
right from its publication, for about two decades since the
publication of *Philosophical Investigations* in 1953 there
seemed to be wide agreement on the proper way to read the book (along
with the vast material from 1930 onwards in Wittgenstein's
*Nachlass*, gradually released over the years).
Wittgenstein's turn from his early emphasis on the role of
logical analysis in his philosophical method to the later preference
of particular grammatical descriptions was read, quite unanimously, as
marking the two clearly distinct--"early" and
"later"--phases in his thought. The later phase, it
was agreed, consisted mainly of particular descriptions of our
ordinary uses of words--especially those words which tend to
create the illusion that they represent something profound, words
which seem to point at the "essence of language."
"Whereas, in fact, if the words 'language,'
'experience,' 'world' have a use, it must be
as humble a one as that of the words 'table,'
'lamp,' 'door'." (*PI* 97)
With the publication of Stanley Cavell's *Must We Mean What
We Say?* (1969) and especially *The Claim of Reason:
Wittgenstein, Skepticism, Morality, and Tragedy* (1979), it turned
out that such notions as 'description,'
'ordinary,' and 'usage' are not as innocuous
as they had seemed. Cavell's reading of *Philosophical
Investigations* suggested a more radical emphasis on the
particularity of contexts and blocked the way to any general
description of "grammatical rules." For Cavell,
Wittgenstein's idea that meanings are determined *only*
within language games, i.e., *only* in particular cases, is the
key to reading his philosophy. When ignored, we are bound to import an
outside requirement into our investigation and thus, favoring a
distilled and rigid account of our linguistic exchanges, miss (and
misrepresent) their vividness: "As if the meaning were an aura
the word brings along with it and retains in every kind of use"
(*PI* 117).
In 1980, Oxford philosophers G.P. Baker and P.M.S. Hacker launched the
first volume of an analytical commentary on Wittgenstein's
*Investigations*. This gigantic project, spanning four
volumes (the third and fourth written by Hacker
alone), advocated and developed the orthodox reading of
Wittgenstein's later work, according to which a systematic
mapping--an overview, or *Übersicht* (cf.
*PI* 122)--of the use of our words is not only possible
but also necessary, if we wish to dissolve the philosophical problems
arising from the traditional philosophical inattention to ordinary
use, due to a prior metaphysical requirement. For Baker and Hacker, a
key notion for reading Wittgenstein's later philosophy is that
of philosophical 'grammar.' Acute attention to grammatical
rules of use offers a way to elucidate meanings and thus to expose
philosophical blunders.
Gradually, these alternatives in reading *Philosophical
Investigations* came to form two schools of interpretation that
are broadly parallel to the ones we encounter regarding
Wittgenstein's *Tractatus*. As noted above (in section
2.4), there is, first, the question of the continuity between
Wittgenstein's earlier and later writings. When emphasis is
given--as in the orthodox reading--to
Wittgenstein's switch from logic to grammar, the break between
the philosophical stages is easily marked. When, on the contrary, it
is the resistance to generalization (of any kind) that is emphasized,
then the similarity in Wittgenstein's therapeutic motivation
throughout his life is much more clearly seen.
As in the previous stage, the debate here has to do with the question
of ethics and its centrality in reading Wittgenstein's oeuvre.
According to the traditional approach, amplified in Hacker and
Baker's interpretive project, Wittgenstein's main
objective is to resolve or dissolve general philosophical
puzzles--concerning, e.g., the mind, the nature of color,
perception, and rule-following--by clarifying, arranging, and contrasting
different grammatical rules relevant to the notion examined. This is
an intellectual investigation and, apart from its being critical of
traditional philosophy and its reflection in contemporary science, it
has nothing particularly relevant to ethics and should not be read as
an ethical endeavor. Readers taking their cue from Cavell, on the
other hand, believe that the gist of the Wittgensteinian effort *in
toto* is ethical, cultural, or even political. They point at the
*Investigations'* motto ("The trouble about
progress is that it always looks much greater than it really
is"--Nestroy) and Wittgenstein's mention of "the
darkness of this time" in the book's Preface; but more
substantially, they argue that Wittgenstein's insistence on the
ordinary, the practice of every-day living, is ethical through and
through. Our lives are saturated by ethics; we are members of a
historically and culturally conditioned society; and our linguistic
practices cannot but have an important ethical aspect. Ignoring this
aspect, according to these interpreters, entails a rejection of what
is human in our linguistic exchanges.
To be sure, the interpretations' map is much more complicated
than what is briefly sketched here. There are philosophers who object
to the characterization each side in the above debate attaches to the
other. There are those who claim that while there is a genuine dispute
about the right interpretation of the *Tractatus*, the various
interpreters of *Philosophical Investigations* are much closer
to one another than they are ready to acknowledge; and there are
independent readings of Wittgenstein which draw their clues from both
sides and attempt a different path altogether.
## 4. The Middle Wittgenstein
It is noteworthy that the conventional conceptualization of two
Wittgensteins, originally thought to be clearly demarcated between the
early and later Wittgensteins, has been repeatedly re-worked in the
ensuing interpretative project of understanding the
philosopher's writings, thoughts, and ideas, with no show of
subsiding. One of the natural outcomes of such intense investigation
is the awareness that, indeed, understanding Wittgenstein means
recognizing other times and other developments, such as before the
early Wittgenstein (of the *Tractatus*), between the early
Wittgenstein and the later Wittgenstein (of *Philosophical
Investigations*), and following that later Wittgenstein.
The middle Wittgenstein, he of the period between the early
Wittgenstein and the later Wittgenstein, was, early on, identified as
worthy of exposure and more interpretative work, but such a venture
was clearly a function of two mutually impactful grand questions: What
is the relationship between the early and later Wittgenstein (as
adumbrated in 2.4 and 3.8) and, following upon answers to that
question, what is then the more exact time-frame that is worthy of the
independent label "middle"? Thus, there is presently much
more work being done on the middle Wittgenstein as the questions of
the relationship between the early and the later Wittgenstein have
taken center-stage and become a constant in Wittgenstein
scholarship.
How, then, to best demarcate the middle Wittgenstein? The widest
demarcation, running from 1929 (or even earlier) to 1944 (when
Wittgenstein began the final revision of *PI*), might seem too
obvious, though it can persuasively house readings of the
continuity-between-early-and-later-Wittgenstein school.
Such is work by, for instance, Joachim Schulte (1992), who notes a
middle Wittgenstein already in the *TLP* leading all the way to
later Wittgenstein's key treatments of grammar or mathematics.
The opposite extreme, of pinpointing one point in time as expressing a
sole significant transformation, seems too summary. This strategy is
adopted, for example, by the Hintikkas (1986b) regarding the year 1929
or by Kienzler (1997) concerning a transitional break in 1931. Then
there are the various more moderate determinations, which adopt a
time-span of a few years, usually sometime between 1929 and 1935,
explaining their delineations by philosophical interpretations of the
texts and contexts of those times. Such are: O. K. Bouwsma (1961), who
points to the period of the *Blue and Brown Books* (1933-35);
P. M. S. Hacker (1986), who follows a two-year
period--1929-1930--as the middle Wittgenstein "bridge";
Alois Pichler (2004), who recognizes a middle Wittgenstein in
1930-1935 based on Wittgensteinian styles; or David Stern (1991), who
briefly gives the demarcation of "the late 1920s and early
1930s."
If we adopt that last marker of dates, the late 1920s to the early (or
mid-)1930s, we can address, in general, the objects of interpretation
that have provided the basis for a middle Wittgenstein. Although
everything in the *Nachlass* is now digitally available to the
aspiring interpreter, the published versions of
typescripts/manuscripts of talks and notes (sometimes dictated to
students) have been the evidence upon which middle Wittgenstein
interpretation has flourished. These are (in chronological order of
their occurrence, not their publication) "Some Remarks on
Logical Form" (1929), "A Lecture on Ethics" (1929),
*Philosophical Remarks* (1929-1930), *Philosophical
Grammar* (1932-33), *The Big Typescript*
(1932-1933); the *Blue and Brown Books*
(1933-1935), and "Notes for Lectures on 'Private
Experience' and 'Sense Data'"
(1935-36). Just as illuminating, even tantalizing, are the
publications of notes taken by participants in Wittgenstein's
classes in those philosophically eventful years, since it was in those
surroundings that Wittgenstein was working out--live--the
deliberations and moves into what would become the later
Wittgenstein. Supplied by Alice Ambrose, Desmond Lee, G. E. Moore,
Rush Rhees, and Friedrich Waismann (on Wittgenstein's
conversations with members of the Vienna Circle), these are volumes
replete with questions of philosophy, meta-philosophy, method, and
style that provide the Wittgensteinian interpreter with voluminous raw
data in order to characterize a middle Wittgenstein.
The themes that populate the texts above are wide and varied. Some
harken back to Tractarian refrains, more appear to introduce later
Wittgensteinian terms; a number of subjects make transitory
appearances, never to return. Phenomenology, for one, is a constant
Wittgensteinian concern, mostly for interpreters of all Wittgensteins
who bicker over the very ascription of phenomenology to
Wittgenstein's thought, accompanied by their attempt to define
Wittgenstein's use of the term. But it is in the middle
Wittgenstein that phenomenology makes a literal appearance,
specifically in the "Phenomenology is Grammar" chapter of
*The Big Typescript*. The questions then become whether
phenomenology can be ascribed to the early Wittgenstein, whether it
was really only adopted in the later Wittgenstein, and, most relevant
here, whether it can be identified as either vacating the scene
in the middle Wittgenstein or, contrarily, there becoming
terminologically and therefore significantly present. There is also
the option that phenomenology appears exclusively in the middle
Wittgenstein. These last options may then be seen as (at least partly)
definitive of a middle period.
Mathematics, and Wittgenstein's original thoughts on it, can
undoubtedly be seen as another investigative anchor of middle
Wittgenstein. Although it is clear that mathematics was of great
interest to him in *TLP*, his thoughts on it became more
enigmatic precisely in the middle period. Several "isms"
are attributed to his view(s) on mathematics here--
verificationism, formalism, and finitism, along with recognition of a
"calculus conception" of mathematics. (Intuitionism, a
favorite label for interpreters of Wittgenstein's mathematics,
is only identified in the *Remarks on the Foundations of
Mathematics* (1937-1944) and after, i.e., in the later
Wittgenstein.) It appears, however, that middle Wittgenstein's
dealings with mathematics in this period are, as claimed by Juliet
Floyd (2005), constantly changing--and even that, not in any
consistent developmental way but in "piecemeal and uneven"
fashion. Indeed, along with written evidence of Wittgenstein's
influence on and by mathematicians, we can identify the work on
mathematics done particularly by the later Wittgenstein and even he of
the post-*Investigations* period, as a substantial overturning
of his early *and* middle conceptions of mathematics.
While phenomenology and mathematics stand out as almost quirky (and
perhaps suspiciously unclear) subjects making their idiosyncratic
appearance in the middle Wittgenstein, the more evident
Wittgensteinian constructs of this period are, of course, logic,
language, and method. Unsurprisingly, "Some Remarks on Logical
Form" in 1929 heralds the transition from early to later
Wittgenstein, moving from (ideal, formal) logic to (the rules of use
of) grammar. Language is also viewed as the connecting factor between
the early and later Wittgenstein, either by discovering still
"representational" aspects in descriptions of its use or
by deciphering roots of the private language arguments in this
Wittgenstein's musings. Other times, analysis of linguistic
phenomena as language games rather than tools of logic provides what
we have always acknowledged as paradigmatic transitions from early to
later Wittgenstein. These all involve methodology--Wittgenstein
espousing a method of doing philosophy that is different from that of
*TLP*, but still not the plurality of methods proposed in
*PI*. Furthermore, the method proposed is the turn to grammar
as a turning of the back on dogmatism (and/or metaphysics). David Stern calls
the early Wittgenstein's views on logic, language, and method
"logical atomism" and the later Wittgenstein's
"practical holism," tagging the middle Wittgenstein as
"logical holism." Between logical atomism and practical
holism seems to lie a drastic divide, but seeing logical holism as
bridging the divide gives us the perception--and
understanding--of a natural development between them. As spelled out by Mauro
Engelmann (2013), this development can then also encompass all the
above elements in a comprehensive transition gesturing towards an
"anthropological" view.
## 5. After the *Investigations*
It has been submitted that the writings of the period from 1946 until
his death (1951) constitute a distinctive phase of
Wittgenstein's thought. These writings include, in addition to
the second part of the first edition of the *Philosophical
Investigations (PPF)*, texts edited and collected in volumes such
as *Remarks on Colour*, *Remarks on the Philosophy of
Psychology*, *Zettel*, *On Certainty*, and parts of
*Remarks on the Foundations of Mathematics*.
Part II of the *Philosophical Investigations* was, in fact, an
item of contention *ab initio*. In Anscombe and Rhees's
first edition of 1953 (translated by Anscombe), reasons were given for
including it, admittedly as a differentiated part, in the book. In
Hacker and Schulte's (translated and edited) fourth edition of
2009, its position was contrarily explained. It now became, instead of
Part II, an almost independent entity--*Philosophy of
Psychology--A Fragment (PPF)*. It is also emphasized that
it was a "work in progress." Be that as it may, that
installment of a post-*Philosophical Investigations* text
exhibits all the vagaries of Wittgensteinian
interpretation--behooving philosophers to attend to questions of placement (which
Wittgenstein is here represented, middle, later, or other?), affinity
(how do these comments relate to Wittgenstein's later work in
general?), and topic (what is Wittgenstein dealing with here,
psychology, epistemology, or the method of philosophy?).
PPF is the *locus classicus* of a key Wittgensteinian
term--"seeing aspects" (*PPF* xi), where
"two uses of the word 'see'" are elaborated.
The second use, where one "sees" a likeness in two
objects, is the one that has given rise to the question of aspect
perception and the attendant phenomena of aspect-dawning and change of
aspect. "I observe a face, and then suddenly notice its likeness
to another. I *see* that it has not changed; and yet I see it
differently. I call this experience 'noticing an
aspect'" (113). Aspect seeing involves noticing
something about an object--an aspect of the object--that one
hadn't noticed before and thereby seeing it as something
different. Importantly, it also arises as a result of a change of
context of our perceptions. This immensely insightful discovery by
Wittgenstein, and its successive development, has been the source of a
multitude of discussions dealing with questions of objectivity vs.
subjectivity, conception vs. perception, and psychology vs.
epistemology. It also highlights the move from dogmatic, formalistic
universalism to open, humanistic context-laden behavior, aptly
reverberating in the to-and-fro of seeing aspects.
*On Certainty* is generally accepted as a work of epistemology, as
opposed to *PPF*, which is usually recognized as dealing with
psychology (though even that distinction is called into question by
some interpretations). It tackles skeptical doubts and foundational
solutions but is, in typical Wittgensteinian fashion, a work of
therapy which discounts presuppositions common to both. As always
before in the interpretative game, the general view sees Wittgenstein
as landing on the non-skeptical side of the epistemological debate,
choosing instead to peruse "hinge propositions." These are
propositions about which doubt cannot be entertained, but whether this
be due to their being epistemically foundational, naturally certain,
or logically unavailable to doubt is still a matter for philosophical
explanation; as is the question of Wittgenstein's object of
critique--Cartesian or radical skepticism--or, in some
quarters, the very assumption of his unequivocal anti-skepticism. This
is intimately related to another of *On Certainty*'s
themes--the primacy of the deed to the word, or, in
Wittgenstein's *PI* terminology, of form of life to
logos. Such a clearly delineated philosophical phase has garnered the
recognition of a "third Wittgenstein" (initially by
Daniele Moyal-Sharrock).
*Remarks on Colour* is a collection of remarks, composed by
Wittgenstein during the last year and a half of his life. In these
remarks he attempts to clarify the language we use in describing our
experience and impression of colours, sameness of colours, saturated
colours, light and shade, brightness and opaqueness of surfaces,
transparency, etc. These reflections present us with a "geometry
of colours," though Wittgenstein makes it clear that due to
"the indefiniteness in the concept of colour" (III-78) the
"logic of colour concepts" (I-22) should be given
piecemeal. When we consider colours "there is merely an
inability to bring the concepts into some kind of order"
(II-20). Philosophical attention should be given rather to such
particularities as the blending in of white (as opposed to blending in
yellow), "the coloured intermediary between two colours"
(III-49), "the difference between black-red-gold and
black-red-yellow" (III-51), etc. The investigation is therefore
directed not merely to colour phenomena and the language we use to
describe them but also to philosophical methodology, in comparison
with scientific inquiry, and to what is taken to be conceivable and
what is not. It is also unique in reminding us of a hint, by
Wittgenstein, that it was prompted by Johann Wolfgang von
Goethe's *Zur Farbenlehre* (*On the Theory of
Colour*, 1810)!
The general tenor of all the writings of this last period can thence
be viewed as, on the one hand, a move away from the critical (some
would say destructive) positions of the *Investigations* to a
more positive perspective on the same problems that had been facing
him since his early writings; on the other hand, this move does not
necessarily bespeak a break from the later period but might be more
properly viewed as its continuation, in a new light. In other words,
the grand question of interpreting Wittgenstein, i.e., the question of
continuities or breaks, remains at the forefront of understanding
Wittgenstein.
## 1. The Critique of Traditional Aesthetics
Wittgenstein's opening remark is double-barreled: he states that
the field of aesthetics is both very big and entirely misunderstood. By
"very big", I believe he means both that the aesthetic
dimension weaves itself through all of philosophy in the manner
suggested above, and that the reach of the *aesthetic* in human
affairs is very much greater than the far more restricted reach of the
*artistic*; the world is densely packed with manifestations of
the aesthetic sense or aesthetic interest, while the number of works of
art is very much smaller. There is good reason, in his discussions that
follow, to believe that he intends that any comprehensive account of
the aesthetic would acknowledge the former, and not just--as a
good number of philosophical accounts have so restricted
themselves--the latter. By "entirely misunderstood", it
emerges that he means both (1) that aesthetic questions are of a
conceptual type *very* distinct from empirical questions and the
kind of answer, or conceptual satisfaction, we want is very unlike what
we might get from an experiment in empirical psychology, and (2) that
the philosophically traditional method of essentialistic
definition--determining the essence that all members of the class
"works of art" exhibit and by virtue of which they are so
classified--will conceal from our view more than it
reveals.
### 1.1 Properties and Essence
It is also vividly apparent from the outset of these lectures that
Wittgenstein is urging a heightened vigilance to the myriad ways in
which words can, on their grammatical surface, mislead. If, right at
the beginning of the inquiry, we see that we use the word
"beautiful" as an adjective, we may well very shortly find
ourselves asking what is the essence of the *property* beauty
that this particular example exhibits. Imagining a book of philosophy
investigating parts of speech (but very many more parts than we find
in an ordinary grammar book) where we would give very detailed and
nuanced attention to "seeing", "feeling",
along with other verbs of personal experience, and equally lengthy
studies of "all", "any", "some",
numerals, first person pronoun usages, and so forth, he suggests that
such a book would, with sufficient attention paid to the contextual
intricacies of the grammar of each usage, lay bare the confusion into
which language can lead us. Later, in his *Philosophical
Investigations* (Wittgenstein 1958), he will go on to famously
develop the analogy between tools and language as a way of breaking
the hold of the conceptual picture that words work in one way (by
naming things--including the naming of properties, as in the way
we too-quickly think of the problem of beauty above), showing the
diversity of *kind* and of *use* among the various
things we find in the tool box (e.g. hammer, glue, chisel,
matches). If we redirect our attention, away from the *idée
fixe* of the puzzle concerning the common property named by the
word "beauty" or the description "beautiful",
and look to the actual use to which our aesthetic-critical vocabulary
is put, we will see that it is not some intrinsic meaning carried
internally by the linguistic sign (Wittgenstein 1958a) that makes the
word in question function as an aesthetic or critical interjection or
expression of approval. We will, rather, be able to focus our
redirected attention on what actually does make the word in question
function aesthetically, i.e. "on the enormously complicated
situation in which the aesthetic expression has a place", and he
adds that, with such an enlarged vision, we see that "the
expression itself has almost a negligible place" (p. 2). He here
mentions (seeing aesthetic issues as interwoven with the rest of
philosophy) that if he had to identify the main mistake made in
philosophical work of his generation, it would be precisely that of,
when looking at language, focusing on the form of words, and not the
use made of the form of words. He will go on to imply, if not quite to
directly assert, that the parallel holds to the work of art: to see it
within a larger frame of reference, to see it in comparison to other
works of the artist in question and to see it juxtaposed with still
other works from its cultural context, is to see what role it played in
the dialogically unfolding artistic
"language-game"[1]
of its time and place. In using language, he says next in the
lectures, in understanding each other--and in mastering a
language initially--we do not start with a small set of words or
a single word, but rather from specific occasions and activities. Our
aesthetic engagements are occasions and activities of just this kind;
thus aesthetics, as a field of conceptual inquiry, should start not
from a presumption that the central task is to analyze the determinant
properties that are named by aesthetic predicates, but rather with a
full-blooded consideration of the *activities* of aesthetic
life.
### 1.2 Predicates and Rules
But the adjectival form of many--not all--critical
predicates quickly reinforces the "property-with-name"
model, and against this Wittgenstein places examples from musical and
poetical criticism, where we simply call attention to the rightness of
a transition or to the precision or aptness of an image. And it is
here that Wittgenstein reminds us that descriptions such as
"stately", "pompous", or
"melancholy" (where the latter is said of a Schubert
piece) are like giving the work a face (Shiner 1978), or we could
instead (or in further specification of such descriptions) use
gestures.[2]
In cases of re-construing a work (e.g. the meter of a poem) so that
we understand its rhythm and structure anew, we make gestures, facial
expressions, and non-descriptive-predicate based remarks, where
aesthetic adjectives play a diminished role or no role at all. And we
show our approval of a tailor's work not by describing the suit, but
by wearing it. Occasions and activities are fundamental, descriptive
language secondary.
Wittgenstein here turns to the subject of rules, and rule-following,
in aesthetic decision-making. This stems from his reflection on the
word "correct" in aesthetic discourse, and he mentions the
case of being drilled in harmony and counterpoint. Such rule-learning,
he claims, allows what he calls an interpretation of the rules in
particular cases, where an increasingly refined judgment comes from an
increasingly refined mastery of the rules in question. It is of
particular interest that a qualification is entered here, that such
rules in contexts of artistic creativity and aesthetic judgment,
"may be extremely explicit and taught, or not formulated at
all" (Wittgenstein 1966). This itself strongly suggests that,
in this way too, *actions* come first, where these actions may
(to invoke Kant's famous distinction) either explicitly follow from
the rule, or stand in accordance with it but (for Wittgenstein) in
an inexplicit way. The mastery of a practice can be, but need not be,
characterized as a cognitive matter of rule-following. Here we thus
find one of the points of intersection between Wittgenstein's
work in aesthetics and his work in the philosophy of language: the
rule-following considerations (Holtzman and Leich 1981 and McDowell
1998) and the debate concerning non-cognitivism (McDowell 2000) link
directly to this discussion.
Regrettably, Wittgenstein closes this matter prematurely (claiming
that this issue should not come in at this point); the linkage between
rule-following in language and in aesthetics is still to this day too
little investigated. Yet there is a sense in which Wittgenstein
extends the discussion, if only implicitly. In investigating the kinds
of things meant by aesthetic "appreciation", he does say
that "an appreciator is not shown by the interjections he
uses" but rather by his choices, selections, his actions on
specific occasions. To describe what appreciation consists in, he
claims, would prove an impossibility, for "to describe what it
consists in we would have to describe the whole environment"
(Wittgenstein 1966, 7), and he returns to the case of the
tailor--and thus implicitly to rule-following. Such rules, as we
discern them in action ranging on a continuum from the
cognitively-explicit and linguistically encapsulated to the
non-cognitively implicit and only behaviorally manifested, have a
life--have an identity as rules--within, and *only*
within, those larger contexts of engagement, those "whole
environment[s]". And those environments, those contexts, those
language-games, are not reducible to a unitary kind which we then
might analyze for essential properties: "correctness", for
example, plays a central role in some cases of aesthetic appreciation
and understanding, and it is irrelevant in others, e.g. in
garment-cutting versus Beethoven symphonies, or in domestic
architecture versus the Gothic cathedral. And he explicitly draws, if
too briefly, the analogy that emerges here between ethics and
aesthetics: he notes the difference between saying of a person that he
behaves well versus saying that a person made a great or profound
impression. Indeed, "the entire
*game* is different" (Wittgenstein 1966, 8).
### 1.3 Culture and Complexity
The central virtue of these lectures is that Wittgenstein never loses
a sense of the *complexity* of our aesthetic engagements, our
language attending and in cases manifesting those engagements, and the
contextually embedded nature of the aesthetic actions he is working to
elucidate. Nor does he lose a sense of the relation--a relation
necessary to the meaning of the aesthetic language we use--
between the aesthetically-descriptive expressions we employ within
particular contexts and "what we call the culture of the
period" (Wittgenstein 1966, 8). Of those aesthetic words, he
says, "To describe their use or to describe what you mean by a
cultured taste, you have to describe a culture" (Wittgenstein
1966, 8). And again making the relation between his work in the
philosophy of language and his work in the philosophy of art explicit
(if again too briefly), he adds, "What belongs to a language
game is a whole culture" (Wittgenstein 1966, 8). There the link
to the irreducible character of rule-following is made again as well:
"To describe a set of aesthetic rules fully means really to
describe the culture of a period" (Wittgenstein 1966, 8, n. 3).
If our aesthetic engagements and the interrelated uses of our
aesthetic terms are widely divergent and context-sensitive, so are our
aesthetic actions and vocabularies context-sensitive in a larger sense
as well: comparing a cultured taste of *fin-de-siècle* Vienna
or early-twentieth-century Cambridge with the Middle Ages, he
says--implicitly referring to the radically divergent
constellations of meaning-associations from one age to the
other--"An entirely different game is played in different ages"
(Wittgenstein 1966, 8). Of a generic claim made about a person
that he or she appreciates a given style or genre of art, Wittgenstein
makes it clear that he would not yet know--stated in that
generic, case-transcending way--not merely whether it was true or
not, but more interestingly and more deeply, what it *meant* to
say this. The word "appreciates" is, like the rest of our
aesthetic vocabulary, not detachable from the
particular context within which it has its
life.[3]
If, against this diversity or context-sensitivity, we seek to find
and then analyze what all such cases of aesthetic engagement have in
common, we might well focus on the *word*
"appreciation". But that word will not have meaning with a
bounded determinacy fixed prior to its contextualized use, which means
that we would find ourselves, because of this philosophical strategy
(the entire field "is very big and entirely
misunderstood"), in a double bind: first, we would not know the
meaning of the term upon which we were focusing; and second, we would,
in trying to locate what all cases of aesthetic engagement have in
common, leave out of view what he calls in this connection "an
immensely complicated family of cases" (Wittgenstein 1966),
blinding ourselves to nuance and complexity in the name of a
falsifying neatness and overarching generality. "In order to get
clear about aesthetic words you have to describe ways of living. We
think we have to talk about aesthetic judgments like 'This is
beautiful', but we find that if we have to talk about aesthetic
judgments we don't find these words at all, but a word used
something like a gesture, accompanying a complicated
activity" (Wittgenstein 1966, 11).
## 2. The Critique of Scientism
Wittgenstein turns to the idea of a science of aesthetics, an idea for
which he has precious little sympathy ("almost too ridiculous
for words" [Wittgenstein 1966, 11]). But as is often the case
in Wittgenstein's philosophical work, it does not follow from
this scornful or dismissive attitude that he has no interest in the
etiology of the idea, or in excavating the hidden steps or components
of thought that have led some to this idea. In the ensuing discussion
he unearths a picture of causation that under-girds the very idea of a
scientific explanation of aesthetic judgment or preference. And in
working underground in this way, he reveals the analogies to cases of
genuine scientific explanation, where the "tracing of a
mechanism" just is the process of giving a causal account, i.e.
where the observed effect is described as the inevitable result of
prior links in the causal chain leading to it. If, to take his
example, an architect designs a door and we find ourselves in a state
of discontentment because the door, within the larger design of the
facade (within its stylistic "language-game", we
might say), is too low, we are liable to describe this on the model of
scientific explanation. Then, we make a substantive of the discontent,
see it as the causal result of the lowness of the door, and in
identifying the lowness as the cause, think ourselves able to dislodge
the inner entity, the discontent, by raising the door. But this
mischaracterizes our aesthetic reactions, or what we might call, by
analogy to moral psychology, our aesthetic psychology.
### 2.1 Aesthetic Reactions
The true aesthetic reaction--itself rarely described *in
situ* in terms of a proximate cause ("In these cases the
word 'cause' is hardly ever used at all"
[Wittgenstein 1966, 14])--is far more immediate, and far more
intertwined with, and related to, what we see
in[4]
the work of art in question. "It is a reaction analogous to my
taking my hand away from a hot plate" (Wittgenstein 1966). He
thus says:
> To say: "I feel discomfort and know the cause", is
> entirely misleading because "know the cause" normally
> means something quite different. How misleading it is depends on
> whether when you said: "I know the cause", you meant it to
> be an explanation or not. "I feel discomfort and know the
> cause" makes it sound as if there were two things going on in my
> soul--discomfort and knowing the cause (Wittgenstein 1966,
> 14).
But there is, as he next says, a "Why?" to such a case of
aesthetic discomfort, if not a cause (on the conventional scientific
model). But both the question and its multiform answers will take,
indeed, very different forms in different cases. Again, if what he
suggested before concerning the significance of context for meaning is
right, the very *meaning* of the "Why?"-question
will vary case to case. This is not a weaker thesis concerning
variation on the level of inflection, where the underlying structure
of the "Why?"-question is causal. No, here again that
unifying, model-imposing manner of proceeding would leave out a
consideration of the nuances that give the "Why?"-question
its determinate sense in the first place.
But again, Wittgenstein's fundamental concern here is to point out
the great conceptual gulf that separates aesthetic perplexities from
the methodology of empirical psychology. To run studies of quantified
responses to controlled and isolated aesthetic stimuli, where emergent
patterns of preference, response, and judgment are recorded within a
given population's sample, is to pass by the true character of the
aesthetic issue--the actual puzzlement, such as we feel it, will
be conceptual, not empirical. And here again we see a direct link to
his work in the philosophy of psychology: the penultimate passage of
Part II of *Philosophical Investigations* (1958, sec. xiv) was
"The existence of the experimental method makes us think we have
the means of solving the problems which trouble us; though problem and
method pass one another by" (Wittgenstein 1958, II, xiv, 232).
He says, near the close of this part of his lectures on aesthetics,
"Aesthetic questions have nothing to do with psychological
experiments, but are answered in an entirely different way"
(Wittgenstein 1966, 17). A stimulus-response model adapted from
scientific psychology--what we might now call the naturalizing of
aesthetics--falsifies the genuine complexities of aesthetic
psychology through a methodologically enforced reduction to one narrow
and unitary conception of aesthetic engagement. For Wittgenstein
complexity, and not reduction to unitary essence, is the route to
conceptual clarification. Reduction to a simplified model, by
contrast, yields only the illusion of clarification in the form of
conceptual incarceration ("a picture held us
captive").[5]
### 2.2 The "Click" of Coherence
Aesthetic satisfaction, for Wittgenstein, is an experience that is
only possible within a culture and where the reaction that constitutes
aesthetic satisfaction or justification is both more immediate, and
vastly larger and more expansive, than any simple mechanistic account
could accommodate. It is more immediate in that it is not usually
possible to specify in advance the exact conditions required to
produce the satisfaction, or, as he discusses it, the
"click" when everything falls into place. Such exacting
pre-specifications for satisfaction are possible in narrowly
restricted empirical cases where, for example, we wait for two
pointers in a vision examination to come into a position directly
opposite each other. And this, Wittgenstein says, is the kind of
simile we repeatedly use, but misleadingly, for in truth "really
there is nothing that clicks or that fits anything"
(Wittgenstein 1966, 19). The satisfaction is more immediate, then,
than the causal-mechanistic model would imply. And it is much broader
than the causal-mechanistic model implies as well: there is no direct
aesthetic analogue to the matched pointers in the case of a larger and
deeper form of aesthetic gratification. Wittgenstein does, of course,
allow that there are very narrow, isolated circumstances within a work
where we do indeed have such empirical pre-specifiable conditions for
satisfaction (e.g. where in a piece we see that we wanted to hear a
minor ninth, and not a minor seventh, chord). But, contrary to the
empirical-causal account, these will not add up, exhaustively or
without remainder, to the experience of aesthetic satisfaction. The
problem, to which Wittgenstein repeatedly returns in these lectures,
is with the *kind* of answer we want to aesthetic puzzlement as
expressed in a question like "Why do these bars give me such a
peculiar impression?" (Wittgenstein 1966, 20). In such cases,
statistical results regarding percentages of subjects who report this
peculiar impression rather than another one in precisely these
harmonic, rhythmic, and melodic circumstances are not so much
impossible as just beside the point; the *kind* of question we
have here is not met by such methods. "The sort of explanation
one is looking for when one is puzzled by an aesthetic impression is
not a causal explanation, not one corroborated by experience or by
statistics as to how people react.... This is not what one means
or what one is driving at by an investigation into aesthetics"
(Wittgenstein 1966, 21). It is all too easy to falsify, under the
influence of explanatory models misappropriated from science, the many
and varied kinds of things that happen when, aesthetically speaking,
everything seems to click or fall into place.
### 2.3 The Charm of Reduction
In the next passages of Wittgenstein's lectures he turns to a fairly
detailed examination of the distinct charm, for some, of
psycho-analytic explanation, and he interweaves this with the
distinction between scientific-causal explanation of an action versus
motive-based (or personally-generated) explanations of that action. It
is easy, but mistaken, to read these passages as simply
subject-switching anticipations of his lectures on Freud to
follow. His fundamental interest here lies with the powerful charm, a
kind of conceptual magnetism, of reductive explanations that promise a
brief, compact, and propositionally-encapsulated account of what some
much larger field of thought or action "really" is. (He
cites the example of boiling Redpath, one of his auditors, down to
ashes, etc., and then saying "This is all Redpath really
is", adding the remark "Saying this might have a certain
charm, but would be misleading to say the least" [Wittgenstein
1966, 24].) "To say the least" indeed: the example is
striking precisely because one can feel the attraction of an
encapsulating and simplifying reduction of the bewildering and
monumental complexity of a human being, while at the same time
feeling, as a human being, that any such reduction to a few physical
elements would hardly capture the essence of a person. This, he
observes, is an explanation of the "This is only really
this" form. Reductive causal explanations function in just the
same way in aesthetics, and this links directly to the problems in
philosophical methodology that he adumbrated in the *Blue and Brown
Books* (1958a, 17-19), particularly where he discusses what
he there calls "craving for generality" and the attendant
"contemptuous attitude toward the particular case". If the
paradigm of the sciences (which themselves, as he observes in passing,
carry an imprimatur of epistemic prestige and the image of
incontrovertibility) is Newtonian mechanics, and we then implant that
model under our subsequent thinking about psychology, we will almost
immediately arrive at an idea of a science of the mind, where that
science would progress through the gradual accumulation of
psychological laws. (This would constitute, as he memorably puts it, a
"mechanics of the soul" [Wittgenstein 1966, 29]). We then
dream of a psychological science of aesthetics, where--although
"we'd not thereby have solved what we feel to be aesthetic
puzzlement" (Wittgenstein 1966, 29)--we may find ourselves
able to predict (borrowing the criterion of predictive power from
science) what effect a given line of poetry, or a given musical
phrase, may have on a certain person whose reaction patterns we have
studied. But aesthetic puzzlement, again, is of a different
*kind*--and here he takes a major step forward, from the
critical to the constructive phase of his lectures. He writes,
"What we really want, to solve aesthetic puzzlements, is certain
comparisons--grouping together of certain cases"
(Wittgenstein 1966, 29). Such a method, such an approach, would never
so much as occur to us were we to remain both dazzled by the
misappropriated model of mechanics and contemptuous of the particular
case.
## 3. The Comparative Approach
Comparison, and the intricate process of grouping together certain
cases--where such comparative juxtaposition usually casts certain
significant features of the work or works in question in higher
relief, where it leads to the emergence of an organizational
gestalt,[6]
where it shows the evolution of a style, or where it shows what is
strikingly original about a work, among many other things--also
focuses our attention on the particular case in another way. In
leading our scrutiny to critically and interpretatively relevant
particularities, it leads us away from the aesthetically blinding
presumption that it is the *effect*, brought about by the
"cause" of that particular work, that matters (so that any
minuet that gives a certain feeling or awakens certain images would do
as well as any other).
### 3.1 Critical Reasoning
These foregoing matters together lead to what is to my mind central to
Wittgenstein's thoughts on aesthetics. In observing that on one
hearing of a minuet we may get a lot out of it and on the next hearing
nothing, he is showing how easy it can be to take conceptual missteps
in our aesthetic thought from which it can prove difficult to
recover. From such a difference, against how we can all too easily
model the matter, it does not follow that what we get out of the
minuet is *independent* of the minuet. That would constitute an
imposition of a Cartesian dualism between the two ontologies of mind
and matter, but in its aesthetic guise. And that would lead in turn to
theories of aesthetic content of a mental kind, where the materials of
the art form (materials we would then call "external"
materials) only serve to carry the real, internal or non-physical
content. Criticism would thus be a process of arguing inferentially
from outward evidence back to inward content; creation would be a
process of finding outward correlates or carriers for that prior
inward content; and artistic intention would be articulated as a full
mental pre-conception of the contingently finished (or
"externalized", as we would then call it) work. It is no
accident that Wittgenstein immediately moves to the analogy to
language and our all-too-easy misconception of linguistic meaning,
where we make "the mistake of thinking that the meaning or
thought is just an accompaniment of the word, and the word doesn't
matter" (Wittgenstein 1966, 25). This indeed would prove
sufficient to motivate contempt for the particular case in aesthetic
considerations, by misleadingly modeling a dualistic vision of art
upon an equally misleading dualistic model of language (Hagberg
1995). "The sense of a proposition", he says, "is
very similar to the business of an appreciation of art"
(Wittgenstein 1966, 29). This might also be called a subtractive
model, and Wittgenstein captures this perfectly with a question:
"A man may sing a song with expression and without
expression. Then why not leave out the song--could you have the
expression then?" (Wittgenstein 1966, 32 editorial note and
footnote). Such a model corresponds, again, to mind-matter Cartesian
dualism, which is a conceptual template that also, as we have just
seen above, manifests itself in the philosophy of language where we
would picture the thought as an inner event and the external word or
sign to which it is arbitrarily attached as that inner event's
outward corresponding physicalization. It is no surprise that, in this
lecture, he turns directly to false and imprisoning pictures of this
dualism in linguistic meaning, and then back to the aesthetic
case. When we contemplate the expression of a drawn face, it is deeply
misleading--or deeply misled, if the dualistic picture of
language stands behind this template-conforming thought--to ask
for the expression to be given without the face. "The
expression"--now linking this discussion back to the
previous causal considerations--"is not an *effect*
of the face". The template of cause-and-effect, and of a dualism
of material and expressive content (on the subtractive model), and the
construal of the material work of art as a *means* to the
production of a separately-identifiable intangible, experiential
*end,* are all out of place here. All of the examples
Wittgenstein gives throughout these lectures combat these pictures,
each in their own way.
But then Wittgenstein's examples also work in concert: they
together argue against a form of aesthetic reductionism that would
pretend that our reactions to aesthetic objects are isolable, that they
can be isolated as variables within a controlled experiment, that they
can be hermetically sealed as the experienced effects of isolatable
causes. He discusses our aesthetic reactions to subtle
differences between differently drawn faces, and our equally subtle
reactions to the height or design of a door (he is known to have had
the ceiling in an entire room of the house in Vienna he designed for
his sister moved only a few inches when the builders failed to realize
his plan with sufficient exactitude). The enormous subtlety, and the
enormous complexity, of these reactions, are a part of--and as
complicated as--our natural history. He gives as an example the
error, or the crudeness, of someone responding to a complaint
concerning the depiction of a human smile (specifically, that the smile
did not seem genuine), with the reply that, after all, the lips are
only parted one one-thousandth of an inch too much: such differences,
however small in *measure*, in truth matter enormously.
### 3.2 Seeing Connections
Beneath Wittgenstein's examples, as developed throughout these
lectures, lies another interest (which presses its way to the surface
and becomes explicit on occasion): he is eager to show the
significance of making connections in our perception and understanding
of art works--connections between the style of a poet and that of
a composer (e.g. Keller and Brahms), between one musical theme and
another, between one expressive facial depiction and another, between
one period of an artist's work and another. Such connections--we
might, reviving a term from first-generation Wittgensteinians, refer
to the kind of work undertaken to identify and articulate such
connections as "connective analysis"--are, for
Wittgenstein, at the heart of aesthetic experience and aesthetic
contemplation. And they again are of the kind that reductive causal
explanation would systematically miss. In attempting to describe
someone's feelings, Wittgenstein pointedly asks, could we do better
than to imitate the way the person actually said the phrase we found
emotionally revelatory? The disorientation we would feel in
trying to describe the person's feeling with subtlety and precision
*without* any possibility of imitating his precise expressive
utterance--"the way he said it"--shows how
very far the dualistic or subtractive conceptual template is from our
human experience, our natural history.
Connections, of the kind alluded to here--a web of
variously-activated relations between the particular aspect of the
work to which we are presently attending and other aspects, other
parts of the work, or other works, groups of works, or other artists,
genres, styles, or other human experiences in all their
particularity--may include what we call associations awakened by
the work, but connections are not reducible to them only (and
certainly not to undisciplined, random, highly subjective, or free
associations).[7]
The impossibility of the simplifying subtractive template emerges
here as well: "Suppose [someone says]: 'Associations are
what matter--change it slightly and it no longer has the same
associations'. But can you separate the associations from the
picture, and have the same thing?" The answer is clearly, again
like the case of the singing with and then without expression above,
negative: "You can't say: 'That's just as good as the
other: it gives me the same associations'" (Wittgenstein
1966, 34). Here again, Wittgenstein shows the great gulf that
separates what we actually do with, what we actually say about, works
of art, and how we would speak of them *if* the conceptual
pictures and templates with which he has been doing battle were
correct. To extend one of Wittgenstein's examples, we would very
much doubt the aesthetic discernment, and indeed the sympathetic
imagination and the human connectedness, of a person who said of two
poems (each of which reminded him of death) that either will do as
well as the other to a bereaved friend, that they would do the same
thing (where this is uttered in a manner dismissive of nuance, or as
though it is being said of two detergents). Poetry, Wittgenstein is
showing, does not play that kind of role in our lives, as the nature
and character of our critical verbal interactions about it
indicate. And he ascends, momentarily, to a remark that characterizes
his underlying philosophical methodology (or one dimension of it) in
the philosophy of language that is being put to use here within the
context of his lectures on aesthetics: "If someone talks bosh,
imagine the case in which it is not bosh. The moment you imagine it,
you see at once it is not like that in our case" (Wittgenstein
1966, 34). The gulf that separates what we should say if the
generalizing templates were accurate from what we in actual
particularized cases do say could further call into question not only
the applicability or accuracy, but indeed the very intelligibility, of
the language used to express those templates, those explanatory
pictures. Wittgenstein leaves that more aggressive, and ultimately
more clarifying and conceptually liberating, critique for his work on
language and mind in *Philosophical Investigations* (1958) and
other writings, but one can see from these lectures alone how such an
aggressive critique might be undertaken.
### 3.3 The Attitude Toward the Work of Art
Near the end of his lectures Wittgenstein turns to the question of the
attitude we take toward the work of art. He employs the case of seeing
the very slight change (of the kind mentioned above) in the depiction
of a smile within a picture of a monk looking at a vision of the
Virgin Mary. Where the slight and subtle change of line yields a
transformation of the smile of the monk from a kindly to an ironic
one, our attitude in viewing might similarly change from one in which
for some we are almost in prayer to one that would for some be
blasphemous, where we are almost leering. He then gives voice to his
imagined reductive interlocutor, who says, "Well there you
are. It is all in the attitude" (Wittgenstein 1966, 35), where
we would then focus, to the exclusion of all of the rest of the
intricate, layered, and complex human dimensions of our reactions to
works, solely on an analysis of the attitude of the spectator and the
isolable causal elements in the work that determine it. But that,
again, is only to give voice to a reductive impulse, and in the brief
ensuing discussion he shows, once again, that in some cases, an
attitude of this kind may emerge as particularly salient. But in other
cases, not. And he shows, here intertwining a number of his themes
from these lectures, that the very idea of "a description of an
attitude" is itself no simple thing. Full-blooded human beings,
and not stimulus-response-governed automata, have aesthetic
experience, and that experience is as complex a part of our natural
history as any other.
Wittgenstein ends the lectures discussing a simple heading: "the
craving for simplicity" (Wittgenstein 1966, 36). To such a mind,
he says, if an explanation is complicated, it is disagreeable for that
very reason. A certain kind of mind will insist on seeking out the
single, unitary essence of the matter, where--much like Russell's
atomistic search for the essence of the logic of language beneath what
he regarded as its misleadingly and distractingly variegated
surface--the reductive impulse would be given free
rein. Wittgenstein's early work in the *Tractatus* (1961)
followed in that vein. But in these lectures, given in 1938, we see a
mind well into a transition away from those simplifying templates,
those conceptual pictures. Here, examples *themselves* do a
good deal of philosophical work, and their significance is that they
*give*, rather than merely illustrate, the philosophical point
at hand. He said, earlier in the lectures, that he is trying to teach
a new way of thinking about aesthetics (and indeed about philosophy
itself).
## 4. Conclusion
The subject, as he said in his opening line, is very big and entirely
misunderstood. It is very big in its scope--in the reach of the
aesthetic dimension throughout human life.
But we can now, at the end of his lectures, see that it is a big
subject in other senses too: aesthetics is conceptually expansive in
its important linkages to the philosophy of language, to the
philosophy of mind, to ethics, and to other areas of philosophy, and
it resists encapsulation into a single, unifying problem. It is a
multi-faceted, multi-aspected human cultural phenomenon where
connections, of diverging kinds, are more in play than causal
relations. The form of explanation we find truly satisfying will thus
strikingly diverge from the form of explanation in science--the
models of explanation in *Naturwissenschaften* are misapplied
in *Geisteswissenschaften,* and the viewing of the latter
through the lens of the former will yield reduction, exclusion, and
ultimately distortion. The humanities are thus, for Wittgenstein, in
this sense autonomous.
All of this, along with the impoverishing and blinding superimposition
of conceptual models, templates, and pictures onto the extraordinarily
rich world of aesthetic engagement, also now, at the end of his
lectures, gives content to what Wittgenstein meant at the beginning
with the words "entirely misunderstood". For now, at this
stage of Wittgenstein's development, where the complexity-accepting
stance of the later *Philosophical Investigations* (1958) and
other work is unearthing and uprooting the philosophical
presuppositions of the simplification-seeking earlier work, examples
themselves have priority as indispensable instruments in the struggle
to free ourselves of misconception in the aesthetic realm. And these
examples, given due and detailed attention, will exhibit a
context-sensitive particularity that makes generalized pronouncements
hovering high above the ground of that detail look otiose,
inattentive, or, more bluntly, just a plain falsification of
experience. What remains is not, then--and this is an idea
Wittgenstein's auditors must themselves have struggled with in those
rooms in Cambridge, as many still do today--another theory built
upon now stronger foundations, but rather a clear view of our
multiform aesthetic practices. Wittgenstein, in his mature, later
work, did not generate a theory of language, of mind, or of
mathematics. He generated, rather, a vast body of work perhaps united
only in its therapeutic and intricately labored search for conceptual
clarification. One sees the same philosophical aspiration driving his
foray into aesthetics.
## 1. Names and Objects
The "names" spoken of in the *Tractatus* are not
mere signs (i.e., typographically or phonologically identified
inscriptions), but rather signs-together-with-their-meanings --
or "symbols." Being symbols, names are identified and
individuated only in the context of significant sentences. A name is
"semantically simple" in the sense that its meaning does
not depend on the meanings of its orthographic parts, even when those
parts are, in other contexts, independently meaningful. So, for
example, it would not count against the semantic simplicity of the
symbol 'Battle' as it figures in the sentence
"Battle commenced" that it contains the orthographic part,
"Bat," even though this part has a meaning of its own in
other sentential contexts. For Wittgenstein, however, something else
does count against this symbol's semantic simplicity, namely,
that it is analyzable away in favour of talk of the actions of people,
etc. This point suggests that in natural language Tractarian names
will be rare and hard to find. Even apparently simple singular terms
such as 'Obama,' 'London,' etc., will not be
counted as "names" by the strict standards of the
*Tractatus* since they will disappear on further analysis.
(Hereafter, 'name' will mean "Tractarian name"
unless otherwise indicated.)
It is a matter of controversy whether the *Tractatus* reserves
the term 'name' for those semantically simple symbols that
refer to particulars, or whether the term comprehends semantically
simple symbols of all kinds. Since objects are just the referents of
names, this issue goes hand in hand with the question whether objects
are one and all particulars or whether they include properties and
relations. The former view is defended by Irving Copi (Copi 1958) and
Elizabeth Anscombe (Anscombe 1959 [1971, 108 ff]), among others. It is
supported by *Tractatus* 2.0231: "[Material properties]
are first presented by propositions -- first formed by the
configuration of objects." This might seem to suggest that
simple properties are not objects but rather arise from the combining
or configuring of objects. The Copi-Anscombe interpretation has been
taken to receive further support from *Tractatus* 3.1432:
>
> We must not say, "The complex sign '*aRb*'
> says '*a* stands in relation *R* to
> *b*;'" but we must say, "*That*
> '*a*' stands in a certain relation to
> '*b*' says *that aRb*."
>
This has suggested to some commentators that relations are not,
strictly speaking, nameable, and so not Tractarian objects (see, for
example, Ricketts, 1996, Section III). It may, however, be intended
instead simply to bring out the point that Tractarian names are not
confined to particulars, but include relations between particulars; so
this consideration is less compelling.
The opposing view, according to which names include predicates and
relational expressions, has been defended by Erik Stenius and Merrill
and Jaakko Hintikka, among others (Stenius, 1960, 61-69;
Hintikka and Hintikka, 1986, 30-34). It is supported by a
*Notebooks* entry from 1915 in which objects are explicitly
said to include properties and relations (*NB*, 61). It is
further buttressed by Wittgenstein's explanation to Desmond Lee
(in 1930-1) of *Tractatus* 2.01:
"'Objects' also include relations; a proposition is
not two things connected by a relation. 'Thing' and
'relation' are on the same level." (*LK*,
120).
The Anscombe-Copi reading treats the forms of elementary propositions
as differing radically from anything we may be familiar with from
ordinary -- or even Fregean -- grammar. It thus respects
Wittgenstein's warning to Waismann in 1929 that "The
logical structure of elementary propositions need not have the
slightest similarity with the logical structure of [non-elementary]
propositions" (*WWK*, 42).
Going beyond this, Wittgenstein seems once to have held that there
*can be* no resemblance between the apparent or surface forms
of non-elementary propositions and the forms of elementary
propositions. In "Some Remarks on Logical Form" (1929) he
says: "One is often tempted to ask from an a priori standpoint:
What, after all, can be the only forms of atomic propositions, and to
answer, e.g., subject-predicate and the relational propositions with
two or more terms further, perhaps, propositions relating predicates
and relations with one another, and so on. But this, I believe, is a
mere playing with words" (Klagge and Nordman, 1993, 30). A
similar thought already occurs in a more compressed form in the
*Tractatus* itself: "There cannot be a hierarchy of the
forms of the elementary propositions. Only that which we ourselves
construct can we foresee" (5.556).
It is possible, then, that the options we began with represent a false
dichotomy. Perhaps Wittgenstein simply did not have an antecedent
opinion on the question whether Tractarian names will turn out to be
names of particulars only, particulars and universals, or whatnot. And
perhaps he believed that the final analysis of language would (or
might) reveal the names to defy such classifications altogether. This
broader range of interpretive possibilities has only recently begun to
receive the attention it deserves (See Johnston 2009).
## 2. Linguistic Atomism
By "Linguistic atomism" we shall understand the view that
the analysis of every proposition terminates in a proposition all of
whose genuine components are names. It is a striking fact that the
*Tractatus* contains no explicit argument for linguistic
atomism. This fact has led some commentators -- e.g., Peter
Simons (1992) -- to suppose that Wittgenstein's position
here is motivated less by argument than by brute intuition. And
indeed, Wittgenstein does present some conclusions in this vicinity as
if they required no argument. At 4.221, for example, he says:
"*It is obvious that* in the analysis of propositions we
must come to elementary propositions, which consist of names in
immediate combination" (emphasis added). Nonetheless, some basic
observations about the *Tractatus*'s conception of
analysis will enable us to see why Wittgenstein should have thought it
obvious that analysis must terminate in this way.
### 2.1 Wittgenstein's Early Conception of Analysis
A remark from the *Philosophical Grammar*, written in 1936,
throws light on how Wittgenstein had earlier conceived of the process
of analysis:
>
> Formerly, I myself spoke of a 'complete analysis,' and I
> used to believe that philosophy had to give a definitive dissection of
> propositions so as to set out clearly all their connections and remove
> all possibilities of misunderstanding. I spoke as if there was a
> calculus in which such a dissection would be possible. I vaguely had
> in mind something like the definition that Russell had given for the
> definite article (*PG*, 211).
>
One of the distinctive features of Russell's definition is that
it treats the expression "the *x* such that
*Fx*" as an "incomplete symbol." Such symbols
have no meaning in isolation but are given meaning by contextual
definitions that treat of the sentential contexts in which they occur
(cf. *PM*, 66). Incomplete symbols do, of course, *have*
meaning because they make a contribution to the meanings of the
sentences in which they occur (cf. *Principles*, Introduction,
x). What is special about them is that they make this contribution
without expressing a propositional constituent. (For more on the
nature of incomplete symbols, see Pickel 2013)
Russell's definition is contained in the following clauses (For
the sake of expository transparency, his scope-indicating devices are
omitted.)
>
>
> [1] *G*(the *x*: *Fx*) =
> ∃*x*(∀*y*(*Fy* ↔ *y* = *x*) & *Gx*) Df.
>
>
> (cf. Russell 1905b; Russell 1990, 173)
>
>
>
> [2] (the *x*: *Fx*) exists =
> ∃*x*∀*y*(*Fy* ↔ *y* = *x*) Df.
>
>
> (cf. Russell 1990, 174)
>
>
>
The fact that existence is dealt with by a separate definition shows
that Russell means to treat the predicate 'exists' as
itself an incomplete symbol.
One can understand why Wittgenstein should have taken there to be an
affinity between the theory of descriptions and his own envisioned
"calculus," for one can extract from his remarks in the
*Tractatus* and elsewhere two somewhat parallel proposals for
eliminating what he calls terms for "complexes":
>
> [3] *F*[*aRb*] iff *Fa* & *Fb* &
> *aRb*
>
>
> [4] [*aRb*] exists iff *aRb*
>
Clauses [1] to [4] share the feature that any sentence involving
apparent reference to an individual is treated as false rather than as
neither true nor false if that individual should be discovered not to
exist.
Wittgenstein's first contextual definition -- our [3]
-- occurs in a *Notebooks* entry from 1914 (*NB*,
4), but it is also alluded to in the *Tractatus*:
>
> Every statement about complexes can be analysed into a statement about
> their constituent parts, and into those propositions which completely
> describe the complexes (2.0201).
>
In [3] the statement "about [the complex's] constituent
parts" is "*Fa* & *Fb*," while the
proposition which "completely describes" the complex is
"*aRb*." If the propositions obtained by applying
[3] and [4] are to be further analysed, a two-stage procedure will be
necessary: first, the apparent names generated by the analysis --
in the present case '*a*' and
'*b*' -- will need to be
replaced[3]
with symbols that are overtly terms for complexes, e.g.,
'[*cSd*]' and '[*eFg*];' second,
the contextual definitions [3] and [4] will need to be applied again
to eliminate these terms. If there is going to be a unique final
analysis, each apparent name will have to be *uniquely* paired
with a term for a complex. So the program of analysis at which
Wittgenstein gestures, in addition to committing him to something
analogous to Russell's theory of descriptions, also commits him
to the analogue of Russell's "description theory of
ordinary names" (cf. Russell 1905a). The latter is the idea that
every apparent name not occurring at the end of analysis is equivalent
in meaning to some definite description.
Wittgenstein's first definition, like Russell's, strictly
speaking, stands in need of a device for indicating scope, for
otherwise it would be unclear how to apply the analysis when we
choose, say, "~*G*" as our instance of
"*F*." In such a case the question would arise
whether the resulting instance of [3] is [5]:
"~*G*[*aRb*] = ~*Ga* & ~*Gb*
& *aRb*," which corresponds to giving the term for a
complex wide scope with respect to the negation operator, or whether
it is: [6] "~*G*[*aRb*] = ~[*Ga* &
*Gb* & *aRb*]," which corresponds to giving
the term for a complex narrow scope. One suspects that
Wittgenstein's intention would most likely have been to follow
Russell's convention of reading the logical operator as having
narrow scope unless the alternative is expressly indicated (cf.
*PM*, 172).
Definition [3] has obvious flaws. While it may work for such
predicates as "*x* is located in England," it
obviously fails for certain others, e.g., "*x* is greater
than three feet long" and "*x* weighs exactly four
pounds." This problem can hardly have escaped Wittgenstein; so
it seems likely that he would have regarded his proposals merely as
tentative illustrations, open to supplementation and refinement.
Although Wittgenstein's second contextual definition -- our
[4] -- does not occur in the *Tractatus*, it is implied by
a remark from the *Notes on Logic* that seems to anticipate
2.0201:
>
> Every proposition which seems to be about a complex can be analysed
> into a proposition about its constituents and ... the proposition
> which describes the complex perfectly; *i.e., that proposition
> which is equivalent to saying the complex exists* (*NB*,
> 93; emphasis
> added)[4]
>
Since the proposition that "describes the complex,"
[*aRb*], "perfectly" is just the proposition that
*aRb*, Wittgenstein's clarifying addendum amounts to the
claim that the proposition "*aRb*" is equivalent to
the proposition "[*aRb*] exists." And this
equivalence is just our [4].
It turns out, then, that existence is defined only in contexts in
which it is predicated of complexes. Wittgenstein's proposal thus
mirrors Russell's in embodying the idea that it makes no sense
to speak of the existence of immediately given (that is, named)
simples (cf. *PM*, 174-5). This is why Wittgenstein was
later to refer to his "objects" as "that for which
there is neither existence nor non-existence" (*PR*, 72).
His view seems to be that when '*a*' is a
Tractarian name, what we try to say by uttering the nonsense string
"*a* exists" will, strictly speaking, be
*shown* by the fact that the final analysis of some proposition
contains '*a*' (cf. 5.535). But of course, the
*Tractatus* does not always speak strictly. Indeed, what is
generally taken to be the ultimate conclusion of the
*Tractatus*'s so-called "Argument for
Substance" (2.021-2.0211) itself tries to say something
that can only be shown, since it asserts the *existence* of
objects. The sharpness of the tension here is only partly disguised by
the oblique manner in which the conclusion is formulated. Instead of
arguing for the existence of objects, the *Tractatus* argues
for the thesis that the world "has substance." However,
because "objects constitute the substance of the world"
(2.021), and because substance is that which *exists*
independently of what is the case (2.024), this is tantamount to
saying that objects exist. So it seems that Wittgenstein's
argument for substance must be regarded as a part of the ladder we are
supposed to throw away (6.54). But having acknowledged this point, we
shall set it aside as peripheral to our main concerns.
The most obvious similarity between the two sets of definitions is
that each seeks to provide for the elimination of what purport to be
semantically complex referring expressions. The most obvious
difference consists in the fact that Wittgenstein's definitions
are designed to eliminate not definite descriptions, but rather terms
for complexes, for example the expression
"[*aRb*]," which, judging by remarks in the
*Notebooks*, is to be read: "*a in the relation R to
b*" (*NB*, 48) (This gloss seems to derive from
Russell's manner of speaking of complexes in *Principia
Mathematica*, where examples of terms for complexes include, in
addition to "*a* in the relation *R* to
*b*," "*a* having the quality
*q*", and "*a* and *b* and *c*
standing in the relation *S*" (*PM*, 44).). One
might wonder why this difference should exist. That is to say, one
might wonder why Wittgenstein does not treat the peculiar locution
"*a* in the relation *R* to *b*" as a
definite description -- say, "the complex consisting of
*a* and *b*, combined so that *aRb*"? This
description could then be eliminated by applying the
*Tractatus*'s own variant upon the theory of
descriptions:
>
> The *F* is *G* ↔ ∃*x*(*Fx* & *Gx*) & ~∃*x*,*y*(*Fx* & *Fy*)
>
>
> (cf. 5.5321)
>
Here the distinctness of the variables (the fact that they are
distinct) replaces the sign for distinctness "≠" (cf.
5.53).
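By way of illustration (a paraphrase of the 5.53 convention, not a quotation), distinctness is carried by the choice of variables rather than by a distinctness sign:

```latex
% Conventional notation: "there are at least two Fs"
\exists x\, \exists y\, (Fx \wedge Fy \wedge x \neq y)
% Tractarian convention (5.53): distinct variables already
% signify distinct objects, so the inequality sign drops out:
\exists x, y\, (Fx \wedge Fy)
% Hence the negated clause in the definition of "The F is G"
% above expresses uniqueness: there are no two Fs.
```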
Since Wittgenstein did not adopt this expedient, it seems likely that
he would have regarded the predicate "*x* is a complex
consisting of *a* and *b*, combined so that
*aRb*" as meaningless in virtue of -- among other
things -- its containing ineliminable occurrences of the
pseudo-concepts "complex," "combination," and
"constitution." Only the first of these notions figures on
his list of pseudo-concepts in the *Tractatus* (4.1272), but
there is no indication that that list is supposed to be exhaustive.
There is a further respect in which Wittgenstein's analytical
proposals differ from Russell's. Russell's second
definition -- our [2] -- has the effect of shifting the
burden of indicating ontological commitment from the word
'exists' to the existential quantifier. In
Wittgenstein's definition, by contrast, no single item of
vocabulary takes over the role of indicating ontological commitment.
Instead, that commitment is indicated only after the *final*
application of the definition, by the meaningfulness of the names in
the fully analysed proposition -- or, more precisely, by the fact
that certain symbols are names (cf. 5.535). The somewhat paradoxical
consequence is that one can assert a statement of the form
"[*aRb*] exists" without thereby manifesting any
ontological commitment to the complex [*aRb*] (cf.
*EPB*, 121). What this shows is that the two theories relieve
the assertor of ontological commitments of quite different kinds. In
Russell's case, the analysis -- our [2] -- removes a
commitment to an apparent propositional constituent -- a
"denoting concept"
[5]
--expressed by the phrase 'the *F*,' but it
does not remove the commitment to the *F* itself. For
Wittgenstein, by contrast, the analysis shows that the assertor never
was ontologically committed to the complex [*aRb*] by an
utterance of "[*aRb*] exists."
Russell's conception of analysis at the time of the theory of
descriptions -- ca. 1905 -- is relatively clear: Analysis
involves pairing up one sentence with another that expresses the very
same Russellian proposition but which does so more perspicuously. The
analysans counts as more perspicuous than the analysandum because the
former is free of some of the latter's merely apparent
ontological commitments. By the time of *Principia
Mathematica*, however, this relatively transparent conception of
analysis is no longer available. Having purged his ontology of
propositions in 1910, Russell can no longer appeal to the idea that
analysans and analysandum express one and the same proposition. He now
adopts "the multiple relation theory of judgment,"
according to which the judgment (say) that Othello loves Desdemona,
instead of being, as Russell had formerly supposed, a dyadic relation
between the judging mind and the proposition *Othello loves
Desdemona*, is now a non-dyadic, or, in Russell's
terminology, "multiple," relation whose relata are the
judging mind and those items that were formerly regarded as
constituents of the proposition *Othello loves Desdemona*
(Russell 1910 [1994, 155]). After 1910 Russell can say that a speaker who
sincerely assertively uttered the analysans (in a given context) would
be guaranteed to make the same judgment as one who sincerely
assertively uttered the analysandum (in the same context), but he can
no longer explain this accomplishment by saying that the two sentences
express the same proposition.
A further departure from the earlier, relatively transparent
conception of analysis is occasioned by Russell's resolution of
the set-theoretic version of his paradox. In this resolution one gives
an analysis of a sentence whose utterance could not be taken to
express *any* judgment. One argues that the sentence
"{*x*: φ*x*} ∈ {*x*:
φ*x*}" is nonsense because the contextual definitions
providing for the elimination of class terms yield for this case a
sentence that is itself nonsense by the lights of the theory of types
(*PM*, 76). Its (apparent) negation is, accordingly,
also nonsense. In *Principia*, then, there is no very clear
model of what is preserved in analysis. The best we can say is that
Russell's contextual definitions have the feature that a
(sincere, assertive) utterance of the analysans is guaranteed to
express the same judgment as the analysandum, *if* the latter
expresses a judgment at all.
Some of the unclarity in the conception of analysis introduced by
Russell's rejection of propositions is inherited by
Wittgenstein, who similarly rejects any ontology of shadowy entities
expressed by sentences. In the *Tractatus* a
"proposition" (*Satz*) is a "propositional
sign in its projective relation to the world" (3.12). This makes
it seem as though any difference between propositional signs ought to
suffice for a difference between propositions, in which case analysans
and analysandum could at best be distinct propositions with the same
truth conditions.
Enough has now been said to make possible a consideration of
Wittgenstein's reasons for describing the position I have been
calling "linguistic atomism" as "obvious."
Since the model for Tractarian analysis is the replacement of apparent
names with (apparently) co-referring "terms for
complexes," together with eliminative paraphrase of the latter,
it follows trivially that the endpoint of analysis, if there is one,
will contain neither "terms for complexes" nor expressions
that can be replaced by terms for complexes.
Wittgenstein, moreover, thinks it obvious that in the case of every
proposition this process of analysis *does* terminate. The
reason he supposes analysis cannot go on forever is that he conceives
an unanalyzed proposition as *deriving* its sense from its
analysis. As *Tractatus* 3.261 puts it: "Every defined
sign signifies via those signs by which it is defined" (Cf.
*NB*, 46; *PT* 3.20102). It follows that no proposition
can have an infinite analysis, on pain of never acquiring a sense. So
the process of analysis must terminate, and when it does so the
product will be propositions devoid of incomplete symbols.
That much, at least, *is* plausibly obvious, but unfortunately
it does not follow that the final analysis of language will be wholly
devoid of complex symbols. The trouble is that for all we have said so
far, a fully analysed proposition might yet contain one or more
complex symbols *that have meaning in their own right*.
Clearly, Wittgenstein must have been assuming that all genuine
referring expressions must be semantically simple: they must lack
anything like a Fregean sense. But why should that be so? The seeds of
one answer are contained in *Tractatus* 3.3, the proposition in
which Wittgenstein enunciates his own version of Frege's context
principle: "Only the proposition has sense; only in the context
of a proposition has a name meaning" (3.3). Wittgenstein's
juxtaposition of these two claims suggests that the context principle
is supposed to be his ground for rejecting senses for sub-sentential
expressions. But just how it could provide such a ground is far from
clear. Another, more concrete, possibility is that Wittgenstein simply
accepted the arguments Russell had given in "On Denoting"
for rejecting senses for sub-sentential expressions.
## 3. Metaphysical Atomism
By "Metaphysical atomism" we will understand the view that
the semantically simple symbols occurring in a proposition's
final analysis refer to simples. The *Tractatus* does not
contain a *distinct* freestanding argument for this thesis,
but, as we will see, the needed argument is plausibly extractable from
the famous "Argument for Substance" of 2.0211-2:
>
> 2.0211 If the world had no substance, then whether a proposition had
> sense would depend on whether another proposition was true.
>
>
> 2.0212 It would then be impossible to draw up a picture of the world
> (true or false).
>
To see what precisely is being contended for in this argument one
needs to appreciate the historical resonances of Wittgenstein's
invocation of the notion of "substance."
### 3.1 Objects as the Substance of the World
The *Tractatus*'s notion of substance is the modal
analogue of Kant's temporal notion. Whereas for Kant, substance
is that which "persists" (in the sense of existing at all
times) (*Critique*, A 182), for Wittgenstein it is that which,
figuratively speaking, "persists" through a
"space" of possible worlds. (Compare the idea of a road
that crosses several U.S. States. Such a road might be said,
metaphorically speaking, to "persist" from one State to
the next: in such a locution, as in Wittgenstein's, a temporal notion
is enlisted to do spatial duty, though in Wittgenstein's case the
space in question is logical rather than physical space and
persistence amounts to reaching through the *whole* of logical
space.) Tractarian substance is the "unchanging" in the
metaphorical sense of that which does not undergo existence change in
the passage (also metaphorical) from world to world. Less
figuratively, Tractarian substance is that which exists with respect
to every possible world. For Kant, to assert that there is substance
(in the schematized sense of the category) is to say that that there
is some stuff such that every existence change (i.e., origination or
annihilation) is necessarily an alteration or reconfiguration of that
stuff. For Wittgenstein, analogously, to say that there is substance
is to say that there are some things such that all "existence
changes" in the metaphorical passage from world to world are
reconfigurations of them. What undergo "existence changes"
are atomic states of affairs (configurations of objects): a state of
affairs exists with respect to one world but fails to exist with
respect to another. Those things that remain in existence through
these existence changes, and which are reconfigured in the process,
are Tractarian objects. It follows that the objects that
"constitute the substance of the world" (2.021) are
necessary existents. The *Tractatus*, rather wonderfully,
compresses this whole metaphorical comparison into a single remark:
"The object is the fixed, the existing [*das
Bestehende*]; the configuration is the changing [*das
Wechselnde*]." (2.0271). "*Wechsel*," it
should be noted, is the word that Kant expressly reserves for the
notion of existence change as opposed to alteration
(*Critique*, A 187/B 230). (Unfortunately, however, whether
Wittgenstein had read the *Critique* in time for this
circumstance to have influenced his own phrasing in the
*Tractatus* is unknown.)
Tractarian objects are what any "imagined"--or, more
accurately, *conceivable*--world has in common with the
real world (2.022). Accordingly, they constitute the world's
"fixed form" (2.022-3). 'Fixed' because,
unlike the world's content, objects (existentially speaking)
hold fast in the transition from world to world. 'Form'
because they constitute what is shared by all the worlds. (On
Wittgenstein's conception of possibility, the notion of an
"alien" Tractarian object -- one which is
*merely* possible -- is not even intelligible.) If the
objects make up the world's form, what makes up its content? The
answer, I think, is the various obtaining atomic states of affairs.
Distinct worlds differ with respect to content because they differ
with respect to which possible states of affairs obtain in them. A
complication arises because possible atomic states of affairs also
have both form and content. Their form is the manner
of combination of their components, their content those components
themselves (that is, their contained objects). It follows that
substance -- the totality of objects -- is indeed, as
Wittgenstein says, "both form and content"
(2.024-5). It is at once both the form of the world and the
content of possible states of affairs. (These and further details of
this interpretation of Wittgenstein's conception of substance as
the fixed or unchanging are provided in Proops 2004; see also
Zalabardo 2015, Appendix II, for more on simples, names, and necessary
existents.)
### 3.2 The Argument for Substance
As we have seen, the immediate goal of the argument for substance is
to establish that there are things that exist necessarily. In the
context of the assumption that anything complex could fail to exist
through decomposition, this conclusion entails that there are simples
(2.021). While the argument is presented as a two-stage *modus
tollens*, it is conveniently reconstructed as a *reductio ad
absurdum* (The following interpretation of the argument is a
compressed version of that provided in Proops 2004. For two recent
alternatives, see Zalabardo 2015, 243-254 and Morris 2008, ch.1,
and 2017; and for criticisms of Morris, see Potter 2009):
>
> Suppose, for *reductio*, that
>
>
> [1] There is no substance (that is, nothing exists in
> every possible world).
>
>
> Then
>
>
> [2] Everything exists contingently.
>
>
> But then
>
>
> [3] Whether a proposition has sense depends on whether
> another proposition is true.
>
>
> So
>
>
> [4] We cannot draw up pictures of the world (true or
> false).
>
>
> But
>
>
> [5] We *can* draw up such pictures.
>
>
>
> Contradiction
>
> So
>
>
> [6] There is substance (that is, some things exist in
> every possible world).
>
>
>
Our [5] is the main suppressed premise. It means, simply, that we can
frame senseful propositions. Let us now consider how we might try to
defend the inference from [2] to [3] on Wittgensteinian principles. As
a preliminary, note that, given Wittgenstein's equation in the
*Notes on Logic* of having sense with having truth-poles
(*NB*, 99), it seems reasonable to suppose that for a sentence
to "have sense" with respect to a given world is for it to
have a truth value with respect to that world. Let us assume that this
is so. Now suppose that everything exists contingently. Then, in
particular, the referents of the semantically simple symbols occurring
in a fully analysed sentence will exist contingently. But then any
such sentence will contain a semantically simple symbol that fails to
refer with respect to some possible world. (As we will shortly see,
this step is in fact controversial.) Suppose, as a background
assumption, that there are no contingent simples. (It will be argued
below that this assumption plausibly follows from certain Tractarian
commitments.) Then, if we assume that a sentence containing a
semantically simple term is neither true nor false evaluated with
respect to a world in which its purported referent (namely, a complex
existing contingently at the actual world) fails to exist -- and,
for now, we do -- then, for any such fully analysed sentence,
there will be some world such that the sentence depends for its truth
valuedness with respect to that world on the truth with respect to
that world of some other sentence, *viz*., the sentence stating
that the constituents of the relevant complex are configured in a
manner necessary and sufficient for its existence. It follows that if
everything exists contingently, then whether a sentence is senseful
with respect to a world will depend on whether another sentence is
true with respect to that world.
The step from [3] to [4] runs as follows. Suppose that whether any
sentence "has sense" (i.e., on our reading, has a
truth-value) depends (in the way just explained) on whether another is
true. Then every sentence will have an "indeterminate
sense" in the sense that it will lack a truth value with respect
to at least one possible world. But an indeterminate sense is no sense
at all, for a proposition by its very nature "reaches through
the whole logical space" (3.42) (i.e., it is truth-valued with
respect to every possible
world).[6]
So if every sentence depended for its "sense" (i.e.,
truth-valuedness) on the truth of another, no sentence would have a
determinate sense, and so no sentence would have a sense. In which
case we would be unable to frame senseful propositions (i.e., to
"draw up pictures of the world true or false").
One apparent difficulty concerns the assumption that to have sense is
just to be true or false. How can such a view be attributed to the
Wittgenstein of the *Tractatus* given his view that tautology,
which is true, and contradiction, which is false, are without sense
(*sinnlos*) (4.461)? The seeds of an answer may be contained in
a remark from Wittgenstein's lectures at Cambridge during the
year 1934-1935. Looking back on what he'd written in the
*Tractatus*, he says:
>
> When I called tautologies senseless I meant to stress a connection
> with a quantity of sense, namely 0. (*AM*, 137)
>
It is possible, then, that Wittgenstein is thinking of a
*sinnlos* proposition as a proposition that "has
sense" but has it to a zero degree. According to this
conception, a tautology, being true, is, in contrast to a nonsensical
string, in the running for possessing a non-zero quantity of sense,
but is so constructed that, in the end, it doesn't get to have
one. And, importantly, in virtue of being in the running for having a
non-zero quantity of sense its possession of a zero quantity amounts
to its, broadly speaking, 'having sense'. Such a view,
according to which, for some non-count noun N, an N-less entity has N,
but has a zero quantity of it, is not without precedent in the
tradition. Kant, for example, regards rest (motionlessness) as a
species of motion: a zero quantity of it (Bader, Other Internet
Resources, 22-23). If, in a similar fashion,
*Sinnlosigkeit* is a species of *Sinn*, the equation of
having *Sinn* with being true or false will be preserved. To
offer a full defence of this understanding of *Sinnlosigkeit*
would take us too far afield, but I mention it to show that the
current objection is not decisive.
Another apparent difficulty for this reconstruction arises from its
appearing to contradict *Tractatus* 3.24, which clearly
suggests that if the complex entity *A* were not to exist, the
proposition "*F*[*A*]" would be false,
rather than, as the argument requires, without truth value. But the
difficulty is only apparent. It merely shows that 3.24 belongs to a
theory that assumes that the world *does* have substance. On
this assumption Wittgenstein can say that whenever an apparent name
occurs that appears to mention a complex this is only because it is
not, after all, a genuine name -- and this is what he does say.
But on the assumption that the world has no substance, so that
*everything* is complex, Wittgenstein can no longer say this.
For now he must allow that the semantically simple symbols occurring
in a proposition's final analysis do refer to complexes. So in
the context of the assumption that every proposition has a final
analysis, the *reductio* assumption of the argument for
substance entails the falsity of 3.24. But since 3.24 is assumed to be
false only in the context of a *reductio*, it is something that
Wittgenstein can consistently endorse. (This solution to the apparent
difficulty for the present reconstruction is owed, in its essentials,
to David Pears; see Pears 1987 [1989, 78].)
To complete the argument it only remains to show that Tractarian
commitments extrinsic to the argument for substance rule out
contingent
simples.[7]
Suppose *a* is a contingent simple. Then "*a*
exists" must be a contingent proposition. But it cannot be an
elementary proposition because it will be entailed by any elementary
proposition containing '*a*,' and elementary
propositions are logically independent (4.211). So "*a*
exists" must be non-elementary, and so further analyzable. And
yet there would seem to be no satisfactory analysis of this
proposition on the assumption that '*a*' names a
contingent simple -- no analysis, that is to say, that is both
intrinsically plausible and compatible with Tractarian principles.
Wittgenstein cannot analyse "*a* exists" as the
proposition "&exist;*x*(*x* = *a*)"
for two reasons. First, he would reject this analysis on the grounds
that it makes an ineliminable use of the identity sign (5.534).
Second, given his analysis of existential quantifications as
disjunctions, the proposition "&exist;*x*(*x* =
*a*)" would be further analysed as the
*non-contingent* proposition "*a* = *a*
&or; *a* = *b* &or; *a* =
*c*...". Nor can he analyse "*a*
exists" as "~[ ~*Fa* & ~*Ga* &
~*Ha*...]" -- that is, as the negation of the
conjunction of the negations of every elementary proposition involving
"*a*." To suppose that it could, is to suppose that
the proposition "~*Fa* & ~*Ga* &
~*Ha*..." means "*a* does not
exist," and yet by the lights of the *Tractatus* this
proposition would *show* *a*'s existence --
or, more correctly, it would show something that one tries to put into
words by saying "*a* exists" (cf. 5.535,
*Corr*, 126). So, pending an unforeseen satisfactory analysis
of "*a* exists," this proposition will have to be
analysed as a complex of propositions not involving *a*. In
other words, '*a*' will have to be treated as an
incomplete symbol and the fact of *a*'s existence will
have to be taken to consist in the fact that objects other than
*a* stand configured thus and so. But that would seem to entail
that *a* is not simple.
The argument for substance may be criticized on several grounds.
First, the step leading from [2] to [3] relies on the assumption that
a name fails to refer with respect to a possible world at which its
actual-world referent does not exist. This amounts to the
controversial assumption that names do not function as what Nathan
Salmon has called "obstinately rigid designators" (Salmon
1981, 34). Secondly, the step leading from [3] to [4] relies on the
assumption that a sentence that is neither true nor false with respect
to some possible world fails to express a sense. As Wittgenstein was
later to realize, the case of intuitively senseful, yet vague
sentences plausibly constitutes a counterexample (cf. *PI*
Section 99). Lastly, one may question the assumption that it makes
sense to speak of a final analysis, given that the procedure for
analysing a sentence of ordinary language has not been made clear (See
*PI*, Sections 60, 63-4, and 91).
## 4. The Epistemology of Logical Atomism
How could we possibly know that something is a Tractarian object?
Wittgenstein has little or nothing to say on this topic in the
*Tractatus*, and yet it is clear from his retrospective remarks
that during the composition of the *Tractatus* he did think it
possible *in principle* to discover the Tractarian objects (See
*AM*, 11 and *EPB*, 121). So it seems worth asking by
what means he thought such a discovery might be made.
Sometimes, it can seem as though Wittgenstein just expected to hit
upon the simples by reflecting from the armchair on those items that
struck him as most plausibly lacking in proper parts. This impression
is most strongly suggested in the *Notebooks*, and in
particular in a passage from June 1915 in which Wittgenstein seems to
express confidence that certain objects already within his ken either
count as Tractarian objects or will turn out to do so. He says:
"It seems to me perfectly possible that patches in our visual
field are simple objects, in that we do not perceive any single point
of a patch separately; the visual appearances of stars even seem
certainly to be so" (*NB*, 64). By "patches in our
visual field" in this context Wittgenstein means parts of the
visual field with no noticeable parts. In other words, *points*
in visual space (cf. *KL*, 120). Clearly, then, Wittgenstein at
one stage believed he was in a position to specify some Tractarian
objects. However, the balance of the evidence suggests that this idea
was short-lived. For he was later to say that he and Russell had
pushed the question of examples of simples to one side as a matter to
be settled on a future occasion (*AM*, 11). And when Norman
Malcolm pressed Wittgenstein to say whether when he wrote the
*Tractatus* he had decided on anything as an example of a
"simple object," he had replied -- according to
Malcolm's report -- that "at the time his thought had
been that he was a logician; and that it was not his business as a
logician, to try to decide whether this thing or that was a simple
thing or a complex thing, that being a purely empirical matter"
(Malcolm 1989, 70).
Wittgenstein was not suggesting that the correct way to establish that
something is a Tractarian object is to gather evidence that its
decomposition is *physically* impossible. That reading would
only have a chance of being correct if Wittgenstein had taken
metaphysical possibility to coincide with physical possibility, and
that is evidently not
so.[8]
His meaning seems rather to be just that the objects must be
discovered rather than postulated or otherwise specified in advance of
investigation (cf. *AM*, 11). But since Wittgenstein was later
to accuse his Tractarian self of having entertained the concept of a
distinctive kind of *philosophical* discovery (see *WVC*
182, quoted below), we must not jump -- as Malcolm appears to
have done -- to the conclusion that he conceived of the discovery in
question as "empirical" in anything like the contemporary
sense of the word.
We know that Wittgenstein denied categorically that we could
*specify* the possible forms of elementary propositions and the
simples *a priori* (4.221, 5.553-5.5541, 5.5571). But he
did not deny that these forms would be revealed as the result of
logical analysis. In fact, he maintained precisely this view. This
idea is not explicit in the *Tractatus*, but it is spelled out
in a later self-critical remark from G. E. Moore's notes of
Wittgenstein's 1933 lectures at Cambridge:
>
> I say in [the] *Tractatus* that you can't say anything
> about [the] structure of atomic prop[osition]s: my idea being the
> wrong one, that logical analysis would reveal what it would reveal
> (entry for 6 February, 1933, Stern et al. 2016, 252)
>
Speaking of Tractarian objects in another retrospective remark, this
time from a German version of the *Brown Book*, Wittgenstein
says: "What these [fundamental constituents] of reality are it
seemed difficult to say. I thought it was the job of further logical
analysis to discover them" (*EPB* 121). These remarks
should be taken at face value: it is logical analysis -- the
analysis of propositions -- that is supposed to enable us to
discover the forms of elementary propositions and the objects. The
hope is that when propositions have been put into their final, fully
analysed forms by applying the "calculus" spoken of in the
*Philosophical Grammar* we will finally come to know the names
and thereby the objects. Presumably, we will know the latter *by
acquaintance* in the act of grasping propositions in their final
analysed forms.
Admittedly, Wittgenstein's denial that we can know the objects
*a priori* looks strange given the fact that the analytical
procedure described in Section 2 above seems to presuppose that we
have a priori knowledge both of the correct analyses of ordinary names
and of the contextual definitions by means of which terms for
complexes are to be eliminated. But some tension in
Wittgenstein's position on this point is just what we should
expect in view of his later rather jaundiced assessment of his earlier
reliance on the idea of philosophical discovery:
>
> I [used to believe that] the elementary propositions could be
> specified at a later date. Only in recent years have I broken away
> from that mistake. At the time I wrote in a manuscript of my book,
> "The answers to philosophical questions must never be
> surprising. In philosophy you cannot discover anything." *I
> myself, however, had not clearly enough understood this and offended
> against it*. (*WVC*, 182, emphasis added)
>
The remark that Wittgenstein quotes here from "a manuscript of
the *Tractatus*" did not survive into the final version,
but its sentiment is clearly echoed in the related remark that there
can: "never be surprises in logic" (6.1251). Wittgenstein
is clear that in the *Tractatus* he had unwittingly proceeded
as though there could be such a thing as a *philosophical*
surprise or discovery. His idea that the true objects would be
discovered through analysis, but are nonetheless not known *a
priori*, is plausibly one instance of this mistake.
On the conception of the *Tractatus*, objects are to be
discovered by grasping fully analysed propositions, presumably
*with* the awareness that they *are* fully analysed. But
since that is so, we shall not have fully explained how we are
supposed to be able to discover the simples unless we explain how, in
practice, we can know we have arrived at the final analysis of a
proposition. But on this point, unfortunately, Wittgenstein has little
to say. In fact, the only hint he offers is the rather dark one
contained in *Tractatus* 3.24:
>
> That a propositional element signifies [*bezeichnet*] a complex
> can be seen from an indeterminateness in the propositions in which it
> occurs. We know that everything is not yet determined by this
> proposition. (The notation for generality contains a prototype).
> (3.24)
>
It is an indeterminateness in propositions -- whatever this might
amount to -- that is supposed to alert us to the need for further
analysis. In Wittgenstein's view, then, we possess a positive
test for analyzability. However, since the notion of
"indeterminateness" in question is unclear, the test is of
little practical value. The indeterminateness in question is plainly
not the one we considered in section 3: what is in question at the
present juncture is the indeterminateness of propositions, not of
senses. But what does that amount to?
According to one line of interpretation, due originally to W. D. Hart
(Hart 1971), a proposition is indeterminate when there is more than
one way it can be true. Thus if I say "Barack Obama is in the
United States," I leave open where in particular he might be.
The source of the indeterminacy is the implied generality of this
statement, which is tantamount to: "Obama is *somewhere*
in the United States." This line of interpretation has the merit
of promising to make sense of the closing parenthetical remark of
3.24. But it cannot be correct. The kind of indeterminacy that
Wittgenstein has in mind at 3.24 is supposed to serve as a sign of
further analysability. But Hart's notion cannot play this role,
since any disjunctive proposition would be indeterminate in his sense,
even a fully analysed proposition consisting of a disjunction of
elementary propositions.
According to a second line of interpretation, a proposition is
indeterminate in the relevant sense if the result of embedding it in
some context is structurally ambiguous. Consider, for example, the
result of embedding "*F* [*A*]" in the
context "it is not true that," where
'*A*' is temporarily treated as a semantically
simple term designating a complex (Keep in place the assumption that a
sentence containing a non-referring semantically simple term is
neither true nor false). In this case the question would arise whether
the result of this embedding is neither true nor false evaluated with
respect to a world in which *A* does not exist, or simply true.
The first option corresponds to giving the apparent name wide scope
with respect to the logical operator, the second to giving it narrow
scope. Such a scope ambiguity could not exist if
'*A*' were a genuine Tractarian name, so its
presence could reasonably be taken to signal the need for further
analysis.
So far, so good, but where does the business about the generality
notation "containing a prototype" come in? Nothing in the
present explanation has yet done justice to this remark. Nor does the
present explanation really pinpoint what it is that signals the need
for further analysis. That, at bottom, is the fact that we can imagine
circumstances in which the supposed referent of
'*A*' fails to exist. So, again, there is reason to
be dissatisfied with this gloss on indeterminacy.
It is hard to resist the conclusion that Wittgenstein never supplied
an adequate way of recognizing when a proposition is fully analysed,
and consequently that he failed to specify a means for recognizing
something as a Tractarian object.
## 5. The Dismantling of Logical Atomism
Wittgenstein's turn away from logical atomism may be divided
into two main phases. During the first phase (1928-9),
documented in his 1929 article "Some Remarks on Logical
Form" (Klagge and Nordmann, 1993, 29-35), Wittgenstein
exhibits a growing dissatisfaction with certain central details of the
*Tractatus*'s logical atomism, and notably with the
thesis of the independence of elementary propositions. During this
phase, however, he is still working within the broad conception of
analysis presupposed, if not fully developed, in the
*Tractatus*. The second phase (1931-2) involves a
revolutionary break with that very conception.
### 5.1 First phase: The colour-exclusion problem
The so-called "colour-exclusion problem" is a difficulty
that arises for the *Tractatus*'s view that it is
metaphysically possible for each elementary proposition to be true or
false regardless of the truth or falsity of the others (4.211). In
view of its generality, the problem might more accurately be termed
"the problem of the manifest incompatibility of apparently
unanalysable statements." The problem may be illustrated as
follows: Suppose that *a* is a point in the visual field.
Consider the propositions *P*: "*a* is blue at
*t*" and *Q*: "*a* is red at
*t*" (supposing "red" and "blue"
to refer to determinate shades). It is clear that *P* and
*Q* cannot both be true; and yet, on the face of it, it seems
that this incompatibility (or "exclusion" in
Wittgenstein's parlance) is not a *logical* impossibility. In the *Tractatus* Wittgenstein's
response was to treat the problem as merely apparent. He supposed that
in such cases further analysis would reveal the incompatibility to be
logical in nature:
>
> For two colours, *e.g*., to be at one place in the visual field
> is impossible, and indeed logically impossible, for it is excluded by
> the logical structure of colour. Let us consider how this
> contradiction presents itself in physics. Somewhat as follows: That a
> particle cannot at the same time have two velocities, that is, that at
> the same time it cannot be in two places, that is, that particles in
> different places at the same time cannot be identical (6.3751)
>
As F. P. Ramsey observes in his review of the *Tractatus* (Ramsey 1923), the analysis described here actually fails to
reveal a logical incompatibility between the two statements in
question; for, even granting the correctness of the envisaged
reduction of the phenomenology of colour perception to facts about the
velocities of particles, the fact that one and the same particle
cannot be (wholly) in two places at the same time still looks very
much like a synthetic *a priori* truth. It turns out, however,
that Wittgenstein was well aware of this point. He knew that he had
not taken the analysis far enough to bring out a logical
contradiction, but he was confident that he had taken a step in the
right direction. In a *Notebooks* entry from August 1916 he
remarks that: "The fact that a particle cannot be in two places
at the same time does look *more like* a logical impossibility
[than the fact that a point cannot be red and green at the same time].
If we ask why, for example, then straight away comes the thought:
Well, we should call particles that were in two places [at the same
time] different, and this in its turn all seems to follow from the
structure of space and particles" (*NB*, 81; emphasis
added). Here Wittgenstein is *conjecturing* that it will turn
out to be a conceptual (hence, for him *logical*) truth about
particles and space (and presumably also time) that particles in two
distinct places (at the same time) are distinct. He does not yet
possess the requisite analyses to demonstrate this conjecture, but he
is optimistic that they will be found.
The article "Some Remarks on Logical Form" (1929) marks
the end of this optimism. Wittgenstein now arrives at the view that
some incompatibilities cannot, after all, be reduced to logical
impossibilities. His change of heart appears to have been occasioned
by a consideration of incompatibilities involving the attribution of
qualities that admit of gradation -- *e.g*., the pitch of
a tone, the brightness of a shade of colour, etc. Consider, for
example, the statements: "*A* has exactly one degree of
brightness" and "*A* has exactly two degrees of
brightness." The challenge is to provide analyses of these
statements that bring out the logical impossibility of their being
true together. What Wittgenstein takes to be the most plausible
suggestion -- or at least a sympathetic reconstruction of it
-- adapts the standard definitions of the numerically definite
quantifiers to the system described in the *Tractatus*,
analysing these claims as respectively:
"&exist;*x*(*Bx* & *A* has *x*)
& ~&exist;*x*,*y*(*Bx* & *By*
& *A* has *x* & *A* has *y*)"
("*Bx*" means "*x* is a degree of
brightness") and "&exist;*x*,*y*(*Bx*
& *By* & *A* has *x* & *A* has
*y*) & ~&exist;*x*,*y*,*z*(*Bx*
& *By* & *Bz* & *A* has *x*
& *A* has *y* & *A* has
*z*)." But the suggestion will not do. The trouble is
that this analysis -- absurdly -- makes it seem as though
when something has just one degree of brightness there could be a
substantive question about which (if any) of the three mentioned in
the analysis of the second claim -- *x* or *y* or
*z* -- it was -- as if a degree of brightness were a
kind of corpuscle whose association with a thing made it bright (cf.
Klagge and Nordmann, 33). Wittgenstein concludes that the independence
of elementary propositions must be abandoned and that terms for real
numbers must enter into atomic propositions, so that the impossibility
of something's having both exactly one and exactly two degrees
of brightness emerges as an irreducibly mathematical impossibility.
This, in turn, contradicts the *Tractatus*'s idea that
all necessity is logical necessity (6.37).
### 5.2 Second phase: Generality and Analysis
Unlike Frege and Russell, the *Tractatus* does not treat the
universal and existential quantifiers as having meaning in isolation.
Instead, it treats them as incomplete symbols to be analysed away
according to the following schemata:
>
> (*x*).*&phi;x* &equiv; *&phi;a* &
> *&phi;b* & *&phi;c*...
>
>
> (&exist;*x*).*&phi;x* &equiv; *&phi;a* &or;
> *&phi;b* &or; *&phi;c*...
>
Universal (existential) quantification is treated as equivalent to a
possibly infinite conjunction (disjunction) of propositions.
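For a *finite* domain the equivalence the schemata assert is unproblematic; the following sketch (a toy domain with hypothetical names, purely illustrative) checks that quantification over finitely many named objects coincides with the explicit logical product or sum of instances:

```python
from functools import reduce

# Toy finite domain of named objects (names illustrative, not Tractarian).
domain = ["a", "b", "c"]

def phi(x):
    # An arbitrary one-place predicate over the domain.
    return x != "b"

# Quantification over the finite domain...
universal = all(phi(x) for x in domain)
existential = any(phi(x) for x in domain)

# ...coincides with the explicit logical product / logical sum:
#   (x).fx  =  fa & fb & fc        (Ex).fx  =  fa v fb v fc
logical_product = reduce(lambda p, q: p and q, [phi(x) for x in domain])
logical_sum = reduce(lambda p, q: p or q, [phi(x) for x in domain])

assert universal == logical_product
assert existential == logical_sum
```

The infinite case, where the "..." cannot be cashed out as a completed product or sum, is where Wittgenstein later located the trouble, as the passages below make clear.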
Wittgenstein's later dissatisfaction with this view is expressed
most clearly in G. E. Moore's notes of Wittgenstein's
lectures from Michaelmas term 1932.
>
> Now there is a temptation to which I yielded in [the]
> *Tractatus*, to say that
>
>
> >
> > (*x*).*fx* = logical
> > product,[9]
> > *fa* . *fb* . *fc*...
> >
> >
> > (&exist;*x*).*fx* = [logical] sum, *fa* &or;
> > *fb* &or; *fc*...
> >
> >
> >
>
>
>
> This is wrong, but not as absurd as it looks. (entry for 25 November,
> 1932, Stern et al. 2016,
> 215)[10].
>
>
>
>
Explaining why the *Tractatus*'s analysis of generality
is not *palpably* absurd, Wittgenstein says:
>
> Suppose we say that: Everybody in this room has a hat = Ursell has a
> hat, Richards has a hat etc. This is obviously false, because you have
> to add "& *a*, *b*, *c*,... are
> the only people in the room." This I knew and said in [the]
> *Tractatus*. But now, suppose we talk of
> "individuals" in R[ussell]'s sense, e.g., atoms or
> colours; and give them names, then there would be no prop[osition]
> analogous to "And *a*, *b*, *c* are the
> only people in the room." (ibid.)
>
Clearly, in the *Tractatus* Wittgenstein was not making the
simple-minded mistake of forgetting that "Every *F* is
*G*" cannot be analysed as "*Ga* &
*Gb* & *Gc*..." even when *a*,
*b*, *c*, etc. are in fact the only *F*s.
(Unfortunately, his claim that he registered this point in the
*Tractatus* is not borne out by the text). His idea was rather
that the *Tractatus*'s analysis of generality is offered
only for the special case in which *a*, *b*, *c*,
etc, are "individuals" in Russell's sense.
Wittgenstein had supposed that in this case there is no proposition to
express the supplementary clause that is needed in the other cases.
Unfortunately, Wittgenstein does not explain why there should be no
such proposition, but the answer seems likely to be the following:
What we are assumed to be analysing is actually "Everything is
*G*." In this case any allegedly necessary completing
clause -- for example, "*a*, *b*, *c*
etc., are the only *things*" (that is, Tractarian
objects) -- would just be a nonsense-string produced in the
misfired attempt to put into words something that is *shown* by
the fact that when analysis bottoms out it yields as names only such
as figure in the conjunction "*Ga* & *Gb*
& *Gc*..." (cf. *Tractatus* 4.1272).
What led Wittgenstein to abandon the *Tractatus*'s
analysis of generality was his realization that he had failed
adequately to think through the infinite case. He had proceeded as
though the finite case could be used as a way of thinking about the
infinite case, the details of which could be sorted out at a later
date. By 1932 he had come to regard this attitude as mistaken. The
point is made in a passage from the *Cambridge Lectures* whose
meaning can only be appreciated after some preliminary explanation.
The passage in question makes a crucial claim about something
Wittgenstein refers to as "The Proposition". By this
phrase in this context he means the joint denial of all the
propositions that are values of the propositional function
"*x* is in this room". This proposition can be
written:
>
> (*x* is in this room) [- - - - - T]
>
>
> (entry for 25 November, 1932; compare Stern et al. 2016, 217)
>
Here the symbol '[- - - - - T]' symbolizes
the joint-denial operation, and the whole symbol expresses the result
of applying this operation to arbitrarily many values of the
propositional function "*x* is in the room". The
dashes in the symbol for joint denial represent rows in the
truth-table on which one or more of the truth-arguments -- that is,
values of the propositional function -- is true. The result of
applying the operation of joint denial to those truth-arguments is
accordingly false. (In a variant on this notation each of the dashes
could be replaced with 'F'). Wittgenstein is interested in
the fact that while we write down finitely many dashes we intend the
arguments for the joint-denial operation to be arbitrarily many and
possibly infinitely many. His criticism of these conceptions runs as
follows:
>
> There is a most important mistake in [the] Tract[atus]...I
> pretended that the Proposition was a logical product; but it
> isn't, because "..." don't give you a
> logical product. It is [the] fallacy of thinking 1 + 1 + 1 ... is
> a sum. It is muddling up a sum with the limit of a sum (ibid.)
>
His point is that the Proposition does not, despite appearances,
express a logical product. It rather, he now seems to be saying,
expresses something like an indefinitely extensible process.
Wittgenstein came to see that his earlier hope that it did express a
logical product rested on the mistake of confusing "dots of
infinitude" with "dots of laziness". The upshot
could scarcely be more important: if Wittgenstein is right, the
*Tractatus*'s very conception of the general form of the
proposition, because it makes essential appeal to the idea of the
joint denial of arbitrarily many values of a propositional function,
is itself infected with confusion.
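Wittgenstein's contrast between the finite and the infinite case can be made vivid with a small sketch (the function name and encoding are mine, not Wittgenstein's notation): for *finitely* many truth-arguments, joint denial is simply the NOR of the list, computable in one pass; for infinitely many arguments there is no such terminating evaluation, which is just his point that the "..." marks an unending process rather than a long list.

```python
# A sketch: joint denial N(ξ̄) over a *finite* list of truth-arguments
# is NOR -- true exactly when every argument is false.
def joint_denial(truth_arguments):
    """Joint denial of finitely many truth-values (NOR)."""
    return not any(truth_arguments)

# Finite case: the values of "x is in this room" for a finite room.
print(joint_denial([False, False, False]))  # True: no one is in the room
print(joint_denial([False, True, False]))   # False: someone is

# For infinitely many arguments no such evaluation terminates; on
# Wittgenstein's later view the dashes then mark the limit of a process,
# not a very large truth-table -- dots of infinitude, not of laziness.
```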
Wittgenstein, however, does not think that the confusion of kinds of
dots was the deepest mistake he made in the *Tractatus*. Beyond
this: "There was a deeper mistake -- confusing logical
analysis with chemical analysis. I thought
'(∃*x*)*fx*' *is* a definite
logical sum, only I can't at the moment tell you which"
(November 25, 1932, ibid.; cf. *PG*, 210). Wittgenstein had
supposed that there was a fact of the matter -- unknown, but in
principle knowable -- about which logical sum
"(∃*x*).*fx*" is equivalent to. But
because he had failed to specify the analytical procedure in full
detail, and because he had not adequately explained what analysis is
supposed to preserve, this idea was unwarranted. Indeed, it
exemplified an attitude he was later to characterize as amounting to a
kind of unacceptable "dogmatism" (*WWK*, 182).
## 1. Wittgenstein on Mathematics in the *Tractatus*
Wittgenstein's non-referential, *formalist* conception of
mathematical propositions and terms begins in the
*Tractatus*.[1]
Indeed, insofar as he sketches a rudimentary Philosophy of
Mathematics in the *Tractatus*, he does so by
*contrasting* mathematics and mathematical equations with
genuine (contingent) propositions, sense, thought, propositional signs
and their constituent names, and truth-by-correspondence.
In the *Tractatus*, Wittgenstein claims that a genuine
proposition, which rests upon conventions, is used by us to assert
that a state of affairs (i.e., an elementary or atomic fact;
'*Sachverhalt*') or fact (i.e., multiple states of
affairs; '*Tatsache*') obtain(s) in the one and
only real world. An elementary proposition is isomorphic to the
*possible* state of affairs it is used to represent: it must
contain as many names as there are objects in the possible state of
affairs. An elementary proposition is true *iff* its possible
state of affairs (i.e., its 'sense';
'*Sinn*') obtains. Wittgenstein clearly states this
Correspondence Theory of Truth at (4.25):
>
>
> If an elementary proposition is true, the state of affairs exists; if
> an elementary proposition is false, the state of affairs does not
> exist.
>
>
>
But propositions and their linguistic components are, in and of
themselves, dead--a proposition only has sense because we human
beings have endowed it with a *conventional* sense (5.473).
Moreover, propositional signs may be *used* to do any number of
things (e.g., insult, catch someone's attention); in order to
*assert* that a state of affairs obtains, a person must
'project' the proposition's sense--its possible
state of affairs--by 'thinking' of (e.g., picturing)
its sense as one speaks, writes or thinks the proposition (3.11).
Wittgenstein connects *use*, *sense*,
*correspondence*, and *truth* by saying that "a
proposition is true if we *use* it to say that things stand in
a certain way, and they do" (4.062; italics added).
The *Tractarian* conceptions of genuine (contingent)
propositions and the (original and) core concept of truth are used to
construct theories of logical and mathematical
'propositions' *by contrast*. Stated boldly and
bluntly, tautologies, contradictions and mathematical propositions
(i.e., mathematical equations) are neither true nor false--we say
that they are true or false, but in doing so we use the words
'true' and 'false' in very different senses
from the sense in which a contingent proposition is true or false.
Unlike genuine propositions, tautologies and contradictions
"have no 'subject-matter'" (6.124),
"lack sense", and "say nothing" about the
world (4.461), and, analogously, mathematical equations are
"pseudo-propositions" (6.2) which, when 'true'
('correct'; '*richtig*' (6.2321)),
"merely mark ... [the] equivalence of meaning [of
'two expressions']" (6.2323). Given that
"[t]autology and contradiction are the limiting
cases--indeed the *disintegration*--of the
combination of signs" (4.466; italics added), where
>
>
> the conditions of agreement with the world--the representational
> relations--cancel one another, so that [they] do[] not stand in
> any representational relation to reality,
>
>
>
tautologies and contradictions do not picture reality or possible
states of affairs and possible facts (4.462). Stated differently,
tautologies and contradictions do not have sense, which means we
cannot use them to make assertions, which means, in turn, that they
cannot be either true or false. Analogously, mathematical
pseudo-propositions are equations, which indicate or show that two
expressions are equivalent in meaning and therefore are
intersubstitutable. Indeed, we arrive at mathematical equations by
"the method of substitution":
>
>
> starting from a number of equations, we advance to new equations by
> substituting different expressions in accordance with the equations.
> (6.24)
>
>
>
We prove mathematical 'propositions' 'true'
('correct') by 'seeing' that two expressions
have the same meaning, which "must be manifest in the two
expressions themselves" (6.23), and by substituting one
expression for another with the same meaning. Just as "one can
recognize that ['logical propositions'] are true from the
symbol alone" (6.113), "the possibility of proving"
mathematical propositions means that we can perceive their correctness
without having to compare "what they express" with facts
(6.2321; cf. *RFM* App. III, §4).
The demarcation between contingent propositions, which can be used to
correctly or incorrectly represent parts of the world, and
mathematical propositions, which can be decided in a purely formal,
syntactical manner, is maintained by Wittgenstein until his death in
1951 (*Zettel* §701, 1947; *PI* II, 2001 edition,
pp. 192-193e, 1949). Given linguistic and symbolic conventions,
the truth-value of a contingent proposition is entirely a function of
how the world is, whereas the "truth-value" of a
mathematical proposition is entirely a function of its constituent
symbols and the formal system of which it is a part. Thus, a second,
closely related way of stating this demarcation is to say that
mathematical propositions are decidable by purely formal means (e.g.,
calculations), while contingent propositions, being about the
'external' world, can only be decided, if at all, by
determining whether or not a particular fact obtains (i.e., something
external to the proposition and the language in which it resides)
(2.223; 4.05).
The Tractarian formal theory of mathematics is, specifically, a theory
of *formal operations*. Over the past 20 years,
Wittgenstein's theory of operations has received considerable
examination (Frascolla 1994, 1997; Marion 1998; Potter 2000; and Floyd
2002), which has interestingly connected it and the Tractarian
equational theory of arithmetic with elements of Alonzo Church's
\(\lambda\)-calculus and with R. L. Goodstein's equational
calculus (Marion 1998: chapters 1, 2, and 4). Very briefly stated,
Wittgenstein presents:
1. ... the sign '[\(a, x, O \spq x\)]' for the
general term of the series of forms \(a\), \(O \spq a\), \(O \spq O
\spq a\).... (5.2522)
2. ... the general form of an operation
\(\Omega\spq(\overline{\eta})\) [as]
\[
[\overline{\xi}, N(\overline{\xi})]\spq (\overline{\eta}) (= [\overline{\eta}, \overline{\xi}, N(\overline{\xi})]). (6.01)
\]
3. ... the general form of a proposition
("truth-function") [as] \([\overline{p}, \overline{\xi},
N(\overline{\xi})]\). (6)
4. The general form of an integer [natural number] [as] \([0, \xi ,
\xi + 1]\). (6.03)
adding that "[t]he concept of number is... the general form
of a number" (6.022). As Frascolla (and Marion after him) has
pointed out, "the general form of a proposition is a
*particular case* of the general form of an
'operation'" (Marion 1998: 21), and all three
general forms (i.e., of operation, proposition, and natural number)
are modeled on the variable presented at (5.2522) (Marion 1998: 22).
Defining "[a]n operation [as] the expression of a relation
between the structures of its result and of its bases" (5.22),
Wittgenstein states that whereas "[a] function cannot be its own
argument,... an operation can take one of its own results as its
base" (5.251).
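The schema of (5.2522) and (5.251) can be sketched in a few lines (the function names are mine): a series of forms is generated by repeatedly applying an operation to its own result, and instantiating the schema with 0 and successor yields the general form of a natural number, [0, ξ, ξ+1] (6.03).

```python
# A sketch: the variable [a, x, O'x] describes a series of forms
# generated by repeated application of an operation O.
def series_of_forms(a, O, n):
    """Return the first n terms a, O'a, O'O'a, ... (cf. 5.2522)."""
    terms, x = [], a
    for _ in range(n):
        terms.append(x)
        # "an operation can take one of its own results as its base" (5.251)
        x = O(x)
    return terms

# With a = 0 and O = successor, the schema generates the naturals:
print(series_of_forms(0, lambda x: x + 1, 5))  # [0, 1, 2, 3, 4]
```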
On Wittgenstein's (5.2522) account of '[\(a, x, O \spq
x\)]',
>
>
> the first term of the bracketed expression is the beginning of the
> series of forms, the second is the form of a term \(x\) arbitrarily
> selected from the series, and the third [\(O \spq x\)] is the form of
> the term that immediately follows \(x\) in the series.
>
>
>
Given that "[t]he concept of successive applications of an
operation is equivalent to the concept 'and so on'"
(5.2523), one can see how the natural numbers can be generated by
repeated iterations of the general form of a natural number, namely
'[\(0, \xi , \xi +1\)]'. Similarly, truth-functional
propositions can be generated, as Russell says in the Introduction to
the *Tractatus* (p. xv), from the general form of a proposition
'[\(\overline{p}\), \(\overline{\xi}\), \(N(\overline{\xi})\)]' by
>
>
> taking any selection of atomic propositions [where \(p\) "stands
> for all atomic propositions"; "the bar over the variable
> indicates that it is the representative of all its values"
> (5.501)], negating them all, then taking any selection of the set of
> propositions now obtained, together with any of the originals [where
> \(x\) "stands for any set of propositions"]--and so
> on indefinitely.
>
>
>
On Frascolla's (1994: 3ff) account,
>
>
> a numerical identity "\(\mathbf{t} = \mathbf{s}\)" is an
> arithmetical theorem if and only if the corresponding equation
> "\(\Omega^t \spq x = \Omega^s \spq x\)", which is framed
> in the language of the general theory of logical operations, can be
> proven.
>
>
>
By proving
>
>
> the equation "\(\Omega^{2 \times 2}\spq x = \Omega^{4}\spq
> x\)", which translates the arithmetic identity "\(2 \times
> 2 = 4\)" into the operational language (6.241),
>
>
>
Wittgenstein thereby outlines "a translation of numerical
arithmetic into a sort of general theory of operations"
(Frascolla 1998: 135).
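Frascolla's reading can be illustrated with a minimal sketch (the encoding is mine, not the *Tractatus*'s): treat a numeral t as the exponent of an operation variable, so that Ω^t'x is t-fold application of Ω, and multiplication is iteration of iteration. The derivation at 6.241 then comes out as the fact that applying "apply Ω twice" twice is the same as applying Ω four times.

```python
# A sketch of numerals as exponents of an operation variable.
def power(omega, t):
    """Return the operation Ω^t: x ↦ Ω'Ω'...'x (t applications)."""
    def op(x):
        for _ in range(t):
            x = omega(x)
        return x
    return op

omega = lambda x: ('O', x)  # an arbitrary operation on symbols
lhs = power(power(omega, 2), 2)('x')  # Ω^{2×2}'x, i.e. (Ω^2)^2'x (cf. 6.241)
rhs = power(omega, 4)('x')            # Ω^4'x
print(lhs == rhs)  # True: the operational counterpart of 2 × 2 = 4
```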
Despite the fact that Wittgenstein clearly does *not* attempt
to reduce mathematics to logic in either Russell's manner or
Frege's manner, or to tautologies, and despite the fact that
Wittgenstein criticizes Russell's Logicism (e.g., the Theory of
Types, 3.31-3.32; the Axiom of Reducibility, 6.1232, etc.) and
Frege's Logicism (6.031, 4.1272,
etc.),[2]
quite a number of commentators, early and recent, have interpreted
Wittgenstein's Tractarian theory of mathematics as a variant of
Logicism (Quine 1940 [1981: 55]; Benacerraf & Putnam 1964a: 14;
Black 1964: 340; Savitt
1979 [1986: 34]; Frascolla 1994: 37; 1997: 354, 356-57, 361;
1998: 133; Marion 1998: 26 & 29; and Potter 2000: 164 and
182-183). There are at least four reasons proffered for this
interpretation.
1. Wittgenstein says that "[m]athematics is a method of
logic" (6.234).
2. Wittgenstein says that "[t]he logic of the world, which is
shown in tautologies by the propositions of logic, is shown in
equations by mathematics" (6.22).
3. According to Wittgenstein, we ascertain the *truth* of
*both* mathematical and logical propositions by the symbol
alone (i.e., by purely formal operations), without making any
('external', non-symbolic) observations of states of
affairs or facts in the world.
4. Wittgenstein's iterative (inductive) "interpretation
of numerals as exponents of an operation variable" is a
"reduction of arithmetic to operation theory", where
"operation" is construed as a "*logical*
operation" (italics added) (Frascolla 1994: 37), which shows
that "the label 'no-classes logicism' tallies with
the *Tractatus* view of arithmetic" (Frascolla 1998: 133;
1997: 354).
Though at least three Logicist interpretations of the
*Tractatus* have appeared within the last 20 years, the
following considerations (Rodych 1995; Wrigley 1998) indicate that
none of these reasons is particularly cogent.
For example, in saying that "[m]athematics is a method of
logic" perhaps Wittgenstein is only saying that since the
general form of a natural number and the general form of a proposition
are both instances of the general form of a (purely formal) operation,
just as truth-functional propositions can be constructed using the
general form of a proposition, (true) mathematical equations can be
constructed using the general form of a natural number. Alternatively,
Wittgenstein may mean that mathematical *inferences* (i.e., not
substitutions) are in accord with, or make use of, logical inferences,
and insofar as mathematical reasoning is logical reasoning,
mathematics is a method of logic.
Similarly, in saying that "[t]he logic of the world" is
shown by tautologies and true mathematical equations (i.e., #2),
Wittgenstein may be saying that since mathematics was invented to help
us count and measure, insofar as it enables us to infer contingent
proposition(s) from contingent proposition(s) (see 6.211 below), it
thereby *reflects* contingent facts and "[t]he logic of
the world". Though logic--which is inherent in natural
('everyday') language (4.002, 4.003, 6.124) and which has
evolved to meet our communicative, exploratory, and survival
needs--is not *invented* in the same way, a valid logical
inference captures the relationship between possible facts and a
*sound* logical inference captures the relationship between
existent facts.
As regards #3, Black, Savitt, and Frascolla have argued that, since we
ascertain the truth of tautologies and mathematical equations without
any appeal to "states of affairs" or "facts",
true mathematical equations and tautologies are *so* analogous
that we can "aptly" describe "the philosophy of
arithmetic of the *Tractatus*... as a kind of
logicism" (Frascolla 1994: 37). The rejoinder to this is that
the similarity that Frascolla, Black and Savitt recognize does not
make Wittgenstein's theory a "kind of logicism" in
Frege's or Russell's sense, because Wittgenstein does not
define numbers "logically" in either Frege's way or
Russell's way, and the similarity (or analogy) between
tautologies and true mathematical equations is neither an identity nor
a relation of reducibility.
Finally, critics argue that the problem with #4 is that there is no
evidence for the claim that the relevant operation is *logical*
in Wittgenstein's or Russell's or Frege's sense of
the term--it seems a purely formal, syntactical operation. "Logical operations are performed with propositions,
arithmetical ones with numbers", says Wittgenstein (*WVC*
218); "[t]he result of a logical operation is a proposition, the
result of an arithmetical one is a number". In sum, critics of
the Logicist interpretation of the *Tractatus* argue that
#1-#4 do not individually or collectively constitute cogent
grounds for a Logicist interpretation of the *Tractatus*.
Another crucial aspect of the *Tractarian* theory of
mathematics is captured in (6.211).
>
>
> Indeed in real life a mathematical proposition is never what we want.
> Rather, we make use of mathematical propositions *only* in
> inferences from propositions that do not belong to mathematics to
> others that likewise do not belong to mathematics. (In philosophy the
> question, 'What do we actually use this word or this proposition
> for?' repeatedly leads to valuable insights.)
>
>
>
Though mathematics and mathematical activity are purely formal and
syntactical, in the *Tractatus* Wittgenstein tacitly
distinguishes between purely formal games with signs, which have no
application in contingent propositions, and mathematical propositions,
which are used to make inferences from contingent proposition(s) to
contingent proposition(s). Wittgenstein does not explicitly say,
however, *how* mathematical equations, which are *not*
genuine propositions, are used in inferences from genuine
proposition(s) to genuine proposition(s) (Floyd 2002: 309; Kremer
2002: 293-94). As we shall see in
SS3.5,
the later Wittgenstein returns to the importance of
extra-mathematical application and uses it to distinguish a mere
"sign-game" from a genuine, mathematical
language-game.
This, in brief, is Wittgenstein's Tractarian theory of
mathematics. In the Introduction to the *Tractatus*, Russell
wrote that Wittgenstein's "theory of number"
"stands in need of greater technical development",
primarily because Wittgenstein had not shown how it could deal with
transfinite numbers (Russell 1922[1974]: xx).
Similarly, in his review of
the *Tractatus*, Frank Ramsey wrote that Wittgenstein's
'account' does not cover all of mathematics partly because
Wittgenstein's theory of equations cannot explain inequalities
(Ramsey 1923: 475). Though it is doubtful that, in 1923, Wittgenstein
would have thought these issues problematic, it certainly is true that
the Tractarian theory of mathematics is essentially a sketch,
especially in comparison with what Wittgenstein begins to develop six
years later.
After the completion of the *Tractatus* in 1918, Wittgenstein
did virtually no philosophical work until February 2, 1929, eleven
months after attending a lecture by the Dutch mathematician L.E.J.
Brouwer.
## 2. The Middle Wittgenstein's Finitistic Constructivism
There is little doubt that Wittgenstein was invigorated by L.E.J.
Brouwer's March 10, 1928 Vienna lecture "Science,
Mathematics, and Language" (Brouwer 1929), which he attended
with F. Waismann and H. Feigl, but it is a gross overstatement to say
that he returned to Philosophy because of this lecture or that his
intermediate interest in the Philosophy of Mathematics issued
primarily from Brouwer's influence. In fact,
Wittgenstein's return to Philosophy and his intermediate work on
mathematics is also due to conversations with Ramsey and members of
the Vienna Circle, to Wittgenstein's disagreement with Ramsey
over identity, and several other factors.
Though Wittgenstein seems not to have read any Hilbert or Brouwer
prior to the completion of the *Tractatus*, by early 1929
Wittgenstein had certainly read work by Brouwer, Weyl, Skolem, Ramsey
(and possibly Hilbert) and, apparently, he had had one or more private
discussions with Brouwer in 1928 (Finch 1977: 260; Van Dalen 2005:
566-567). Thus, the rudimentary treatment of mathematics in the
*Tractatus*, whose principal influences were Russell and Frege,
was succeeded by detailed work on mathematics in the middle period
(1929-1933), which was strongly influenced by the 1920s work of
Brouwer, Weyl, Hilbert, and Skolem.
### 2.1 Wittgenstein's Intermediate Constructive Formalism
To best understand Wittgenstein's intermediate Philosophy of
Mathematics, one must fully appreciate his strong variant of
formalism, according to which "[w]e *make*
mathematics" (*WVC* 34, note 1; *PR* §159) by
inventing purely formal mathematical calculi, with
'stipulated' axioms (*PR* §202), syntactical
rules of transformation, and decision procedures that enable us to
invent "mathematical truth" and "mathematical
falsity" by algorithmically deciding so-called mathematical
'propositions' (*PR* §§122, 162).
The *core idea* of Wittgenstein's formalism from 1929 (if
not 1918) through 1944 is that mathematics is essentially syntactical,
devoid of reference and semantics. The most obvious aspect of this
view, which has been noted by numerous commentators who do not refer
to Wittgenstein as a 'formalist' (Kielkopf 1970:
360-38; Klenk 1976: 5, 8, 9; Fogelin 1968: 267; Frascolla 1994:
40; Marion 1998: 13-14), is that, *contra Platonism*, the
signs and propositions of a mathematical calculus do not
*refer* to anything. As Wittgenstein says at (*WVC* 34,
note 1), "[n]umbers are not represented by proxies; numbers
*are there*". This means not only that numbers are there
in the *use*, it means that the numerals *are* the
numbers, for "[a]rithmetic doesn't talk about numbers, it
works with numbers" (*PR* §109).
>
>
> What arithmetic is concerned with is the schema \(||||\).--But
> does arithmetic talk about the lines I draw with pencil on
> paper?--Arithmetic doesn't talk about the lines, it
> *operates* with them. (*PG* 333)
>
>
>
In a similar vein, Wittgenstein says that (*WVC* 106)
"mathematics is always a machine, a calculus" and
"[a] calculus is an abacus, a calculator, a calculating
machine", which "works by means of strokes, numerals,
etc". The "justified side of formalism", according
to Wittgenstein (*WVC* 105), is that mathematical symbols
"lack a meaning" (i.e.,
*Bedeutung*)--they do not "go proxy
for" *things* which are "their
meaning[s]".
>
>
> You could say arithmetic is a kind of geometry; i.e. what in geometry
> are constructions on paper, in arithmetic are calculations (on
> paper).--You could say it is a more general kind of geometry.
(*PR* §109; *PR* §111)
>
>
>
This is the core of Wittgenstein's life-long formalism. When we
prove a theorem or decide a proposition, we operate in a *purely
formal*, syntactical manner. In *doing* mathematics, we do
not discover pre-existing truths that were "already there
without one knowing" (*PG* 481)--we *invent*
mathematics, bit-by-little-bit. "If you want to know what \(2 +
2 = 4\) means", says Wittgenstein, "you have to ask how we
work it out", because "we consider the process of
calculation as the essential thing" (*PG* 333). Hence,
the only meaning (i.e., sense) that a mathematical proposition has is
*intra-systemic* meaning, which is wholly determined by its
syntactical relations to other propositions of the calculus.
A second important aspect of the intermediate Wittgenstein's
strong formalism is his view that extra-mathematical application
(and/or reference) is *not* a necessary condition of a
mathematical calculus. Mathematical calculi *do not require*
extra-mathematical applications, Wittgenstein argues, since we
"can develop arithmetic completely autonomously and its
application takes care of itself since wherever it's applicable
we may also apply it" (*PR* SS109; cf. *PG*
308, *WVC* 104).
As we shall shortly see, the middle Wittgenstein is also drawn to
strong formalism by a new concern with questions of
*decidability*. Undoubtedly influenced by the writings of
Brouwer and David Hilbert, Wittgenstein uses strong formalism to forge
a new connection between mathematical meaningfulness and algorithmic
decidability.
>
>
> An equation is a rule of syntax. Doesn't that explain why we
> cannot have questions in mathematics that are in principle
> unanswerable? For if the rules of syntax cannot be grasped,
> they're of no use at all.... [This] makes intelligible the
> attempts of the formalist to see mathematics as a game with signs.
(*PR* §121)
>
>
>
In
Section 2.3,
we shall see how Wittgenstein goes beyond both Hilbert and Brouwer by
*maintaining* the Law of the Excluded Middle in a way that
restricts mathematical propositions to expressions that are
algorithmically decidable.
### 2.2 Wittgenstein's Intermediate Finitism
The single most important difference between the Early and Middle
Wittgenstein is that, in the middle period, Wittgenstein rejects
quantification over an infinite mathematical domain, stating that,
contra his *Tractarian* view, such 'propositions'
are not infinite conjunctions and infinite disjunctions simply because
there are no such things.
Wittgenstein's *principal reasons* for developing a
finitistic Philosophy of Mathematics are as follows.
1. Mathematics as Human Invention: According to the middle
Wittgenstein, we invent mathematics, from which it follows that
mathematics and so-called mathematical objects do not exist
independently of our inventions. Whatever is mathematical is
fundamentally a product of human activity.
2. Mathematical Calculi Consist Exclusively of Intensions and
Extensions: Given that we have invented only mathematical extensions
(e.g., symbols, finite sets, finite sequences, propositions, axioms)
and mathematical intensions (e.g., rules of inference and
transformation, irrational numbers *as* rules), these
extensions and intensions, and the calculi in which they reside,
constitute the entirety of mathematics. (It should be noted that
Wittgenstein's usage of 'extension' and
'intension' as regards mathematics differs markedly from
standard contemporary usage, wherein the extension of a predicate is
the set of entities that satisfy the predicate and the intension of a
predicate is the meaning of, or expressed by, the predicate. Put
succinctly, Wittgenstein thinks that the extension of this notion of
concept-and-extension from the domain of existent (i.e., physical)
objects to the so-called domain of "mathematical objects"
is based on a faulty analogy and engenders conceptual confusion. See
#1 just below.)
These two reasons have at least five immediate *consequences*
for Wittgenstein's Philosophy of Mathematics.
1. Rejection of Infinite Mathematical Extensions: Given that a
mathematical extension is a symbol ('sign') or a finite
concatenation of symbols *extended* in space, there is a
categorical difference between mathematical intensions and (finite)
mathematical extensions, from which it follows that "the
mathematical infinite" resides only in recursive rules (i.e.,
intensions). An infinite mathematical extension (i.e., a
*completed*, infinite mathematical extension) is a
contradiction-in-terms.
2. Rejection of Unbounded Quantification in Mathematics: Given that
the mathematical infinite can only be a recursive rule, and given that
a mathematical proposition must have sense, it follows that there
cannot be an infinite mathematical proposition (i.e., an infinite
logical product or an infinite logical sum).
3. Algorithmic Decidability vs. Undecidability: If mathematical
extensions of all kinds are necessarily *finite*, then, *in
principle*, all mathematical propositions are *algorithmically
decidable*, from which it follows that an "undecidable
mathematical proposition" is a contradiction-in-terms. Moreover,
since mathematics is essentially what we have and what we know,
Wittgenstein restricts algorithmic decidability to *knowing*
how to decide a proposition with a known decision procedure.
4. Anti-Foundationalist Account of Real Numbers: Since there are no
infinite mathematical extensions, irrational numbers are rules, not
extensions. Given that an infinite set is a recursive rule (or an
induction) and no such rule can generate all of the things
mathematicians call (or want to call) "real numbers", it
follows that there is no set of 'all' the real numbers and
no such thing as the mathematical continuum.
5. Rejection of Different Infinite Cardinalities: Given the
non-existence of infinite mathematical extensions, Wittgenstein
rejects the standard interpretation of Cantor's diagonal proof
as a proof of infinite sets of greater and lesser cardinalities.
Since we invent mathematics *in its entirety*, we do not
discover pre-existing mathematical objects or facts or that
mathematical objects have certain properties, for "one cannot
discover any connection between parts of mathematics or logic that was
already there without one knowing" (*PG* 481). In
examining mathematics as a purely human invention, Wittgenstein tries
to determine what exactly we have invented and why exactly, in his
opinion, we erroneously think that there are infinite mathematical
extensions.
If, first, we examine what we have invented, we see that we have
invented formal calculi consisting of finite extensions and
intensional rules. If, more importantly, we endeavour to determine
*why* we believe that infinite mathematical extensions exist
(e.g., why we believe that the actual infinite is intrinsic to
mathematics), we find that we conflate mathematical
*intensions* and mathematical *extensions*, erroneously
thinking that there is "a dualism" of "the law and
the infinite series obeying it" (*PR* §180). For
instance, we think that because a real number "endlessly yields
the places of a decimal fraction" (*PR* §186), it
*is* "a totality" (*WVC* 81-82, note
1), when, in reality, "[a]n irrational number isn't the
extension of an infinite decimal fraction,... it's a
law" (*PR* §181) which "yields
extensions" (*PR* §186). A law and a list are
fundamentally different; neither can 'give' what the other
gives (*WVC* 102-103). Indeed, "the mistake in the
set-theoretical approach consists time and again in treating laws and
enumerations (lists) as essentially the same kind of thing"
(*PG* 461).
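The law/list distinction can be put in a small sketch (the example and function name are mine): an intensional rule yields any finite initial segment of an expansion on demand, but no list, however long, is the rule. Here the rule is for the lawlike irrational 0.101001000100001..., whose 1s sit at the triangular positions.

```python
# A sketch: a "law" that yields extensions without being one.
def digit(n):
    """Rule: the nth decimal digit of 0.101001000100001...
    (1 exactly at the triangular positions 1, 3, 6, 10, 15, ...)."""
    k, pos = 1, 1
    while pos < n:
        k += 1
        pos += k
    return 1 if pos == n else 0

# Any finite extension can be produced from the law...
print([digit(n) for n in range(1, 16)])
# [1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]
# ...but, on Wittgenstein's view, the law and the lists it yields are
# categorically different kinds of thing.
```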
Closely related with this conflation of intensions and extensions is
the fact that we mistakenly act as if the word 'infinite'
is a "number word", because in ordinary discourse we
answer the question "how many?" with both (*PG*
463; cf. *PR* §142). But "'[i]nfinite'
is not a *quantity*", Wittgenstein insists (*WVC*
228); the word 'infinite' and a number word like
'five' do not have the same syntax. The words
'finite' and 'infinite' do not function as
adjectives on the words 'class' or 'set',
(*WVC* 102), for the terms "finite class" and
"infinite class" use 'class' in completely
different ways (*WVC* 228). An infinite class is a recursive
rule or "an induction", whereas the symbol for a finite
class is a list or extension (*PG* 461). It is because an
induction has much in common with the multiplicity of a finite class
that we erroneously call it an infinite class (*PR*
§158).
In sum, because a mathematical extension is necessarily a finite
sequence of symbols, an infinite mathematical extension is a
contradiction-in-terms. This is the foundation of Wittgenstein's
finitism. Thus, when we say, e.g., that "there are infinitely
many even numbers", we are *not* saying "there are
an infinite number of even numbers" in *the same sense*
as we can say "there are 27 people in this house"; the
infinite series of natural numbers is nothing but "the infinite
possibility of finite series of numbers"--"[i]t is
senseless to speak of the *whole* infinite number series, as if
it, too, were an extension" (*PR* §144). The
infinite is understood rightly when it is understood, not as a
quantity, but as an "infinite possibility" (*PR*
§138).
Given Wittgenstein's rejection of infinite mathematical
extensions, he adopts finitistic, constructive views on mathematical
quantification, mathematical decidability, the nature of real numbers,
and Cantor's diagonal proof of the existence of infinite sets of
greater cardinalities.
Since a mathematical set is a finite extension, we cannot
*meaningfully* quantify over an infinite mathematical domain,
simply because there is no such thing as an infinite mathematical
domain (i.e., totality, set), and, derivatively, no such things as
infinite conjunctions or disjunctions (G.E. Moore 1955: 2-3; cf.
*AWL* 6; and *PG* 281).
>
>
> [I]t still looks now as if the quantifiers make no sense for numbers.
> I mean: you can't say '\((n) \phi n\)', precisely
> because 'all natural numbers' isn't a bounded
> concept. But then neither should one say a general proposition follows
> from a proposition about the nature of number.
>
>
>
> But in that case it seems to me that we can't use
> generality--all, etc.--in mathematics at all. There's
> no such thing as 'all numbers', simply because there are
infinitely many. (*PR* §126; *PR* §129)
>
>
>
'Extensionalists' who assert that
"\(\varepsilon(0).\varepsilon(1).\varepsilon(2)\) and so
on" is an infinite logical product (*PG* 452) assume or
assert that finite and infinite conjunctions are close
cousins--that the fact that we cannot write down or enumerate all
of the conjuncts 'contained' in an infinite conjunction is
only a "human weakness", for God could surely do so and
God could surely survey such a conjunction in a single glance and
determine its truth-value. According to Wittgenstein, however, this is
*not* a matter of *human* limitation. Because we
mistakenly think that "an infinite conjunction" is similar
to "an enormous conjunction", we erroneously reason that
just as we cannot determine the truth-value of an enormous conjunction
because we don't have enough time, we similarly cannot, due to
human limitations, determine the truth-value of an infinite
conjunction (or disjunction). But the difference here is not one of
degree but of kind: "in the sense in which it is impossible to
check an infinite number of propositions it is also impossible to try
to do so" (*PG* 452). This applies, according to
Wittgenstein, to human beings, but more importantly, it applies also
to God (i.e., an omniscient being), for even God cannot write down or
survey infinitely many propositions because for him too the series is
never-ending or limitless and hence the 'task' is not a
genuine task because it cannot, *in principle*, be done (i.e.,
"infinitely many" is not a number word). As Wittgenstein
says at (*PR* 128; cf. *PG* 479): "'Can God
know all the places of the expansion of \(\pi\)?' would have
been a good question for the schoolmen to ask", for the question
is strictly 'senseless'. As we shall shortly see, on
Wittgenstein's account, "[a] statement about *all*
numbers is not represented by means of a proposition, but by means of
induction" (*WVC* 82).
Similarly, there is no such thing as a mathematical proposition about
*some* number--no such thing as a mathematical proposition
that existentially quantifies over an infinite domain (*PR*
§173).
>
>
> What is the meaning of such a mathematical proposition as
> '\((\exists n) 4 + n = 7\)'? It might be a
> disjunction--\((4 + 0 = 7) \vee{}\) \((4 + 1 = 7) \vee {}\) etc. *ad
> inf.* But what does that mean? I can understand a proposition with
> a beginning and an end. But can one also understand a proposition with
> no end? (*PR* §127)
>
>
>
We are particularly seduced by the feeling or belief that an infinite
*mathematical* disjunction makes good sense in the case where
we can provide a recursive rule for generating each next member of an
infinite sequence. For example, when we say "There exists an odd
perfect number" we are asserting that, in the infinite sequence
of odd numbers, there is (at least) one odd number that is
perfect--we are asserting '\(\phi(1) \vee \phi(3) \vee
\phi(5) \vee{}\) and so on' and we know what would make it true
and what would make it false (*PG* 451). The mistake made here,
according to Wittgenstein (*PG* 451), is that we are implicitly
"*comparing* the proposition
'\((\exists n)\)...' with the proposition...
'There are two foreign words on this page'", which
doesn't provide the grammar of the former
'proposition', but only indicates an analogy in their
respective rules.
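The contrast Wittgenstein is drawing can be illustrated computationally (a sketch of my own, not anything in his text): an existential claim like '\((\exists n) 4 + n = 7\)' is settled by a bounded check, whereas a search through the 'infinite disjunction' \(\phi(1) \vee \phi(3) \vee \phi(5) \vee \ldots\) for an odd perfect number can only ever *confirm* a witness; no finite run refutes the claim.

```python
def exists_n_4_plus_n_equals_7():
    """Decidable: any witness n must satisfy n <= 7 - 4, so the
    'disjunction' (4+0=7) v (4+1=7) v ... is effectively finite."""
    return any(4 + n == 7 for n in range(0, 4))  # n can be at most 3

def is_perfect(k):
    """k is perfect iff it equals the sum of its proper divisors."""
    return k == sum(d for d in range(1, k) if k % d == 0)

def search_odd_perfect(bound):
    """Unbounded 'disjunction' phi(1) v phi(3) v ...: a finite search
    can only confirm a witness; exhausting `bound` refutes nothing."""
    for k in range(1, bound, 2):
        if is_perfect(k):
            return k
    return None  # inconclusive, not a refutation

print(exists_n_4_plus_n_equals_7())  # True
print(search_odd_perfect(5_000))     # None: inconclusive below 5000
```

On Wittgenstein's view, the asymmetry between the two functions is not a matter of degree (a longer search) but of kind: only the first implements a genuine decision procedure.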
On Wittgenstein's intermediate finitism, an expression
quantifying over an infinite domain is *never* a meaningful
proposition, not even when we have proved, for instance, that a
particular number \(n\) has a particular property.
>
>
> The important point is that, even in the case where I am given that
> \(3^2 + 4^2 = 5^2\), I ought *not* to say '\((\exists x,
> y, z, n) (x^n + y^n = z^n)\)', since taken extensionally
> that's meaningless, and taken intensionally this doesn't
> provide a proof of it. No, in this case I ought to express only the
> first equation. (*PR* §150)
>
>
>
Thus, Wittgenstein adopts the radical position that *all*
expressions that quantify over an infinite domain, whether
'conjectures' (e.g., Goldbach's Conjecture, the Twin
Prime Conjecture) or "proved general theorems" (e.g.,
"Euclid's Prime Number Theorem", the Fundamental
Theorem of Algebra), are *meaningless* (i.e.,
'senseless'; '*sinnlos*') expressions
as opposed to "genuine mathematical
*proposition*[s]" (*PR* §168). These
expressions are not (meaningful) mathematical propositions, according
to Wittgenstein, because the Law of the Excluded Middle does not
apply, which means that "we aren't dealing with
propositions of mathematics" (*PR* §151). The
crucial question *why* and in exactly what sense the Law of the
Excluded Middle does not apply to such expressions will be answered in
the next section.
### 2.3 Wittgenstein's Intermediate Finitism and Algorithmic Decidability
The middle Wittgenstein has other grounds for rejecting unrestricted
quantification in mathematics, for on his idiosyncratic account, we
must distinguish between four categories of concatenations of
mathematical symbols.
1. Proved mathematical propositions in a particular mathematical
calculus (no need for "mathematical truth").
2. Refuted mathematical propositions in (or of) a particular
mathematical calculus (no need for "mathematical
falsity").
3. Mathematical propositions for which we know we have in hand an
applicable and effective decision procedure (i.e., we know
*how* to decide them).
4. Concatenations of symbols that are not part of any mathematical
calculus and which, for that reason, are not mathematical propositions
(i.e., are non-propositions).
In his 2004 book (p. 18), Mark van Atten says that
>
> ... [i]ntuitionistically, there are four ["possibilities
> for a proposition with respect to truth"]:
>
>
> 1. \(p\) has been experienced as true
> 2. \(p\) has been experienced as false
> 3. Neither 1 nor 2 has occurred yet, but we know a procedure to
> decide \(p\) (i.e., a procedure that will prove \(p\) or prove \(\neg
> p)\)
> 4. Neither 1 nor 2 has occurred yet, and we do not know a procedure
> to decide \(p\).
>
>
>
What is immediately striking about Wittgenstein's ##1-3
and Brouwer's ##1-3 (Brouwer 1955: 114; 1981: 92) is the
enormous similarity. And yet, for all of the agreement, the
disagreement in #4 is absolutely crucial.
As radical as the respective #3s are, Brouwer and Wittgenstein agree
that an undecided \(\phi\) is a mathematical proposition (for
Wittgenstein, *of* a particular mathematical calculus) if we
know of an applicable decision procedure. They also agree that until
\(\phi\) is decided, it is neither true nor false (though, for
Wittgenstein, 'true' means no more than "proved in
calculus \(\Gamma\)"). What they disagree about is the status of
an ordinary mathematical conjecture, such as Goldbach's
Conjecture. Brouwer admits it as a mathematical proposition, while
Wittgenstein rejects it because we do not know how to algorithmically
decide it. Like Brouwer (1948 [1983: 90]), Wittgenstein holds that
there are no "unknown truth[s]" in mathematics, but unlike
Brouwer he denies the existence of "undecidable
propositions" on the grounds that such a
'proposition' would have no 'sense',
"and the consequence of this is precisely that the propositions
of logic lose their validity for it" (*PR* §173). In
particular, if there *are* undecidable mathematical
propositions (as Brouwer maintains), then at least some mathematical
propositions are not propositions of *any existent*
mathematical calculus. For Wittgenstein, however, it is a defining
feature of a mathematical proposition that it is either decided or
decidable by a known decision procedure *in a mathematical
calculus*. As Wittgenstein says at (*PR* §151),
>
>
> where the law of the excluded middle doesn't apply, no other law
> of logic applies either, because in that case we aren't dealing
> with propositions of mathematics. (Against Weyl and Brouwer).
>
>
>
The point here is *not* that we need truth and falsity in
mathematics--we don't--but rather that every
mathematical proposition (including ones for which an applicable
decision procedure is known) is *known* to be part of a
mathematical calculus.
To maintain this position, Wittgenstein distinguishes between
(meaningful, genuine) mathematical propositions, which have
mathematical sense, and meaningless, senseless
('*sinnlos*') expressions by stipulating that an
expression is a meaningful (genuine) proposition of a mathematical
calculus *iff* we *know* of a proof, a refutation, or an
applicable decision procedure (*PR* §151; *PG* 452;
*PG* 366; *AWL* 199-200). "Only where
there's a method of solution [a 'logical method for
finding a solution'] is there a [mathematical] problem",
he tells us (*PR* §§149, 152; *PG* 393).
"We may only put a question in mathematics (or make a
conjecture)", he adds (*PR* §151), "where the
answer runs: 'I must work it out'".
At (*PG* 468), Wittgenstein emphasizes the importance of
algorithmic decidability clearly and emphatically:
>
>
> In mathematics *everything* is algorithm and *nothing*
> is meaning [*Bedeutung*]; even when it
> doesn't look like that because we seem to be using
> *words* to talk *about* mathematical things. Even these
> words are used to construct an algorithm.
>
>
>
When, therefore, Wittgenstein says (*PG* 368) that if
"[the Law of the Excluded Middle] is supposed not to hold, we
have altered the concept of proposition", he means that an
expression is only a meaningful mathematical proposition if we
*know* of an applicable decision procedure for deciding it
(*PG* 400). If a genuine mathematical proposition is
*undecided*, the Law of the Excluded Middle holds in the sense
that we *know* that we will *prove or refute* the
proposition by applying an applicable decision procedure (*PG*
379, 387).
For Wittgenstein, there simply is no distinction between syntax and
semantics in mathematics: everything is syntax. If we wish to
demarcate between "mathematical propositions" versus
"mathematical pseudo-propositions", as we do, then the
*only* way to ensure that there is no such thing as a
meaningful, but *undecidable* (e.g., independent), proposition
of a given calculus is to stipulate that an expression is only a
meaningful proposition *in* a given calculus (*PR*
§153) if either it has been decided or we *know* of an
applicable decision procedure. In this manner, Wittgenstein defines
*both* a mathematical calculus *and* a mathematical
proposition in *epistemic* terms. A calculus is defined in
terms of stipulations (*PR* §202; *PG* 369),
*known* rules of operation, and *known* decision
procedures, and an expression is only a mathematical proposition
*in* a given calculus (*PR* §155), and only if that
calculus *contains* (*PG* 379) a known (and applicable)
decision procedure, for "you cannot have a logical plan of
search for a *sense* you don't know" (*PR*
§148).
Thus, the middle Wittgenstein rejects undecidable mathematical
propositions on *two* grounds. First, number-theoretic
expressions that quantify over an infinite domain are not
algorithmically decidable, and hence are not meaningful mathematical
propositions.
>
>
> If someone says (as Brouwer does) that for \((x) f\_1 x = f\_2 x\),
> there is, as well as yes and no, also the case of undecidability, this
> implies that '\((x)\)...' is meant extensionally and
> we may talk of the case in which all \(x\) happen to have a property.
> In truth, however, it's impossible to talk of such a case at all
> and the '\((x)\)...' in arithmetic cannot be taken
> extensionally. (*PR* §174)
>
>
>
"Undecidability", says Wittgenstein (*PR*
§174) "presupposes... that the bridge *cannot*
be made with symbols", when, in fact, "[a] connection
between symbols which exists but cannot be represented by symbolic
transformations is a thought that cannot be thought", for
"[i]f the connection is there,... it must be possible to
see it". Alluding to algorithmic decidability, Wittgenstein
stresses (*PR* §174) that "[w]e can assert anything
which can be *checked in practice*", because
"it's a question of the *possibility of
checking*" (italics added).
Wittgenstein's second reason for rejecting an undecidable
mathematical proposition is that it is a
*contradiction-in-terms*. There cannot be "undecidable
propositions", Wittgenstein argues (*PR* §173),
because an expression that is not decidable in some *actual*
calculus is simply not a *mathematical* proposition, since
"every proposition in mathematics must belong to a calculus of
mathematics" (*PG* 376).
This radical position on decidability results in various radical and
counter-intuitive statements about unrestricted mathematical
quantification, mathematical induction, and, especially, the
*sense* of a newly proved mathematical proposition. In
particular, Wittgenstein asserts that uncontroversial mathematical
conjectures, such as Goldbach's Conjecture (hereafter
'GC') and the erstwhile conjecture "Fermat's
Last Theorem" (hereafter 'FLT'), have no sense (or,
perhaps, no *determinate* sense) and that the
*unsystematic* proof of such a conjecture gives it a sense that
it didn't previously have (*PG* 374) because
>
>
> it's unintelligible that I should admit, when I've got the
> proof, that it's a proof of precisely *this* proposition,
> or of the induction meant by this proposition. (*PR*
> §155)
>
>
>
>
>
> Thus Fermat's [Last Theorem] makes no *sense* until I can
> *search* for a solution to the equation in cardinal numbers.
> And 'search' must always mean: search systematically.
> Meandering about in infinite space on the look-out for a gold ring is
> no kind of search. (*PR* §150)
>
>
>
> I say: the so-called 'Fermat's Last Theorem'
> isn't a proposition. (Not even in the sense of a proposition of
> arithmetic.) Rather, it corresponds to an induction. (*PR*
> §189)
>
>
>
To see how Fermat's Last Theorem isn't a proposition and
how it *might* correspond to an induction, we need to examine
Wittgenstein's account of mathematical induction.
### 2.4 Wittgenstein's Intermediate Account of Mathematical Induction and Algorithmic Decidability
Given that one cannot quantify over an infinite mathematical domain,
the question arises: What, if anything, does *any*
number-theoretic proof by mathematical induction actually
*prove*?
On the standard view, a proof by mathematical induction has the
following paradigmatic form.
| | |
| --- | --- |
| **Inductive Base**: | \(\phi(1)\) |
| **Inductive Step**: | \(\forall n(\phi(n) \rightarrow \phi(n + 1))\) |
| **Conclusion**: | \(\forall n\phi(n)\) |
If, however, "\(\forall n\phi(n)\)" is *not* a
meaningful (genuine) mathematical proposition, what are we to make of
this proof?
Wittgenstein's initial answer to this question is decidedly
enigmatic. "An induction is the expression for arithmetical
generality", but "induction isn't itself a
proposition" (*PR* §129).
>
>
> We are not saying that when \(f(1)\) holds and when \(f(c + 1)\)
> follows from \(f(c)\), the proposition \(f(x)\) is *therefore*
> true of all cardinal numbers: but: "the proposition \(f(x)\)
> holds for all cardinal numbers" *means* "it holds
> for \(x = 1\), and \(f(c + 1)\) follows from \(f(c)\)".
> (*PG* 406)
>
>
>
In a proof by mathematical induction, we do not actually prove the
'proposition' [e.g., \(\forall n\phi(n)\)] that is
customarily construed as the *conclusion* of the proof
(*PG* 406, 374; *PR* §164); rather, this
pseudo-proposition or 'statement' stands
'proxy' for the "infinite possibility" (i.e.,
"the induction") that we come to
'*see*' by means of the proof (*WVC* 135).
"I want to say", Wittgenstein concludes, that "once
you've got the induction, it's all over"
(*PG* 407). Thus, on Wittgenstein's account, a particular
proof by mathematical induction should be understood in the following
way.
| | |
| --- | --- |
| **Inductive Base**: | \(\phi(1)\) |
| **Inductive Step**: | \(\phi(n) \rightarrow \phi(n + 1)\) |
| **Proxy Statement**: | \(\phi(m)\) |
Here the 'conclusion' of an inductive proof [i.e.,
"what is to be proved" (*PR* §164)] uses
'\(m\)' rather than '\(n\)' to indicate that
'\(m\)' stands for any *particular* number, while
'\(n\)' stands for any *arbitrary* number. For
Wittgenstein, the *proxy statement* "\(\phi(m)\)"
is *not* a mathematical proposition that "assert[s] its
generality" (*PR* §168); it is an
*eliminable* pseudo-proposition standing proxy for the proved
inductive base and inductive step. Though an inductive proof
*cannot prove* "the infinite possibility of
application" (*PR* §163), it enables us "to
*perceive*" that a *direct* proof of any
*particular* proposition can be constructed (*PR*
§165). For example, once we have proved "\(\phi(1)\)"
and "\(\phi(n) \rightarrow \phi(n + 1)\)", we need not
reiterate *modus ponens* \(m - 1\) times to prove the
particular proposition "\(\phi(m)\)" (*PR*
§164). The direct proof of, say, "\(\phi(714)\)"
(i.e., without 713 iterations of *modus ponens*) "cannot
have a still better proof, say, by my carrying out the derivation as
far as this proposition itself" (*PR* §165).
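Wittgenstein's 'recipe' reading of induction, on which the proof licenses a *direct* proof of any particular \(\phi(m)\) without a general proposition as intermediary, can be sketched as follows (my own illustration; the property \(\phi\), that the sum of the first \(n\) odd numbers is \(n^2\), is just a stand-in for any inductively provable property):

```python
def phi(n):
    """phi(n): the sum of the first n odd numbers equals n^2
    (a stand-in for any inductively provable property)."""
    return sum(2 * k - 1 for k in range(1, n + 1)) == n * n

def unrolled_proof(m):
    """The 'recipe' reading of induction: check the base phi(1), then
    verify an instance of the step phi(n) -> phi(n+1) for each n < m.
    The chain of modus ponens applications yields phi(m); no proposition
    'for all n' is ever asserted along the way."""
    assert phi(1)                          # inductive base
    for n in range(1, m):
        assert (not phi(n)) or phi(n + 1)  # instance of the inductive step
    return phi(m)

print(unrolled_proof(714))  # True: phi(714), reached via 713 step-instances
```

The point of the sketch is only that the base and step together constitute a uniform procedure for reaching any particular \(\phi(m)\); the direct check of \(\phi(714)\) needs none of the 713 intermediate iterations.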
A second, very important impetus for Wittgenstein's radically
constructivist position on mathematical induction is his rejection of
an *undecidable* mathematical proposition.
>
>
> In discussions of the provability of mathematical propositions it is
> sometimes said that there are substantial propositions of mathematics
> whose truth or falsehood must remain undecided. What the people who
> say that don't realize is that such propositions, *if* we
> can use them and want to call them "propositions", are not
> at all the same as what are called "propositions" in other
> cases; because a proof alters the grammar of a proposition.
> (*PG* 367)
>
>
>
In this passage, Wittgenstein is alluding to Brouwer, who, as early as
1907 and 1908, states, first, that "the question of the validity
of the principium tertii exclusi is equivalent to the question
*whether unsolvable mathematical problems exist*",
second, that "[t]here is not a shred of a proof for the
conviction... that there exist no unsolvable mathematical
problems", and, third, that there are meaningful
propositions/'questions', such as "*Do there
occur in the decimal expansion of \(\pi\) infinitely many pairs of
consecutive equal digits*?", to which the Law of the
Excluded Middle does not apply *because* "it must be
considered as uncertain whether problems like [this] are
solvable" (Brouwer, 1908 [1975: 109-110]). "A
fortiori it is not certain that any mathematical problem can either be
solved or proved to be unsolvable", Brouwer says (1907 [1975:
79]), "though HILBERT, in 'Mathematische Probleme',
believes that every mathematician is deeply convinced of
it".
Wittgenstein takes the same data and, in a way, draws the opposite
conclusion. If, as Brouwer says, we are *uncertain* whether all
or some "mathematical problems" are solvable, then we
*know* that we do *not* have in hand an applicable
decision procedure, which means that the alleged mathematical
propositions are *not decidable*, here and now. "What
'mathematical questions' share with genuine
questions", Wittgenstein says (*PR* §151), "is
simply that they can be answered". This means that if we do not
know how to decide an expression, then we do not know how to
*make* it either proved (true) or refuted (false), which means
that the Law of the Excluded Middle "doesn't apply"
and, therefore, that our expression is *not* a mathematical
proposition.
Together, Wittgenstein's finitism and his criterion of
algorithmic decidability shed considerable light on his highly
controversial remarks about putatively *meaningful* conjectures
such as FLT and GC. GC is not a mathematical proposition because we do
not *know how* to decide it, and if someone like G. H. Hardy
says that he 'believes' GC is true (*PG* 381;
*LFM* 123; *PI* §578), we must answer that s/he
only "has a hunch about the possibilities of extension of the
present system" (*LFM* 139)--that one can only
*believe* such an expression is 'correct' if one
knows *how* to prove it. The only sense in which GC (or FLT)
can be proved is that it can "correspond to a *proof* by
induction", which means that the unproved inductive step (e.g.,
"\(G(n) \rightarrow G(n + 1)\)") and the expression
"\(\forall nG(n)\)" are not mathematical propositions
because we have no algorithmic means of looking for an induction
(*PG* 367). A "general proposition" is senseless
prior to an inductive proof "because the question would only
have made sense if a general method of decision had been known
*before* the particular proof was discovered"
(*PG* 402). Unproved 'inductions' or inductive
steps are not meaningful propositions because the Law of the Excluded
Middle does not hold in the sense that we do not know of a decision
procedure by means of which we can prove or refute the expression
(*PG* 400; *WVC* 82).
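The asymmetry Wittgenstein relies on can be put computationally (a sketch in my own terms, not his): each *instance* of GC is algorithmically decidable by a bounded search, while nothing analogous stands behind the unrestricted expression "\(\forall nG(n)\)".

```python
def is_prime(k):
    """Trial-division primality test."""
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def goldbach_instance(n):
    """Decision procedure for a *particular* even n > 2: the finitely
    many candidate decompositions n = p + (n - p) can all be checked."""
    assert n > 2 and n % 2 == 0
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Each G(4), G(6), ... is decidable here and now; for 'all even n'
# no such procedure is in hand -- which, on Wittgenstein's account,
# is why GC is a stimulus rather than a question.
print(all(goldbach_instance(n) for n in range(4, 1000, 2)))  # True
```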
This position, however, seems to rob us of any reason to search for a
'decision' of a meaningless 'expression' such
as GC. The intermediate Wittgenstein says only that "[a]
mathematician is... guided by... certain analogies with the
previous system" and that there is nothing "wrong or
illegitimate if anyone concerns himself with Fermat's Last
Theorem" (*WVC* 144).
>
>
> If e.g. I have a method for looking at integers that satisfy the
> equation \(x^2 + y^2 = z^2\), then the formula \(x^{n} + y^n = z^{n}\)
> may stimulate me. I may let a formula stimulate me. Thus I shall say,
> Here there is a *stimulus*--but not a *question*.
> Mathematical problems are always such stimuli. (*WVC* 144, Jan.
> 1, 1931)
>
>
>
More specifically, a mathematician may let a senseless conjecture such
as FLT stimulate her/him if s/he wishes to know whether a calculus can
be extended without altering its axioms or rules (*LFM*
139).
>
>
> What is here going [o]n [in an attempt to decide GC] is an
> unsystematic attempt at constructing a calculus. If the attempt is
> successful, I shall again have a calculus in front of me, *only a
> different one from the calculus I have been using so far*.
> (*WVC* 174-75; Sept. 21, 1931; italics added)
>
>
>
If, e.g., we succeed in proving GC by mathematical induction (i.e., we
prove "\(G(1)\)" and "\(G(n) \rightarrow G(n +
1)\)"), we will then have a proof of the inductive step, but
since the inductive step was not algorithmically decidable beforehand
(*PR* §§148, 155, 157; *PG* 380), in
constructing the proof we have constructed a *new* calculus, a
new *calculating machine* (*WVC* 106) in which we
*now know how* to use this new "machine-part"
(*RFM* VI, §13) (i.e., the unsystematically proved
inductive step). Before the proof, the inductive step is not a
mathematical proposition with sense (in a particular calculus),
whereas after the proof the inductive step *is* a mathematical
proposition, with a new, determinate sense, in a newly created
calculus. This demarcation of expressions without mathematical sense
and proved or refuted propositions, each with a determinate sense in a
particular calculus, is a view that Wittgenstein articulates in myriad
different ways from 1929 through 1944.
Whether or not it is ultimately defensible--and this is an
absolutely crucial question for Wittgenstein's Philosophy of
Mathematics--this strongly counter-intuitive aspect of
Wittgenstein's account of algorithmic decidability, proof, and
the *sense* of a mathematical proposition is of a piece with his
rejection of *predeterminacy* in mathematics. Even in the case
where we algorithmically decide a mathematical proposition, the
connections thereby made do not pre-exist the algorithmic decision,
which means that even when we have a "mathematical
question" that we decide by decision procedure, the expression
only has a determinate sense *qua* proposition when it is
decided. On Wittgenstein's account, both middle and later,
"[a] new proof gives the proposition a place in a new
system" (*RFM* VI, §13), it "locates it in the
whole system of calculations", though it "does not
mention, certainly does not describe, the whole system of calculation
that stands behind the proposition and gives it sense"
(*RFM* VI, §11).
Wittgenstein's unorthodox position here is a type of
structuralism that partially results from his rejection of
mathematical semantics. We erroneously think, e.g., that GC has a
fully determinate sense because, given "the misleading way in
which the mode of expression of word-language represents the sense of
mathematical propositions" (*PG* 375), we call to mind
false pictures and mistaken, *referential* conceptions of
mathematical propositions whereby GC is *about* a mathematical
reality and so has just as determinate a sense as "There exist
intelligent beings elsewhere in the universe" (i.e., a
proposition that *is* determinately true or false, whether or
not we ever know its truth-value). Wittgenstein breaks with this
tradition, in *all* of its forms, stressing that, in
mathematics, unlike the realm of contingent (or empirical)
propositions, "if I am to know what a proposition like
Fermat's last theorem says", I must know its
*criterion* of truth. Unlike the criterion of truth for an
empirical proposition, which can be known *before* the
proposition is decided, we cannot know the criterion of truth for an
undecided mathematical proposition, though we are "acquainted
with criteria for the truth of *similar* propositions"
(*RFM* VI, §13).
### 2.5 Wittgenstein's Intermediate Account of Irrational Numbers
The intermediate Wittgenstein spends a great deal of time wrestling
with real and irrational numbers. There are two distinct reasons for
this.
First, the *real* reason many of us are unwilling to abandon
the notion of the actual infinite in mathematics is the prevalent
conception of an irrational number as a *necessarily* infinite
extension. "The confusion in the concept of the 'actual
infinite' *arises*" (italics added), says
Wittgenstein (*PG* 471),
>
>
> from the unclear concept of irrational number, that is, from the fact
> that logically very different things are called 'irrational
> numbers' without any clear limit being given to the concept.
>
>
>
Second, and more fundamentally, the intermediate Wittgenstein wrestles
with irrationals in such detail because he opposes foundationalism and
especially its concept of a "gapless *mathematical*
continuum", its concept of a *comprehensive* theory of
the real numbers (Han 2010), and set theoretical conceptions and
'proofs' as a foundation for arithmetic, real number
theory, and mathematics as a whole. Indeed, Wittgenstein's
discussion of irrationals is one with his critique of set theory, for,
as he says, "[m]athematics is ridden through and through with
the pernicious idioms of set theory", such as "the way
people speak of a line as composed of points", when, in fact,
"[a] line is a law and isn't composed of anything at
all" (*PR* §173; *PR* §§181, 183,
& 191; *PG* 373, 460, 461, & 473).
#### 2.5.1 Wittgenstein's Anti-Foundationalism and Genuine Irrational Numbers
Since, on Wittgenstein's terms, mathematics consists exclusively
of extensions and intensions (i.e., 'rules' or
'laws'), an irrational is only an extension insofar as it
is a sign (i.e., a 'numeral', such as
'\(\sqrt{2}\)' or '\(\pi\)'). Given that there
is no such thing as an infinite mathematical *extension*, it
follows that an irrational number is not a unique *infinite
expansion*, but rather a unique recursive rule or *law*
(*PR* §181) that yields rational numbers (*PR*
§186; *PR* §180).
>
>
> The rule for working out places of \(\sqrt{2}\) is itself the numeral
> for the irrational number; and the reason I here speak of a
> 'number' is that I can calculate with these signs (certain
> rules for the construction of rational numbers) just as I can with
> rational numbers themselves. (*PG* 484)
>
>
>
Due, however, to his anti-foundationalism, Wittgenstein takes the
radical position that not all recursive real numbers (i.e., computable
numbers) are genuine real numbers--a position that distinguishes
his view from even Brouwer's.
The problem, as Wittgenstein sees it, is that mathematicians,
especially foundationalists (e.g., set theorists), have sought to
accommodate physical continuity by a theory that
'describes' the mathematical continuum (*PR*
§171). When, for example, we think of continuous motion and the
(mere) density of the rationals, we reason that if an object moves
continuously from A to B, and it travels *only* the distances
marked by "rational points", then it must *skip*
some distances (intervals, or points) *not* marked by rational
numbers. But if an object in continuous motion travels distances that
*cannot* be commensurately measured by rationals alone, there
must be 'gaps' between the rationals (*PG* 460),
and so we must fill them, first, with recursive irrationals, and then,
because "the set of *all* recursive irrationals"
still leaves gaps, with "lawless irrationals".
>
>
> [T]he enigma of the continuum arises because language misleads us into
> applying to it a picture that doesn't fit. Set theory preserves
> the inappropriate picture of something discontinuous, but makes
> statements about it that contradict the picture, under the impression
> that it is breaking with prejudices; whereas what should really have
> been done is to point out that the picture just doesn't
> fit... (*PG* 471)
>
>
>
We add nothing that is needed to the differential and integral calculi
by 'completing' a theory of real numbers with
pseudo-irrationals and lawless irrationals, first because there are no
gaps on the number line (*PR* §§181, 183, & 191;
*PG* 373, 460, 461, & 473; *WVC* 35) and, second,
because these alleged irrational numbers are not needed for a theory
of the 'continuum' simply because there is no mathematical
continuum. As the later Wittgenstein says (*RFM* V, §32),
"[t]he picture of the number line is an absolutely natural one
up to a certain point; that is to say so long as it is not used for a
general theory of real numbers". We have gone awry by
misconstruing the nature of the geometrical line as a continuous
collection of points, each with an associated real number, which has
taken us well beyond the 'natural' picture of the number
line in search of a "general theory of real numbers" (Han
2010).
Thus, the principal reason Wittgenstein rejects certain constructive
(computable) numbers is that they are unnecessary creations which
engender conceptual confusions in mathematics (especially set theory).
One of Wittgenstein's main aims in his lengthy discussions of
rational numbers and pseudo-irrationals is to show that
pseudo-irrationals, which are allegedly needed for the mathematical
continuum, are not needed at all.
To this end, Wittgenstein demands (a) that a real number must be
"compar[able] with any rational number taken at random"
(i.e., "it can be established whether it is greater than, less
than, or equal to a rational number" (*PR* SS191))
and (b) that "[a] number must measure in and of itself"
and if a 'number' "leaves it to the rationals, we
have no need of it" (*PR* SS191) (Frascolla 1980:
242-243; Shanker 1987: 186-192; Da Silva 1993:
93-94; Marion 1995a: 162, 164; Rodych 1999b, 281-291;
Lampert 2009).
To demonstrate that some recursive (computable) reals are not genuine
real numbers because they fail to satisfy (a) and (b), Wittgenstein
defines the putative recursive real number
\[
\substack{5 \rightarrow 3 \\ \sqrt{2}}
\]
as the rule "Construct the decimal expansion for \(\sqrt{2}\),
replacing every occurrence of a '5' with a
'3'" (*PR* §182); he similarly defines
\(\pi '\) as
\[
\substack{7 \rightarrow 3\\ \pi}
\]
(*PR* §186) and, in a later work, redefines \(\pi '\)
as
\[
\substack{777 \rightarrow 000 \\ \pi}
\]
(*PG* 475).
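Both definitions are effectively computable. A sketch of my own (using Python's `decimal` module and the *PR* §182 rule for \(\sqrt{2}\), whose digits are easy to verify) makes vivid why Wittgenstein calls such rules notation-dependent: the substitution operates on the base-10 *numerals* of an expansion, not on the number's arithmetical role, so the same rule stated in another base would pick out a different point.

```python
from decimal import Decimal, getcontext

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the point."""
    getcontext().prec = n + 10                 # extra guard digits
    return str(Decimal(2).sqrt()).split(".")[1][:n]

def sqrt2_prime_digits(n):
    """The pseudo-irrational of PR 182: the expansion of sqrt(2) with
    every '5' replaced by '3'.  The rule is parasitic on the decimal
    notation rather than on "the idioms of arithmetic"."""
    return sqrt2_digits(n).replace("5", "3")

print(sqrt2_digits(12))        # '414213562373'
print(sqrt2_prime_digits(12))  # '414213362373'
```

That the construction runs at all underlines Wittgenstein's point: his objection to such numbers is not that they are incomputable, but that they are 'homeless' and measure nothing.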
Although a pseudo-irrational such as \(\pi '\) (on either definition)
is "as unambiguous as ... \(\pi\) or \(\sqrt{2}\)"
(*PG* 476), it is 'homeless' according to
Wittgenstein because, instead of using "the idioms of
arithmetic" (*PR* §186), it is dependent upon the
particular 'incidental' notation of a particular system
(i.e., in some particular base) (*PR* §188; *PR*
§182; and *PG* 475). If we speak of various
base-notational systems, we might say that \(\pi\) belongs to
*all* systems, while \(\pi '\) belongs only to one, which shows
that \(\pi '\) is not a genuine irrational because "there
can't be irrational numbers of different types"
(*PR* SS180). Furthermore, pseudo-irrationals do
*not* measure because they are homeless, artificial
constructions parasitic upon numbers which have a natural place in a
calculus that can be used to measure. We simply do not need these
aberrations, because they are not sufficiently comparable to rationals
and genuine irrationals. They are *not* irrational numbers
according to Wittgenstein's criteria, which define, Wittgenstein
interestingly asserts, "precisely what has been meant or looked
for under the name 'irrational number'" (*PR*
SS191).
For exactly the same reason, if we define a "lawless
irrational" as either (a) a *non*-rule-governed,
non-periodic, infinite expansion in some base, or (b) a
"free-choice sequence", Wittgenstein rejects
"lawless irrationals" because, insofar as they are not
rule-governed, they are not comparable to rationals (or irrationals)
and they are not needed.

> [W]e cannot say that the decimal fractions developed in accordance
> with a law still need supplementing by an infinite set of irregular
> infinite decimal fractions that would be 'brushed under the
> carpet' if we were to *restrict* ourselves to those
> *generated by a law*,

Wittgenstein argues, for "[w]here is there such an infinite
decimal that is generated by no law" "[a]nd how would we
notice that it was missing?" (*PR* SS181; cf.
*PG* 473, 483-84). Similarly, a free-choice sequence,
like a recipe for "endless bisection" or "endless
dicing", is not an infinitely complicated *mathematical
law* (or rule), but rather no law at all, for after each
individual throw of a coin, the point remains "infinitely
indeterminate" (*PR* SS186). For closely related
reasons, Wittgenstein ridicules the Multiplicative Axiom (Axiom of
Choice) both in the middle period (*PR* SS146) and in the
later period (*RFM* V, SS25; VII, SS33).
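The contrast between a lawful expansion and a free-choice sequence can be made vivid in code. In this sketch (an illustration of mine, not Wittgenstein's), the first generator is a rule that fixes every digit in advance, while the second settles each digit only by a fresh 'throw', so that at every stage the continuation remains indeterminate:

```python
import random
from itertools import islice

def lawful_digits():
    # A rule of expansion: the nth digit is determined in advance (here n mod 10).
    n = 0
    while True:
        yield n % 10
        n += 1

def free_choice_digits(rng=None):
    # A 'free-choice sequence': each digit exists only once it is thrown;
    # no law determines the digits that have not yet been chosen.
    rng = rng or random.Random()
    while True:
        yield rng.randrange(10)

print(list(islice(lawful_digits(), 5)))      # [0, 1, 2, 3, 4]
print(list(islice(free_choice_digits(), 5))) # settled only as the throws are made
```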
#### 2.5.2 Wittgenstein's Real Number Essentialism and the Dangers of Set Theory
Superficially, at least, it seems as if Wittgenstein is offering an
essentialist argument for the conclusion that real number arithmetic
*should not* be extended in such-and-such a way. Such an
*essentialist* account of real and irrational numbers seems to
conflict with the actual freedom mathematicians have to extend and
invent, with Wittgenstein's intermediate claim (*PG* 334)
that "[f]or [him] one calculus is as good as another", and
with Wittgenstein's acceptance of complex and imaginary numbers.
Wittgenstein's foundationalist critic (e.g., set theorist) will
undoubtedly say that we have extended the term "irrational
number" to lawless and pseudo-irrationals because they are
needed for the mathematical continuum and because such
"conceivable numbers" are much more like rule-governed
irrationals than rationals.
Though Wittgenstein stresses differences where others see similarities
(*LFM* 15), in his intermediate attacks on pseudo-irrationals
and foundationalism, he is not just emphasizing differences, he is
attacking set theory's "pernicious idioms"
(*PR* SS173) and its "crudest imaginable
misinterpretation of its own calculus" (*PG*
469-70) in an attempt to dissolve "misunderstandings
without which [set theory] would never have been invented",
since it is "of no other use" (*LFM* 16-17).
Complex and imaginary numbers have grown organically within
mathematics, and they have proved their mettle in scientific
applications, but pseudo-irrationals are *inorganic* creations
invented solely for the sake of mistaken foundationalist aims.
Wittgenstein's main point is *not* that we cannot create
further recursive real numbers--indeed, we can create as many as
we want--his point is that we can only really speak of different
*systems* (sets) of real numbers (*RFM* II, SS33)
that are enumerable by a rule, and any attempt to speak of "the
set of all real numbers" or any piecemeal attempt to add or
consider new recursive reals (e.g., diagonal numbers) is a useless
and/or futile endeavour based on foundational misconceptions. Indeed,
in 1930 manuscript and typescript (hereafter MS and TS, respectively)
passages on irrationals and Cantor's diagonal, which were not
included in *PR* or *PG*, Wittgenstein says: "The
concept 'irrational number' is a dangerous
pseudo-concept" (MS 108, 176; 1930; TS 210, 29; 1930). As we
shall see in the next section, on Wittgenstein's account, if we
do not understand irrationals rightly, we *cannot but* engender
the mistakes that constitute set theory.
### 2.6 Wittgenstein's Intermediate Critique of Set Theory
Wittgenstein's critique of set theory begins somewhat benignly
in the *Tractatus*, where he denounces Logicism and says
(6.031) that "[t]he theory of classes is completely superfluous
in mathematics" because, at least in part, "the generality
required in mathematics is not accidental generality". In his
middle period, Wittgenstein begins a full-out assault on set theory
that never abates. Set theory, he says, is "utter
nonsense" (*PR* SSSS145, 174; WVC 102;
*PG* 464, 470), 'wrong' (*PR* SS174),
and 'laughable' (*PG* 464); its "pernicious
idioms" (*PR* SS173) mislead us and the crudest
possible misinterpretation is the very impetus of its invention
(Hintikka 1993: 24, 27).
Wittgenstein's intermediate critique of transfinite set theory
(hereafter "set theory") has two main components: (1) his
discussion of the intension-extension distinction, and (2) his
criticism of non-denumerability *as cardinality*. Late in the
middle period, Wittgenstein seems to become more aware of the
unbearable conflict between his *strong formalism* (*PG*
334) and his denigration of set theory as a purely formal,
*non*-mathematical calculus (Rodych 1997: 217-219),
which, as we shall see in
Section 3.5,
leads to the use of an extra-mathematical application criterion to
demarcate transfinite set theory (and other purely formal sign-games)
from mathematical calculi.
#### 2.6.1 Intensions, Extensions, and the Fictitious Symbolism of Set Theory
The search for a comprehensive theory of the real numbers and
mathematical continuity has led to a "fictitious
symbolism" (*PR* SS174).

> Set theory attempts to grasp the infinite at a more general level than
> the investigation of the laws of the real numbers. It says that you
> can't grasp the actual infinite by means of mathematical
> symbolism at all and therefore it can only be described and not
> represented. ... One might say of this theory that it buys a pig
> in a poke. Let the infinite accommodate itself in this box as best it
> can. (*PG* 468; cf. *PR* SS170)

As Wittgenstein puts it at (*PG* 461),

> the mistake in the set-theoretical approach consists time and again in
> treating laws and enumerations (lists) as essentially the same kind of
> thing and arranging them in parallel series so that one fills in gaps
> left by the other.

This is a mistake because it is 'nonsense' to say
"we cannot enumerate all the numbers of a set, but we can give a
description", for "[t]he one is not a substitute for the
other" (*WVC* 102; June 19, 1930); "there
isn't a dualism [of] the law and the infinite series obeying
it" (*PR* SS180).
"Set theory is wrong" and nonsensical (*PR*
SS174), says Wittgenstein, because it presupposes a fictitious
symbolism of infinite signs (*PG* 469) instead of an actual
symbolism with finite signs. The grand intimation of set theory, which
begins with "Dirichlet's concept of a function"
(*WVC* 102-03), is that we can *in principle*
represent an infinite set by an enumeration, but because of human or
physical limitations, we will instead *describe* it
intensionally. But, says Wittgenstein, "[t]here can't be
possibility and actuality in mathematics", for mathematics is an
*actual* calculus, which "is concerned only with the
signs with which it *actually* operates" (*PG*
469). As Wittgenstein puts it at (*PR* SS159), the fact
that "we can't describe mathematics, we can only do
it" in and "of itself abolishes every 'set
theory'".
Perhaps the best example of this phenomenon is Dedekind, who in giving
his 'definition' of an "infinite class" as
"a class which is similar to a proper subclass of itself"
(*PG* 464), "tried to *describe* an infinite
class" (*PG* 463). If, however, we try to apply this
'definition' to a particular class in order to ascertain
whether it is finite or infinite, the attempt is
'laughable' if we apply it to a *finite* class,
such as "a certain row of trees", and it is
'nonsense' if we apply it to "an infinite
class", for we cannot even attempt "to co-ordinate
it" (*PG* 464), because "the relation \(m = 2n\)
[does not] correlate the class of all numbers with one of its
subclasses" (*PR* SS141), it is an "infinite
process" which "correlates any arbitrary number with
another". So, although we *can* use \(m = 2n\) on the
*rule* for generating the naturals (i.e., our domain) and
thereby construct the pairs (2,1), (4,2), (6,3), (8,4), etc., in doing
so we do not correlate two *infinite* sets or extensions
(*WVC* 103). If we try to apply Dedekind's definition as
a *criterion* for determining whether a given set is infinite
by establishing a 1-1 correspondence between two inductive rules
for generating "infinite extensions", one of which is an
"extensional subset" of the other, we can't possibly
learn anything we didn't already know when we applied the
'criterion' to two inductive rules. If Dedekind or anyone
else insists on calling an inductive rule an "infinite
set", he and we must still mark the categorical difference
between such a set and a finite set with a determinate, finite
cardinality.
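Wittgenstein's construal of \(m = 2n\) as an unending process of pairing, rather than a completed correlation of two infinite extensions, can be sketched as a generator (an illustration of mine; the function name is hypothetical):

```python
from itertools import islice

def correlate():
    # The rule m = 2n: it "correlates any arbitrary number with another",
    # one pair at a time -- an "infinite process", never two completed sets.
    n = 1
    while True:
        yield (2 * n, n)
        n += 1

print(list(islice(correlate(), 4)))  # [(2, 1), (4, 2), (6, 3), (8, 4)]
```

However long it runs, the generator has only ever produced a finite list of pairs, which is the extensional point Wittgenstein presses against Dedekind's 'definition'.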
Indeed, on Wittgenstein's account, the failure to properly
distinguish mathematical extensions and intensions is the root cause
of the mistaken interpretation of Cantor's diagonal proof as a
proof of the existence of infinite sets of lesser and greater
cardinality.
#### 2.6.2 Against Non-Denumerability
Wittgenstein's criticism of non-denumerability is primarily
implicit during the middle period. Only after 1937 does he provide
concrete arguments purporting to show, e.g., that Cantor's
diagonal *cannot* prove that some infinite sets have greater
'multiplicity' than others.
Nonetheless, the intermediate Wittgenstein clearly rejects the notion
that a non-denumerably infinite set is greater in cardinality than a
denumerably infinite set.

> When people say 'The set of all transcendental numbers is
> greater than that of algebraic numbers', that's nonsense.
> The set is of a different kind. It isn't 'no longer'
> denumerable, it's simply not denumerable! (*PR*
> SS174)

As with his intermediate views on genuine irrationals and the
Multiplicative Axiom, Wittgenstein here looks at the diagonal proof of
the non-denumerability of "the set of transcendental
numbers" as one that shows only that transcendental numbers
cannot be recursively enumerated. It is nonsense, he says, to go from
the warranted conclusion that these numbers are not, in principle,
enumerable to the conclusion that the *set* of transcendental
numbers is greater in cardinality than the set of algebraic numbers,
which is recursively enumerable. What we have here are two very
different conceptions of a number-type. In the case of algebraic
numbers, we have a decision procedure for determining of any given
number whether or not it is algebraic, *and* we have a method
of enumerating the algebraic numbers such that we can *see*
that 'each' algebraic number "will be"
enumerated. In the case of transcendental numbers, on the other hand,
we have proofs that some numbers are transcendental (i.e.,
non-algebraic), *and* we have a proof that we cannot
recursively enumerate each and every thing we would call a
"transcendental number".
At (*PG* 461), Wittgenstein similarly speaks of set
theory's "mathematical pseudo-concepts" leading to a
fundamental difficulty, which begins when we unconsciously presuppose
that there is sense to the idea of ordering the rationals by
size--"that the *attempt* is
thinkable"--and culminates in similarly thinking that it is
possible to enumerate *the real numbers*, which we then
discover is impossible.
Though the intermediate Wittgenstein certainly seems highly critical
of the alleged proof that some infinite sets (e.g., the reals) are
greater in cardinality than other infinite sets, and though he
discusses the "diagonal procedure" in February 1929 and in
June 1930 (MS 106, 266; MS 108, 180), along with a diagonal diagram,
these and other early-middle ruminations did not make it into the
typescripts for either *PR* or *PG*. As we shall see in
Section 3.4,
the later Wittgenstein analyzes Cantor's diagonal and claims of
non-denumerability in some detail.
## 3. The Later Wittgenstein on Mathematics: Some Preliminaries
The first and most important thing to note about Wittgenstein's
later Philosophy of Mathematics is that *RFM*, first published
in 1956, consists of *selections* taken from a number of
manuscripts (1937-1944), most of one large typescript (1938),
and three short typescripts (1938), each of which constitutes an
Appendix to (*RFM* I). For this reason and because some
manuscripts containing much material on mathematics (e.g., MS 123)
were not used at all for
*RFM*, philosophers have not been able to read
Wittgenstein's later remarks on mathematics as they were written
in the manuscripts used for *RFM*, and they have not had access
(until the 2000-2001 release of the *Nachlass* on CD-ROM)
to much of Wittgenstein's later work on mathematics. It must be
emphasized, therefore, that this *Encyclopedia* article is
being written during a transitional period. Until philosophers have
used the *Nachlass* to build a comprehensive picture of
Wittgenstein's complete and evolving Philosophy of Mathematics,
we will not be able to say definitively which views the later
Wittgenstein retained, which he changed, and which he dropped. In the
interim, this article will outline Wittgenstein's later
Philosophy of Mathematics, drawing primarily on *RFM*, to a
much lesser extent *LFM* (1939 Cambridge lectures), and, where
possible, previously unpublished material in Wittgenstein's
*Nachlass*.
It should also be noted at the outset that commentators disagree about
the continuity of Wittgenstein's middle and later Philosophies
of Mathematics. Some argue that the later views are significantly
different from the intermediate views (Frascolla 1994; Gerrard 1991:
127, 131-32; Floyd 2005: 105-106), while others argue
that, for the most part, Wittgenstein's Philosophy of
Mathematics evolves from the middle to the later period without
significant changes or renunciations (Wrigley 1993; Marion 1998). The remainder of this article adopts the
second interpretation, explicating Wittgenstein's later
Philosophy of Mathematics as largely continuous with his intermediate
views, except for the important introduction of an extra-mathematical
application criterion.
### 3.1 Mathematics as a Human Invention
Perhaps the most important constant in Wittgenstein's Philosophy
of Mathematics, middle and late, is that he consistently maintains
that mathematics is our human invention, and that, indeed, everything
in mathematics is invented. Just as the middle Wittgenstein says that
"[w]e *make* mathematics", the later Wittgenstein
says that we 'invent' mathematics (*RFM* I,
SS168; II, SS38; V, SSSS5, 9 and 11; *PG*
469-70) and that "the mathematician is not a discoverer:
he is an inventor" (*RFM*, Appendix II, SS2;
*LFM* 22, 82). Nothing *exists* mathematically unless
and until we have invented it.
In arguing against mathematical discovery, Wittgenstein is not just
rejecting Platonism, he is also rejecting a rather standard
philosophical view according to which human beings invent mathematical
calculi, but once a calculus has been invented, we thereafter discover
finitely many of its infinitely many provable and true theorems. As
Wittgenstein himself asks (*RFM* IV, SS48), "might it
not be said that the *rules* lead this way, even if no one went
it?" If "someone produced a proof [of
'Goldbach's theorem']",
"[c]ouldn't one say", Wittgenstein asks
(*LFM* 144), "that the *possibility* of this proof
was a fact in the realms of mathematical reality"--that
"[i]n order [to] find it, it must in some sense be
there"--"[i]t must be a possible
structure"?
Unlike many or most philosophers of mathematics, Wittgenstein resists
the 'Yes' answer that we discover truths about a
mathematical calculus that *come into existence* the moment we
invent the calculus (*PR* SS141; *PG* 283, 466;
*LFM* 139). Wittgenstein rejects the modal reification of
possibility as actuality--that provability and constructibility
are (actual) facts--by arguing that it is at the very least
wrong-headed to say with the Platonist that because "a straight
line *can* be drawn between any two points,... the line
already exists even if no one has drawn it"--to say
"[w]hat in the ordinary world we call a possibility is in the
geometrical world a reality" (*LFM* 144; *RFM* I,
SS21). One might as well say, Wittgenstein suggests (*PG*
374), that "chess only had to be *discovered*, it was
always there!"
At MS 122 (3v; Oct. 18, 1939), Wittgenstein once again emphasizes the
difference between illusory mathematical discovery and genuine
mathematical invention.

> I want to get away from the formulation: "I now know more about
> the calculus", and replace it with "I now have a different
> calculus". The sense of this is always to keep before
> one's eyes the full scale of the gulf between a mathematical
> knowing and non-mathematical
> knowing.[3]

And as with the middle period, the later Wittgenstein similarly says
(MS 121, 27r; May 27, 1938) that "[i]t helps if one says: the
proof of the Fermat proposition is not to be discovered, but to be
*invented*".

> The difference between the 'anthropological' and the
> mathematical account is that in the first we are not tempted to speak
> of 'mathematical facts', but rather that in this account
> the *facts* are never mathematical ones, never make
> *mathematical* propositions true or false. (MS 117, 263; March
> 15, 1940)

There are no mathematical facts just as there are no (genuine)
mathematical propositions. Repeating his intermediate view, the later
Wittgenstein says (MS 121, 71v; Dec. 27, 1938): "Mathematics
consists of [calculi | calculations], not of propositions". This
radical constructivist conception of mathematics prompts Wittgenstein
to make notorious remarks--remarks that virtually no one else
would make--such as the infamous (*RFM* V, SS9):
"However queer it sounds, the further expansion of an irrational
number is a further expansion of mathematics".
#### 3.1.1 Wittgenstein's Later Anti-Platonism: The Natural History of Numbers and the Vacuity of Platonism
As in the middle period, the later Wittgenstein maintains that
mathematics is essentially syntactical and non-referential, which, in
and of itself, makes Wittgenstein's philosophy of mathematics
anti-Platonist insofar as Platonism is the view that mathematical
terms and propositions *refer* to objects and/or facts and that
mathematical propositions are *true* by virtue of agreeing with
*mathematical facts*.
The later Wittgenstein, however, wishes to 'warn' us that
our thinking is saturated with the idea of "[a]rithmetic as the
natural history (mineralogy) of numbers" (*RFM* IV,
SS11). When, for instance, Wittgenstein discusses the claim that
fractions cannot be ordered by magnitude, he says that this sounds
'remarkable' in a way that a mundane proposition of the
differential calculus does not, for the latter proposition is
associated with an application in physics,

> whereas *this proposition* ... seems to
> [solely] concern... the natural history of
> mathematical objects themselves. (*RFM* II, SS40)

Wittgenstein stresses that he is trying to 'warn' us
against this 'aspect'--the idea that the foregoing
proposition about fractions "introduces us to the mysteries of
the mathematical world", which exists somewhere as a completed
totality, awaiting our prodding and our discoveries. The fact that we
regard mathematical propositions as being about mathematical objects
and mathematical investigation "as the exploration of these
objects" is "already mathematical alchemy", claims
Wittgenstein (*RFM* V, SS16), since

> it is not possible to appeal to the meaning
> [*Bedeutung*] of the signs in
> mathematics,... because it is only mathematics that gives them
> their meaning [*Bedeutung*].

Platonism is *dangerously misleading*, according to
Wittgenstein, because it suggests a picture of *pre*-existence,
*pre*determination and discovery that is completely at odds
with what we find if we actually examine and describe mathematics and
mathematical activity. "I should like to be able to
describe", says Wittgenstein (*RFM* IV, SS13),
"how it comes about that mathematics appears to us now as the
natural history of the domain of numbers, now again as a collection of
rules".
Wittgenstein, however, does *not* endeavour to *refute*
Platonism. His aim, instead, is to clarify what Platonism is and what
it says, implicitly and explicitly (including variants of Platonism
that claim, e.g., that if a proposition is *provable* in an
axiom system, then there already exists a path [i.e., a proof] from
the axioms to that proposition (*RFM* I, SS21; Marion 1998:
13-14, 226; Steiner 2000:
334)). Platonism is either "a mere truism" (*LFM*
239), Wittgenstein says, or it is a 'picture' consisting
of "an infinity of shadowy worlds" (*LFM* 145),
which, as such, lacks 'utility' (cf. *PI*
SS254) because it explains nothing and it misleads at every
turn.
### 3.2 Wittgenstein's Later Finitistic Constructivism
Though commentators and critics do not agree as to whether the later
Wittgenstein is still a finitist and whether, if he is, his finitism
is as radical as his intermediate rejection of unbounded mathematical
quantification (Maddy 1986: 300-301, 310), the overwhelming
evidence indicates that the later Wittgenstein still rejects the
actual infinite (*RFM* V, SS21; *Zettel* SS274,
1947) and infinite mathematical extensions.
The first, and perhaps most definitive, indication that the later
Wittgenstein maintains his finitism is his continued and consistent
insistence that irrational numbers are rules for constructing finite
expansions, *not* infinite mathematical extensions. "The
concepts of infinite decimals in mathematical propositions are not
concepts of series", says Wittgenstein (*RFM* V,
SS19), "but of the unlimited technique of expansion of
series". We are misled by "[t]he extensional definitions
of functions, of real numbers etc." (*RFM* V, SS35),
but once we recognize the Dedekind cut as "an extensional
*image*", we see that we are not "led to
\(\sqrt{2}\) by way of the concept of a cut" (*RFM* V,
SS34). On the later Wittgenstein's account, there simply is
no *property*, no *rule*, no *systematic means*
of defining each and every irrational number *intensionally*,
which means there is *no criterion* "for the irrational
numbers being *complete*" (*PR* SS181).
As in his intermediate position, the later Wittgenstein claims that
'\(\aleph\_0\)' and "infinite series" get their
mathematical uses from the use of 'infinity' in ordinary
language (*RFM* II, SS60). Although, in ordinary language,
we often use 'infinite' and "infinitely many"
as answers to the question "how many?", and though we
associate infinity with the enormously large, the principal
*use* we make of 'infinite' and
'infinity' is to speak of *the unlimited*
(*RFM* V, SS14) and unlimited *techniques*
(*RFM* II, SS45; *PI* SS218). This fact is
brought out by the fact "that the technique of learning
\(\aleph\_0\) numerals is different from the technique of learning
100,000 numerals" (*LFM* 31). When we say, e.g., that
"there are an infinite number of even numbers" we mean
that we have a mathematical technique or rule for generating even
numbers which is *limitless*, which is markedly different from
a limited technique or rule for generating a finite number of numbers,
such as 1-100,000,000. "We learn an endless
technique", says Wittgenstein (*RFM* V, SS19),
"but what is in question here is not some gigantic
extension".
An infinite sequence, for example, is not a gigantic extension because
it is not an extension, and '\(\aleph\_0\)' is not a
cardinal number, for "how is this picture connected with the
*calculus*", given that "its connexion is not that
of the picture | | | | with 4" (i.e., given that
'\(\aleph\_0\)' is not connected to a (finite) extension)?
This shows, says Wittgenstein (*RFM* II, SS58), that we
ought to avoid the word 'infinite' in mathematics wherever
it seems to give a meaning to the calculus, rather than acquiring its
meaning from the calculus and its use in the calculus. Once we see
that the calculus contains nothing infinite, we should not be
'disappointed' (*RFM* II, SS60), but simply
note (*RFM* II, SS59) that it is not "really
necessary... to conjure up the picture of the infinite (of the
enormously big)".
A second strong indication that the later Wittgenstein maintains his
finitism is his continued and consistent treatment of
'propositions' of the type "There are three
consecutive 7s in the decimal expansion of \(\pi\)" (hereafter
'PIC').[4]
In the middle period, PIC (and its putative negation, \(\neg\)PIC,
namely, "It is not the case that there are three consecutive 7s
in the decimal expansion of \(\pi\)") is *not* a
meaningful mathematical "statement at all" (*WVC*
81-82: note 1). On Wittgenstein's intermediate view,
PIC--like FLT, GC, and the Fundamental Theorem of
Algebra--is *not* a mathematical proposition because we do
not have in hand an applicable decision procedure by which we can
decide it in a particular calculus. For this reason, we can only
meaningfully state *finitistic* propositions regarding the
expansion of \(\pi\), such as "There exist three consecutive 7s
in the first 10,000 places of the expansion of \(\pi\)"
(*WVC* 71; 81-82, note 1).
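Such a finitistic proposition is algorithmically decidable: a finite calculation settles it either way. The sketch below is mine (Wittgenstein of course gives no algorithm); it uses Gibbons' streaming spigot to generate decimal places of \(\pi\) and checks the first 1,000 of them (a smaller bound than the 10,000 in the text, for brevity) for three consecutive 7s:

```python
from itertools import islice

def pi_digits():
    # Gibbons' unbounded spigot algorithm: a rule of expansion that yields
    # decimal digits of pi one at a time, without any completed extension.
    q, r, t, j = 1, 180, 60, 2
    while True:
        u, y = 3 * (3 * j + 1) * (3 * j + 2), (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u, j + 1)

# The finitistic question: do three consecutive 7s occur in the first
# 1,000 decimal places?
expansion = ''.join(str(d) for d in islice(pi_digits(), 1001))[1:]  # drop the leading '3'
print('777' in expansion)
```

Whatever this prints, only the bounded question has been decided; nothing follows about the unrestricted PIC, which is Wittgenstein's point.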
The later Wittgenstein maintains this position in various passages in
*RFM* (Bernays 1959: 11-12). For example, to someone who says that since "the
rule of expansion *determine*[\(s\)] the series
completely", "it must implicitly determine *all*
questions about the structure of the series", Wittgenstein
replies: "Here you are thinking of finite series"
(*RFM* V, SS11). If PIC were a *mathematical*
question (or problem)--if it were finitistically
restricted--it would be algorithmically decidable, which it is
not (*RFM* V, SS21; *LFM* 31-32, 111, 170;
*WVC* 102-03). As Wittgenstein says at (*RFM* V,
SS9): "The question... changes its status, when it
becomes decidable", "[f]or a connexion is made then, which
formerly *was not there*". And if, moreover, one invokes
the Law of the Excluded Middle to establish that PIC is a mathematical
proposition--i.e., by saying that one of these "two
pictures... must correspond to the fact" (*RFM* V,
SS10)--one simply begs the question (*RFM* V,
SS12), for if we have doubts about the mathematical status of PIC,
we will not be swayed by a person who asserts "PIC \(\vee
\neg\)PIC" (*RFM* VII, SS41; V, SS13).
Wittgenstein's finitism, constructivism, and conception of
mathematical decidability are interestingly connected at (*RFM*
VII, SS41, par. 2-5).

> What harm is done e.g. by saying that God knows *all*
> irrational numbers? Or: that they are already there, even though we
> only know certain of them? Why are these pictures not harmless?
>
> For one thing, they hide certain problems.-- (MS 124: 139; March
> 16, 1944)
>
> Suppose that people go on and on calculating the expansion of \(\pi\).
> So God, who knows everything, knows whether they will have reached
> '777' by the end of the world. But can his
> *omniscience* decide whether they *would* have reached
> it after the end of the world? It cannot. I want to say: Even God can
> determine something mathematical only by mathematics. Even for him the
> mere rule of expansion cannot decide anything that it does not decide
> for us.
>
> We might put it like this: if the rule for the expansion has been
> given us, a *calculation* can tell us that there is a
> '2' at the fifth place. Could God have known this, without
> the calculation, purely from the rule of expansion? I want to say: No.
> (MS 124, pp. 175-176; March 23-24, 1944)

What Wittgenstein means here is that God's omniscience
*might*, by calculation, find that '777' occurs at
the interval [\(n,n+2\)], but, on the other hand, God might go on
calculating forever without '777' ever turning up. Since
\(\pi\) is not a *completed* infinite extension that can be
completely surveyed by an omniscient being (i.e., it is not a fact
that can be known by an omniscient mind), even God has only the rule,
and so God's omniscience is no advantage in this case
(*LFM* 103-04; cf. Weyl 1921 [1998: 97]). Like us, with
our modest minds, an omniscient mind (i.e., God) can only calculate
the expansion of \(\pi\) to some \(n\)th decimal
place--where our \(n\) is minute and God's \(n\) is
(relatively) enormous--and at no \(n\)th decimal place
could *any mind* rightly conclude that because
'777' has not turned up, it, therefore, will never turn
up.
### 3.3 The Later Wittgenstein on Decidability and Algorithmic Decidability
On one fairly standard interpretation, the later Wittgenstein says
that "true in calculus \(\Gamma\)" is identical to
"provable in calculus \(\Gamma\)" and, therefore, that a
mathematical proposition of calculus \(\Gamma\) is a concatenation of
signs that is either provable (in principle) or refutable (in
principle) in calculus \(\Gamma\) (Goodstein 1972: 279, 282; Anderson
1958: 487; Klenk 1976: 13; Frascolla 1994: 59). On this
interpretation, the later Wittgenstein precludes undecidable
mathematical propositions, but he allows that some *undecided*
expressions are propositions *of* a calculus because they are
decidable in principle (i.e., in the absence of a known, applicable
decision procedure).
There is considerable evidence, however, that the later Wittgenstein
maintains his intermediate position that an expression is a meaningful
mathematical proposition only *within* a given calculus and
*iff* we knowingly have in hand an applicable and effective
decision procedure by means of which we can decide it. For example,
though Wittgenstein vacillates between "provable in PM"
and "proved in PM" at (*RFM* App. III, SS6,
SS8), he does so in order to use the former to consider the
alleged conclusion of Godel's proof (i.e., that there exist
true but unprovable mathematical propositions), which he then rebuts
with his own identification of "true in calculus
\(\Gamma\)" with "*proved* in calculus
\(\Gamma\)" (i.e., *not* with "*provable* in
calculus \(\Gamma\)") (Wang 1991: 253; Rodych 1999a: 177). This
construal is corroborated by numerous passages in which Wittgenstein
rejects the received view that a prov*able* but unproved
proposition is true, as he does when he asserts that (*RFM*
III, SS31, 1939) a proof "makes new connexions",
"[i]t does not establish that they are there" because
"they do not exist until it makes them", and when he says
(*RFM* VII, SS10, 1941) that "[a] new proof gives the
proposition a place in a new system". Furthermore, as we have
just seen, Wittgenstein rejects PIC as a non-proposition on the
grounds that it is not algorithmically decidable, while admitting
finitistic versions of PIC because they are algorithmically
decidable.
Perhaps the most compelling evidence that the later Wittgenstein
maintains algorithmic decidability as his criterion for a mathematical
proposition lies in the fact that, at (*RFM* V, SS9, 1942),
he says in two distinct ways that a mathematical
'question' can *become* decidable and that when
this *happens*, a new connexion is '*made*'
which previously did not exist. Indeed, Wittgenstein cautions us
against appearances by saying that "it *looks* as if a
ground for the decision were already there", when, in fact,
"it has yet to be invented". These passages strongly
militate against the claim that the later Wittgenstein grants that
proposition \(\phi\) is decidable in calculus \(\Gamma\) iff it is
provable or refutable *in principle*. Moreover, if Wittgenstein
held *this* position, he would claim, contra (*RFM* V,
§9), that a question or proposition does not *become*
decidable since it simply (always) *is* decidable. If it is
provable, and we simply don't yet know this to be the case,
there *already is* a connection between, say, our axioms and
rules and the proposition in question. What Wittgenstein says,
however, is that the modalities *provable* and
*refutable* are shadowy forms of reality--that possibility
is not actuality in mathematics (*PR* §§141, 144,
172; *PG* 281, 283, 299, 371, 466, 469; *LFM* 139).
Thus, the later Wittgenstein agrees with the intermediate Wittgenstein
that the only sense in which an *undecided* mathematical
proposition (*RFM* VII, §40, 1944) can be
*decidable* is in the sense that we *know* how to decide
it by means of an applicable decision procedure.
### 3.4 Wittgenstein's Later Critique of Set Theory: Non-Enumerability vs. Non-Denumerability
Largely a product of his *anti-foundationalism* and his
criticism of the extension-intension conflation, Wittgenstein's
later critique of set theory is highly consonant with his intermediate
critique (*PR* §§109, 168; *PG* 334, 369, 469;
*LFM* 172, 224, 229; and *RFM* III, §§43, 46, 85,
90; VII, §16). Given that mathematics is a "MOTLEY of
techniques of proof" (*RFM* III, §46), it does not
require a foundation (*RFM* VII, §16) and it cannot be
given a *self-evident* foundation (*PR* §160;
*WVC* 34 & 62; *RFM* IV, §3). Since set theory
was invented to provide mathematics with a foundation, it is,
minimally, unnecessary.
Even if set theory is unnecessary, it still might constitute a solid
foundation for mathematics. In his core criticism of set theory,
however, the later Wittgenstein denies this, saying that the diagonal
proof does not prove non-denumerability, for "[i]t means nothing
to say: '*Therefore* the X numbers are not
denumerable'" (*RFM* II, §10). When the diagonal is
construed as a *proof* of greater and lesser infinite sets it
is a "puffed-up proof", which, as Poincaré argued
(1913: 61-62), purports to prove or show more than "its
means allow it" (*RFM* II, §21).
> If it were said: Consideration of the diagonal procedure shews you that the
> *concept* 'real number' has much less analogy with
> the concept 'cardinal number' than we, being misled by
> certain analogies, are inclined to believe, that would have a good and honest sense. But just the
> *opposite* happens: one pretends to compare the
> 'set' of real numbers in magnitude with that of cardinal
> numbers. The difference in kind between the two conceptions is
> represented, by a skew form of expression, as difference of extension.
> I believe, and hope, that a future generation will laugh at this hocus
> pocus. (*RFM* II, §22)
>
> The sickness of a time is cured by an alteration in the mode of life
> of human beings... (*RFM* II, §23)
The "hocus pocus" of the diagonal proof rests, as always
for Wittgenstein, on a conflation of extension and intension, on the
failure to properly distinguish sets *as rules* for generating
extensions and (finite) extensions. By way of this confusion "a
difference in kind" (i.e., unlimited rule vs. finite extension)
"is represented by a skew form of expression", namely as a
difference in the *cardinality* of two *infinite*
extensions. Not only can the diagonal *not* prove that one
infinite set is greater in cardinality than another infinite set,
according to Wittgenstein, *nothing* could prove this, simply
because "infinite sets" are not *extensions*, and
hence not *infinite* extensions. But instead of interpreting
Cantor's diagonal proof honestly, we take the proof to
"show there are numbers bigger than the infinite", which
"sets the whole mind in a whirl, and gives the pleasant feeling
of paradox" (*LFM* 16-17)--a "giddiness
attacks us when we think of certain theorems in set
theory"--"when we are performing a piece of logical
sleight-of-hand" (*PI* §412; §426; 1945). This
giddiness and pleasant feeling of paradox, says Wittgenstein
(*LFM* 16), "may be the chief reason [set theory] was
invented".
Though Cantor's diagonal is not a *proof* of
non-denumerability, when it is expressed in a *constructive
manner*, as Wittgenstein himself expresses it at (*RFM* II,
§1), "it gives sense to the mathematical proposition that
the number so-and-so is different from all those of the system"
(*RFM* II, §29). That is, the proof proves
*non-enumerability*: it proves that for any given
*definite* real number concept (e.g., recursive real), one
cannot enumerate 'all' such numbers because one can always
construct a diagonal number, which falls under the same concept and is
not in the enumeration. "One might say", Wittgenstein
says,
> I call number-concept X non-denumerable if it has been stipulated
> that, whatever numbers falling under this concept you arrange in a
> series, the diagonal number of this series is also to fall under that
> concept. (*RFM* II, §10; cf. II, §§30, 31, 13)
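Wittgenstein's constructive reading of the diagonal procedure, i.e., that for any series of numbers falling under a definite concept one can construct a diagonal number falling under the same concept but absent from the series, can be sketched as a procedure. This is an illustrative sketch only: the decimal representation, the particular digit-swap rule, and the sample series are assumptions of mine, not anything found in Wittgenstein or Cantor.

```python
def diagonal(series, n_digits):
    """Given a series of reals in [0, 1) -- each represented as a rule
    mapping a digit position to a decimal digit -- construct the first
    n_digits digits of a 'diagonal' number that differs from the i-th
    number of the series at the i-th digit position."""
    # Swapping to 5 (or 4, if the digit already is 5) avoids 0 and 9,
    # sidestepping the 0.4999... = 0.5000... double-representation issue.
    return [5 if series[i](i) != 5 else 4 for i in range(n_digits)]

# A sample 'series' of reals, each given by a recursive digit rule:
series = [
    lambda i: 3,                          # 0.333... = 1/3
    lambda i: (1, 4, 2, 8, 5, 7)[i % 6],  # 0.142857... = 1/7
    lambda i: 0,                          # 0.000... = 0
]
d = diagonal(series, 3)
# d differs from the i-th listed number at digit i, so it is in the
# same 'system' of rule-given numbers yet absent from the enumeration.
for i in range(3):
    assert d[i] != series[i](i)
```

On Wittgenstein's construal the procedure proves only *non-enumerability* in this sense: for any given enumeration one can always produce such a number, which is not yet to compare the 'sizes' of two infinite extensions.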
One lesson to be learned from this, according to Wittgenstein
(*RFM* II, §33), is that "there are *diverse
systems* of irrational points to be found in the number
line", each of which can be given by a recursive rule, but
"no system of irrational numbers", and "also no
super-system, no 'set of irrational numbers' of
higher-order infinity". Cantor has shown that we can construct
"infinitely many" diverse systems of irrational numbers,
but we cannot construct an *exhaustive* system of *all*
the irrational numbers (*RFM* II, §29). As Wittgenstein
says at (MS 121, 71r; Dec. 27, 1938), three pages after the passage
used for (*RFM* II, §57):
> If you now call the Cantorian procedure one for producing a new real
> number, you will now no longer be inclined to speak of *a system of
> all real numbers*. (italics added)
From Cantor's proof, however, set theorists erroneously conclude
that "the set of irrational numbers" is greater in
multiplicity than any enumeration of irrationals (or the set of
rationals), when the only conclusion to draw is that there is no such
thing as *the set of* all *the irrational numbers*. The
truly dangerous aspect to 'propositions' such as
"The real numbers cannot be arranged in a series" and
"The set... is not denumerable" is that they make
concept formation [i.e., our *invention*] "look like a
fact of nature" (i.e., something we *discover*)
(*RFM* II, §§16, 37). At best, we have a vague idea of
the concept of "real number", but only if we restrict this
idea to "recursive real number" and only if we recognize
that *having* the concept does not mean *having* a set
of all recursive real numbers.
### 3.5 Extra-Mathematical Application as a Necessary Condition of Mathematical Meaningfulness
The principal and most significant change from the middle to later
writings on mathematics is Wittgenstein's (re-)introduction of
an extra-mathematical application criterion, which is used to
distinguish mere "sign-games" from mathematical
language-games. "[I]t is essential to mathematics that its signs
are also employed in *mufti*", Wittgenstein states, for
> [i]t is the use outside mathematics, and so the *meaning*
> [*Bedeutung*] of the signs, that makes the
> sign-game into mathematics. (i.e., a mathematical
> "language-game"; *RFM* V, §2, 1942;
> *LFM* 140-141, 169-70)
As Wittgenstein says at (*RFM* V, §41, 1943),
> [c]oncepts which occur in 'necessary' propositions
> *must also* occur and have a meaning
> [*Bedeutung*] in non-necessary ones. (italics
> added)
If two proofs prove the same proposition, says Wittgenstein, this
means that "both demonstrate it as a suitable instrument for the
same purpose", which "is an allusion to *something
outside mathematics*" (*RFM* VII, §10, 1941;
italics added).
As we have seen, this criterion was present in the *Tractatus*
(6.211), but noticeably absent in the middle period. The reason for
this absence is probably that the intermediate Wittgenstein wanted to
stress that in mathematics everything is syntax and nothing is
meaning. Hence, in his criticisms of Hilbert's
'contentual' mathematics (Hilbert 1925) and
Brouwer's reliance upon intuition to determine the meaningful
content of (especially undecidable) mathematical propositions,
Wittgenstein couched his finitistic constructivism in strong
formalism, emphasizing that a mathematical calculus does not need an
extra-mathematical application (*PR* §109; *WVC*
105).
There seem to be two reasons why the later Wittgenstein reintroduces
extra-mathematical application as a necessary condition of a
*mathematical* language-game. First, the later Wittgenstein has
an even greater interest in the *use* of natural and formal
languages in diverse "forms of life" (*PI*
§23), which prompts him to emphasize that, in many cases, a
mathematical 'proposition' functions as if it were an
empirical proposition "hardened into a rule" (*RFM*
VI, §23) and that mathematics plays diverse applied roles in many
forms of human activity (e.g., science, technology, predictions).
Second, the extra-mathematical application criterion relieves the
tension between Wittgenstein's intermediate critique of set
theory and his strong formalism according to which "one calculus
is as good as another" (*PG* 334). By demarcating
mathematical language-games from non-mathematical sign-games,
Wittgenstein can now claim that, "for the time being", set
theory is merely a formal sign-game.
> These considerations may lead us to say that \(2^{\aleph\_0} \gt
> \aleph\_0\).
>
> That is to say: we can *make* the considerations lead us to
> that.
>
> Or: we can say *this* and give *this* as our reason.
>
> But if we do say it--what are we to do next? In what practice is
> this proposition anchored? It is for the time being a piece of
> mathematical architecture which hangs in the air, and looks as if it
> were, let us say, an architrave, but not supported by anything and
> supporting nothing. (*RFM* II, §35)
It is not that Wittgenstein's later criticisms of set theory
change, it is, rather, that once we see that set theory has no
extra-mathematical application, we will focus on its calculations,
proofs, and prose and "subject the *interest* of the
calculations to a *test*" (*RFM* II, §62). By
means of *Wittgenstein's* "immensely
important" 'investigation' (*LFM* 103), we
will find, Wittgenstein expects, that set theory is uninteresting
(e.g., that the non-enumerability of "the reals" is
uninteresting and useless) and that our entire interest in it lies in
the 'charm' of the mistaken prose interpretation of its
proofs (*LFM* 16). More importantly, though there is "a
solid core to all [its] glistening concept-formations"
(*RFM* V, §16), once we see it "as a mistake of
ideas", we will see that propositions such as
"\(2^{\aleph\_0} \gt \aleph\_0\)" are not anchored in an
extra-mathematical practice, that "Cantor's
paradise" "is not a paradise", and we will
*then* leave "of [our] own accord" (*LFM*
103).
It must be emphasized, however, that the later Wittgenstein still
maintains that the operations within a mathematical calculus are
purely formal, syntactical operations governed by rules of syntax
(i.e., the solid core of formalism).
> It is of course clear that the mathematician, in so far as he really
> is 'playing a game'...[is] *acting* in
> accordance with certain rules. (*RFM* V, §1)
>
> To say mathematics is a game is supposed to mean: in proving, we need
> never appeal to the meaning [*Bedeutung*] of the
> signs, that is to their extra-mathematical application. (*RFM*
> V, §4)
Where, during the middle period, Wittgenstein speaks of
"arithmetic [as] a kind of geometry" at (*PR*
§109 & §111), the later Wittgenstein similarly speaks of
"the geometry of proofs" (*RFM* I, App. III,
§14), the "geometrical cogency" of proofs
(*RFM* III, §43), and a "geometrical
application" according to which the "transformation of
signs" in accordance with "transformation-rules"
(*RFM* VI, §2, 1941) shows that "when mathematics is
divested of all content, it would remain that certain signs can be
constructed from others according to certain rules"
(*RFM* III, §38). Hence, the question whether a
concatenation of signs is a proposition of a given
*mathematical* calculus (i.e., a calculus with an
extra-mathematical application) is still an internal, syntactical
question, which we can answer with knowledge of the proofs and
decision procedures of the calculus.
### 3.6 Wittgenstein on Gödel and Undecidable Mathematical Propositions
*RFM* is perhaps most (in)famous for Wittgenstein's
(*RFM* App. III) treatment of "true but unprovable"
mathematical propositions. Early reviewers said that "[t]he
arguments are wild" (Kreisel 1958: 153), that the passages
"on Gödel's theorem... are of poor quality or
contain definite errors" (Dummett 1959: 324), and that
(*RFM* App. III) "throws no light on Gödel's
work" (Goodstein 1957: 551). "Wittgenstein seems to want
to legislate [[q]uestions about completeness] out of
existence", Anderson said (1958: 486-87), when, in fact,
he certainly cannot dispose of Gödel's demonstrations
"by confusing truth with provability". Additionally,
Bernays, Anderson (1958: 486), and Kreisel (1958: 153-54)
claimed that Wittgenstein failed to appreciate
"Gödel's quite explicit premiss of the consistency of
the considered formal system" (Bernays 1959: 15), thereby
failing to appreciate the conditional nature of Gödel's
First Incompleteness Theorem. On the reading of these four early
expert reviewers, Wittgenstein failed to understand Gödel's
Theorem because he failed to understand the mechanics of
Gödel's proof and he erroneously thought he could refute or
undermine Gödel's proof simply by identifying "true
in *PM*" (i.e., *Principia Mathematica*) with
"proved/provable in *PM*".
Interestingly, we now have two pieces of evidence (Kreisel 1998: 119;
Rodych 2003: 282, 307) that Wittgenstein wrote (*RFM* App. III)
in 1937-38 after reading *only* the informal,
'casual' (MS 126, 126-127; Dec. 13, 1942)
introduction of (Gödel 1931) and that, therefore, his use of a
self-referential proposition as the "true but unprovable
proposition" may be based on Gödel's introductory,
informal statements, namely that "the undecidable proposition
[\(R(q);q\)] states... that [\(R(q);q\)] is not provable"
(1931: 598) and that "[\(R(q);q\)] says about itself that it is
not provable" (1931: 599). Perplexingly, only two of the four
famous reviewers even mentioned Wittgenstein's (*RFM*
VII, §§19, 21-22, 1941) explicit remarks on
'Gödel's' First Incompleteness Theorem (Bernays
1959: 2; Anderson 1958: 487), which, though flawed, capture the
number-theoretic nature of the Gödelian proposition *and*
the functioning of Gödel-numbering, probably because Wittgenstein
had by then read or skimmed the body of Gödel's 1931 paper.
The first thing to note, therefore, about (*RFM* App. III) is
that Wittgenstein mistakenly thinks--again, perhaps because
Wittgenstein had read only Gödel's Introduction--(a)
that Gödel proves that there are true but unprovable propositions
of *PM* (when, in fact, Gödel syntactically proves that if
*PM* is \(\omega\)-consistent, the Gödelian proposition is
undecidable in *PM*) and (b) that Gödel's proof uses
a self-referential proposition to semantically show that there are
true but unprovable propositions of *PM*.
For this reason, Wittgenstein has two main aims in (*RFM* App.
III): (1) to refute or undermine, *on its own terms*, the
alleged Gödel proof of true but unprovable propositions of
*PM*, and (2) to show that, on his own terms, where "true
in calculus \(\Gamma\)" is identified with
"*proved* in calculus \(\Gamma\)", the very idea of
a true but unprovable proposition of calculus \(\Gamma\) is
meaningless.
Thus, at (*RFM* App. III, §8) (hereafter simply
'§8'), Wittgenstein begins his presentation of what
he takes to be Gödel's proof by having someone say:
> I have constructed a proposition (I will use '*P*' to
> designate it) in Russell's symbolism, and by means of certain
> definitions and transformations it can be so interpreted that it says:
> '*P* is not provable in Russell's system'.
That is, Wittgenstein's Gödelian constructs a proposition
that is semantically *self-referential* and which specifically
says of itself that it is not provable in *PM*. With this
erroneous, self-referential proposition *P* [used also at
(§10), (§11), (§17), (§18)], Wittgenstein presents
a proof-sketch very similar to Gödel's own
*informal* semantic proof 'sketch' in the
Introduction of his famous paper (1931: 598).
> Must I not say that this proposition on the one hand is true, and on
> the other hand is unprovable? For suppose it were false; then it is
> true that it is provable. And that surely cannot be! And if it is
> proved, then it is proved that it is not provable. Thus it can only be
> true, but unprovable. (§8)
The reasoning here is a double *reductio*. Assume (a) that
*P* must either be true or false in Russell's system, and (b)
that *P* must either be provable or unprovable in Russell's
system. If (a), *P* must be *true*, for if we suppose that
*P* is false, since *P* says of itself that it is unprovable,
"it is true that it is provable", and if it is provable,
it must be true (which is a contradiction), and hence, given what
*P* means or says, it is true that *P* is unprovable (which is a
contradiction). Second, if (b), *P* must be unprovable, for if *P*
"is proved, then it is proved that it is not provable",
which is a contradiction (i.e., *P* is provable *and* not
provable in *PM*). It follows that *P* "can only be
true, but unprovable".
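The double *reductio* can be made explicit at the propositional level. Abstracting entirely from the arithmetic (this is an illustrative sketch of mine, not Gödel's or Wittgenstein's own formalism), let the atoms `T` and `Pr` stand for "*P* is true" and "*P* is provable in Russell's system". The self-referential interpretation supplies the constraint T ↔ ¬Pr, and the soundness assumption ("whatever is proved is true") supplies Pr → T. Brute-force checking the four truth-value assignments shows that exactly one is consistent, the one on which *P* is true but unprovable:

```python
from itertools import product

# T: "P is true"; Pr: "P is provable in Russell's system".
# Constraint 1 (self-referential interpretation): T <-> not Pr
# Constraint 2 (soundness):                       Pr -> T
models = [(T, Pr)
          for T, Pr in product([True, False], repeat=2)
          if T == (not Pr)        # constraint 1
          and ((not Pr) or T)]    # constraint 2, as material implication

print(models)  # only T = True, Pr = False survives
```

Dropping constraint 1, which is just Wittgenstein's recommendation of giving up the interpretation "*P* is not provable in Russell's system", leaves three consistent assignments rather than one, mirroring his claim that no contradiction arises without the self-referential reading.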
To refute or undermine this 'proof', Wittgenstein says
that if you have proved \(\neg P\), you have proved that *P* is
provable (i.e., since you have proved that it is *not* the case
that *P* is not provable in Russell's system), and "you
will now presumably give up the interpretation that it is
unprovable" (i.e., '*P* is not provable in
Russell's system'), since the contradiction is only proved
if we use or retain this self-referential interpretation (§8). On
the other hand, Wittgenstein argues (§8), '[i]f you assume
that the proposition is provable in Russell's system, that means
it is true *in the Russell sense*, and the interpretation
"*P* is not provable" again has to be given up',
because, once again, it is only the self-referential interpretation
that engenders a contradiction. Thus, Wittgenstein's
'refutation' of "Gödel's proof"
consists in showing that no contradiction arises if we do *not*
interpret '*P*' as '*P* is not provable in
Russell's system'--indeed, without this
interpretation, a proof of *P* does not yield a proof of \(\neg P\)
and a proof of \(\neg P\) does not yield a proof of *P*. In other
words, the mistake in the proof is the mistaken assumption that a
mathematical proposition '*P*' "can be so
interpreted that it says: '*P* is not provable in
Russell's system'". As Wittgenstein says at
(§11), "[t]hat is what comes of making up such
sentences".
This 'refutation' of "Gödel's
proof" is perfectly consistent with Wittgenstein's
syntactical conception of mathematics (i.e., wherein mathematical
propositions have no meaning and hence cannot have the
'requisite' self-referential meaning) and with what he
says before and after (§8), where his main aim is to show (2)
that, *on his own terms*, since "true in calculus
\(\Gamma\)" is identical with "proved in calculus
\(\Gamma\)", the very idea of a true but unprovable proposition
of calculus \(\Gamma\) is a contradiction-in-terms.
To show (2), Wittgenstein begins by asking what he takes to
be the central question (§5), namely, "Are there true propositions
in Russell's system, which cannot be proved in his
system?". To address this question, he asks "What is
called a true proposition in Russell's system...?",
which he succinctly answers (§6): "'*p*'
*is true = p*". Wittgenstein then clarifies this answer
by reformulating the second question of (§5) as "Under what
circumstances is a proposition asserted in Russell's game [i.e.,
system]?", which he then answers by saying: "the answer
is: at the end of one of his proofs, or as a 'fundamental
law' (Pp.)" (§6). This, in a nutshell, is
Wittgenstein's conception of "mathematical truth": a
true proposition of *PM* is an axiom or a proved proposition,
which means that "true in *PM*" is identical with,
and therefore can be supplanted by, "proved in
*PM*".
Having explicated, to his satisfaction at least, the only real,
non-illusory notion of "true in *PM*", Wittgenstein
answers the (§8) question "Must I not say that this
proposition... is true, and... unprovable?"
*negatively* by (re)stating his own (§§5-6)
conception of "true in *PM*" as
"proved/provable in *PM*":
> 'True in Russell's system' means, as was said:
> proved in Russell's system; and 'false in Russell's
> system' means: the opposite has been proved in Russell's
> system.
This answer is given in a slightly different way at (§7), where
Wittgenstein asks "may there not be true propositions which are
written in this [Russell's] symbolism, but are not provable in
Russell's system?", and then answers "'True
propositions', hence propositions which are true in
*another* system, i.e. can rightly be asserted in another
game". In light of what he says in (§§5, 6, and 8),
Wittgenstein's (§7) point is that if a proposition is
'written' in "Russell's symbolism" and
it is true, *it must be proved/provable in another system*,
since that is what "mathematical truth" is. Analogously
(§8), "if the proposition is supposed to be false in some
other than the Russell sense, then it does not contradict this for it
to be proved in Russell's sense", for "[w]hat is
called 'losing' in chess may constitute winning in another
game". This textual evidence certainly suggests, as Anderson
almost said, that Wittgenstein rejects a true but unprovable
mathematical proposition as a contradiction-in-terms on the grounds
that "true in calculus \(\Gamma\)" means nothing more (and
nothing less) than "proved in calculus \(\Gamma\)".
On this (natural) interpretation of (*RFM* App. III), the early
reviewers' conclusion that Wittgenstein fails to understand the
mechanics of Gödel's argument seems reasonable. First,
Wittgenstein erroneously thinks that Gödel's proof is
essentially semantical and that it uses and *requires* a
self-referential proposition. Second, Wittgenstein says (§14)
that "[a] contradiction is unusable" for "a
prediction" that "such-and-such construction is
impossible" (i.e., that *P* is unprovable in *PM*),
which, superficially at least, seems to
indicate that Wittgenstein fails to appreciate the "consistency
assumption" of Gödel's proof (Kreisel, Bernays,
Anderson).
If, in fact, Wittgenstein did not read and/or failed to understand
Gödel's proof through at least 1941, how would he have
responded if and when he understood it as (at least) a proof of the
undecidability of *P* in *PM* on the assumption of
*PM*'s consistency? Given his syntactical conception of
mathematics, even with the extra-mathematical application criterion,
he would simply say that *P*, *qua* expression syntactically
independent of *PM*, is not a proposition of *PM*, and
if it is syntactically independent of all existent mathematical
language-games, it is not a mathematical proposition. Moreover, there
seem to be no compelling non-semantical reasons--either
intra-systemic or extra-mathematical--for Wittgenstein to
accommodate *P* by including it in *PM* or by adopting a
non-syntactical conception of mathematical truth (such as Tarski-truth
(Steiner 2000)). Indeed, Wittgenstein questions the intra-systemic and
extra-mathematical *usability* of *P* in various discussions
of Gödel in the *Nachlass* and, at
(§19), he emphatically says that one cannot "make the truth
of the assertion ['*P*' or 'Therefore
*P*'] plausible to me, since you can make no use of it except
to do these bits of legerdemain".
After the initial, scathing reviews of *RFM*, very little
attention was paid to Wittgenstein's (*RFM* App. III and
*RFM* VII, §§21-22) discussions of
Gödel's First Incompleteness Theorem (Klenk 1976: 13) until
Shanker's sympathetic (1988b). In the last 22 years, however,
commentators and critics have offered various interpretations of
Wittgenstein's remarks on Gödel, some being largely
sympathetic (Floyd 1995, 2001) and others offering a more mixed
appraisal (Rodych 1999a, 2002, 2003; Steiner 2001; Priest 2004; Berto
2009a). Recently, and perhaps most interestingly, Floyd & Putnam
(2000) and Steiner (2001) have evoked new and interesting discussions
of Wittgenstein's ruminations on undecidability, mathematical
truth, and Gödel's First Incompleteness Theorem (Rodych
2003, 2006; Bays 2004; Sayward 2005; and Floyd & Putnam 2006).
## 4. The Impact of Philosophy of Mathematics on Mathematics
Though it is doubtful that all commentators will agree (Wrigley 1977:
51; Baker & Hacker 1985: 345; Floyd 1991: 145, 143; 1995: 376;
2005: 80; Maddy 1993: 55; Steiner 1996: 202-204), the following
passage seems to capture Wittgenstein's *attitude* to the
Philosophy of Mathematics and, in large part, the way in which he
viewed his own work on mathematics.
> What will distinguish the mathematicians of the future from those of
> today will really be a greater sensitivity, and *that*
> will--as it were--prune mathematics; since people will then
> be more intent on absolute clarity than on the discovery of new
> games.
>
> Philosophical clarity will have the same effect on the growth of
> mathematics as sunlight has on the growth of potato shoots. (In a dark
> cellar they grow yards long.)
>
> A mathematician is bound to be horrified by my mathematical comments,
> since he has always been trained to avoid indulging in thoughts and
> doubts of the kind I develop. He has learned to regard them as
> something contemptible and... he has acquired a revulsion from
> them as infantile. That is to say, I trot out all the problems that a
> child learning arithmetic, etc., finds difficult, the problems that
> education represses without solving. I say to those repressed doubts:
> you are quite correct, go on asking, demand clarification!
> (*PG* 381, 1932)
In his middle and later periods, Wittgenstein believes he is providing
philosophical clarity on aspects and parts of mathematics, on
mathematical conceptions, and on philosophical conceptions of
mathematics. Lacking such clarity and not aiming for absolute clarity,
mathematicians construct new games, sometimes because of a
misconception of the *meaning* of their mathematical
propositions and mathematical terms. Education and especially advanced
education in mathematics does not encourage clarity but rather
represses it--questions that deserve answers are either not asked
or are dismissed. Mathematicians of the future, however, will be more
sensitive and this will (repeatedly) prune mathematical extensions and
inventions, since mathematicians will come to recognize that new
extensions and creations (e.g., propositions of transfinite cardinal
arithmetic) are not well-connected with the solid core of mathematics
or with real-world applications. Philosophical clarity will,
eventually, enable mathematicians and philosophers to "get down
to brass tacks" (*PG* 467). |