Continued insistence on the universal competence of science will serve only to undermine the credibility of science as a whole
By Austin L. Hughes
When I decided on a
scientific career, one of the things that appealed to me about science was the
modesty of its practitioners. The typical scientist seemed to be a person who
knew one small corner of the natural world and knew it very well, better than most
other human beings living and better even than most who had ever lived. But
outside of their circumscribed areas of expertise, scientists would hesitate to
express an authoritative opinion. This attitude was attractive precisely
because it stood in sharp contrast to the arrogance of the philosophers of the
positivist tradition, who claimed for science and its practitioners a broad
authority with which many practicing scientists themselves were uncomfortable.
The temptation to overreach, however, seems increasingly indulged today in
discussions about science. Both in the work of professional philosophers and in
popular writings by natural scientists, it is frequently claimed that natural
science does or soon will constitute the entire domain of truth. And this
attitude is becoming more widespread among scientists themselves. All too many
of my contemporaries in science have accepted without question the hype that
suggests that an advanced degree in some area of natural science confers the
ability to pontificate wisely on any and all subjects.
Of course, from the very beginning of the modern scientific enterprise,
there have been scientists and philosophers who have been so impressed with the
ability of the natural sciences to advance knowledge that they have asserted
that these sciences are the only valid way of seeking knowledge in any field. A
forthright expression of this viewpoint has been made by the chemist Peter
Atkins, who in his 1995 essay “Science as Truth” asserts the “universal competence” of science. This
position has been called scientism — a term that was
originally intended to be pejorative but has been claimed as a badge of honor
by some of its most vocal proponents. In their 2007 book Every Thing Must Go: Metaphysics Naturalized, for example,
philosophers James Ladyman, Don Ross, and David Spurrett go so far as to
entitle a chapter “In Defense of Scientism.”
Modern science is often described as having emerged from philosophy; many
of the early modern scientists were engaged in what they called “natural
philosophy.” Later, philosophy came to be seen as an activity distinct from but
integral to natural science, with each addressing separate but complementary
questions — supporting, correcting, and supplying knowledge to one another. But
the status of philosophy has fallen quite a bit in recent times. Central to
scientism is the grabbing of nearly the entire territory of what were once
considered questions that properly belong to philosophy. Scientism takes
science to be not only better than philosophy at answering such questions, but
the only means of answering them. For most of those who dabble
in scientism, this shift is unacknowledged, and may not even be recognized. But
for others, it is explicit. Atkins, for example, is scathing in his dismissal
of the entire field: “I consider it to be a defensible proposition that no
philosopher has helped to elucidate nature; philosophy is but the refinement of
hindrance.”
Is scientism defensible? Is it really true that natural science provides a
satisfying and reasonably complete account of everything we see, experience,
and seek to understand — of every phenomenon in the universe? And is it true
that science is more capable, even singularly capable, of answering the
questions that once were addressed by philosophy? This subject is too large to
tackle all at once. But by looking briefly at the modern understandings of
science and philosophy on which scientism rests, and examining a few case
studies of the attempt to supplant philosophy entirely with science, we might
get a sense of how the reach of scientism exceeds its grasp.
The Abdication of the Philosophers
If philosophy is regarded as a legitimate and necessary
discipline, then one might think that a certain degree of philosophical
training would be very useful to a scientist. Scientists ought to be able to
recognize how often philosophical issues arise in their work — that is, issues
that cannot be resolved by arguments that make recourse solely to inference and
empirical observation. In most cases, these issues arise because practicing
scientists, like all people, are prone to philosophical errors. To take an
obvious example, scientists can be prone to errors of elementary logic, and
these can often go undetected by the peer review process and have a major
impact on the literature — for instance, confusing correlation and causation,
or confusing implication with a biconditional. Philosophy can provide a way of
understanding and correcting such errors. It addresses a largely distinct set
of questions that natural science alone cannot answer, but that must be
answered for natural science to be properly conducted.
These questions include how we define and understand science itself. One
group of theories of science — the set that best supports a clear distinction
between science and philosophy, and a necessary role for each — can broadly be
classified as “essentialist.” These theories attempt to identify the essential
traits that distinguish science from other human activities, or differentiate true
science from nonscientific and pseudoscientific forms of inquiry. Among the
most influential and compelling of these is Karl Popper’s criterion of
falsifiability outlined in The Logic of Scientific Discovery (1959).
A falsifiable theory is one that makes a specific prediction about what
results are supposed to occur under a set of experimental conditions, so that
the theory might be falsified by performing the experiment and comparing
predicted to actual results. A theory or explanation that cannot be falsified
falls outside the domain of science. For example, Freudian psychoanalysis,
which does not make specific experimental predictions, can revise itself to
accommodate any observation, and so never needs to be rejected outright. By
this reckoning, Freudianism is a pseudoscience, a theory that
purports to be scientific but is in fact immune to falsification. In contrast,
for example, Einstein’s theory of relativity made predictions (like the bending
of starlight around the sun) that were novel and specific, and provided
opportunities to disprove the theory by direct experimental observation.
Advocates of Popper’s definition would seem to place on the same level as
pseudoscience or nonscience every statement — of metaphysics, ethics, theology,
literary criticism, and indeed daily life — that does not meet the criterion of
falsifiability.
The criterion of falsifiability is appealing in that it highlights
similarities between science and the trial-and-error methods we use in everyday
problem-solving. If I have misplaced my keys, I immediately begin to construct
scenarios — hypotheses, if you will — that might account for their whereabouts:
Did I leave them in the ignition or in the front door lock? Were they in the
pocket of the jeans I put in the laundry basket? Did I drop them while mowing
the lawn? I then proceed to evaluate these scenarios systematically, by testing
predictions that I would expect to be true under each scenario — in other
words, by using a sort of Popperian method. The everyday, commonsense nature of
the falsifiability criterion has the virtue both of showing how science is
grounded in basic ideas of rationality and observation, and of stripping away
from science the aura of sacred mystery with which some would seek to
surround it.
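To make the analogy concrete, here is a minimal sketch, in Python, of that everyday Popperian loop. The scenarios and the stipulated observations are hypothetical illustrations, not part of the argument above.

```python
# A minimal sketch of everyday "Popperian" elimination applied to lost keys.
# Each hypothesis comes with a testable prediction; a failed test falsifies
# that hypothesis. The observations below are stipulated for illustration.

def check_front_door():
    # Prediction: the keys will be visible in the front door lock.
    return False  # observed: the lock is empty

def check_laundry():
    # Prediction: the keys will be in the jeans in the laundry basket.
    return False  # observed: the pockets are empty

def check_lawn():
    # Prediction: a sweep of the lawn will turn up the keys.
    return True   # observed: found near the mower

hypotheses = {
    "left in the front door lock": check_front_door,
    "in the jeans in the laundry basket": check_laundry,
    "dropped while mowing the lawn": check_lawn,
}

# Hypotheses whose predictions fail are discarded; the rest survive for now.
surviving = [name for name, test in hypotheses.items() if test()]
print("Not yet falsified:", surviving)
```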
An additional strength of the falsifiability criterion is that it makes
possible a clear distinction between science properly speaking and the opinions
of scientists on nonscientific subjects. We have seen in recent years a growing
tendency to treat as “scientific” anything that scientists say or believe. The debates over stem cell research, for example,
have often been described, both within the scientific community and in the mass
media, as clashes between science and religion. It is true that many, but by no
means all, of the most vocal defenders of embryonic stem cell research were
scientists, and that many, but by no means all, of its most vocal opponents
were religious. But in fact, there was little science being disputed: the
central controversy was between two opposing views on a particular ethical
dilemma, neither of which was inherently more scientific than the other. If we
confine our definition of the scientific to the falsifiable, we clearly will
not conclude that a particular ethical view is dictated by science just because
it is the view of a substantial number of scientists. The same logic applies to
the judgments of scientists on political, aesthetic, or other nonscientific
issues. If a poll shows that a large majority of scientists prefers neutral
colors in bathrooms, for example, it does not follow that this preference is
“scientific.”
Popper’s falsifiability criterion and similar essentialist definitions of
science highlight the distinct but vital roles of both science and philosophy.
The definitions show the necessary role of philosophy in undergirding and
justifying science — protecting it from its potential for excess and
self-devolution by, among other things, proposing clear distinctions between
legitimate scientific theories and pseudoscientific theories that masquerade as
science.
By contrast to Popper, many thinkers have advanced understandings of
philosophy and science that blur such distinctions, resulting in an inflated
role for science and an ancillary one for philosophy. In part, philosophers
have no one but themselves to blame for the low state to which their discipline
has fallen — thanks especially to the logical positivist and analytic strain
that has been dominant for about a century in the English-speaking world. For
example, the influential twentieth-century American philosopher
W. V. O. Quine spoke modestly of a “philosophy continuous with
science” and vowed to eschew philosophy’s traditional concern with metaphysical
questions that might claim to sit in judgment on the natural sciences. Science,
Quine and many of his contemporaries seemed to say, is where the real action
is, while philosophers ought to celebrate science from the sidelines.
This attitude has been articulated in the other main group of theories of
science, which rivals the essentialist understandings — namely, the
“institutional” theories, which identify science with the social institution of
science and its practitioners. The institutional approach may be useful to
historians of science, as it allows them to accept the various definitions of
fields used by the scientists they study. But some philosophers go so far as to
use “institutional factors” as the criteria of good science.
Ladyman, Ross, and Spurrett, for instance, say that they “demarcate good
science — around lines which are inevitably fuzzy near the boundary — by
reference to institutional factors, not to directly epistemological ones.” By
this criterion, we would differentiate good science from bad science simply by
asking which proposals agencies like the National Science Foundation deem worthy
of funding, or which papers peer-review committees deem worthy of publication.
The problems with this definition of science are myriad. First, it is
essentially circular: science simply is what scientists do. Second, the high
confidence in funding and peer-review panels should seem misplaced to anyone
who has served on these panels and witnessed the extent to which preconceived
notions, personal vendettas, and the like can torpedo even the best proposals.
Moreover, simplistically defining science by its institutions is complicated by
the ample history of scientific institutions that have been notoriously
unreliable. Consider the decades during which Soviet biology was dominated by
the ideologically motivated theories of the agronomist Trofim Lysenko, who rejected
Mendelian genetics as inconsistent with Marxism and insisted that acquired
characteristics could be inherited. An observer who distinguishes good science
from bad science “by reference to institutional factors” alone would have
difficulty seeing the difference between the unproductive and corrupt genetics
in the Soviet Union and the fruitful research of Watson and Crick in 1950s
Cambridge. Can we be certain that there are not sub-disciplines of science in
which even today most scientists accept without question theories that will in
the future be shown to be as preposterous as Lysenkoism? Many working
scientists can surely think of at least one candidate — that is, a theory
widely accepted in their field that is almost certainly false, even preposterous.
Confronted with such examples, defenders of the institutional approach will
often point to the supposedly self-correcting nature of science. Ladyman, Ross,
and Spurrett assert that “although scientific progress is far from smooth and
linear, it never simply oscillates or goes backwards. Every scientific
development influences future science, and it never repeats itself.” Alas, in
the thirty or so years I have been watching, I have observed quite
a few scientific sub-fields (such as behavioral ecology) oscillating happily
and showing every sign of continuing to do so for the foreseeable future. The
history of science provides examples of the eventual discarding of erroneous
theories. But we should not be overly confident that such self-correction will
inevitably occur, nor that the institutional mechanisms of science will be so
robust as to preclude the occurrence of long dark ages in which false theories
hold sway.
The fundamental problem raised by the identification of “good science” with
“institutional science” is that it assumes the practitioners of science to be
inherently exempt, at least in the long term, from the corrupting influences
that affect all other human practices and institutions. Ladyman, Ross, and
Spurrett explicitly state that most human institutions, including “governments,
political parties, churches, firms, NGOs, ethnic associations, families ... are
hardly epistemically reliable at all.” However, “our grounding assumption is
that the specific institutional processes of science have inductively
established peculiar epistemic reliability.” This assumption is at best naïve
and at worst dangerous. If any human institution is held to be exempt from the
petty, self-serving, and corrupting motivations that plague us all, the result
will almost inevitably be the creation of a priestly caste demanding adulation
and required to answer to no one but itself.
It is something approaching this adulation that seems to underlie the
abdication of the philosophers and the rise of the scientists as the
authorities of our age on all intellectual questions. Reading the work of
Quine, Rudolf Carnap, and other philosophers of the positivist tradition, as
well as their more recent successors, one is struck by the aura of hero-worship
accorded to science and scientists. In spite of their idealization of science,
the philosophers of this school show surprisingly little interest in science
itself — that is, in the results of scientific inquiry and their potential
philosophical implications. As a biologist, I must admit to finding Quine’s
constant invocation of “nerve-endings” as an all-purpose explanation of human
behavior to be embarrassingly simplistic. Especially given Quine’s intellectual
commitment to behaviorism, it is surprising yet characteristic that he had
little apparent interest in the actual mechanisms by which the nervous system
functions.
Ladyman, Ross, and Spurrett may be right to assume that science possesses a
“peculiar epistemic reliability” that is lacking in other forms of inquiry. But
they have taken the strange step of identifying that reliability with the
institutions and practitioners of science, rather than with any particular
rational, empirical, or methodological criterion that scientists are bound (but
often fail) to uphold. Thus a (largely justifiable) admiration for the work of
scientists has led to a peculiar, unjustified role for scientists themselves —
so that, increasingly, what is believed by scientists and the public to be
“scientific” is simply any claim that is upheld by many scientists, or that is
based on language and ideas that sound sufficiently similar to scientific
theories.
The Eclipse of Metaphysics
There are at least three areas of inquiry traditionally
in the purview of philosophy that now are often claimed to be best — or only —
studied scientifically: metaphysics, epistemology, and ethics. Let us discuss
each in turn.
Physicists Stephen Hawking and Leonard Mlodinow, in their 2010 book The Grand Design, write:
What is the nature of reality? Where did all this come from? Did the
universe need a creator? ... Traditionally these are questions for philosophy,
but philosophy is dead. Philosophy has not kept up with modern developments in
science, particularly physics. Scientists have become the bearers of the torch
of discovery in our quest for knowledge.
Physicists of earlier generations might have been dismissive of metaphysical
questions as mere speculation, but precisely for that reason they would have
regarded such questions as lying beyond their own realm of expertise. The
claims of Hawking and Mlodinow, and of many other recent writers, thus
represent a striking departure from the traditional view.
In contrast to these authors’ claims of philosophical obsolescence, there
has arisen a curious consilience between the findings of modern cosmology and
some traditional understandings of the creation of the universe. For example,
theists have noted that the model known as the Big Bang has a certain
consistency with the Judeo-Christian notion of creation ex nihilo,
a consistency not seen in other cosmologies that postulated an eternally existent
universe. (In fact, when the astronomer-priest Georges Lemaître first
postulated the theory, he was met with such skepticism by proponents of an
eternal universe that the name “Big Bang” was coined by his opponents — as a
term of ridicule.) Likewise, many cosmologists have articulated various forms
of what is known as the “anthropic principle” — that is, the observation that
the basic laws of the universe seem to be “fine-tuned” in such a way as to be
favorable to life, including human life.
Perhaps partly in response to this apparent consilience, recent decades have
seen the rise of a large professional and popular literature dedicated to
theories about multiverses, “many worlds,” and “landscapes” of reality that
would seem to dispel any appearance that humanity is specially favored.
Hawking and Mlodinow, for example, state that
the fine-tunings in the laws of nature can be explained by the existence of
multiple universes. Many people through the ages have attributed to God the
beauty and complexity of nature that in their time seemed to have no scientific
explanation. But just as Darwin and Wallace explained how the apparently
miraculous design of living forms could appear without intervention by a
supreme being, the multiverse concept can explain the fine-tuning of physical
law without the need for a benevolent creator who made the universe for our
benefit.
The multiverse theory holds that there are many different universes, of
which ours is just one, and that each has its own system of physical laws. The
argument Hawking and Mlodinow offer is essentially one from the laws of
probability: If there are enough universes, one or more whose laws are suitable
for the evolution of intelligent life is more or less bound to occur.
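The arithmetic behind this appeal to probability is straightforward. As a minimal sketch, assume (purely for illustration) a tiny probability p that any one universe has life-permitting laws; the chance that at least one universe in an ensemble of N does so is 1 - (1 - p)^N, which approaches certainty as N grows. Neither number below comes from Hawking and Mlodinow.

```python
# Illustrative only: p is an assumed probability that a single universe has
# life-permitting laws; N is an assumed number of universes in the ensemble.
p = 1e-10

for N in (1, 10**6, 10**12, 10**15):
    # Probability that at least one of N independent universes is life-permitting.
    at_least_one = 1 - (1 - p) ** N
    print(f"N = {N:>16,}   P(at least one life-permitting universe) = {at_least_one:.3e}")
```

As discussed below, this certainty-in-the-limit is a fact about probabilities within an assumed ensemble; it does not by itself explain why the ensemble exists at all.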
Physicist Lee Smolin, in his 1997 book The Life of the Cosmos, goes one step
further by applying the principles of natural selection to a multiverse model.
Smolin postulates that black holes give rise to new universes, and that the
physical laws of a universe determine its propensity to give rise to black
holes. A universe’s set of physical laws thus serves as its “genome,” and these
“genomes” differ with respect to their propensity to allow a universe to
“reproduce” by creating new universes. For example, it happens that a universe
with a lot of carbon is very good at making black holes — and a universe with a
lot of carbon is also one favorable to the evolution of life. In order for his
evolutionary process to work, Smolin also assumes a kind of mutational
mechanism whereby the physical laws of a universe may be slightly modified in
progeny universes. For Smolin, then, not only is our universe bound to occur
because there have been many rolls of the dice, but the dice are loaded in
favor of a universe like ours because it happens to be a particularly “fit”
universe.
Though these arguments may do some work in evading the conclusion that our
universe is fine-tuned with us in mind, they cannot sidestep, or even address,
the fundamental metaphysical questions raised by the fact that something —
whether one or many universes — exists rather than nothing. The main fault of
these arguments lies in their failure to distinguish between necessary and
contingent being. A contingent being is one that might or might not exist, and
thus might or might not have certain properties. In the context of modern
quantum physics, or population genetics, one might even assign probability
values to the existence or non-existence of some contingent being. But a
necessary being is one that must exist, and whose properties could not be other
than they are.
Multiverse theorists are simply saying that our universe and its laws have
merely contingent being, and that other universes are conceivable and so also
may exist, albeit contingently. The idea of the contingent nature of our
universe may cut against the grain of modern materialism, and so seem novel to
many physicists and philosophers, but it is not in fact new. Thomas Aquinas,
for example, began the third of his famous five proofs of the existence of God
(a being “necessary in itself”) with the observation of contingent being (“we
find among things certain ones that might or might not be”). Whether or not one
is convinced by Aquinas, it should be clear that the “discovery” that our
universe is a contingent event among other contingent events is perfectly
consistent with his argument.
Writers like Hawking, Mlodinow, and Smolin, however, use the contingent
nature of our universe and its laws to argue for a very different conclusion
from that of Aquinas — namely, that some contingent universe (whether or not it
turned out to be our own) must have come into being, without
the existence of any necessary being. Here again probability is essential to
the argument: While any universe with a particular set of laws may be very
improbable, with enough universes out there it becomes highly probable. This is
the same principle behind the fact that, when I toss a coin, even though there
is some probability that I will get heads and some probability that I will get
tails, it is certain that I will get heads or tails. Similarly,
modern theorists imply, the multiverse has necessary being even though any
given universe does not.
The problem with this argument is that certainty in the sense of
probability is not the same thing as necessary being: If I toss a coin, it is
certain that I will get heads or tails, but that outcome depends on my tossing the
coin, which I may not necessarily do. Likewise, any particular universe may
follow from the existence of a multiverse, but the existence of the multiverse
remains to be explained. In particular, the universe-generating process assumed
by some multiverse theories is itself contingent because it depends on the
action of laws assumed by the theory. The latter might be called meta-laws,
since they form the basis for the origin of the individual universes, each with
its own individual set of laws. So what determines the meta-laws? Either we
must introduce meta-meta-laws, and so on in infinite regression, or we must
hold that the meta-laws themselves are necessary — and so we have in effect
just changed our understanding of what the fundamental universe is to one that
contains many universes. In that case, we are still left without ultimate
explanations as to why that universe exists or has the
characteristics it does.
When it comes to such metaphysical questions, science and scientific
speculation may offer much in fleshing out details, but they have so far failed
to offer any explanations that are fundamentally novel to philosophy — much
less have they supplanted it entirely.
The Eclipse of Epistemology
Hawking and Mlodinow, in the chapter of their book
called “The Theory of Everything,” quote Albert Einstein: “The most
incomprehensible thing about the universe is that it is comprehensible.” In
response, Hawking and Mlodinow offer this crashing banality: “The universe is
comprehensible because it is governed by scientific laws; that is to say, its
behavior can be modeled.” Later, the authors invite us to give ourselves a
collective pat on the back: “The fact that we human beings — who are ourselves
mere collections of fundamental particles of nature — have been able to come
this close to an understanding of the laws governing us and our universe is a
great triumph.” Great triumph or no, none of this addresses Einstein’s paradox,
because no explanation is offered as to why our universe is
“governed by scientific laws.”
Moreover, even if we can be confident that our universe has unchanging
physical laws — which many of the new speculative cosmologies call into
question — how is it that we “mere collections of particles” are able to
discern those laws? How can we be confident that we will continue to discern
them better, until we understand them fully? A common response to these
questions invokes what has become the catch-all explanatory tool of advocates
of scientism: evolution. W. V. O. Quine was one of the first modern
philosophers to apply evolutionary concepts to epistemology, when he argued in Ontological Relativity and Other Essays (1969) that
natural selection should have favored the development of traits in human beings
that lead us to distinguish truth from falsehood, on the grounds that believing
false things is detrimental to fitness. More recently, scientific theories
themselves have come to be considered the objects of natural selection. For
example, philosopher Bastiaan C. van Fraassen argued in his 1980 book The Scientific Image:
the success of current scientific theories is no miracle. It is not even
surprising to the scientific (Darwinist) mind. For any scientific theory is
born into a life of fierce competition, a jungle red in tooth and claw. Only
the successful theories survive — the ones which in fact latched
onto actual regularities in nature.
The notion that our minds and senses are adapted to find knowledge does
have some intuitive appeal; as Aristotle observed long before Darwin, “all men,
by nature, desire to know.” But from an evolutionary perspective, it is by no
means obvious that there is always a fitness advantage to knowing the truth.
One might grant that it may be very beneficial to my fitness to know certain
facts in certain contexts: For instance, if a saber-toothed tiger is about to
attack me, it is likely to be to my advantage to be aware of that fact.
Accurate perception in general is likely to be advantageous. And simple
mathematics, such as counting, might be advantageous to fitness in many
contexts — for example, in keeping track of my numerous offspring when
saber-toothed cats are about. Plausibly, even the human propensity for
gathering genealogical information, and with it an intuitive sense of degrees
of relatedness among social group members, might have been advantageous because
it served to increase an organism's tendency to protect members of the
species with genotypes similar to its own. But the general epistemological
argument offered by these authors goes far beyond any such elementary needs.
While it may be plausible to imagine a fitness advantage to simple skills of
classification and counting, it is very hard to see such an advantage to DNA
sequence analysis or quantum theory.
Similar points apply whether one takes the objects of natural selection to be
the ideas themselves or the traits that allow us to form ideas. In
either case, the “fitness” of an idea hinges on its ability to gain wide
adherence and acceptance. But there is little reason to suppose that natural
selection would have favored the ability or desire to perceive the truth in all
cases, rather than just some useful approximation of it. Indeed, in some
contexts, a certain degree of self-deception may actually be advantageous from
the point of view of fitness. There is a substantial sociobiological literature
regarding the possible fitness advantages of self-deception in humans (the
evolutionary biologist Robert L. Trivers reviewed these in a 2000 article in the Annals of the New York Academy of Sciences).
These invocations of evolution also highlight another common misuse of
evolutionary ideas: namely, the idea that some trait must have
evolved merely because we can imagine a scenario under which possession of that
trait would have been advantageous to fitness. Unfortunately, biologists as
well as philosophers have all too often been guilty of this sort of invalid
inference. Such forays into evolutionary explanation amount ultimately to
storytelling rather than to hypothesis-testing in the scientific sense. For a
complete evolutionary account of a phenomenon, it is not enough to construct a
story about how the trait might have evolved in response to a given selection
pressure; rather, one must provide some sort of evidence that it really did so
evolve. This is a very tall order, especially when we are dealing with human
mental or behavioral traits, the genetic basis of which we are far from
understanding.
Evolutionary biologists today are less inclined than Darwin was to expect
that every trait of every organism must be explicable by positive selection. In
fact, there is abundant evidence — as described in books like Motoo Kimura’s The Neutral Theory of Molecular Evolution (1983),
Stephen Jay Gould's The Structure of Evolutionary Theory (2002), and
Michael Lynch’s The Origins of Genome Architecture (2007) —
that many features of organisms arose by mutations that were fixed by chance,
and were neither selectively favored nor disfavored. The fact that any species,
including ours, has traits that might confer no obvious fitness benefit is
perfectly consistent with what we know of evolution. Natural selection can explain
much about why species are the way they are, but it does not necessarily offer
a specific explanation for human intellectual powers, much less any sort of
basis for confidence in the reliability of science.
What van Fraassen, Quine, and these other thinkers are appealing to is a
kind of popularized and misapplied Darwinism that bears little relationship to
how evolution really operates, yet that appears in popular writings of all
sorts — and even, as I have discovered in my own work as an evolutionary
biologist, in the peer-reviewed literature. To speak of a “Darwinian” process
of selection among culturally transmitted ideas, whether scientific theories or
memes, is at best only a loose analogy with highly misleading implications. It
easily becomes an interpretive blank check, permitting speculation that seems
to explain any describable human trait. Moreover, even in the strongest
possible interpretation of these arguments, at best they help a little in
explaining why we human beings are capable of comprehending the universe — but
they still say nothing about why the universe itself is comprehensible.
The Eclipse of Ethics
Perhaps no area of philosophy has seen a greater effort
at appropriation by advocates of scientism than ethics. Many of them tend toward
a position of moral relativism. According to this position, science deals with
the objective and the factual, whereas statements of ethics merely represent
people’s subjective feelings; there can be no universal right or wrong. Not
surprisingly, there are philosophers who have codified this opinion. The
positivist tradition made much of a “fact-value distinction,” in which science
was said to deal with facts, leaving fields like ethics (and aesthetics) to
deal with the more nebulous and utterly disparate world of values. In his
influential book Ethics: Inventing Right and Wrong (1977), the
philosopher J. L. Mackie went even further, arguing that ethics is
fundamentally based on a false theory about reality.
Evolutionary biology has often been seen as highly relevant to ethics,
beginning in the nineteenth century. Social Darwinism — at least as it came to
be explained and understood by later generations — was an ideology that
justified laissez-faire capitalism with reference to the natural “struggle for
existence.” In the writings of authors such as Herbert Spencer, the
accumulation of wealth with little regard for those less fortunate was
justified as “nature’s way.” Of course, the “struggle” involved in natural
selection is not a struggle to accumulate a stock portfolio but a struggle to
reproduce — and ironically, Social Darwinism arose at the very time that the
affluent classes of Western nations were beginning to limit their reproduction
(the so-called “demographic transition”) with the result that the economic
struggle and the Darwinian struggle were at cross-purposes.
Partly in response to this contradiction, the eugenics movement arose, with
its battle cry, “The unfit are reproducing like rabbits; we must do something
to stop them!” Although plenty of prominent Darwinians endorsed such sentiments
in their day, no more incoherent a plea can be imagined from a Darwinian point
of view: If the great unwashed are out-reproducing the genteel classes, that
can only imply that it is the great unwashed who are the fittest — not the
supposed “winners” in the economic struggle. It is the genteel classes, with
their restrained reproduction, who are the unfit. So the foundations of
eugenics are complete nonsense from a Darwinian point of view.
The unsavory nature of Social Darwinism and associated ideas such as
eugenics caused a marked eclipse in the enterprise of evolutionary ethics. But
since the 1970s, with the rise of sociobiology and its more recent offspring
evolutionary psychology, there has been a huge resurgence of interest in
evolutionary ethics on the part of philosophers, biologists, psychologists, and
popular writers.
It should be emphasized that there is such a thing as a genuinely
scientific human sociobiology or evolutionary psychology. In this field,
falsifiable hypotheses are proposed and tested with real data on human
behavior. The basic methods are akin to those of behavioral ecology, which have
been applied with some success to understanding the behavioral adaptations of
nonhuman animals, and can shed similar light on aspects of human behavior —
although these efforts are complicated by human cultural variability. On the
other hand, there is also a large literature devoted to a kind of pop
sociobiology that deals in untested — and often untestable — speculations, and
it is the pop sociobiologists who are most likely to tout the ethical relevance
of their ostensible discoveries.
When evolutionary psychology emerged, its practitioners were generally
quick to repudiate Social Darwinism and eugenics, labeling them as “misuses” of
evolutionary ideas. It is true that both were based on incoherent reasoning
that is inconsistent with the basic concepts of biological evolution; but it is
also worth remembering that some very important figures in the history of
evolutionary biology did not see these inconsistencies, being blinded, it
seems, by their social and ideological prejudices. The history of these ideas
is another cautionary tale of the fallibility of institutional science when it
comes to getting even its own theories straight.
Just the same, what evolutionary psychology was about, we were told, was
something quite different from Social Darwinism. It avoided the political and
focused on the personal. One area of human life to which the field has devoted
considerable attention is sex, spinning out just-so stories to explain the
“adaptive” nature of every sort of behavior, from infidelity to rape. As with
the epistemological explanations, since natural selection “should” have favored
this or that behavior, it is often simply concluded that it must have
done so. The tacit assumption seems to be that merely reciting the story somehow
renders it factual. (There often even seems to be a sort of relish with which
these stories are elaborated — the more so the more thoroughly caddish the
behavior.) The typical next move is to deplore the very behaviors the
evolutionary psychologist has just designated as part of our evolutionary
heritage, and perhaps our instincts: we are assured, lest anyone get the
wrong idea, that of course we do not approve of such things today. This
deploring is often accompanied by a
pious invocation of the fact-value distinction (even though typically no facts
at all have made an appearance — merely speculations).
There seems to be a thirst for this kind of explanation, but the pop
evolutionary psychologists generally pay little attention to the philosophical
issues raised by their evolutionary scenarios. Most obviously, if “we now know”
that the selfish behavior attributed to our ancestors is morally reprehensible,
how have “we” come to know this? What basis do we have for saying that anything
is wrong at all if our behaviors are no more than the consequence of past
natural selection? And if we desire to be morally better than our ancestors
were, are we even free to do so? Or are we programmed to behave in a certain
way that we now, for some reason, have come to deplore?
On the other hand, there is a more serious philosophical literature that
attempts to confront some of the issues in the foundations of ethics that arise
from reflections on human evolutionary biology — for example, Richard Joyce’s
2006 book The Evolution of Morality. Unfortunately,
much of this literature consists of still more storytelling — scenarios whereby
natural selection might have favored a generalized moral sense or the tendency
to approve of certain behaviors such as cooperation. There is nothing
inherently implausible about such scenarios, but they remain in the realm of
pure speculation and are essentially impossible to test in any rigorous way.
Still, these ideas have gained wide influence.
Part of this evolutionary approach to ethics tends toward a debunking of
morality. Since our standards of morality result from natural selection for
traits that were useful to our ancestors, the debunkers argue, these moral
standards must not refer to any objective ethical truths. But just because
certain beliefs about morality were useful for our ancestors does not make them
necessarily false. It would be hard to make a similar case, for example,
against the accuracy of our visual perception based on its usefulness to our
ancestors, or against the truth of arithmetic based on the same.
True ethical statements — if indeed they exist — are of a very different
sort from true statements of arithmetic or observational science. One might
argue that our ancestors evolved the ability to understand human nature and,
therefore, they could derive true ethical statements from an understanding of
that nature. But this is hardly a novel discovery of modern science: Aristotle
made the latter point in the Nicomachean Ethics. If human beings
are the products of evolution, then it is in some sense true that everything we
do is the result of an evolutionary process — but it is difficult to see what
is added to Aristotle’s understanding if we say that we are able to reason as he
did as the result of an evolutionary process. (A parallel argument could be
made about Kantian ethics.)
Not all advocates of scientism fall for the problems of reducing ethics to
evolution. Sam Harris, in his 2010 book The Moral Landscape, is one advocate
of scientism who takes issue with the whole project of evolutionary ethics. Yet
he wishes to substitute an offshoot of scientism that is perhaps even more
problematic, and certainly more well-worn: utilitarianism. Under Harris’s
ethical framework, the central criterion for judging whether a behavior is moral is
whether or not it contributes to the “well-being of conscious creatures.”
Harris’s ideas have all of the problems that have plagued utilitarian
philosophy from the beginning. As utilitarians have for some time, Harris
purports to challenge the fact-value distinction, or rather, to sidestep the
tricky question of values entirely by just focusing on facts. But, as has also
been true of utilitarians for some time, this move ends up being a way to
advance certain values over others without arguing for them, and to leave large
questions about those values unresolved.
Harris does not, for example, address the time-bound nature of such
evaluations: Do we consider only the well-being of creatures that are conscious
at the precise moment of our analysis? If so, why should we accept such a
bias? What of creatures that are going to possess consciousness in the near
future — or would without human intervention — such as human embryos, whose
destruction Harris staunchly advocates for the purposes of stem cell research?
What of comatose patients, whose consciousness, and prospects for future consciousness,
are uncertain? Harris might respond that he is only concerned with the
well-being of creatures now experiencing consciousness, not any potentially
future conscious creatures. But if so, should he not, for example, advocate
expending all of the earth’s nonrenewable resources in one big here-and-now
blowout, enhancing the physical well-being of those now living, and let future
generations be damned? Yet Harris claims to be a conservationist. Surely the
best justification for resource conservation on the basis of his ethics would
be that it enhances the well-being of future generations of conscious
creatures. If those potential future creatures merit our consideration, why
should we not extend the same consideration to creatures already in existence,
whose potential future involves consciousness?
Moreover, the factual analysis Harris touts comes nowhere near bearing the
weight of the ethical inquiry he claims for it. Harris argues that the question of
what factors contribute to the “well-being of conscious creatures” is a factual
one, and furthermore that science can provide insights into these factors, and
someday perhaps even give definitive accounts of them. Harris himself has been
involved in research that examines the brain states of human subjects engaged
in a variety of tasks. Although there has been much overhyping of brain
imaging, the limitations of this sort of research are becoming increasingly
obvious. Even on their own terms, these studies at best provide evidence of
correlation, not of causation, and of correlations mixed in with the
unfathomably complex interplay of cause and effect that are the brain and the
mind. These studies implicitly claim to get around the problem of
understanding subjective consciousness by examining the brain, but the basic
unlikeness of first-person qualitative experience and third-person events that
can be examined by anyone places fundamental limits on the usual reductive
techniques of empirical science.
We might still grant Harris’s assumption that neuroscience will someday
reveal, in great biochemical and physiological detail, a set of factors highly
associated with a sense of well-being. Even so, there would be limitations on
how much this knowledge would advance human happiness. For comparison, we know
quite a lot about the physiology of digestion, and we are able to describe in
great detail the physiological differences between the digestive system of a
person who is starving and that of a person who has just eaten a satisfying and
nutritionally balanced meal. But this knowledge contributes little to solving
world hunger. This is because the factor that makes the difference — that is,
the meal — comes from outside the person. Unless the factors causing our
well-being come primarily from within, and are totally independent of what
happens in our environment, Harris’s project will not be the key to achieving
universal well-being.
Harris is aware that external circumstances play a vital role in our sense
of well-being, and he summarizes some research that addresses these factors.
But most of this research is soft science of the very softest sort —
questionnaire surveys that ask people in a variety of circumstances about their
feelings of happiness. As Harris himself notes, most of the results tell us
nothing we did not already know. (Unsurprisingly, Harris, an atheist
polemicist, fails to acknowledge any studies that have supported a spiritual or
religious component in happiness.) Moreover, there is reason for questioning to
what extent the self-reported “happiness” in population surveys relates to real
happiness. Recent data indicating that both
states and countries with high rates of reported “happiness” also have high
rates of suicide suggest that people’s answers to surveys may not always
provide a reliable indicator of societal well-being, or even of happiness.
This, too, is a point as old as philosophy: As Aristotle noted in the Nicomachean
Ethics, there is much disagreement between people as to what happiness is,
“and often even the same man identifies it with different things, with health
when he is ill, with wealth when he is poor.” Again, understanding values
requires philosophy, and cannot simply be sidestepped by wrapping them in a
numerical package. Harris is right that new scientific information can guide
our decisions by enlightening our application of moral principles — a
conclusion that would not have been troubling to Kant or Aquinas. But this is a
far cry from scientific information shaping or determining our moral principles
themselves, an idea for which Harris is unable to make a case.
A striking inconsistency in Harris’s thought is his adherence to
determinism, which seems to go against his insistence that there are right and
wrong choices. This is a tension widely evident in pop sociobiology. Harris
seems to think that free will is an illusion because our decisions are
really driven by thoughts that arise unbidden in our brains. He does not
explain the origin of these thoughts nor how their origin relates to moral
choices.
Harris gives a hint of an answer to this question when, in speaking of
criminals, he attributes their actions to “some combination of bad genes, bad
parents, bad ideas, and bad luck.” Each of us, he says, “could have been dealt
a very different hand in life” and “it seems immoral not to recognize just how
much luck is involved in morality itself.” Harris’s reference to “bad genes”
puts him back closer to the territory of eugenics and Social Darwinism than he
seems to realize, making morality the privilege of the lucky few. Although
Harris admits that we have a lot to learn about what makes for happiness, he
does advance his understanding that happy people have “careers that are
intellectually stimulating and financially rewarding” and “basic control over
their lives.”
This view undermines the possibility of happiness and moral behavior for
those who are dealt a bad hand, and so does more to degrade than uplift at the
individual level. But worse, it does little to advance the well-being of
society as a whole. The importance of good circumstances, and of guaranteeing
them for as many people as possible, is already widely understood and
appreciated. But the question remains how to bring about these circumstances
for everyone, and no economic system has yet been devised to ensure this. Short
of this, difficult discussions of philosophy, justice, politics, and all of the
other fields concerned with public life will be required to understand what the
good life is and how to provide it to many given the limitations and
inequalities of what circumstance brings to each of us. On these points, as
with so many others, scientism tends to present as bold, novel solutions what
are really just the beginning terms of the problem as it is already widely
understood.
The Persistence of Philosophy
The positivist tradition in philosophy gave scientism a
strong impetus by denying validity to any area of human knowledge outside of
natural science. More recent advocates of scientism have taken the ironic but
logical next step of denying any useful role for philosophy whatsoever, even
the subservient philosophy of the positivist sort. But the last laugh, it
seems, remains with the philosophers — for the advocates of scientism reveal
conceptual confusions that are obvious upon philosophical reflection. Rather
than rendering philosophy obsolete, scientism is setting the stage for its
much-needed revival.
Advocates of scientism today claim the sole mantle of rationality,
frequently equating science with reason itself. Yet it seems the very
antithesis of reason to insist that science can do what it cannot, or even that
it has done what it demonstrably has not. As a scientist, I would never deny
that scientific discoveries can have important implications for metaphysics,
epistemology, and ethics, and that everyone interested in these topics needs to
be scientifically literate. But the claim that science and science alone can
answer longstanding questions in these fields gives rise to countless problems.
In contrast to reason, a defining characteristic of superstition is the
stubborn insistence that something — a fetish, an amulet, a pack of Tarot cards
— has powers which no evidence supports. From this perspective, scientism
appears to have as much in common with superstition as it does with properly
conducted scientific research. Scientism claims that science has already resolved
questions that are inherently beyond its ability to answer.
Of all the fads and foibles in the long history of human credulity,
scientism in all its varied guises — from fanciful cosmology to evolutionary
epistemology and ethics — seems among the more dangerous, both because it
pretends to be something very different from what it really is and because it
has been accorded widespread and uncritical adherence. Continued insistence on
the universal competence of science will serve only to undermine the
credibility of science as a whole. The ultimate outcome will be an increase of
radical skepticism that questions the ability of science to address even the
questions legitimately within its sphere of competence. One longs for a new
Enlightenment to puncture the pretensions of this latest superstition.