By PHILIP E. AUERSWALD
As U.S. troops were massing in England for the
Normandy invasion, the U.S. Congress engaged in a heated debate about how to
avert mass unemployment when millions of servicemen came home at war’s end.
Their concern followed precedent. Only a dozen years earlier, at the nadir of
the Great Depression, World War I veterans had converged on Washington to
demand early disbursement of congressionally mandated payments. The result was
an ugly confrontation between the “Bonus Marchers” and U.S. Army units led by
none other than the chief of staff, General Douglas MacArthur. Wishing to avoid
a repetition of this disturbing scenario, Congress enacted the GI Bill, signed
into law by President Roosevelt on June 22, 1944.
The GI Bill was a momentous piece of legislation, credited ever since its passage with creating opportunities for an entire generation. That it certainly did. But that success was largely an unanticipated by-product of a more pressing concern that never materialized. Returning veterans accepted the government’s offer of free college tuition and low-interest home loans in numbers far exceeding congressional projections. By contrast, the government’s projections of how heavily returning veterans would draw on unemployment benefits turned out to be dramatically exaggerated: only 20 percent of those benefits were ever claimed.
This far better-than-expected outcome did not
end concern over the potential for large-scale unemployment and impoverishment,
for two new putative sources of these problems soon came to the fore: the first
was automation, and the second was global overpopulation.
Almost exactly four years after V-J Day, on
August 13, 1949, an MIT professor named Norbert Wiener wrote a letter to Walter
Reuther, president of the United Auto Workers (UAW), containing a darkly
prophetic message. Within a decade or two, Wiener warned, the advent of
automatic automobile assembly lines would result in “disastrous” unemployment.
The power of computers to control machines made such an outcome all but
inevitable. As a creator of this new technology, Wiener wanted to give Reuther
advance notice so that the UAW could help its members prepare for and adapt to
the massive displacement of labor looming on the horizon.
Now, if anyone in 1949 grasped the disruptive
potential of computing machines, it was Norbert Wiener. A prodigy who earned
his Ph.D. from Harvard in mathematical philosophy at age 18, he had contributed
to the development of the first modern computer, created the first automated
machine and laid the groundwork for a new interdisciplinary science of
information and communication that he termed “cybernetics.” His work
anticipated and inspired Marshall McLuhan’s heralded studies of mass media,
provided the initial impetus for the explorations by James Watson and Francis
Crick that led to the discovery of the double helix, and spurred
science-fiction writer William Gibson to coin the term “cyberspace” to describe
a type of virtual world that Wiener himself had envisioned two decades before
the creation of the first web page.
Reuther took Wiener’s letter seriously,
responding promptly by telegram: “Deeply interested in your letter. Would like
to discuss it with you at earliest opportunity following conclusion of our
current negotiations with Ford Motor Company. Will you be able to come to Detroit?”
When the two met in March 1950, they pledged to work together to create a
labor-science council to anticipate and prepare for major technological changes
affecting workers.
At about the same time Reuther and Wiener were
meeting, a brain trust was gathering in the orbit of John D. Rockefeller III to
address another problem: global overpopulation. The basic concern of this group
was both old and simple: Human populations keep growing, but the planet isn’t
getting bigger, so sooner or later disaster will be upon us. Funding from the
Rockefeller Brothers Fund permitted the creation of the Population Council in
1952. John D. Rockefeller III appointed Frederick Osborn to be the Council’s
first president.
The work of the Population Council took its cue
from the famed 1798 masterwork by Reverend Thomas Robert Malthus: An Essay
on the Principle of Population. “The power of population is
indefinitely greater than the power in the earth to produce subsistence for
man,” Malthus wrote. This slim volume has become one of the most celebrated
works of political economy ever published, a distinction made a bit surprising
by the fact that it was published at pretty much exactly the time when history
began to prove its core thesis incorrect. Starting not long after An
Essay on the Principle of Population appeared in print, global population
levels and per capita income began a long and steady ascent—in tandem. Yet we
have only recently begun to note the strong correlation between population
growth and increased prosperity. For most of the past two centuries Malthusian
fears of demographic doom have obscured the increasingly evident fact of a
global demographic dividend.
In 1893, almost a century after the publication
of Malthus’s book, Henry Adams (grandson of John Quincy Adams) proclaimed that
“two more generations should saturate the world with population and should
exhaust the mines.” At about that time, a new intellectual movement took shape
that combined Malthusian fears with social Darwinism. Its proponents dubbed it
“eugenics.” For the first half of the 20th century the eugenics
movement flourished in the United Kingdom and the United States; the result was
an intellectual architecture that provided justification for some of the most
abhorrent acts that humans have perpetrated upon other humans—the Holocaust
being primary among them.
The Nazi embrace of eugenics largely (though not
entirely) put an end to its appeal in the United Kingdom and the United States
following World War II, but core concerns about the proliferation of people in
poor places found new expression in the global population control programs that
came into being in the 1950s and 1960s, including ones funded by the Population
Council. Frederick Osborn himself had been a founding member of the American
Eugenics Society, and was the author of a 1940 book titled Preface to
Eugenics. Another protagonist of the postwar population control
movement was General William Henry Draper, Jr.—military leader, diplomat, and
venture capital pioneer—who coined the phrase “population bomb” to refer to the
dim prospects for humanity (in particular, cream-skinned humanity) in the face
of a globally increasing population. The phrase lived on in the title of a
hugely influential 1968 book by Paul and Anne Ehrlich, as well as several
subsequent publications, most recently a spring 2010 cover essay in Foreign
Affairs titled “The New Population Bomb.”
So what actually happened over the past two
centuries since Malthus penned his famous treatise, or in the sixty years since
Reuther and Wiener met to discuss the danger of mass technological
unemployment?
With regard to the threat of global
overpopulation, the facts are as I briefly summarized them above: Growth in
population is minimal until the start of the 18th century, at
which point a steady increase begins. Population really starts to take off,
though, after World War II. In the second half of the 20th century,
global population more than doubles, from roughly 2.5 billion in 1950 to almost
6 billion in 2000. And the data show that, in material terms at least,
individual well-being (as measured by global per capita income) takes off at
exactly the same time as population.1
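To put that doubling in annual terms, here is a quick back-of-the-envelope calculation, a sketch that uses only the two endpoint figures cited above:

```python
# Back-of-the-envelope check: what average annual growth rate takes world
# population from roughly 2.5 billion in 1950 to almost 6 billion in 2000?
# (Endpoints taken from the figures cited in the text.)
pop_1950 = 2.5e9
pop_2000 = 6.0e9
years = 50

annual_growth = (pop_2000 / pop_1950) ** (1 / years) - 1
print(f"Implied average annual growth: {annual_growth:.2%}")  # roughly 1.8 percent per year
```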
This doesn’t necessarily mean that the observed
increase in population directly caused the observed increase in per capita
income; nor does it mean the reverse, for that matter. It just means that the
two processes—increasing population and increasing wealth on a global
scale—have been strongly correlated over the past two millennia.
Why has Malthus so far turned out to be wrong?
First and foremost, there is the global historical regularity known as the
“demographic transition.” If the meaning of words were more connected to the
sound they produce, this technical-sounding term would rhyme with “We’re
saved!” It simply means that as people get richer, they tend to have fewer
children. This effect is so powerful that the fertility rate in Hong Kong
is lower today than it is in the rest of China, despite the fact that residents
of relatively wealthy Hong Kong are the only ones in China exempt from the
PRC’s draconian one-child policy. The same has been true pretty much everywhere
else in the world. The result: The population bomb turns out to be a dud.2
An insightful paper written in the early 1990s
by a Harvard economics graduate student named Michael Kremer helps us
understand why we should not be surprised to observe today that humanity has
experienced a “population boon” over the centuries, rather than a bomb.3 When
Kremer wrote this paper, the most accomplished theorists in the economics
profession were busy trying to fix an inconsistency between newly fashionable
models of economic growth and a particular feature of economic reality at the
time. The issue was this: The improved approach to studying economic growth
that was then making the rounds predicted that large countries should grow more
rapidly than small countries, because they have more people to invent stuff.
Back in the early 1990s, the world’s most populous countries, China and India,
were not growing more quickly, but more slowly, than other countries, and they
had been doing so for some time. That fact threw sand in the gears of this
particular theory.
Kremer’s approach to this puzzle was to situate
the facts of the late 20th century in a longer historical time
frame—much longer. By considering the growth of human populations since more or
less the beginning of time as it relates to human society, Kremer was able to
look anew at the prediction that large populations actually drive economic
growth. What he found was that the slow growth of large countries such as China
and India was actually an historical aberration and thus not negative evidence
for the theory as such.
Over the very long term, the evidence supports
the claim that the creativity of individuals powers human productivity and the
improvements in societal well-being that follow. More people imply a likelihood
of more ideas; more ideas, in turn, imply more of the great ideas that
ultimately propel human societies toward increased prosperity.
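To make that logic concrete, here is a minimal, purely illustrative sketch in Python of the feedback loop at issue: ideas arrive in proportion to population, and in a Malthusian world higher productivity supports a larger population, so growth feeds on itself. The functional form and parameter values are assumptions made for illustration only; this is a sketch of the intuition, not Kremer's actual specification.

```python
# A stylized toy model of the "more people, more ideas, more prosperity" loop.
# Assumptions (for illustration only, not Kremer's 1993 specification):
#   - each person has the same small chance of producing a productivity-raising
#     idea, so productivity growth is proportional to population;
#   - in a Malthusian regime, population expands to the level productivity can support.

def simulate(periods=150, population=1.0, productivity=1.0, idea_rate=0.005):
    history = [(0, population, productivity)]
    for t in range(1, periods + 1):
        # More people means proportionally more ideas, hence faster productivity growth.
        productivity *= 1.0 + idea_rate * population
        # Malthusian closure: population tracks the level productivity can support.
        population = productivity
        history.append((t, population, productivity))
    return history

if __name__ == "__main__":
    # Growth accelerates as population rises: the scale effect Kremer defended.
    for t, pop, prod in simulate()[::30]:
        print(f"period {t:3d}: population index {pop:6.2f}, productivity index {prod:6.2f}")
```

Even in this toy setting the growth rate itself rises with the level of population, which is why, viewed over a long enough horizon, a larger population looks more like an engine of growth than a brake on it.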
In the two decades following Kremer’s
more-or-less solitary stand in defense of scale effects, his position was
vindicated in dramatic style as the world’s two most populous countries, China
and India, transformed themselves from basket cases to growth engines. Kremer
believed that the likelihood of great invention is pretty much a constant in
all cultures, through all periods of time, and this assumption seems to fit the
data on the long-term evolution of human society pretty well.
As humanity is increasingly liberated from the
daily struggle for survival that was the norm for all millennia prior to the 18th century,
its potential for economic growth through structured social creativity goes up.
Reduced to its quantitative basics, the story of improvements in human
well-being over the period of millennia is the classic S-shaped adoption curve
familiar to anyone who has studied the diffusion of technology. The only
difference in this case is that it’s not a transistor radio or a mobile phone
being adopted but the state of being liberated from a subsistence existence,
with the cognitive freedom that entails. As Bob Litan of the Kauffman
Foundation put it, the generations alive today are living on the “S” of human
history: the steepest part of the slope of human progress.
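For readers who want the shape in symbols, the S-curve invoked here is conventionally modeled with a logistic function; the standard form below is supplied for illustration rather than drawn from Litan:

$$ f(t) = \frac{L}{1 + e^{-k(t - t_0)}} $$

where $L$ is the saturation level, $k$ the steepness, and $t_0$ the midpoint. The curve is nearly flat at first, steepest around the midpoint, and flattens again as adoption saturates; the claim above is that, for the adoption of a life beyond subsistence, humanity now sits on that steep middle stretch.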
Just a few decades ago, the average person in
the developing world (or Appalachia) was more likely to see his or her child
die from diarrhea than to make a phone call or turn on an electric light at
home. On a global scale, prosperity was as much a function of the accident of
birth as it was of ability or effort. The result was a persistent rift, not
between rich and poor countries, but between a global majority destined for a
highly localized and materially impoverished existence, and a global minority
blessed with the resources and freedom to travel without restriction in search
of the best in education, career opportunities and living environment. The
result was, and still is today, a world sharply divided between the globally
rich and the locally poor.
Yet after four centuries of sustained advances
in science, innovation and the organization of society, the frontier of
technology is finally reaching the heart of the human community. Never before
have so many people had such great opportunities to connect, create,
contribute, and collaborate—along the way, producing value for society and for
themselves.
The consequence? Predictions of demographic
disaster, consistently pushed back for the two centuries since Malthus, are
finally reaching their expiry date. A combination of entrepreneurship,
technological innovation and broad societal transformation is giving even
children born in the most persistently poor places a chance to benefit from and
contribute to the vitality of global markets and communities of collaborative
action.
What about the pernicious effects of automation?
If growing populations don’t doom societies, won’t the substitution of machine
labor for human labor do the same? No, Norbert Wiener’s prediction of
calamitous post-World War II unemployment did not come to pass. As partial
explanation we can cite the brilliant 1965 book by Herbert A. Simon, in which
he argued that technological innovation invariably produces more and better jobs,
not employment crises; at least up to that point in time, he found that general
educational levels posed no barrier to the continuation of the process.4 And,
looking back further, we can easily mock the anti-technological fanaticism of
the early 19th-century Luddites, or recall Frédéric Bastiat’s 1845
open letter to the French Parliament in which he lampooned
protectionism put forward as a means to protect employment.5 But
just because Simon and Bastiat were right in their day does not mean that they
are right in ours, especially as regards the ability of discrete national
education systems to keep up with the accelerating demands of the postmodern
job market.
Indeed, sixty years after Wiener wrote to
Reuther, his darker visions have in some sense been borne out. Robots do now
perform much of the production-line work in auto factories that UAW members
once did. Employment in the auto industry is also now far below the peak levels
reached in 1995. More broadly, the manufacturing workforce in the United States
has atrophied—from 35 percent of non-farm employment in 1960 to 10 percent
today. This was primarily due not, as is widely believed, to “offshoring” to
China and other parts of the world, but rather to automation-driven increases
in manufacturing productivity. (Evidence: Between 1995 and 2002, the United
States lost two million manufacturing jobs; China during the same time period
lost 15 million.)6
In a twist that even Wiener did not anticipate,
the world of cyberspace that he was among the first to imagine has forced
workplace transformations far from the factory floor—ones more rapid and more
extensive than any caused by the advent of automated production. Phenomena as
distinct and seemingly disconnected as the outsourcing of back-office functions
by large corporations, the collapse of the newspaper industry, and the recent
proliferation of options for online education are all manifestations of the fundamental
trends Wiener identified decades ago. First journalists and accountants, then
X-ray technicians, artists and photographers, among many others, have undergone
the disconcerting experience of watching old market structures that previously
would have guaranteed lifelong livelihoods crumble before their eyes. The jury
is still out on whether we can keep running faster, creating new jobs more
quickly than the forces of creative destruction destroy old ones, even as world
population pressures may strain a finite resource base.
Yet there is ample reason to anticipate a good
outcome here as well. Today, new technologies of communication and
collaboration are enabling not just lone innovators but entire populations to
connect and create at a scale previously unimaginable. Not only do we have more
people, which is good; we have more well-connected people both within and among
societies, which is even better.
Think what you will about the fall 2011 “Occupy
Wall Street” protests, but they are not comparable to the Luddites of the early
1800s who smashed mechanized looms to protest the transformations brought about
by the Industrial Revolution. David Graeber, a contributor to the thinking
behind the Occupy Wall Street protests, put it this way: “One of the most abundant
resources on earth is smart, creative, imaginative people. And yet 99.9% of the
power of the human race is not being marshaled right now. . . . All we need to
do is open that spigot a little bit and we could come up with endless ways to
create and produce and distribute.”
In this generation as in generations past,
people deprived of ways to realize their productive potential do become
frustrated in a hurry. Graeber and those who share his particular variety of
dissatisfaction acknowledge, just as Wiener did before them, that putting the
genie of technological change back into the bottle is neither possible nor
desirable. Rather, the interesting question—in fact, the only question—for
people in the United States as elsewhere is ultimately this: How do we direct
the inexorable movement of technology to enhance, rather than obstruct, the
ability of people everywhere to realize their productive potential?
To suggest an answer, let’s go back to where we
began, to the expected economic calamities following World War II that never
took place. Back then, the United States underwent the most dramatic and
sustained period of economic expansion that any nation had so far experienced,
even as its population boomed.
What caused that economic expansion? Too often
the post-World War II boom in the United States is attributed entirely to the
work ethic and ingenuity of the Greatest Generation, or other characteristics
of America itself. Certainly, the can-do spirit of millions of veterans
returning to the workforce from the frontlines played a role, as did the need
to satisfy long pent-up domestic demand. But America’s ascent to global
dominance was eased considerably by the fact that every other major center of
production in the world was either obliterated or incapacitated by the war—the
most devastating conflict in human history.
The advantage that the United States suddenly
held over the rest of the world in terms of physical capital was substantially
bolstered by an epochal influx of top talent from every part of the planet—a
massive human capital transfer that continues to pay dividends even today. This
positive insurgency of ability was a key factor in building the global
competitive advantage that the United States enjoyed for two generations as
immigrant and home-grown talent combined with massive investments by government
to turn American universities and corporations into awe-inspiring engines of
innovation.7
Daniel Chee Tsui, born in a farming village in
Henan Province in 1939, came to the United States in 1958 to attend Augustana
College in Rock Island, Illinois, where he was the school’s only student of
Chinese descent. He continued his studies at the University of Chicago,
ultimately making fundamental discoveries relating to semiconductors, for which
he was awarded the 1998 Nobel Prize in Physics. Vinod Khosla, famed
entrepreneur and venture capitalist, came to the United States in 1979 at age
twenty after failing at his first entrepreneurial venture—a soy-milk company
whose intended market was the many people in India without a refrigerator. He
went on to found Sun Microsystems and become a partner in the legendary venture
capital firm Kleiner Perkins Caufield & Byers.
(Entrepreneur-turned-academic Vivek Wadhwa—himself an immigrant to the United
States—has documented that 52 percent of Silicon Valley’s start-ups had at
least one foreign-born founder.)
These are specific examples, but they are not
isolated ones. Many more like them (did I mention Albert Einstein?) came to the
United States during the 45-year interval following World War II when
educational and business opportunities in America exceeded those anywhere else
in the world. But the era when we in the United States could assume top talent
would flock to our shores is drawing rapidly to a close—a fact that has little
to do with the United States, and a whole lot to do with everywhere else.
Every time the light of opportunity has started
to shine anew somewhere in the world, the beacon drawing immigrants to the
United States has shone, in relative terms, somewhat less brightly. Countries
that in previous centuries were dominating economic and political powers—China,
India and Turkey notable among them—are surging forward, regaining some of the
ground they lost during eras of conflict or colonization.
Will the gains made elsewhere in the world come
at the expense of the United States, Europe and Japan? The answer is an
emphatic “that depends.” If we ignore the reasons for and sources of the coming
prosperity—or, if we go even further and cut ourselves off from the major
trends driving global history in our lifetimes—then the citizens of currently
rich countries will become poorer. But the real poverty we experience will be
that of imagination, not of circumstance.
Americans cannot “Win the Future” by re-winning
the past. Bringing routinized factory jobs back is a “win the past” strategy,
because most of those jobs never really went overseas to begin with—they went
to machines. Responding to competitive threats from overseas by investing
narrowly in science, technology, engineering and mathematics (STEM) education
is also a “win the past” strategy because 21st-century innovation is
deeply integrative and interdisciplinary. It incorporates design and an
understanding of human behavior in at least equal measure with core STEM
fields—and also (importantly, a hard fact) because real innovation leadership
in the 21st century can only come if America continues to draw
the best talent from around the world, regardless of how well we develop talent
at home. Shutting our borders to immigrant talent doesn’t even qualify as a
“win the past” strategy, since it’s an approach that never created prosperity,
and never will.
So what constitutes a genuine “win the future”
strategy? Simple: Since we can’t beat global prosperity, let’s join it. To do
so we need to repurpose our institutions to make the most of the abundant
opportunities that exist in the global age of entrepreneurship. As individuals
and as a nation, we need to be relentless in finding new ways to connect,
create, contribute and collaborate with those building value for themselves and
their communities elsewhere in the world.
At every stage of institutional repurposing,
incumbent interests will resist. Such is the nature and function of incumbent
power. Too abstract? Think about what happens in the United States anytime
momentum builds to change the status quo in health care, energy, education or
finance. Can you picture the ads? Do they look like reasoned public discourse
or frantic pushback by threatened incumbents? Q.E.D.
That can’t stop us. Ours is an era of enormous,
indeed unprecedented, potential. Human well-being—the fundamental combination
of capacities and opportunities that bounds each person’s experience of
life—will likely grow more over the next quarter century than it has at any
other time in human history. In comparison with the magnitude of these changes,
the political discourse in the United States isn’t just polarized—it is
positively, and unacceptably, Lilliputian.
We’re not alone in that respect. In every corner
of the world, from Abu Dhabi to Zurich, just as in Washington and Wall Street,
yesterday’s power-brokers can be counted on to paint opportunity as threat and
dig in their heels against change. As a consequence, the work of making the
most of a growing humanity’s moment will fall to those hundreds, thousands or
millions of entrepreneurs and innovators who dedicate themselves to discovering
pathways to progress in the decade to come, just as others did in decades
past.
Progress? You ain’t seen nothin’ yet.
1. The usefulness of per capita income as an
indicator of human well-being is a subject of longstanding debate; on the
macroeconomic level, the discussion extends to Gross Domestic Product (GDP).
While a summary of this debate would require multiple dissertations (not a
footnote), it is safe to say that for levels of income below roughly $10,000,
per capita income is a fairly good proxy for well-being. For more on this
topic, see the report of the Commission on the Measurement of Economic
Performance and Social Progress, convened by the President of France and
co-chaired by Joseph Stiglitz and Amartya Sen.
2. For a further development of this concept, see
Duncan Foley, “Stabilization of Human Population through Economic Increasing
Returns”, Economics Letters (September 2000).
3. Kremer, “Population Growth and Technical
Change, One Million B.C. to 1990”, Quarterly Journal of Economics (August
1993).
4. Simon, The Shape of Automation for Men and
Management (Harper & Row, 1965).