Monday, January 2, 2012

The triumph of hope over experience

Reducing Real Output by Increasing Federal Spending
by Dwight R. Lee*
The belief that by spending more, the federal government can revive the economy by increasing aggregate demand is an example of the triumph of hope over experience. Many people excuse the recent failures of such stimulus spending with the claim that the spending simply wasn't large enough. This demand-side view is oblivious to the supply-side reality that demanding more does no good unless more has been, or will be, produced. The logic of this reality explains why trying to increase aggregate demand through increased federal spending is not the key to stimulating the economy. The problem is not that aggregate demand is unimportant—it is very important. The problem is that increased real aggregate demand is the result, not the cause, of an increasingly productive and prosperous economy.
The historical evidence clearly shows that very little government spending is necessary for growing prosperity. From the founding of the United States until the early 1930s, the federal government's budget averaged only around three percent of the nation's GDP, which was about half the spending of state and local governments. The federal budget was not balanced every year, but revenues and expenditures were closely balanced over the whole time period. Federal spending and budget deficits increased during wars, but the resulting debt was largely paid off with peacetime budget surpluses. For 28 straight years after the Civil War, for example, the federal budget was in surplus, with the Civil War debt greatly reduced, though not completely eliminated, by 1893.
During most of these 28 years, the economy was expanding, unemployment was low, and real wages were increasing and, by the early 1900s, America had become the world's richest nation. There were economic downturns beginning in 1873 and 1893, but the federal government did little to respond to them. The 1893 downturn caused a federal budget deficit, but the deficit was caused almost entirely by decreased tax revenues rather than increased federal spending. The recovery from these downturns occurred in response to market forces, with neither downturn lasting nearly as long as the Great Depression of the 1930s. This shows that while market economies experience occasional recessions, they can recover—and have recovered and continued growing—without the Keynesian prescription of increased government spending and budget deficits.
This does not mean that federal spending was irrelevant to our early economic success. Most of the federal budget in the 19th century went for such things as national defense, infrastructure, law enforcement, and establishing standards on weights and measures. This spending created a setting in which the power of private enterprise and entrepreneurship could produce wealth. The one big exception was post-Civil War veterans' pensions, paid entirely to Union veterans. According to Jeffrey R. Hummel, veterans' pensions "grew from 2 percent of all federal expenditures in 1866 to 29 percent in 1884." But what the government did not do was just as important as what it did. It rarely used its police power to override the decisions that consumers and producers made in response to the information and incentives communicated through markets.
A Shift in Ideology
Few Americans in the 19th century thought that the government could improve the economy by spending more to create jobs. Rather, the prevailing view was that prosperity resulted from people keeping most of their earnings because their investments and spending would lead to the production of goods and services that consumers valued most. And even fewer thought government could increase economic growth by taking money from some and transferring it to others to increase aggregate demand.
However, ideological changes began taking place in the late 19th century with the Populists and, later, the Progressive view that with regulations and transfers, the federal government could improve on unregulated markets by stimulating more economic output and distributing it more "fairly." By the 1930s, this view was sufficiently widespread to give political traction to the idea that more government spending and control over the economy could reverse the economic downturn that became the Great Depression. The result was that federal spending expanded, and its composition changed.
Politicians had always wanted to transfer income from the general public to favored groups (or voting blocs), and now they had an excuse to do so under the guise of stimulating economic growth. This was bad economics, but the changing ideological view had made it good politics. Taking a little more money from everyone to provide transfers (in the form of subsidies, make-work projects, and bailouts) to a relatively few creates costs so dispersed, disguised and delayed that they are hardly noticed. The benefits are less than the costs, but they are concentrated, readily appreciated, and easily taken credit for by politicians. Not surprisingly, federal spending started increasing. It was about four percent of GDP in 1930; 15 percent in 1950; 20.7 percent in 2008; and estimated to be 25 percent in fiscal year 2011. And the bulk of this spending growth has gone to transferring income from those who earned it to those who have sufficient political influence to take it. Unfortunately, transfer payments make the country poorer than it would otherwise be—as do general increases in federal spending, given the current spending levels.
Government Spending Reduces Real Output
The first problem with government spending as a way of stimulating economic growth is the cost of raising a dollar through taxation. James Payne has estimated this cost at $0.65 per dollar in taxes that the federal government receives. This figure includes the excess burden of taxation; the costs taxpayers incur to comply with the federal tax code and to deal with the audits and other enforcement activities by the IRS; the costs taxpayers incur to avoid or reduce their tax payments; and the costs for funding the activities of the IRS and other federal agencies involved in administering and enforcing the tax code. Moreover, those transfers often subsidize wasteful activities—such as growing cotton in the desert, turning corn into ethanol, and producing so-called green energy in politically connected companies—that fail even with massive subsidies. Also, the opportunity for some to confiscate wealth produced by others, and the desire of others to prevent this confiscation, motivates political "rent seeking" (socially wasteful efforts to benefit one's self at the expense of others by influencing political decisions) that dissipates resources that could have been used productively. Transfers also create incentives for people to substitute government-provided income for income earned through productive effort. And because federal transfers, and the many detailed regulations that invariably accompany them, shelter people against the setbacks imposed by market discipline, they prevent or delay the adjustments required for productive economic coordination.
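To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python (an illustration using Payne's $0.65 estimate, not a calculation from the article itself):

# Rough cost of delivering one dollar of federal transfer spending,
# assuming Payne's estimate of $0.65 in collection costs per dollar of revenue.
collection_cost_per_dollar = 0.65   # excess burden + compliance + enforcement costs
revenue_raised = 1.00               # one dollar actually delivered to the Treasury
total_resource_cost = revenue_raised + collection_cost_per_dollar
print(f"Raising ${revenue_raised:.2f} consumes about ${total_resource_cost:.2f} in resources, "
      f"so a transfer must create at least ${total_resource_cost:.2f} of value per dollar spent to break even.")

On that estimate, a transfer program whose benefits are worth less than roughly $1.65 per dollar spent makes the country poorer on net, before any of the other distortions discussed here are counted.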
But couldn't economic productivity be increased by targeting federal spending on hiring the unemployed either directly to work for government or by subsidizing private firms to hire them? Such an approach makes sense only if it produces more value than it costs, and there are several reasons for doubting that it does. First, with the federal government spending well over 20 percent of GDP, and most of this spending reducing economic productivity (spending additional dollars creates less value than it costs), it is unlikely that there are many government jobs left in which additional workers would add to the net productivity of the economy.
Second, even assuming there are government jobs in which the right people could create more value than their opportunity costs, it is very unlikely that political authorities, lacking reliable market prices and wages to guide their decisions, would identify those jobs and match them with the right workers. And even if the information needed to place workers where they would be most productive were available, political incentives would still get in the way: political influence is far more important than economic productivity when officials decide which government jobs to create and how much to pay those who are hired. This political influence is also dominant when private firms are subsidized to reduce unemployment by hiring more workers. Those subsidies are more likely to go to firms in politically favored industries that have been generous campaign contributors. Also, workers hired for federally funded or assisted construction projects are required by the 1931 Davis-Bacon Act to be paid the prevailing union wage, which is invariably higher than the market wage.
Third, hiring the unemployed is not the same as hiring people who are unproductive. Spending time looking for a job in which one's contribution is the greatest is a productive activity. Most of the unemployed could get a job quickly if they were willing to take a low enough salary, but it makes sense to pass up jobs as long as the cost of continued search (including a foregone salary) is expected to be more than offset by finding a more productive job. But when the government provides or subsidizes a low-productivity job that pays Davis-Bacon wages, many will cease their job searches, even though continuing to search is more productive than the government jobs are. And it should be noted that workers typically face less incentive to be productive in government jobs than in private-sector jobs.
Fourth, even if an effort is made to hire primarily unemployed workers, many of those actually hired in response to federal stimulus spending are already employed or would have been hired soon anyway. According to a September 2011 study by the Mercatus Center, only 42.1 percent of those hired by organizations receiving stimulus funds from the 2009 American Recovery and Reinvestment Act (ARRA) were unemployed when hired. The same study also reported that 35 percent of the interviewed firms that were required to pay the Davis-Bacon prevailing wage (which required paying as much as 30 percent more) agreed with the statement that they "would... have been able to hire more workers at lower wages" and another 17 percent were not sure. The result is that fewer workers are hired and less value is created for each dollar of the stimulus funds.
The Impotent Multiplier Effect
Despite all these facts, some argue that government spending to hire the unemployed for completely useless tasks and paying them more than they are worth is good for the economy. Their argument is based on the claim that the workers spend their incomes, which starts a cycle of spending that increases economic growth through a multiplier effect. After all, John Maynard Keynes used the multiplier effect as the basis for stating that increasing government spending to hire the unemployed to "dig holes in the ground" is better than not increasing spending. Furthermore, according to Keynesian theory, the multiplier effect is even stronger when the government spending increases the budget deficit.
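For readers who want the mechanism spelled out, the textbook version of the simple Keynesian spending multiplier can be sketched as follows (a standard illustration with hypothetical numbers, not Lee's own calculation):

# Simple Keynesian spending multiplier: k = 1 / (1 - MPC),
# where MPC is the marginal propensity to consume. Illustrative values only.
def spending_multiplier(mpc):
    return 1.0 / (1.0 - mpc)

delta_g = 100.0  # a hypothetical $100 billion increase in government spending
for mpc in (0.5, 0.75, 0.9):
    delta_y = delta_g * spending_multiplier(mpc)
    print(f"MPC = {mpc}: the simple model predicts output rises by about ${delta_y:.0f} billion")

The argument that follows is that this mechanical arithmetic holds only if taxpayers, savers, and investors fail to respond to the spending and to the debt that finances it.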
Interesting stories can be told with the multiplier effect playing the lead role, and some clearly find these stories compelling. But the economic history of the late 19th century has no place in these stories for an obvious reason. For over a quarter of a century after 1865, except for the recession that began in 1873, economic growth was healthy, and yet the federal government was spending, on average, only about three percent of the GDP and running budget surpluses every year. More recent evidence against the multiplier effect comes from our post-World War II experience. From its wartime peak, in 1944, to 1948, the federal government cut spending by 75 percent. The result was an economic boom, despite Keynesian predictions that spending reductions of this magnitude would result in massive unemployment as millions were released from military duty and war-related civilian jobs. From September 1945 to December 1948, the unemployment rate averaged only 3.5 percent.
The problem with the multiplier story is that people respond sensibly to government policy. They know that someone has to pay for government spending, even if it is financed by debt. More debt today means higher taxes in the future to pay for the mounting interest charges and to repay the principal. Of course, the government can default on at least some of the debt through inflation, but inflation is a tax, and taxes discourage productive activity. So, absent outright default, any benefit people receive from deficit spending not only is temporary, but also will have to be paid back one way or another. As Milton Friedman established, people spend far less out of temporary increases in their income, even increases that do not have to be paid back, than they do out of permanent increases. When people recognize that they will have to pay back the temporary increase, they are unlikely to spend much, if any, of it. Furthermore, large increases in deficit spending create uncertainty about how the debt will be paid back, as well as how government expansion will affect the business climate. Such uncertainty has a negative effect on consumption and investment, with the greater negative effect being on investment.
Although consumer spending is lower because of the recent recession than it otherwise would have been, it is not as sensitive to economic uncertainty as business investment is. Indeed, consumers are spending more today than they were before the recession began. According to the National Income and Product Accounts from the U.S. Department of Commerce, the annual rate of consumer spending was $9.8 trillion in the first quarter of 2007 and $10.68 trillion in the second quarter of 2011. It is investment that has declined sharply. According to a report by Robert Higgs, the annual rate of net business investment dropped from $463 billion in the third quarter of 2007 to $144 billion in the fourth quarter of 2010. So, despite the common view that we have to stimulate consumption to revive the economy, the real problem is to reduce the economic uncertainty that is depressing the investment upon which our future productivity depends.
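A quick calculation using the figures cited above (taking the numbers as reported, not independently verifying them) makes the contrast explicit:

# Percent changes implied by the figures cited in the text (as reported)
consumption_2007q1 = 9.80e12    # annual rate of consumer spending, Q1 2007
consumption_2011q2 = 10.68e12   # annual rate of consumer spending, Q2 2011
investment_2007q3 = 463e9       # annual rate of net business investment, Q3 2007
investment_2010q4 = 144e9       # annual rate of net business investment, Q4 2010

def pct_change(old, new):
    return 100.0 * (new - old) / old

print(f"Consumer spending: {pct_change(consumption_2007q1, consumption_2011q2):+.1f}%")      # roughly +9%
print(f"Net business investment: {pct_change(investment_2007q3, investment_2010q4):+.1f}%")  # roughly -69%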
And this brings us back to the primary reason that federal spending isn't stimulating economic growth by increasing aggregate demand. Effective aggregate demand is increased by productivity, not by a printing press or another round of quantitative easing. No matter how much money is created, or borrowed, to finance yet more federal spending and to hopefully increase aggregate demand, effective aggregate demand is always limited by how much has been, or will be, produced in response to that demand. No matter how much money you have, your demand means nothing without the production of goods and services worth demanding. Just ask a Zimbabwean. Only by increasing productivity can effective aggregate demand be increased, and the unfortunate reality is that increasing federal spending is decreasing both.


Footnotes
1. Jeffrey R. Hummel, Emancipating Slaves, Enslaving Free Men: A History of the American Civil War, Chicago: Open Court, 1996, p. 331.
2. For an excellent discussion of this shift in ideology, and its consequences, see Robert Higgs, Crisis and Leviathan: Critical Episodes in the Growth of American Government (Oxford: Oxford University Press, 1987), particularly Chapters 6-8.
3. The discussion in this section covers some of the same points as a more formal model in Kevin M. Murphy, "Evaluating the Fiscal Stimulus" (January 16, 2009). See http://faculty.chicagobooth.edu/brian.barry/igm/Evaluating_the_fiscal_stimulus.pdf (accessed October 2, 2011).
4. James L. Payne, The Culture of Spending: Why Congress Lives Beyond Our Means (San Francisco: ICS Press, 1991), p. 186.
5. Garett Jones and Daniel M. Rothschild, "Did Stimulus Dollars Hire the Unemployed? Answers to Questions about the American Recovery and Reinvestment Act" (Mercatus Center working paper no. 11-34, September 2011).
6. See John Maynard Keynes, The General Theory of Employment, Interest and Money (New York: Harcourt, Brace and World, 1936), Chapter 16. Of course, Keynes thought it would be even better to put the workers to productive use.
7.  The National Bureau of Economic Research claims that the 1873 depression lasted sixty-five months, but modern economists are skeptical that it lasted that long. Joseph H. Davis, "An Improved Annual Chronology of U.S. Business Cycles since the 1790s," Journal of Economic History 66(1) (March 2006), revises the length of the 1873 depression to no longer than 24 months.
8.  See David R. Henderson, "The U.S. Postwar Miracle," (Mercatus Center, Nov. 2010). http://mercatus.org/publication/us-postwar-miracle.
9. But see Jeffrey R. Hummel, "Why Default on U.S. Treasuries is Likely," Library of Economics and Liberty (Liberty Fund: August 3, 2009) http://www.econlib.org/library/Columns/y2009/Hummeltbills.html for an argument that the federal government has less to gain from inflation as a way of reducing the value of its debt than it did in the past, and that an outright default is likely.
10. Milton Friedman, Theory of the Consumption Function (Princeton: Princeton University Press for the National Bureau of Economic Research, 1957).
11. See Robert Barro, "Are Government Bonds Net Wealth?" Journal of Political Economy, Vol. 82, No. 6 (Nov.-Dec., 1974): 1095-1117.
12.  See Table 2.3.5, Real Personal Consumption Expenditures by Major Type of Product at http://www.bea.gov/iTable/iTable.cfm?ReqID=9&step=1&acrdn=2 (Accessed September 29, 2011)
13. Robert Higgs, "Private Business Net Investment Remains in a Deep Ditch," The Beacon (Oakland: The Independent Institute, February 20, 2011). http://blog.independent.org/2011/02/20/private-business-net-investment-remains-in-a-deep-ditch/
14. In November of 2008, inflation in Zimbabwe hit an estimated rate of over 79 billion percent per month, or an annual inflation rate of over 89 sextillion percent. See http://www.cato.org/zimbabwe. Zimbabweans were impoverished despite, or because of, going shopping with a pocket full of bank notes, each with a face value of 10 million Zimbabwe dollars.

Prophets of doom


The Great Horse-Manure Crisis of 1894
By Stephen Davies
We commonly read or hear reports to the effect that “If trend  X continues, the result will be disaster.” The subject can be almost anything, but the pattern of these stories is identical. These reports take a current trend and extrapolate it into the future as the basis for their gloomy prognostications. The conclusion is, to quote a character from a famous British sitcom, “We’re doomed, I tell you. We’re doomed!” Unless, that is, we mend our ways according to the author’s prescription. This almost invariably involves restrictions on personal liberty.
These prophets of doom rely on one thing—that their audience will not check the record of such predictions. In fact, the history of prophecy is one of failure and oversight. Many predictions (usually of doom) have not come to pass, while other things have happened that nobody foresaw. Even brief research will turn up numerous examples of both, such as the many predictions in the 1930s—about a decade before the baby boom began—that the populations of most Western countries were about to enter a terminal decline. In other cases, people have made predictions that have turned out to be laughably overmodest, such as the nineteenth-century editor’s much-ridiculed forecast that by 1950 every town in America would have a telephone, or Bill Gates’s remark a few years ago that 640 kilobytes of memory is enough for anyone.
The fundamental problem with most predictions of this kind, and particularly the gloomy ones, is that they make a critical, false assumption: that things will go on as they are. This assumption in turn comes from overlooking one of the basic insights of economics: that people respond to incentives. In a system of free exchange, people receive all kinds of signals that lead them to solve problems. The prophets of doom come to their despondent conclusions because in their world, nobody has any kind of creativity or independence of thought—except for themselves of course.
A classic example of this is a problem that was getting steadily worse about a hundred years ago, so much so that it drove most observers to despair. This was the great horse-manure crisis.
Nineteenth-century cities depended on thousands of horses for their daily functioning. All transport, whether of goods or people, was drawn by horses. London in 1900 had 11,000 cabs, all horse-powered. There were also several thousand buses, each of which required 12 horses per day, a total of more than 50,000 horses. In addition, there were countless carts, drays, and wains, all working constantly to deliver the goods needed by the rapidly growing population of what was then the largest city in the world. Similar figures could be produced for any great city of the time.*
The problem of course was that all these horses produced huge amounts of manure. A horse will on average produce between 15 and 35 pounds of manure per day. Consequently, the streets of nineteenth-century cities were covered by horse manure. This in turn attracted huge numbers of flies, and the dried and ground-up manure was blown everywhere. In New York in 1900, the population of 100,000 horses produced 2.5 million pounds of horse manure per day, which all had to be swept up and disposed of. (See Edwin G. Burrows and Mike Wallace, Gotham: A History of New York City to 1898 [New York: Oxford University Press, 1999]).
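The New York figure is consistent with the per-horse rate just quoted; a quick sanity check using the article's own numbers:

# Sanity check on the manure arithmetic (figures from the text)
horses = 100_000
low, high = 15, 35   # pounds of manure per horse per day
print(f"Daily manure: {horses * low:,} to {horses * high:,} lbs; "
      f"the cited 2.5 million lbs implies about 25 lbs per horse per day")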
In 1898 the first international urban-planning conference convened in New York. It was abandoned after three days, instead of the scheduled ten, because none of the delegates could see any solution to the growing crisis posed by urban horses and their output.
The problem did indeed seem intractable. The larger and richer that cities became, the more horses they needed to function. The more horses, the more manure. Writing in the Times of London in 1894, one writer estimated that in 50 years every street in London would be buried under nine feet of manure. Moreover, all these horses had to be stabled, which used up ever-larger areas of increasingly valuable land. And as the number of horses grew, ever-more land had to be devoted to producing hay to feed them (rather than producing food for people), and this had to be brought into cities and distributed—by horse-drawn vehicles. It seemed that urban civilization was doomed.
Crisis Vanished
Of course, urban civilization was not buried in manure. The great crisis vanished when millions of horses were replaced by motor vehicles. This was possible because of the ingenuity of inventors and entrepreneurs such as Gottlieb Daimler and Henry Ford, and a system that gave them the freedom to put their ideas into practice. Even more important, however, was the existence of the price mechanism. The problems described earlier meant that the price of horse-drawn transport rose steadily as the cost of feeding and housing horses increased. This created strong incentives for people to find alternatives.
No doubt in the Paleolithic era there was panic about the growing exhaustion of flint supplies. Somehow the great flint crisis, like the great horse-manure crisis, never came to pass.
The closest modern counterpart to the late nineteenth-century panic about horse manure is agitation about the future course of oil prices. The price of crude oil is rising, partly due to political uncertainty, but primarily because of rapid growth in China and India. This has led to a spate of articles predicting that oil production will soon peak, that prices will rise, and that, given the central part played by oil products in the modern economy, we are facing intractable problems. We’re doomed!
What this misses is that in a competitive market economy, as any resource becomes more costly, human ingenuity will find alternatives.
We should draw two lessons from this. First, human beings, left to their own devices, will usually find solutions to problems, but only if they are allowed to; that is, if they have economic institutions, such as property rights and free exchange, that create the right incentives and give them the freedom to respond. If these are absent or are replaced by political mechanisms, problems will not be solved.
Second, the sheer difficulty of predicting the future, and in particular of foreseeing the outcome of human creativity, is yet another reason for rejecting the planning or controlling of people’s choices. Above all, we should reject the currently fashionable “precautionary principle,” which would forbid the use of any technology until proved absolutely harmless.
Left to themselves, our grandparents solved the great horse-manure problem. If things had been left to the urban planners, they would almost certainly have turned out worse.
*See Joel Tarr and Clay McShane, “The Centrality of the Horse to the Nineteenth Century American City,” in Raymond Mohl, ed., The Making of Urban America (New York: SR Publishers, 1997), pp. 105–30. See also Ralph Turvey, “Work Horses in Victorian London” at www.turvey.demon.co.uk.

The Imperial Hubris

Empire as a way to death
By Joseph R. Stromberg

OVERVIEW
Pericles' Funeral Oration is widely seen as a noble statement of core Western values. Noble, doubtless, but the rest is arguable (Western Civilization having had a bad day or two). Pericles – the Athenian FDR? – saw the Athenian Empire as the great defender of freedom – freedom defined, however, by the Athenian Empire and its "defensive alliance," the Hellenic NATO aka Delian League. The analogy goes further. Athens was democratic and imperialistic – thus refuting Wilsonian Fallacy #1 that "democracies" are always peaceful and kindly. Like the American globocrats and their NATO counterparts after 1989, the Athenians asserted – in the famous dialogue with the Melians – their "right to rule" after the overthrow of the Persians. For the Americans and NATO, the Soviets' fall raised the question first posed by Southern comedian Brother Dave Gardner in the early sixties, "What will the preachers do, when the Devil is saved?" We know what George Herbert Walker Bush did: he found a lesser devil on whose country he dropped the full weight of humane police action and peacekeeping to the tune of hundreds of thousands of dead Iraqis who never posed the slightest threat to Kennebunkport.

THE AMERICAN EMPIRE'S FIRST CENTURY

The American Empire lurched into existence a hundred years ago with the Spanish-American War. President William McKinley quickly learned how to sail under Two Doctrines. The Outer Doctrine – for public consumption – was that American intervention was uniquely philanthropic: the freedom of the poor Cubans and good government for the Filipinos were our only goals. (Things didn't work out that way – but never mind.)

The Inner Doctrine was a vision of prosperity through economic empire. The Open Door Notes staked the claim. Government support for the expansion of favored corporations into world markets became the central theme of 20th century US foreign policy. Where foreign empires, states, or revolutions threatened this goal, US policy makers would risk war to sustain it. In the end, whatever his outward fuss over "freedom of the seas" and Teutonic "barbarities," Woodrow Wilson's drive to involve Americans in the First Euro-Bloodbath had as much to do with possible threats to the Open Door program as with his "idealism."

After Americans repudiated Wilson's war, a series of Republican Presidents pursued the Open Door with less fanfare. It was emphatically not a period of "isolationism" despite the moderation of those in charge. It seemed to Herbert Hoover that the Open Door and the "territorial integrity of China" were not worth a war. His New Deal successors fitted their policy, especially from 1937, to threats to the Open Door while grumbling about Italian and German inroads into Latin American markets. Once the European war broke out in September 1939, Roosevelt worked to intervene as rapidly as possible.

US wartime military and civilian planning reveals the grand scale of the American leadership's postwar ambitions. They thought in terms of US dominance of the "Grand Area" – later the "Free World," and now, the "New World Order." This planning rested on a mercantilist conception of hegemony. The self-named "wise men" of the northeastern political and corporate Establishment were supremely confident of their ability and right to manage the globe. After bombing their opponents flat, they looked forward to an American Century, only to find the Soviet Union blocking their path into very desirable markets and resources.

The Open Door does not explain everything about the origins of the Cold War but it was a major (even obsessive) concern of policy makers in the late 1940s. Whether the Cold War made any sense at all, it did allow the worldwide extension of US power. It gave an ideological and practical framework for the growth of what can only be called an American Empire.

It also gave us dear old NATO. When the Senate debated the treaty in the aftermath of the Berlin Blockade and the Marshall Plan, only a handful of Senators opposed that entangling alliance. Senator Taft said that the pact "will do far more to bring about a third world war than it will ever maintain the peace of the world." This shows how hard it is to foretell things. Taft could not have dreamed that NATO – having achieved its object and having, therefore, no reason to exist – would expand its membership and attack a state which had not attacked a NATO member, any more than he could have imagined the wild ride of the Arkansas traveler.

But much more than NATO was at issue. The Wise Men and their National Security managers wanted colossal mobilization blurring the distinction between peace and war. As some of them admitted in the infamous NSC-68, had there been no Soviet Union, they would still have pursued much the same program. This ambitious program almost ran aground on Congressional opposition to its costs (hard to believe now).

The postconstitutional, Presidential War in Korea saved the planners' bacon. It also continued the military practices and moral theory developed in other conflicts. One General commented, "almost the entire Korean peninsula [is]... a terrible mess. Everything is destroyed.... There were no more targets in Korea." General Curtis LeMay noted, "We burned down just about every city in North and South [!] Korea..... we killed off over a million civilian Koreans and drove several million more from their homes." He was not being critical. I shall pass over the "strategy" and "tactics" of the Viet Nam War.

TOTAL WAR AND POST-COLD WAR ADVENTURISM

An Empire – and by any standard there is an American Empire – which subscribes to a doctrine of Total War ought to make everyone nervous. Somewhere along the line from the Pequot War, Sherman's March to the Sea, the bloody so-called "Philippine Insurrection," and the firebombing of Japan and Germany, US leaders – civilian and military – took up the notion that it is reasonable to make war on an Enemy's entire society. Only a few observers like C. Wright Mills and Richard M. Weaver even questioned the doctrine during the High Cold War.

And, sadly, it all ended. For the planners and managers the Soviet collapse was inconvenient – requiring a new ideological rationale, new enemies, and much retargeting – if they stayed in the Empire business. I leave, unsung, the Gulf War, with that lovely phrase about "making the rubble bounce" as well as the hundreds of thousands of Iraqis who have died since that splendid little war under the "humane" mechanisms of "economic warfare." I only add that this style of warfare fails, in detail, the following useful test: Can we conceive of Robert E. Lee using these weapons or tactics?

EMPIRE AS A WAY OF DEATH: MORAL, INSTITUTIONAL, AND CULTURAL

There are many writers who worry themselves sick about "late capitalism" (whatever that might be). It is more to the point to worry about the pattern of late empire. Here we find an array of interlocking ideological, political, and economic facts paralleling those of comparable periods in other civilizations. One of these facts is irresponsible power centered in bureaucracies that aspire to manage all aspects of human life (here Paul Gottfried's After Liberalism is very useful). At the apex of the would-be Universal State stands the figure of Caesar. Oswald Spengler defined "Caesarism" as "that kind of government which, irrespective of any constitutional formulation that it may have, is in its inward self a return to formlessness.... Real importance centered in the wholly personal power exercised by Caesar" or his representatives.

Having allowed the American President to become an Emperor, who dares now be surprised that an "impeached" Executive can, on his own motion, begin bombing a state with which neither the US nor NATO was "at war" in the name of human rights and universal do-gooding? Perhaps Mr. Arthur Schlesinger, Jr. needs to take a deeper look at the imperial presidency. The sheer contempt shown for all law – Geneva Convention, UN ephemera, NATO Treaty, and, what ought to matter, our Constitution – shows an "arrogance of power" that would stun the present incumbent's former employer, Senator J. William Fulbright (not to mention his History Professor Carroll Quigley). That so few notice or complain is itself part of the late imperial pattern. Empire, with its many "abridgments of classical liberty" (to quote Richard Weaver) is, in its American form, not the personalistic rule of a Great Khan, but is mediated through mega-colossal bureaucracies, which at times can block the President. Precisely because Presidential power is most unhampered in foreign affairs, recent Presidents have aspired to strut upon the world stage while Rome – or at least Los Angeles – burns.

OUTER DOCTRINE, INNER DOCTRINE, IMPERIAL DOCTRINE

In late empire, the empire itself becomes an ideological value. The Empire is necessary, benevolent, and good. While spin-masters may still deploy universalist rhetoric – "Doin' right ain't got no end" – empire is increasingly its own justification. It comes to seem unreasonable that there should be more than one power in the world. This is the classical imperial doctrine. Some writers refer to this pattern as "Asiatic" – a formula that leaves out several important cases.

Where two empires exist, each calls the other "evil" and asserts its claim to sole universal rule, as in the "Cold War" propaganda duel between Justinian and Chosroes (as recounted by George of Pisidias). The full imperial claim, which arises with late empire, entails the following, as summarized by BYU Historian Hugh Nibley: "(1) the monarch rules over all men; (2) it is God who has ordered him to do so and.... even the proudest claims to be the humble instrument of heaven; (3) it is thus his sacred duty and mission in the world to extend his dominion over the whole earth, and all his wars are holy wars; and (4) to resist him is a crime and sacrilege deserving no other fate than extermination." Clearly, there is room only for one such Benefactor and all others should get out of Dodge. Except for the references to God, this outlook undergirds "the act you've known for all these years" and the propaganda pronouncements of this latest frontier war. The "lateness" of our imperial period is suggested by how little attention the public pays to these exercises. They are now normal, even if few acknowledge that there is an American Empire. And yet, as Garet Garrett wrote in 1954, "The idea of imposing universal peace on the world by force is a barbarian fantasy" and the mental state of a realized empire is "a complex of fear and vaunting."

The late "war," "police action," whatever, provides many examples of the imperial hubris. Thus we witnessed the usual demonization of the Enemy Leader and, then, the Enemy People. The mindless reflex that demands "Unconditional Surrender" soon kicked in. Towards the end (of this phase, anyway) Sandy Berger drew up Skinner Boxes for the Serbs, who would be rewarded with less bombing as they withdrew from square A into B and so on. Bombing after an "agreement" damned sure isn't traditional diplomacy – and it may not even be good behaviorism. But, then, Empire means never having to say you're sorry. Or wrong. But "mistakes" happen.

DOCTRINES, IDEOLOGY, AND PRACTICE

During the splendid little Serbo-American War, imperial spokesmen fielded the old Outer Doctrine of Doing Right alongside the new Imperial Style of just issuing orders whose justice is implicit. (Perhaps this is the real "End of History.") The warmakers' practices simply improved on their old ones: hence the new focused terror bombing in which civilian deaths are all "accidental," "unintended," "collateral," etc., and the Wise Guys' Lessons of Viet Nam: no real press coverage, no casualties, no answering back from Congress, etc.

The ideological babble was deafening, as the sixties "Civilian Militarists" gave way to the young Social Militarists. (What are armed forces for? mused Secretary Albright.) It is beyond belief that these uninformed, half-educated eternal youths, helped out by a few leftover ghouls from the Cold War, wish to tell the world how to live. (Already in 1946, Felix Morley called the US "the world's greatest moralizer on the subject of the conduct of other governments.") After the high-tech smashing of Serbia, the US elite's little sermons about "weapons of mass destruction" (and ordinary guns owned by those terrible rednecks) ring a bit more hollow.

Just as World War I was the War of Austrian Succession and World War II the War of British Succession, this "war" can be seen as the War of Soviet Succession (or part of it). This brings us back – like the Freudian return of the repressed – to our old friend the Inner Doctrine: Open Door Empire. As Jude Wanniski points out, NATO's American-run Drang nach Osten has something to do with grabbing political-economic control of all the former Soviet assets in Western Asia. Oil is sometimes mentioned. The old dream of American mercantilist world-overlordship – now misleadingly discussed as "globalization": a mysterious force rising spontaneously out of equally mysterious "late capitalism" – is back. This is why the sober political-economic elites can tolerate the actions of the hippie-bombers. Uncooperative minor states like Serbia that refuse their assigned role must be swept aside. Their actual deeds are beside the point (and similar deeds by others, who do take their orders, go quite unpunished). One wonders if the overgrown, eternally innocent Boy Scouts who are spreading the NATOnic Plague have any idea how dangerous major historical transitions can get. Do they think about World War III? Probably not. Do they think it's clever to poke the wounded but irritable Russian Bear with a stick? Do they yearn for a rerun of the Crimean War? Do they think at all? Who knows? After all, they don't have to think – and that, too, is part of the syndrome of Late Empire.

Sunday, January 1, 2012

A Republic if you can keep it

President Obama Signs Indefinite Detention Bill Into Law
By ACLU
WASHINGTON – President Obama signed the National Defense Authorization Act (NDAA) into law today. The statute contains a sweeping worldwide indefinite detention provision.  While President Obama issued a signing statement saying he had “serious reservations” about the provisions, the statement only applies to how his administration would use the authorities granted by the NDAA, and would not affect how the law is interpreted by subsequent administrations.  The White House had threatened to veto an earlier version of the NDAA, but reversed course shortly before Congress voted on the final bill.

“President Obama's action today is a blight on his legacy because he will forever be known as the president who signed indefinite detention without charge or trial into law,” said Anthony D. Romero, ACLU executive director. “The statute is particularly dangerous because it has no temporal or geographic limitations, and can be used by this and future presidents to militarily detain people captured far from any battlefield.  The ACLU will fight worldwide detention authority wherever we can, be it in court, in Congress, or internationally.”

Under the Bush administration, similar claims of worldwide detention authority were used to hold even a U.S. citizen detained on U.S. soil in military custody, and many in Congress now assert that the NDAA should be used in the same way again. The ACLU believes that any military detention of American citizens or others within the United States is unconstitutional and illegal, including under the NDAA. In addition, the breadth of the NDAA’s detention authority violates international law because it is not limited to people captured in the context of an actual armed conflict as required by the laws of war. 
“We are incredibly disappointed that President Obama signed this new law even though his administration had already claimed overly broad detention authority in court,” said Romero. “Any hope that the Obama administration would roll back the constitutional excesses of George Bush in the war on terror was extinguished today. Thankfully, we have three branches of government, and the final word belongs to the Supreme Court, which has yet to rule on the scope of detention authority. But Congress and the president also have a role to play in cleaning up the mess they have created because no American citizen or anyone else should live in fear of this or any future president misusing the NDAA’s detention authority.”
 The bill also contains provisions making it difficult to transfer suspects out of military detention, which prompted FBI Director Robert Mueller to testify that it could jeopardize criminal investigations.  It also restricts the transfers of cleared detainees from the detention facility at Guantanamo Bay to foreign countries for resettlement or repatriation, making it more difficult to close Guantanamo, as President Obama pledged to do in one of his first acts in office.

Being There


The year when the word ‘progressive’ lost all its meaning
After the events of 2011, radical humanists will have to fight hard to reclaim the p-word.
by Frank Furedi

After the experiences of the past 12 months, it is difficult to give meaning to the idea of a ‘progressive worldview’. Throughout history, progressives came in many shapes and sizes, but whatever their differences might have been, their convictions were similar - they were driven by a positive view of change, innovation and experimentation and by a belief that the world could be a better place tomorrow than today. Despite clashes of opinion over what progress would look like, they assumed that the future could be influenced by political action.

In 2011, the classical ideal of progressivism died, having been displaced by a zombie version that has little to do with the forward-looking, transformative outlook of progressives of the past. The only practical context in which the term progressive is used today is in relation to taxation. Progressive taxation makes sense, of course, because society is entitled to expect greater material contribution from those who earn more than others. But in recent times, progressive taxation has been transformed from a sensible fiscal policy into a naive instrument of social engineering. Historically, the aim of progressives was to realise a positive transformation, whereas today their objective is merely to rearrange the status quo through redistribution.

In recent years, the zombie version of progressivism has become closely linked with the idea of ‘social justice’. Social justice can be defined in many different ways, but in essence it expresses a worldview committed to avoiding uncertainty and risky change through demanding that the state provides us with economic and existential security. From this standpoint, progress is proportional to the expansion of legal and quasi-legal oversight into everyday life. From the perspective of those who demand social justice, the proliferation of ‘rights’ and redistribution of wealth are the main markers of a progressive society.

Paradoxically, the idea of social justice was historically associated with movements that were suspicious of and uncomfortable with progress. The term was coined by the Jesuit Luigi Taparelli in 1840. His aim was to reconstitute theological ideals on a social foundation. In the century that followed, ‘social justice’ was upheld by movements that were fearful of the future and which sought to contain the dynamic towards progress. Probably one of the best known advocates of social justice was Father Charles Edward Coughlin. This remarkable American demagogue and populist xenophobe set up the National Union of Social Justice in 1934. Through his popular radio broadcasts, which regularly attracted audiences of 30 million, he became one of the most influential political figures in the United States. Coughlin praised Hitler and Mussolini’s crusade against communism and denounced President Roosevelt for being in the pocket of Jewish bankers. Here, ‘social justice’ was about condemning crooked financiers and putting forward a narrow, defensive appeal for the redistribution of resources.

Today’s campaigners for social justice bear little resemblance to their ideological ancestors. They’re far more sophisticated and middle class than the followers of Fr Coughlin. But they remain wedded to the idea that the unsettling effects of progress are best contained through state intervention into society. They also maintain the simplistic notion that financiers and bankers are the personification of evil. The current Occupy movement would be horrified by Coughlin’s racist ramblings, yet they would find that some of the ideas expressed in his weekly newspaper, Social Justice, were not a million miles away from their own.

The confusion of social justice with progressivism is symptomatic of today’s implosion of classical political vocabulary. Although this trend transcends left-right political affiliations, its most striking manifestation is in the disintegration of the language of the progressive. Recently, Francis Fukuyama, in his essay ‘The Future of History’, remarked that ‘something strange is going on in the world today’ - which is that despite the intensification of the global crisis of capitalism, anger and frustration have not led to an ‘upsurge in left-wing alternatives’. This ‘lack of left-wing mobilisation’ is down to a ‘failure in the realm of ideas’, he argued.

What 2011 has confirmed is that the way in which the term progressive is used today has little to do with how it was used in the past. The most striking manifestation of this can be seen in the utter estrangement of the left from the idea of progress. The left, classically a movement that was associated with change and progress, has gradually lost its capacity to believe in the future. For most of its existence, the left looked upon the future as a place that would probably be significantly better than the present day. Social change was perceived to be, on balance, a positive thing, and the left tried to harness it towards the realisation of progressive objectives. The present was seen as something which had to be improved upon, reformed or transformed. Today, by contrast, what remains of the left is just as uncomfortable with the future as are other sections of the political class.

Sadly, the confused state of the political lexicon was turned into a virtue in 2011. Political illiteracy came to be celebrated as ‘the new radicalism’. This was the year when commentators extolled the strength of a movement that ‘defies simple characterisations’ - that is, the Occupy movement. Many claimed that the virtue of these occupations is that they refuse to communicate a distinct political message. Instead of serving as a reminder of contemporary disorientation and confusion, political illiteracy was rebranded as a new and subtle form of communication.

2011 was the year when Hal Ashby’s 1979 comedy-drama movie, Being There, provided the model script for political communication. The film follows Chance, a simpleton played by Peter Sellers, whose banal words are interpreted as wise insights springing from a powerful mind. Suddenly, through a series of accidental events, this former gardener becomes a celebrity whose confused musings are held up as a new brand of prophetic insight. Today, ‘being there’ forms the entire basis of the new radical politics. And it is those who question the incoherent ramblings of the characters of ‘Being There 2011’ who are dismissed as hopeless simpletons. ‘Those who deride [Occupy] for its lack of concrete demands simply don’t understand its strategic function’, lectures Gary Younge of the Guardian. Apparently, its strategic function is to ‘create new possibilities’. One can almost hear Chance wowing his audience with inane talk of ‘creating new possibilities’.

The tendency to dismiss clarity of purpose and objectives as old-fashioned and unnecessary represents an acquiescence to confusion and ignorance. It is one thing to lack the political and intellectual resources necessary to formulate a new visionary politics - it is quite another to depict this deficit as a positive thing. When the American political consultant George Lakoff said ‘I think it is a good thing that the Occupy movement is not making specific policy demands’, he gave expression to a zeitgeist that is pleased just to ‘be there’.

But of course, being there is not enough. Public life needs to be refocused around the future, and the reconstitution of progressive politics and ideals is the precondition for making this happen. In the end, what matters are not the words we use to describe ourselves; no, the differences that really matter today are where one stands in relation to the past and the future. Those who are interested in the reconstitution of progressive politics must help to free humanity from its fixation with the present. They need to reacquaint the younger generations with humanity’s history and the lessons of the past, and also adopt a more robust and active orientation towards the future. In 2012, let’s not just pass time being there…

Haven't we been here before?


By Ilargi
It's the sort of question you would expect a child to ask in one of those Grimm Brothers fairy tales, a child that walks so far into the woods that it gets lost, and takes another wrong turn and then another, and the forest feels denser and darker all the time, and it doesn't even run around in circles to return to its trail of breadcrumbs, or it doesn't know, because they've all been eaten by the animals. And then night falls slowly.

That's how I increasingly picture our financial situation. We march forward full of faith and feigned innocence into uncharted territory, telling ourselves we will and must find a way out of this mess, boosted by the high priests of our economic belief systems, the media, economists and politicians.

The children in the fairy tales always escape from the dark in the end, but we're not those children. Getting lost in the woods because you ignore the warnings is in general not an act of bravery, but one of stupidity.

Characters in fairy tales serve to teach their young readers and listeners a lesson about the morals of their societies; these characters don’t perish, they get saved because they timely see the errors of their ways. A morality tale.

But whereas the children in these fairy tales go gently into a good night, we go blindly into a bad one.

Perhaps it's fittingly ironic that this time around the rally came before instead of after the announcement by ECB president Mario Draghi of €489 billion in cheap loans for European banks. It fits right in with all the other things we get totally the wrong way around. About 60% of those loans, by the way, are just regurgitated old stuff.

Looking at what they have come up with in episode 1001 of the bailout drama, and just a brief look will do, there's one conclusion and one only: what they say is not what they think.

The ECB claims that it "hopes" the banks will use the money to purchase peripheral debt, but the ECB knows they won't (and what sort of €489 billion deal depends on "hope" only?). It knows, because the ECB itself, along with other parties, has refused to guarantee that debt.

It may be presented as a good deal, but borrowing at 1% to get a 5% return is not all that attractive when you have a 50% chance of an 80% haircut. Or something along those lines.
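A minimal expected-value sketch, using the stylized numbers above (the exact haircut mechanics are a simplifying assumption of mine, not Ilargi's), shows why the trade is unattractive:

# Stylized expected return on the 1%-for-5% carry trade described above.
# Assumption: in a restructuring, 80% of principal is lost but the 5% coupon is still paid.
borrowed = 100.0
funding_cost = 0.01      # ECB loan rate
bond_yield = 0.05        # peripheral sovereign yield
p_restructure = 0.50     # chance of a haircut
haircut = 0.80           # share of principal lost if it happens

payoff_good = borrowed * (1 + bond_yield)
payoff_bad = borrowed * (1 - haircut) + borrowed * bond_yield
expected_payoff = p_restructure * payoff_bad + (1 - p_restructure) * payoff_good
expected_profit = expected_payoff - borrowed * (1 + funding_cost)
print(f"Expected profit per 100 borrowed: {expected_profit:+.1f}")   # about -36, deeply negative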

The ECB also said they hope banks will use the money to loan out to consumers. Just as big a pile of doo-doo. Banks are shedding assets like they're fleas, because they need reserves. That is a solvency issue. Being able to borrow ever cheaper while handing out ever more doubtful collateral addresses a liquidity issue.

There are a few things that this sort of lending will indeed achieve. It will gobble up bad assets from private banks and transfer them to the risk of the public coffer. Nothing new there. The child just gets deeper into the forest, and the light starts fading. A step by step process perceived as progressing so slowly, it raises no alarm. It's still morally repugnant, but who in charge of this thing has any morals left at all in the first place?

Another effect of those €489 billion is that the divide between the ECB and Germany, in particular its central bank, the Bundesbank, will widen, and substantially so. Which endangers the entire Eurozone project.

Whatever plan Europe comes up with, be it the European Financial Stability Facility or the European Stability Mechanism, or this latest one from the ECB, there are only two countries left to carry the vast majority of the risk and the burden. One of those countries, France, will soon be downgraded. So will its banks. This will lead to a downgrade of the EFSF and, if there's still time, the ESM.

There will at that point be one country left to carry the entire rest on its shoulders. Germany's allies and relatively strong partners, Holland and Finland, are way too small to do any heavy lifting. Moreover, Holland is on the verge of a housing collapse.

The EFSF needs to be funded; it can only spend what it has received. Europe has been unable to agree on expanding the Facility. Which is why the ECB now comes with its loan plan. Which did lead to a market rally, but that rally fizzled as soon as the plan was announced, even though it was at least €100 billion larger than expected.

So France soon will no longer be a net contributor to the EFSF. Which is one of the main reasons the expansion didn't materialize. Hence, it's all Germany's responsibility, and Germany is smart enough to understand it's not strong enough to bear that responsibility.

And then out of left field comes Mario Draghi handing out half a trillion euros in loans to 523 different European banks that on average are just about to draw their last breath, selling off profitable assets because they're all that buyers are interested in, and keeping the lousy ones, which they now can pledge to the ECB, with a huge chunk of the risk involved landing squarely on the shoulders of the German citizenry.

The chance that Berlin will now look a lot more seriously at cutting its losses while it still can has become much bigger with Mr. Draghi's first substantial act as ECB president. It's deceptively simple, really. Germany can't guarantee Greek and Italian and Spanish debt with the risk of France slumping badly waiting in the wings. Not without risking its own wealth, its own coherence as a society, in the process.

Staying in the metaphor of the child lost in the darkening forest (and yes, the Grimm brothers were German), it's like the child, after taking yet another wrong turn, has stumbled upon a big bad wolf.

And though it's already getting almost too dark to see, the last thing the child does notice is that the wolf looks nothing like its sweet old grandmother.

This Will Even Make The Bears Shudder


S&P 500 Falling Below 600? 
By Tomi Kilgore
United-ICAP senior technical analyst Walter Zimmerman says the S&P 500 could rally a little further into January before beginning a “traumatic decline” for the rest of 2012, dragged down by weakness in Europe.
How traumatic? You might want to sit down for this one.
He thinks the index will reach its 2012 peak in the 1293-1311 zone, then start a “sharp and sustained drop” until December. His downside target is around 579.57.
579.57! The index would have to wipe out the March 2009 lows and fall by more than 50% from current levels to reach that target. And the last time the S&P 500 traded below 600 was in the mid-1990s, when the Backstreet Boys burst on the scene and bell-bottom jeans were making a comeback.
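For reference, the implied drop is easy to check from the figures quoted in the piece:

# Decline implied by Zimmerman's target versus the quoted index level
current_level, target = 1268.0, 579.57
decline_pct = 100.0 * (current_level - target) / current_level
print(f"Implied decline: {decline_pct:.1f}%")   # roughly 54%, i.e. more than half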
Zimmerman’s reasoning is that Europe is in even worse shape now than it was at the beginning of the year.
“If the history of debt tells us anything it is that one cannot solve a debt crisis by lending more money to the bankrupt and the insolvent,” Zimmerman says.
He expects 2012's price action will mirror what the S&P 500 did from its October 2007 peak until it bottomed in March 2009.
“The technical patterns suggest that 2012 will be a terrible year for holding stocks. Even if by some miracle the euro zone hangs together, it is already falling into a deep and enduring recession,” says Zimmerman. “We expect this recession will drag down both the USA and China.”
The S&P 500 was recently up 0.2% at 1268.