Wednesday, January 4, 2012

Austerity and Anarchy


On Austerity, Unrest, And Quantifying Chaos
Politically speaking, austerity is a challenge. While we might expect that governments imposing spending cuts on their voting public would face electability problems, a recent paper from the Centre for Economic Policy Research (CEPR) finds no empirical evidence for this: a budget-cutting government is no less likely to be re-elected than a spend-heavy one. What the paper does identify as a factor in delaying austerity, however, is far more worrisome: fear of instability and unrest. The authors found a very clear relationship between CHAOS (their variable name for demonstrations, riots, strikes and worse) and expenditure cuts. As JPMorgan notes, austerity sounds straightforward as a policy, until the consequences bite. It remains unclear that the road Europe is taking is less costly in the long run, in economic, political and social terms. The history of Europe over the last 100 years shows that austerity can have severe consequences, and, perhaps most notably, that the one independent variable that did result in more unrest was a higher level of government debt in the first place.
The passage through time of the authors' CHAOS factor shows that we have enjoyed relative stability since 1994. But given the ongoing austerity being forced (rightfully) upon the most indebted nations in Europe, and with technocrats now roaming freely, the issue is perhaps no longer one of electability and much more one of basic stability and of the empirical link between austerity and anarchy.
JPMorgan recently noted this study:
The authors tested to see if results varied with ethnic fragmentation, inflation, penetration of mass media and the quality of government institutions; they did not. Results are also consistent across time, covering interwar and postwar periods.
The independent variable that did result in more unrest: higher levels of government debt in the first place.
Compounding the problem is the way some decisions are being taken, which may reinforce perceptions of a "democratic deficit" at the EU level, an issue highlighted by Germany’s Constitutional Court. It remains to be seen if Europe can sustain cohesion around its path of most resistance. One sign of rising tensions: the following (staggering) comment by the head of the Bank of France: "A downgrade does not appear to me to be justified when considering economic fundamentals," Noyer said. "Otherwise, they should start by downgrading Britain which has more deficits, as much debt, more inflation, less growth than us and where credit is slumping." At a time of increasing budgetary pressures and declining growth, I suppose there are limits to European solidarity.
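To make the idea of "quantifying chaos" concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of exercise the paper performs: count unrest incidents per country-year and relate them to the size of expenditure cuts. Every number and variable name below is invented for illustration; none of it is the paper's data or its estimates.

# Hypothetical illustration only: the paper's CHAOS index sums incidents of
# unrest (demonstrations, riots, strikes, and worse) per country-year and
# relates them to the depth of budget cuts. These values are made up.
import numpy as np

cuts  = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])  # cut, % of GDP
chaos = np.array([1,   1,   2,   2,   3,   4,   6,   8])    # incident counts

# A simple least-squares fit: a positive slope would mirror the finding that
# deeper expenditure cuts coincide with more unrest.
slope, intercept = np.polyfit(cuts, chaos, 1)
print(f"incidents ~= {intercept:.2f} + {slope:.2f} * (cut, % of GDP)")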
The full paper can be found below:

Some empires really were worse than others.


BY JOSHUA E. KEATING 
It's hard to find countries that are nostalgic for colonialism, at least among those that were on the receiving end of it. At the same time, it's hard to escape the impression that some countries had a worse time of it than others. The former British Empire includes rising power India and Africa's most stable and prosperous countries -- Botswana, Ghana, and South Africa. France's former dependencies in Africa and Southeast Asia, from Ivory Coast to Cambodia, don't seem to have fared nearly as well in the post-colonial era.
Some, such as historian Niall Ferguson, have even argued for the positive legacy of the British Empire, seeing the Pax Britannica as an era not merely of imperialist expansion but also of "spreading liberal values in terms of free markets, the rule of law, and ultimately representative government."
But beyond anecdotal observations, is there any evidence that the type of colonialism determined the way former colonies turned out? Were the bloody post-independence civil wars of Angola and Mozambique, for example, a legacy of Portuguese colonialism, or were competition for resources and the Cold War more to blame? How would the recent histories of Algeria and Vietnam have differed if France had let them go peacefully?
Stanford University Ph.D. candidate Alexander Lee, with Professor Kenneth Schultz, looked at Cameroon, a rare country that includes large regions colonized by separate powers, Britain and France, and then united after independence in 1960. The only country with a similar history is Somalia, where it is understandably difficult to get economic data after more than two decades of war.
The results? There may be something to that British-legacy theory: Lee and Schultz found that formerly British rural areas of Cameroon today boast higher levels of wealth and better public services than those in the formerly French territory. To take one example, nearly 40 percent of rural households in the British provinces examined have access to piped water, while less than 15 percent on the French side do. This could suggest that the British colonial system, which had what Lee calls "greater levels of indirect rule and the granting of local-level autonomy to chiefs," was more beneficial -- or at least less damaging -- than the more hands-on French model, which involved a "greater level of forced labor."
It's by no means clear, however, that any brand of colonialism was good for the colonized. Harvard University economist Lakshmi Iyer has found that in India, regions that were under direct British rule have lower levels of public services today compared with those where local leaders retained some level of power; these "native states" include today's high-tech business hubs of Hyderabad and Jaipur. As for Latin America, a forthcoming paper by economists Miriam Bruhn of the World Bank and Francisco Gallego of Chile's Pontificia Universidad Católica found that areas where colonialism depended heavily on labor exploitation have lower levels of economic development today than places where colonists were less closely involved. (In this context, the grim legacy of Belgium's King Leopold II -- who ran his vast territories in today's Democratic Republic of the Congo as a brutal personal plantation -- doesn't seem so surprising.)
In the end, to paraphrase Henry David Thoreau, it seems the best colonist was the one who colonized the least. 

Intelligence cannot conquer stupidity

Think Again: Intelligence
I served in the CIA for 28 years and I can tell you: America's screw-ups come from bad leaders, not lousy spies.
BY PAUL R. PILLAR 
"Presidents Make Decisions Based on Intelligence."
From George W. Bush trumpeting WMD reports about Iraq to this year's Republican presidential candidates vowing to set policy in Afghanistan based on the dictates of the intelligence community, Americans often get the sense that their leaders' hands are guided abroad by their all-knowing spying apparatus. After all, the United States spends about $80 billion on intelligence each year, which provides a flood of important guidance every week on matters ranging from hunting terrorists to countering China's growing military capabilities. This analysis informs policymakers' day-to-day decision-making and sometimes gets them to look more closely at problems, such as the rising threat from al Qaeda in the late 1990s, than they otherwise would.
On major foreign-policy decisions, however, whether going to war or broadly rethinking U.S. strategy in the Arab world (as President Barack Obama is likely doing now), intelligence is not the decisive factor. The influences that really matter are the ones that leaders bring with them into office: their own strategic sense, the lessons they have drawn from history or personal experience, the imperatives of domestic politics, and their own neuroses. A memo or briefing emanating from some unfamiliar corner of the bureaucracy hardly stands a chance.
Besides, one should never underestimate the influence of conventional wisdom. President Lyndon B. Johnson and his inner circle received the intelligence community's gloomy assessments of South Vietnam's ability to stand on its own feet, as well as comparably pessimistic reports from U.S. military leaders on the likely cost and time commitment of a U.S. military effort there. But they lost out to the domino theory -- the idea that if South Vietnam fell to communism, a succession of other countries in the developing world would as well. President Harry Truman decided to intervene in Korea based on the lessons of the past: the Allies' failure to stand up to the Axis powers before World War II and the West's postwar success in firmly responding to communist aggression in Greece and Berlin. President Richard Nixon's historic opening to China was shaped by his brooding in the political wilderness about great-power strategy and his place in it. The Obama administration's recent drumbeating about Iran is largely a function of domestic politics. Advice from Langley, for better or worse, had little to do with any of this.
Intelligence may have figured prominently in Bush's selling of the invasion of Iraq, but it played almost no role in the decision itself. If the intelligence community's assessments pointed to any course of action, it was avoiding a war, not launching one.
When U.S. Secretary of State Colin Powell went before the United Nations in February 2003 to make the case for an invasion of Iraq, he argued, "Saddam Hussein and his regime are concealing their efforts to produce more weapons of mass destruction," an observation he said was "based on solid intelligence." But in a candid interview four months later, Deputy Defense Secretary Paul Wolfowitz acknowledged that weapons of mass destruction were simply "the one issue that everyone could agree on." The intelligence community was raising no alarms about the subject when the Bush administration came into office; indeed, the 2001 edition of the community's comprehensive statement on worldwide threats did not even mention the possibility of Iraqi nuclear weapons or any stockpiles of chemical or biological weapons. The administration did not request the (ultimately flawed) October 2002 intelligence estimate on Iraqi unconventional weapons programs that was central to the official case for invasion -- Democrats in Congress did, and only six senators and a handful of representatives bothered to look at it before voting on the war, according to staff members who kept custody of the copies. Neither Bush nor Condoleezza Rice, then his national security advisor, read the entire estimate at the time, and in any case the public relations rollout of the war was already under way before the document was written.
Had Bush read the intelligence community's report, he would have seen his administration's case for invasion stood on its head. The intelligence officials concluded that Saddam was unlikely to use any weapons of mass destruction against the United States or give them to terrorists -- unless the United States invaded Iraq and tried to overthrow his regime. The intelligence community did not believe, as the president claimed, that the Iraqi regime was an ally of al Qaeda, and it correctly foresaw any attempt to establish democracy in a post-Saddam Iraq as a hard, messy slog.
In a separate prewar assessment, the intelligence community judged that trying to build a new political system in Iraq would be "long, difficult and probably turbulent," adding that any post-Saddam authority would face a "deeply divided society with a significant chance that domestic groups would engage in violent conflict with each other unless an occupying force prevented them from doing so." Mentions of Iraqis welcoming U.S. soldiers with flowers, or the war paying for itself, were notably absent. Needless to say, none of that made any difference to the White House.
The record of 20th-century U.S. intelligence failures is a familiar one, and mostly indisputable. But whether these failures -- or the successes -- mattered in the big picture is another question.
The CIA predicted both the outbreak and the outcome of the 1967 Six-Day War between Israel and neighboring Arab states, a feat impressive enough that it reportedly won intelligence chief Richard Helms a seat at President Johnson's Tuesday lunch table. Still, top-notch intelligence couldn't help Johnson prevent the war, which produced the basic contours of today's intractable Israeli-Palestinian conflict, and U.S. intelligence completely failed to predict Egypt's surprise attack on Israel six years later. Yet Egypt's nasty surprise in 1973 didn't stop Nixon and Secretary of State Henry Kissinger from then achieving a diplomatic triumph, exploiting the conflict to cement relations with Israel while expanding them with Egypt and the other Arab states -- all at the Soviets' expense.
U.S. intelligence also famously failed to foresee the 1979 Iranian revolution. But it was policymakers' inattention to Iran and sharp disagreements within President Jimmy Carter's administration, not bad intelligence, that kept the United States from making tough decisions before the shah's regime was at death's door. Even after months of disturbances in Iranian cities, the Carter administration -- preoccupied as it was with the Egypt-Israel peace negotiations and the Sandinistas' revolution in Nicaragua -- still had not convened any high-level policy meetings on Iran. "Our decision-making circuits were heavily overloaded," Zbigniew Brzezinski, Carter's national security advisor, later recalled.

An Introduction to the Theory of Money and Credit

The Austrian Theory of Money
By Murray N. Rothbard
The Austrian theory of money virtually begins and ends with Ludwig von Mises's monumental Theory of Money and Credit, published in 1912 (1). Mises's fundamental accomplishment was to take the theory of marginal utility, built up by Austrian economists and other marginalists as the explanation for consumer demand and market price, and apply it to the demand for and the value, or the price, of money. No longer did the theory of money need to be separated from the general economic theory of individual action and utility, of supply, demand, and price; no longer did monetary theory have to suffer isolation in a context of "velocities of circulation," "price levels," and "equations of exchange."
In applying the analysis of supply and demand to money, Mises used the Wicksteedian concept: supply is the total stock of a commodity at any given time; and demand is the total market demand to gain and hold cash balances, built up out of the marginal-utility rankings of units of money on the value scales of individuals on the market. The Wicksteedian concept is particularly appropriate to money for several reasons: first, because the supply of money is either extremely durable in relation to current production, as under the gold standard, or is determined exogenously to the market by government authority; and, second and most important, because money, uniquely among commodities desired and demanded on the market, is acquired not to be consumed, but to be held for later exchange. Demand-to-hold thereby becomes the appropriate concept for analyzing the uniquely broad monetary function of being held as stock for later sale. Mises was also able to explain the demand for cash balances as the resultant of marginal utilities on value scales that are strictly ordinal for each individual. In the course of his analysis Mises built on the insight of his fellow Austrian Franz Cuhel to develop a marginal utility that was strictly ordinal, lexicographic, and purged of all traces of the error of assuming the measurability of utilities.
The relative utilities of money units as against other goods determine each person's demand for cash balances, that is, how much of his income or wealth he will keep in cash balances as against how much he will spend. Applying the law of diminishing (ordinal) marginal utility of money and bearing in mind that money's "use" is to be held for future exchange, Mises arrived implicitly at a falling demand curve for money in relation to the purchasing power of the currency unit. The purchasing power of the money unit, which Mises also termed the "objective exchange-value" of money, was then determined, as in the usual supply-and-demand analysis, by the intersection of the money stock and the demand-for-cash-balances schedule. We can see this visually by putting the purchasing power of the money unit on the y-axis and the quantity of money on the x-axis of the conventional two-dimensional diagram corresponding to the price of any good and its quantity. Mises wrapped up the analysis by pointing out that the total supply of money at any given time is no more or less than the sum of the individual cash balances at that time. No money in a society remains unowned by someone and therefore outside some individual's cash balance.
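As a rough visual aid only (the curves and numbers below are invented and are assumptions, not Mises's or Rothbard's), the diagram just described can be sketched in Python as a vertical money-stock line crossing a downward-sloping demand-to-hold schedule, with their intersection fixing the purchasing power of the money unit:

# Illustrative only: purchasing power of the money unit (PPM) on the y-axis,
# quantity of money on the x-axis. The functional forms are assumptions.
import numpy as np
import matplotlib.pyplot as plt

quantity = np.linspace(50, 300, 200)
demand_ppm = 1000.0 / quantity      # hypothetical demand-to-hold schedule
money_stock = 150                   # hypothetical total stock of money

plt.plot(quantity, demand_ppm, label="demand for cash balances")
plt.axvline(money_stock, linestyle="--", label="money stock (supply)")
plt.xlabel("quantity of money")
plt.ylabel("purchasing power of the money unit")
plt.legend()
plt.show()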
While, for purposes of convenience, Mises's analysis may be expressed in the usual supply-and-demand diagram with the purchasing power of the money unit serving as the price of money, relying solely on such a simplified diagram falsifies the theory. For, as Mises pointed out in a brilliant analysis whose lessons have still not been absorbed in the mainstream of economic theory, the purchasing power of the money unit is not simply the inverse of the so-called price level of goods and services. In describing the advantages of money as a general medium of exchange and how such a general medium arose on the market, Mises pointed out that the currency unit serves as unit of account and as a common denominator of all other prices, but that the money commodity itself is still in a state of barter with all other goods and services. Thus, in the premoney state of barter, there is no unitary "price of eggs"; a unit of eggs (say, one dozen) will have many different "prices": the "butter" price in terms of pounds of butter, the "hat" price in terms of hats, the "horse" price in terms of horses, and so on. Every good and service will have an almost infinite array of prices in terms of every other good and service. After one commodity, say gold, is chosen to be the medium for all exchanges, every other good except gold will enjoy a unitary price, so that we know that the price of eggs is one dollar a dozen; the price of a hat is ten dollars, and so on. But while every good and service except gold now has a single price in terms of money, money itself has a virtually infinite array of individual prices in terms of every other good and service. To put it another way, the price of any good is the same thing as its purchasing power in terms of other goods and services. Under barter, if the price of a dozen eggs is two pounds of butter, the purchasing power of a dozen eggs is, inter alia, two pounds of butter. The purchasing power of a dozen eggs will also be one-tenth of a hat, and so on. Conversely, the purchasing power of butter is its price in terms of eggs; in this case the purchasing power of a pound of butter is a half-dozen eggs. After the arrival of money, the purchasing power of a dozen eggs is the same as its money price, in our example, one dollar. The purchasing power of a pound of butter will be 50 cents, of a hat ten dollars, and so forth.
What, then, is the purchasing power, or the price, of a dollar? It will be a vast array of all the goods and services that can be purchased for a dollar, that is, of all the goods and services in the economy. In our example, we would say that the purchasing power of a dollar equals one dozen eggs, or two pounds of butter, or one-tenth of a hat, and so on, for the entire economy. In short, the price, or purchasing power, of the money unit will be an array of the quantities of alternative goods and services that can be purchased for a dollar. Since the array is heterogeneous and specific, it cannot be summed up in some unitary price-level figure.
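A tiny sketch of the same point, using only the article's own example prices (eggs at one dollar a dozen, butter at 50 cents a pound, hats at ten dollars each): the "price" of a dollar is the whole array of quantities it buys, and that heterogeneous array does not reduce to a single number.

# The money prices below are the article's own examples; the code merely
# inverts them to show the dollar's purchasing power as an array of goods.
money_prices = {"dozen eggs": 1.00, "pound of butter": 0.50, "hat": 10.00}

ppm_array = {good: 1.0 / price for good, price in money_prices.items()}

for good, qty in ppm_array.items():
    print(f"one dollar buys {qty:.2f} of: {good}")
# Summing this array into a single "price level" figure would discard the
# very information the array contains.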
The fallacy of the price-level concept is further shown by Mises's analysis of precisely how prices rise (that is, the purchasing power of money falls) in response to an increase in the quantity of money (assuming, of course, that the individual demand schedules for cash balances or, more generally, individual value scales remain constant). In contrast to the hermetic neoclassical separation of money and price levels from the relative prices of individual goods and services, Mises showed that an increased supply of money impinges differently upon different spheres of the market and thereby ineluctably changes relative prices.
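A toy numerical illustration of that last claim, with entirely made-up figures and a deliberately crude adjustment rule (this is not Mises's model, only a sketch of the direction of the effect): if new money is spent disproportionately in one sector first, that sector's prices rise relative to others, so the injection alters relative prices rather than lifting a uniform "price level."

# Toy sketch with invented numbers: new money enters via particular first
# spenders, so prices rise unevenly and relative prices change.
initial_prices = {"capital goods": 100.0, "consumer goods": 100.0}
spending_share = {"capital goods": 0.8, "consumer goods": 0.2}  # where new money goes first
injection = 0.10  # new money as a share of the existing stock (assumed)

# Crude rule: each sector's price rises with the share of new money spent
# there, normalized so an even split (0.5/0.5) would lift all prices by 10%.
new_prices = {
    good: price * (1 + injection * spending_share[good] / 0.5)
    for good, price in initial_prices.items()
}
print(new_prices)
print("relative price (capital/consumer):",
      round(new_prices["capital goods"] / new_prices["consumer goods"], 3))
# The ratio is no longer 1.0: the injection changed relative prices, not
# just a uniform price level.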

Another Tale of Two Cities

Weimar and Washington
By Philip Giraldi 
Mark Twain is credited with saying that “History doesn’t repeat itself, but it rhymes.” Today’s United States is often compared to other historic nations, whether at their prime or about to decline and fall depending on one’s own political perspective. Neoconservatives frequently eulogize Washington as a new Rome, promising a worldwide empire without end carried on the back of a Pentagon bristling with advanced weaponry. Other observers also cite Rome but are rather less sanguine, recalling how in the 5th century the empire failed dramatically and fell to barbarian hordes. Still others note the fate of the British Empire, which came apart in the wake of the Second World War, or the Soviets, whose collapse was brought about by 50 years of unsustainable military spending.
But the historical analogy that appears to be most apposite for post-9/11 Washington is that of the Weimar Republic. To be sure, any suggestion that the United States might be following the same course as Germany in the years that led to Nazism must be pursued with caution because few Americans want to believe that the descent into such extremism is even possible in the world’s most venerable constitutional republic. But consider the following: both the United States and Weimar Germany had constitutions in which checks and balances were integrated to maintain a multi-party system, the rule of law, and individual liberties. Both countries were on the receiving end of acts of terrorism that produced a dramatic and violent reaction against the presumed perpetrators of the crimes, so both quickly adopted legislation that abridged many constitutional rights and empowered the head of state to react decisively to further threats. The media fell in line, concerned that criticism would be unpatriotic.
Both the U.S. and Germany possessed politically powerful military-industrial complexes that had a vested interest in encouraging a militarized response to the threats and highly polarized internal politics that enabled politicians to obtain advantage by exploiting national security concerns. Both countries experienced severe financial crises and printed fiat currency to pay the bills, and both had jurists and political supporters who argued that in time of crisis the head of state must be granted special executive authority that transcends the limits placed by the constitution.
The Weimar Republic, which replaced rule by the German emperor in the aftermath of World War I, was a liberal democracy in the 19th-century sense, which means it had a constitution that guaranteed individual and group rights, multi-party systems, and free elections at regular intervals. It took its name from the city of Weimar, where the constitution was drawn up in a national assembly convened in 1919. From the start, Weimar was plagued by a failure to create a sustainable political culture because of the high level of polarization and violence instigated by both the major and fringe parties, even though the relatively moderate Social Democrats were normally dominant.
Adolf Hitler became German chancellor in January 1933. The chancellor was the head of government, but the head of state was President and Field Marshal Paul von Hindenburg. Hindenburg was a hero of the First World War, and he despised the dangerous parvenu Hitler but foolishly thought he could control him. The National Socialist Party was, however, still a minority party in parliament with 33% of the popular vote when Hitler took charge, holding only three out of 11 cabinet positions. Strong socialist, Catholic, and communist parties actively contested the Nazis’ agenda. The media reflected the political divisions, with many papers opposing Hitler and his government.
Hitler benefited from the political paralysis of Weimar, which had forced his Reich chancellor predecessors to rule by presidential decree to bypass the logjam in parliament, but he could not actually legislate in that fashion and did not have a free ride. There was considerable resistance to his policies. All of that changed, however, when the seat of parliament in Berlin, the Reichstag, was burned down on Feb. 27, 1933. It was an act of terrorism that shocked the nation, and it was eventually attributed to an addled Dutch communist named Marinus van der Lubbe, though it was almost certainly carried out by the Nazis themselves. Hitler convinced President Hindenburg to sign a “Reichstag Fire Decree” on the following day, canceling the constitutional guarantees of habeas corpus and freedom of the press, the freedom to organize and assemble, and the privacy of communications. It authorized police search and seizure without any judicial warrant. It was no coincidence that the fire took place a week before parliamentary elections in which the Nazis, who beat and otherwise intimidated opponents and “monitored” the polling stations, won nearly 44% of the votes. The opposition, including the technically illegal communists, took 42% and Hitler was denied his majority, but he arrested socialist opponents, barred the communists, and was eventually able to form a government with his parliamentary allies.

Shoes for everyone


Would the Poor Go Barefoot with a Private Shoe Industry?
By Stephen Davies
It is said that while we may rely on private initiative to supply “nonessentials,” some things are so important to a decent life that we cannot trust the vagaries of the competitive market. Some people would not get the vital product or service. The only solution, supposedly, is government provision to all, often free of charge. The problems with this argument, as well as the great benefits of a capitalist economy, are shown by examining the shoe industry.
Most would agree that shoes are essential to a comfortable or decent existence. Today even the poorest have shoes, and most people of modest means have several pairs. Shoes are available in an enormous variety of types, styles, and colors, at modest prices. It was not always so. In America just over 150 years ago, shoes were made locally, on an individual basis, by skilled craftsmen. This may seem idyllic, but it was not. They were extremely expensive in real terms, so much so that they could even be included in a will. Most people had only one pair that would be made to last for years. The poor had no shoes; indeed, being without shoes was one of the classic marks of poverty.
Things began to change in 1848 with the invention of the first shoe-sewing machine, and shoemaking moved from the home and small workshop to factories. However, making shoes was complicated and difficult to mechanize. In particular, the process of “lasting,” by which leather was molded to fit a model foot, proved a great challenge. Moreover, the capital cost of the new machinery was a barrier for many small shoemaking firms.
Two men were to transform this situation in the United States and subsequently elsewhere. The first was Jan Matzeliger, born in 1852, an immigrant to the United States from Dutch Guiana (now Suriname), and the son of a Dutch sea captain and a slave woman. While working in a shoe factory in Massachusetts, Matzeliger devised a method of mechanizing the lasting process. He perfected it after years of work and great expense, and obtained capital to create a production model from two local investors, Charles H. Delnow and Melville S. Nicholls. Matzeliger got a patent in 1883. His machine cut the cost of producing a pair of shoes in half. A hand laster could produce no more than 50 pairs a day. Using his machine, one could produce up to 700 pairs. Matzeliger and his partners set up the Consolidated Lasting Machine Corporation, in association with two new investors, George A. Brown and the second main figure in our story, Sidney W. Winslow. Matzeliger sold his patent rights to the newly formed corporation in exchange for stock, which made him a wealthy man. He died from tuberculosis in 1889.
Winslow was a business genius. The owner of a small shoe factory, he transformed the industry by a crucial business innovation. In 1899 he engineered a merger of the three main shoemaking-machinery companies to form the United Shoe Machinery Corporation (USMC). Instead of selling its machines, the USMC leased them, which meant that shoe manufacturers no longer bore the capital cost, including depreciation, of their machinery. USMC also relieved them of much of the maintenance cost.
The combination of technical invention and business innovation transformed shoemaking. The cost of shoes fell to a fraction of what it had been, while the wages of workers more than doubled by 1905. Thanks to the ease with which producers could obtain the machinery, the industry became very competitive, which encouraged innovation and kept down costs. This led to the situation we enjoy today where even the poorest have shoes and the variety constantly increases. When leasing was applied outside the United States, often through arrangements with the USMC, the industry was transformed there also.
Let us suppose now that shoes were supplied by government. We have much evidence of what the result would be. Everyone would have shoes, but the quality would be poor. There would be almost no variety (except of the Army kind—two sizes: too large and too small) and certainly no “fun” shoes. The cost would be high, and there might even be rationing. If some private supply were allowed, we would have a few private firms providing high-quality shoes at exorbitant cost to the rich and the ruling elite.
Privatize Shoe Production?
Anyone suggesting that perhaps private enterprise should produce shoes more widely would be met with the indignant response: “What! Do you want the poor to go without shoes?” This, of course, is precisely the situation we face with many services provided predominantly or exclusively by government, notably education. The point is that once a product is supplied by government, we find it hard to imagine that it could be provided in any other way without disastrous results. The assertion that a product is essential is supposed to end the argument.
The story of the U.S. shoe-machinery industry also highlights several other points. One is the critical part played in history by productive and creative individuals whose names are not remembered or lauded in the way that those of monarchs, politicians, and generals are. Sidney Winslow did more to benefit millions of people than many “public figures,” yet is almost forgotten. Another is the way a market economy undercuts prejudice. As a black man, Jan Matzeliger faced much prejudice, particularly in his social and religious life. But in the business world his color did not matter, and he had no trouble finding investors. Only his talent and application mattered.
Finally, the story of the USMC shows the bad effects of misguided public policy. An enormously successful business, worth over a billion dollars by 1960 and a model employer, United Shoe was attacked by the Department of Justice in a famous antitrust case, was broken up in 1968, and today no longer exists. (Ironically, the leasing policy was targeted as a tool of USMC’s alleged monopoly practices.) The U.S. shoe manufacturing industry has also mostly vanished.
So when you put on your shoes or go to buy a pair, be thankful and remember Jan Matzeliger and Sidney Winslow. Even more, be thankful that this essential product is not provided by government and imagine what services provided by the government could be like if the contemporary equivalents of those two men were let loose on them.

Europe is as full of bad ideas as it is of bad debts.


How Bad Ideas Worsen Europe’s Debt Meltdown
By John H. Cochrane 
Conventional wisdom says that sovereign defaults mean the end of the euro: If Greece defaults it has to leave the single currency; German taxpayers have to bail out southern governments to save the union.
This is nonsense. U.S. states and local governments have defaulted on dollar debts, just as companies default. A currency is simply a unit of value, as meters are units of length. If the Greeks had skimped on the olive oil in a liter bottle, that wouldn’t threaten the metric system.
Bailouts are the real threat to the euro. The European Central Bank has been buying Greek, Italian, Portuguese and Spanish debt. It has been lending money to banks that, in turn, buy the debt. There is strong pressure for the ECB to buy or guarantee more. When the debt finally defaults, either the rest of Europe will have to raise trillions of euros in fresh taxes to replenish the central bank, or the euro will inflate away.
Leaving the euro would also be a disaster for Greece, Italy and the others. Reverting to national currencies in a debt crisis means expropriating savings, commerce-destroying capital controls, spiraling inflation and growth-killing isolation. And getting out won’t help these countries avoid default, because their debt promises euros, not drachmas or lira.
Perils of Devaluation
Defenders think that devaluing would fool workers into a bout of “competitiveness,” as if people wouldn’t realize they were being paid in Monopoly money. If devaluing the currency made countries competitive, Zimbabwe would be the richest country on Earth. No Chicago voter would want the governor of Illinois to be able to devalue his way out of his state’s budget and economic troubles. Why do economists think Greek politicians are so much wiser?
The latest plan calls for Europe to be tougher in enforcing deficit rules that are similar to the ones that they blithely ignored for 10 years. Sure, a directive from Brussels is really going to get the Greeks to shape up. Imagine how well it would work if the International Monetary Fund or the United Nations tried to veto U.S. budget deficits. (That is, if our Congress passed budgets to begin with.) This plan is mostly a way to let the ECB save face and buy up bad debt with freshly printed euros.
More fiscal union hurts the euro. Think of Poland or Slovakia. Using euros was once a no-brainer: It made sense to use the same currency as all the other small countries around them, just as Illinois wants to use the same currency as Indiana.
Now, it’s not so clear: If using this currency means signing up to bail out Greece and Italy, then maybe adopting the euro isn’t such a good idea. A common currency without a fiscal union could have universal appeal. A currency union with a bailout-based fiscal union will remain a small affair.
European leaders think their job is to stop “contagion,” to “calm markets.” They blame “speculation” for their troubles. They keep looking for the Big Announcement that will soothe markets into rolling over another few hundred billion euros of debt. Alas, the problem is reality, not psychology, and governments are poor psychologists. You just can’t fill a trillion-euro hole with psychology.
President Nicolas Sarkozy of France said Greece is like Lehman Brothers Holdings Inc. and its collapse would bring down the financial system. Greece isn’t Lehman. It doesn’t have trillions of dollars of offsetting derivatives contracts. It isn’t a broker-dealer, whose failure would freeze all sorts of assets. Its creditors don’t have the legal right to seize assets owed to counterparties. Greece is just a plain-vanilla sovereign borrower, like those that have been defaulting since Edward III stiffed the Peruzzi bank in the 1340s.
Banks’ Debt
Sovereign default would damage the financial system, however, for the simple reason that Europe has allowed its banks to load up on debt, kept on the books at face value, and treated as riskless and buffered by no capital.
Indebted governments have been pressuring banks to buy more debt, not less. As banks have been increasing capital, they have loaded up even more on “risk-free” sovereign debt, which they can use as collateral for ECB loans. The big ECB “liquidity operation” that took place yesterday will give banks hundreds of billions of euros to increase their sovereign bets. Bank depositors and creditors have figured this out, and are running for the exits.
By stuffing the banks with sovereign debt, European politicians and regulators are making the inevitable default much more financially dangerous. So much for the faith that regulation will keep banks safe.
The euro’s fatal flaw then wasn’t to unite areas with differing levels and types of development under one currency. After all, Mississippi and Manhattan use the same money. Nor was it to deprive governments of the ephemeral pleasures of devaluation. Nor was it to envision a currency union without fiscal union.
Banking misregulation was the euro’s fatal flaw. Sovereign debt, which can always avoid explicit default when countries print money, doesn’t remain risk-free in a currency union. Yet banking regulators and ECB rules continue to pretend otherwise.
So, by artful application of bad ideas, Europe has taken a plain-vanilla sovereign restructuring and turned it into a banking crisis, a currency crisis, a fiscal crisis, and now a political crisis.
When the era of wishful thinking ends, Europe will face a stark choice. It can have a monetary union without sovereign defaults. That option means fiscal union, accepting real German control of Greek and Italian (and maybe French) budgets. Nobody wants that, with good reason.
Or Europe can have a monetary union without fiscal union. That would work well, but it needs to be based on two central ideas: Sovereigns must be able to default just like companies, and banks, including the central bank, must treat sovereign debt just like company debt.
The final option is a breakup, probably after a crisis and inflation.
The euro, like the meter, is a great idea. Throwing it away would be a real and needless tragedy.

Once a Trotskyist, always a Trotskyist


Exporting Revolution and Reaping Blowback
By Patrick J. Buchanan 
Friday’s lead stories in The Washington Post and The Wall Street Journal dealt with what both viewed as a national affront and outrage.
Egyptian soldiers, said the Post, “stormed the offices” of three U.S. “democracy-building organizations … in a dramatic escalation of a crackdown by the military-led government that could imperil its relations with the United States.”
The organizations: Freedom House, the International Republican Institute and the National Democratic Institute.
Cairo contends that $65 million in “pro-democracy” funding that IRI, NDI and Freedom House received for use in Egypt constitutes “illegal foreign funding” to influence their elections.
“A Provocation in Egypt,” raged the Post.
An incensed Freedom House President David Kramer said the raids reveal that Egypt’s military “has no intention of allowing the establishment of genuine democracy.”
Leon Panetta phoned the head of the military regime. With $1.3 billion in U.S. military aid on the line, Field Marshal Mohamed Hussein Tantawi backed down. The raids will stop.
Yet this is not the first time U.S. “pro-democracy” groups have been charged with subverting regimes that fail to toe the Washington line.
In December, Vladimir Putin claimed that hundreds of millions of dollars, mostly from U.S. sources, was funneled into his country to influence the recent election, and that Hillary Clinton’s denunciation of the results was a signal for anti-Putin demonstrators to take to Moscow’s streets.
In December also, a top Chinese official charged U.S. Consul General Stephen Young in Hong Kong with trying to spread disorder. “Wherever (Young) goes, there is trouble and so-called color revolutions,” said the pro-Communist Party Hong Kong newspaper Wen Wei Po.
Beijing, added the Post, has been “jittery following this year’s Arab Spring and calls on the Internet for the Chinese to follow suit with a ‘jasmine revolution.’” The Jasmine Revolution was the uprising that forced Tunisia’s dictator to flee at the outset of the Arab Spring.
Yet one need not be an acolyte of the Egyptian, Chinese or Russian regimes to wonder if, perhaps, based on history, they do not have a point.
Does the United States interfere in the internal affairs of nations to subvert regimes by using NGOs to funnel cash to the opposition to foment uprisings or affect elections? Are we using Cold War methods on countries with which we are not at war — to advance our New World Order?
So it would seem. For, repeatedly, Freedom House, IRI and NDI have been identified as instigators of color-coded revolutions to replace autocrats with pro-American “democrats.”
Ukraine’s Orange Revolution was marked by mass demonstrations in Kiev to overturn the election of a pro-Russian leader and bring about his replacement by a pro-Western politician who sought to move his country into NATO. The Orange Revolution first succeeded, but then failed.
A U.S.-engineered Rose Revolution in 2003 overthrew President Eduard Shevardnadze of Georgia and brought about his replacement by Mikheil Saakashvili, who then invaded South Ossetia, to be expelled by the Russian Army.
Following the assassination of ex-Prime Minister Rafik Hariri, a Cedar Revolution, featuring massive demonstrations in Beirut against Syria, effected the withdrawal of its occupation army from Lebanon.
In Belarus, however, marches on parliament failed to overturn an election that returned Alexander Lukashenko to power.
The Tulip Revolution brought about the overthrow of President Askar Akayev in Kyrgyzstan. But that, too, did not turn out as well as we hoped.
When one considers the long record of U.S. intervention in nations far from our borders, that an ex-chairman of Freedom House is the former CIA Director James Woolsey, that the longtime chairman of IRI is the compulsive interventionist John McCain, who has been trading insults with Putin, and that Kenneth Wollack, president of NDI, was once director of legislative affairs for the Israeli lobby AIPAC, it is hard to believe we are clean as a hound’s tooth of the charges being leveled against us, no matter how suspect the source.
One recalls that, in 1960, when the United States said a weather plane had strayed off course, and Nikita Khrushchev said it was a U.S. spy plane they had shot down, the Butcher of Budapest turned out to be telling the truth.
Indeed, why is the U.S. government funding Freedom House, IRI and NDI, if not to bring about change in countries whose institutions or policies do not conform to our own?
As Leon Trotsky believed in advancing world communist revolution, neocons and democratists believe we have some inherent right to intervene in nations that fail to share our views and values.
But where did we acquire this right?
And if we are intervening in Egypt to bring about the defeat of the Muslim Brotherhood and Salafis, and the Islamists win as they are winning today, what do we expect the blowback to be? Would we want foreigners funneling hundreds of millions of dollars into our election of 2012?
How would Andrew Jackson have reacted if he caught British agents doing here what we do all over the world?