Thursday, January 5, 2012

The Graduate

Why Should Everyone Else Pay for Other People’s Dumb (and Hedonistic) Career Choices?
by Barry Rubin 
I’ve recently made the acquaintance of a young man who has a problem. He is 28 years old: smart, of good moral character, and willing to work hard at part-time jobs. He does not expect anyone else, including the government, to support him. Yet he is puzzled and increasingly bitter that he cannot make a good living.
What’s his difficulty? It’s not the economy (in this specific case) but the fact that he has a degree in linguistics and is now studying Oriental philosophy at a fine university. His case is not altogether typical, but is immensely revealing.
Here’s the secret: He cannot make a living because the market for people with degrees in linguistics and in Oriental philosophy is limited. He should have known that. Someone should have told him that. The calculation of practicality should have been made. It wasn’t.
As I said, this individual does not want handouts and he has not taken student loans. Many others have. A large proportion of the Occupy Wall Street-and-other-places movement seems to consist of those who have made similar “career” (or non-career) decisions but want others to pay for their pastimes and mistakes.
There are at least three lessons here of the greatest importance.
First, young people should be taught, as the old saying goes, that the world doesn’t owe them a living. Nothing could seem more obvious, yet this has largely been forgotten. This is especially true in the United States, a country whose prosperity was built on understanding this point. Of course, telling them that the world does owe them a living can be rather popular and lead to one’s election to public office.
Despite the rhetoric employed, the current dominant idea in the United States seems to be not so much that the “rich” (and, in practice, the middle class) must pay “their fair share” to those who are starving in rat-infested squatter camps (of whom there aren’t many), but that they must subsidize upper-middle-class people who are non-productive yet live very nice lives, often better lives than those of the hard-working people subsidizing them. Those to be subsidized include people who want cushy, unproductive, useless but prestigious jobs and cannot find them, and people who want such jobs and do find them, working directly or indirectly for the government, supposedly doing good things.
Indeed, the siphoning off of potentially useful citizens who might otherwise engage in economically productive activity (insert lawyer jokes if you wish) into all sorts of made-up and useless jobs is bleeding society. The problem is not the economic elite’s greed but the outsized greed of the “intellectual” class. Why do you think university tuitions have skyrocketed?
Know this for sure: a lot of these latter people (in contrast to the former group) do not work very hard, and their work is of low quality, in large part because they face no serious oversight and their “products” have no real value. In other words, their main achievement each day is to have good conversations over lunch.
Since when have Americans fallen for the idea that government bureaucrats are so useful and productive that the answer to their problems is to have more such people?
Terrorist attack? Create a giant Homeland Security office so people can write each other memos. Improve education or the environment? Raise the budget of the Department of Education or the Environmental Protection Agency.
Being unable to find a job is quite understandable in the current economy. Being unable to find a job because you have made decisions that left you with no qualifications for one, and made no attempt to acquire any, is something else entirely.
Glorifying the kinds of jobs that — at this point in history — make things worse, not better, is suicidal.

Looking for suckers

World’s Biggest Economies Face $7.6 Trillion Bond Tab as Rally Seen Fading
By Keith Jenkins and Anchalee Worrachate 
Governments of the world’s leading economies have more than $7.6 trillion of debt maturing this year, with most facing a rise in borrowing costs.
Led by Japan’s $3 trillion and the U.S.’s $2.8 trillion, the amount coming due for the Group of Seven nations and Brazil, Russia, India and China is up from $7.4 trillion at this time last year, according to data compiled by Bloomberg. Ten-year bond yields will be higher by year-end for at least seven of the countries, forecasts show.
Investors may demand higher compensation to lend to countries that struggle to finance increasing debt burdens as the global economy slows, surveys show. The International Monetary Fund cut its forecast for growth this year to 4 percent from a prior estimate of 4.5 percent as Europe’s debt crisis spreads, the U.S. struggles to reduce a budget deficit exceeding $1 trillion and China’s property market cools.
“The weight of supply may be a concern,” Stuart Thomson, a money manager in Glasgow at Ignis Asset Management Ltd., which oversees $121 billion, said in a Dec. 28 telephone interview. “Rather than the start of the year being the problem, it’s the middle part of the year that becomes the problem. That’s when we see the slowdown in the global economy having its biggest impact.”
Competition for Buyers
The amount needing to be refinanced rises to more than $8 trillion when interest payments are included. Coming after a year in which Standard & Poor’s cut the U.S.’s rating to AA+ from AAA and put 15 European nations on notice for possible downgrades, the competition to find buyers is heating up.
“It is a big number and obviously because many governments are still in a deficit situation the debt continues to accumulate and that’s one of the biggest problems,” Elwin de Groot, an economist at Rabobank Nederland in Utrecht, Netherlands, part of the world’s biggest agricultural lender, said in an interview on Dec. 27.
While most of the world’s biggest debtors had little trouble financing their debt load in 2011, with Bank of America Merrill Lynch’s Global Sovereign Broad Market Plus Index gaining 6.1 percent, the most since 2008, that may change.
Italy auctioned 7 billion euros ($9.14 billion) of debt on Dec. 29, less than the 8.5 billion euros targeted. With an economy sinking into its fourth recession since 2001, Prime Minister Mario Monti’s government must refinance about $428 billion of securities coming due this year, the third-most, with another $70 billion in interest payments, data compiled by Bloomberg show.
Rising Costs
Borrowing costs for G-7 nations will rise as much as 39 percent from 2011, based on forecasts of 10-year government bond yields by economists and strategists surveyed by Bloomberg. China’s 10-year yields may remain little changed, while India’s are projected to fall to 8.02 percent from 8.36 percent. The surveys don’t include estimates for Russia and Brazil.
After Italy, France has the most debt coming due, at $367 billion, followed by Germany at $285 billion. Canada has $221 billion, while Brazil has $169 billion, the U.K. has $165 billion, China has $121 billion and India $57 billion. Russia has the least maturing, at $13 billion.
Rising borrowing costs forced Greece, Portugal and Ireland to seek bailouts from the European Union and IMF. Italy’s 10-year yields exceeded 7 percent last month, a level that preceded the requests for aid from those three nations.
Bad Combination
“The buyer base for peripheral Europe has obviously shrunk at the same time that the supply coming to the market is increasing, which is not a good combination,” said Michael Riddell, a London-based fund manager at M&G Investments, which oversees about $323 billion.
The two biggest debtors, Japan and the U.S., have shown little trouble attracting demand.
Japan benefits by having a surplus in its current account, which is the broadest measure of trade and means that the nation doesn’t need to rely on foreign investors to finance its budget deficits. The U.S. benefits from the dollar’s role as the world’s primary reserve currency.
Japan’s 10-year bond yields, at less than 1 percent, are the second-lowest in the world, after Switzerland, even though its debt is about twice the size of its economy.
The U.S. attracted $3.04 for each dollar of the $2.135 trillion in notes and bonds sold last year, the most since the government began releasing the data in 1992. The U.S. drew an all-time high bid-to-cover ratio of 9.07 for $30 billion of four-week bills it auctioned on Dec. 20 even though they pay zero percent interest.
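A note on the arithmetic: a bid-to-cover ratio is total bids received divided by the amount actually sold, so the auction figures above imply the dollar volume of bids. The following is a minimal Python sketch using only the numbers quoted in this article; the helper function is an illustration of mine, not a Bloomberg or Treasury calculation.

```python
# Bid-to-cover = total bids received / amount sold, so implied bids
# can be backed out from the ratios quoted in the article.
def implied_bids(amount_sold: float, bid_to_cover: float) -> float:
    """Dollar volume of bids implied by an auction's bid-to-cover ratio."""
    return amount_sold * bid_to_cover

# 2011 note/bond sales: $2.135 trillion sold, with $3.04 bid per dollar sold.
notes_and_bonds = implied_bids(2.135e12, 3.04)   # ~$6.5 trillion of bids
# Dec. 20 four-week bill auction: $30 billion sold at a 9.07 bid-to-cover.
four_week_bills = implied_bids(30e9, 9.07)       # ~$272 billion of bids

print(f"Implied bids, 2011 notes/bonds: ${notes_and_bonds / 1e12:.2f} trillion")
print(f"Implied bids, Dec. 20 bills:    ${four_week_bills / 1e9:.0f} billion")
```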
Tougher Year
With yields on 10-year Treasuries below 2 percent, an increasing number of investors see little chance for U.S. bonds to repeat last year’s gains of 9.79 percent. The U.S. pays an average interest rate of about 2.18 percent on its outstanding debt, down from 2.51 percent in 2009, Bloomberg data show.
“Given how well they have done, we don’t think they’re any longer a very good hedge,” Eric Pellicciaro, head of global rates investment at New York-based BlackRock Inc., which manages $1.14 trillion in fixed-income assets, said in a Dec. 16 telephone interview.
The median estimate of 70 economists and strategists is for Treasury 10-year note yields to rise to 2.60 percent by year-end from 1.95 percent at 11:27 a.m. New York time. In Japan, the forecast for the nation’s benchmark note yield is 1.35 percent, while it’s expected to rise to 2.50 percent in Germany, from 1.90 percent today.
Central Banks
Central banks are bolstering demand by either keeping interest rates at record lows or reducing them, and by purchasing bonds in a policy known as quantitative easing.
The Federal Reserve has said it will keep its target rate for overnight loans between banks between zero and 0.25 percent through mid-2013, and is now selling $400 billion of its short-term Treasuries and reinvesting the proceeds into longer-term government debt in a program traders dubbed Operation Twist.
The Bank of Japan has kept its key rate at or below 0.5 percent since 1995, and expanded the asset-purchase program last year to 20 trillion yen ($260 billion). The Bank of England kept its main rate at a record low 0.5 percent last month, and left its asset-buying target at 275 billion pounds ($431 billion).
The European Central Bank reduced its main refinancing rate twice last quarter, to 1 percent from 1.5 percent. It followed those moves by allotting 489 billion euros of three-year loans to euro-region lenders. That exceeded the median estimate of 293 billion euros in a Bloomberg News survey of economists. The central bank will offer a second three-year loan on Feb. 28.
‘Flush With Liquidity’
The money from the ECB may be used by banks to buy government bonds, according to Fabrizio Fiorini, the chief investment officer at Aletti Gestielle SGR SpA in Milan.
“The market is now flush with liquidity after measures taken by central banks, particularly the ECB, and that’s great news for risky assets,” Fiorini said in a telephone interview on Dec. 20. “The market will have no problem taking down supply from countries like Spain and Italy in the first quarter. In fact, they should be able to raise money at lower borrowing costs than what we saw in recent months.”
Italy’s sale last week included 2.5 billion euros of 5 percent bonds due in March 2022, which yielded 6.98 percent. That was down from 7.56 percent at an auction Nov. 29. It sold 9 billion euros of bills on Dec. 28 at a rate of 3.251 percent, compared with 6.504 percent at the previous auction on Nov. 25.
‘Phony War’
Investors should be most worried about the period after the ECB’s second three-year longer-term refinancing operation scheduled in February, according to Ignis’s Thomson.
“The amount of liquidity that has been supplied by central banks, with more to come from the ECB in February, suggests the first couple of months will be a sort of phony war as far as the supply is concerned,” Thomson said.
The ECB has bought about 212 billion euros of government bonds since starting a program in May 2010 to contain borrowing costs for Greece, Portugal and Ireland. It began buying Spanish and Italian debt in August, according to people familiar with the trades, who declined to be identified because they weren’t authorized to speak publicly about the transactions.
“There’s a lot of talk that the ECB might have to give more direct support to the governments,” Frances Hudson, who helps manage about $242 billion as a global strategist at Standard Life Investments in Edinburgh, said in a Dec. 22 telephone interview.
Following is a table of bond and bill redemptions and interest payments in 2012 for the Group of Seven countries, Brazil, China, India and Russia, in dollars, using data calculated by Bloomberg as of Dec. 29:
Country     2012 Bond, Bill Redemptions ($)     Coupon Payments ($)
Japan                 3,000 billion                 117 billion
U.S.                  2,783 billion                 212 billion
Italy                   428 billion                  72 billion
France                  367 billion                  54 billion
Germany                 285 billion                  45 billion
Canada                  221 billion                  14 billion
Brazil                  169 billion                  31 billion
U.K.                    165 billion                  67 billion
China                   121 billion                  41 billion
India                    57 billion                  39 billion
Russia                   13 billion                   9 billion
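As a sanity check, the headline figures reduce to simple addition over this table. Here is a minimal Python sketch (the dictionaries simply transcribe the table above, in $ billions) that reproduces the “more than $7.6 trillion” maturing-debt total and the “more than $8 trillion” figure once coupon payments are included.

```python
# Figures transcribed from the table above, in $ billions.
redemptions = {
    "Japan": 3000, "U.S.": 2783, "Italy": 428, "France": 367,
    "Germany": 285, "Canada": 221, "Brazil": 169, "U.K.": 165,
    "China": 121, "India": 57, "Russia": 13,
}
coupons = {
    "Japan": 117, "U.S.": 212, "Italy": 72, "France": 54,
    "Germany": 45, "Canada": 14, "Brazil": 31, "U.K.": 67,
    "China": 41, "India": 39, "Russia": 9,
}

total_redemptions = sum(redemptions.values())                     # 7,609 billion
total_with_interest = total_redemptions + sum(coupons.values())   # 8,310 billion

print(f"Redemptions due in 2012:   ${total_redemptions / 1000:.2f} trillion")
print(f"Including coupon payments: ${total_with_interest / 1000:.2f} trillion")
```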

Wednesday, January 4, 2012

A necessary reminder


Greed Isn't Just Good—It's Necessary For Capitalism
By WALTER E. WILLIAMS
What human motivation gets the most wonderful things done? It's really a silly question, because the answer is so simple. It turns out that it's human greed that gets the most wonderful things done.
When I say greed, I am not talking about fraud, theft, dishonesty, lobbying for special privileges from government or other forms of despicable behavior. I'm talking about people trying to get as much as they can for themselves.
Let's look at it.
This winter, Texas ranchers may have to fight the cold of night, perhaps blizzards, to run down, feed and care for stray cattle. They make the personal sacrifice of caring for their animals to ensure that New Yorkers can enjoy beef.
Last summer, Idaho potato farmers toiled in blazing sun, in dust and dirt, and maybe being bitten by insects to ensure that New Yorkers had potatoes to go with their beef.
Selfless Takers
Here's my question: Do you think that Texas ranchers and Idaho potato farmers make these personal sacrifices because they love or care about the well-being of New Yorkers?
The fact is, whether they like New Yorkers or not, they make sure that New Yorkers are supplied with beef and potatoes every day of the week.
Why? It's because ranchers and farmers want more for themselves.
In a free-market system, in order for one to get more for himself, he must serve his fellow man.
This is precisely what Adam Smith, the father of economics, meant when he said in his classic "An Inquiry Into the Nature and Causes of the Wealth of Nations" (1776), "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest."
By the way, how much beef and potatoes do you think New Yorkers would enjoy if it all depended upon the politically correct notions of human love and kindness? Personally, I'd grieve for New Yorkers.
Some have suggested that instead of greed, I use "enlightened self-interest." That's OK, but I prefer greed.
Free-market capitalism is relatively new in human history. Prior to the rise of capitalism, the way people amassed great wealth was by looting, plundering and enslaving their fellow man.
Capitalism made it possible to become wealthy by serving one's fellow man.
Capitalists seek to discover what people want and then produce it as efficiently as possible. Free-market capitalism is ruthless in its profit-and-loss discipline.

Austerity and Anarchy


On Austerity, Unrest, And Quantifying Chaos
Politically speaking, austerity is a challenge. While we would expect that governments imposing spending cuts on their voting public may face electability issues, a recent paper from the Center for Economic Policy Research finds that there is in fact no empirical evidence to confirm this - i.e., a budget-cutting government is no less likely to be re-elected than a spend-heavy government. However, what the CEPR paper does find as a factor in delaying austerity is much more worrisome - a fear of instability and unrest. The authors found a very clear relationship between CHAOS (their variable name for demonstrations, riots, strikes and worse) and expenditure cuts. As JPMorgan notes, austerity sounds straightforward as a policy, until the consequences bite. It remains unclear that the road Europe is taking is less costly in the long run, in economic, political and social terms. The history of Europe over the last 100 years shows that austerity can have severe consequences; perhaps most notably, the independent variable that did result in more unrest was a higher level of government debt in the first place.
The passage through time of the authors’ CHAOS factor shows that since 1994 we have had relative stability. But given the ongoing austerity being forced (rightfully) upon the most indebted nations in Europe, the issue is perhaps no longer one of electability, as technocrats roam freely, but much more one of central stability and fear of the empirical link between austerity and anarchy.
JPMorgan recently noted this study:
The authors tested to see if results varied with ethnic fragmentation, inflation, penetration of mass media and the quality of government institutions; they did not. Results are also consistent across time, covering interwar and postwar periods.
The independent variable that did result in more unrest: higher levels of government debt in the first place.
Compounding the problem is the way some decisions are being taken, which may reinforce perceptions of a "democratic deficit" at the EU level, an issue highlighted by Germany’s Constitutional Court. It remains to be seen if Europe can sustain cohesion around its path of most resistance. One sign of rising tensions: the following (staggering) comment by the head of the Bank of France: "A downgrade does not appear to me to be justified when considering economic fundamentals," Noyer said. "Otherwise, they should start by downgrading Britain which has more deficits, as much debt, more inflation, less growth than us and where credit is slumping." At a time of increasing budgetary pressures and declining growth, I suppose there are limits to European solidarity.

Some empires really were worse than others.


BY JOSHUA E. KEATING 
It's hard to find countries that are nostalgic for colonialism, at least among those that were on the receiving end of it. At the same time, it's hard to escape the impression that some countries had a worse time of it than others. The former British Empire includes rising power India and Africa's most stable and prosperous countries -- Botswana, Ghana, and South Africa. France's former dependencies in Africa and Southeast Asia, from Ivory Coast to Cambodia, don't seem to have fared nearly as well in the post-colonial era.
Some, such as historian Niall Ferguson, have even argued for the positive legacy of the British Empire, seeing the Pax Britannica as an era not merely of imperialist expansion but also of "spreading liberal values in terms of free markets, the rule of law, and ultimately representative government."
But beyond anecdotal observations, is there any evidence that the type of colonialism determined the way former colonies turned out? Were the bloody post-independence civil wars of Angola and Mozambique, for example, a legacy of Portuguese colonialism, or were competition for resources and the Cold War more to blame? How would the recent histories of Algeria and Vietnam have differed if France had let them go peacefully?
Stanford University Ph.D. candidate Alexander Lee, with Professor Kenneth Schultz, looked at Cameroon, a rare country that includes large regions colonized by separate powers, Britain and France, and then united after independence in 1960. The only country with a similar history is Somalia, where it is understandably difficult to get economic data after more than three decades of war.
The results? There may be something to that British-legacy theory: Lee and Schultz found that formerly British rural areas of Cameroon today boast higher levels of wealth and better public services than those in the formerly French territory. To take one example, nearly 40 percent of rural households in the British provinces examined have access to piped water, while less than 15 percent on the French side do. This could suggest that the British colonial system, which had what Lee calls "greater levels of indirect rule and the granting of local-level autonomy to chiefs," was more beneficial -- or at least less damaging -- than the more hands-on French model, which involved a "greater level of forced labor."
It's by no means clear, however, that any brand of colonialism was good for the colonized. Harvard University economist Lakshmi Iyer has found that in India, regions that were under direct British rule have lower levels of public services today compared with those where local leaders retained some level of power; these "native states" include today's high-tech business hubs of Hyderabad and Jaipur. As for Latin America, a forthcoming paper by economists Miriam Bruhn of the World Bank and Francisco Gallego of Chile's Pontificia Universidad Católica found that areas where colonialism depended heavily on labor exploitation have lower levels of economic development today than places where colonists were less closely involved. (In this context, the grim legacy of Belgium's King Leopold II -- who ran his vast territories in today's Democratic Republic of the Congo as a brutal personal plantation -- doesn't seem so surprising.)
In the end, to paraphrase Henry David Thoreau, it seems the best colonist was the one who colonized the least. 

Intelligence cannot conquer stupidity

Think Again: Intelligence
I served in the CIA for 28 years and I can tell you: America's screw-ups come from bad leaders, not lousy spies.
BY PAUL R. PILLAR 
"Presidents Make Decisions Based on Intelligence."
From George W. Bush trumpeting WMD reports about Iraq to this year's Republican presidential candidates vowing to set policy in Afghanistan based on the dictates of the intelligence community, Americans often get the sense that their leaders' hands are guided abroad by their all-knowing spying apparatus. After all, the United States spends about $80 billion on intelligence each year, which provides a flood of important guidance every week on matters ranging from hunting terrorists to countering China's growing military capabilities. This analysis informs policymakers' day-to-day decision-making and sometimes gets them to look more closely at problems, such as the rising threat from al Qaeda in the late 1990s, than they otherwise would.
On major foreign-policy decisions, however, whether going to war or broadly rethinking U.S. strategy in the Arab world (as President Barack Obama is likely doing now), intelligence is not the decisive factor. The influences that really matter are the ones that leaders bring with them into office: their own strategic sense, the lessons they have drawn from history or personal experience, the imperatives of domestic politics, and their own neuroses. A memo or briefing emanating from some unfamiliar corner of the bureaucracy hardly stands a chance.
Besides, one should never underestimate the influence of conventional wisdom. President Lyndon B. Johnson and his inner circle received the intelligence community's gloomy assessments of South Vietnam's ability to stand on its own feet, as well as comparably pessimistic reports from U.S. military leaders on the likely cost and time commitment of a U.S. military effort there. But they lost out to the domino theory -- the idea that if South Vietnam fell to communism, a succession of other countries in the developing world would as well. President Harry Truman decided to intervene in Korea based on the lessons of the past: the Allies' failure to stand up to the Axis powers before World War II and the West's postwar success in firmly responding to communist aggression in Greece and Berlin. President Richard Nixon's historic opening to China was shaped by his brooding in the political wilderness about great-power strategy and his place in it. The Obama administration's recent drumbeating about Iran is largely a function of domestic politics. Advice from Langley, for better or worse, had little to do with any of this.
Intelligence may have figured prominently in Bush's selling of the invasion of Iraq, but it played almost no role in the decision itself. If the intelligence community's assessments pointed to any course of action, it was avoiding a war, not launching one.
When U.S. Secretary of State Colin Powell went before the United Nations in February 2003 to make the case for an invasion of Iraq, he argued, "Saddam Hussein and his regime are concealing their efforts to produce more weapons of mass destruction," an observation he said was "based on solid intelligence." But in a candid interview four months later, Deputy Defense Secretary Paul Wolfowitz acknowledged that weapons of mass destruction were simply "the one issue that everyone could agree on." The intelligence community was raising no alarms about the subject when the Bush administration came into office; indeed, the 2001 edition of the community's comprehensive statement on worldwide threats did not even mention the possibility of Iraqi nuclear weapons or any stockpiles of chemical or biological weapons. The administration did not request the (ultimately flawed) October 2002 intelligence estimate on Iraqi unconventional weapons programs that was central to the official case for invasion -- Democrats in Congress did, and only six senators and a handful of representatives bothered to look at it before voting on the war, according to staff members who kept custody of the copies. Neither Bush nor Condoleezza Rice, then his national security advisor, read the entire estimate at the time, and in any case the public relations rollout of the war was already under way before the document was written.
Had Bush read the intelligence community's report, he would have seen his administration's case for invasion stood on its head. The intelligence officials concluded that Saddam was unlikely to use any weapons of mass destruction against the United States or give them to terrorists -- unless the United States invaded Iraq and tried to overthrow his regime. The intelligence community did not believe, as the president claimed, that the Iraqi regime was an ally of al Qaeda, and it correctly foresaw any attempt to establish democracy in a post-Saddam Iraq as a hard, messy slog.
In a separate prewar assessment, the intelligence community judged that trying to build a new political system in Iraq would be "long, difficult and probably turbulent," adding that any post-Saddam authority would face a "deeply divided society with a significant chance that domestic groups would engage in violent conflict with each other unless an occupying force prevented them from doing so." Mentions of Iraqis welcoming U.S. soldiers with flowers, or the war paying for itself, were notably absent. Needless to say, none of that made any difference to the White House.
The record of 20th-century U.S. intelligence failures is a familiar one, and mostly indisputable. But whether these failures -- or the successes -- mattered in the big picture is another question.
The CIA predicted both the outbreak and the outcome of the 1967 Six-Day War between Israel and neighboring Arab states, a feat impressive enough that it reportedly won intelligence chief Richard Helms a seat at President Johnson's Tuesday lunch table. Still, top-notch intelligence couldn't help Johnson prevent the war, which produced the basic contours of today's intractable Israeli-Palestinian conflict, and U.S. intelligence completely failed to predict Egypt's surprise attack on Israel six years later. Yet Egypt's nasty surprise in 1973 didn't stop Nixon and Secretary of State Henry Kissinger from then achieving a diplomatic triumph, exploiting the conflict to cement relations with Israel while expanding them with Egypt and the other Arab states -- all at the Soviets' expense.
U.S. intelligence also famously failed to foresee the 1979 Iranian revolution. But it was policymakers' inattention to Iran and sharp disagreements within President Jimmy Carter's administration, not bad intelligence, that kept the United States from making tough decisions before the shah's regime was at death's door. Even after months of disturbances in Iranian cities, the Carter administration -- preoccupied as it was with the Egypt-Israel peace negotiations and the Sandinistas' revolution in Nicaragua -- still had not convened any high-level policy meetings on Iran. "Our decision-making circuits were heavily overloaded," Zbigniew Brzezinski, Carter's national security advisor, later recalled.

An Introduction to the Theory of Money and Credit

The Austrian Theory of Money
By Murray N. Rothbard
The Austrian theory of money virtually begins and ends with Ludwig von Mises's monumental Theory of Money and Credit, published in 1912 (1). Mises's fundamental accomplishment was to take the theory of marginal utility, built up by Austrian economists and other marginalists as the explanation for consumer demand and market price, and apply it to the demand for and the value, or the price, of money. No longer did the theory of money need to be separated from the general economic theory of individual action and utility, of supply, demand, and price; no longer did monetary theory have to suffer isolation in a context of "velocities of circulation," "price levels," and "equations of exchange."
In applying the analysis of supply and demand to money, Mises used the Wicksteedian concept: supply is the total stock of a commodity at any given time; and demand is the total market demand to gain and hold cash balances, built up out of the marginal-utility rankings of units of money on the value scales of individuals on the market. The Wicksteedian concept is particularly appropriate to money for several reasons: first, because the supply of money is either extremely durable in relation to current production, as under the gold standard, or is determined exogenously to the market by government authority; and, second and most important, because money, uniquely among commodities desired and demanded on the market, is acquired not to be consumed, but to be held for later exchange. Demand-to-hold thereby becomes the appropriate concept for analyzing the uniquely broad monetary function of being held as stock for later sale. Mises was also able to explain the demand for cash balances as the resultant of marginal utilities on value scales that are strictly ordinal for each individual. In the course of his analysis Mises built on the insight of his fellow Austrian Franz Cuhel to develop a marginal utility that was strictly ordinal, lexicographic, and purged of all traces of the error of assuming the measurability of utilities.
The relative utilities of money units as against other goods determine each person's demand for cash balances, that is, how much of his income or wealth he will keep in cash balances as against how much he will spend. Applying the law of diminishing (ordinal) marginal utility of money and bearing in mind that money's "use" is to be held for future exchange, Mises arrived implicitly at a falling demand curve for money in relation to the purchasing power of the currency unit. The purchasing power of the money unit, which Mises also termed the "objective exchange-value" of money, was then determined, as in the usual supply-and-demand analysis, by the intersection of the money stock and the demand for cash balance schedule. We can see this visually by putting the purchasing power of the money unit on the y-axis and the quantity of money on the x-axis of the conventional two-dimensional diagram corresponding to the price of any good and its quantity. Mises wrapped up the analysis by pointing out that the total supply of money at any given time is no more or less than the sum of the individual cash balances at that time. No money in a society remains unowned by someone and is therefore outside some individual's cash balances.
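To make the diagrammatic account concrete, here is a minimal numerical sketch. The demand-to-hold schedule below is a hypothetical stand-in (Mises offers no functional form); it is there only to show how a fixed money stock and a falling demand curve jointly pin down an equilibrium purchasing power.

```python
# Hypothetical demand-to-hold schedule: the nominal cash balances the
# public wishes to hold at a given purchasing power of money (PPM).
# It is falling in PPM, as in Mises's account.
def demand_for_money(ppm: float) -> float:
    return 1000.0 / ppm  # illustrative functional form only

money_stock = 500.0  # total stock of money, fixed at a point in time

# Find the PPM at which demand_for_money(ppm) == money_stock by bisection.
lo, hi = 0.01, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if demand_for_money(mid) > money_stock:
        lo = mid  # demand exceeds the stock, so equilibrium PPM is higher
    else:
        hi = mid

print(f"Equilibrium purchasing power: {mid:.4f}")  # 2.0 with these numbers
# With this particular schedule, doubling the stock to 1,000 would halve
# the equilibrium PPM; the discussion below explains why real-world
# adjustment is never so mechanical.
```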
While, for purposes of convenience, Mises’s analysis may be expressed in the usual supply-and-demand diagram with the purchasing power of the money unit serving as the price of money, relying solely on such a simplified diagram falsifies the theory. For, as Mises pointed out in a brilliant analysis whose lessons have still not been absorbed in the mainstream of economic theory, the purchasing power of the money unit is not simply the inverse of the so-called price level of goods and services. In describing the advantages of money as a general medium of exchange and how such a general medium arose on the market, Mises pointed out that the currency unit serves as unit of account and as a common denominator of all other prices, but that the money commodity itself is still in a state of barter with all other goods and services.
Thus, in the premoney state of barter, there is no unitary "price of eggs"; a unit of eggs (say, one dozen) will have many different "prices": the "butter" price in terms of pounds of butter, the "hat" price in terms of hats, the "horse" price in terms of horses, and so on. Every good and service will have an almost infinite array of prices in terms of every other good and service. After one commodity, say gold, is chosen to be the medium for all exchanges, every other good except gold will enjoy a unitary price, so that we know that the price of eggs is one dollar a dozen; the price of a hat is ten dollars, and so on. But while every good and service except gold now has a single price in terms of money, money itself has a virtually infinite array of individual prices in terms of every other good and service.
To put it another way, the price of any good is the same thing as its purchasing power in terms of other goods and services. Under barter, if the price of a dozen eggs is two pounds of butter, the purchasing power of a dozen eggs is, inter alia, two pounds of butter. The purchasing power of a dozen eggs will also be one-tenth of a hat, and so on. Conversely, the purchasing power of butter is its price in terms of eggs; in this case the purchasing power of a pound of butter is a half-dozen eggs. After the arrival of money, the purchasing power of a dozen eggs is the same as its money price, in our example, one dollar. The purchasing power of a pound of butter will be 50 cents, of a hat ten dollars, and so forth.
What, then, is the purchasing power, or the price, of a dollar? It will be a vast array of all the goods and services that can be purchased for a dollar, that is, of all the goods and services in the economy. In our example, we would say that the purchasing power of a dollar equals one dozen eggs, or two pounds of butter, or one-tenth of a hat, and so on, for the entire economy. In short, the price, or purchasing power, of the money unit will be an array of the quantities of alternative goods and services that can be purchased for a dollar. Since the array is heterogeneous and specific, it cannot be summed up in some unitary price-level figure.
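The array idea can be run directly with the passage’s own numbers. A minimal sketch: given each good’s money price, the purchasing power of a dollar is the vector of reciprocal prices, one entry per good, rather than any single scalar.

```python
# Money prices from the text's example, in dollars per unit.
money_prices = {
    "eggs (dozen)": 1.00,
    "butter (lb)": 0.50,
    "hat": 10.00,
}

# The "price" of a dollar: the quantity of each good one dollar buys,
# i.e. the reciprocal of each money price.
purchasing_power_of_dollar = {
    good: 1.0 / price for good, price in money_prices.items()
}

for good, qty in purchasing_power_of_dollar.items():
    print(f"One dollar buys {qty:g} x {good}")
# -> 1 dozen eggs, 2 lb of butter, or one-tenth of a hat. The entries are
# in heterogeneous units, which is why the array cannot be summed into a
# single "price level" figure.
```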
The fallacy of the price-level concept is further shown by Mises's analysis of precisely how prices rise (that is, the purchasing power of money falls) in response to an increase in the quantity of money (assuming, of course, that the individual demand schedules for cash balances or, more generally, individual value scales remain constant). In contrast to the hermetic neoclassical separation of money and price levels from the relative prices of individual goods and services, Mises showed that an increased supply of money impinges differently upon different spheres of the market and thereby ineluctably changes relative prices.

Another Tale of Two Cities

Weimar and Washington
By Philip Giraldi 
Mark Twain is credited with saying that “History doesn’t repeat itself, but it rhymes.” Today’s United States is often compared to other historic nations, whether at their prime or about to decline and fall, depending on one’s own political perspective. Neoconservatives frequently eulogize Washington as a new Rome, promising a worldwide empire without end carried on the back of a Pentagon bristling with advanced weaponry. Other observers also cite Rome but are rather less sanguine, recalling how in the 5th century the empire failed dramatically and fell to barbarian hordes. Still others note the fate of the British Empire, which came apart in the wake of the Second World War, or the Soviets, whose collapse was brought about by 50 years of unsustainable military spending.
But the historical analogy that appears to be most apposite for post-9/11 Washington is that of the Weimar Republic. To be sure, any suggestion that the United States might be following the same course as Germany in the years that led to Nazism must be pursued with caution because few Americans want to believe that the descent into such extremism is even possible in the world’s most venerable constitutional republic. But consider the following: both the United States and Weimar Germany had constitutions in which checks and balances were integrated to maintain a multi-party system, the rule of law, and individual liberties. Both countries were on the receiving end of acts of terrorism that produced a dramatic and violent reaction against the presumed perpetrators of the crimes, so both quickly adopted legislation that abridged many constitutional rights and empowered the head of state to react decisively to further threats. The media fell in line, concerned that criticism would be unpatriotic.
Both the U.S. and Germany possessed politically powerful military-industrial complexes that had a vested interest in encouraging a militarized response to the threats and highly polarized internal politics that enabled politicians to obtain advantage by exploiting national security concerns. Both countries experienced severe financial crises and printed fiat currency to pay the bills, and both had jurists and political supporters who argued that in time of crisis the head of state must be granted special executive authority that transcends the limits placed by the constitution.
The Weimar Republic, which replaced rule by the German emperor in the aftermath of World War I, was a liberal democracy in the 19th-century sense, which means it had a constitution that guaranteed individual and group rights, multi-party systems, and free elections at regular intervals. It took its name from the city of Weimar, where the constitution was drawn up in a national assembly convened in 1919. From the start, Weimar was plagued by a failure to create a sustainable political culture because of the high level of polarization and violence instigated by both the major and fringe parties, even though the relatively moderate Social Democrats were normally dominant.
Adolf Hitler became German chancellor in January 1933. The chancellor was the head of government, but the head of state was President and Field Marshal Paul von Hindenburg. Hindenburg was a hero of the First World War, and he despised the dangerous parvenu Hitler but foolishly thought he could control him. The National Socialist Party was, however, still a minority party in parliament with 33% of the popular vote when Hitler took charge, holding only three out of 11 cabinet positions. Strong socialist, Catholic, and communist parties actively contested the Nazis’ agenda. The media reflected the political divisions, with many papers opposing Hitler and his government.
Hitler benefited from the political paralysis of Weimar, which had forced his Reich chancellor predecessors to rule by presidential decree to bypass the logjam in parliament, but he could not actually legislate in that fashion and did not have a free ride. There was considerable resistance to his policies. All of that changed, however, when the seat of parliament in Berlin, the Reichstag, was burned down on Feb. 27, 1933. It was an act of terrorism that shocked the nation, and it was eventually attributed to an addled Dutch communist named Marinus van der Lubbe, though it was almost certainly carried out by the Nazis themselves. Hitler convinced President Hindenburg to sign a “Reichstag Fire Decree” on the following day, canceling the constitutional guarantees of habeas corpus and freedom of the press, the freedom to organize and assemble, and the privacy of communications. It authorized police search and seizure without any judicial warrant. It was no coincidence that the fire took place two weeks before parliamentary elections in which the Nazis, who beat and otherwise intimidated opponents and “monitored” the polling stations, won nearly 44% of the votes. The opposition, including the technically illegal communists, took 42% and Hitler was denied his majority, but he arrested socialist opponents, barred the communists, and was eventually able to form a government with his parliamentary allies.

Shoes for everyone


Would the Poor Go Barefoot with a Private Shoe Industry?
By Stephen Davies
It is said that while we may rely on private initiative to supply “nonessentials,” some things are so important to a decent life that we cannot trust the vagaries of the competitive market. Some people would not get the vital product or service. The only solution, supposedly, is government provision to all, often free of charge. The problems with this argument, as well as the great benefits of a capitalist economy, are shown by examining the shoe industry.
Most would agree that shoes are essential to a comfortable or decent existence. Today even the poorest have shoes, and most people of modest means have several pairs. Shoes are available in an enormous variety of types, styles, and colors, at modest prices. It was not always so. In America just over 150 years ago, shoes were made locally, on an individual basis, by skilled craftsmen. This may seem idyllic, but it was not. They were extremely expensive in real terms, so much so that they could even be included in a will. Most people had only one pair that would be made to last for years. The poor had no shoes; indeed, being without shoes was one of the classic marks of poverty.
Things began to change in 1848 with the invention of the first shoe-sewing machine, and shoemaking moved from the home and small workshop to factories. However, making shoes was complicated and difficult to mechanize. In particular, the process of “lasting,” by which leather was molded to fit a model foot, proved a great challenge. Moreover, the capital cost of the new machinery was a barrier for many small shoemaking firms.
Two men were to transform this situation in the United States and subsequently elsewhere. The first was Jan Matzeliger, born in 1852, an immigrant to the United States from Dutch Guiana (now Suriname), and the son of a Dutch sea captain and a slave woman. While working in a shoe factory in Massachusetts, Matzeliger devised a method of mechanizing the lasting process. He perfected it after years of work and great expense, and obtained capital to create a production model from two local investors, Charles H. Delnow and Melville S. Nicholls. Matzeliger got a patent in 1883. His machine cut the cost of producing a pair of shoes in half. A hand laster could produce no more than 50 pairs a day. Using his machine, one could produce up to 700 pairs. Matzeliger and his partners set up the Consolidated Lasting Machine Corporation, in association with two new investors, George A. Brown and the second main figure in our story, Sidney W. Winslow. Matzeliger sold his patent rights to the newly formed corporation in exchange for stock, which made him a wealthy man. He died from tuberculosis in 1889.
Winslow was a business genius. The owner of a small shoe factory, he transformed the industry by a crucial business innovation. In 1899 he engineered a merger of the three main shoemaking-machinery companies to form the United Shoe Machinery Corporation (USMC). Instead of selling its machines, the USMC leased them, which meant that shoe manufacturers no longer bore the capital cost, including depreciation, of their machinery. USMC also relieved them of much of the maintenance cost.
The combination of technical invention and business innovation transformed shoemaking. The cost of shoes fell to a fraction of what it had been, while the wages of workers more than doubled by 1905. Thanks to the ease with which producers could obtain the machinery, the industry became very competitive, which encouraged innovation and kept down costs. This led to the situation we enjoy today where even the poorest have shoes and the variety constantly increases. When leasing was applied outside the United States, often through arrangements with the USMC, the industry was transformed there also.
Let us suppose now that shoes were supplied by government. We have much evidence of what the result would be. Everyone would have shoes, but the quality would be poor. There would be almost no variety (except of the Army kind—two sizes: too large and too small) and certainly no “fun” shoes. The cost would be high, and there might even be rationing. If some private supply were allowed, we would have a few private firms providing high-quality shoes at exorbitant cost to the rich and the ruling elite.
Privatize Shoe Production?
Anyone suggesting that perhaps private enterprise should produce shoes more widely would be met with the indignant response: “What! Do you want the poor to go without shoes?” This, of course, is precisely the situation we face with many services provided predominantly or exclusively by government, notably education. The point is that once a product is supplied by government, we find it hard to imagine that it could be provided in any other way without disastrous results. The assertion that a product is essential is supposed to end the argument.
The story of the U.S. shoe-machinery industry also highlights several other points. One is the critical part played in history by productive and creative individuals whose names are not remembered or lauded in the way that those of monarchs, politicians, and generals are. Sidney Winslow did more to benefit millions of people than many “public figures,” yet is almost forgotten. Another is the way a market economy undercuts prejudice. As a black man, Jan Matzeliger faced much prejudice, particularly in his social and religious life. But in the business world his color did not matter, and he had no trouble finding investors. Only his talent and application mattered.
Finally, the story of the USMC shows the bad effects of misguided public policy. An enormously successful business, worth over a billion dollars by 1960 and a model employer, United Shoe was attacked by the Department of Justice in a famous antitrust case, was broken up in 1968, and today no longer exists. (Ironically, the leasing policy was targeted as a tool of USMC’s alleged monopoly practices.) The U.S. shoe manufacturing industry has also mostly vanished.
So when you put on your shoes or go to buy a pair, be thankful and remember Jan Matzeliger and Sidney Winslow. Even more, be thankful that this essential product is not provided by government and imagine what services provided by the government could be like if the contemporary equivalents of those two men were let loose on them.